Microsoft has been working on letting developers write Windows drivers in Rust, and the company has published a progress report detailing this effort. In the windows-drivers-rs GitHub repository you’ll find the collection of Rust crates used to build such drivers.
Using these crates, driver developers can create valid WDM, KMDF, and UMDF driver binaries that load and run on a Windows 11 machine.
[…]Drivers written in this manner still need to make use of unsafe blocks for interacting with the Windows operating system, but can take advantage of Rust’s rich type system as well as its safety guarantees for business logic implemented in safe Rust. Though there is still significant work to be done on abstracting away these unsafe blocks (more on this below), these Rust drivers can load and run on Windows systems just like their C counterparts.
↫ Nate Deisinger at the Windows Driver Developer Blog
As mentioned above, there’s still work to be done with reducing the amount of unsafe Rust code in these drivers, and Microsoft is working on just that. The company is developing safe Rust bindings and abstractions, as well as additional safe structs and APIs beyond the Windows Driver Framework, but due to the complexity of Windows drivers, this will take a while.
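To give a rough idea of what today’s split between unsafe and safe code looks like, here is a minimal sketch of a driver entry point in the style of the windows-drivers-rs samples. The type names are the standard WDK ones exposed by the bindgen-generated wdk-sys crate, but the exact signature, the helper function, and the omitted build scaffolding (no_std setup, allocator, panic handler) are illustrative assumptions rather than the repository’s actual sample code.

```rust
// Sketch only: assumes the bindgen-generated types from the wdk-sys crate.
use wdk_sys::{DRIVER_OBJECT, NTSTATUS, PCUNICODE_STRING};

// The kernel calls DriverEntry through the C ABI, so the boundary itself stays unsafe.
#[export_name = "DriverEntry"]
pub unsafe extern "system" fn driver_entry(
    driver: &mut DRIVER_OBJECT,
    registry_path: PCUNICODE_STRING,
) -> NTSTATUS {
    // Hand off to ordinary safe Rust as soon as possible.
    init_driver(driver, registry_path)
}

// Hypothetical helper: the "business logic" lives in safe Rust, where the borrow
// checker and type system apply, even though the entry point above does not.
fn init_driver(_driver: &mut DRIVER_OBJECT, _registry_path: PCUNICODE_STRING) -> NTSTATUS {
    0 // STATUS_SUCCESS
}
```

The interesting part is everything past the FFI boundary: the more of a driver that can be pushed into safe helpers like the one above, the less hand-auditing of unsafe blocks is left to do.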
Microsoft states that it believes memory-safe languages like Rust are the future of secure software development, but of course, in true Microsoft fashion, the company doesn’t want to alienate developers writing traditional drivers in C either.
As much as I have a personal distaste for Rust’s syntax, I have to commend their approach to relevancy.
Just a note: there is basically a very short list of proven languages: C, C++, Java, PHP, Python, and Assembly. Everything else is syntactic sugar.
Now do not get me wrong, syntactic sugar can be helpful. TypeScript, for example, is better for many projects than plain JavaScript, and a language like Go can be useful for writing backend services or command line tools.
That being said… the Rust developers seem to recognize this. Either explicitly or implicitly, they are steering toward relevancy with a two-pronged attack: carving out a niche in machine learning, and integrating into established codebases for additional benefits (like security and reliability in driver authoring).
Will this work? Only time will tell.
However, they are at least pushing in the right directions.
(list of proven languages) Not even LISP?
Kochise,
LISP is a very good language. A historically important one, too, that has influenced many others. I even wrote an interpreter once back in the day.
However, my criterion for established languages is: long-running products serving at least a billion users.
For example,
C has: Linux kernel
C++ has: Windows, macOS, Google Search, Chrome, (maybe) Firefox among others.
Java has: (again) Google Search, Android
PHP has: Facebook (though their own dialect), Wikipedia
Python has: YouTube (had?), PyTorch
Assembly: Basically all devices
Yes, there are portions in other languages in those code bases. But I was thinking about the main ones. And… it also shows languages are not standalone, but ecosystems.
You either use one of the established ones.
Or use “syntactic sugar” to target the libraries written in one of the established ones.
(Can “Go”, for example, exist without C standard libraries?)
There was a time when LISP could/should have taken over the world. A concise programming language, made for AI before it was a thing, but with too many parentheses perhaps.
Then Prolog entered the game.
I really wish things turned different.
Kochise,
While I can acknowledge LISP’s strengths, it feels more like an intermediate AST representation of a program than a programming language for humans to write software in.
https://en.wikipedia.org/wiki/Abstract_syntax_tree
However I do see the merit in logic programming languages and was drawn to Prolog. I also thought Haskell had some really cool features compared to the languages I was otherwise being exposed to in my career. Alas, like most people, I ended up working on more mainstream languages because that’s where the work is at – this is an important driver in the language popularity contest and has little to do with merit.
Kochise,
Prolog? ha ha… I loved that. It was one of the inspirations that got me into coding, but it took about 20 years before I actually used Prolog
In our ML fundamentals course we implemented a Sudoku solver in Prolog
which we ran on the Prolog interpreter we wrote earlier…
In LISP… using the LISP interpreter we, once again, wrote as part of another project in that class.
Yes, our professor was crazy. But crazy in the good sense that we learned a lot.
But…
There was little chance things would be different.
C is, after all, a universal assembly language. As soon as Thompson and Ritchie decided to write an operating system (Unix) and designed a language descended from BCPL (via B), the fate was sealed.
C++? A collection of extensions to C, including template metaprogramming. By keeping the core design goal of zero-overhead abstractions, it has stayed relevant all these years.
Java? A portable version of C++
PHP? A web version of C
Python? An interpreted version of C (at least until Python 2.0)
The pattern is obvious. There was almost no other way.
sukru,
It was mostly a matter of being at the right place at the right time. Otherwise C’s dominance doesn’t really seem inevitable, and I doubt C would have been historically special on its own if not for its unix association.
In a universe where unix was based on pascal, you might be saying the exact same thing about pascal and that there was almost no other way 🙂
sukru, Erlang is a very nice evolution of Prolog. Its green threads and message-passing system are incredibly convenient. Sad the debugger isn’t quite coder-friendly. But at least it’s integrated.
Alfman,
Yes, C’s UNIX association is what gave it its momentum. If it were any other generic assembly language, it would be the king today.
But Pascal? I have spent more than 10 lovely years with that language, but it was not for systems programming. I was able to do a lot of low-level stuff, including graphics, direct disk access, TSRs and so on, but it is obvious it would not catch on (except by becoming really C-like, for example with NULL-terminated strings, and so on).
Kochise,
I have had only very limited interactions with Erlang.
I did, however, spend some time really experimenting with F# back in my .Net days. It was a really capable language with access to extensive libraries and direct integration with Visual Studio.
However it was too slow for practical purposes. Your dictionary? An immutable, copy-on-write binary search tree structure. Lists and others are similar.
It was a good experience, but practicality caused me to end it.
(F# is basically an OCaml-inspired, hybrid, functional-first language for .Net)
sukru,
I don’t consider C an assembly language any more than basic or pascal. Besides obvious syntax differences and things like modules, the program flow itself is quite similar. The main difference when writing code is that C doesn’t have a string type.
I worked for a few different companies that used pascal for systems programming, so I honestly don’t understand this objection. You don’t actually have to use pascal’s strings, which was really little more than syntactic sugar for a character array under the hood. You could use character arrays just like C and sometimes we did just that.
I am not sure about early versions of pascal, but stony brook pascal and turbo pascal from the late 90’s made it a breeze to use low level assembly right inside of procedures. Calling interrupts was even easier than with C.
As for NULL-terminated strings, I feel that convention was a mistake even in the context of C programming. It would forever impair fast string operations in C. Pascal programs would often perform faster on string operations because C programs didn’t know the length of NULL-terminated strings ahead of time. It also led to numerous ostrich-algorithm faults. Standard C string functions were themselves inherently unsafe on day 1. Take “gets”: using it automatically means your program is vulnerable! All these things would have been caught and solved by someone with more experience, but then we’ve got the benefit of hindsight.
Alfman,
C, an extension of the older, even simpler language B, has very few abstractions, and is as close to a “portable assembly” as you can get.
There is a reason why most kernels are (used to be) written in C. At least the actual low level portions. Same with firmware, real time embedded systems, and so on.
Rust is fighting to make an opening though.
@sukru
I like your criteria. However, “long running products serving at least a billion users” could lead to quite a different list.
The longest running software serving the most users may be written in COBOL.
Facebook owns WhatsApp but there are actually more WhatsApp users than Facebook users. WhatsApp is written in Erlang (quite a different beast).
I would consider Objective C as syntactic sugar over C but there are a lot of people running operating systems for Apple laptops, phones, and tablets that are based on Objective C. Does this put it in the same camp as C++?
There are about 3 billion people playing video games written in C# (Unity). Bing is written in C#.
For a long time, Python was mostly syntactic sugar over Fortran (Numpy).
And, of course, a massive swath of the web is written in JavaScript.
Lisp is indeed a “long running” language but I cannot think of anything that has achieved that kind of scale. Grammarly may be one of the biggest Lisp apps and it has fewer than 50 million users I think.
> Can “Go” for example exist without C standard libraries
Go does not need a C library. It has its own runtime.
However, that is not the case for PHP, Java, Python, or even C++, which do require a standard C library to work. If that is your criteria, we get a very different list again.
Assembly language should probably not even be on this list as it is just a different notation for machine language and it is of course different for every architecture.
LeFantome,
Just saw this. (so replies in incorrect order)
Yes, the list might not be perfect.
In addition to JavaScript, COBOL is there (I keep forgetting it exists, but it does). It really powers large applications. Not sure they are user facing, but we can also tentatively include that.
ObjC? It powers APIs, but I don’t think any essential product requires it. Unity, mentioned above, is again an engine/API. I’m not sure any individual established game has a billion users (Pokémon GO is close).
Erlang/WhatsApp? I need to look more into it. I know before FB bought them it was Erlang, but how much is it still? Let’s put that on the edge.
Go was there as an example. Yes, it has its own libraries. But if you need to do anything serious, you start interfacing with C. (That is actually one of its strengths: a very simple C bridge, unlike, say… Python or Java). That is not a criterion, but a corollary of it.
The criterion is billion-user applications / services.
The reason is it is a very good proxy for maturity and ecosystem.
You can do pretty much anything you want in C, C++, Java, JavaScript, COBOL?, PHP, Python, or Assembly. They are proven.
Ruby, Rust, R? You might need to implement many parts in C or use C/C++ bridges to complete your task.
@sukru
Some irony that you use TypeScript as your example of syntactic sugar when JavaScript is not among your list of proven languages.
Some would argue that C++ is syntactic sugar over C.
You can say that Kotlin is syntactic sugar for Java. But would you say the same of C#? Not me.
How many would agree that PHP should be on that list?
I think you have a list of language preferences more than a list of fundamental ones.
LeFantome,
My criterion was: long-running products / services serving at least a billion users.
(I shared that above in another reply.)
Yes, you are right. JavaScript can be there. It powers applications like GMail, that definitely has a billion users.
C#, Kotlin? I’m not sure. They are very good languages (I have spent many years coding in C#). But they have not been an essential part of programming.
It would be close: Pokémon GO, written in Unity (which uses C#), has over a billion downloads. But that is not exactly a billion “users”.
Anyway… a billion is arbitrary, but gives a very good solid boundary.
In an alternate universe, C# would win over Java, as it is definitely the better designed cousin. So we would have a different discussion.
PHP? Definitely there. It enabled dynamic programming on the web for the masses, while many others like DreamWeaver failed — for not being C clones! (okay — half joking). And it powers Wikipedia and Facebook (a dialect), two essential services. Not to mention WordPress (this very site), Etsy, Flickr, Tumblr, and many others (at least at one point in their lives).
Anyway…
Too much nitpicking.
I see this being a common stumbling block across the linux and windows kernels. C interfaces will keep hindering rust’s approach to safety until the C interfaces get replaced with rust ones. It’s logistically difficult to update so much code, and also unpopular with developers who don’t want to switch.
I see this as a very tough balance. Technically there’s a good case for designing kernels to be memory safe from the start rather than trying to mix highly incompatible models together. But in the real world the issue is always the same: new operating systems/kernels face serious adoption challenges and chicken-and-egg problems. No drivers, no users, no marketshare, etc. Cramming a memory safe language into a legacy code base, even if the hybrid approach defeats the benefits, may be the only way forward within the confines of real world operating system popularity contests.
Alfman,
It is not a difficult problem, but an impossible one. Our CPU and overall computer design makes it necessary to have unsafe code, whether in C or in Rust.
How do you access hardware buffers without dereferencing raw pointers, usually at magic addresses like 0xB8000? How do you even build a safe memory allocator without first building a raw list of allocatable regions of RAM? Or worse, virtual memory, which could map to RAM, PCIe, disk, or even be uninitialized at any time?
The only thing Rust will bring is explicitly marking all these pointers as “unsafe”. But we already know that in the kernel domain.
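To make the point concrete, a raw access of the kind being discussed looks roughly like this in Rust; the 0xB8000 address, the 80x25 cell layout, and the identity-mapped environment are the assumptions of the classic VGA text-mode example, not of any particular kernel:

```rust
// Legacy VGA text mode: 80x25 cells, each cell = one ASCII byte + one attribute
// byte, memory-mapped at 0xB8000 (assuming an identity-mapped environment).
const VGA_TEXT_BUFFER: *mut u8 = 0xB8000 as *mut u8;

fn write_cell(row: usize, col: usize, ch: u8, attr: u8) {
    let offset = (row * 80 + col) * 2;
    // No safe construct can express "this integer is valid device memory",
    // so the dereference has to sit inside an `unsafe` block, in Rust as in C.
    unsafe {
        core::ptr::write_volatile(VGA_TEXT_BUFFER.add(offset), ch);
        core::ptr::write_volatile(VGA_TEXT_BUFFER.add(offset + 1), attr);
    }
}
```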
LISP to the rescue again : https://github.com/vygr/ChrysaLisp or https://github.com/froggey/Mezzano
Back in the day: https://en.wikipedia.org/wiki/Genera_(operating_system)
sukru,
Ok, but we’re talking about very little code there. I could say it’s impossible to write a C kernel because you need bits of assembly to program special CPU registers like descriptor tables. That’s true, but we can just create abstractions for those bits and move on; it’s not really an issue. In the same vein, we’re not going to reach 100% safe rust either, but we can create abstractions for the unsafe code and have them be orders of magnitude less than the safe code.
I should study more how redox handles it, but thinking off the cuff I think that ideally one would create a safe resource abstraction that knows about the hardware’s memory ranges (through bios tables or plug and play) and drivers would access those buffers through the safe abstraction. I understand that the drivers are privileged and still need to be trustworthy, but even so safe abstractions could still limit most of the need for unsafe sections proliferating throughout the kernel.
Safe abstractions would let drivers access their designated buffers without unsafe pointers. Ideally the safe abstractions would compile away into zero cost abstractions, although I accept we may need specific examples to convince ourselves that the compiler’s output is still optimal in practice.
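As a rough sketch of what I mean (the names, the bounds-checking policy, and where the base address comes from are all invented for illustration), the single unsafe step can live in the constructor, and driver code built on top of it stays in safe Rust:

```rust
use core::ptr;

/// A memory-mapped I/O range handed out by a (hypothetical) kernel resource
/// manager; in reality the base/length would come from firmware tables,
/// plug-and-play enumeration, or PCI BARs.
pub struct MmioRegion {
    base: *mut u8,
    len: usize,
}

impl MmioRegion {
    /// The one unsafe step: the caller must guarantee that `base..base+len`
    /// really is this driver's device memory and that nothing else aliases it.
    pub unsafe fn new(base: *mut u8, len: usize) -> Self {
        Self { base, len }
    }

    /// Safe, bounds-checked access: a comparison plus a volatile store, so the
    /// abstraction stays close to zero-cost.
    pub fn write_u8(&mut self, offset: usize, value: u8) {
        assert!(offset < self.len, "MMIO write out of bounds");
        unsafe { ptr::write_volatile(self.base.add(offset), value) }
    }

    pub fn read_u8(&self, offset: usize) -> u8 {
        assert!(offset < self.len, "MMIO read out of bounds");
        unsafe { ptr::read_volatile(self.base.add(offset)) }
    }
}
```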
Alfman,
Yes, safe abstractions help in writing higher level code. Your kprintf or CRC calculations might be completely in the safe zone. And, yes drivers become safer once you have abstractions over busses like PCI or memory access.
So, that gives a good use case for device drivers or some high level code.
That being said… the pushback against rewriting core parts in Rust is not only for ideological (or stubbornness) reasons. The existing code is well tested and proven over decades. Replacing it would reintroduce bugs. It would be a bad tradeoff.
Yes, every now and then they will come up with “hey! here we found an undiscovered use-after-free memory bug in the Linux kernel that has been there for 20 years! let us rewrite everything in Rust, and this will never happen again”.
But that is a very wrong approach. As everyone who has maintained legacy codebases, especially through language and/or framework migrations, knows, memory bugs are but one class of errors. There are logic bugs, UI bugs, race conditions, concurrency and parallelism bugs, edge cases, regressions, and much more. At the end of the day, there are going to be new bugs, or reintroductions of older, fixed bugs, while rewriting the codebase.
So, occasional new driver in Rust?
Great
Migrating the entire kernel?
Good luck!
sukru,
That’s the “don’t fix what ain’t broke” argument, and of course I appreciate there’s some merit to that. On the other hand, these hidden latent faults are exactly what hackers, including the NSA, rely on to create backdoors and use them in secret, with software companies being none the wiser.
It actually is the *right* approach for an ideal world; we just don’t live in that world.
Note that in the root post, I did mention it making more sense to create a safe kernel from scratch. The problem of course is that new kernels are just far less likely to get critical mass and so that’s how we end up needing to hammer safe languages into unsafe kernels. It’s not a happy marriage, but this incremental approach and its compromises might be the only way to make safe kernels commercially relevant.
Alfman,
Yes, we are not living in that ideal world. We have neither the resources nor the time to rewrite software. And our competitors will definitely not sit idly by.
That is why all these “new kernels” are distractions that take away momentum from Linux.
I would highly recommend reading this article:
https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
“Things You Should Never Do, Part I”
It is by Joel Spolsky, a former program manager on Excel (VBA) at Microsoft, talking about how competitors kept shooting themselves in the foot with rewrites while Microsoft kept chugging along.
It was a very eye-opening read.
(Can’t seem to edit and move the comment, so I’ll keep it here. Sorry)
sukru,
In the short term, it’s too much work to start over. But long term Joel is wrong. Over time projects carry more legacy baggage and to never have a shot at redoing things with the benefit of hindsight holds back new generations indefinitely. We already see this legacy baggage in modern operating systems. Sometimes clearing the slate is not only beneficial, but essential to progress. You would recognize this as a local maxima. I expect Joel to recant his statements on this, perhaps on his death bed, because while there was some truth there, it was short sighted as an absolute. Rather than saying it should never be done as he did, we should instead be asking when a reset would be best because “never” is not an optimal answer.
Alfman,
You can always do incremental updates. Every time you touch a piece of code, make it more modern. Every time a compiler update breaks something, you refactor pieces of code for a better design.
But starting from scratch?
It rarely pays off. Especially when you are “winning” or at least have a good place in the competition.
sukru,
It doesn’t solve the local maxima problem. There are countless examples of this with both windows and linux.
We might want to make the kernel’s core components safe, but we give up on that because of all the legacy code this change would end up breaking. So we keep the kernel’s core unsafe. It’s not hard to see how this situation keeps us from making more progress on safe kernels. It’s hard to fix without a clean slate.
A different example is that linux has very poor support for asynchronous IO at the kernel level (so much so that posix AIO on linux is actually implemented in user space). Linux drivers were based on blocking kernel threads, with clunky hacks to support signals. Not only is this sub-optimal for AIO, but it’s responsible for unkillable linux processes that can end up stuck in an uninterruptible state waiting on disk IO, which can cause frustration to users and admins when it happens. These were technical decisions made decades ago that are virtually impossible to change now without redoing all the drivers.
So there are things we just don’t change simply because it is difficult. The problems with sticking to old legacy code bases are twofold: 1) we keep accruing baggage and technical debt over time, 2) it becomes difficult to embrace modern innovation. This doesn’t automatically mean old code needs to be replaced at every chance, but if it’s “never” replaced, then the gap between where our software is and where it should be just keeps growing. This is the local maxima problem.
Well, if a dominant company is unlikely to be displaced, even by a competitor with better software, then I’ll sadly agree with you. I think this says more about the state of competition than about what’s the best way to improve our software over the long term.
Alfman,
Sorry for being the negative person.
But do you think an effort like Redox has a chance to unseat Microsoft from their dominant position?
Or would that talent be better spent improving Linux, say by building an entirely new I/O layer in the kernel instead?
(Sun and Apache tried this with Office. Where is OpenOffice today? Or whatever the name of the latest iteration is?)
sukru,
No I don’t. I know it’s lost in all the comments, but I tried speaking to this point earlier.
https://www.osnews.com/story/143263/towards-rust-in-windows-drivers/#comment-10452608
I actually think it might be less work to achieve the technical goals in a new kernel with a clean slate than to make very fundamental changes to linux. Even more so considering the resistance we’ve seen by linux devs. But I also feel new kernels/operating systems don’t have a good chance to beat out incumbents regardless of achieving technical goals.
Perhaps I gave the wrong impression; I didn’t mean to prescribe software resets as a solution to market incumbencies. I don’t really have a solution for that. The incumbents will likely remain at the top. Rust’s current path may be our best prospect of bringing some safety benefits to mainstream kernels: cram safe languages into existing unsafe kernels even if the result ends up being less optimal than a kernel designed for safety.
I felt it did ok until oracle came in and killed it. Sun had talented developers working on great things that advanced unix & FOSS. Their failure had more to do with finding a sustainable business model as cheap generic PCs (both windows and linux) ate away at sun’s marketshare. Sun bled until their assets could be taken over.
Alfman,
First I need to make a correction:
https://www.computerworld.com/article/3840480/libreoffice-downloads-on-the-rise-as-users-look-to-avoid-subscription-costs.html
Looks like Microsoft did not listen to their own advice and pushed users to LibreOffice.
(That being said, Oracle buying Sun… probably requires a much longer discussion)
I agree. They might be better off nurturing their ideas inside the established Linux kernel instead of trying to reinvent the wheel.
“But it is NOT invented here!” 🙂
Btw, I’m not against all rewrites. I have done plenty myself. What I’m against is rewriting without an actual need, and only for basically ideological purposes.
That wastes time while competition moves forward.
sukru,
People hate the subscription model, who knows if it will work in the end. People absolutely hated the forced adobe subscriptions, but it succeeded anyway. 🙁
I was trying to go over some reasons why it’s not merely ideological and isn’t a waste of time over the long term. Without the opportunity to reset, ever, future generations become perpetually tied down to choices made long before them. We have a lot more experience and hindsight than original creators, who were actually quite inexperienced. It’s a problem that we have less ability to shape the future of technology. A periodic reset can actually help move us forward.
To put it a different way, meritocracy doesn’t happen if the winners of old races are presumed to be the winners of all time just because they won first. In the olympics, old records are meant to be broken in new contests; it represents progress and is celebrated. With engineering, we eventually get the chance to fix stuff because all physical creations eventually need to be replaced due to natural causes (be it cars, bridges and buildings, space stations, etc). Engineers learn from the past but aren’t bound to it long term. In the software domain, software can potentially live forever, and our failure to hold these kinds of “software olympics” to challenge incumbents once they get to power holds back future progress. While Joelonsoftware’s advice is sensible for the short term (don’t throw away good work just to follow fads), he failed to account for the encrustification and stagnation that occurs when we blindly follow this never-rewrite mandate for the long haul.
So with all this said, I hope I’ve made at least a mildly thought provoking case for why we need to be considering the merits of clearing the slate periodically rather than insisting it never be done.
Alfman,
I would recommend reading the article on why the space shuttle was sized based on the width of horse chariots in Ancient Rome.
Yes, it is fascinating, and supported by evidence. (The phenomenon is called “path dependence”)
No, they don’t. At least not without constant rejuvenation. Software is like a plant; it needs attention.
Try compiling 20-year-old source code on a modern compiler. Or running a Gnome 1.0 binary directly on the latest Ubuntu. You’ll quickly realize software is in a constant state of decay.
Yes, cleaning the slate is important. However… again… only if absolutely necessary.
It was much easier for Linux to adopt a multimedia optimized scheduler for UI responsiveness than Haiku to catch up on the drivers and software support side. Ecosystem is king.
(Though, I love Haiku, and want them to be successful. That’s another matter).
sukru,
I am familiar with the story. It works as a metaphor although snopes pours some cold water on it.
https://www.snopes.com/fact-check/railroad-gauge-chariots/
Of course the old decisions get plastered over with every version, but it remains to be seen whether our dominant platforms will ever get replaced by something truly new.
Who knows though, politics are fracturing globalism and giving rise to an anti-globalist sentiment that might be able to create new local pockets of competition. New challengers could capitalize on the backlash, although I can’t tell whether there is enough momentum to make a difference.
https://www.osnews.com/story/141877/made-ometer-helps-you-easily-and-quickly-avoid-american-products/
Ok, but “only if absolutely necessary” treats it as a binary condition. I’d like to treat it more as an optimization problem. There will be a point on the curve, even before absolute necessity, when it is more optimal for progress to clean the slate rather than carrying long term baggage and opportunity costs for decades. IOW getting rid of the baggage and escaping the local maxima can bring benefits even though it’s not absolutely necessary.
In principle, remaining in a local maxima is perfectly sustainable. It may never be absolutely necessary to leave, but can we agree that it’s not the optimal strategy for progress over the long term?
I was hoping to end on an agreement, but if I misread this possibility, then forgive me for pushing so hard.
Alfman,
I agree that snopes might have weakened it, however the main argument of designs being influenced by earlier external decisions stands. Even for this case, the secondary form “the space shuttle design is heavily influenced by horses” is true, regardless of whether it was Roman-era horses or horses (and mules) pulling carts out of mines.
I’m not sure why you are stuck in a local maxima 🙂
Older, locally optimum designs are replaced by newcomers all the time. The best they can do is slow it down a little.
While people were optimizing older designs, Henry Ford came up with the Model T, mass produced it, and took away the market.
Same with Tesla. While others were trying to optimize the ICE (internal combustion engine) efficiency, they came up with an economically feasible long range Electric Vehicle, and the rest is history (Tesla Model Y was the best selling car across all models in the world in 2023)
This is true for computers as well. While the EU was mandating GSM (TDMA) and Broadcom was trying to perfect it, Qualcomm engineers came up with CDMA and revolutionized the mobile phone industry (making mobile Internet feasible for the first time).
Software?
Same here; there are many examples of lazy incumbents losing ground trying to optimize their “local maximum” while an upstart ate their lunch.
(Microsoft -> IBM is one good example, so is Google -> Microsoft and then Facebook -> Google, and Tiktok -> Facebook, and …)
This is called the Innovator’s Dilemma. And, yes there is a book about it with that name 🙂
sukru,
That’s why I used the examples I did. Getting out of the local maxima is feasible – never say never, but if the changes require rewriting all the drivers anyway, then we might as well have a new fair contest to see what kernels deserve to have the work put in based on future merit instead of just giving the win to old incumbents.
In 1919 George Eastman divided his house in half to add 9 feet to the middle of it. He paid an inflation adjusted cost of $14M.
https://www.eastman.org/historic-mansion
Technically what he did was feasible. He did it because he could and didn’t care about how much work or money it took. Software is the same way. Yes we can spend inordinate sums of time and resources to re-engineering our way out of a local maxima, but ask yourself this: are you doing it because it makes the most sense, or has it become more about “ideological (or stubbornness) reasons”.
Yes, that happens when markets are still healthy, before turning into duopolies and oligopolies, but typically not after. In the robber baron era, dominant corporations had to be stopped by government intervention, and there’s little indication competition would have recovered on its own.
The EV market is young, and was even more so when Tesla started out. IMHO movement in young markets is expected. The government played a huge role in building up the EV market with billions upon billions in subsidies, without which Tesla might not have made it. This isn’t to discredit the achievement, but going forward, as subsidies dry up and the market matures, the reality is that newcomers that haven’t made it to market yet are probably locked out of the market now.
I think all of these make sense in the context of market maturity; new markets bring about new opportunities. AI is the big one today. But mature markets tend to be poison for newcomers. In terms of operating systems, maybe newcomers can fight for morsels in the long tail, but the incumbents at the top are quite safe. I actually thought Fuchsia had a good shot because google is so well positioned to put a new kernel into mainstream products, but then all the layoffs happened.
Alfman,
We are really sidetracked.
Yes, and that is why large and inefficient organizations ultimately succumb to up-and-comers. That has always been the case… for thousands of years.
Sirius and XM had the duopoly on satellite radio; where are they today? Spotify ate their lunch even after they combined.
What about Dish and DirecTV?
Do you remember Blockbuster?
Or IBM that dominated business computing?
Or…
(You get the idea)
Established markets are fragile and inefficient. They are taken down from the outside.
I want to be pedantic and say “electric vehicles are not young, in fact they are older than internal combustion engines”
They were almost extinct for a hundred years but came back when the battery technology became viable.
And Tesla was not the “first” (at least this time). It was General Motors with their EV1:
https://en.wikipedia.org/wiki/General_Motors_EV1
It is a fascinating story. They made a car so good, they had to take it all down, recall all sold vehicles, tear them apart, and fire everyone.
Why?
It was a massive threat to their ICE business (again “Innovator’s Dilemma”, I highly recommend that book)
What happened to that team?
They built Tesla.
The government? It might have played a minuscule role, as someone buying a $120k vehicle cared little about a $7.5k subsidy. But it helped Tesla’s competitors (which were targeting the $30k market)
Anyway, this is again history.
So true.
That is why Linux thriving is important. And hobby projects do actually help. How? They pioneer ideas which can later be incorporated into the Linux kernel. So they act as incubators. But they are not the future.
sukru,
If you’re referring to the George Eastman house, I think it was a really good metaphor. If it costs $9M to build a new structure versus $14M to change an old one, then I think we need to be candid about the actual goal behind keeping the old structure: it’s not really about value or sustainability, but rather attachment.
Granted, the fate of desktop operating systems follows the fate of the desktop computing market. If desktop computing collapses, so too will desktop operating systems, naturally. But unless you’re predicting the collapse of desktop computing in the foreseeable future, the point about oligopolies staying in power still stands. Such dominance is typically irreversible unless there’s external intervention by forces bigger than the market leaders themselves.
You’re only looking at a single mechanism the government used to build up the EV market. I’d say it’s the totality of their initiatives that helped bring EVs to market along with subsidies for ancillary services like charging stations. And since Tesla sells solar panels, they got subsidies for that too. It wasn’t just the federal government either, California, Tesla’s home state provided even more subsidies.
I don’t have full historical data, but right now 1/3 of Tesla’s revenue comes from just carbon credits. These are government programs designed to make manufacturers of internal combustion engine vehicles subsidize EV vehicles.
https://carboncredits.com/teslas-carbon-credit-revenue-soars-to-2-76-billion-amid-profit-drop/
All this assistance was really quite socialist and we can debate the merits of it, but regardless it worked. Without it, history could have repeated itself and we might be watching “who killed the electric car – the next chapter” right now.
I agree it’s important. It is an aging code base with some legacy baggage, it deserves the “good enough” designation. My hope is that we don’t use this to become complacent on progress for the long term.
Alfman,
It looks like we will differ on how much the government helped Tesla. But that is a side point.
My main point was Tesla was helped by ineptitude and slowness of GM.
GM not only could not capitalize on the exact same subsidies available to Tesla and all EV manufacturers, they could not even use their massive manufacturing know-how and deep pockets against a startup.
Worse?
They actually made a great electric car, but since it would cannibalize their established models, they shut it down. They literally recalled and destroyed the very thing they had built.
Back to Linux…
If Linux acts like GM or other slow established players, they will lose.
If they learn to adopt even if their codebase is full of cruft, they will thrive.
And they are doing exactly that. By taking advantage of Rust within the Linux codebase, they basically suffocate Redox.
sukru,
Agree. Both can be true though: ICE manufacturers cannibalized their own EVs AND governments played an important role in growing the EV market. I don’t sympathize with GM’s bad business choices, but Tesla might not have thrived if it weren’t for green subsidies from federal and state governments. Today the market stands to lose ground under the Trump admin.
O/T, but you might find this interesting 🙂
“Lisa: Steve Jobs’ sabotage and Apple’s secret burial”
https://www.youtube.com/watch?v=rZjbNWgsDt8
Or they will thrive anyway, because being in a dominant position gives them a huge advantage. I think you believe this too; it was implied by your question earlier: “But do you think an effort like Redox has a chance to unseat Microsoft from their dominant position?” You understood that dominance makes unseating unlikely, so much so that you used it as a rhetorical device. And I agreed with you.
I don’t think it means linux will use rust as effectively as redox. But I don’t think it matters, because the market isn’t competitive.
Alfman,
Thanks, I will check that.