The Linux kernel has become such an integral, core part of pretty much all aspects of the technology world, and corporate contributions make up such a huge chunk of the kernel’s ongoing development, that it’s easy to forget some parts of the kernel are still maintained by some lone person in Jacksonville, Nebraska, or whatever. Sadly, we were reminded of this today when the sole maintainer of a few DRM drivers (no, not the bad kind) announced he can no longer maintain the gud, mi0283qt, panel-mipi-dbi, and repaper drivers.
Remove myself as maintainer for gud, mi0283qt, panel-mipi-dbi and repaper. My fatigue illness has finally closed the door on doing development of even moderate complexity so it’s sad to let this go.
↫ Noralf Trønnes
There must be quite a few obscure parts of the Linux kernel that are of no interest to the corporate world, and thus remain maintained by individuals in their free time, out of some personal need or perhaps a sense of duty. If one such person gives up their role as maintainer, for whatever reason, you’d better hope it’s not something your workflow relies on, because if no new maintainer is found, you will eventually run into trouble.
I hope Trønnes gets better soon, and if not, that someone else can take over from him to maintain these drivers. The gud driver seems like a really neat tool for homebrew projects, and it’d be sad to see it languish as the years go by.
So goes the myth of “many eyes make bugs shallow”. There is no “many eyes” for most open-source code. With the exception of corporate-sponsored open-source code, most open-source code is a one- or two-man show.
This is sadly very true. And it’s compounded by the fact that some of those projects have become dependencies of other projects we rely on.
This will accelerate over the next few years as the last of the boomer generation retires, and really when Linus and his generation reach retirement (less than a decade away).
It will be fine and each new generation deserves its own era anyway. For example Windows is not all that relevant any more either. Times change.
Some people used to feel proprietary code is supreme; now you claim corporate-sponsored open source code is supreme. The reality is that all these claims are shallow, and GNU/Linux runs the world.
kurkosdr,
I think it’s true of most proprietary software too. People may wrongly assume the standards are higher in corporations.
But yeah, I think you’re right. I had a motherboard with a chipset driver (fans/sensors) supported by a single developer. Everyone was making demands and he was getting nothing for his work. Due to kernel politics he wasn’t getting support from upstream either. It was just causing stress, so he threw in the towel and there were no new takers.
Given time and dedication, a single person really can achieve a lot and be extremely useful for FOSS. But it can also be thankless, especially if you are supporting yourself financially and you have demanding users who don’t help. The burnout is real. What’s more, if instead of creating your own work you try supporting other people’s code, you may not get the credit and recognition for it even if you end up doing more work than the original authors. Although I’ve experienced this same phenomenon in the corporate world too.
It’s quite a strange situation though: with so many FOSS advocates and the RTFM mentality, I thought more people would be involved in the creation and maintenance of this “freedom” legacy, willingly stepping in when someone was stepping out. Obviously there are more Karen-like vocal brats than silent heroes out there. That doesn’t shine a good light on this fragile ecosystem.
Obligatory XKCD : https://xkcd.com/2347/
Read any “GNU/Linux” thread or backlash against something Red Hat has done and the “Karen-like vocal brats” problem is right in the spotlight. It is clear that many “communities” are filled with entitled takers that never contribute anything and have VERY strong opinions about their rights and the obligations of others. Their core function seems to be mostly to complain about the people doing the actual work, be those individual contributors or paid developers at corporations.
Over the years, I have really come to appreciate the subtle difference between the definitions of Open Source and Free Software. Open Source is a better way to build a lot of software, and even greedy corporations figure that out a lot of the time. We all benefit from that, and I think it accounts for the vast majority of software in the ecosystem, but Open Source is more of an engineering philosophy than a political one. No surprise then that the people most vocal about the politics are often contributing the least. Sadly, the “everybody pulling together for the common good” ideal works about as well in Free Software as it does in economics.
In Free Software, the Pareto Principle (80/20 rule) is in full effect. I think that less than 5% of the “community” contributes 95% of the code. It may be less than 1%. The people that do though are real super-heroes. At the very least, we should treat them better.
LeFantome,
There’s undeniably a lot of friction there. Red Hat’s case is particularly interesting because they are playing both sides of the field. They want to cut off downstream users who exercise the GPL’s redistribution rights, yet it’s these very same rights that entitle Red Hat to bundle thousands of FOSS packages into RHEL. They want the benefits of the GPL for themselves, but don’t want to extend those benefits to others.
It seems like we need to solve the problem of FOSS funding in order for contributors to be economically stable, but funding is challenging especially when FOSS licenses literally entitle others to give away one’s work for free.
Yeah, there are many freeloaders. I contribute a few patches, but I’m guilty as well. I think many corporations are using FOSS but not contributing back.
I agree, but I don’t know how to get there.
You can bet there are more people maintaining the Windows version of the driver, because that’s where the money is.
kurkosdr,
I wouldn’t call that a sure thing though. I’ve experienced countless devices becoming unsupported on windows. I’d even go as far as to say it genuinely concerns me when I buy new hardware because of these experiences. Planned obsolescence is very real.
My latest purchase is an oscilloscope, and I find myself in the same boat once again; I absolutely hate being stuck with proprietary drivers with an uncertain future. I would far rather have open drivers that I can support if need be than proprietary ones that I can’t. The problem is that the majority of manufacturers don’t offer FOSS drivers to begin with, but that’s the way it is.
That’s true. For some companies, maintenance of a proprietary driver ends when the product it serves can’t make money anymore. No matter how many people are working on the Windows driver, they get re-assigned.
That’s an ongoing problem: Use the proprietary Windows driver that covers your needs now but may not work in the next Windows version, or use the Linux driver that may not cover your needs now (and generally be worse than the Windows driver)?
And if I may say, this is the main reason most Windows users keep running old versions of Windows for as long as possible. It was also one of the reasons Microsoft couldn’t give Windows 10 away for free to Windows 7 users (or even Windows 8.1 users!).
This is why I think Desktop Linux’s approach of chasing the shrinking ship of Windows by trying to support all the hardware in the world via third-party drivers is a mistake. The Apple model of having a well-defined list of compatible hardware for the things inside the PC and requiring USB hardware to use generic drivers is the right approach (for example, your oscilloscope would show up as a video capture device plus a serial device for giving you the raw numbers, and it wouldn’t need a driver in the first place). It’s also why I consider the Steam Deck OS a step in the right direction: it doesn’t pretend to work on every PC.
kurkosdr,
Having a list of officially supported hardware may make the selection easier, but even then hardware support can be an issue. Neither of the oscilloscopes I wanted is supported on Apple PCs, so I would have been in the same boat. 🙁
Many/most oscilloscopes aren’t really video capture devices. The UI is typically rendered on the computer, and the software plays a very active role: spectrum analysis, logic analysis, capturing, etc. It’s necessary to program voltage ranges, triggering, and so on. It’s not supportable under a generic USB class the way mice or webcams are.
Ideally there would be generic FOSS software with the goal of connecting to every oscilloscope on the market. I understand this isn’t entirely realistic and somewhat goes against your point, but I’d like to mention that there are already FOSS projects that work the way you suggest, with lists of hundreds of supported devices.
https://sigrok.org/wiki/Supported_hardware#Oscilloscopes
Not only that, the sigrok software on Linux is better than the official Owon software on Windows. This is the first place I looked for hardware. The problem is that while there are hundreds of supported oscilloscopes, all of them are older/cheaper models. I have one of those cheaper models and it is well supported, but it is inadequate for my needs.
Someone did the work of decompiling Owon’s oscilloscope software for the 3000 series and replaced the Windows code with Linux code. I was actually hoping to piggyback off their effort, but Owon completely changed the software and libraries for the 6000 series scopes. So the effort to port their new proprietary software (which isn’t very good, mind you) was more than I’m willing to commit to, especially since I’m not being paid.
The main reason Steam’s hardware compatibility is so high (even if unofficially) is because they are standing on the shoulders of others who have already added that hardware compatibility to Linux. The odds of random PC hardware just working are very high.
I do come across exceptions though, like the 2.5GbE port on my newest motherboard that didn’t work. It actually was already fixed upstream, but Debian stable hadn’t incorporated it yet (as it goes…). Now it’s fixed.
An area I consistently find poor on Linux is Bluetooth. Given that I consistently experience the same Bluetooth problems across completely different hardware and distros, the evidence seems to suggest the BlueZ framework is shoddy. Android exhibits no such problems. IMHO it would make sense to replace the Bluetooth stack in most Linux distros with Android’s, which is well supported.
Wow, I think I covered too much ground here… back on the real topic though: I think manufacturers providing FOSS drivers would help with most of the long-term support issues regardless of OS/platform. Even very niche operating systems could benefit greatly. But I just don’t know how to get there given that so many manufacturers are reluctant to participate in open source at all.
I doubt that a company such as Intel pays developers less for developing Linux drivers than it pays the ones developing Windows drivers. Historically speaking, corporate-sponsored Windows drivers got abandoned sooner than their Linux counterparts. If and when some driver gets abandoned for Windows, that is basically it, whereas for Linux usually somebody continues to maintain it, indefinitely. That includes fixing bugs introduced in the past, whereas with Windows drivers you can forget about that after EOL. So even if one or two people end up maintaining a driver after corporate sponsorship is over, the advantage of more eyes fixing more bugs is still relevant here. Sometimes a decade after you thought some hardware was no longer relevant, a bug still gets fixed, performance improved, or the driver modernized by adapting it to modern frameworks.
They don’t pay developers less, they assign fewer developers to the Linux drivers, because the money is on Windows.
I mean, a single developer can only do so much work; I don’t see how developing drivers for Windows or Linux would differ in terms of effort invested and salary received as an individual employee. So in the end a company has to hire a suitable number of developers to make the hardware work. And why would money be involved only on Windows? In corporate environments, and regarding the latest trends, I would say that Linux is dominating. AFAIK Linux is the industry standard for AI too. So all in all, this is where the money currently is: on Linux and developing for Linux. Microsoft knows it too, and that is why they fully embraced Linux. Google, for example, runs on Linux; without Linux there would be no Google. Most of the ARM-oriented companies …
I am typing this on an ancient machine with a Haswell processor. In the NEXT version of the Linux kernel, Intel is releasing some improvements for the graphics hardware in this machine. Apparently this is because hardware as old as mine is still part of their official hardware support test bench.
Intel just released improvements to the PAE code used to support 32 bit processors in Linux.
As far as “where the money is”, it depends on what we mean. If you are talking about explicitly desktop support, yes Windows is where the money is. However, everywhere else, Linux dominates. Even Microsoft makes more money off Linux on Azure than desktop Windows at this point. What matters about Windows at this point is Microsoft Office.
@Alfman
Absolutely, a LOT of commercial software is staffed even more poorly. For one thing, if it is “well managed”, commercial software is staffed “minimally” whereas FLOSS receives contributions based on the interest of developers. It is why many software categories are led by an Open Source alternative that proprietary options struggle to compete with (or decide it is not worth competing with). Not only may the FLOSS option have broader participation but that participation may include world-class talent. You would never pay talent like that to work on some classes of problem. Some things that are not “commercially important” can be very interesting, quite fun, and maybe even very useful.
On the other hand, most commercial software at least gets the “minimum” developer support that it needs. If paying customers rely on something, generally the supplier will ensure that developers are assigned to it. They may not be world class but they are likely to at least be competent. They may not enjoy it but this does not matter if it is their “job”. Of course, sometimes they are also talented and love it. The point here though is that they are there, doing what needs to be done. FLOSS cannot make the same promises.
When proprietary software is no longer interesting or important to its supplier, it is going to become orphaned by its developers, and it may not even be possible for volunteers to step in even if they want to. With FLOSS, at least there is the possibility that somebody could step in. If you rely on the software, at least you have the option that it could be you.
Perhaps the best of all worlds is commercial Open Source. Projects get staffed, we all get to use it, and anybody that relies on it can continue to do so (if they are willing to do that work).
LeFantome,
I agree with pretty much everything you’ve said in the post.
We need more FOSS friendly companies & manufacturers. It’s hard to convince corporations to release their own software as FOSS when software is the product. But when it comes to hardware drivers it seems like they already have a sustainable business model in place: sell the hardware, give away the drivers. The drivers are already “free”, meaning no cost to users. Making the drivers open source would solve a lot of long term hardware support problems.
I strongly favor hardware with FOSS support, my wish is that more manufacturers would actually cater to us.
I don’t know if people really assume that, but I’ll say that standards aren’t higher – process is. But it’s a single-directional process: get it out the door through a faux production line, then forget it. In a slanted way, it’s the same problem. No one maintains anything.
In hardware it’s interesting, because there tends to be a long-term maintenance window, with a plan and parts and all that. I’ve often likened the challenges of commercial software development management (that’s a mouthful) to hardware analogues, because basically, we just cut corners in the software world.
That’s okay for software of a certain size, mind you (think Unix-style small-scope programs – open source is actually pretty good at this). But when we start to scale up, we tend to scale only enough, in terms of process, to get things out for release; we don’t scale as far as they do in hardware spaces (think physical GPUs or maybe automobiles). If we did, software would become just as expensive and take just as long to develop as hardware, so it makes sense we don’t do it, and instead stick with hacks and emergency patches.
Not quite so, kurkosdr. The upstream project normally does not provide the Linux distribution’s packaging maintainer. The most widely used open source projects make it into Linux distributions, and this on average adds about four more people looking at the code. There is a reason you see more open source projects on Linux: the distribution-maintainer system has meant that the team behind software developed for Linux is a lot larger than it first appears.
There is a difference in bug counts between the Windows and Linux versions of open source projects that lines up with this. So I am not sure that “many eyes makes bugs shallow” is a myth, because there is a relationship between the number of people looking at the code and the types and number of bugs in the code.
kurkosdr, the “1-2 man show” counts just the upstream developers, not the downstream people like distribution maintainers who are also looking at that code. The reality is that most small-team open source projects have more people looking at the code than most small-team closed source projects, because of the distribution maintainers that most people fail to count.
When you truly add up who is looking at the code, proprietary small teams are generally smaller than open source ones. Coverity (now owned by Black Duck) has been funded quite a few times by the US government to compare code defect rates in open source vs proprietary software. They found that open source has a lower defect rate, which aligns with the fact that more people are looking at the code: roughly half the number of defects in open source vs proprietary, i.e. twice as many defects in proprietary code.
Like it or not, “many eyes makes bugs shallow” is most likely true, but the devil is in the details. Solo developer vs solo developer plus four distribution maintainers: the difference is roughly a 50% defect rate. A solo developer plus twenty distribution maintainers gets you to about 49% of the defect rate of the solo developer alone. There is clearly a law of diminishing returns in throwing more humans at the problem once you have four distribution maintainers. There have been studies on this; some went as far as looking at projects with up to 100 outside maintainers, and there was still only a small further reduction in defect rate. Getting to 100 percent defect-free just by throwing humans at it worked out to be: take the project’s line count, multiply by 10, and that is how many humans you need (multiply by ~1.9 to get the number of eyeballs, since some people are blind or missing an eye, which is why it’s not 2). Then you have to manage that herd of cats without the project forking or other disruptive things happening.
This is why we need automated code auditing mandated by places like GitHub. Instead of a 40-50% reduction in defect rate using humans, automated code auditing can produce a 90%+ reduction. Yes, we need machines for code auditing. Using more than one automated code-auditing tool results in a lower defect rate than using a single one, the same effect as using more than one compiler.
“Many eyes make bugs shallow” is most likely not a myth, going by all the studies so far into code quality. Open source does result in more eyes looking at the code in the majority of cases. The problem is the quality of the eyes, and the volume you in fact require to get major results.
kurkosdr: “There is no “many eyes” for most open-source code.” This is a myth. The majority of open source is shipped by distributions, and the myth comes from counting only the developers working on projects, not the maintainers making the packages for distributions. Look at GitHub and something like 2/3 of open source projects have a single developer; then look at how many of those are just a developer’s prototype fork and that number is cut in half, making it 50/50. Then 90% of what remains happens to be picked up by one distribution or another, with a different person doing the package than the single developer.
The open source model of “here is the source code, build it yourself”, followed by “you are too lazy, so a distribution maintainer builds the code for you”, equals more people looking at the code than the raw developer count suggests. The problem now is how to improve the quality of those eyes.
oiaohm,
I thought this as well after reading that sentence, however then I read the next sentence where kurkosdr qualifies the reasoning. It’s not the eyes to bugs relationship that’s being questioned so much as the assumption that open source automatically has many people looking at it.
Obviously this varies from project to project, some get a lot more resources and attention than others. Still, some projects do face labor problems and every now and then this can have drastic consequences for the projects involved.
https://en.wikipedia.org/wiki/XZ_Utils_backdoor
Distro maintainers may catch bugs as users report them in testing distros. But if a vulnerability doesn’t otherwise cause a bug/crash, then it’s easier for it to fly under the radar.
Just to be clear though, I don’t consider this a criticism of FOSS projects specifically; corporate developers and FOSS developers have these kinds of experiences in common. It’s just a consequence of under-resourced projects, whether they happen to be FOSS or proprietary.
1. readers aren’t automatically writers
2. is it really orphaned, or has just nobody had the time to step forward?
It was less true when the typical programming job paid closer to $30k rather than $150k.
With the advent of AI coding we might see that again, though.
Well, if things would slow down a little bit, perhaps maintainers would not be under so much stress.
There are millions of new libraries for absolutely everything and my fastest computer, in terms of responsiveness, remains my ThinkPad W530 with FreeBSD and SSD.
There has been tangible progress with things like Wayland and a few others, but we keep reinventing the wheel, and millions of hours of work in the form of code go to trash for little practical (emphasis on practical) benefit.
> we keep reinventing the wheel, and millions of hours of work in the form of code go to trash for little practical (emphasis on practical) benefit.
Yup, adding layers over layers of abstractions and f***ng nonsense “development technologies” like electron, react, etc.
That has nothing to do with this driver. I assure you electron/react are not used in Linux kernel development.
The technologies everyone wants to hate: electron/react have their place. They make creating complex cross platform apps easier. So while you might not like the memory used, they at least work on linux/bsd/other strange platforms that would otherwise not have them. I think I’m ok with the trade off for the most part. Memory is cheap compared to development no one would ever fund.
I should also say, I’d be willing to do things like what I did on a volunteer basis, but it’s difficult to keep up with mailing lists, and of course I’d need some better understanding of the problem area and sufficient devices to test. With this one, I absolutely do not have the domain or hardware knowledge to step in. It feels bad to see code rot, but better the code than relationships and finances.
Within the kernel, I’m not in it enough to speak intelligently about all of the changes that get made. When I was briefly a contributor, it seemed like there were random changes in data-type naming conventions that required lots of search-and-replace work, followed by test suite runs. It was annoying. I don’t know how common that is, and really those datatype renamings could have been part of a restructuring allowing for new architectures or something genuinely useful. But as someone in charge of the code, it was a pain to deal with, seemingly busy work.
Bill Shooter of Bul,
This, in a nutshell, is Linux’s unstable ABI problem. Naturally, when projects are new, structures may need to change more frequently, but after so many years and decades it’s a bit cringe-worthy to consider how much effort has been lost to churn, essentially busywork like you say. As a supporter and fan of Linux, I think it’s time we solve the unstable ABI problem to reduce the churn and increase productivity. Some people fear a stable ABI because they think of it as an all-or-nothing solution with no future updates, but it’s possible to compromise on medium-term stability – for example, at least provide stability within major versions rather than having every kernel update potentially cause ABI breakages.
Obviously this has mostly been decided on ideological grounds, but we should acknowledge the real overhead and frustrations for developers and even end users. I’d like to see Linux be a bit more pragmatic here.
I mean, the unstable ABI is both a blessing and a curse. The ability to change it on a project as large as Linux keeps things pretty flexible, but at a pretty high cost.
Working on non-open-source code, I still see this kind of behavior from other devs who suddenly decide the existing conventions were not ideal and *need* to be changed. They of course do not, but the change requires a lot of work and introduces subtle bugs, while disrupting everyone’s work.
And of course that means design patterns that were popular a decade ago sometimes go unchanged, and newer devs have to learn what people thought was cool a decade ago.
Bill Shooter of Bul,
I’m not against being able to change the ABI periodically, but Linux is mature enough that breaking ABI changes shouldn’t be happening frequently.
Yeah, I’ve worked on some corporate software that has had countless developers and it’s a bit of a free for all, haha. Not only do principal developers come and go, but so do the managers and ownership too. It’s hard to be consistent with so much turnover.
One of the reasons I think safe languages are so beneficial isn’t so much because of bad developers (though that can be a problem), but because giving more responsibility to the compiler means memory semantics can be enforced consistently regardless of how many humans have worked on the code.
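A trivial sketch of what I mean (a made-up example, not from any real project): in Rust the aliasing rules are checked by the compiler itself rather than by team convention, so they hold no matter how many people have touched the code over the years.

```rust
// Hypothetical example: the compiler, not code review, enforces the memory rules.
fn main() {
    let data = vec![1, 2, 3];
    let first = &data[0]; // immutable borrow of `data`

    // Uncommenting the next line is a compile error, not a latent bug:
    // you cannot mutate `data` while `first` still borrows from it.
    // data.push(4);

    println!("first element: {}", first);
}
```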
Interestingly, Rust, a so-called language of the future, opted not to support a stable ABI. It might seem like a good idea at first, but as the years go by you start to understand that you shoot yourself in the foot with it, at least if you want to be bigger, general purpose, and continue to be modern. For smaller and specialized projects, sure, why not, whenever that makes sense. Think of it like the Amish movement: at some point it seemed like a good idea, but in the long run there are more and more pressing challenges involved, for example being Amish and the introduction of smartphones.
Geck,
Rust does support ABIs, and in fact it needs them to integrate with existing software via FFI. Naturally, foreign interfaces are “unsafe”, but there’s also the abi_stable crate, which enables safe interfaces. Unlike C, Rust makes these things more explicit.
https://docs.rs/abi_stable/latest/abi_stable/
https://github.com/rodrimati1992/abi_stable_crates/blob/master/readme.md
Most C compilers use the “cdecl” calling convention by default, which is based on C and defines architecture specific details. While C ABIs are by far the most common, there are tons of possible ABIs and calling conventions that support more features such as C++ class typing for example.
https://en.wikipedia.org/wiki/X86_calling_conventions
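To make that concrete, here’s a minimal sketch of what opting into a C-compatible ABI looks like in Rust (the struct and function names are made up for illustration):

```rust
// Hypothetical example of explicitly opting into the C ABI from Rust.

// #[repr(C)] pins the field layout to the platform's C rules instead of
// leaving it to the Rust compiler's discretion.
#[repr(C)]
pub struct Sample {
    pub channel: u32,
    pub voltage: f64,
}

// `extern "C"` selects the C calling convention, and #[no_mangle] keeps the
// symbol name predictable, so C (or anything that speaks the C ABI) can call it.
#[no_mangle]
pub extern "C" fn sample_scale(s: Sample, factor: f64) -> f64 {
    s.voltage * factor
}
```

Crates like abi_stable build on the same #[repr(C)] foundation to offer safer, higher-level interfaces across a library boundary.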
No, generally speaking Rust doesn’t have a stable ABI. There are some limited and rather convoluted options available, like trying to go through C, but in general we can say a stable ABI doesn’t exist for Rust. Things like dynamic linking, cross-language interoperability, managing memory … not feasible. In practice, at minimum you need to recompile an external (Rust) driver or library whenever the “Rust OS or dependency” changes, due to, let’s say, a bug fix. This is basically your biggest critique of Linux, that external kernel drivers need to be recompiled for different Linux kernel versions; on the other hand you support Rust and don’t blame Rust for it, even though Rust, by design, works the same way. In some ways Rust is much worse in this regard than Linux; Rust in reality just lacks any concept of an ABI in this sense altogether. Nonetheless it’s rather interesting that this approach was selected as being the future. In reality this will likely hold Rust adoption back, that is, adoption outside of the Rust ecosystem.
I guess it’s like going full GNU/Linux: it will work better than anything else available, by design. Ergo, if you go full Rust, then I guess you can expect the biggest benefits too. With Linux, the industry has adopted it over the past few decades and it is now the dominant kernel; with Rust we’ll see in a decade or two if we can expect the same. In kernel space I am rather sceptical, unless a new kernel written in Rust extends the Rust ecosystem and becomes the industry standard. Still, interestingly, it likely wouldn’t have a stable ABI. In this regard it would likely be “worse” than GNU/Linux is ATM.
Geck,
It does, but it needs to be done explicitly. Many Windows compilers don’t output code that is compatible with the win32 ABI by default; if you just try it, it won’t work, but you can enable win32 ABI semantics when you explicitly call for them.
Technically it is feasible, but because C is the only interface supported by nearly all compiled languages, you are often limited to C ABIs, which I agree is basic. This is true even for C++ code: try to call C++ code from any other language and you usually end up with a C wrapper. It’s the same reason most Rust code uses C ABIs rather than native ones.
To have a stable interface or not is not inherent to the choice of Rust. Right now, adding Rust to Linux constrains Rust code to C interfaces. The Rust developers have little pull, and notoriously the Linux graybeards have already fought against changes. So I think we’ll get the most benefit in a new kernel. But it’s clear that any new kernel faces incredible headwinds in reaching critical mass regardless of how well it works.
Thousands of projects have developed kernels, obviously some will be good and some will be bad. Statistically the odds are very high that at least some of these kernels have better designs and implementations than the mainstream kernels we are using, but they don’t have the critical mass to get anywhere.
I guess times have changed in how we do software. Basically you don’t technically need a “stable ABI” in this day and age; that made some sense back when you shipped your software on CDs and expected it to run afterwards. The way it works today is that it’s a much more collaborative effort, and the majority of development is done on some hub. So technically a “stable ABI” is not needed any more. Once you commit your code, it goes through a pipeline that does whatever is needed, for example recompiling and delivering the software to end devices. In this sense, and in terms of optimizations and the pace of change, a “stable ABI” only represents a hurdle. So a decade or two from now it will be interesting to see whether the approaches that keep a concept of a “stable ABI”, or the ones that removed any trace of it, like Rust, will prevail. In the short term I would say Rust is a major PITA: as Linux currently still does things like a stable ABI, Rust is now basically challenging Linux to drop it completely, unless, I guess, Linux keeps maintaining it to the extent it did in the past using some crude hacks through C. But if Rust code in the Linux kernel ever starts to prevail, then likely any notion of a stable ABI will be dropped, as Rust developers are just not into it.
Geck,
That makes no sense, as it seems to be conflating userspace and kernelspace… you know that Linux uses a stable ABI for userspace, right? Surely you must know this, but then why in the world would you choose to argue against a stable ABI using userspace examples? It’s a bad example to make your case against stable ABIs.
Not true.
For better or worse, almost all compiled languages have to interoperate with C calls. It has little to do with a language; this is the reality that language designers live in.
Again, blatantly untrue.
I’ll just point to Redox OS as a counterexample to your false claims. Not only do they see the benefits of stable ABIs, but they acknowledge that it’s not an all-or-nothing proposition until the end of time. Quoting myself: “at least provide stability within major versions rather than every kernel update potentially causing ABI breakages” – Redox OS is already doing this, and I think Linux devs could learn a thing or two from others.
https://doc.redox-os.org/book/libraries-apis.html#providing-a-stable-abi
You would like to nitpick here and try to prove something, but that was not the purpose of this discussion in the first place. Conceptually, you often argue against GNU/Linux for not providing a stable ABI for (external) kernel modules (drivers), but on the other hand you seem to be a strong proponent of Rust, where a stable ABI is not even a concept included in the language. AFAIK Redox uses C for anything you would call stable in this discussion, not Rust. So if the future is Rust, then forget about things like a stable ABI. Rust doesn’t do that, and if we really do get a Rust-written kernel in the future, a kernel that becomes an industry standard, then device drivers highly likely won’t work the way you propose GNU/Linux should work. On top of that, the whole thing will likely be a big monolith and not a microkernel. As for Rust in Linux, I feel this effort is likely doomed, and I don’t see Rust making a dent in the next decade or so. The frustration is, in my opinion, growing, and at some point it might become policy to not accept Rust in the upstream Linux kernel. Not due to hindering progress, but simply due to the technical challenges involved in maintaining all the mess involved.
Geck,
The ABI has to be declared explicitly; just because it’s not done by default doesn’t mean you can’t do it. It’s similar to win32 conventions having to be explicitly declared. The same holds true in Rust.
For better or worse, supporting C interfaces is an important goal whether we like it or not. This isn’t anything intrinsic to Rust; practically all compiled languages live with this reality, at least for now.
It’s not true. Rust can do stable ABIs; it just has to be done explicitly. If you want to ignore that and claim the opposite, well, that’s your problem.
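For illustration, here’s the consuming side of that kind of explicit contract, a hypothetical sketch that assumes a shared library (say, libsample) built from an exported `sample_scale` like the one I sketched above:

```rust
// Hypothetical consumer: we declare the C-ABI signature we expect from a
// prebuilt shared library. As long as the library honours this declared
// signature, the caller keeps working without being rebuilt against new
// library builds.
#[repr(C)]
pub struct Sample {
    pub channel: u32,
    pub voltage: f64,
}

#[link(name = "sample")] // assumes a separately built libsample; made up for this example
extern "C" {
    fn sample_scale(s: Sample, factor: f64) -> f64;
}

fn main() {
    let s = Sample { channel: 1, voltage: 3.3 };
    // Crossing the C ABI is `unsafe` because the compiler can only trust,
    // not verify, that this declaration matches the library.
    let scaled = unsafe { sample_scale(s, 2.0) };
    println!("scaled: {}", scaled);
}
```

The point is that the stable surface is whatever you explicitly declare, not something the language hands you for every Rust type by default.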
Good, as then eventually Rust will achieve what you are after. Some general-purpose kernel will be written in Rust, from your personal preference a microkernel, and the way it will work is that you will be able to create and compile your device driver in Rust and distribute it on a CD in binary form. Years, or preferably decades, later, when the newest kernel is produced using the newest Rust toolchain … the device driver will still just work, no need to recompile it. So if this is really achievable with Rust, at least in its current form, as you claim, then great. If by any chance that is a pipe dream, then good luck with that. But as said in this debate, maybe the future is indeed reserved for the ones not focusing on redistributing device drivers on CDs in the first place, like GNU/Linux, which seems to be rather successful at it. Note that this was a debate about concepts and the possible future, not about technicalities and the current state of affairs.
https://www.theregister.com/2025/02/07/linus_torvalds_rust_driver/
Yeah, it likely won’t work. Social media pressure vs reality. The way the Linux development model works is that you can’t rely on internals staying stable; during development the only constant is change, and this approach has proved to be key to Linux’s adoption and success. With more and more Rust, a barrier would be implanted: a change would propagate through the C codebase as usual, but what would differ is that the change would ultimately affect the Rust portion of the kernel too, causing it to stop working or misbehave. A different maintainer proficient in Rust would then need to take care of that, resulting in ever-growing frustration on both sides. Or what do people expect, for Linux maintainers to be proficient in both C and Rust? For free? And now social media will attack Linus and cancel him? GTFO. We tried, and the attempt seems to have failed, so either Rust support for Linux is developed downstream, for now, until there is some substantial proof of concept that it’s worth the hassle, or better to invest in a Rust-written kernel from the get-go and see how that one goes, without the cultural clash. IMHO Linus foremost must protect the integrity of Linux, and if Rust proves to be damaging, then cut it off completely. Companies like Google and Microsoft are free to continue to use Rust internally, and let’s see what they end up doing with it, especially when it comes to kernel space.