Google wants to see Rust programming language support within the Linux kernel so much that it has contracted the lead developer of “Rust for Linux” as the work aims for mainline inclusion.
Google is going public today with its formal support for Rust in the Linux kernel to enhance memory safety, and with the news that it has contracted developer Miguel Ojeda to further his work on Rust for the Linux kernel and related security efforts. The contract runs for at least the next year.
Making any meaningful statements about programming languages is far above my pay grade, so I’ll leave this one to you people to discuss.
Such things should be approached extremely conservatively. Rust is still a relatively young language and there are still loose ends when it comes to toolchain support, portability and general usage. I personally would stick with C for now. In addition, Google should publicly state what their long-term plans regarding Linux are, considering the elephant in the room going by the name Fuchsia. And somebody with a bit of weight should ask Google when they plan to produce a Linux-based mobile phone with free and open source drivers. When it comes to their own platform, Fuchsia, they do allow Rust in some areas but not in the kernel itself (Zircon), claiming Rust is not yet up to such a task, due to not having a good track record of being used in production-ready operating systems. Bottom line: if they want a guinea pig they already have Zircon, and they can start experimenting there first. Reference: https://fuchsia.dev/fuchsia-src/contribute/governance/policy/programming_languages
Personally I think you’re trolling.
C/C++ have their flaws both as languages and in their included libraries, and need a jolly good cleaning up and refactoring. I don’t know enough to have an expert opinion but I am persuaded that the current dominant OSes and languages are not fit for purpose insofar as performance and security go. The best time to start was probably ten years ago but we are where we are.
Do I believe Google are the be all and end all and always acting in pure-as-the-driven-snow good faith? No, but Fuchsia and Rust to my casual eye seem like a good place to start something. It’s also their OS and toolchain and I doubt they’re going to pay much attention to a pipsqueak on OSNews. As for “somebody with a bit of weight”, how do you know it wasn’t “somebody with a bit of weight” who initiated the discussion which led to this? It’s been a general topic of discussion on and off for the past few years, so perhaps Google thought they would start something rather than sit around talking about it?
HollyB,
You know what, I’m in complete agreement with this.
I think you would like d-lang’s take on refactoring C. They’ve kept the syntactic feel of C but cleaned up the worst aspects. IMHO it would be the perfect language to replace C if not for one drawback: garbage collection. Unfortunately the authors presumed the heap would be garbage collected. Now sometimes GC is extremely nice, but it can be problematic for low-level and realtime development. I would have preferred for them to default to no GC and provide the language constructs for it to be optionally enabled only where needed/desired.
https://dlang.org/spec/garbage.html#op_involving_gc
Anyways I think most C devs looking to get away from the regular annoyances of C should take a look at d-lang!
Same here.
Alfman,
D also had another major disadvantage: up until recently it was not fully open source, hence it could not be included in Linux (or other GPL projects).
They finally fixed it. However, once people have moved on, it is difficult to get their attention back.
From what I read, it was more “When we started Zircon in 2016, Rust most definitely was not ready and, now that we’ve put all this work into writing all this C++ as safely as we can, the barrier for adding a second language is significantly higher than elsewhere in the system, or than it would be if we were starting Zircon today.”
@All
No, I am very serious about this. Asking others to introduce Rust in areas where you don’t use it yourself (Zircon), when Rust is a relatively new language without a good track record in this area just yet, is in my opinion a total no go. And it is a known fact that you would introduce, at minimum, things like issues with Linux portability. You would probably need to introduce things like an ABI for maintaining Rust and C compatibility, and other rather hard technical and practical issues are involved. Now drivers, especially non-existent ones, are a perfect area to introduce the Rust experiment to Linux, as the impact of all the issues I mentioned earlier is minimal. Hence if Google would say: please bring Rust to Linux in this area, driver development, so we can start mainlining next-generation Pixel phone device drivers, then nobody could really complain all that much and it would happen with a rather low level of opposition and resistance, as long as somebody is prepared to do the work needed, which I assume they are. Bottom line: start producing some Rust Linux drivers and it will happen; otherwise no, as the potential benefits don’t outweigh the potential risks for now. And before all of you go ballistic again, like in some other discussions: this is a fair and balanced proposition that all parties involved shouldn’t have much issue with.
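(For reference, the Rust and C ABI compatibility mentioned above is typically handled with repr(C) and extern "C". Below is a minimal, hedged sketch of what such a boundary can look like; the KBuffer struct and buf_checksum function are invented for illustration and are not taken from the actual Rust-for-Linux patches.)

```rust
// Hypothetical sketch: exposing a Rust function over a C-compatible ABI.
// `KBuffer` and `buf_checksum` are made-up names for illustration only.

/// Layout-compatible with a C `struct kbuffer { const uint8_t *data; size_t len; };`
#[repr(C)]
pub struct KBuffer {
    pub data: *const u8,
    pub len: usize,
}

/// Exported with an unmangled name so existing C code can call it directly.
#[no_mangle]
pub extern "C" fn buf_checksum(buf: *const KBuffer) -> u32 {
    // The unsafe part is confined to the boundary; everything after it is checked Rust.
    let slice = unsafe {
        if buf.is_null() || (*buf).data.is_null() {
            return 0;
        }
        core::slice::from_raw_parts((*buf).data, (*buf).len)
    };
    slice.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
}
```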
I personally see it more as a way for Rust to get even better, faster.
What I mean is that if it is involved in something as big as the Linux kernel, that would speed up getting everything working.
If you think “it is not a mature language yet” every time you wish to include it in a big project, then it will never evolve and mature in the end.
Or they could be reckless and make a fiasco out of Linux and Rust. As I said earlier: Google, no problem, let’s first start with some meaningful device drivers written in Rust. A win-win situation for all. If that is a no go, then Rust in Linux is a no go, for now, as Rust should first be tested with more experimental kernels, ones that are not in such wide production use as Linux, such as, for example, the Zircon kernel.
I would suggest just the opposite. Right now I would bet there are quite a few dark corners in Zircon, so trying to fully understand the interaction will be tough. With Linux, well, it is an open book after 20+ years of being in the general population and in heavy use/development by many people, so you really only have one unknown item: Rust. Where I’m going with this is that if you are attempting to fully understand both the explicit and implicit interactions between two items, it is always easier when one of them is already understood.
It is in wide production use, in Firefox. Admittedly bugs in Firefox may not represent quite the same potential for catastrophe as in the Linux kernel, but anyway – to say Rust is untested in production software isn’t really accurate.
When I said “untested”, I meant the fact that it will be running in the kernel versus in user space, which are two completely different environments.
Over time, the Rust runtime will be less likely to have bugs than an obscure, barely maintained driver in the kernel tree.
As long as Rust gets more attention, and they keep using the memory safety features, it will be a long term win.
Linux already has too much code that needs “janitors”; making the computer do most of the work should be a good choice here:
https://marc.info/?l=kernel-janitors
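To make “making the computer do most of the work” a bit more concrete, here is a tiny self-contained sketch (ordinary userspace Rust, not kernel code) of the kind of memory bug the compiler simply refuses to compile:

```rust
// Toy example: ownership means a freed buffer cannot be used again.
fn consume(data: Vec<u8>) {
    println!("got {} bytes", data.len());
} // `data` is freed here when it goes out of scope

fn main() {
    let buffer = vec![1u8, 2, 3];

    // Ownership of `buffer` moves into `consume`.
    consume(buffer);

    // Uncommenting the next line is a *compile-time* error (use after move);
    // the equivalent C would compile and read freed memory at runtime.
    // println!("{:?}", buffer);
}
```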
And yet drivers and “leaf” modules are exactly what the originator and Google (who hired him) are talking about:
https://lkml.org/lkml/2021/4/14/1023
So that seems to answer your concerns.
Also, as I pointed out elsewhere: Rust is a decade old now; can you explain your criteria for saying it isn’t mature? And I do not mean telling me to look at your other comments. The burden of proof is on you.
Rust is 10 years old, is being used at Mozilla amongst others, and has a production LLVM-based compiler with GCC support in the works. How much more mature do you want it to be?
Don’t forget that if it’s being used in the real world it’s going to get knocked into shape a bit quicker too, as well as attract more expert opinion, so it will be refined faster than if everyone just sits around looking at it. Using it even in a limited way within the Fuchsia kernel is going to iron out a few things when people take a really close look at it.
Most of the people involved with either project know more than me and are going to be way better coders at what they do so it’s not uncharitable to let them get on with their job. I’m sure if there are snags they will find them quick enough. Plenty will complain if it doesn’t work!
Google is already using Rust on Android’s own Linux fork.
https://security.googleblog.com/2021/06/rustc-interop-in-android-platform.html
If upstream Linux doesn’t want to have it, it will be just like clang.
The Android Linux kernel has been compiled with clang for years now, regardless of upstream still defaulting to GCC.
As you see, Google doesn’t care about what FOSS crusaders might think or do.
Geck’s point is about Rust in the kernel, not in userland. I am sure he will correct me if I am wrong.
There is no mention of the kernel or drivers in the post you linked to, which seems to be all about the userland.
I personally believe there is little risk in the conservative approach that Google is proposing of using Rust for drivers and leaf modules, as Rust is a mature platform that is in use out in the world, including Redox, which has a kernel and userland written in Rust. I think there is enough use out there to say it is OK to start experimenting with Rust in the Linux kernel.
Have you actually read the article through the end?
> While Rust is intended to be the primary language for Bluetooth, migrating from the existing C/C++ implementation will happen in stages. Using cxx allows the Bluetooth team to more easily serve legacy protocols like HIDL until they are phased out by using the existing C++ support to incrementally migrate their service.
> Keystore implements AIDL services and interacts with apps and other services over AIDL. Providing this functionality would be difficult to support with tools like cxx or bindgen, but the native AIDL support is simple and ergonomic to use.
These are the earlier projects initially described on https://security.googleblog.com/2021/04/rust-in-android-platform.html which can be followed from the links on the original article.
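For context, the cxx crate mentioned in those quotes works roughly like this. This is a generic sketch only (it needs the cxx crate and a cxx-build build script, and the header and function names here are invented, not Android’s actual bridge):

```rust
// Generic illustration of a cxx bridge for staged C++-to-Rust migration.
#[cxx::bridge]
mod ffi {
    // New Rust functions callable from the existing C++ code.
    extern "Rust" {
        fn handle_event(id: u32) -> bool;
    }

    // Legacy C++ functions the Rust side can still call during migration.
    unsafe extern "C++" {
        include!("legacy/service.h"); // hypothetical header
        fn legacy_dispatch(id: u32) -> i32;
    }
}

fn handle_event(id: u32) -> bool {
    // New logic lives in Rust, delegating to the legacy C++ path for now.
    ffi::legacy_dispatch(id) == 0
}
```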
As for Rust in the Linux kernel, the faster we get rid of UNIX clones written in C, the better.
Unfortunately C is the COBOL of systems programming, in all senses, and will stay with us just as long.
Yes I did; those examples are about Rust in various parts of Android, not the mainline Linux kernel, which was the topic of discussion here.
And I am rusty on my Android architecture, but I am not sure whether the keystore or the Bluetooth subsystem is in the kernel or not. Can you tell me?
@moondevil I went back and dove into the code linked to in the article and none of it appears to be in kernelspace.
Portability is the biggest issue, imho. Linux supports so many CPUs at this point that it would be difficult, and not likely a great use of anyone’s time, to go back and add support for dead-end SPARC, Alpha and likely a host of other older or more obscure processors that Linux supports.
Well, Rust compiles through LLVM, and GCC support is in the works, so I am not sure just how hard it would be to compile Rust for those architectures, but I suspect it wouldn’t be a nightmare.
But kernel support for those architectures is probably going to be the bigger limiting factor. So you would more likely be backporting features and fixes to older kernels (at least from my experience) than trying to take 5.latest to Alpha, which means you wouldn’t have to worry about this. Feel free to disagree.
The Linux kernel development community and especially Linus and others decide what they let into the Linux kernel. They’ve already decided to allow it in specialized areas. Google is just sponsoring the development of this by getting the lead developer of this on their team.
https://lwn.net/Articles/849849/
Isn’t GC in D an opt-out feature? I thought it also had deterministic memory management facilities…
napard,
You can write code that manages its own memory, but you have to avoid some built-in language features that are implemented assuming the use of a GC heap. So, unlike something like C++, you can’t use the standard object types and libraries without GC, and you have to avoid language functionality that might trigger it. You also have to be careful that the libraries you depend on don’t use these language features either.
https://dlang.org/spec/garbage.html#op_involving_gc
https://wiki.dlang.org/Memory_Management#Built-in_types_that_allocate_GC_memory
Apparently there is an effort to change this, but I’m not sure when it will happen, or if it’s too late because an interface could fundamentally depend on GC, meaning they might have to break interfaces. When you have a language that has reflection, it is so much easier to add GC than to remove it. This is the reason I believe they should have started with no GC by default, with GC as an option.
This is over a year old but Quake 3 has been translated into Rust. Game engines are relatively limited use cases but stress lots of different I/O systems and memory and CPU and GPU fairly well.
https://immunant.com/blog/2020/01/quake3/
Here’s an overview comparing Rust with C.
https://kornel.ski/rust-c-speed
The discussions I’ve read on execution speed suggest there is a lot of opportunity at the compiler end. Some people speculate Rust may run faster because language clarity may allow more aggressive compile strategies. Slowness in some areas may simply be down to compiler architecture or compiler optimisation focus. For very low-level C-style functionality such as bit twiddling, Rust can load a C DLL, with C used as a cross-platform meta-assembler.
For development I used my own portability layer in C, with C++ frameworks layered on top. I provided my own functions for safe memory and safe thread use, and memory buffers including garbage collection. Rust has this built in, so generally speaking it solves a lot of potential development problems in the real world, as it creates a safety baseline above stupidity which C/C++ does not.
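As a rough illustration of the “safe thread use” that Rust ships out of the box (a toy example, not the portability layer being described above):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared state must be wrapped in something the compiler knows is
    // thread-safe; plain unsynchronized sharing does not compile.
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                *counter.lock().unwrap() += 1;
            }
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    // Always prints 4000; the unsynchronized C/C++ version is a data race.
    println!("{}", *counter.lock().unwrap());
}
```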
One nitpick I have with some of the discussions I read is (younger?) coders consigning “old” hardware to the scrapheap. I took great pride in my codebase being capable of compiling so it could run on everything back to the original release of Windows 95. (Some of this was due to the portability layer.) I also didn’t forget scalability and targeted a P200-class machine as the baseline even when I had an Athlon 64 X2. Depending on design choices and the kind of game you are developing, if you design for scalability from the beginning you can still target modest platforms. But you have to make those design choices right at the beginning and code for them. It’s not that much hard work.
I suspect that’s more likely to be pragmatists.
Think of it as a compromise between support costs (developer effort, testing/validation, help desk, etc) and market share. You can assume “market share over time” has a tadpole shape – e.g. when Windows 11 is released (or “Alder Lake” 12th gen CPUs from Intel, or ….) you can expect market share to increase quickly from zero, but then reach a plateau in about 3 years, then slowly dwindle over the next 5+ years (the long tail of the tadpole). At some point, the dwindling market share becomes too small to justify the support costs, and it becomes “bad” (for economics) to continue support for “too old” operating systems and/or hardware.
In general; I think “10 to 15 years old” is probably about where the change happens most often (i.e. operating systems and hardware that are 15+ years old are not worth bothering with now).
However; this assumes that the software was already released and is being used. If your software won’t be released for 5 years, then you should be making decisions based on what the (expected, predicted) market share of older operating systems and/or hardware will be in 5 years (and not what their market share is now). This creates a curious scenario – if software won’t be released for 15 years, then nothing that exists now matters (you’d want to extrapolate from today’s systems in an attempt to guess what might matter when the software is released).
The point is if you design well at the start critical code doesn’t change that much. It really isn’t that difficult to develop or maintain things.
Games are a special case, but in spite of onerous system demands and hard breaks in OS support, the reality is you can scale if you want. It’s more a design or political choice than anything measurable in terms of support. The older code doesn’t magically rot and there’s nothing wrong with old good code.
Some things will need a newer system and either cannot run at all or will be very slow. That’s where scalability comes in, either in terms of shortcuts or switching stuff off. As for my office-orientated stuff, I’m not running anything which couldn’t run on my old desktop. Some newer applications like Photoshop or similar do need more oomph or hardware features which are not present on my laptop, but they will still run. I bet some code in there is ancient. Other graphics applications require hardware GPU features my laptop doesn’t have and refuse to run because they dropped the older code path, but that was a design choice, not an imperative.
It’s been a while since I coded high-performance graphics software (i.e. games) and I would need to take a detailed step through things to determine exactly what the cut-off points are to maintain minimal acceptable gameplay and visuals, and what the scalability implementation issues are, but I can tell that most minimum specifications are more political decisions than statements of what’s possible. Most developers starting from scratch today most likely won’t make the effort, so will cut off at the generation of machine they began their project on (and possibly an upgrade or two if it drags on and they are a speed freak), but my design view is different.
An old machine is not worth the bother until it’s your machine!
It may just be me, but when I coded I didn’t just code. I have values and things which interest me. So someone codes a new engine which will only display on-screen characters in super-duper resolution and so on. It’s really trivial to crank the models and texturing and effects back, or even slap it through a sprite generator if you have to. The same can be done for other assets and map assets. Yes, there are gameplay considerations too, such as hit detection and bounding boxes, and this is likely more of an issue for map content, but it’s nothing you cannot deal with. Pre-cached content means you don’t have to load all the high-resolution assets, so this reduces CPU and memory use. Ultra-large worlds are another problem, and I will admit things can get very funky with this, but again there’s nothing intrinsically stopping scalability. The difficult area is multiplayer if you have different people playing with different render standards, but how you manage this is still a design choice. These kinds of design choices interest and motivate me, which is why I have a different approach.
Admittedly, if I started a fresh project today I would likely cut out a lot of stuff and streamline everything, but the way I code engine frameworks is very much with portability and scalability in mind, which means slotting in backwards compatibility is no big deal. It’s one, maybe two functions at most for some problems.
What does bother me is coders who seize on the latest thing and purposefully drop older codepaths when there is no need to. Like I said, the code doesn’t magically rot and the support issues are really, really trivial.
For the release version of your software; do you enable optimizations for AVX2 in the compiler? Is your software worse (slower, less efficient) than it should be on newer CPUs because it’s not using AVX2 where it’d be beneficial?
Do you have 2 (or more) different versions of your executable for different CPUs (with the compiler doing different optimizations for different CPUs)? If you do; is there some kind of (run-time or install time) auto-selection involved, or do you get “I installed the wrong version of your software and it crashed so now I’m blaming you!” bug reports?
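(One common middle ground for the AVX2 question is a single binary with runtime feature detection instead of multiple executables; a rough Rust sketch, where sum_avx2 and sum_scalar are hypothetical helpers:)

```rust
// Single-binary CPU dispatch: pick the AVX2 path at runtime if available.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(data: &[f32]) -> f32 {
    // The compiler may auto-vectorize this body using AVX2.
    data.iter().sum()
}

fn sum_scalar(data: &[f32]) -> f32 {
    data.iter().sum()
}

fn sum(data: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe: we just checked that the CPU supports AVX2.
            return unsafe { sum_avx2(data) };
        }
    }
    sum_scalar(data)
}

fn main() {
    let values: Vec<f32> = (0..16).map(|i| i as f32).collect();
    println!("{}", sum(&values));
}
```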
For a game; let’s say there’s maybe 50 “different enough” CPUs (where using different compile-time optimizations helps); and 20 different GPUs and 5 different versions of DirectX; and you don’t know if you need loading screens when the player moves from one zone/level to another (or if their disk/memory/CPU is fast enough to load the next zone/level in the background as the player approaches the new zone and avoid loading screens). Now there’s 5000 different permutations for you to test before release. Do you have a pool of 5000 computers to test your game on before release?
I actually agree with you (in theory) – it shouldn’t be like this.
My theory is that executable files from software developers should be byte-code and not native executables (with major sanity checks & static analysis done when source is compiled to byte-code); and the OS should compile the byte-code to native when software is installed and whenever any shared libraries are changed (so that the native code can be heavily optimized for the specific computer, including “literal whole program” optimization). My other theory is that interfaces/APIs should abstract hardware details – e.g. a game should be able to say “This area is water” and let the video driver figure out the best way to render water (and game developers shouldn’t need to write yet another “water shader” and deal with all the GPU differences themselves; and similar for lighting/shadow and all “detail vs. frame rate” decisions; and all the other hassles involved when a game/game engine has to deal with a ridiculously low-level interface/API that failed to abstract low level/hardware details).
This isn’t the kind of world we live in though.
For our current reality; you must have some kind of “too old, not supported” cut-off (e.g. nobody expects the latest release of Photoshop to work on a Commodore64 from 1982). If you must have a cut-off point; then you must decide where that cut-off point is (and you’re going to have some people who complain that your latest release won’t work on their Commodore64 from 1982, or their old “32-bit only” laptop from 1996, or maybe even their 64-bit Windows 7 machine that the OS developer hasn’t supported for 5+ years).
Yes – it’s possible for the “too old, not supported” cut-off to be too early or too late. It’s part of why I suggest using a pragmatic approach based on “hassle vs. market share” to find the right cut-off point.
It’s just really hard for people to think of a scenario where “original release of Windows 95” is the right cut-off point in 2021.
You basically have rediscovered how mainframes work and allow applications written in the 60s to take advantage of modern hardware; for example, IBM i ILE:
https://www.ibm.com/docs/en/i/7.3?topic=introduction-what-is-history-ile
Unfortunately, except for Java, .NET and Swift (on watchOS), not many seem to be that much into it.
I’m also not into JIT techniques (you can’t do expensive optimizations at run-time, so it can’t even compete with the “optimized ahead of time for a generic CPU and not your CPU” approach). I think that both LLVM and CIL (.NET) are able to be used ahead of time (e.g. when software is installed); they just mostly aren’t.
It is common practice for GPU though (shaders, GPGPU); and you’re right about old mainframes (I’m not claiming it’s a new idea, just saying it’s better than other approaches & could help solve part of the “too old, not supported” problem).
@Brendan
You’re basically correct or aiming in the right direction for coding for portability.
Aside from coding for the basics from the get-go (a portability layer handling different OSes, compilers, versions, bit lengths, etcetera) and having a good structure with early abstraction, the essential issue is that you’re coding to APIs. With a good, well-abstracted structure it is trivial to add support for a new API. While both hardware and software implementations can and do have their quirks, you don’t code for bare metal, and even where you do you still abstract it. Some of that may or may not be put in the lowest-level portability layer. More often than not it gets spun off into its own higher-level portability layer, i.e. support for this CPU or support for that API, with abstractions wrapped around them so you have a common interface you can interact with in your main generic code.
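To make the “common interface” idea concrete in the language this thread is about, here is a Rust-flavoured toy sketch of that kind of abstraction layer (the Renderer trait and backend names are invented for illustration):

```rust
// Generic code talks to a small trait; each backend lives behind it.
trait Renderer {
    fn draw_frame(&mut self, scene: &str);
}

struct GlBackend;
struct SoftwareBackend;

impl Renderer for GlBackend {
    fn draw_frame(&mut self, scene: &str) {
        println!("[gl] drawing {scene}");
    }
}

impl Renderer for SoftwareBackend {
    fn draw_frame(&mut self, scene: &str) {
        println!("[sw] drawing {scene}");
    }
}

// The main generic code never mentions a concrete API.
fn run_frame(renderer: &mut dyn Renderer) {
    renderer.draw_frame("level_1");
}

fn main() {
    // Picking a backend is one decision at startup, not scattered #ifdefs.
    let mut backend: Box<dyn Renderer> = if std::env::var("FORCE_SOFTWARE").is_ok() {
        Box::new(SoftwareBackend)
    } else {
        Box::new(GlBackend)
    };
    run_frame(backend.as_mut());
}
```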
I could and did test against multiple hardware and software implementations. Different compilers and drivers and OSes have their quirks and can shake out bugs you didn’t know you had, because X is more conformant than Y. Now, you can discover something is non-conformant and has a show-stopper quirk, but this goes beneath your common interfaces and abstractions, or in the portability layers. That, or you pester the IHV to fix their driver.
The pragmatics of a cut-off point, in my experience, are largely political at the top, although there can be genuine pragmatics. For games it’s possible to have graceful scaling, but this relies on design decisions at the start. For other applications it’s usually a fast path versus slow path thing, and whether you decide to keep or throw away substitute code, i.e. GPU rendering or suchlike versus a software implementation. Some business-logic kind of stuff or AI or other heavy computation or data processing may need new hardware to be done in real time, whereas some end users may be happy to wait. Games have a framerate target (hence scaling) while other apps have a usability target, i.e. do you want to render the task in ten seconds or ten hours.
Depending on the kind of application you want to code, you may be surprised what runs on Windows 95. Quite a lot. Obviously you’re going to have a hard time finding a Vulkan driver which runs on Windows 95, so if you have a Vulkan-only application this provides your cut-off point, but with the right design choices and scalability there are quite a few games which could easily work on older graphics APIs. Most graphics developers are familiar with checking for extensions and capabilities, so scalability at this level isn’t an additional task.
So really you need to examine your design choices and scalability and portability from the beginning, then weigh this against your CPU and GPU and data throughput budget and see if your minimal acceptable end-user experience can be catered for. The majority of applications can run on very modest hardware while some simply cannot. New projects may simply want all the bells and whistles of a new API, and other projects may simply decide they don’t want to code for older APIs.
I know this stuff inside out, to the point where I don’t even have to think about it when scoping a project, but then other people know things I don’t, like some of the newer APIs. Whatever the cut-off point and their age, the only point I’m really making is that portability, scalability, versioning, abstractions, and designing for expansion need to be baked in from the beginning. The reason is that an application coded from scratch today using new APIs may at its core have a lifecycle of 20+ years. Putting that work in as step one is an up-front cost but can pay dividends over the product and maintenance lifecycle.
I don’t think we’re disagreeing fundamentally; we’re more exploring the issues, and from my point of view putting portability etcetera into the process early instead of as an “I wish we had done that” afterthought.
@Moondevil
I really liked Sun’s approach to version compatibility and long term support. I took that as an inspiration for coding in forwards compatibility.
@HollyB
The pragmatic cut off point is what makes sense economically, plain and simple. Most likely those with older hardware are much less likely to spend a lot of money on software, and by cutting them off you are saving work during development.
Adding all kinds of extra abstractions and portability layers can be great if you really need them; if you don’t, then they are just extra cost during development which adds no extra performance at best, or hurts your performance everywhere at worst. For every good abstraction I have seen, I have seen at least 10 that just made finding the code that did something useful harder.
Further up you said code doesn’t magically rot. This is not necessarily true: it still needs to be tested and maintained, can still contain bugs or security issues, and at the very least someone needs to know it exists.