This is the first post in what will hopefully become a series of posts about a virtual machine I’m developing as a hobby project called Bismuth. This post will touch on some of the design fundamentals and goals, with future posts going into more detail on each.
But to explain how I got here I first have to tell you about Bismuth, the kernel.
↫ Eniko Fox
It’s not every day that a developer of an awesome video game details a project they’re working on that also happens to be excellent material for OSNews. Eniko Fox, one of the developers of the recently released Kitsune Tails, has also been working on an operating system and virtual machine in her spare time, and has recently been detailing the experience in, well, more detail. This one here is the first article in the series, and a few days ago she published the second part, about memory safety in the VM.
The first article goes into the origins of the project, as well as the design goals for the virtual machine. It started out as an operating systems development side project, but once it was time to develop things like the MMU and virtual memory mapping, Fox started wondering if programs couldn’t simply run inside a virtual machine atop the kernel instead. This is how the actual Bismuth virtual machine was conceived.
Fox wants the virtual machine to care about memory safety, and that’s what the second article goes into. Since the VM is written in C, which is anything but memory-safe, she’s opting to implement a form of sandboxing, which also happens to be the point in the development story where my limited knowledge starts to fail me and things get a little too complicated. I can’t even internalise how links work in Markdown, after all (square or regular brackets first? Also, Markdown sucks as a writing tool, but that’s a story for another time).
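As far as I can follow it, the core idea is simpler than the details: the VM owns one contiguous block of host memory, and every guest load or store is bounds-checked before it touches that block. A minimal, hypothetical C sketch of that kind of check (my illustration, not Fox’s actual code):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: guest "addresses" are offsets into one host-owned buffer. */
typedef struct {
    uint8_t *base; /* host allocation backing guest memory */
    size_t size;   /* guest address space is [0, size) */
} GuestMem;

/* Refuse out-of-range accesses instead of touching host memory,
 * so a buggy or hostile guest stays inside its sandbox. */
static bool guest_load32(const GuestMem *m, uint32_t addr, uint32_t *out) {
    if (m->size < 4 || (size_t)addr > m->size - 4)
        return false; /* fault: out-of-bounds guest access */
    memcpy(out, m->base + addr, 4); /* memcpy sidesteps alignment issues */
    return true;
}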
For those of you more capable than me (so basically most of you), Fox’s series is a great one to follow along with as she further develops the Bismuth VM.
Running all programs in a VM? Sounds familiar.
It should be interesting to see where this series goes and if it covers the same implementation issues as Inferno, or if it goes in a completely different direction. The memory management article touched on some familiar topics.
Perhaps I deployed too many EC2 instances today but, at first, I totally misread what was meant by VM here. It does not mean VM like a hypervisor, VirtualBox, or KVM. This is VM like the Java Virtual Machine (JVM), the .NET Common Language Runtime (CLR), or WebAssembly (Wasm).
It kind of reminds me of UVM:
https://github.com/maximecb/uvm
The second article, on memory management, was an interesting read.
LeFantome,
You’re right, it has always been a problem to use the same term for such different concepts. We’re left with terminology that conflates very different topics. Personally I would avoid using “VM” to describe managed languages. We should just call it “ML” and that would take care of the ambiguity once and for all.
“Introduction to Bismuth ML”
See how much better that is!
🙂
Nobody is going to listen to us, but it does seem like creating a new term to eliminate the collision would make sense. We agree on that. VM is best suited for the idea of a truly virtual machine: one where what is provided is simulated hardware capable of hosting a “real” OS (one designed for real hardware). I think we agree on that too.
I am not sure that ML is the best term for the other, though, as these systems do not offer what is meant by “managed languages” elsewhere. They are abstract machines that execute the equivalent of an assembly or machine language, which can be written by humans but is really intended as a target for other “real” languages (maybe even managed ones).
In .NET, the managed languages are C# and F#, for example. They compile to CIL (Common Intermediate Language), which is what runs on the CLR (Common Language Runtime). Similarly, the JVM does not actually execute Java but rather bytecode that could have been compiled from Kotlin, Scala, Groovy, Java, or something else. The UVM I linked to includes a C compiler that compiles down to the intermediate language that is native to it. The Bismuth project from this article features its own intermediate language, but I assume that it, too, is meant to be more of an abstraction than an actual language to write programs in.
Perhaps “Abstract Machine” would be a better fit for what Bismuth is. It is not a virtualization of a “real” machine such as what a VM like QEMU provides; you cannot install an OS written for “real” hardware on Bismuth, so “virtualizing the machine” is not the point. However, it is a “machine” in the sense that it is a target for software to run on. It offers a low-level target language for code and facilities for memory management, networking, storage, graphics, etc. But it does not simulate any other environment; software that runs on Bismuth has to be compiled for Bismuth. As in the UVM example, though, you could write a C compiler that generates that code and provides some level of portability to other environments (like higher-level languages are supposed to do). On UVM, you could absolutely write an execution engine with a garbage collector that runs compiled C# code unmodified. Porting C# to Bismuth (creating a CLR for Bismuth) would be a similar exercise to creating an environment for ARM or Intel.
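To make that concrete, here is a toy sketch (purely illustrative, nothing to do with Bismuth’s or UVM’s actual instruction sets) of what such an abstract machine boils down to: a bytecode array that compilers treat as their target, and a dispatch loop that is the “machine”:

#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT }; /* made-up opcodes */

static void run(const uint8_t *code) {
    int32_t stack[64];
    int sp = 0;    /* stack pointer */
    size_t pc = 0; /* program counter */
    for (;;) {     /* fetch-decode-execute */
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = (int8_t)code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* The "intermediate language" for 2 + 3: this byte array is what a
     * front-end compiler (C, C#, whatever) would emit as its target. */
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program); /* prints 5 */
    return 0;
}

Nobody would call that loop a “virtualized machine”; it simulates no real hardware. Yet any language with a compiler targeting those opcodes runs on it, and that is the sense in which Bismuth is a machine.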
I think “Abstract Machine” works. What do you think?