MenuetOS sits at an interesting nexus between astonishing technical achievement and computerised work of art. Despite gaining more driver support, more included applications, an improved GUI and digital TV support over the years, the super-speedy, pre-emptive multitasking operating system is still capable of fitting on a floppy disk (assuming you can find one).
MenuetOS is a technical marvel. Not only is it written entirely in assembly, it also shoves a fully capable multitasking operating system onto a single floppy disk.
This fancy OS proves that all the layers between the computer and the user, written in all these higher-level programming languages, waste an enormous amount of CPU and memory resources.
It's not so much amazing what some old computers like the Amiga and Commodore 64 could (and can) do, as it is disappointing how modern computers perform.
It's oftentimes not the languages per se, but more the way programmers handle computing resources. Sure, Java and C# programs are at times bloated (because the programmers who wrote them were also wasting resources), but take any near-machine language like C or (parts of) C++ and you can easily get close to or match the speed of any hand-written assembly implementation, in a fraction of the development time and with far fewer bugs. Keep in mind that 10% of your code runs 90% of the time, so the key is to focus your resources on what really matters, rather than taking either extreme approach (all high-level, or all assembly).
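To make that concrete, here is a minimal sketch of the measure-first approach in C (all names are illustrative, not from any real project): time the suspected hot spot before deciding anything deserves hand-tuning.

#include <stdio.h>
#include <time.h>

/* Hypothetical hot spot: the 10% of code doing 90% of the work. */
static long hot_path(long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i % 7;
    return sum;
}

int main(void)
{
    clock_t start = clock();
    long result = hot_path(100000000L);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* Only if this dominates the runtime is it worth dropping to ASM. */
    printf("hot_path: %ld in %.2f s\n", result, secs);
    return 0;
}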
With regard to MenuetOS, the main reasons why modern OSes tend to be so large are:
a) they contain tons of multimedia (high-res icons, full CD-quality tunes, etc)
b) they simply do a lot (there’s almost zero extra cruft in a kernel, almost all of it is executable code that actually does something)
It's possible to compromise on a), but if you want to actually get things done, you don't want to lose b). And no, compiler-produced assembly is not inherently slower and/or larger. What most compilers do is give developers options to choose from (would you like this loop to be small, or fast?).
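As a small illustration of that choice, assuming GCC or Clang as the compiler (the file and function names here are hypothetical): the same C loop compiles quite differently under -Os and -O3.

/* Build it for size or for speed:
     cc -Os -c scale.c    (optimize for small code)
     cc -O3 -c scale.c    (optimize for speed: unrolling, vectorizing)
*/
#include <stddef.h>

void scale(float *v, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        v[i] *= k;    /* at -O3 this loop is typically unrolled and vectorized */
}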
Thanks for the insightful comment!
I also think there is another factor, and that is hardware limitations. The Commodore 64 and Amiga, for example, had limited hardware, and certainly in the case of the C64 it was near impossible to upgrade.
So code was written to run well on those machines. Programmers came up with tricks to improve performance or use less memory. With PCs came a period where you just added memory, a faster CPU, a bigger hard disk, or a whole new PC. Now I think we are in an age where the hardware is often more than enough to run most applications without breaking a sweat, so there's no incentive for programmers to make code efficient or smaller. They code something and it works fine, so why spend time optimizing it?
Honestly it’s hard to blame them and I would do the same thing.
But then you see (and hear) a demo run on a 1 MHz Commodore 64 with 64 kB of RAM and you start to wonder if your modern computer shouldn't be able to run much, much faster.
That is true.
Machines today are more than capable of processing anything.
But instead of getting faster OSes, we have OSes that show no evolution apart from graphics.
Windows is one example: XP -> Vista, Windows 7 -> 8.
Linux is another, with GNOME Shell, for instance, having a *lot* written in uncompiled scripting languages.
There were times when the next version of something in the Linux world was faster than the previous one, with more features. But now the competition is all about beautiful and *new* applications, instead of just good applications.
Bigger OSes also bring more capabilities. Menuet supports only x86; to support ARM, the size would double. How many architectures does Linux support? And graphics cards, and other hardware.
Of course, maybe this can all be optimized.
I used to wonder: what if there were a standard Linux desktop computer? You know, like an Amiga or ZX Spectrum, so the software already knew what it would be running on. No guessing the hardware, no having to determine its capabilities, no writing workarounds in case a few features were missing.
A system could be much more optimized, faster, cleaner. And easier to sell to the common people.
In the case of Linux, all that hardware variety is also one of its strengths, and as it's also used on servers, it's not something you can or should take away.
But who knows: a Linux for everything, and a Linux for the Linux One computer?
Come to think of it: I would LOVE a sleek looking computer with a Tux on it and a keyboard & mouse in the same style with penguins on them.
Without the Microsoft Windows tax it should be cheaper than other PCs.
Like this?
https://www.system76.com/laptops/
The laptops look great, they just need to add a Tux.
However the desktops look like they bought some generic cases.
A number of years ago I bought some Tux case badges and it did give me a nice feeling having them on my PCs.
IMHO it's complete and utter shit to write performance-sensitive code (like some parts of an OS) in interpreted languages.
Yep,
My first personal computer was a 5 MHz PC-XT on DOS 2.11.
At first, it was OK... only rarely could I type faster than it could echo the characters onto the screen (text mode).
Now, I have a dual-core, multi-threaded 2.5 GHz machine running Windows 7 64-bit.
At first, it was OK... then came the security updates, and now I can type much faster than it can echo onto the screen (graphical mode).
So, from my user perspective, there have not been any real gains.
Oh, I forgot to mention having now 8 GB of memory compared to the lowly 0.512 MB of my first system.
To combat bloat, maybe developers should be coding on and using the average system users have at home or are forced to use at work...
Small is beautiful. The main odd feature of MenuetOS is that it is distributed for 64-bit processors (and closed source) while still targeting a floppy drive as the real/virtual boot device, hardware that was already fading away by the time 64-bit x86 CPUs came to market.
Migration to USB boot is hopefully on the path to release 1.0.
Simply put: the OSes have evolved, not in speed but in unneeded features and *design*.
It is better to use Windows XP than Windows 7, even without 64-bit support, because it is simply faster and gets the job done.
Linux distributions have been doing the same. Maybe someone can explain this better than me, but it seems that the main cause is the abusive use of scripting languages like bash, Python, and JavaScript.
Brand new floppy drives are still available even today, although I’ve only seen external drives recently.
And I have to add that modern compilers optimize better than a programmer can optimize the ASM by hand.
There’s really no reason why you should use ASM instead of C or C++ on a PC.
A long time ago, compilers weren't as good at optimizing code as they are today, and you could do interesting stuff under DOS in ASM. That's why I learned and wrote a bit of x86 assembly many years ago. But I can be much more productive in C and C++.
As I understand it, this is a hobby OS, so the guys who are writing it do it for fun. The choice of language doesn't matter if you do it for fun.
twitterfire,
“And I have to add that modern compilers optimize better than a programmer can optimize the ASM by hand.”
“There’s really no reason why you should use ASM instead of C or C++ on a PC.”
Sounds like something a C programmer would say. Joking aside, it depends. The compilers might do better in a great many cases with average programmers, however it's still possible to find edge cases where they do worse.
I absolutely prefer using C over ASM for all the usual reasons; I wince when I think about maintaining and porting large amounts of assembly code. However, code that benefits from direct access to special CPU features like arithmetic flags will be at a disadvantage in C, because those features are completely unrecognized in the C language.
It's complicated to fix this aspect of C in any kind of portable way, because some architectures don't share the same flag semantics, or don't implement flags at all. Rather than define a C standard for flags (and force all architectures to emulate them), the C language simply ignores that flags exist, which forces programmers to use more complex flagless algorithms instead. In some cases the compiler can reverse-engineer these algorithms and optimize them to use flags; other times it cannot.
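To make that concrete, here's a minimal sketch (the function name is mine, not from any standard): detecting unsigned overflow is a single carry-flag check in x86 assembly, but portable C has to express it as an extra comparison.

#include <stdbool.h>
#include <stdint.h>

/* Portable C cannot ask for the carry flag, so the wrap-around
   has to be detected with an extra comparison. */
bool add_overflows(uint32_t a, uint32_t b, uint32_t *sum)
{
    *sum = a + b;        /* wraps modulo 2^32 on overflow */
    return *sum < a;     /* true exactly when the addition wrapped */
}

GCC and Clang happen to recognize this particular idiom and will usually emit an add plus a carry check anyway; less common flag tricks don't round-trip so neatly.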
Unfortunately it's not always just a matter of making the compiler more intelligent. Even if it could recognize that flag-based algorithm A (which cannot be expressed directly in C) is similar to algorithm B (which can be expressed in C), it still doesn't necessarily follow that algorithm B can be converted back to algorithm A. For example, B might produce different results for certain numerical boundary conditions that aren't important to the programmer, but the compiler cannot assume that the algorithm's boundary conditions were unintentional. It's forced to generate machine code with the semantics of algorithm B exactly as expressed.
So, while I'd agree with you in general that developing at a higher level is preferable, there are still times when C code can be more difficult to optimize, because the optimal code for the platform cannot be expressed in it. Although I think it would be hard for inexperienced assembly developers to recognize these cases.
For a general OS: just forget scripting and middleman frameworks and use something like Vala, which compiles to C code.
The main issue in general OSes, mainly Windows and Linux (distributions), is that they abuse scripting (or Java, or C#, or JavaScript) to get features X and Y.
So instead of moving to low-level ASM, maybe we could just port it to the less-low-level C to get things improved.
Writing ASM code is only worth it in a base OS or an ultra-complex optimized application (one in a million).
gass,
“The main issue in general OSes, mainly Windows and Linux (distributions), is that they abuse scripting (or Java, or C#, or JavaScript) to get features X and Y.”
I wouldn't actually consider Java or C# to be scripting languages. They compile down to bytecode and then to native code before running, whereas a scripting language gets parsed/emulated the whole way through. Java and C# can have high computational performance, but their use of garbage-collected objects is often far less efficient memory-wise. It's disappointing when a .NET/Java program needs the better part of a gig of RAM to open up a 500K data file.
“Writing ASM code is only worth it in a base OS or an ultra-complex optimized application (one in a million).”
Of course MenuetOS has done that; however, I don't think anyone else is advocating moving higher-level code to ASM. For most of us the costs would outweigh the benefits.
“So instead of moving to low-level ASM, maybe we could just port it to the less-low-level C to get things improved.”
Most of the core OS code for Windows and Linux is already written in C. I guess the new trend is to make more use of scripting in desktop environments, and to be honest I don't know the extent to which these affect performance.
“It is better to use Windows XP than Windows 7, even without 64-bit support, because it is simply faster and gets the job done.”
I agree with you there; many of us would still be happily using it if it were available and supported, because most of the changes were Microsoft (and others) pushing “features” rather than customers demanding them. Nevertheless, it's difficult to keep selling an OS with static goals, even if those goals were perfect for what 95% of the market wants. Change isn't always for us consumers; sometimes it's to advance corporate agendas.
“Linux distributions have been doing the same. Maybe someone can explain this better than me, but it seems that the main cause is the abusive use of scripting languages like bash, Python, and JavaScript.”
I think it's very unlikely that the bash interpreter itself is responsible for slow performance, as opposed to what the scripts are instructing the system to do; the exact same process written in C would probably not perform noticeably better. The scripts give Linux a great deal of flexibility.
Consider a simple bash test that does nothing but launch another bash process:
#!/bin/bash
# test.sh: launch test2.sh 1000 times
for i in {1..1000}
do
  ./test2.sh
done

#!/bin/bash
# test2.sh: trivial child script
echo x
time ./test.sh > /dev/null
real 0m1.889s
user 0m0.024s
sys 0m0.124s
Here the userspace time for 1001 invocations of bash was a mere 0.024s. I don't know how many times bash gets invoked while booting Linux, but I think it's fewer than 1000. A real script can obviously be more complex; however, note that the overwhelming majority of the time was lost to system overhead in spawning new processes rather than to running inside the bash script. I'm just trying to put it into perspective. I encourage you to create more complex tests if you remain skeptical.
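For perspective, here's a rough C equivalent of the outer loop (a sketch, assuming the same test2.sh from above is present and executable); it pays essentially the same cost, because the time goes into spawning processes, not into interpreting bash.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Launch ./test2.sh 1000 times, sequentially, like the bash loop. */
    for (int i = 0; i < 1000; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("./test2.sh", "test2.sh", (char *)NULL);
            _exit(127);               /* only reached if exec failed */
        }
        waitpid(pid, NULL, 0);        /* wait, as the sequential bash loop does */
    }
    return 0;
}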
Well, talking about full OSes with a GUI to compare (Menuet, Windows, Linux + GNOME, KDE, etc.):
Where does the bloat come from, then, if not from memory-hungry applications?
The main issue with Java and C# (while not scripted, they are compiled for a middleman framework) is that they seem inefficient (maybe because of garbage collection). They use lots of CPU and memory to do simple things, and they are slow (to the user at least).
As for the scripted ones (OK, bash was a bad example, but Python, JavaScript and others...): Python, for example, is compiled at runtime into some kind of simplified bytecode.
If it is not the languages that make OSes this heavy, is it their design? Are they badly written? Too many abstraction layers?
Just a reminder that even though OSes have many more features today than before, and talking Windows XP vs. Windows 7, or Windows 7 vs. Windows 8, what do they do differently that justifies the performance difference?
The wording could be a little misleading, so I just want to point out that you don’t need a floppy disk to try MenuetOS.
If you’d like to try this (amazing) operating system in a live environment, it’s incredibly easy to create a bootable CD by using the floppy disk image and the instructions on the website.
There’s probably a way to install it without using a CD too.
One word: Rufus!
Kochise
Try as I might, I couldn't find a simple option for either running a live CD or installing Menuet in VirtualBox.
Pardon me, where does it shove it?
I think this is an impressive feat. Though I am reminded that some distributions of Linux fit (or used to fit) on a single floppy disk. Tom's Root Boot, for example, provided a powerful OS on a single floppy. Perhaps it wasn't as shiny, but TRB saved me a couple of times doing computer repair or system recovery.