Linked by Thom Holwerda on Thu 21st Nov 2013 18:48 UTC, submitted by Rohan Pearce

MenuetOS sits in an interesting nexus between astonishing technical achievement and computerised work of art. The super-speedy, pre-emptive multitasking operating system is still, despite adding more driver support, more included applications, an improved GUI and digital TV support over the years, capable of fitting on a floppy disk (assuming you can find one).

MenuetOS is a technical marvel. Not only is it written entirely in assembly, it also squeezes a fully capable multitasking operating system onto a single floppy disk.

RE: Comment by MOS6510
by saso on Fri 22nd Nov 2013 00:09 UTC in reply to "Comment by MOS6510"

"This fancy OS proves that all these layers between the computer and the user, written in all these high(er)-level programming languages, waste an enormous amount of CPU and memory resources."

It's oftentimes not the languages per se, but more the way programmers handle computing resources. Sure, Java and C# apps are at times bloated (because the programmers who wrote them were wasting resources), but take any near-machine language like C or (parts of) C++ and you can easily get close to or match the speed of any hand-written assembly implementation, at a fraction of the time investment and with far fewer bugs. Keep in mind that 10% of your code runs 90% of the time, so the key is to focus your resources on what really matters, rather than taking either extreme approach (all high-level, or all assembly).
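
A small illustration of C matching hand-written assembly (just a sketch, assuming GCC or Clang on x86; swap32 is a name I made up for the example):

#include <stdint.h>

/* 32-bit byte swap in plain, portable C. At -O2, GCC and Clang
   typically recognize this shift-and-mask pattern and emit a single
   BSWAP instruction on x86 -- the same code a hand-written assembly
   version would use. */
uint32_t swap32(uint32_t x)
{
    return (x >> 24) |
           ((x >> 8) & 0x0000ff00u) |
           ((x << 8) & 0x00ff0000u) |
           (x << 24);
}

You keep the portability and readability of C, and the generated code is what an assembly programmer would have written by hand anyway.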

In regards to MenuetOS, the main reasons why modern OSes tend to be so large are:
a) they contain tons of multimedia (high-res icons, full CD-quality tunes, etc.)
b) they simply do a lot (there's almost zero extra cruft in a kernel; almost all of it is executable code that actually does something)
It's possible to compromise on a), but if you want to actually get things done, you don't want to lose b). And no, compiler-produced assembly is not inherently slower and/or larger. What most compilers do is give developers options to choose from (would you like this loop to be small or fast?).
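
To make that last point concrete (again a sketch; sum is a made-up example, and the exact output depends on your compiler and version):

#include <stddef.h>

/* The same source, two command lines (assuming GCC or Clang):
     cc -Os sum.c   -> compact scalar loop, smallest code
     cc -O3 sum.c   -> typically unrolled and vectorized, fastest code
   The "small or fast?" choice is a compiler flag, not a rewrite. */
long sum(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}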


RE[2]: Comment by MOS6510
by MOS6510 on Fri 22nd Nov 2013 09:12 in reply to "RE: Comment by MOS6510"

Thanks for the insightful comment!

I also think there is another factor, and that is hardware limitations. The Commodore 64 and Amiga, for example, had limited hardware that was, certainly in the case of the C64, near impossible to upgrade.

So code was written to run well on those machines. Programmers came up with tricks to improve performance or use less memory. With PCs came a period where you were simply expected to add memory, a faster CPU, a bigger hard disk, or a whole new PC. Now I think we are in an age where the hardware is often more than enough to run most applications without breaking a sweat, so there's no incentive for programmers to make code efficient or smaller. They code something and it works fine, so why spend time optimizing it?

Honestly, it's hard to blame them, and I would do the same thing.

But then you see (and hear) a demo run on a 1 MHz Commodore 64 with 64 kB of RAM and you start to wonder if your modern computer shouldn't be able to run much, much faster.


RE[3]: Comment by MOS6510
by gass on Fri 22nd Nov 2013 10:16 in reply to "RE[2]: Comment by MOS6510"

That is true.
Machines today are more than capable of processing anything.
But ... instead of getting faster OSes, we get OSes that show no evolution apart from graphics.

Windows is one example: XP -> Vista, Windows 7 -> 8.
Linux is another; GNOME Shell, for instance, has a *lot* written in uncompiled scripting languages.
There were times when the next version of something in the Linux world was faster than the previous one, with more features. But now the competition is all about beautiful and *new* applications, instead of just good applications.

Bigger OSes also bring more capabilities. Menuet supports only x86; to support ARM, the size would double. How many archs does Linux support? And graphics cards, and other hardware.
Of course, maybe this can all be optimized.


RE[2]: Comment by MOS6510
by twitterfire on Fri 22nd Nov 2013 19:33 in reply to "RE: Comment by MOS6510"

And I have to add that modern compilers optimize better than a programmer can optimize the ASM by hand.

There's really no reason why you should use ASM instead of C or C++ on a PC.

A long time ago, compilers weren't as good at optimizing code as they are today, and you could do interesting stuff under DOS in ASM. That's why I learned and wrote a bit of x86 assembly many years ago. But I can be much more productive in C or C++.

As I understand it, this is a hobby OS, so the guys who are writing it do it for fun. The choice of language doesn't matter if you do it for fun.


RE[3]: Comment by MOS6510
by Alfman on Fri 22nd Nov 2013 23:29 in reply to "RE[2]: Comment by MOS6510"

twitterfire,

"And I have to ad that modern compilers optimize better than a programmer can optimize the ASM by hand."

"There's really no reason why you should use ASM instead of C or C++ on a PC."

Sounds like something a C programmer would say ;) Joking aside, it depends. The compilers might do better in a great many cases with average programmers; however, it's still possible to find edge cases where they do worse.

I absolutely prefer using C over asm for all the usual reasons; I wince when I think about maintaining and porting large amounts of assembly code. However, code that benefits from direct access to special CPU features like arithmetic flags is at a disadvantage in C, because those flags are completely unrecognized by the C language.

It's complicated to fix this aspect of C in any kind of portable way, because some architectures do not share the same flag semantics, or do not implement flags at all. Rather than define a C standard for flags (and force all architectures to emulate them), the C language just ignores that flags exist, which forces programmers to use more complex flagless algorithms instead. In some cases the compiler can reverse engineer these algorithms and optimize them to use flags; other times it cannot.
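
Here's a concrete sketch of what I mean (add_limb is just an illustrative name): one word of a multi-precision addition. In x86 assembly, the ADC instruction consumes the carry flag directly; standard C has no way to name that flag, so portable code must rederive it:

#include <stdint.h>

/* One word of a multi-precision addition. Hand-written x86 would use
   ADC, which reads the carry flag directly; in portable C the carry
   has to be recomputed from the wrapped result. Compilers can sometimes
   pattern-match the comparisons below back into a flag check, but not
   always. */
uint64_t add_limb(uint64_t a, uint64_t b, unsigned carry_in,
                  unsigned *carry_out)
{
    uint64_t sum = a + b + carry_in;
    /* A carry occurred iff the addition wrapped around. */
    *carry_out = (sum < a) || (sum == a && carry_in);
    return sum;
}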

Unfortunately, it's not always just a matter of making the compiler more intelligent. Even if it could recognize that flag-based algorithm A (which cannot be expressed directly in C) is similar to algorithm B (which can be expressed in C), that still doesn't necessarily imply algorithm B can be converted back to algorithm A. For example, B might produce different results for certain numerical boundary conditions that aren't important to the programmer, but the compiler cannot assume the algorithm's boundary conditions were unintentional. It's forced to generate machine code with semantics equivalent to algorithm B as expressed.


So, while I'd agree with you in general that developing at a higher level is preferable, there are still times when C code can be more difficult to optimize, because the optimal code for the platform cannot be expressed in it. Although I think it would be hard for inexperienced assembly developers to recognize these cases.
