Linked by Hadrien Grasland on Fri 28th Jan 2011 20:37 UTC
It's recently been a year since I started working on my pet OS project, and I often end up looking back at what I have done, wondering what made things difficult in the beginning. One of my conclusions is that while there's a lot of documentation on OS development from a technical point of view, more should be written about the project management aspect of it: namely, how to go from a blurry "I want to code an OS" vision to either a precise vision of what you want to achieve, or the decision to stop following this path before you hit a wall. This article series aims to put those interested in hobby OS development on the right track, while keeping this aspect of things in mind.
Comment by galvanash
by galvanash on Fri 28th Jan 2011 21:43 UTC
galvanash
Member since:
2006-01-25

Very nice article.

Reply Score: 2

Linux
by vivainio on Fri 28th Jan 2011 21:54 UTC
vivainio
Member since:
2008-12-26

Protip - if you have a knack for kernels, dive into Linux. You will immediately become a sought-after developer in the global job market, and can probably secure a premium salary wherever you go.

If you mention your own "revolutionary" kernel project in your CV, you are likely to be deemed slightly delusional and potentially dangerous ;-).

Reply Score: 3

RE: Linux
by Alfman on Fri 28th Jan 2011 22:37 UTC in reply to "Linux"
Alfman Member since:
2011-01-28

Absolutely.
Look at the plan9 project, where they fixed many of the technical issues with Linux interfaces.

Do they have a good reason to write a better OS?
Yes.

Does anybody care about plan9?
No.

Given that nobody cares, was the effort justified?
Maybe. It's nice to fix a broken interface. In the end though, those of us who have to deal with the outside world will still have to deal with the broken interfaces despite the fact that something better exists.

Building an OS is a noble personal goal, but it should not be undertaken under the impression that it will change the world. The market is too saturated to care.

BTW, building a malloc implementation is really not that difficult. I've built one which significantly outperforms GNU's malloc in multithreaded processes by using lock-free primitives.
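The core trick is just a CAS loop on a shared free list. A rough sketch with GCC's atomic builtins (illustrative only, not my actual allocator - a real malloc also needs size classes, per-thread caches, ABA protection and a slab to refill from):

/* Lock-free free list for fixed-size blocks. */
#include <stddef.h>

typedef struct block { struct block *next; } block_t;

static block_t *free_list = NULL;

void lf_free(void *p) {
    block_t *b = p, *head;
    do {
        head = free_list;
        b->next = head;          /* link in front of the current head */
    } while (!__sync_bool_compare_and_swap(&free_list, head, b));
}

void *lf_alloc(void) {
    block_t *head;
    do {
        head = free_list;
        if (head == NULL)
            return NULL;         /* empty: caller refills from a slab */
        /* note: this pop is ABA-prone; real code tags the pointer
           or uses hazard pointers */
    } while (!__sync_bool_compare_and_swap(&free_list, head, head->next));
    return head;
}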

Reply Score: 1

RE[2]: Linux
by tylerdurden on Sat 29th Jan 2011 00:56 UTC in reply to "RE: Linux"
tylerdurden Member since:
2009-03-17

Absolutely.
Look at the plan9 project, where they fixed many of the technical issues with Linux interfaces.


Huh? Are you sure you were looking at Plan 9?

Reply Score: 2

RE[3]: Linux
by renox on Sat 29th Jan 2011 13:12 UTC in reply to "RE[2]: Linux"
renox Member since:
2005-07-06

Well, first, the GP made the classic Linux/Unix mistake: Plan9 started in the 80s whereas Linux started in 1991.

But you made the mistake of thinking that interfaces are only GUIs, which isn't the case, so he wasn't incorrect here.

Reply Score: 1

RE[4]: Linux
by cheemosabe on Sat 29th Jan 2011 13:25 UTC in reply to "RE[3]: Linux"
cheemosabe Member since:
2009-11-29

Plan9 started in the 80s whereas Linux started in 1991.


Your point being? Linux started as a Unix clone and never really got past that. It didn't bring anything new other than the fact that it was open source. Indeed, it's a very advanced and well-written clone, and a very useful one. Plan9 provided the next step before Linux even started, but nobody embraced it.

Linux advanced by adding (new technologies, good stuff, well written, but still adding). Plan9 was a fundamental change in design.

Edited 2011-01-29 13:28 UTC

Reply Score: 3

RE[5]: Linux
by tylerdurden on Sat 29th Jan 2011 17:55 UTC in reply to "RE[4]: Linux"
tylerdurden Member since:
2009-03-17

The point is that Plan 9 has nothing to do with Linux. So the implication that Plan 9's goal was to fix the issues with an OS which did not exist when the project got started is kind of silly.

Reply Score: 2

RE[6]: Linux
by Alfman on Sat 29th Jan 2011 19:55 UTC in reply to "RE[5]: Linux"
Alfman Member since:
2011-01-28

"The point is that Plan 9 has nothing to do with Linux. So the implication that Plan 9's goal was to fix the issues with an OS which was not existing when the project got started is kind of silly."


Renox called me on my error when I used "Linux" instead of the more generic term "*nix".

However, my point about other (and even better) alternatives existing is still correct.

You need to learn more about Plan9 to see how it is better. Obviously the Plan9 devs had the benefit of hindsight and could develop better interfaces. So did Linus, for that matter, but he chose to do a Unix clone rather than try his hand at improving the model.

If it wasn't Linux today, who knows what would have taken its place? All we know is that there were viable alternatives.

Microsoft's very existence is proof that connections and timing can outweigh technical merit. Sad, but ultimately true.

Reply Score: 2

RE[4]: Linux
by tylerdurden on Sat 29th Jan 2011 17:51 UTC in reply to "RE[3]: Linux"
tylerdurden Member since:
2009-03-17


But you made the mistake of thinking that interfaces are only GUI which isn't the case, so he wasn't incorrect here..


And you made the mistake of thinking that somehow my post implied that I thought interfaces are only GUIs. I have absolutely no clue how you were able to jump to that conclusion. Does that make you doubly mistaken?

Reply Score: 2

RE[5]: Linux
by renox on Sun 30th Jan 2011 10:51 UTC in reply to "RE[4]: Linux"
renox Member since:
2005-07-06

I have absolutely no clue how you were able to jump to that conclusion.


Because you said 'looking at', and it is well-known that Plan9 has a poor GUI, even though its design is considered by many to be superior, so I thought there was some confusion.
So obviously you don't agree that Plan9's design is superior to *nix: could you explain what you don't like about Plan9?

Reply Score: 2

Best Article on OSNews in *ages*
by FreeGamer on Fri 28th Jan 2011 22:25 UTC
FreeGamer
Member since:
2007-04-13

Brilliant. More please! ;)

Reply Score: 3

I tried to develop my own OS
by biffuz on Fri 28th Jan 2011 22:45 UTC
biffuz
Member since:
2006-03-27

Ah, this reminds me of when I tried to develop my "operating system" at 14, with Turbo Pascal on my 286 and later a Pentium 60 MHz. Actually it was a big DOS program with all its "applications" hardcoded inside; I planned to make it load external programs later, because I had no idea how to do it!

My OS had a GUI - in VGA 640x480x16 only - with overlapping but unmovable windows and several 3D controls roughly copied from UNIX screenshots I had often seen in computer magazines, and it supported a mouse - via the DOS mouse driver, of course.
It was not multitasking; that was a planned feature. Again, I had no idea how to do it, but a design decision was to have each app take over the entire screen, so this was not a real problem; you could switch to another app with a button at the top of the screen (a sort of taskbar). When you switched apps, the running app saved your work to a temporary file, so when you switched back to it, it reloaded the file and let you resume where you left off.

The apps included a file manager, a text file editor, a hex file editor, a bitmap viewer/editor which supported 1- and 4-bit color depths (standard palettes only), and I was working on an MS Works-like database that never worked and a spreadsheet which got far enough to let you enter data in cells (but not formulas) and plot different kinds of 2D charts! The text editor and the hex editor were even able to print on a character printer connected to a parallel port!
The text editor supported only a monospaced font - of course I was planning to support the non-monospaced fonts which were included in Turbo Pascal - but it let you select colors and underline (it couldn't print either of them, of course).
Some utilities like a calculator, a post-it app, a unit converter, a calendar and a minesweeper were always available from a Utilities menu.

The best part is that I actually made some money from it, because the father of a friend of mine bought two copies and used them in his company's office! He used it to write documents and letters, because he said Windows and Office were too bloated and he wanted something simpler. Years later, I helped him move his stuff to BeOS when I showed it to him.

I was so sad when I realized that I had lost the source code :-(

Reply Score: 5

RE: I tried to develop my own OS
by Alfman on Fri 28th Jan 2011 22:54 UTC in reply to "I tried to develop my own OS"
Alfman Member since:
2011-01-28

Just think, had we been born just a decade earlier, it would be our operating systems everyone would be running today.

Reply Score: 1

RE: I tried to develop my own OS
by tylerdurden on Sat 29th Jan 2011 03:53 UTC in reply to "I tried to develop my own OS"
tylerdurden Member since:
2009-03-17

You keep using that word OS as in "Operating System" which does not mean what you think it does. ;-)

I think you meant to say PS, as in "Productivity Suite."

Reply Score: 2

RE[2]: I tried to develop my own OS
by biffuz on Sat 29th Jan 2011 13:11 UTC in reply to "RE: I tried to develop my own OS"
biffuz Member since:
2006-03-27

My goal was to make an OS :-D

Now I know how an OS works, but sixteen years ago I started with the applications because that was all I was able to do. Hey, I was 14, all my literature was computer magazines and old programming books from the local public library, and in 1994 the Internet was mostly a curious word!

Reply Score: 3

tylerdurden Member since:
2009-03-17

Well, my goal was to become king of the world. And yet people still don't refer to me as your majesty :-(

Reply Score: 2

biffuz Member since:
2006-03-27

You probably didn't follow this list: http://www.eviloverlord.com/lists/overlord.html

Reply Score: 2

RE: I tried to develop my own OS
by Soulbender on Sat 29th Jan 2011 04:18 UTC in reply to "I tried to develop my own OS"
Soulbender Member since:
2005-08-18

Turbo Pascal FTW!

Reply Score: 2

Obligatory Joke
by zizban on Fri 28th Jan 2011 23:24 UTC
zizban
Member since:
2005-07-06

But does it run Linux?

Reply Score: 2

Good Article, One note:
by Bill Shooter of Bul on Fri 28th Jan 2011 23:28 UTC
Bill Shooter of Bul
Member since:
2006-07-14

Yes, the Linux kernel is pretty darn complex. That is granted. There are books written on the subject that do a good job of explaining the basics of the kernel. It's been a while since I looked at any of them; I had one for 2.4.

Although I'm sort of kicking myself for taking its advice. I bought it with the idea of getting into hacking the scheduler. The section on the scheduler had some big bold text that said essentially "THIS SECTION IS WELL OPTIMISED, DO NOT TRY HACKING HERE, YOU WILL NOT COME UP WITH ANYTHING BETTER, EVER". And yes, scheduler algorithms can be complex, but it's obviously been improved since then. I wish I had ignored that part.

Edited 2011-01-28 23:30 UTC

Reply Score: 2

RE: Good Article, One note:
by Lennie on Fri 28th Jan 2011 23:45 UTC in reply to "Good Article, One note:"
Lennie Member since:
2007-09-22

It sounds more like it meant to say: _you_ will not come up with anything better. Because it has already had so many people look at it, and from just reading the code it will not be clear why things are the way they are.

Although it can definitely use improvement on the interactive side. People have been doing a lot of work on that lately though.

I just wish a distribution would come out 'soon' with 2.6.39 when it is ready. I've seen so many good changelog entries for 2.6.37 and 2.6.38, and promises for 2.6.39.

Because I think Linux has a lot of potential as a desktop and I keep hoping it will deliver what people want. It seems to be improving every time, but progress feels slow.

Reply Score: 2

Bill Shooter of Bul Member since:
2006-07-14

I don't really understand your comment. The book on the 2.4 version of the Linux kernel had some stern language warning the reader not to try to improve the scheduler. Obviously, it's been improved. I wish I had ignored it and spent more time trying to understand schedulers.

Reply Score: 2

RE[3]: Good Article, One note:
by bertzzie on Sat 29th Jan 2011 03:00 UTC in reply to "RE[2]: Good Article, One note:"
bertzzie Member since:
2011-01-26

I think it's written like that because too many people kept suggesting a "new and improved" scheduler. Because of that, the kernel devs (the book I read is by one of the kernel devs) got tired and just wrote something like that ;)

Edited 2011-01-29 03:01 UTC

Reply Score: 1

RE[3]: Good Article, One note:
by tylerdurden on Sat 29th Jan 2011 04:00 UTC in reply to "RE[2]: Good Article, One note:"
tylerdurden Member since:
2009-03-17

Yeah, the scheduler has been improved. However, it has not been improved by people who were having a first go at understanding the internals of an Operating System.

You have to learn how to walk before you can think about running a marathon.

Reply Score: 2

Arguments are overstated
by james_parker on Sat 29th Jan 2011 01:03 UTC
james_parker
Member since:
2005-06-29

While it's highly unlikely that any given individual would produce a "revolution in computing", it certainly isn't impossible. Linux, of course, was initially a single person's effort, and Unix was initially a two-person effort. FORTH, of course, had a large impact on computing (and was essentially an OS as well as a programming language), and that was a single person's effort, as were CP/M and QDOS (AKA MS-DOS); admittedly, CP/M cribbed a fair amount from DEC's RT-11 OS.

And while emulating an existing OS API might not be the path to success, providing POSIX compatibility will certainly expedite porting applications (shells, compilers, etc.).

As for capital, should someone come up with something beneficially revolutionary, venture capitalists might be persuaded to make a monetary contribution in exchange for part ownership.

Of course, perhaps only one such effort in 1000 is likely to have that sort of success.

Also, as for a gateway to an exciting job, one might be able to parlay such an effort into a PhD topic, and a PhD would certainly increase the job opportunities.

Reply Score: 4

RE: Arguments are overstated
by Alfman on Sat 29th Jan 2011 02:25 UTC in reply to "Arguments are overstated"
Alfman Member since:
2011-01-28

"While it's highly unlikely that any given individual would produce a 'revolution in computing', it certainly isn't impossible. Linux, of course, was initially a single person's effort, and Unix was initially a two-person effort..."

That's just the point: small/individual efforts succeeded back then because the market was empty. Many of us are capable of doing what Linus did with Linux, but it doesn't matter any more. Efforts today are in vain; being "better" is not really as significant as being first or having the stronger marketing force.

I'm not trying to downplay Linus' achievement in the least, but it is likely his pet project would be totally irrelevant if he started in today's market.

Reply Score: 4

RE[2]: Arguments are overstated
by tylerdurden on Sat 29th Jan 2011 04:05 UTC in reply to "RE: Arguments are overstated"
tylerdurden Member since:
2009-03-17

Linux was not the first by a long shot. There were plenty of open-source Unix-like OSes by the time he started writing a single line of code: Minix and the BSDs, for example.

There are plenty of opportunities for new stuff to come out of someone's pet project. In fact, most interesting stuff usually comes from "pet projects", because once a product/project is established it tends to gather such inertia that it becomes pigeonholed or develops a certain level of tunnel vision, thus missing some of the interesting stuff in the periphery that those "pet projects" have more freedom to explore.

Reply Score: 2

RE[3]: Arguments are overstated
by openwookie on Sat 29th Jan 2011 04:28 UTC in reply to "RE[2]: Arguments are overstated"
openwookie Member since:
2006-04-25

When Linus started writing code for Linux, Minix cost $69 and was not yet freely distributable (not until 2000!), and BSD was tied up in a lawsuit with AT&T. Hurd was intended as the kernel for the GNU system, but was not yet (and still isn't) complete.

Linux's success was totally down to being in the right place at the right time.

Edited 2011-01-29 04:31 UTC

Reply Score: 4

RE[3]: Arguments are overstated
by Alfman on Sat 29th Jan 2011 04:33 UTC in reply to "RE[2]: Arguments are overstated"
Alfman Member since:
2011-01-28

"Linux was not the first by a long shot. There were plenty of open source unix-like OS by the time he started writing a single line of code: Minix, and BSDs for example."

Exactly! If Linux had started a few years later, FreeBSD (or another variant) would have "won" and it would be grabbing all the attention instead of Linux.

The same can be said for Microsoft/DOS. Timing is everything.

Reply Score: 1

RE: Arguments are overstated
by Neolander on Sat 29th Jan 2011 07:49 UTC in reply to "Arguments are overstated"
Neolander Member since:
2010-03-08

While it's highly unlikely that any given individual would produce a "revolution in computing", it certainly isn't impossible. Linux, of course, was initially a single person's effort, and Unix was initially a two-person effort. FORTH, of course had a large impact on computing (and was essentially an OS as well as programming language) and that was a single person's effort, as were CP/M and QDOS (AKA MS-DOS); admittedly, CP/M cribbed a fair amount from DEC's RT-11 OS.

I agree that it is possible (albeit improbable), but I've written this for a few reasons:
1/ Due to it being so unlikely, I think it's nice as a dream ("I would like to bring the next revolution") but not as a goal ("I will bring the next revolution"). I'd love my OS to have some impact in many, many years, but if it has none (which is likely) I'm ready to admit that it's a success anyway, as long as I am happy with it and its hypothetical future users are too.
2/ Trying to write something revolutionary is one of the paths to feature bloat. Once you want to impress your user base, it's easy to fall for shiny features, start to add lots and lots more, and end up eating 13GB of HDD space for something which is no more useful than a calculator, a notepad, a primitive word processor, a web browser and a file explorer.
3/ Matching an existing operating system, given where they are today, is a matter of years. To stay motivated for that long, I think it's best if the thing feels rewarding as it is, not as it's supposed to be in a very long time.

And while emulating an existing OS API might not be the path to success, providing POSIX compatibility will certainly expedite porting applications (shells, compilers, etc.).

Yes, but is it a good idea to start out planning to make your OS POSIX-compatible? Emulating POSIX is a big task, and since hobby OS developers are generally one or two per project, they must choose their goals very carefully. This one costs a lot of energy, while in the end you get nothing but a "Linux clone" (in the view of its potential users; I know that Linux != POSIX), which has yet to prove that it's better than the original.

Also, as for a gateway to an exciting job, one might be able to parlay such an effort into a PhD topic, and a PhD would certainly increase the job opportunities.

I mentioned research teams as a way to monetize OS development; a PhD is one of the ways to get in there ;)

Edited 2011-01-29 07:56 UTC

Reply Score: 1

RE: Arguments are overstated
by vodoomoth on Mon 31st Jan 2011 14:18 UTC in reply to "Arguments are overstated"
vodoomoth Member since:
2010-03-30

Also, as for a gateway to an exciting job, one might be able to parlay such an effort into a PhD topic, and a PhD would certainly increase the job opportunities.

Not in France, not by any account I've heard of or read about, and not in my experience. Having a PhD has done nothing as far as landing me an interesting or rewarding (note that I didn't write "exciting") job. And God knows I have looked in every direction. Out of spite (and, to be honest, because the opportunity appeared on my radar), after five months in my current position I am turning my head and heart towards working for myself. Unfortunately, it means the project I was working on will go commercial, at least in the beginning.

Reply Score: 2

Very enjoyable
by fury on Sat 29th Jan 2011 01:07 UTC
fury
Member since:
2005-09-23

This was an excellent article. Very well informed and hit the issues right on the head. I was tickled to see C# mentioned as a system language since I was one of the SharpOS developers :-D

I look forward to the rest of the series!

Reply Score: 2

WereCatf
Member since:
2006-02-15

I enjoy articles like these; they differ so much from the regular stuff here on OSNews and are more like the stuff people actually expect to see here. It would be great if we got more some day ;)

As for the topic at hand (coding my own OS): I have always wanted to start coding an OS of my own, but I am really bad at actually starting something. I have no delusions of it ever reaching more than one user or something like that; the only reason I'd want to code an OS of my own is simply to learn. Nothing beats learning kernel internals, memory handling and all that like actually coding it all yourself from scratch!

Reply Score: 3

Neolander Member since:
2010-03-08

As for the topic at hand (coding my own OS): I have always wanted to start coding an OS of my own, but I am really bad at actually starting something. I have no delusions of it ever reaching more than one user or something like that; the only reason I'd want to code an OS of my own is simply to learn. Nothing beats learning kernel internals, memory handling and all that like actually coding it all yourself from scratch!

Actually, it was the same with me for a long time. My OS is my first successful long-term personal project so far, and I have two failed attempts behind me (the main reason for their failure will be explained at the beginning of article #2 ;) ).

Edited 2011-01-29 08:12 UTC

Reply Score: 1

Comment by philluminati
by philluminati on Sat 29th Jan 2011 12:05 UTC
philluminati
Member since:
2011-01-29

This is an absolutely brilliant article, so much so I had to create an account and leave a comment.

I've probably spent a year on malloc and still haven't finished, because of all the downtime and the fact that you have to get kprintf working with variable arguments and so forth. I copied a tutorial to get it working, and now I'm in the process of spending months learning what interrupts are and playing with assembly retrospectively. I understand what basic assembly, stacks and registers are, but you have to learn as you go along.
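For anyone at the same stage, the variadic part of kprintf is mostly just stdarg plus a putchar. A rough sketch (kputchar is assumed to exist elsewhere in your kernel, and only %c, %s, %d and %x are handled):

#include <stdarg.h>

extern void kputchar(char c);   /* assumed: writes one char to the screen */

static void kprint_uint(unsigned int n, unsigned int base) {
    static const char digits[] = "0123456789abcdef";
    if (n >= base)
        kprint_uint(n / base, base);   /* print high digits first */
    kputchar(digits[n % base]);
}

void kprintf(const char *fmt, ...) {
    va_list ap;
    char c;
    va_start(ap, fmt);
    while ((c = *fmt++) != '\0') {
        if (c != '%') { kputchar(c); continue; }
        c = *fmt++;
        if (c == '\0') break;          /* stray '%' at end of string */
        switch (c) {
        case 'c': kputchar((char)va_arg(ap, int)); break;
        case 's': { const char *s = va_arg(ap, const char *);
                    while (*s) kputchar(*s++); break; }
        case 'd': { int n = va_arg(ap, int);      /* (ignores INT_MIN) */
                    if (n < 0) { kputchar('-'); n = -n; }
                    kprint_uint((unsigned int)n, 10); break; }
        case 'x': kprint_uint(va_arg(ap, unsigned int), 16); break;
        default:  kputchar(c); break;  /* unknown specifier: echo it */
        }
    }
    va_end(ap);
}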

God, it's a long road, but I'm not giving up! If I spend 5 hours playing with basic assembly and don't touch the source code of my OS... I still consider that time spent _working_ towards my OS.

Reply Score: 2

Machine language or C
by fran on Sat 29th Jan 2011 15:40 UTC
fran
Member since:
2010-08-06

Neo, what were your thoughts in deciding between machine language (like MenuetOS uses) and C to code the OS?

Reply Score: 2

RE: Machine language or C
by Neolander on Sat 29th Jan 2011 16:11 UTC in reply to "Machine language or C"
Neolander Member since:
2010-03-08

Depends. If you want to write portable code, you should avoid assembly like the plague due to its highly machine-specific nature. I also find it much harder to write well-organized, easy-to-debug ASM code, but it might be due to my relative lack of experience with it. If you feel this way too, you should probably be using a higher-level language instead: it's very important to keep your codebase as tidy as possible.

Otherwise, as you say yourself, MenuetOS itself shows that it's possible to get some interesting results with Assembly.

Edited 2011-01-29 16:16 UTC

Reply Score: 1

RE[2]: Machine language or C
by WereCatf on Sat 29th Jan 2011 17:47 UTC in reply to "RE: Machine language or C"
WereCatf Member since:
2006-02-15

I'd strongly advise against assembly: unless you have years and years of experience with it, you'll sooner or later lose oversight of the whole thing, simply due to the sheer amount of text you'll be writing. Not to mention how incredibly tedious it is.

Of course it's a way of learning assembly, yes, but there are plenty of better ways of going about that. If you plan to also learn kernel programming at the same time, then you're faced with the famous chicken-and-egg problem: you need to learn assembly to do kernel coding, and you need kernel coding to learn assembly.

Of course opinions are opinions, but I'd much rather suggest setting out to learn one thing at a time: either assembly or kernel programming, not both. If you wish to do kernel programming, then choose a language you're more familiar with.

Reply Score: 2

RE[3]: Machine language or C
by fran on Sat 29th Jan 2011 18:53 UTC in reply to "RE[2]: Machine language or C"
fran Member since:
2010-08-06

About keeping track of hardware changes:
the main reason SkyOS became dormant was the rapid development of hardware.
http://www.skyos.org/?q=node/647

I was wondering whether a hardware driver in machine language is easier to add to your kernel, and more universal, than say a higher-level language driver. But maybe then you would have less functionality?

Reply Score: 2

RE[4]: Machine language or C
by Neolander on Sat 29th Jan 2011 19:58 UTC in reply to "RE[3]: Machine language or C"
Neolander Member since:
2010-03-08

Not necessarily. You can create wrappers in order to use the arch-specific assembly functions you need in your language of choice, and keep all the logic written in this language.

As an example, if I need to output bytes on one of the ports of my x86 CPU, all I have to do is create a "C version" of the OUTB assembly instruction, using macros and inline assembly, and after that I can use it in the middle of my C code at native speed.

With GCC's inline assembly, it'd look something like this:

#define outb(value, port) \
    __asm__ volatile ("outb %b0, %w1" \
                      : /* no outputs */ \
                      : "a" (value), "Nd" (port))

Yes, it's ugly, but you only have to do it once.
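After that, using it reads like any other C call, e.g. to acknowledge an interrupt at the master PIC:

outb(0x20, 0x20);   /* send EOI (0x20) to the master PIC's command port (0x20) */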

Edited 2011-01-29 20:09 UTC

Reply Score: 1

RE[4]: Machine language or C
by Alfman on Sat 29th Jan 2011 20:22 UTC in reply to "RE[3]: Machine language or C"
Alfman Member since:
2011-01-28

The majority should definitely be coded in a high-level language.

In the days of DOS TSRs, we wrote in assembly because the code had to be small and efficient.

Even to this day it's often easy to beat a compiler's output, simply because the compiler is constrained by a fixed calling convention. In x86 assembly language, I am free to stuff values where I please. A variable can be stuffed into segment registers; a function returning a boolean can use the "zero flag". This eliminates the need to do a "cmp" in the calling function.

In my own assembly, I can keep variables intact across function calls without touching the stack. To my knowledge, all C compilers use unoptimized calling conventions by design so that separately compiled object files link correctly.

Of course above I'm assuming that function call overhead is significant, YMMV.


However, all this optimization aside, looking back on TSRs I wrote, it takes a long time to familiarize oneself with the code paths again. This is true of HLLs too, but even more so with assembly. It is crucial to comment everything in assembly, and it is equally crucial that the comments be accurate.

Low-level assembly is just not suitable for code that is meant to be developed over time by many people. Besides, it's not portable, and cannot benefit from new architectures with a simple recompile.

Off the top of my head, the bootloader and protected mode task management are the only things which really require any significant assembly language.

Reply Score: 1

RE[5]: Machine language or C
by WereCatf on Sat 29th Jan 2011 20:34 UTC in reply to "RE[4]: Machine language or C"
WereCatf Member since:
2006-02-15

Even to this day it's often easy to beat a compiler's output

I quite doubt that. There's been plenty of discussion of this, and the general consensus nowadays is that it's really, really hard to beat at least GCC's optimizations anymore. Though I admit I personally have never even tried to ;)

Reply Score: 2

RE[6]: Machine language or C
by stestagg on Sun 30th Jan 2011 00:10 UTC in reply to "RE[5]: Machine language or C"
stestagg Member since:
2006-06-03

I have tried it, just for fun. I even used some example assembly provided by AMD, optimised for my processor, for parts of it, but could only get up to about 80% of the GCC performance.

Reply Score: 2

RE[7]: Machine language or C
by WereCatf on Sun 30th Jan 2011 03:07 UTC in reply to "RE[6]: Machine language or C"
WereCatf Member since:
2006-02-15

Just remember kids, premature optimization is the root of all evil, similar to certain other premature...mishappenings! ;)

Reply Score: 2

RE[6]: Machine language or C
by Alfman on Sun 30th Jan 2011 03:23 UTC in reply to "RE[5]: Machine language or C"
Alfman Member since:
2011-01-28

"Even to this day it's often easy to beat a compiler's output"

"I quite doubt that. There's been plenty of discussion of this and the general consensus nowadays is that it's really, really hard to beat atleast GCC's optimizations anymore."

I feel you've copied my phrase entirely out of context. I want to re-emphasize my point that C compilers are constrained to strict calling conventions, which implies shifting more variables between registers and the stack than hand-written code would need.

I'm not blaming GCC or any other compiler for this; after all, calling conventions are very important for both static and dynamic linking. However, it can result in code which performs worse than if done by hand.

As for your last sentence, isn't the consensus that GCC's output performs poorly compared to commercial compilers such as Intel's?

I would be interested in seeing a fair comparison.

Edited 2011-01-30 03:37 UTC

Reply Score: 1

RE[7]: Machine language or C
by WereCatf on Sun 30th Jan 2011 03:57 UTC in reply to "RE[6]: Machine language or C"
WereCatf Member since:
2006-02-15

As for your last sentence, isn't the consensus that GCC's output performs poorly compared to commercial compilers such as Intel's?

I would be interested in seeing a fair comparison.


Apparently this is true. I googled a bit and found three benchmarks:

http://macles.blogspot.com/2010/08/intel-atom-icc-gcc-clang.html

http://multimedia.cx/eggs/intel-beats-up-gcc/

http://www.luxrender.net/forum/viewtopic.php?f=21&t=603


They're all from 2009 or 2010, and in all of them icc beats GCC by quite a large margin, not to mention icc is much faster at doing the actual compiling, too. Quite surprising. What could the reason be, then? Why does an open-source compiler fare so poorly against a commercial one?

Edited 2011-01-30 03:58 UTC

Reply Score: 2

RE[4]: Machine language or C
by Morin on Sat 29th Jan 2011 20:25 UTC in reply to "RE[3]: Machine language or C"
Morin Member since:
2005-12-31

I was wondering whether a hardware driver in machine language is easier to add to your kernel, and more universal, than say a higher-level language driver. But maybe then you would have less functionality?


No. As neolander said, you can wrap machine language instructions in C functions (or Java methods, or anything else that models a procedural construct).

The only area where this is impossible is when the function call itself interferes with the purpose of the instruction, and this occurs in very very few places. Certainly not hardware drivers, but rather user-to-kernel switching, interrupt entry points, etc.

As a side note, the NT kernel is based on an even more low-level core called the "HAL" (hardware abstraction layer) that IIRC encapsulates those few places where you actually NEED machine language.

Reply Score: 2

RE[5]: Machine language or C
by Neolander on Sat 29th Jan 2011 22:32 UTC in reply to "RE[4]: Machine language or C"
Neolander Member since:
2010-03-08

I'd also add that in some cases there's so much assembly in a function that devs are better off writing the whole function in assembly instead of using such wrappers.

x86 examples (a sketch of the first one follows below):
-Checking if CPUID is available
-Switching to long mode (64-bit mode) or to its "compatibility" 32-bit subset.
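For the curious, the first one boils down to the classic "toggle bit 21 of EFLAGS" test. A sketch with GCC inline assembly (32-bit x86 only; an illustration, not code lifted from my kernel):

#include <stdint.h>

/* CPUID exists iff the ID bit (bit 21) of EFLAGS can be flipped. */
static int cpuid_available(void) {
    uint32_t before, after;
    __asm__ volatile (
        "pushfl\n\t"             /* save the original EFLAGS          */
        "pushfl\n\t"
        "popl %0\n\t"            /* before = EFLAGS                   */
        "movl %0, %1\n\t"
        "xorl $0x200000, %1\n\t" /* flip the ID bit                   */
        "pushl %1\n\t"
        "popfl\n\t"              /* try to write the flipped value    */
        "pushfl\n\t"
        "popl %1\n\t"            /* after = EFLAGS as the CPU kept it */
        "popfl"                  /* restore the original EFLAGS       */
        : "=&r" (before), "=&r" (after)
        :
        : "cc");
    return ((before ^ after) & 0x200000) != 0;
}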

Edited 2011-01-29 22:33 UTC

Reply Score: 1

Machine language can be OK
by Kochise on Tue 1st Feb 2011 13:04 UTC in reply to "RE: Machine language or C"
Kochise Member since:
2006-03-03

Some attempts have been made to raise assembly to a higher level, such as HLA:

http://webster.cs.ucr.edu/AsmTools/HLA/hla2/0_hla2.html

Give it a try and figure out where MASM has stalled...

Kochise

Reply Score: 2

I remember...
by cefarix on Sat 29th Jan 2011 21:07 UTC
cefarix
Member since:
2006-03-18

I started OS deving when I was 12, back in 2000. It started as a DOS .com file, written in assembly, with much of the necessary code based on SHAWN OS (like the GDT, IDT, etc). It eventually turned into a monstrous single 4000-line file. Then I reformatted my HD and lost it ;) So I started over, this time with more experience under my belt, in C. After about 2 or 3 years of work it had a GUI with a mouse and was reading the HD and CD serials using the ATA/ATAPI identify command (anyone remember that?). Then I started from scratch again, because I realized it would take too much time to rework the monster source tree to make it load drivers as modules instead of linking them in at compile time. Worked on that for another 3 years. That was a more solid base: it could read/write the HD properly, had a filesystem (though it was somewhat buggy), a bootloader with multiboot options, loadable drivers, and a command-line interface (I decided to focus on the internals rather than going for the fancy GUI right away this time). Then I stopped working on it because I grew up.

All in all, it was an amazing experience. I spent all my teenage years working on my OS or other programming things, and thanks to that, I'm a pretty good programmer now, if I may say so, and have started my own company doing iPhone development for now.

If anyone wants to check out its source code, it's still on SourceForge... http://cefarix.cvs.sourceforge.net/cefarix/

Edited 2011-01-29 21:08 UTC

Reply Score: 2

Very good advice!
by abstraction on Sat 29th Jan 2011 23:05 UTC
abstraction
Member since:
2008-11-27

I started working on my OS without any prior knowledge of operating-system or close-to-hardware programming of any kind. The reason for trying it out was, as people here said before me, to learn how everything actually works, because you can theorize as much as you want, but knowing how it actually works gives a lot more weight to your arguments.

The thing that really struck me was the amount of time it took to go from nothing to something. The major part of the work I've done consists of just reading tons and tons of documentation, looking at examples, looking at other people's code, and actually learning the tools (because it's not as simple as gcc kernel.c -o kernel; see the sketch below). The actual coding is only a small part of the time you invest.
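To give an idea, a bare-metal build looks more like this (flags are representative, and linker.ld/boot.asm are made-up names; every project's recipe differs):

gcc -m32 -ffreestanding -fno-builtin -fno-stack-protector -c kernel.c -o kernel.o
nasm -f elf32 boot.asm -o boot.o
ld -m elf_i386 -T linker.ld -o kernel.bin boot.o kernel.o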

This article amused me because it reminds me of the steady stream of posts on the OSDev forum from people trying to create an operating system that is going to be the next big thing, but who don't even get the most fundamental things working. Often they try to get other people to do it for them. This is actually more common than you might think.

It is not very hard to get your OS to boot and print some text on the screen, but going from there to having all the other parts done can truly be a pain in the buttocks. If you are not careful, it is easy to overlook some part that you should have thought about in the beginning, which now turns out to be a freakin' nightmare.

Anyway, good luck to all enthusiasts out there!

Edited 2011-01-29 23:09 UTC

Reply Score: 2

Another way to write a hobby O/S...
by axilmar on Sun 30th Jan 2011 15:52 UTC
axilmar
Member since:
2006-03-20

...is to abstract the hardware (CPU, devices, etc) and target a fictitious architecture that suits your needs.

I've done that for a hobby project and it became much easier to write a small hobby O/S.

My main interest was in-process component isolation; I managed to add component isolation through the page table of the CPU: each page belonged to a process's component, and a component could only touch pages of the same component, or call code in pages that were marked 'public' for other components. My pseudo-assembly had relative addressing as well as absolute addressing, in order to allow me to move components inside each process without too much fuss.

I believe this approach is viable, especially on 64-bit architectures: all programs share the same address space, and each program cannot touch another program's private memory, except for the public pages. It would make component co-operation much easier than it is today... and much faster ;-).
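To make the access rule concrete, here is a toy version of the check (names are invented for illustration; in the real thing the enforcement happens through the page table, not a C function):

/* Each page records which component owns it and whether it is public. */
typedef struct {
    int owner;       /* id of the owning component          */
    int is_public;   /* public entry page, callable by all  */
} page_info;

/* is_call: 1 for an instruction fetch (a call), 0 for a data access. */
static int may_access(const page_info *pg, int component, int is_call) {
    if (pg->owner == component)
        return 1;                       /* own pages: full access          */
    return is_call && pg->is_public;    /* others: call-only, public pages */
}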

Reply Score: 3

Comment by Innominandum
by Innominandum on Sun 30th Jan 2011 18:50 UTC
Innominandum
Member since:
2005-11-18

Having a self-defeating attitude achieves nothing. Perseverance & tenacity will take you further than 'being realistic' ever will.

"Even to this day it's often easy to beat a compiler's output"

This is true. I do it every day.

Reply Score: 1

HLL FTL
by Innominandum on Mon 31st Jan 2011 02:32 UTC
Innominandum
Member since:
2005-11-18

Just in case you guys don't believe me: I compiled and disassembled a small segment of code that happened to be on my screen, under GCC 4.5.2:

x^=(x<<13), x^=(x>>17), x^=(x<<5)

Which resulted in:

8B45F0 mov eax,[rbp-0x10]
C1E00D shl eax,0xd
3145F0 xor [rbp-0x10],eax
8B45F0 mov eax,[rbp-0x10]
C1E811 shr eax,0x11
3145F0 xor [rbp-0x10],eax
8B45F0 mov eax,[rbp-0x10]
C1E005 shl eax,0x5
3145F0 xor [rbp-0x10],eax

Ouch. Six memory references. It runs at an average of 20 cycles on my AMD Phenom 8650. The obvious two-memory-reference replacement runs at an average of 8 cycles, more than twice as fast.

This is basic stuff that even a neophyte ASM programmer would not miss.

Edited 2011-01-31 02:35 UTC

Reply Score: 1

RE: HLL FTL
by WereCatf on Mon 31st Jan 2011 02:47 UTC in reply to "HLL FTL"
WereCatf Member since:
2006-02-15

And what were the compiler parameters for GCC then?

Reply Score: 2

RE[2]: HLL FTL
by WereCatf on Mon 31st Jan 2011 03:13 UTC in reply to "RE: HLL FTL"
WereCatf Member since:
2006-02-15

Replying to myself: I got the same code _without any kind of compiler parameters_, i.e. you are comparing optimized code against completely unoptimized i486-compatible code. The reason why you get such code is quite obvious...

Reply Score: 2

RE: HLL FTL
by Alfman on Mon 31st Jan 2011 03:48 UTC in reply to "HLL FTL"
Alfman Member since:
2011-01-28

Here is what I get, with and without -O3, in GCC 4.4.1.
(I hope this output doesn't get clobbered)
Edit: It did get clobbered; I needed to fix it manually.

They are both pretty bad; I am actually quite surprised at how poorly GCC handled it. But for the record, I never doubted your claims about being able to do better than the compiler. Does someone have ICC on hand to see its output?


GCC with -O3:
08048410 <func>:
8048410: push %ebp
8048411: mov %esp,%ebp
8048413: mov 0x8(%ebp),%edx
8048416: pop %ebp
8048417: mov %edx,%eax
8048419: shl $0xd,%eax
804841c: xor %edx,%eax
804841e: mov %eax,%edx
8048420: shr $0x11,%edx
8048423: xor %eax,%edx
8048425: mov %edx,%eax
8048427: shl $0x5,%eax
804842a: xor %edx,%eax
804842c: ret

Without optimization flags:
080483e4 <func>:
80483e4: push %ebp
80483e5: mov %esp,%ebp
80483e7: mov 0x8(%ebp),%eax
80483ea: shl $0xd,%eax
80483ed: xor %eax,0x8(%ebp)
80483f0: mov 0x8(%ebp),%eax
80483f3: shr $0x11,%eax
80483f6: xor %eax,0x8(%ebp)
80483f9: mov 0x8(%ebp),%eax
80483fc: shl $0x5,%eax
80483ff: xor %eax,0x8(%ebp)
8048402: mov 0x8(%ebp),%eax
8048405: pop %ebp
8048406: ret

Edited 2011-01-31 03:57 UTC

Reply Score: 1

RE: HLL FTL
by Alfman on Mon 31st Jan 2011 03:52 UTC in reply to "HLL FTL"
Alfman Member since:
2011-01-28

Innominandum,

I see your disassembly is in Intel x86 syntax; how did you generate that? All the GNU tools at my disposal generate AT&T syntax, which I find very annoying.

Reply Score: 1

RE[2]: HLL FTL
by jal_ on Mon 31st Jan 2011 10:16 UTC in reply to "RE: HLL FTL"
jal_ Member since:
2006-11-02

Check the objdump parameters, especially --disassembler-options with value intel-mnemonic.
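For example, either of these does it (kernel.o standing in for whatever you want to disassemble):

objdump -d -M intel kernel.o
objdump -d --disassembler-options=intel-mnemonic kernel.o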

Reply Score: 2

MMURTL
by jibadeeha on Tue 1st Feb 2011 18:12 UTC
jibadeeha
Member since:
2009-08-10

I remember wanting to write my own operating system years ago, and I bought a book called "Developing Your Own 32-Bit Operating System". It sounds sad, but I had never been so excited about a book, and I thought it was really good for step-by-step learning.

I then started to read a book on the x86 architecture and protected mode, but unfortunately I only got as far as writing a boot loader (like so many) that switched the machine into protected mode, and then wrote my own code to output some text to the screen.

It took me many months to get to that stage, so much so that I hit a wall with it and gave up. Yet I originally had ambitions to write my own scheduler, memory management, and file system code.

I wasn't cut out for OS development, so I really admire those who managed to write their own hobby OS - it takes a lot of time and dedication.

Reply Score: 1

RE: MMURTL
by Alfman on Wed 2nd Feb 2011 02:12 UTC in reply to "MMURTL"
Alfman Member since:
2011-01-28

"I remember wanting to write my own Operating System years ago, and bought a book called 'Developing your own 32-bit Operating System'. It sounds sad, but I had never been so excited about a book and thought it was really good for step-by-step learning."

It seems to be a phase that geeks go through at that age. Does anyone know if today's youth has the same aspirations?

Doing it forces us to learn a great deal more than can be taught in any class. But as much as I loved being able to write my own bootloader/OS to study computer architectures in detail, it's a shame that those skills are so unappreciated in today's job market.

Speaking of which, is anyone hiring in Suffolk County NY? I'm woefully underemployed.

Reply Score: 1

RE[2]: MMURTL
by A420X on Wed 2nd Feb 2011 11:01 UTC in reply to "RE: MMURTL"
A420X Member since:
2011-02-02


It seems to be a phase that geeks go through at that age. Does anyone know if today's youth has the same aspirations?


Well, I'm in my early 20s; I'm not sure if that qualifies me as a 'youth' (then again, everything is relative ;) ). I do have fond memories of my GCSE IT class and being the only person to do a programming project for my coursework instead of a database.

When our teacher introduced the module on programming, he asked if any of us had used VB6. The class was a sea of blank faces; I answered no, but said that I was okay at C++ and was trying to learn assembly and C. Our teacher (of the can't-do, can't-teach-either variety) seemed to take offence and asked if I would like to teach the class about variables, since I was obviously such an 'expert'.

I still consider it a brave moment when I walked to the front of the class, copied a diagram I remembered from my beginner's C++ book, and got everyone to understand the concept of data types, variables and memory addresses; most could even get a basic calculator working by the end of class. (The look on the old teacher's face was priceless.)

Fuelled by this (undeserved) ego boost, I decided I would write my own OS for my coursework (bad move!). It never worked, but the theoretical knowledge I got from just trying was worth it, and my documentation was pretty good, so I still got a B for the module (maybe that says something about the difficulty of GCSEs).

What is very sad, though, is that I knew people who got A* results for the course and still didn't really understand what a simple program, let alone an operating system, consisted of at the most basic level. Not because they were stupid or didn't care, but because IT, like Maths, is simply not taught properly in schools these days. We had a week on programming and low-level stuff, and the rest of the year was spent learning how to mail merge in Office and make charts in Excel *sigh*

Reply Score: 1

RE[3]: MMURTL
by Alfman on Wed 2nd Feb 2011 19:48 UTC in reply to "RE[2]: MMURTL"
Alfman Member since:
2011-01-28

"Well I'm in my early 20s im not sure if that qualifies me as a 'youth'"

I was thinking younger but there's no reason to be discriminating.

"What is very sad though is that I knew people who got A* results for the course and still didn't really understand what a simple program let alone an operating system consisted of at the basic levels."

I found this to be often the case.
I had one particular professor for many upper-level CS electives who refused to accept "original" solutions to his class problems. He would only accept solutions which were near-verbatim copies of what had been done in class. This meant that people who merely memorized material did much better than those of us who were able to derive solutions.

After getting a failing grade on an exam (an Operating Systems class, of all things), I confronted him about this; despite the fact that none of my answers were wrong, they weren't what he was expecting. Obviously he didn't care whether an answer was right, only that it matched his. He justified this by saying that he had been a professor for 20 years and wasn't about to change for me. I told him genuine industry experts would be unable to pass his exams; he didn't care.

Reply Score: 1

RE[4]: MMURTL
by A420X on Wed 2nd Feb 2011 23:48 UTC in reply to "RE[3]: MMURTL"
A420X Member since:
2011-02-02

I was thinking younger but there's no reason to be discriminating.


If you mean that I came across as discriminatory, apologies; that wasn't my intention. I really need to be less colloquial when I write online; things do have a habit of getting lost in translation ;)

There does seem to be a big problem, in the UK at least, with the IT curriculum. Obviously no school can teach its students about all operating systems and software packages, but I think they could try to teach more theory and rely less on Microsoft for examples. After all, if there's one place you shouldn't be blinkered, it's at school.

And maybe a bit more OS theory would spark the imagination of the next Bill Gates - maybe not Bill, Linus? ;)

Reply Score: 1

RE[2]: MMURTL
by A420X on Wed 2nd Feb 2011 11:03 UTC in reply to "RE: MMURTL"
A420X Member since:
2011-02-02

--double post--

Edited 2011-02-02 11:09 UTC

Reply Score: 1