Linked by Hadrien Grasland on Fri 28th Jan 2011 20:37 UTC
It's recently been a year since I started working on my pet OS project, and I often end up looking backwards at what I have done, wondering what made things difficult in the beginning. One of my conclusions is that while there's a lot of documentation on OS development from a technical point of view, more should be written about the project management aspect of it. Namely, how to go from a blurry "I want to code an OS" vision to either a precise vision of what you want to achieve, or the decision to stop following this path before you hit a wall. This article series aims at putting those interested in hobby OS development on the right track, while keeping this aspect of things in mind.
RE[13]: Machine language or C
by Alfman on Sun 30th Jan 2011 22:22 UTC in reply to "RE[12]: Machine language or C"

"Second reason why early optimization is bad is that, as I mentioned earlier, there's a degree of optimization past which code becomes dirtier and harder to debug."

Rewriting code in assembly (for example) is usually a bad idea even after everything is working, and it's surely even worse to do beforehand. But that isn't the sort of optimization I'm referring to at all.

Blanket statements like "premature optimization is the root of all evil" put people in the mindset that it's okay to defer any consideration of efficiency in the initial design: the factors that matter up front are ease of use, manageability, etc., while optimization and efficiency are only tacked on at the end.

However, some designs are inherently more efficient than others, and switching designs midstream to address efficiency issues can involve a great deal more difficulty than if those issues had been addressed up front.

For a realistic example, look at how many Unix client/server apps start out by forking a new process for each client. This design, while easy to implement up front, tends to perform rather poorly. So now we have to add incremental optimizations such as preforking and IPC, then support multiple clients per process, and so on.
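
To make the pattern concrete, here is a minimal, hypothetical sketch of the fork-per-connection style (error handling omitted; handle_client() is just a placeholder for whatever the server actually does per client):

    /* Fork-per-connection TCP server sketch.
       handle_client() is a placeholder for the real per-client work. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void handle_client(int fd) { /* talk to the client, then return */ }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;
            if (fork() == 0) {       /* child: one whole process per client */
                close(lfd);
                handle_client(cfd);
                close(cfd);
                _exit(0);
            }
            close(cfd);              /* parent: go back to accepting */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                    /* reap any finished children */
        }
    }

Every accepted connection pays for a fork(), and sharing any state between clients now requires IPC, which is exactly where the later "optimizations" start piling up.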

After all this work, the simple app plus its optimizations ends up being more convoluted than a more "complicated" solution would have been in the first place.
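
For contrast, here is a rough sketch of the kind of "more complicated" design I mean: a single process multiplexing all of its clients with epoll (Linux-specific, error handling omitted, and handle_data() is again just a placeholder):

    /* Single-process event-loop server sketch using epoll (Linux).
       handle_data() is a placeholder for reading/writing one client. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_EVENTS 64

    static void handle_data(int fd) { /* read request, write reply, or close(fd) */ }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 128);

        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev);

        for (;;) {
            struct epoll_event events[MAX_EVENTS];
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == lfd) {                 /* new connection */
                    int cfd = accept(lfd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = cfd };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &cev);
                } else {
                    handle_data(fd);             /* existing client has data */
                }
            }
        }
    }

The event loop is harder to write on day one, but it handles thousands of clients in one process without ever needing the fork/prefork/IPC retrofits.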

The Apache project is a great example of where this has happened.


The Linux kernel has also made some up-front choices which have made optimization extremely difficult. One such choice is the dependence on kernel threads in the filesystem IO layer, and the cement has long since dried on this one. Every single file IO request requires a kernel thread to block for the duration of the IO. Not only has this design been responsible for numerous lock-ups with network file systems, because it is very difficult to cancel a blocked thread safely, but it has also impeded the development of efficient asynchronous IO in user space.

Had I been involved in the development of the Linux IO subsystem from the beginning, the kernel would have used async IO internally from the get-go. We cannot get there from here today without rewriting all of the filesystem drivers.
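
To illustrate what user space is left with, here is a hedged sketch using the POSIX AIO interface (link with -lrt); as far as I know, glibc implements this with a pool of user-space helper threads that block in ordinary reads, which is precisely the limitation I'm describing. The file name is just an example, and the busy-wait is only to keep the sketch short; a real program would use completion notification instead:

    /* Asynchronous file read via POSIX AIO (sketch).
       glibc emulates this with blocking helper threads rather than
       true kernel-level async IO. */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[4096];
        int fd = open("/etc/hostname", O_RDONLY);   /* example file */

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;

        aio_read(&cb);                  /* queue the read without blocking */

        /* ...do other useful work while the read is in flight... */

        while (aio_error(&cb) == EINPROGRESS)
            ;                           /* busy-wait only for brevity */

        ssize_t n = aio_return(&cb);
        if (n > 0)
            printf("read %zd bytes\n", n);
        close(fd);
        return 0;
    }

Scale this up to a server juggling thousands of outstanding requests and the thread-per-request emulation underneath becomes the bottleneck; fixing it properly would mean the filesystem drivers themselves being written around asynchronous requests.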

The point being: sometimes it is better to go with a slightly more complicated model up front in order to head off convoluted optimizations at the end.

Edited 2011-01-30 22:36 UTC
