Linked by Thom Holwerda on Fri 22nd Feb 2008 09:16 UTC, submitted by obsethryl
.NET (dotGNU too) "Previously, we have presented one of the two open source licensed projects related to creating a C# kernel. Now it's time to complete the set by rightfully presenting SharpOS, an effort to build a GPL version 3 + runtime exception licensed system around a C# kernel of their own design. It is my pleasure and privilege to host a set of questions and answers from four active developers of SharpOS, that is William Lahti, Bruce Markham, Mircea-Cristian Racasan and Sander van Rossen, in order to get some insight into what they are doing with SharpOS, their goals, and their different design and inspiration."
Thread beginning with comment 302020
To view parent comment, click here.
To read all comments associated with this story, please click here.
RE[3]: So what ?
by g2devi on Fri 22nd Feb 2008 19:51 UTC in reply to "RE[2]: So what ?"
g2devi
Member since:
2005-07-09

Buffer overflows and memory leaks are only a small part of the issues operating systems face. More serious issues include: general *resource* leaks (which C# can do nothing about that can't also be done in C); the excess memory usage that anything but a reference-counting garbage collector is notorious for (memory is eventually recovered, but in the meantime far more than the live data stays allocated); treating different things the same (i.e. if you try to treat register memory the same way you treat swap memory, you're going to have serious performance and behavioral issues); and ignoring the hardware to make developers' lives easier, like using the stack exclusively in preference to fast registers (if you ignore the hardware, the hardware will ignore you).

IMO, writing an OS in Java or C# might be good for educational purposes, for OSes where latency makes speed a non-issue (e.g. a distributed OS over a slow network), or when you have tonnes of memory and CPU to waste. But I don't see it reaching the desktop any time soon, though that *might* change with time. After all, anyone who's had a 1 MHz Commodore 64 or an Amiga 1000 knows that those seriously underpowered machines could do wonders because of their tight coding (which included now-taboo performance techniques like self-modifying assembly code). By those standards, even Linux these days is a memory hog, for the sake of maintainability and functionality.

OTOH, given that the average user's applications keep finding ways to stress memory and CPU (Compiz, video processing, the Human Genome Project, weather prediction, etc.) in ways that we didn't even think of 10 years ago, and the fact that Moore's law has a limit which we might reach within the next five years (i.e. the atomic level), I'm skeptical.

Reply Parent Score: 2

RE[4]: So what ?
by tuttle on Fri 22nd Feb 2008 21:23 in reply to "RE[3]: So what ?"
tuttle Member since:
2006-03-01

ignoring the hardware to make developer's lives easier like using the stack exclusively in preference to fast registers (if you ignore the hardware, the hardware will ignore you).


I agree that the state of the art of managed runtime environments leaves a lot to be desired. But it is not quite as bad as you describe:

All current runtime environments (CLR, JVM) use registers whenever possible. That includes using registers for passing function parameters and return values.

The new JVM will even stack-allocate thread-local objects, like local variables, to reduce the stress on the garbage collector.

And in .NET it is possible to write complex programs that do not use the heap at all by using structs.
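A minimal sketch of that claim: a C# struct is a value type, so instances live on the stack (or inline in their container) rather than on the garbage-collected heap. The `Vector2` type here is purely illustrative, not from any of the projects discussed.

```csharp
using System;

// A value type: instances are stack-allocated, not heap-allocated,
// so using them creates no work for the garbage collector.
struct Vector2
{
    public double X;
    public double Y;

    public Vector2(double x, double y)
    {
        X = x;
        Y = y;
    }

    public Vector2 Add(Vector2 other)
    {
        // The result is also a value type: no heap allocation here.
        return new Vector2(X + other.X, Y + other.Y);
    }
}

class Program
{
    static void Main()
    {
        // 'new' on a struct does not allocate on the heap;
        // a, b and sum all live in this method's stack frame.
        Vector2 a = new Vector2(1.0, 2.0);
        Vector2 b = new Vector2(3.0, 4.0);
        Vector2 sum = a.Add(b);
        Console.WriteLine(sum.X + ", " + sum.Y);
    }
}
```

The trade-off is that structs are copied by value on assignment and parameter passing, so they are best kept small.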

Resource leaks can be dealt with by using the 'Resource Acquisition Is Initialization' pattern. It is not as elegant as in C++, but definitely possible. I use this all the time.
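The usual way to express this pattern in C# is `IDisposable` plus a `using` block, which guarantees deterministic cleanup when the scope exits, even on exceptions. The `ScopedResource` class below is a hypothetical stand-in for any handle-owning type:

```csharp
using System;

// The closest .NET analogue to C++ RAII: acquire in the
// constructor, release in Dispose(), and let 'using' guarantee
// that Dispose() runs when the scope is left.
class ScopedResource : IDisposable
{
    public ScopedResource()
    {
        Console.WriteLine("resource acquired");
    }

    public void Dispose()
    {
        Console.WriteLine("resource released");
    }
}

class Program
{
    static void Main()
    {
        using (ScopedResource r = new ScopedResource())
        {
            Console.WriteLine("working with resource");
        } // Dispose() is called here, even if an exception is thrown
    }
}
```

Unlike a C++ destructor, this only works where the caller remembers to write `using`, which is why it reads as less elegant than the C++ original.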

Reply Parent Score: 1

RE[5]: So what ?
by obsethryl on Sat 23rd Feb 2008 09:40 in reply to "RE[4]: So what ?"
obsethryl Member since:
2006-11-16

Resource leaks can be dealt with by using the 'Resource Acquisition is Initialization' pattern. It is not as elegant as in C++, but definitely possible. I use this all the time.


Correct. And there are many, many ways to do nice things in C++. It is just hard when you begin, but not at all once you start relying on efficient design and solid implementation to develop your ideas further. C++ does not attempt to make things easy merely for the sake of making them easy to begin with. Sure, nothing is perfect, but a multi-paradigm language that allows you to work at _any_ level you wish with your hardware may be the best fit in many scenarios.

However, language shoot-outs serve no particular purpose. For the record, I will quote B. Stroustrup on this, from his page at

http://www.research.att.com/~bs/bs_faq.html

where you can see the context in which the words below were written:


I also worry about a phenomenon I have repeatedly observed in honest attempts at language comparisons. The authors try hard to be impartial, but are hopelessly biased by focusing on a single application, a single style of programming, or a single culture among programmers. Worse, when one language is significantly better known than others, a subtle shift in perspective occurs: Flaws in the well-known language are deemed minor and simple workarounds are presented, whereas similar flaws in other languages are deemed fundamental. Often, the workarounds commonly used in the less-well-known languages are simply unknown to the people doing the comparison or deemed unsatisfactory because they would be unworkable in the more familiar language.

Similarly, information about the well-known language tends to be completely up-to-date, whereas for the less-known language, the authors rely on several-year-old information. For languages that are worth comparing, a comparison of language X as defined three years ago vs. language Y as it appears in the latest experimental implementation is neither fair nor informative.



I believe that the above applies to many situations.

Reply Parent Score: 1