Singularity is a research project in Microsoft Research that started with the question: what would a software platform look like if it was designed from scratch with the primary goal of dependability? Singularity is working to answer this question by building on advances in programming languages and tools to develop a new system architecture and operating system (named Singularity), with the aim of producing a more robust and dependable software platform.
Damn, I wish they'd release this under one of their new, less restrictive Shared Source licenses. I seriously doubt it'll happen anytime soon, though, if ever 🙁
More PR crap from Microsoft. They have a Linux research lab. Will they produce a Linux distro? Now they have a “dependability” lab. Will they produce a dependable OS?
Those who do not learn from Unix are doomed to reinvent it — badly.
What?!?
I guess you didn't read it; I can tell from your Linux fanboy rant. This blows away both Linux and Windows. Way beyond that Linux copy of a '70s OS. This is what Linux should have been.
Blows away both, huh? Too bad it doesn't exist yet.
Green Hills Software (http://www.ghs.com) has produced exactly such an operating system for embedded systems. It's called INTEGRITY, and it's used in everything from cars to routers to airplanes. It has been rated by the FAA for DO-178B Level A certification, which means it can run on parts of the plane where, if the software crashes, the plane could crash. It is the only commercial off-the-shelf operating system rated at Level A.
While I agree that working with "safe" languages such as C# is a great goal and a great research project (I would love to work on it), people should realize that the real world already has the ability to run safety-critical code in a safe environment without the need for a "safe" language.
Horseshit.
Would you mind reading the paper before you comment on it?
This is a very cool research project.
I did read the paper before I commented on it, and as I said, I would love to work on this research project. However, I wanted to point out that other projects exist with a similar purpose, and at least one of them is available today.
Microsoft? Trying to research and develop an original, dependable operating system they didn’t buy from someone else? Why do I smell bullshit?
I thought we already had OpenBSD, NetBSD and Solaris for “dependability”?
I'm confused…
EDIT: it does look like an interesting project. Just because they're a huge uncaring and/or monopolistic corporation doesn't mean they can't make *something* cool.
This is a very cool research project from Microsoft. It’s a good example of how they are actually innovating outside of a marketplace.
The trolls on this thread are very discouraging – please, at least read about Singularity before giving a generic response that “Microsoft can’t be dependable” or that Linux/BSD is better.
Remember to keep your mind open, and that goes both ways.
Indeed.
I would like to try this Singularity for myself. It appears that the Microsoft research lab is at least trying to come up with something decent.
Like it says, there are over 40 years of experience to build upon, and if OS designers at Microsoft change their views and design for stability/security from the ground up, then we might have something.
Oh, and forget backwards compatibility; that's why Microsoft is in the state it is in now.
> This is a very cool research project from Microsoft. It's a good example of how they are actually innovating outside of a marketplace.
Yes, this is a research project, i.e. innovation outside the marketplace. It does look very interesting, although some of the safeguards it implements, such as objects only being able to establish exactly one communication stream at a time and kernel code not being verifiably safe, seem like issues that need to be addressed sooner rather than later.
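For anyone who skipped the paper: SIPs talk over strongly typed channels whose legal message sequences are spelled out in a contract. A rough sketch in the Sing#-style syntax the paper uses (the contract name and messages below are made up for illustration, not taken from the paper):

    // Hypothetical channel contract: the state machine forces every
    // Request to be answered with a Reply before the next Request.
    contract CalculatorContract
    {
        in  message Request(int x, int y);   // client -> server
        out message Reply(int sum);          // server -> client

        state Start:     Request? -> Computing;
        state Computing: Reply!   -> Start;
    }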
What I’m looking forward to is when they may see fit to start innovating in the marketplace as well as offering a wider range of dependable (in a positive sense) software and operating systems.
> The trolls on this thread are very discouraging – please, at least read about Singularity before giving a generic response that "Microsoft can't be dependable" or that Linux/BSD is better.
You could always view the thread at a higher level than the default of -1. That would shield you from recognized and modded trolling (but not necessarily from everything you don’t want to see).
Besides, if you see that it's a troll comment, why elevate it to a level that requires you to be concerned about it? A little personal/internal down-modding will save you more mental and emotional distress than all the complaining that could ever be done by all the OSNews readers put together about the value or correctness of the writings of strangers.
> Remember to keep your mind open, and that goes both ways.
Pardon my asking, but what might this mean? If a mind is open, whether through remembering to keep it that way, by practice at keeping it open (habit, so to speak), or just by happy accident, then what does "both ways" mean? This sounds like it has something behind it that hasn't made its way into the actual comment. Are you saying that if I keep my mind open, you will remember to do the same? Even to ideas that might not be your [current] favorite?
It would be a good time for a major, popular OS that is really designed from scratch. But that is just dreaming …
The basis seems to be to C# what JNode is to Java. Can anyone comment on this?
But there are other nice points (I cannot remember if JNode did this too):
> In Singularity, an application consists of a manifest and a collection of resources. The manifest describes the application in terms of its resources and their dependencies. Although many existing setup descriptions combine declarative and imperative aspects, Singularity manifests contain only declarative statements that describe the desired state of the application after installation or update.
Voila, no installation hassle anymore.
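For flavor, a purely declarative manifest in this spirit might look like the sketch below. This is entirely hypothetical; the element names are mine, not taken from the Singularity paper. The point is that it states the desired installed state, with no imperative install steps:

    <!-- Hypothetical application manifest: describes what should exist
         after installation, not how to get there. -->
    <application name="MyEditor" version="1.0">
      <resources>
        <assembly name="MyEditor.exe" />
        <assembly name="TextEngine.dll" />
      </resources>
      <dependencies>
        <channel contract="FileSystemContract" />
      </dependencies>
    </application>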
This is going to be interesting.
– Morin
True, and more interesting will be seeing some of the research results applied to upcoming versions of Windows whether Singularity becomes marketable or not.
This is probably something they will migrate to over time… say 10-15 years for a full migration to a system based on the Singularity research.
That leaves plenty of time to adapt applications to make use of these things.
The sad thing is… if they decide to patent anything from the Singularity project, then in 10-15 years, when those patents actually begin to matter, there will be lots of people complaining about it being "trivial" and so on. (These people often forget that patented things may not have been trivial at the time of the patent application.)
I don’t know if anything from this research is patentable though, so we’ll have to wait and see.
It’s nice to see that they’re so freaked out by free operating systems that they’re actually starting to innovate in meaningful ways.
Can they ever come up with a _single_ original name?
And, BTW, talk about hyping a product!
dôLôb
There was some great stuff in that article and all you can say is something about the name?
Get lost!
Friends of mine and I have had several such ideas, mainly focusing on Python as the safe language, but damn, this stuff is good. I really hope they release it as a real product after they have finished researching it.
-sebulba
From Microsoft I never thought I would see any recognition for an occam project like RMOX, which they say is very similar; indeed it seems so. But there is no mention of CSP, the pi calculus, ALT guards, etc. It seems they are reinventing or rediscovering a lot of what has been around for 25 years, although that's okay; being first around doesn't mean it was quite ready at the time. 25 years ago there was no malware to worry about, and silicon was still very underpowered.
So we now have SIPs, mobile(?) channels, message passing, protected object spaces, and no shared pointers.
I agree with a lot of their goals, but I also fundamentally disagree with the big premise of using current hardware.
Real progress comes when the hardware supports everything the OS and applications want to do safely, rather than hindering it. This sort of OS really needs hardware capabilities to protect memory spaces down to very small pages, i.e. down to small data structs or even strings, for up to, say, 4 billion or more distinct objects, which can then be used to store any protected code, data, or message type you can imagine. Clearly, basing this work on the current x86 memory model, or any current page-table-driven processor, is going to hinder progress.
Processes take hundreds of thousands of cycles to create; hardware can do that in hundreds. Message ping-ponging costs about 1,500 cycles; hardware can do this more than an order of magnitude faster as well. The virtual address space for these SIPs is also very large at 286K; it should be much closer to the actual store needed.
I assume that even if this came to pass, modern MSIL apps could be compiled for it; I guess a lot of old apps would be left behind.
transputer guy
(Sorry for my bad English.)
Hey!! You are sooo right; there is a lot of cool stuff 'semi-buried'. If a 'new' innovation age is to come, it needs to be built on top of new HW and take into account much of the research done by the FP community. Maybe some day I will enjoy a 'true' stack machine running combinators…
> Real progress comes when the hardware supports everything the OS and applications want to do safely, rather than hindering it. This sort of OS really needs hardware capabilities to protect memory spaces down to very small pages, i.e. down to small data structs or even strings, for up to, say, 4 billion or more distinct objects, which can then be used to store any protected code, data, or message type you can imagine. Clearly, basing this work on the current x86 memory model, or any current page-table-driven processor, is going to hinder progress.
Where did you get this from? Singularity uses type safety for protection. What is special hardware needed for?
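To make that concrete, here is a minimal C# sketch of the property in question (my own illustration, not code from the paper). A stray access is trapped by the language runtime rather than by an MMU page fault, so verified code cannot scribble over another process's memory:

    using System;

    class TypeSafetyDemo
    {
        static void Main()
        {
            int[] buffer = new int[4];
            try
            {
                // Caught by the runtime's bounds check, not by paging
                // hardware: there is no way to turn this into a wild
                // write into someone else's address space.
                buffer[10] = 42;
            }
            catch (IndexOutOfRangeException e)
            {
                Console.WriteLine("Blocked by the runtime: " + e.Message);
            }
        }
    }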
– Morin
If one is going to do such a grand project, rebuilding the OS world and applications from the ground up, one can also consider what role the hardware can play in supporting that. As the paper says, very little has changed in hardware or software architecture in 20-30 years except much more layering of complexity. If Windows is no longer a sacred cow, then the x86 certainly isn't in its current form either. I always thought it was a shame that Intel and Microsoft effectively divided the world into hardware and software camps that can't be unified or co-designed.
With modern memory hardware design, all array and pointer refs can be checked by hardware in a way that comes for free once the memory wall is addressed. This comes out of an ongoing development of a new Transputer that involves the OS and compiler sides too. If I were only looking at the OS, I would reach conclusions similar to the paper's, but having the processor start over from the ground up means the best solution to protection can be in hardware and software.
If hardware can protect every process space from itself or any of zillions of other process spaces, are you saying you would rather have software do it even if it means adding extra code? I’d rather let the MMU do it but I have a rather special MMU at hand.
transputer guy
> If hardware can protect every process space from itself or any of zillions of other process spaces, are you saying you would rather have software do it even if it means adding extra code? I'd rather let the MMU do it but I have a rather special MMU at hand.
For traditional OSes, it was necessary to do it in hardware, and since *every* memory access had to be checked this was also important for performance.
For environments like Java or .NET, things like array checks work in HW or SW. However, time has shown that such "automatic" access checks aren't worth the chip area they are built on: they are hardly faster than SW checks, they hinder dynamic optimization done by the hardware, and they take up chip area that could just as well be used for cache (what you are proposing is, in fact, a CISC processor).
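As an aside (my own illustration, not from the paper): one reason SW checks stay competitive is that JIT compilers already remove many of them. In a common .NET pattern like the one below, the JIT can prove every index is in range and drop the per-access check entirely:

    // When the loop bound is the array's own Length, the .NET JIT can
    // hoist the range proof and skip the bounds check on each access.
    static long Sum(int[] data)
    {
        long total = 0;
        for (int i = 0; i < data.Length; i++)
            total += data[i];   // no per-iteration check needed
        return total;
    }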
– Morin
You are presupposing certain things, such as a conventional CISC design, page tables, TLBs, etc.; perhaps you are even aware of the so-called semantic gap between HW and SW that was much discussed in the '80s.
In the context of an overly complex OoO x86 or PPC with regular page tables and TLBs, what you say is more or less correct regarding adding special-purpose range- or bounds-checking HW, and most CPU designers are comfortable with software taking the role there.
A long time ago there was a capability machine from Intel called the iAPX 432 that did the kinds of checks you describe, and it was the biggest CISC disaster of its day. Kind of funny: when OS guys design a CPU, you get something like that, with no feel for what hardware can do well and what it can't.
The design I have developed is most definitely not CISC; it is about as RISC as it gets, in the very purest John Cocke style. I did not intend to provide pervasive checking. I set out only to build an MMU that could be practical on an FPGA and that would, above all, allow a small but fast processor collection to have good performance without huge data caches. The result is a design that has lots of threads and no effective memory wall, but a side result was that bounds checking is entirely free, i.e. you get it whether or not you want it. It can't be turned off.
Think of it as something like a page fault handled much more in hardware, at the 32-byte level rather than the 4K or 64K level. If a virtual address misses a line, it forces the processor to allocate more store at the line, stop the process, or call a provided handler, depending on how the store was set up. This sort of thing also makes it a doddle to build much more interesting software memory structures, huge hash tables, queues, etc. that use the hardware to watch when the software hits walls. It is also more or less associative, but I say too much already.
The kind of OS it would run would look very similar to the one proposed in the paper, but I would pare that down some. I'd prefer a different desktop, though, like Tracker.
"This sort of thing also makes it a doddle to build much more interesting software memory structures, huge hash tables, queues, etc. that use the hardware to watch when the software hits walls. It is also more or less associative, but I say too much already."
PATENT!
It's in preparation, at the lawyer-advice stage.
I'd love to beta test it; it sounds interesting. Probably years away, though?
It is really hard to digest sometimes, but MS actually does do something good now and then… They DO make good products once in a while that really shine.
.NET is one example. If it weren't good, Mono developers wouldn't be, well… Mono developers!
P.S. And no, I am not getting an Xbox for punching these words in here.
They are beginning to talk about "designing from scratch with the primary goal of dependability."
Are they finally realizing that they can’t keep backward compatibility forever? If they seriously are, it can only be good, IMO.
> Are they finally realizing that they can't keep backward compatibility forever? If they seriously are, it can only be good, IMO.
I think we cannot agree on the value of backward compatibility. If Windows took over the world (well, almost…), it's thanks to a smart backward compatibility effort, among other things.
I wouldn't be as happy to trash BC as you are, actually.
However, you can effectively provide better upgrade paths even when you're going to limit backward compatibility, and Vista will be a perfect example of that.
The truth is, everyone would love to modernize their platform while maintaining backward compatibility. The bare fact is that few can, and Microsoft is one of those few.
If others were to learn a lesson from MS, it would be the value of backward compatibility, first of all…
How about the following scenario:
1) Forget about BC and create a modern, *great* OS: imagine what could be achieved with the hardware we have and Microsoft's resources.
2) Ask the ISVs to "port" their apps to the new OS.
3) On top of that, create a compatibility layer (like Wine for Linux) so that people can still use their old apps until the transition is complete.
It’s great but…
> 1) Forget about BC and create a modern, *great* OS: imagine what could be achieved with the hardware we have and Microsoft's resources.
Since I'm involved in developing software, I can assure you it's easier said than done. It takes years (say 3-5) to develop a new OS and make it ready. And *then* people have to start developing for it; if it has no BC, that means at least 2-3 more years to have decent applications.
Look at OS X: it took at least 3-4 years to make it a very cool thing (and they didn't even start from scratch, since they had FreeBSD ready!), and it only worked because it remained compatible with older software.
> 2) Ask the ISVs to "port" their apps to the new OS.
Again, easier said than done. Do you know that MS even puts code inside their OS aimed solely at maintaining backward compatibility for software that is important but would otherwise break? Do you know that MS helps (read: "writes code in their place") many companies port their software to newer OS versions? This is *not* easy, and sometimes companies are not interested in porting their software or simply don't know how to. Not to mention what it would *cost* to convert their code…
> 3) On top of that, create a compatibility layer (like Wine for Linux) so that people can still use their old apps until the transition is complete.
It's a good idea… it just doesn't work. Look at WINE: it's far from perfect after 10 years. Also consider Vista, which took 5 years to develop mostly because it changes a lot of internal OS structure and introduces layers to assure BC. Not easy, trust me. Not to mention that if you break compatibility, people will need to start from scratch in building their knowledge of your system… and nothing assures you that they will start with your system again 😉
In the end, I'm not saying that we don't need new concepts: we do need them. However, new concepts (innovations) most of the time come from newer hardware rather than software, especially now that there are millions of coders and many avenues are being *explored*.
I think the key to success is assuring BC (100% BC or close to it…). There are simply too many disadvantages in not doing so.
"It takes years (say 3-5) to develop a new OS and make it ready."
Well, the time frame I had in mind was 5/6 years anyway 🙂
"And *then* people have to start developing for it; if it has no BC, that means at least 2-3 more years to have decent applications."
Well, maybe not that long if they worked closely with MS while the new OS was being developed.
"sometimes companies are not interested in porting their software or simply don't know how to. Not to mention what it would *cost* to convert their code…"
But they would have to… After all, how long is the lifespan of an application?
“Look at WINE: it’s far from perfect after 10 years”
But presumably the Wine people didn’t have the source 🙂
"Not easy, trust me. Not to mention that if you break compatibility, people will need to start from scratch in building their knowledge of your system… and nothing assures you that they will start with your system again 😉"
But even end users have to learn new things all the time. Just one example: look how much more can be done with UMTS phones than with first-generation ones (just talk).
And the interface of a new OS doesn't need to be more difficult: on the contrary, it could be a lot easier…
“But presumably the Wine people didn’t have the source 🙂 ”
Didn’t do the Mozilla people any good.
The venerable mainframe operating systems of the past were designed for dependability. If you want a good place to start — start there, and then derive a modern OS from them.
You are correct in your statement, but off when it comes to applying it to the article. The mainframes are dependable in the hardware sense, whereas this research project is really about software.
thebackwash, from a public computer: I don't know anything about failsafe computing, but it makes me wonder what the relationship to the hardware might be. (I did skim the article for any relevant information, BTW.) Obviously the hardware would have to be designed in a failsafe way, but if the software fails, what then? Perhaps the hardware could detect this and restart safely, or fall back into a basic operation mode where the software it then runs is contained in a ROM chip. I don't know. This is all conjecture; maybe someone with knowledge on the subject could enlighten us.
> The basis seems to be to C# what JNode is to Java. Can anyone comment on this?
> But there are other nice points (I cannot remember if JNode did this too):
> > In Singularity, an application consists of a manifest and a collection of resources. The manifest describes the application in terms of its resources and their dependencies. Although many existing setup descriptions combine declarative and imperative aspects, Singularity manifests contain only declarative statements that describe the desired state of the application after installation or update.
> Voila, no installation hassle anymore.
> This is going to be interesting.
> - Morin
I do not know, but this seems very similar to the way Mac OS X apps work, especially Cocoa apps.
See, Microsoft is better than you. Stop coming into Microsoft threads to bullshit over and over again. You have lost already. Microsoft is 10 billion times more innovative than all the jerky open source groups that pretend to be innovative but in fact aren't, because they just keep talking about shit that they will never really do. By the way, Microsoft will soon own the planet and put a version of Windows CE in your head to control your stupidity. You have been warned.
… is UNIX.
Remember where you are now. People may take you seriously if you forget to add a smiley face to your comment. ;^)
Check this link at channel 9:
http://channel9.msdn.com/ShowPost.aspx?PostID=68302
There is a very interesting interview with the Singularity project's research group.
It is really amazing. I would like to hope that it will replace Windows, appearing in place of Blackcomb, the release after Longhorn.
How could that team at MS be so incredibly clueless as to make a new OS that does not use capability-based security, especially when they say they are focusing on security? As if there couldn't be a confused deputy just because the invoker chain can be inspected.
Maybe Microsoft has some weird corporate culture where thickheadedness is rewarded or something. Sigh…
The verified code sounds like an annoyance as much as anything: will this be a good way to make sure they distribute the only compiler?
Most of the research I've heard of around safe code has basically been ways to tell you "hey, you don't have to think about edge cases and inputs, just run this and make sure you only use your for loops one way, and don't use anything but for loops."
The SIPs sound heavy; they claim they're efficient to create, but I think Microsoft said that about NT processes too (which they were… compared to VMS).
And the lack of shared memory sounds annoying as well. I think the problems it solves are only solved by forcing programmers to think about the stuff they should have thought about anyway!
But hey, if it weren’t controversial it wouldn’t be worth thinking about.
> The verified code sounds like an annoyance as much as anything: will this be a good way to make sure they distribute the only compiler?
If you mean that MS would have the only IL-to-native-code compiler on the OS (assuming the hardware wasn't a native IL CPU), that's most likely. However, anyone should be able to produce <insert language here>-to-MSIL compilers, as many do today with .NET.
Many of the concepts they discuss in terms of code verification are already in use in Windows and the .NET Framework, including the use of application manifests and MSIL.
Interestingly, it doesn't look a thing like VMS. It looks more like what QNX would be if it descended from Plan 9 instead of Unix.
VMS is probably the most proven stable non-embedded OS in the wild, and it was created by the lead Windows engineer. Of course, this is a research OS, which comes from a pretty different branch of MS.
I once designed, though never implemented, an OS whose goal was a secure workstation from the ground up. It's interesting to see what decisions you would never think of unless you had one overarching goal: forcing even ordinary applications to be signed, for instance, comes out of dependability concerns, despite the fact that it would also be a fantastic security measure. But I never thought of it, and they did, as an outgrowth of their core research.
> VMS is probably the most proven stable non-embedded OS in the wild
I think that honor goes to z/OS and its OS/390 / OS/360 heritage…
“what would a software platform look like if it was designed from scratch with the primary goal of dependability?”
That depends: how many backdoors, I mean remote exploit bugs, would be included in the closed source?
“Singularity is working to answer this question by building on advances in programming languages and tools to develop a new system architecture and operating system (named Singularity), with the aim of producing a more robust and dependable software platform.”
Sigh.
And years later Unix will still be there and superior.
Anything to turn a buck these days and seduce more customers.
…as in more tolerant of attacks due to the technology. Not as in robust after thousands of bug-fixes after first release.
Wake me when it actually gets shipped.
Just like this:
http://research.microsoft.com/specsharp/
Still waiting for checked exceptions.
Maybe they'll be put into .NET in VS2007.
LoL.
Microsoft designs for boobs.
Hit enter too soon.
Should have deleted the last line, sorry.
Microsoft Research doesn't create commercial products; they do… hmm, research. They go where the development teams of commercial products can't go. You don't send a 50-person development team six months or more in a direction where you are not absolutely sure you can achieve your goals. With a small research team it is very different: you can go in a direction and see what happens. If it doesn't work out, you can just throw away the work and go in another direction.
See the Channel 9 interview for details. http://channel9.msdn.com/ShowPost.aspx?PostID=68302.
Interesting read.
But IMHO, we will end up with a nice microkernel OS with a very dynamic VM on top of it.
L4/HURD/Parrot/Smalltalk?
I can dream.
It's amusing that Microsoft research papers are written using LaTeX.
Why is it amusing? Some at Microsoft also use Emacs (see the XAML introduction on MSDN TV for instance). Novell and Red Hat also use Windows. Red Hat even recommends Windows on the desktop http://news.zdnet.co.uk/software/linuxunix/0,39020390,39117575,00.h…
… is on http://research.microsoft.com/os/singularity/ with links to a few other publications. No downloads, AFAIK.