Ah, MinWin. The elusive project in the Windows team that has been misunderstood more times than I can count. Once again, Mark Russinovich, more or less the Linus of the Windows world (I win stupidest comparison of the year award), has explained what MinWin is all about, while also touching upon a number of other changes to the core of Windows. Before we start: thanks to BetaNews for once again detailing these technical talks regarding the core of Windows so well.
We’ve talked about MinWin, and the core of Windows in general, quite a few times already here on OSNews. There’s a reason for that: I’m an obsessive compulsive cleaner and organiser. I not only clean a lot and keep everything tidy – I actually like doing it. Consequently, I like MinWin and what is currently going on in the core of Windows.
This should give you a hint as to what MinWin is, and as to what the Windows team is currently trying to achieve: exactly, it’s all about cleaning, structuring, and reordering. Over time, the dependencies inside the core of Windows have become a tangled, incomprehensible mess that not even Microsoft itself – not even Russinovich – really understands. To make matters worse, countless spaghetti strands extend outwards from the core of Windows to the layers higher up in Windows. This is bad.
“If you look back at the evolution of Windows, it’s evolved very organically, where components are added to the system and features are added to the system without, in the past, any real focus on architecture or layering,” Russinovich explained, “And that’s led us to do some hacks with Windows, when we want to make small footprint versions of Windows like Server Core, or Embedded Windows, or Windows PE.”
They’re taking a different approach now, one geared towards the future. “What we do [instead] is take full Windows, and start pulling pieces off of it,” Russinovich said, “The problem with that is, the pieces that are left sometimes have dependencies out to the pieces that we’ve removed. And we don’t really understand those dependencies.”
The first breakthrough in the cleaning of Windows was Server Core, the minimal server installation option for Windows Server 2008. However, even services in this “minimal” installation called the graphical layer of Windows repeatedly, even though they don’t need it. Logically, the next step is to produce a minimal version of Windows that doesn’t make any of these calls upwards.
“We want to get more rigorous about this, because every time we evolve Windows, we end up breaking those versions that we’ve sliced-and-diced,” Russinovich said, “We’d like to have a Server Core that we understand, that totally depends on itself and not things outside of itself, so that we can evolve things outside it while we evolve Server Core, and not be worried about breaking Server Core, or having to redefine it with every release.”
MinWin is the first result. In Windows 7, MinWin consists of 161 files, with a disk footprint of about 28MB. It contains the kernel, basic system services, and the IP stack – it doesn’t even have a command line. You need to prod it using external processes.
Part of the ongoing MinWin project is transforming the APIs in Windows – collectively called Win32 – from vertical layering into horizontal layering. “The principal division of labor in Win32 has historically been vertical, not horizontal, dividing core system kernel functions from ‘user’ input and interactive functions, from graphics and display functions,” Scott M. Fulton explains at BetaNews, “Even though Windows architecture has evolved to the point where the whole graphics part is essentially deprecated for modern apps, GDI32.DLL is presumed to be present.”
And this needed to change. To get there, MinWin more or less fools API calls into thinking the traditional, vertical layering still exists. In MinWin, KERNELBASE.DLL handles essential system services. Calls made to APIs outside of this realm are “forwarded” to libraries that do not reside within this core. And it is here that MinWin makes a rather radical shift away from what Windows used to do.
Russinovich explained that in the days of yore, APIs in Windows were thrown together not based on any sensible logic, but to reduce the length of the boot path. “Bigger API collections meant fewer references to their filenames,” Fulton explains. As mentioned, this is where MinWin makes a radical break with the past.
“We want to get away from that [random grouping], and really make the definition of the logical DLLs, these files on disk, separate from the API sets that they implement, so that we can compose them dynamically,” Russinovich explains, “In other words, we want people to call virtual DLLs that implement APIs, and then what happens on the system is that those virtual DLLs are mapped to logical DLLs that actually implement this functionality. So it doesn’t matter from a programmer’s perspective if a virtual DLL’s implementation is in this logical DLL or that one, it’s up to us behind the scenes to figure out how to best combine virtual DLL implementations into logical DLLs.”
Clever, but there are downsides, such as performance costs. There also needs to be a map on disk that links virtual DLLs to the logical ones, and the virtual DLLs do need to exist on disk, even though they are more or less “dummies”.
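To make the mechanism concrete, here is a minimal sketch (C, Win32) of what it looks like from a programmer’s perspective. The API-set name below follows the Windows 7 naming scheme but is offered purely as an illustration – which functions live in which set, and which logical DLL backs it, are internal details Microsoft is free to shuffle:

    /* Minimal sketch: asking the loader for a "virtual" DLL. Behind the
       scenes it consults the on-disk API-set map and hands back the
       logical DLL that actually implements the set. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HMODULE mod = LoadLibraryW(L"api-ms-win-core-processthreads-l1-1-0.dll");
        if (mod == NULL) {
            printf("API set not available on this system\n");
            return 1;
        }

        /* The caller neither knows nor cares where the function really
           lives; the loader resolved that for us. */
        FARPROC fn = GetProcAddress(mod, "GetCurrentProcessId");
        if (fn != NULL) {
            typedef DWORD (WINAPI *GetPidFn)(void);
            printf("pid = %lu\n", ((GetPidFn)fn)());
        }

        /* Asking the handle for its file name reveals the logical DLL
           the virtual name was mapped to (e.g. kernel32/kernelbase). */
        wchar_t path[MAX_PATH];
        if (GetModuleFileNameW(mod, path, MAX_PATH))
            wprintf(L"backed by: %s\n", path);

        FreeLibrary(mod);
        return 0;
    }

Printing the module’s file name is the giveaway: the handle you get back belongs to the logical DLL, not the virtual name you asked for.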
“But the benefits outweigh the costs, including expediting API requests through virtual, dynamic placement,” Fulton explains, “And now Microsoft’s own developers, mindful of what Russinovich calls the three-year ‘cadence’ between major product release cycles, are freer to innovate different form factors and implementations of Windows for new classes of hardware and new configurations.”
The future benefits are clear. For instance, as recently as Windows Vista, the command prompt called a higher-level process related to graphical functions, even though it didn’t need any of that stuff. MinWin’s new architecture instead provides each process with access to CONHOST, a command subsystem closer to the core.
Heck, you could take all this a step further: MinWin managing basic hardware access and system services, with a virtual layer on top – also built on MinWin – that provides the user environment.
As I’ve said before, I have a lot of respect and admiration for the people working on the core of Windows. They are evolving the world’s most popular desktop operating system, and despite the rigid constraints of don’t-break-stuff, they still manage to not just untangle the mess, but also to improve performance and add in new features. Say what you want about Windows, but the guys in the trenches are doing some impressive work.
I think it’s really easy to get a bit superior, especially for those of us who work daily on a robust OS that doesn’t fall prey very often to viruses and spyware. Thanks for injecting some much-needed perspective to remind us to be more humble – after all, the ‘guys in the trenches’ are software engineers who love software engineering, just like those who work on OS X, BSD, Linux, etc., and it’s a bit short-sighted to think that they intentionally write bad code.
Absolutely, it’s an architecture problem. Pretty much only a few people can take credit for the architecture of Linux/BSD/OSX. It does sound as if MS really has its act together now.
There are still other reasons why I don’t prefer it, but it’s good to know that, for those who do, it will still be there for them.
I believe this is what is known as a back-handed compliment. This comment says or implies that:
1. OSX, BSD, Linux, et al. are better than Windows.
2. Windows dev guys write bad code, but they love engineering, and they try hard.
3. Windows is more susceptible to viruses and spyware.
Hardly humble pie.
What’s wrong with a back-handed compliment? It’s pretty clear it was intended to be insulting, so what’s the problem?
That said:
1. OSX, BSD, Linux et al are better than Windows in any number of measurable ways.
2. Windows developers are known to have released some poorly written software.
3. Windows is demonstrably more susceptible to viruses and spyware.
So what was your point?
Measure them.
All developers are known to have released some poorly written software. Irrelevant.
Demonstrate it.
My point is: Prove yours.
Well, I understand what you’re saying, and perhaps you’re right, but I take issue with it being called a back-handed compliment. I really was being sincere: I do think the alternatives are marginally better — or at least, I’ve found them to be so for my daily work. I left Windows because I was sick of it, and the alternatives just worked better. I was making a value judgment, but it was based on pragmatic considerations.
I regretted the ‘intentionally write bad code’ bit – that’s not what I meant to say. I think the Windows developers write a huge pile of good code, and their backwards-compatibility efforts are especially noble and sacrificial. But someone, somewhere, either was sloppy or was forced to write bad code (for whatever reason: bureaucratic, architectural, or otherwise). Otherwise, their engineers wouldn’t be attempting to fix it.
We all write bad code — I’ve written a pile of it. As developers, true honesty recognises that we’re all in a state of learning, and we’ll never reach some sort of absolute pinnacle. This is all I was saying — that it’s easy to think we’re perfect and they just don’t give a damn, and I found it refreshing to be reminded that this just isn’t the case.
Dave Cutler is the Linus of the Windows world:
http://en.wikipedia.org/wiki/Dave_Cutler
Dave is now working on Azure.
Mark Russinovich, although influential, does not really have the same degree of control over the kernel that Linus and Cutler have/had.
Russinovich is certainly a significant and influential contributor but there is no ‘Linus’ at Microsoft anymore and probably never will be.
I’d question even that. He is the face of Windows, and frequently the spokesperson for Windows, but he doesn’t make changes to Windows. Changes, and their justification, are left to the appropriate teams. Also, note that the MinWin project was well underway before Mark came to Microsoft.
.. MS’ major multi-year effort to advance the Windows kernel and to clean up the mess is to make it more like Linux.
Cool!
Definitely the right decision. See you if you get there.
More like Linux? Care to elaborate?
Sure,
the way I read it is that they want a minimal kernel with packages or modules added on, with minimal dependencies.
Check.
They want to decouple the kernel from the GUI.
Check.
They want to contain parts of Windows in smaller subsets or libraries or collections or whatever, that can evolve on their own without too much interaction with other parts of the system.
Check.
They want to know how all the pieces work and what the dependencies are.
Check.
Some marketing people might come up with slogans like “Linux: the future of Windows is now”… who knows.
Eh, they don’t really want to be that much like Linux. Their idea for the kernel is to cut it down to an even smaller surface area than the Linux kernel – so that not just the display system, but the graphics drivers themselves are all implemented outside the kernel. It makes a lot of sense when you consider that they want the kernel to be as rock-solid and unchanging as possible. It’s not quite a microkernel, but it’s moving in that direction.
Well, Xorg (and Mesa) provide the graphics drivers for Linux… so they are outside the kernel too.
And realistically, the thing is still 28MB, hardly micro. Linux has fit in 2MB (and even smaller) embedded devices for a decade now. I know size is not what a microkernel is about, but it is a good indication of how far along they really are.
No, it isn’t. The 28MB is not just the kernel – it also contains several parts that are not part of the actual kernel.
And comparing the MinWin effort to an embedded Linux kernel is rather troublesome, as such an embedded Linux kernel has all drivers removed. You could achieve the same thing on Windows if you wanted to.
OK, but the 2MB on Linux often also includes a shell, a telnet/ssh server, a webserver, a dhcp server, etc.
I doubt MinWin is even near.
Edit: And they had it at 25MB in 2007 (http://arstechnica.com/microsoft/news/2007/10/core-of-windows-7-tak…)
So the progress is not really there… more like MinWin got fatter.
Wouldn’t it then be more accurate to say it’s making Windows more like POSIX, which Unix-like OSes, including Linux, take their architectural inspiration from? Linux-based distros can be great, but we should remember that the OS architecture was designed long before the GNU tools and the Linux kernel turned up.
No because it’s apples and oranges. POSIX is an API specification. MinWin is not in any way attempting to replicate a POSIX API (although Windows had a POSIX subsystem at one time). I think the comparison is more appropriate when talking about Linux specifically because Linux is a well layered system suitable for stripping down in embeddable environments or for other purposes. This is essentially the purpose of MinWin, to strip it down to the basics and layer everything on top appropriately so different configurations of Windows are more easily attainable.
I thought the modular layering design was an attribute of Unix-like OSes rather than a specific grouping within similar platforms. As a result, Linux would be inspired by the existing Unix-like systems (Minix, in the specific case of the kernel, of course).
Good to be corrected on POSIX. I thought it was more of a standards spec than an API.
Eh well… the client-server model for graphics, for one.
And also, MinWin is supposed to be closer to Linux in terms of embeddability (I assume) and appropriateness on a server, i.e. lacking a higher-level graphics interface.
That said, I personally don’t think MinWin or any version of Windows will be anything like Linux. Linux is just too diverse for that.
The Windows kernel has been well designed from the beginning. More than I can say for the Linux kernel until more recently.
Indeed. There are plenty of things to criticize in Windows, but the NT kernel isn’t one of them. Too bad the userland they put around that kernel has been total crap until recently, and even then Windows at all levels feels convoluted and clunky.
Very true and I find the same problems with Linux distros. Apparently it’s a major reason for many FOSS developers moving to MacOSX.
Personally, I’m a Linux addict with a dual boot Win7 partition for my games. Both systems have user land issues that annoy the hell out of me and frankly MacOSX is no different.
There has only ever been one system where I found the user land to be almost perfect and that was the BeOS. My dreams of a stable Haiku netbook are the only thing keeping me from giving up seeing that day again. 😉
Heh, guilty as charged though admittedly I’m not a full-time foss developer. OS X has annoyances of its own, but at the moment it’s the only way I can get the full power and stability of UNIX and not be burdened down with the hellish rat’s nest that is Xorg/insert-de-of-choice-here. Oh, and audio works perfectly on OS X too. And so do drivers even if they’re for an older version of OS X.
These days, it seems like we simply have to choose which one annoys us the least, rather than which one doesn’t annoy us at all. Honestly, the *NIX CLI is the only system that doesn’t annoy me one bit, but these days it’s not as though most web sites will work in text mode anymore. There aren’t CLI equivalents for a lot of things; gone are the days when the graphical apps on X were simply frontends to CLI apps or libraries. I think those were the only days when the OS didn’t annoy me somewhat: it did exactly what I told it to do and didn’t try to make assumptions about what I wanted done. Didn’t give me these repetitive and annoying confirmation dialogs either. Ah, those were the days.
Agreed.
And I’m still not convinced GUIs are any more user friendly to n00bs.
Sure, they’re less intimidating at first, but then you get rows of buttons hidden behind more rows of buttons (as with MS Office’s ribbon) and dialogs behind dialogs – and I think most non-techies long for a simple “save” option instead of a floppy disk icon that doesn’t actually mean much anymore.
I’ve always wondered what would happen if they pulled out Win32, replaced it with a UNIX 2003 userland, then brought in the good technology like .NET, DirectX, and other technologies to develop a great desktop and server operating system. It isn’t as though Microsoft is lacking in knowledge and technology – they just lack the willpower from management to lead from the front, lay down a coherent path, and push the engineers in that direction.
Though a stripped-down .NET OS would be interesting, there is the question of who they could sell it to.
On the desktop you would lose Win32 driver and program compatibility; on the server you can already run the Core edition with the .NET framework. I just don’t see a good cost/benefit ratio.
Security issues in Windows are related to the user, not user land. Sure there are improvements that can be made in the area of program isolation but that is true for Linux and OSX as well. Those improvements could also be added without removing Win32.
The real security problem is that the user can download and install software from untrusted sources and visit spoofed websites. There’s also the problem of users having auto-update off, usually because they have a pirated install.
As for Windows Server, I think a lot of the arguments against it come from the late ’90s, when you really were better off running Linux for stability and security. WS 2008 is plenty secure and stable. RHEL actually had more Secunia advisories this year. If you are really paranoid about security then you should be using OpenBSD, not Linux.
Who said anything about a .NET operating system? Just replace the userland with a UNIX 2003 implementation, then develop a brand-new widget kit on top of Direct2D/DirectWrite and the new, modern technology remaining after the old crap has been removed.
No driver compatibility would be lost, because drivers are separate from Win32. Win32 is only the userland API; the driver API is something completely different – the only things that might need rewriting are those awful custom applications hardware companies bundle with their drivers.
As for applications, one word – virtualisation.
It doesn’t help, however, when there are so many compromises and the bundled applications rely on a mishmash of old and new APIs. Instead, move all the bundled applications included with Windows to the new APIs, keep the old stuff for compatibility, and gradually remove it so as to shrink the area of vulnerability. The less area to target, the more secure the operating system.
True, but even then the new antivirus from Microsoft, along with its malware and other security software, is pretty damn good if you ask me. Mum and dad have it installed on their respective computers and they’re pretty happy. I think Microsoft is finally realising what they need to do – but they’re still pretty wishy-washy when it comes to making the big decisions needed.
Not until Windows NT was there a part-way reasonable design for the Windows kernel. Before that, Windows was DOS-based, co-operative multitasking, single user, without any network stack or security layer. Enough said.
Even today, the Windows kernel will simply execute anything it believes it has been directed to, without any attempt to ascertain the authenticity of the direction.
This part is true…
And this is not. Windows has permissions on files, including execute permissions, which can be disabled, preventing execution of the file. I just tested it myself and sure enough, Windows would not let me execute an otherwise perfectly valid executable. That Explorer and the rest of userland generally seem to apply permissions willy-nilly is another problem, but it certainly is not one with the kernel.
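For the curious, here is a minimal sketch (C, Win32, link against advapi32) of that experiment. The path is a placeholder and error checking is trimmed; it merges a deny-Execute entry into the file’s ACL, after which the kernel refuses to run the file:

    /* Sketch: strip Execute from a file's DACL, then try to run it.
       C:\temp\test.exe is a placeholder; use a throwaway copy you own. */
    #include <windows.h>
    #include <aclapi.h>
    #include <stdio.h>

    int main(void)
    {
        wchar_t path[] = L"C:\\temp\\test.exe";   /* placeholder */

        /* Build a deny-Execute entry for Everyone and merge it into
           the file's existing DACL. */
        EXPLICIT_ACCESSW ea = {0};
        ea.grfAccessPermissions = FILE_EXECUTE;
        ea.grfAccessMode        = DENY_ACCESS;
        ea.grfInheritance       = NO_INHERITANCE;
        ea.Trustee.TrusteeForm  = TRUSTEE_IS_NAME;
        ea.Trustee.TrusteeType  = TRUSTEE_IS_WELL_KNOWN_GROUP;
        ea.Trustee.ptstrName    = (LPWSTR)L"Everyone";

        PACL oldDacl = NULL, newDacl = NULL;
        PSECURITY_DESCRIPTOR sd = NULL;
        GetNamedSecurityInfoW(path, SE_FILE_OBJECT,
                              DACL_SECURITY_INFORMATION,
                              NULL, NULL, &oldDacl, NULL, &sd);
        SetEntriesInAclW(1, &ea, oldDacl, &newDacl);
        SetNamedSecurityInfoW(path, SE_FILE_OBJECT,
                              DACL_SECURITY_INFORMATION,
                              NULL, NULL, newDacl, NULL);

        /* The kernel now refuses execution: expect ERROR_ACCESS_DENIED. */
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (!CreateProcessW(path, NULL, NULL, NULL, FALSE, 0,
                            NULL, NULL, &si, &pi))
            printf("CreateProcess failed, error %lu\n", GetLastError());

        LocalFree(newDacl);
        LocalFree(sd);
        return 0;
    }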
Look, do yourself and us a favor and stop right there. You simply don’t have any idea what you’re talking about. The Windows kernel has a well-developed security model that is, in a lot of ways, much more flexible than the Linux kernel.
You’re inaccurate there on all counts:
Windows NT predates Windows 95 and has been around nearly as long as Windows 3.1 (only a couple of years in it)
So, contrary to the tone of your comment, Windows NT didn’t just magically come along and wipe away a decade of poor kernel design.
It just took several years for MS to finally scrap their ridiculous DOS-booting Windows line – however, the option of NT has always been available (and was taken up in droves with Win2000, despite WinMe being pushed for the desktop).
Actually, there are several checks that happen between request and execution.
Windows’ downfall has been its default security profile (all users with admin accounts, etc.).
Like yourself, I’m a full time desktop Linux user, but let’s not distort the facts to make a point
“Not until Windows NT was there a part-way reasonable design for the Windows kernel. Before that, Windows was DOS-based, co-operative multitasking, single user, without any network stack or security layer. Enough said.”
“Windows NT predates Windows 95 and has been around nearly as long as Windows 3.1 (only a couple of years in it).”
It is still not inaccurate to say that “not until Windows NT” was there a reasonable design for the Windows kernel. By your own admission Windows NT was predated by Windows 3.1, which I think you will agree does not have a particularly good kernel.
Nor did the OP suggest that Windows NT sprang forth magically from the head of Zeus. He simply said that until it arrived on the market Windows’ kernel was bad, which is true.
Given that Windows and Windows NT were entirely different product ranges that had run concurrently for over a decade before Windows XP (i.e. NT for the masses) was released, and the earlier point that the early Windows (not NT) product line was pretty much front ends (i.e. NOT OSs), I still stand by my correction.
Plus, back then in the late ’80s / early ’90s (which is when Windows predated NT), MS had less of a stranglehold. In fact, many other desktop platforms were receiving just as much attention:
–> Norman Cook was releasing popular dance hits he’d produced on his Atari ST,
–> Babylon 5’s effects were being rendered on an Amiga
–> and NeXTcube users were browsing the WorldWideWeb
and those that stuck with “Windows” probably spent most of their time in DOS anyway.
It’s also worth noting that Windows was only originally intended to extend DOS (the original windows shell was even called ‘MS-DOS Executive’!) – however MS being MS, rather than break compatibility, they kept polishing the same old turd instead of pushing forward on a clean slate.
And I stand by my correction of your correction. Prior to the release of an NT-based Windows there was a Windows on the market with a poor kernel. So, it is certainly fair to say that until NT came along, if you went out and bought a product called Windows, it had a poor kernel. After NT was available, if you went out and bought a product called Windows, it either had a poor kernel or not, depending on which one you got. None of this invalidates the original statement: before NT, Windows had a poor kernel. Windows NT was the first Windows to not have a poor kernel.
You can bicker about whether Windows should be compared with Windows, if you like, but that was Microsoft’s marketing decision.
The rest of your comment is not relevant to my point, given that no one was commenting on whether or not Windows-before-NT had a stranglehold on the market. Nor is a history lesson on the intent of Windows-before-NT relevant since the intent of Microsoft was not under discussion. Nor does it matter whether Windows-before-NT was or was not an OS: whatever it really was, it was marketed as and understood as an OS and, importantly, it was called Windows and it did include a kernel that was not well designed.
Windows, before Windows NT, had a bad design and was a bad OS. (Or, if you prefer, was a bad DOS front end.)
The problem here is that you’re not taking the context of the original post into account, whereas I am.
The point is that the original comment was somewhat overstating the NT kernel argument to make Windows look bad.
Even you agreed that the Windows/NT product lines have run concurrently for most of their lives – so what he said, and the way he said it, was more a theatrical way of slamming MS than a factual insight into the core of Windows.
And for the record, if you want to get pedantic, then as Windows 3.x was just a front end (a point you didn’t dispute), I’d argue that its kernel isn’t so much a kernel as a wrapper for DOS. So one could then argue that NT was the first Windows OS with a kernel :p
I read the original post in its context and this is why I am baffled at your pedantic, nitpicking dissection of what was an entirely non-controversial and factual statement.
I do not “not dispute” that Windows 9x/3/2/1 were front ends for DOS, it is a well known fact that goes without saying. Did it still have a kernel? *Yes* is the answer (and if you wish to dispute *that* fact I can only assume you are trolling.) I do not “agree” that Windows-not-NT ran concurrently with Windows NT, it is also a well known fact that goes without saying. Why do you feel the need to bring up irrelevant historical facts as if they supported your assertion that the original poster was wrong?
The original poster did not appear to be intending to make Windows look bad. He replied to someone stating that “The Windows kernel was well designed from the beginning.” His correction to this was that *NT* was well designed from the beginning, not Windows as it was released. As it was released Windows was not well designed… until Windows NT was released.
This is neither complimentary nor derogatory! This is a simple factual clarification. The post to which the original poster, whose words we have been discussing, was replying was making an ambiguous statement in an attempt to make Windows look *better* than it is. The ambiguity here, as I noted earlier, is *Microsoft’s* fault, if you want to point fingers.
The original quote was contrasting Linux’s kernel design with Windows’, with the implication that Windows got it right and Linux still hasn’t. It is important, therefore, to note that Windows NT got it right and Windows-not-NT, which was released much earlier, did not get it right. This is a useful refutation of an overly complimentary statement. This is not an attempt to make Windows look bad! Frankly, Windows-not-NT has such a poor reputation because it *was* bad.
I would like, in the most uncivil way possible, to call you out on your hostile attitude. Why do you insist on attributing malice where there is none apparent? You chastised the original poster for implying that Windows NT appeared magically, which he in no way implied. You stated that the “tone” of his comment indicated this. Did you have *any* basis for such a wild and insulting assertion?
You stated that “the option of NT has always been available” – a statement which is *obviously* untrue, since Windows 1.0 *at least* was released before NT. Perhaps *you* should take more care in what you say, lest some asshole like me come along and take issue with it.
Let me ask you a question, master of history. In what year was Microsoft’s first OS released? Go ahead, you can look at Wikipedia for this one. Got it? Add 10 years, aka one (1) decade. Would you say that the date is now *before* the first release of an NT OS? If you do not fail at math then the answer is *yes*. So, quoth you:
In fact, it did! Now I hasten to add that it took another decade before Windows NT more or less entirely replaced Windows-not-NT (despite your Win2k assertion, many people did not switch until XP). So, for certain values of “wipe” you remain partially correct.
Half a point for you.
I guess you didn’t read the article because the whole point was that Windows wasn’t well-designed from the beginning. It was put together in a piecemeal way. MinWin is an attempt to refactor and properly layer Windows, making it easier to remove unneeded features and services.
Linux on the other hand incorporates features in a piecemeal way but refactoring is an ongoing process. Take the wireless stack for example. Working wireless drivers were first created and then more added in a piecemeal fashion, most of which worked similarly to the original wireless drivers. Then a better, more universal solution was adopted and drivers were ported to the new, better architecture. That’s how a lot of things have worked with Linux and FOSS in general. First get something working, then improve it.
First of all, we are talking about kernels here, not the whole OS. Secondly, Windows *has* continued to undergo development. New technologies have replaced old ones. Behold how the driver model has changed completely in Vista and 7.
I don’t know, however, that the constant API/ABI churn in the Linux kernel has produced much value, aside from a few victories like the WiFi stack revamping that you mentioned.
I didn’t mention userland so I’m not sure what you’re talking about. I’m also not just talking about new technologies replacing old ones. I’m talking about refactoring and layering code. Linux has been very good at separating out and layering code in a generic fashion when it is appropriate. These types of changes are ongoing in the Linux kernel. Windows builds up a lot of cruft before anything major happens in the kernel and the need for MinWin itself is an indication that that was not done as needed with NT throughout the years.
That’s just because you probably haven’t been paying attention to kernel development for years. USB was ripped out and replaced a couple of times but now the USB stack is the best performing USB stack for any system. The graphics drivers have been vastly improved only recently, getting kernel mode setting and improved memory management. Yes there are downsides to ABI inconsistencies but to pretend that there are no upsides is ignorant. The upside is rapid development and improvement which is how Linux got to where it is in the first place.
Whoopdy doo. So we’re throwing away stable hardware compatibility for a slightly better USB stack? So you can plug in a webcam and have it not work 10x faster because the hardware company didn’t want to release an open source driver?
It’s not as if USB transfer rates are limited by the stack. It isn’t as if the USB stack in Windows or OSX clearly sucks like video card and wireless support in Linux.
The Linux team should be men and admit they were wrong in their decision to have an unstable ABI. The vast majority of users would prefer the benefits that come with a stable ABI over the costs. At the very least they should provide a stable ABI for video cards. It’s not as if people are trying to plug an 8800GTS into an ARM netbook, so concerns over x86 lock-in are rather silly. Server admins can always use a VESA driver if they are paranoid about binary blobs.
You only think that because you probably haven’t been paying attention to Windows kernel development.
Every major Windows release had significant kernel changes. Win7 has major changes in the scheduler and memory manager (dispatcher and PFN lock refactoring). Vista added ASLR, dynamic kernel address space, and the kernel transaction manager. WS03 added x64 support, user large pages, etc.
There is a huge difference between Windows releasing an updated kernel years apart in conjunction with a new Windows release and a new Linux kernel being released in a matter of months. In fact that was my point. I’m not sure how you missed it.
Windows != Windows NT kernel. The Windows NT kernel IS well-designed. It’s the stuff on top that sucked. That’s what the MinWin effort is busy remedying: making sure that stuff close to the core of Windows does not have any outward calls/dependencies.
I was referring to the NT kernel when I said Windows. I figured that much was obvious and it doesn’t change anything I said. Claiming the NT kernel is well-designed is completely subjective. It was well-engineered at the time it was created but today it’s looking a little crufty around the edges. MinWin is not just an effort to affect the userland services but to also reduce the kernel to a minimal system that will make more calls outside of the kernel. This is an attempt to properly layer the system where it wasn’t before, including in the kernel.
“It was well-engineered at the time it was created but today it’s looking a little crufty around the edges.”
Are you going to provide specifics, or do you just subscribe to the “Linux is a better kernel because it is *nix” line of thinking?
If any kernel needs an improvement, it is the Linux kernel, in the area of the unstable ABI. Greg KH’s arguments against a stable ABI are looking pretty crusty given that Linux has not blown away other Unixes that have a stable ABI, like Solaris and OSX.
I never said Linux was a better kernel. You’re using a strawman argument. I just said it is ridiculous to claim that any “source” that says NT is inherently superior isn’t biased garbage.
How can an unstable ABI be crufty? It’s anything but. There are arguments to be made about whether having a constantly improving system with unstable interfaces is better than old and out-of-date but stable interfaces. That being said it’s also a bit naive to think that Solaris or OSX come even remotely close to the market penetration that Linux has.
This is not true. MinWin, at least at this point, is purely a user-mode refactoring project.
No, it isn’t. MinWin is actually not a static concept. MinWin started before Vista was even released, and in fact what was considered MinWin at the time became the basis for Vista. The confusion lies in the fact that MinWin as a project has been ever-evolving, with different goals and different definitions. So while it is very much about refactoring userland APIs, it’s also about refactoring and cleaning up kernel APIs. This isn’t clear in this particular article, but it is in other statements released by MS.
Oh give me a break. Nothing is more piecemeal/hackneyed than Linux. Even Linus calls Linux development “software evolution” and not software engineering. NT is designed around modern needs, while Linux is a clone of a Unix kernel which was designed around the needs of mainframes in the ’70s. If you want to see a Unix that is well designed for modern needs, then try OSX.
Part of the problem with Linux-based operating systems is that too many of the components are designed independently of each other which makes optimization difficult. Modular design has advantages but not when you have isolated teams working on subsystems without central planners to determine how these modules should work together.
It also allows for tribal wars over standards that should be set by the OS team. Of course, unlike OSX and FreeBSD, there is no central OS team with Linux. The result is bickering over silly things like where system files should go or which sound system should be used.
You’re mixing up kernel and OS so much that I’m not sure a single point you made makes any sense. First of all, the piecemeal way in which Windows was developed is evident just by reading the article, if nothing else, where it is explained that this is why MinWin came about in the first place. Calling NT “modern” is rich for something that is 30 years old; and if you’re talking about the current implementation of the NT kernel, then your point against Linux is moot, because it is very different from the original UNIX. As for Linux vs OSX, that’s an entirely different discussion. The OSX kernel is just a bastardized version of FreeBSD and Mach. It’s just a “clone of the Unix kernel”, as you succinctly put it. The real differences are in the userland.
The big difference is the speed at which Linux is developed. A lot of stuff in the FOSS world starts out like this but eventually things are sorted out. IPC is a good example. Everyone and their mom had their own implementation but now DBUS has become the standard. The wireless stack in the kernel that I used in my last post is also a good example. What is funny is that despite the “central planning” of Windows there was still a lot of stuff hacked on at a later date without fixing up the interfaces. In the Windows world those things needed to stay the same because API stability is considered the utmost importance. There is a tradeoff. One is not necessarily better. It depends on what you value more.
The result is also a system that is much more widely used than FreeBSD or OSX. It’s apparent that you don’t like Linux but to consider it a failure compared to OSX or FreeBSD is asinine.
Personally, I’m looking forward to the part where you can’t install Windows drivers without first recompiling them.
You already can’t install new Windows drivers without them first being recompiled.
It is just that with Windows, you are not permitted to do recompiling yourself.
Not really…while MS did add some crazy new things in Vista like WDDM (for display drivers) and WaveRT (for audio drivers), for most hardware, the API for drivers (WDM, or “Windows Driver Model”), has remained reasonably static — in fact, many unsupported devices continued to work on Windows 98 long after people stopped caring about it, simply because WDM was sufficiently unchanged between the releases. In the case of Linux, a minor kernel point release would traditionally break existing drivers. (IIRC, Dell has recently put some work into making this no longer the case.)
lol. Even the Windows NT kernel was (by far) superior to any Linux kernel. Check your sources better.
If Windows had any problem, it was not related to the kernel, nor could it be.
MinWin is just some cleaning-up of a 30-year-old code base. And MS itself has already proved willing to be more radical about OS internal architecture than any OSS developer could be (and I don’t blame them for that: most of them are underpaid – when paid at all – guys doing a favor to big sharks… but that’s another story…).
What sources? That is a completely subjective statement as far as I’m concerned. Both Kernels are advanced, featureful, and well established. Any “source” that claims NT is inherently superior in this day and age is biased garbage.
Is that so?
http://www.microsoft.com/technet/security/Bulletin/MS09-065.mspx
I beg to differ.
http://pdos.csail.mit.edu/exo/
http://www.coyotos.org/
Yeah, right, let’s see the superior parts of the Windows kernel:
Windows scheduler… umm…
Windows IO scheduler… ummm…
MAC… ummm…
File systems… ummm…
General performance (throughput, latency)… ummm…
OK, one I can think of: the basic APIs (WaitFor…, etc.) have always been quite nice (kernel objects).
Though nowadays Linux also has eventfd, timerfd, etc., so it’s not that bad on the penguin side anymore.
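To illustrate the convergence, a minimal sketch (Linux C): an eventfd behaves like a waitable, signalable kernel object, loosely analogous to a Win32 event handle you would pass to WaitForSingleObject:

    /* Sketch: eventfd as a waitable kernel object. write() "signals"
       the object; read() "waits", blocking while the counter is zero. */
    #include <sys/eventfd.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int efd = eventfd(0, 0);          /* counter starts at zero */
        if (efd < 0)
            return 1;

        uint64_t one = 1, val = 0;
        write(efd, &one, sizeof one);     /* signal */
        read(efd, &val, sizeof val);      /* wait: returns at once here,
                                             since we just signalled */
        printf("woke with count %llu\n", (unsigned long long)val);

        close(efd);
        return 0;
    }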
The Windows IO scheduler has, in my experience, done a much better job than Linux.
Maybe Linux has finally got the IO scheduling fixed in 2.6.32. Maybe.
By IO scheduling I am not talking only about putting IO blocks into the CFQ scheduler queue, but the entire VM system and how it handles flushing dirty RAM and swapping.
Before my Linux laptop hardware died it was running 2.6.29 in 1 GB RAM. When I’d load a new program into RAM it would start simultaneously trying to read the new program, flush dirty RAM and swap out older RAM pages. On a laptop disk. This was not a success.
Windows never did that to me.
I’ll readily admit I can’t understand half the stuff being tossed around here. All I’ll say is the better Windows becomes, the happier I’ll be. Granted, I’ve been quite happy with it, but that’s not to say I couldn’t be happier!
In all honesty, so long as all the OSes out there improve, I think it’s a winning situation for everyone. Might be naive or overly optimistic, but hey.
Dear Mark,
If your team can ship the kernel to the market with add-ons as plugins to the OS, and with nightly builds, then all the geeks and freeloaders and those who bash Windows will think you have a cooler product.
Trust me, I quit using Linux back at kernel 2.2. It annoyed me having to install all those binaries and deal with those version mismatches. I love Windows because it just works well 98 to 99% of the time. I do not prefer Linux unless it’s a grid, a cloud, or a bit of HPC.
I don’t know how many people advocating Linux would have advocated it if it was not free. They should try the Red Cap <deliberately misspelt> Linux. Then they will know what fees they pay and what support they get. If you open a Windows box you can almost always understand things because it’s made simple; but with Linux, well, I am not launching a rocket, running an ATM, or doing protein folding using FP32. Hehhehe.
Oh, and with Windows Compute Cluster/HPC, well, I don’t have to worry about what things to do, and can focus on what I want to do.
You know, maybe us “freeloaders” actually (gasp!) like Linux.
Why do Windows users always assume that Linux users are all cheapskates just because we didn’t spend $400 on our OS?
True, and some of us even pay for our Linux when we don’t “have” to. I have spent many hundreds of dollars on Linux and BSD. I have purchased Mandrake/Mandriva, Ubuntu (at Best Buy), Red Hat, OpenBSD, and others. I also donate to the organization responsible for the OS. In addition, I have Windows 7 boxes at home, all paid for and legal.
Most of the time I use a Free-as-in-Freedom operating system because I genuinely prefer it. I like the flexibility and power of Linux and BSD. I use Windows for gaming, or when I have a professor that requires Word 2007, Visual Studio, etc.
I have been using Linux since 1994, and it has been my primary desktop since the late 1990’s. I am a software developer.
Let’s not label each other. My kids run Windows 7, and they are very geeky kids. For them, it serves their purposes best, but they do know their way around Linux. The game support just isn’t quite there (yet!).
I’m a long-time Linux user, but I don’t hate Windows. Microsoft helped make computers accessible for all people. That was a good thing. Unfortunately, I’m not a fan of some of their business practices. But Visual Studio rocks, and Office (especially Access and Excel) are solid products.
I believe someday Microsoft will be a champion of Free software, and most likely will say that it was their idea all along!
With the kernel at 2.6, you don’t think there have been some changes since 2.2, when you last looked at it? Isn’t your statement the equivalent of saying Win7 is a mess because you had issues with Win98 back in the day and haven’t looked at Windows since? If Red Hat was all you touched, and as it was back then, I can see where your grief came from. With Debian’s current release, things are much different.
Don’t mistake the intention of the post. If Windows works for you, then you’re all set. I just find it interesting that your final judgment of another platform is based on a rather outdated experience.
That’s the solution… make a brand-new OS that uses the latest technologies and forget the Windows APIs. Run Windows in a virtual machine.
Oldie but goldie:
http://www.tbray.org/ongoing/When/200x/2003/07/12/WebsThePlace
First off, he talks about MinWin in his “kernel talk” from the last PDC, available freely online.
I really don’t know how it works in the *nix world, but at Microsoft and IBM there is what they call a “Technical Fellow” (I forget the IBM term) – people on the same paycheck level and in the same hierarchy as a VP, but with freedom. They are few: http://www.microsoft.com/presspass/exec/techfellow/default.mspx
They give technical advice to other people in the company, and most of them don’t have “direct power” (like Mark), but some do have power; for example, some are team/department leaders. But who wouldn’t consider Mark’s thoughts? He is very well articulated and isn’t “controlled” by MSFT (just watch the talk where he disables UAC in a matter of seconds because it is not a security boundary: http://www.microsoft.com/emea/spotlight/sessionh.aspx?videoid=993 or when he used the term “Vista performance” as an analogy for poor performance).
I love the fact that there is no single guru; that must create some healthy discussion.
BTW, Mary Jo’s blog on ZDNet US talks about the “big brains” @Microsoft and has a few interesting presentations of the lesser-known Technical Fellows: http://blogs.zdnet.com/topic/Microsoft+Big+Brains.html?tag=col1;pos…
Anyone know how Linux is designed… who makes the choices? Where can I find information such as what we have on http://www.microsoftpdc.com or the Channel 9 series on architecture, but for the Linux side? I mean, I’m interested, but it’s all so underground and stuff…
This article articulates one thing that I have always found very annoying about Windows, but which I have not always had success expressing. That thing can be summed up in one word: layering.
Windows (above the kernel) is an ungodly, soupy mess of inter-related and unrelated things which come as a bewildering morass. One reason Linux, for example, is so soothing is that I can easily step through the layers and pinpoint the origin of problems. I can also independently restart, reinstall and upgrade specific layers. It is not some mysterious black art that only MVPs understand!
I’ve said it before, Microsoft would be better off tossing out win32 (and most of their userland) and building a new OS on top of NT. They have one of the only well designed kernels that is in real production use, and I say this as a Linux fanboy. Windows’ problems are largely self inflicted and have nothing to do with the fairly good underpinnings.
If MS has finally woken up and is going to start refactoring their OS until it doesn’t suck so much, great! It will take a few years, but great for them. It’s not great for their competitors who have often relied on Microsoft’s poor software quality to sell their own alternatives.
It’s a big if, though. Microsoft is, I think, institutionally unable to understand what it takes to make good software. Fixing all of the flaws their competitors exploit, or even the biggest offenders, could easily take a decade… at the end of which they may not matter that much.
Okay. Good. FINALLY! Break down the Windows API into a series of DLLs.
No. DON’T!
This is bad. Very bad. And it’ll make DLL Hell all that much more hellish.
Programs need to know which DLLs provide their interfaces. Programmers should be able to expect those interfaces in explicit DLLs.
Okay – so you want to change out the DLLs. Great! Make a DLL with the API and standardize that API. When you make a new version, fine – just update it.
DLLs would be greatly improved if they had the installation scheme that UNIX/Linux/et al. have had for years – being able to install multiple versions side-by-side.
Fix that issue please. Then I can have DLL 1.0, 1.1, 1.2, 2.0, etc. all installed at the same time, using the same filename at link time (for the default) or a specific versioned name when required; and the programs can use the appropriate versions.
I’d be very happy in Windows-land to have my program only load the 3 or 4 DLLs it actually uses, and not have the APIs, etc. for everything else. I don’t want a virtual DLL; and I don’t want Windows mucking around to figure out what DLLs I do need.
Sure, that’s okay for compatibility – i.e. helping older Windows software be ported to the new structure. But make it just part of the compatibility layer, not part of the normal environment!
Please help reduce the problem, not make it more complex!
You have been able to install DLLs side by side for ages.
Don’t want to get into DLL Hell issues? Distribute the required DLLs with the application, and make sure they are either in the application directory or referenced on the PATH.
Still want to deploy them inside System32 for whatever reason? Use manifests.
See? Problem solved.
No, you can’t. You cannot put two DLLs with the same name, differing only in version, in the same directory; you have to use different names. This is why you have things like “mfc70.dll” and “mfc90.dll”.
Now suppose you have two applications that get installed to the same location, but use different versions of the same DLL. Further suppose each tries to install it there and the file names are the same. Guess which one both applications will use? It’ll be the right one for one of the applications, but the wrong one for the other.
By comparison, on pretty much all other systems using shared libraries, the DLL is named from the get-go with its version, and the user/admin can decide which version is considered the latest by symlinking that one to a name without the version. If a program installs other DLLs, they are installed side by side without conflict.
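To put that scheme in code – a minimal sketch (POSIX C, link with -ldl, using a hypothetical libfoo) of two version choices coexisting on one system:

    /* Sketch: versioned sonames side by side. libfoo is hypothetical;
       on a real system you'd see e.g. libfoo.so.1 and libfoo.so.2 as
       files, plus a libfoo.so symlink picking the default. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *v1 = dlopen("libfoo.so.1", RTLD_NOW);  /* older app's pick */
        void *v2 = dlopen("libfoo.so.2", RTLD_NOW);  /* newer app's pick */

        printf("v1 %s, v2 %s\n",
               v1 ? "loaded" : "missing",
               v2 ? "loaded" : "missing");

        if (v1) dlclose(v1);
        if (v2) dlclose(v2);
        return 0;
    }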
Manifests are a hack to a broken shared library system. Fix the shared library problem and you won’t need manifests period.
The problem with telling whether the NT kernel is really that great is that Microsoft has always kept the real details behind the curtain (the recent bloggery is a sign of change, though).
With MinWin, it seems like they are trying to transform an incomprehensible 30-year-old mess of layering violations and hackery into something well designed. The reason why it’s solid is probably that an army of devs at MS fixed 99% of the bugs over time. No wonder they didn’t dare change it often. So it’s probably more comparable to XFree86 than to the Linux kernel: a good initial design taken in the wrong direction (probably also as part of their design obfuscation strategies).
You just have to buy the books! What is so secret about it?
http://www.amazon.de/Inside-Microsoft-Windows-CD-ROM-Programming/dp…
http://www.amazon.de/Windows-Debugging-Prentice-Microsoft-Technolog…
Just to name two examples.
It’s good to see MS addressing some of these issues. I hope they don’t lose their way in this quest, as a better Windows helps MS as well as those of us who are forced to use it.
What’s not good is MS asking us to pay handsomely for every (beta) release from now until they get to where they want to be. I didn’t buy Vista outright (though I got it with a new laptop) and I probably won’t buy Win 7 outright. They want too much money for not enough improvements.
Mark, drop me an email when you get finished, OK? (Oh, and while you are at it, get rid of that Registry abomination, too.)