Linked by Thom Holwerda on Mon 22nd Oct 2007 13:48 UTC
Windows Earlier today, OSNews ran a story on a presentation held by Microsoft's Eric Traut, the man responsible for the 200 or so kernel and virtualisation engineers working at the company. Eric Traut is also the man who wrote the binary translation engine for the earlier PowerPC versions of Virtual PC (interestingly, this engine is now used to run Xbox 1 [x86] games on the Xbox 360 [PowerPC]) - in other words, he knows what he is talking about when it comes to kernel engineering and virtualisation. His presentation was a very interesting thing to watch, and it offered a little bit more insight into Windows 7, the codename for the successor to Windows Vista, planned for 2010.
So can we...
by Adam S on Mon 22nd Oct 2007 14:01 UTC
Adam S
Member since:
2005-04-01

So, can we admit then, finally, truthfully, and in an unbiased form, that Windows itself is done?

Windows - as we know it today - is in its downswing. Yes, it's everywhere, and it's the basis for virtually every corporate environment. But the word is out. Any major shop that isn't evaluating alternatives is woefully delinquent. If Microsoft has any chance of surviving in this arena for more than the next decade or so, they need a dramatic change.

What Thom is proposing here, essentially, is scrapping Windows as a whole. Save only the kernel - nay, a subset of a fraction of the kernel - and rebuild a new OS atop.

I welcome this move. Windows is fundamentally used-up. The licensing is draconian. The software is a constant battle for most users. I should know, I support hundreds of them. They don't work on Windows - they work in spite of it.

So I agree that a move like this would be a great strategic move for Microsoft if they want to stay relevant in the long run in this corner of the market.

Reply Score: 2

RE: So can we...
by Thom_Holwerda on Mon 22nd Oct 2007 14:41 UTC in reply to "So can we..."
Thom_Holwerda Member since:
2005-06-29

So, can we admit then, finally, truthfully, and in an unbiased form, that Windows itself is done?


Windows Vista is Microsoft's OS9. It's done, it's used up, there's no more stretch in the elastics, as we Dutch say. It's time to move on, and make a viable plan for the future - and I don't see how Microsoft can maintain its relevancy by building atop Vista.

Edited 2007-10-22 14:42 UTC

Reply Score: 2

RE[2]: So can we...
by Kroc on Mon 22nd Oct 2007 15:55 UTC in reply to "RE: So can we..."
Kroc Member since:
2005-11-10

I thought Vista was more like OSX.0. Slow, incompatible, horribly inadequate and it'll be five revisions later before it's up to scratch.

The problem with Vista is that if MS move to a new OS and virtualise Vista, Vista will end up being heavier than the new OS; what a drag that will be. It would have been far, far better if MS had virtualised XP inside of Vista and dropped all backcompat in the name of a cleaner, leaner stack on top of the kernel.

Vista was mismanaged. I don't doubt the programmers themselves because Microsoft produce good, solid products on every front except for consumer Windows releases!

Reply Score: 2

RE[3]: So can we...
by Constantine XVI on Mon 22nd Oct 2007 20:25 UTC in reply to "RE[2]: So can we..."
Constantine XVI Member since:
2006-11-02

Except OSX was (from the Mac community's standpoint) a completely new OS, whereas Vista is just more stuff piled onto NT5 (2000 and XP). I highly doubt they can do much to make it better other than stripping it down to the bare essentials and putting something new on top of it (which will be "WinOSX").

Reply Score: 1

RE[2]: So can we...
by joshv on Mon 22nd Oct 2007 21:47 UTC in reply to "RE: So can we..."
joshv Member since:
2006-03-18

"Windows Vista is Microsoft's OS9. It's done, it's used up, there's no more stretch in the elastics, as we Dutch say. It's time to move on, and make a viable plan for the future - and I don't see how Microsoft can maintain its relevancy by building atop Vista."

Yes, exactly like OS9 - a 64-bit, preemptive multitasking, hardware graphics accelerated OS9 with virtual memory.

Reply Score: 0

RE[3]: So can we...
by Thom_Holwerda on Mon 22nd Oct 2007 22:17 UTC in reply to "RE[2]: So can we..."
Thom_Holwerda Member since:
2005-06-29

Yes, exactly like OS9 - a 64-bit, preemptive multitasking, hardware graphics accelerated OS9 with virtual memory.


Not literally, of course. Don't you grasp the concept of the analogy?

Reply Score: 1

RE[4]: So can we...
by losethos2 on Tue 23rd Oct 2007 00:06 UTC in reply to "RE[3]: So can we..."
losethos2 Member since:
2007-10-22

I would send this privately, but mine's preemptive. 2000Hz swap rate, actually. It has the OPTION of turning off preemption on a task-by-task basis. Other operating systems prevent potentially abusive features, like applications turning off interrupts (not the same as turning off preemption). Mine allows both.

True, mine does not have hardware graphics acceleration -- couldn't bring myself to look at Linux code and steal it.

I worked for a certain nameless monopoly event-ticket-selling company (who probably wasn't the one which crashed today selling World Series tickets) that had their own operating system, and learned about processes voluntarily yielding the CPU before preemption. They had a proprietary VAX operating system and I'm pretty sure it once ran without preemption, since all code was controlled by the company and could be guaranteed not to abuse the privilege. In addition to some work on the operating system, I wrote business report applications and had to include commands to "swap out" periodically so they didn't hog the CPU and ruin other users' responsiveness.
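The cooperative model described above - every task voluntarily yielding the CPU instead of being preempted - can be sketched with Python generators. This is purely an illustration of the idea, not the VAX system in question; the names `report` and `run` are made up:

```python
from collections import deque

def report(name, steps):
    """A long-running report job that voluntarily yields the CPU each step."""
    for i in range(steps):
        # ... do one chunk of work, then hit the cooperative "swap-out" point ...
        yield f"{name}: step {i}"

def run(tasks):
    """Round-robin scheduler: relies entirely on every task yielding politely."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()
        try:
            log.append(next(task))
            queue.append(task)  # re-queue; a task that never yields would hog the CPU forever
        except StopIteration:
            pass  # task finished
    return log

log = run([report("A", 2), report("B", 2)])
print(log)  # → ['A: step 0', 'B: step 0', 'A: step 1', 'B: step 1']
```

The key property (and weakness) is visible in `run`: fairness depends on the tasks' goodwill, which is only workable when, as in the comment above, all code is controlled by one party.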

Reply Score: 0

RE[4]: So can we...
by joshv on Tue 23rd Oct 2007 04:15 UTC in reply to "RE[3]: So can we..."
joshv Member since:
2006-03-18

Yeah sure - apples are just like oranges.

Reply Score: 2

RE[5]: So can we...
by phoenix on Wed 24th Oct 2007 04:07 UTC in reply to "RE[4]: So can we..."
phoenix Member since:
2005-07-11

Hmmm, let's see, they're both fruits, they both have coloured skins, they both grow in trees, they both contain seeds, they can both be made into juice, they both come in many variations. Yeah, they seem pretty similar to me. ;)

Reply Score: 2

RE: So can we...
by butters on Mon 22nd Oct 2007 16:23 UTC in reply to "So can we..."
butters Member since:
2005-07-08

It truly feels like the end of an era. Not just for software systems, but for a whole corporate American mindset on how to manage large projects of great economic and social import. You can almost feel the "whoosh" of the deflating ideology as its symbolic champions knowingly head for the exits and their dutiful sidekicks blindly rearrange the deck chairs on the Titanic.

Meanwhile, the era of centralized control and explicit agreement is gradually giving way to decentralized empowerment and implicit tolerance. The transition will be sticky and bumpy, with winners and losers of all shapes and sizes. The key for a stalling giant like Microsoft is to reinvent itself with an eye toward sustainability. Computing is no longer a revolutionary frontier, it's an evolving reality, and Microsoft has to reexamine its priorities with this in mind.

Where did all the frontiers go? We chewed through them all like caterpillars through leaves. Now it is time to contemplate our borderless reality and emerge as butterflies, elegantly endowed with a common vision for a sustainable future. Hopefully our cocoons won't become our graves.

Reply Score: 6

RE[2]: So can we...
by stestagg on Tue 23rd Oct 2007 15:47 UTC in reply to "RE: So can we..."
stestagg Member since:
2006-06-03

The transition will be sticky and bumpy, with winners and losers of all shapes and sizes.

I'm gonna be a winner of IT2.0 you just wait!

Reply Score: 2

RE: So can we...
by polaris20 on Mon 22nd Oct 2007 19:33 UTC in reply to "So can we..."
polaris20 Member since:
2005-07-06

Try as some might to look to alternatives to Windows, in many cases it just won't work, at least not without the help of virtualization to keep Windows running for about three different critical apps, in our case.

Reply Score: 1

Planned for 2010
by WorknMan on Mon 22nd Oct 2007 14:20 UTC
WorknMan
Member since:
2005-11-13

So we can be assured that it won't be out until at least 2012. And if somebody thinks I'm trolling, has there EVER been a Microsoft OS that has actually been released when they originally said it would be since at least Windows 95?

Reply Score: 4

RE: Planned for 2010
by n4cer on Mon 22nd Oct 2007 15:46 UTC in reply to "Planned for 2010"
n4cer Member since:
2005-07-06

So we can be assured that it won't be out until at least 2012. And if somebody thinks I'm trolling, has there EVER been a Microsoft OS that has actually been released when they originally said it would be since at least Windows 95?


Yes, Windows Home Server was the latest. I believe SBS before that.

Reply Score: 2

RE[2]: Planned for 2010
by sbergman27 on Mon 22nd Oct 2007 16:22 UTC in reply to "RE: Planned for 2010"
sbergman27 Member since:
2005-07-24

"""
Yes, Windows Home Server was the latest. I believe SBS before that.
"""

Nice try. But wouldn't those be more like Windows "distros"? Variations upon the themes of existing products? What *major new release* was ever even *remotely close* to being delivered on the original schedule?

The OP's point is valid.

Reply Score: 2

RE[3]: Planned for 2010
by n4cer on Mon 22nd Oct 2007 16:34 UTC in reply to "RE[2]: Planned for 2010"
n4cer Member since:
2005-07-06

Nice try. But wouldn't those be more like Windows "distros"? Variations upon the themes of existing products? What *major new release* was ever even *remotely close* to being delivered on the original schedule?
The OP's point is valid.


The OP didn't ask about major releases. He asked about OSes in general.
Quote:
And if somebody thinks I'm trolling, has there EVER been a Microsoft OS that has actually been released when they originally said it would be since at least Windows 95?

Reply Score: 2

RE[4]: Planned for 2010
by sbergman27 on Mon 22nd Oct 2007 16:50 UTC in reply to "RE[3]: Planned for 2010"
sbergman27 Member since:
2005-07-24

"""

The OP didn't ask about major releases. He asked about OSes in general.

"""

Again. Nice try. But the context made his meaning clear enough. And this sounds like the most major rethink and overhaul since NT.

Reply Score: 2

RE[5]: Planned for 2010
by n4cer on Mon 22nd Oct 2007 16:58 UTC in reply to "RE[4]: Planned for 2010"
n4cer Member since:
2005-07-06

The context was about any version of Windows shipping on time. The question was asked and answered. Stop trying to move the goalpost.

This is a continuation of changes made during the Vista dev cycle. That was the major rethink and overhaul for the near future. If you expect Seven to be an entirely new base, you're going to be disappointed.

Reply Score: 2

RE: Planned for 2010
by Thom_Holwerda on Mon 22nd Oct 2007 16:54 UTC in reply to "Planned for 2010"
Thom_Holwerda Member since:
2005-06-29

So we can be assured that it won't be out until at least 2012. And if somebody thinks I'm trolling, has there EVER been a Microsoft OS that has actually been released when they originally said it would be since at least Windows 95?


Well, technically, Windows Vista was perfectly on time. It was released on the only official release date ever set - not a day later.

Reply Score: 1

RE[2]: Planned for 2010
by raver31 on Tue 23rd Oct 2007 08:09 UTC in reply to "RE: Planned for 2010"
raver31 Member since:
2005-07-06

Hmmm, how did Thom get modded down?

Reply Score: 2

v virtualization not virtualisation
by gooned on Mon 22nd Oct 2007 15:06 UTC
Thom_Holwerda Member since:
2005-06-29

I write in British English, because that's what I got taught in primary and high school, and I study it now at university.

Reply Score: 2

This isnt new
by cchance on Mon 22nd Oct 2007 15:13 UTC
cchance
Member since:
2006-02-24

Microsoft's been working with the idea of a new codebase for, well... forever. Vista was to be the new codebase, with a ground-up restructuring, but it was pushed off due partly to stockholders' impatience.

It's good to see microsoft evolving and looking to the future.

Windows isn't dead, it just needs a major garbage cleaning. You don't throw out code that works; XP was one of the best OSes in history, especially since SP2.

The issue is Vista didn't do enough to move further than XP. It was XP SP3.5, perhaps, but not the OS that Microsoft had envisioned. They did a lot to make it close to what they wanted, but they're not getting the help they need.

The new TCP stack is great, but only really if routers and ISPs work to implement the needed protocols, and the fact is most aren't.

The new graphics architecture is great if the drivers and hardware support it, but the problem is that neither ATI nor Nvidia can produce fast, stable drivers to save their lives.

The new systems for increasing speed, like hybrid drives, are great, but only if the hardware and drivers support them, which to date hasn't really been accomplished.

The new UAC works, it really does, but it's very obviously a 1.0 attempt. In my view it's a great move, it's just not a move that was 100% worked out; the fact that running installers requires an OK window before the secure UAC prompt even launches is pretty much proof of that.

The new sandboxed environment is a wicked move, something even Apple didn't do for Safari, and I'm very thankful Microsoft did it for IE. But if we can't easily lock applications in sandboxes by themselves when needed, then its effect is not as great... it's also a 1.0 move.

Reply Score: 2

RE: This isnt new
by TemporalBeing on Mon 22nd Oct 2007 15:40 UTC in reply to "This isnt new"
TemporalBeing Member since:
2007-08-22

Windows isnt dead it just needs a major garbage cleaning, you don't throw out code that works XP was one of the best OS's in history especially since SP2.

The XP code base seemed to work, but it was really a delinquent code base that needs a lot of work, and a lot of legacy crap dropped from it. The author has a good approach for how to do so while still maintaining backwards compatibility, and it would behoove Microsoft to actually do it.

As to throwing out a code base - yes, there are times when you do throw out a code base. Typically, it is when you can no longer control the code. Sure, you might be using CVS or SVN or something similar, but that doesn't mean you can truly 100% control the code.

For instance, I worked on one project where the code base was really uncontrollable. It had a legacy history to it and we couldn't solve the problems it had by continuing to use that code base. The only answer was to start afresh - use new practices so that we could manage the resources of the code, ensure security, etc. The old code base, while it worked, wouldn't have supported those efforts. Moreover, the new code base allowed us to add new features quickly, easily, and maintainably. (When we fixed a bug or added a new feature to the old code base, we would end up with more issues coming out than we went in with. It was really bad.)

The Windows code base is likely at that point. It was likely there before XP, and only made worse by XP. It's easy to tell when you're at that point as every new change takes longer to get in and keep the old code functional.

So yes, it's high time Microsoft cut the cruft and started a new code base, and designed the code base to be more modular, maintainable, secure, etc. It's the only way the software will survive another generation (e.g. Windows 7 and Windows 8). Otherwise, it will collapse under its own weight.

Reply Score: 1

RE[2]: This isnt new
by n4cer on Mon 22nd Oct 2007 16:22 UTC in reply to "RE: This isnt new"
n4cer Member since:
2005-07-06

So yes, it's high time Microsoft cut the cruft and started a new code base, and designed the code base to be more modular, maintainable, secure, etc. It's the only way the software will survive another generation (e.g. Windows 7 and Windows 8). Otherwise, it will collapse under its own weight.


In large part, Vista is the beginning of the new code base. Again, MinWin isn't new to Seven. It's there in Vista/Server 2008. A lot of code was rewritten for Vista. They've started to virtualize system resources, they've mapped/eliminated most dependencies and layering violations, and they've turned each feature into manifest-backed components. They are more agile in what they can add/remove without affecting other components because of this work and the processes put in place during Vista's development.

They aren't going to throw out all of that work in Seven. They're going to build upon it. I expect there will be a greater shift towards updated versions of the managed code services they've added in Vista as the preferred method for application development. I also believe they'll start to integrate application virtualization for legacy compatibility as well as driver virtualization for reliability, but the end product will be the offspring of Vista/Server 2008, not an all-new code base. I wouldn't expect something that big for another 1 or 2 major releases.

Reply Score: 2

RE[3]: This isnt new
by Weeman on Mon 22nd Oct 2007 19:56 UTC in reply to "RE[2]: This isnt new"
Weeman Member since:
2006-03-20

and turned each feature into manifest-backed components

About that...

Have you ever taken a look at WindowsPackages or wherever they're stored? All it is, is a manifest of bloat.

Reply Score: 2

RE[4]: This isnt new
by TemporalBeing on Mon 22nd Oct 2007 20:09 UTC in reply to "RE[2]: This isnt new"
TemporalBeing Member since:
2007-08-22

In large part, Vista is the beginning of the new code base. Again, MinWin isn't new to Seven. It's there in Vista/Server 2008. A lot of code was rewritten for Vista. They've started to virtualize system resources, and they've mapped/eliminated most dependencies and layering violations, and turned each feature into manifest-backed components. They are more agile in what they can add/remove without affecting other components because of this work and the processes put in place during Vista's development.

It isn't a matter of how agile the code is. It's a matter of how much the code itself can take change. Windows, due to quite a lot of reasons (e.g. backward compatibility, competition stifling, incomplete and undocumented APIs, bugs, etc.), is a monolithic code base that is not very easy to change. Revising it, refactoring it is not going to help. The only way you solve that is by starting over.

Starting over is often good for a project too. You lose a lot of legacy code that is not needed, and you get the chance to do it better, more correctly. You can apply newer design and architectural principles and fix things proactively instead of retroactively. (Sure you'll still have stuff to fix retroactively, but they'll be different things than before if you did your job right.)

Every software project will at some point reach the stage where it has to have its entire code base thrown out and restarted. In many respects, it is really a sign of the maturity of the program - you understand the program well enough to know how to do it right, and you need to give yourself the opportunity to do it. A clean cut is often the only way to do so.

Vista is better in some respects to modularity of parts. However, it is still far from what it needs to be and it has a lot of cruft in it - stuff Microsoft simply can't get rid of unless they start over. Otherwise, they're just continuing in the same paradigm, fixing the same issues over and over.

Reply Score: 1

RE[5]: This isnt new
by joshv on Wed 24th Oct 2007 04:38 UTC in reply to "RE[4]: This isnt new"
joshv Member since:
2006-03-18

"It isn't a matter of how agile the code is. It's a matter of how much the code itself can take change. Windows, due to quite a lot of reasons (e.g. backward compatibility, competition stifling, incomplete and undocumented APIs, bugs, etc.), is a monolithic code base that is not very easy to change. Revising it, refactoring it is not going to help. The only way you solve that is by starting over. "

The very fact of Microsoft's existence, and its spectacular stock valuation, proves this point utterly and completely false. They've built an extremely successful business around never starting over from square one.

The past few decades are littered with the carcasses of companies that were stupid enough to think they could start from scratch. In the meantime, Microsoft acquired code they didn't have, and incrementally improved the code they did. We've come from DOS all the way to Vista, and at no point along the way did MS ever start from scratch. I don't expect them to any time soon.

Reply Score: 1

RE[6]: This isnt new
by TemporalBeing on Thu 25th Oct 2007 02:14 UTC in reply to "RE[5]: This isnt new"
TemporalBeing Member since:
2007-08-22

The very fact of Microsoft's existence, and spectacular stock valuation proves this point utterly and completely false. They've made built an extremely successful business around never starting over from square one.

I would hardly call their stock spectacular. It moved high in the bubble just like all the others. Since the bubble it has sat flat, due to their inability to produce products and deliver on their primary programs in a timely manner. It took them 5 years (and two development cycles, since they restarted development 2.5 years in) to deliver Vista and Windows 2008.

The fact is that Windows has become a monolith that they can no longer develop the way they have been, and it is causing them headaches. They're producing products like Windows Server Core, and projects like MinWin and others, in order to get the code to a manageable state so that they can even begin to compete.

So, yes - they are very likely to do so very soon. They did it in the past with WinNT, which was a brand new, from-scratch code base into which they later (WinXP) merged their crap code and legacy support. Win2k and earlier did not run DOS-based programs, and vendors typically had to support two different code bases for products to run on both the WinNT line and the DOS/Win9x/WinME line.

They can do it, and they will. Otherwise, it will be the end of them. Oddly enough, this is pretty much what all the commentators are saying of Microsoft and Windows. They will likely choose to use isolated, app-centric VMs to manage legacy programs, but they will have to do it.

Reply Score: 1

RE[7]: This isnt new
by joshv on Mon 29th Oct 2007 19:29 UTC in reply to "RE[6]: This isnt new"
joshv Member since:
2006-03-18

NT wasn't exactly written from scratch. Its API was an extension of the pre-existing Windows API, and its design borrowed heavily from VMS. Much of the NT design/development team came from Digital, including Dave Cutler, one of VMS's chief designers.

Reply Score: 1

RE[8]: This isnt new
by TemporalBeing on Tue 30th Oct 2007 02:53 UTC in reply to "RE[7]: This isnt new"
TemporalBeing Member since:
2007-08-22

NT wasn't exactly written from scratch. Its API was an extension of the pre-existing Windows API...

It was still a largely incompatible code base with the pre-existing Windows and DOS programs. So the point still stands.

Reply Score: 1

RE: This isnt new
by shapeshifter on Mon 22nd Oct 2007 20:50 UTC in reply to "This isnt new"
shapeshifter Member since:
2006-09-19

Gee, what the hell are you talking about?
Have you ever used Windows at all?!

I'm really getting sick hearing that XP is

one of the best OS's in history especially since SP2


when in reality it's THE WORST in history (out of the 32-bit ones).
Even OS/2 was/is better than XP.

And the stupid gimmicks you mention, the hybrid drives, UAC, sandboxed environments, are the lamest ever attempts to hide the complete incompetence and most horrible design of any OS in history.

TCP/IP stack, graphics architecture? What?!
Again, what are you talking about?
What's so great about them?
Have you used Vista at all?
Nvidia has had the best graphics drivers in the industry for quite a few years now, so that tells me that the graphics problems are not Nvidia's fault.
(ATI has been crap since the early '90s).
And the whole networking system in vista is seriously demented. Only a complete moron could come up with something that stupid.
Have you seen the network dialogs and screens that Vista provides? Can you say confusing?

To this day I have to laugh when I recall my first encounter with Internet Explorer on Windows 2003 server.
I start IE, I type a web address, and I get some idiotic notice that this will not work.
What?! A web browser is not allowed to browse the web?!
Seriously, this is beyond ridiculous.
What's next, Word without the ability to type text?

So Microsoft's solution to security is to simply cut functionality.
That convinced me that Microsoft is a company that will never write a good OS.
And yes, Server 2003 is garbage too, even though it stinks a bit less than XP, it's still garbage.

Reply Score: 1

RE[2]: This isnt new
by anduril on Tue 23rd Oct 2007 13:06 UTC in reply to "RE: This isnt new"
anduril Member since:
2005-11-11

You might have used Windows, but you apparently know little about operating systems. There's a good reason Win2k3 denied you the use of the browser. It's a SERVER. You shouldn't be trying to use the web browser on it! Yes, you can use it as a workstation or a desktop OS, but that's not what it's intended for. Why is IE included then? Well, because IE was still integral to much of the Explorer-based system. They're getting much better at moving that dependency out, but as far as I'm aware it still exists.

As to both the graphics and networking stacks in Vista, they have been significantly improved. The changes can, and will, greatly increase stability and performance down the road. However, as with ANY major version-1 change, nothing works perfectly. Much of the blame does lie in the hands of the driver makers (actually, ATI's drivers have been far, far superior to Nvidia's in regards to Vista. They still are in most cases; Nvidia just has the higher-performing and non-late hardware releases), but it doesn't help that it's a completely different interface to the OS. That takes time to catch up with the changes. You think OSX.0 didn't perform like shit? Was highly stable? Must not have used it.

The same has mostly been true with Linux when they do big changes but again, most of the fault lies in the hands of the driver makers. Seeing a trend?

Reply Score: 2

versioning
by ZephyrXero on Mon 22nd Oct 2007 15:35 UTC
ZephyrXero
Member since:
2006-03-22

don't you mean Windows 8?

Reply Score: 1

RE: versioning
by jrronimo on Mon 22nd Oct 2007 18:28 UTC in reply to "versioning"
jrronimo Member since:
2006-02-28

Nope, Windows 7. It's Microsoft's internal versioning scheme:

Windows 3.1 = Windows 3
Windows NT = Windows 4
Windows 2000 = Windows 5
Windows XP = Windows 5.1
Windows Vista = Windows 6

So the next Windows is 7.
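The internal versioning scheme above can be written down as a small lookup. This is just a sketch of the list in this comment (the names `NT_VERSIONS` and `next_major` are made up for illustration):

```python
# Internal NT version numbers, per the list above: (major, minor)
NT_VERSIONS = {
    "Windows 2000": (5, 0),
    "Windows XP": (5, 1),
    "Windows Vista": (6, 0),
}

def next_major(versions):
    """The next major version is one past the highest major shipped so far."""
    return max(major for major, _ in versions.values()) + 1

print(next_major(NT_VERSIONS))  # → 7
```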

Reply Score: 1

RE[2]: versioning
by raver31 on Tue 23rd Oct 2007 08:17 UTC in reply to "RE: versioning"
raver31 Member since:
2005-07-06

Windows 3.1 = Windows 3
Windows NT = Windows 4
Windows 2000 = Windows 5
Windows XP = Windows 5.1
Windows Vista = Windows 6


You forgot

Windows 1 = Windows 1
Windows 2 = Windows 2
Windows 286 = Windows 3
Windows 3.0 = Windows 4

Renumbering your list, we get

Windows 3.1 = Windows 5
Windows NT = Windows 6
Windows 95 = Windows 7 (You forgot that one)
Windows 2000 = Windows 8
Windows XP = Windows 9
Windows Vista = Windows 10

So in reality, any new version should be called Windows 11

Edited 2007-10-23 08:17

Reply Score: 2

RE[3]: versioning
by Hurtta on Tue 23rd Oct 2007 09:04 UTC in reply to "RE[2]: versioning"
Hurtta Member since:
2006-04-16

Not taking any stand on whether your numbering is correct or not. The web is already full of posts about the kernel of the coming Windows 7. Everyone will always call it Windows 7, even though it could be anything else. That's just the way the world goes; try to adjust ;)

Reply Score: 2

RE[3]: versioning
by Flatland_Spider on Tue 23rd Oct 2007 14:29 UTC in reply to "RE[2]: versioning"
Flatland_Spider Member since:
2006-09-01

So in reality, any new version should be called Windows 11


No, in reality it should be called Windows 4 since this would be the fourth version of the NT kernel if MS had done a proper 1.0 release instead of syncing it with the current Windows version. Windows 1-3.11, 95-98, and Me don't factor into the count as they were built on DOS.

This is the same way that Mac OS X.4 is really NextStep 5.4 and not Mac OS 10.

Reply Score: 1

RE[2]: versioning
by Constantine XVI on Tue 23rd Oct 2007 12:25 UTC in reply to "RE: versioning"
Constantine XVI Member since:
2006-11-02

Close, but not quite:

Win1.01-.04
Win2.0-.03, Win2.1-.11
Win3.0, Win3.1, Win3.11
WinNT3.1, WinNT3.5, WinNT3.5.1 (first non-DOS based Windows)
Win95(4), Win98(4.10), WinME(4.90) (end of DOS-based line)
WinNT4
Win2K(5.0), WinXP(5.1), WinServer2003(5.2)
WinVista(6)
Windows 7 (in development)

Reply Score: 1

Honk! Honk!
by Weeman on Mon 22nd Oct 2007 15:38 UTC
Weeman
Member since:
2006-03-20

I keep posting it whenever this topic is being talked about.

WOW64, shipped with every 64-bit Windows, is proof that it's entirely possible to run two different userlands (more like subsystems) on the same kernel. There's a full 32-bit subsystem installed to run any 32-bit application, and apart from messaging, the 32-bit subsystem runs completely on its own, only sharing the kernel as common code.

Nothing speaks against a completely new main subsystem, keeping the old one running side by side for "legacy" applications, with glue put where needed (i.e. windowing).

Alternatively, Microsoft could take a clue from Solaris Zones, if there's a more heavy-handed approach needed (quasi full hosting of an operating system), which is still lightweight in regards to resource sharing.

Reply Score: 2

RE: Honk! Honk!
by SlackerJack on Mon 22nd Oct 2007 16:02 UTC in reply to "Honk! Honk!"
SlackerJack Member since:
2005-11-12

64-bit computing on Windows is useless, to say the least; you may as well run the 32-bit version, because only a handful of apps are 64-bit. Windows 7 being "pure 64-bit", like they said, is their marketing team after a bad night out.

Edited 2007-10-22 16:03

Reply Score: 2

RE[2]: Honk! Honk!
by n4cer on Mon 22nd Oct 2007 16:44 UTC in reply to "RE: Honk! Honk!"
n4cer Member since:
2005-07-06

64-bit computing on Windows is useless, to say the least; you may as well run the 32-bit version, because only a handful of apps are 64-bit. Windows 7 being "pure 64-bit", like they said, is their marketing team after a bad night out.


The kernel, drivers, and all of the apps in the package are 64-bit. What's not pure about it? In terms of third-party apps, most don't need 64-bit versions. They run on x64 Windows just fine via WOW64.

Depending on your workload, 32-bit Windows may be fine, but some people benefit from the larger available address space (even when running 32-bit apps -- particularly some games).
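The address-space point is simple arithmetic: a 32-bit pointer can address at most 2^32 bytes (4 GiB in total, and on 32-bit Windows a process normally gets only a 2-3 GiB slice of that), while under a 64-bit kernel the virtual address space is vastly larger. A quick sketch (the function name is made up; the 48-bit figure reflects the virtual-address width x86-64 chips of that era actually implement):

```python
def addr_space_gib(pointer_bits):
    """Maximum byte-addressable space for a pointer of the given width, in GiB."""
    return 2 ** pointer_bits / 2 ** 30

print(addr_space_gib(32))  # → 4.0 GiB total for any 32-bit pointer
print(addr_space_gib(48))  # → 262144.0 GiB of virtual address space on x86-64
```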

Reply Score: 2

RE[3]: Honk! Honk!
by SlackerJack on Mon 22nd Oct 2007 18:30 UTC in reply to "RE[2]: Honk! Honk!"
SlackerJack Member since:
2005-11-12

That's what I mean: why have a 64-bit OS if third-party apps don't even support 64-bit? I don't see the point of running 32-bit apps on a 64-bit OS; you may as well use the 32-bit version.

Point being that you may as well use the 32-bit version, because third-party support is useless on Windows.

Reply Score: 2

RE[4]: Honk! Honk!
by polaris20 on Mon 22nd Oct 2007 19:34 UTC in reply to "RE[3]: Honk! Honk!"
polaris20 Member since:
2005-07-06

Uh, there are quite a few apps in various fields (audio, video, virtualization) that support 64-bit. That would be the point of 64-bit operating systems.

Reply Score: 2

RE[4]: Honk! Honk!
by fjhb on Tue 23rd Oct 2007 06:10 UTC in reply to "RE[2]: Honk! Honk!"
fjhb Member since:
2007-10-23

The kernel, drivers, and all of the apps in the package are 64-bit. What's not pure about it?

It's obvious. Your question contains the answer: apps that are not in the package - something that simply doesn't need to exist in the free world.

When not all the apps you want to run are part of the package, including many Microsoft apps, nobody can take win64 seriously.

Oh, and drivers too. Most drivers aren't made by MS. In fact, many aren't even validated by them.

In fact, when you consider the switching overhead, you might just end up with a slower system.

Reply Score: 1

RE: Honk! Honk!
by Thom_Holwerda on Mon 22nd Oct 2007 16:11 UTC in reply to "Honk! Honk!"
Thom_Holwerda Member since:
2005-06-29

Nothing speaks against a completely new main subsystem, keep the old one running side by side for "legacy" applications. Glue put where needed (i.e. windowing).


The NT kernel indeed allows for subsystems (up until Windows 2000, for instance, it had an OS/2 subsystem), but would you really want to run the entirety of Win32 in an NT subsystem?

One of the prime points in these two articles of mine is that you really do! not! want! to ship/run the current Windows userland, because it is a mess - if you move it to a subsystem, you do just that: you move it to a subsystem. You're just moving it around, you're not sandboxing or isolating it.

Reply Score: 1

RE[2]: Honk! Honk!
by Weeman on Mon 22nd Oct 2007 19:53 UTC in reply to "RE: Honk! Honk!"
Weeman Member since:
2006-03-20

The NT kernel indeed allows for subsystems (up until Windows 2000, for instance, it had an OS/2 subsystem), but would you really want to run the entirety of Win32 in an NT subsystem?

Actually, Win32 as it is IS a subsystem in the very sense of the NT definition. Maybe over the years they spaghetti-coded some stuff, but that has been getting unwired for quite some time now (it was said a whole lot during the Vista development, and Traut said it in his presentation).

One of the prime points in these two articles of mine is that you really do! not! want! to ship/run the current Windows userland, because it is a mess - if you move it to a subsystem, you do just that: you move it to a subsystem. You're just moving it around, you're not sandboxing or isolating it.

If it runs in a tailored VM or in a controlled subsystem (using resource virtualization a la UAC), where's the difference? The latter is easier on the total system.

As said earlier, the best way to implement this IMO would be using a construct like Solaris Zones, where there's hard partitioning inside the kernel already, running full blown operating systems (well, everything right above the kernel) inside the partitions, but using the same kernel and as such able to share resources (mostly just CPU and memory). Using a huge shim, you would be able to keep the old Win32 system running, just like Solaris can run e.g. the whole unmodified Ubuntu userland using a syscall translator in a zone.

VMs really aren't a solution for this, because they're too static and have a huge footprint (memory). To make them more flexible in that regard, the guest operating system would have to be able to deal with fluctuating memory sizes. I don't see that coming anytime soon, at least not automated, because different systems deal with memory pressure in different ways, resulting in a memory scheduling clusterf--k.

Reply Score: 1

RE: Honk! Honk!
by phoenix on Tue 23rd Oct 2007 04:31 UTC in reply to "Honk! Honk!"
phoenix Member since:
2005-07-11

WOW64 is proof, shipped with every 64-bit Windows, that it's entirely possible to run two different userlands (more like subsystems) on the same kernel. There's a full 32-bit subsystem installed to run any 32-bit application, and apart from messaging, the 32-bit subsystem runs completely on its own, only sharing the kernel as common code.


WoW has been around since the earliest releases of Windows NT. There were several "personalities", as they were called, shipped with early NT releases:
- Win16
- Win32
- OS/2
- Posix
- probably more, but that's all I can remember off the top of my head

The OS/2 personality was dropped with Windows 2000, the Posix subsystem was "replaced" with Services for Unix, and Win64 was added.

This is not a new concept, and was one of the main selling points of Windows NT back in the day.

Reply Score: 2

Smaller is faster!
by SamuraiCrow on Mon 22nd Oct 2007 15:54 UTC
SamuraiCrow
Member since:
2005-11-19

Smaller is indeed faster, considering that cache efficiency is contingent on fitting the hot loops into a small amount of memory. (I paid an obscene amount of money for a sucky Micro-A1c just to run AmigaOS 4.0 based on this same reasoning!)

On one programming site somebody observed that (referring to compiler flags) optimizing code for less memory usage typically generates faster code than optimizing for speed.

If they manage to integrate this with their Singularity project that replaces some page faults with API functions that call the pager, etc., directly, then this might actually be a good version of Windows.

Reply Score: 2

v What's the point?
by anonybrowse on Mon 22nd Oct 2007 16:58 UTC
v RE: What's the point?
by anonybrowse on Mon 22nd Oct 2007 21:24 UTC in reply to "What's the point?"
my dream
by markoweb on Mon 22nd Oct 2007 17:27 UTC
markoweb
Member since:
2006-11-30

I dream of the day when Microsoft creates a sister company (just so that all those "monopoly" accusations wouldn't hold and MS could do whatever it wants with that product - for instance, integrate AV).

That sister company should create the following OS:
1) Purely 64-bit (maybe even 128...)
2) Based on Singularity (great stuff, that)
3) The .NET Framework should BE-THE-API (no need for P/Invoke and stuff like that)
4) Everything even remotely related to backwards compatibility should be handled via virtualization
5) New PC hardware wouldn't hurt either. Throw out all the legacy crap (yes, your current hardware is also built around f**ked-up backwards compatibility layers) and definitely redesign the USB stuff (programming for USB is such a pain in the A**)
6) Throughout, openness and good documentation should be embraced. For instance, Bill Gates's letter - in which he said that OOXML should render perfectly only in IE - made me sick to my bones (really, somebody shoot that idiot instead of throwing a cake in his face).

I guess I'll be dead, buried and long forgotten before that ever happens... ;)

Edited 2007-10-22 17:28

Reply Score: 0

RE: my dream
by sbergman27 on Mon 22nd Oct 2007 17:55 UTC in reply to "my dream"
sbergman27 Member since:
2005-07-24

"""

1) Purely 64-bit (maybe even 128...)

"""

Oops! You lost a lot of credibility with that. What, pray tell, do you think that > 64 bits would buy you? At the traditional rate of memory increase - doubling about every 2 years - even the 48 bits allotted to memory access in current 64-bit processors will last us 30+ years. And that's just a hardware limitation; it can easily be increased to 64 bits, extending us out to 60+ years. 64-bit filesystems are good for about 40 years at the current exponential rates of expansion. (The 128-bitness of the otherwise excellent ZFS was, quite frankly, a marketing gimmick.)

And besides, what processor would you run this 128-bit OS on? Did AMD announce something that I missed?
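The back-of-the-envelope arithmetic here is easy to check (a rough sketch in Python; the one-bit-per-two-years rate and the 32/48/64-bit figures are the assumptions used in this comment, not hard limits):

```python
# Rough headroom estimate: each extra address bit buys one
# memory-doubling period of growth.

def years_of_headroom(current_bits, limit_bits, years_per_doubling=2):
    """Years until memory sizes catch up with an address-space limit."""
    return (limit_bits - current_bits) * years_per_doubling

# 2007-era machines top out around 4 GiB (2^32 bytes); x86-64
# implementations of the day expose 48 bits of virtual address space.
print(years_of_headroom(32, 48))  # 32 years at one bit per two years
print(years_of_headroom(32, 64))  # 64 years if extended to the full 64 bits
```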

"""
I guess I'll be dead, buried and long forgotten before that ever happens... ;)

"""

Well, you are thinking in the right time scale, anyway.

Edited 2007-10-22 18:00

Reply Score: 2

RE[2]: my dream
by losethos2 on Mon 22nd Oct 2007 18:06 UTC in reply to "RE: my dream"
losethos2 Member since:
2007-10-22

Good point. Many novices use induction to say: well, we ran out of 32-bit space and now 64-bit is needed, so let's jump to 128 bits.

Sometimes people find uses for excess bits by incompletely utilizing the space - like placing kernel stuff in 0x80000000-0xFFFFFFFF even before you run out. I think I heard that IP numbers are way excessive, but good for routing.

Reply Score: 2

RE[3]: my dream
by sbergman27 on Mon 22nd Oct 2007 18:17 UTC in reply to "RE[2]: my dream"
sbergman27 Member since:
2005-07-24

One really good thing about living in the year 2007 is that both the hardware and software creators got out of the "let's add 4 more bits" mindset that we lived with through the 80s and 90s. Remember all those "barriers" we broke, only to face them again in a few years? Now we have the opposite problem: Adding an excessive number of bits gratuitously for marketing reasons. That's a far lesser problem though. As an old friend of mine was fond of saying: Better to have it and not need it than need it and not have it. ;-)

The ext3->ext4 transition is, hopefully, the last major barrier we will face for some time. (Famous last words!)

Edited 2007-10-22 18:17

Reply Score: 1

RE[4]: my dream
by losethos2 on Mon 22nd Oct 2007 18:31 UTC in reply to "RE[3]: my dream"
losethos2 Member since:
2007-10-22

Is ext4 128 bits? I tend to think that's excessive, but maybe you might have virtual drives composed of several physical ones, and it might prove convenient to use high bits for the physical drive number, or, if they were on a network, include numbers for that. I picked 64 bits for my filesystem, but I can see reasons for 128.

Reply Score: 1

RE[5]: my dream
by sbergman27 on Mon 22nd Oct 2007 18:35 UTC in reply to "RE[4]: my dream"
sbergman27 Member since:
2005-07-24

"""
Is ext4 128 bits?
"""

No. (Can you *imagine* the ruckus on LKML if anyone proposed such a thing?!) But the current ext3 filesystem size limit is only about 16 terabytes, depending on architecture - it does not take full advantage of 64 bits. Ext4 raises that to an exabyte, and adds extents as well. But that's tangential.
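Those two limits fall straight out of the block-address width (a rough sketch; it assumes 4 KiB blocks, 32-bit block numbers for ext3, and 48-bit block numbers for ext4 - the common configuration, though other block sizes exist):

```python
# Filesystem size limit = (addressable blocks) x (block size).

BLOCK = 4096  # bytes per block (4 KiB, the usual default)

ext3_limit = (2**32) * BLOCK  # 32-bit block numbers
ext4_limit = (2**48) * BLOCK  # 48-bit block numbers

print(ext3_limit // 2**40)  # 16 -> about 16 TiB
print(ext4_limit // 2**60)  # 1  -> about 1 EiB (an exabyte)
```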

Edited 2007-10-22 18:36

Reply Score: 1

RE[3]: my dream
by philcaetano on Mon 22nd Oct 2007 18:36 UTC in reply to "RE[2]: my dream"
philcaetano Member since:
2007-10-22

Now don't get me wrong, I agree with 64-bit being enough for a long time.

But when I first read this, I read it as "no one will ever need more than 64 bits for software". ;) Is it possible that saying we don't need it is too short-sighted or arrogant?

But yeah, 64 is enough, and even with the idea of 128, I'm not so sure the advantages outweigh the disadvantages. Memory space available vs. usage needed by typical software is the first thing that comes to mind.

Edited 2007-10-22 18:39

Reply Score: 1

RE[4]: my dream
by sbergman27 on Mon 22nd Oct 2007 19:36 UTC in reply to "RE[3]: my dream"
sbergman27 Member since:
2005-07-24

"""

But when I first read this, I read it as "no one will ever need more than 64 bits for software". ;) Is it possible that saying we don't need it is too short-sighted or arrogant?

"""

Never say never. And, of course, 40-60 years is not never. ;-)

But I don't *think* I'm being short sighted. The 20th century conditioned mind thought in terms of arithmetic progressions with regards to computer hardware and was continually underestimating future requirements. The 21st century conditioned mind is used to thinking in terms of geometric progressions. And the geometric constants for the increase in disk space, ram, and transistor density in processors have remained remarkably constant over the 20 years I have been watching. If anything, I would expect the constants to *decrease* over the years.

At any rate, and for now at least, one can figure 1 bit for every two years on memory, and 1 bit for every year and a half of disk.

X86_64 took us from 32 bits to 48 bits for memory addressing. A difference of 16 bits. So I figure we're good for 32 years. In fact, due to the way X86_32 works, one has to start making tradeoffs at less than 1GB. Not sure if and where such tradeoffs might need to be made with current processors.

I'm sure some kind soul will stop by to fill us in on that.

Edited 2007-10-22 19:38

Reply Score: 1

RE[2]: my dream
by markoweb on Mon 22nd Oct 2007 20:56 UTC in reply to "RE: my dream"
markoweb Member since:
2006-11-30

sbergman27

Don't underestimate the future. In 5 years someone might come up with something so revolutionary that it will require that address space, or maybe even more. God knows, maybe we'll all be living in full-HD worlds (sound, music, video, etc.) and memory will come in TB sticks. So to say that >64 bits is unnecessary is like saying, as IBM did in the early 80s, "who needs personal computers?!?"

Creating a 128-bit or larger processor is a piece of cake anyway. All you have to do is enlarge the instruction size and fiddle with the microcode. If I'm not mistaken...
The only reason no one is making these is because there is no market for them yet.
But if you are starting anew and using a larger address space doesn't seriously hurt performance, then why settle for less? Why not embrace the future right now?


And for those who still can't see the point of 64-bit processors, all I've got to say to you is: memory - there is never enough of it.

Reply Score: 1

RE[3]: my dream
by sbergman27 on Mon 22nd Oct 2007 21:29 UTC in reply to "RE[2]: my dream"
sbergman27 Member since:
2005-07-24

"""

Don't underestimate the future. In 5 years someone might come up with something so revolutionary that it will require that address space, or maybe even more.

"""

Then it would be totally unfeasible to implement, because physical memory availability would be many orders of magnitude short of that requirement. At the rate of exponential increase that we have seen in the last 20 years, which has remained fairly constant, 2^52 bytes of memory - the limit for future versions of x86-64 processors - would cost about 100 million dollars in 5 years' time. (It would require 262,144 16GB memory sticks, which are likely to be the largest available at that time.) Do you have some reason to think that the rate of *geometric* expansion will increase? It hasn't over the last few decades.
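The stick count is simple division (a quick sanity check; the 2^52-byte physical limit and the 16 GiB module size are the figures assumed in this comment):

```python
# How many 16 GiB modules does it take to populate a 2^52-byte
# physical address space?

PHYS_LIMIT = 2**52   # bytes, cited physical-address limit for future x86-64
STICK = 16 * 2**30   # 16 GiB per memory module

sticks = PHYS_LIMIT // STICK
print(sticks)  # 262144
```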

Your terabyte sticks of memory would actually be scheduled for about 2023-2027, BTW.

"""
And for those who still can't see the point of 64-bit processors, all I've got to say to you is: memory - there is never enough of it.

"""

Precisely. There is no reason in the world to think that memory will be available in large enough quantities to require > 64 bit processors for about 40-60 years.

BTW, I should take this opportunity to correct my previous posts now that I've refreshed my memory on the topic. The physical addressing limit of current AMD 64 bit processors is 2^40 bytes (not 2^48), giving us about 16 years respite. This can be increased to 2^52 (not 2^64), which would give us a total of 40 years.

My statement is not at all like "who needs personal computers". It is more like "whether people need this much memory or not, it is unlikely to be available in such quantities for at least 40-60 years."

My statement is somewhat *more* like "Nobody will ever need more than 640k of ram". But that statement, if it was ever actually made back then, was *demonstrably* short-sighted and wrong at the time. Can you provide actual *evidence* that my statement is demonstrably short-sighted and wrong?

Edited 2007-10-22 21:42

Reply Score: 2

RE[3]: my dream
by Morin on Tue 23rd Oct 2007 02:29 UTC in reply to "RE[2]: my dream"
Morin Member since:
2005-12-31

> Creating a 128-bit or larger processor is a piece of cake anyways. All
> you have to do is enlargen the instruction size and mingle with the
> microcode. If I'm not mistaken...

The details are a bit more complex, but yes, it would be a piece of cake if there were any market for a 128-bit CPU.

> But if you are starting a new and using a larger address space doesn't
> seriously hurt performance, then why settle for less? Why not embrace
> the future right now?

Increasing address space size *does* hurt performance. Modern programming sees a lot of "passing pointers around", and even more so in reference-eager programming languages such as Java or C#. All those pointers would now be twice the size, meaning the size of the CPU cache measured in number of pointers halves, resulting in more actual memory accesses. And those are *really* bad for performance. Similar arguments apply to instruction size.

Unless you are changing to an entirely different memory model (e.g. non-uniform pointer size), 128-bit addressing would kill performance.
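The cache argument is easy to put in concrete numbers (an illustrative sketch; the 2 MiB cache size is an arbitrary assumption, and it ignores non-pointer data sharing the cache):

```python
# Doubling pointer width halves how many pointers fit in a cache of
# fixed byte size -- more misses for pointer-heavy workloads.

CACHE = 2 * 2**20  # a 2 MiB last-level cache (illustrative)

ptrs_64bit = CACHE // 8    # 8-byte (64-bit) pointers
ptrs_128bit = CACHE // 16  # 16-byte (128-bit) pointers

print(ptrs_64bit, ptrs_128bit)    # 262144 131072
print(ptrs_64bit // ptrs_128bit)  # 2 -- capacity in pointers halves
```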

Reply Score: 3

RE[3]: my dream
by Soulbender on Tue 23rd Oct 2007 12:02 UTC in reply to "RE[2]: my dream"
Soulbender Member since:
2005-08-18

So to say that >64 bits is unnecessary is like saying, as IBM did in the early 80s, "who needs personal computers?!?"


Where's my flying car, personal robot butler and hologram?

Creating a 128-bit or larger processor is a piece of cake anyway. All you have to do is enlarge the instruction size and fiddle with the microcode. If I'm not mistaken...


Speaking from experience as a chip designer, right?

memory, there is never enough of it.


Sure, but at every point in time there's a size at which there's no gain from adding more.

Reply Score: 1

RE: my dream
by butters on Mon 22nd Oct 2007 20:41 UTC in reply to "my dream"
butters Member since:
2005-07-08

Purely 64-bit (maybe even 128...)


As Dr. Albert Bartlett famously said, "the greatest shortcoming of the human race is our inability to understand the exponential function".

Reply Score: 6

200?
by zhulien on Tue 23rd Oct 2007 02:45 UTC
zhulien
Member since:
2006-12-06

200 engineers? How do they work effectively if they don't share a single mind? Do they just blindly code functions as per input/output specs? If so, aren't they just coders? Perhaps uni grads? Perhaps that explains why there are so many variations of the same dialogs in Windows - they all coded their own...

Reply Score: 1

RE: 200?
by sbergman27 on Tue 23rd Oct 2007 02:51 UTC in reply to "200?"
sbergman27 Member since:
2005-07-24

"""

200 engineers? How do they work effectively if they don't share a single mind? Do they just blindly code functions as per input/output specs?

"""

You're forgetting the Borg implants. :-P

Reply Score: 2

RE[2]: 200?
by PlatformAgnostic on Tue 23rd Oct 2007 03:48 UTC in reply to "RE: 200?"
PlatformAgnostic Member since:
2006-01-02

Yes... it's far more efficient for us to communicate that way (and assimilate people into the Windows collective).

Reply Score: 2

RE[3]: 200?
by stestagg on Tue 23rd Oct 2007 15:50 UTC in reply to "RE[2]: 200?"
stestagg Member since:
2006-06-03

I feel a Dr. Who episode coming...

Reply Score: 2

RE: 200?
by Soulbender on Tue 23rd Oct 2007 11:57 UTC in reply to "200?"
Soulbender Member since:
2005-08-18

200 engineers? How do they work effectively if they don't share a single mind?


Gee, I dunno. How do the Linux developers work?
If I were to make a wild guess, I'd say by communicating and using source control tools.
I'll also go out on a limb and guess they have project managers who oversee things and delegate tasks.

Reply Score: 1

I think they should...
by stodge on Tue 23rd Oct 2007 12:41 UTC
stodge
Member since:
2005-09-08

I think they should remove all legacy support, and start from scratch. And please, no registry!

Reply Score: 1

RE: I think they should...
by stestagg on Tue 23rd Oct 2007 15:52 UTC in reply to "I think they should..."
stestagg Member since:
2006-06-03

No, it'll be called Registry.net and all your settings will be stored on live.com servers ;) Imagine the startup latency that would create.

Reply Score: 2

Peter Principle
by netpython on Wed 24th Oct 2007 11:37 UTC
netpython
Member since:
2005-07-06

I didn't know the Peter Principle was valid for some OSes as well.

Reply Score: 2