Linked by Thom Holwerda on Mon 22nd Oct 2007 13:48 UTC
Windows Earlier today, OSNews ran a story on a presentation held by Microsoft's Eric Traut, the man responsible for the 200 or so kernel and virtualisation engineers working at the company. Eric Traut is also the man who wrote the binary translation engine for the earlier PowerPC versions of VirtualPC (interestingly, this engine is now used to run XBox 1 [x86] games on the XBox 360 [PowerPC]) - in other words, he knows what he is talking about when it comes to kernel engineering and virtualisation. His presentation was very interesting to watch, and it offered a little more insight into Windows 7, the codename for the successor to Windows Vista, planned for 2010.
RE[3]: my dream
by Morin on Tue 23rd Oct 2007 02:29 UTC in reply to "RE[2]: my dream"

> Creating a 128-bit or larger processor is a piece of cake anyways. All
> you have to do is enlarge the instruction size and mingle with the
> microcode. If I'm not mistaken...

The details are a bit more complex, but yes, it would be a piece of cake if there were any market for a 128-bit CPU.

> But if you are starting anew and using a larger address space doesn't
> seriously hurt performance, then why settle for less? Why not embrace
> the future right now?

Increasing the address space size *does* hurt performance. Modern programming involves a lot of passing pointers around, even more so in reference-heavy languages such as Java or C#. All those pointers would now be twice the size, so the CPU cache, measured in the number of pointers it can hold, effectively halves - resulting in more actual memory accesses, and those are *really* bad for performance. A similar argument applies to instruction size.
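To put rough numbers on it, here is a minimal sketch in C. It assumes a 32 KiB L1 data cache, a tree node holding two child pointers plus a key, and a hypothetical 16-byte pointer with 16-byte alignment for the 128-bit case - all the sizes are illustrative, not measurements of any real chip:

#include <stdio.h>

/* Hypothetical binary-tree node: two child pointers plus a small key.
 * With 8-byte (64-bit) pointers this typically occupies 24 bytes after
 * padding; with hypothetical 16-byte (128-bit) pointers and 16-byte
 * alignment it would grow to roughly 48 bytes. */
struct node {
    struct node *left;
    struct node *right;
    int key;
};

int main(void) {
    const size_t l1_bytes = 32 * 1024;      /* assumed 32 KiB L1 data cache */
    size_t size64 = sizeof(struct node);    /* 24 on a typical 64-bit host */
    size_t size128 = 2 * 16 + 16;           /* assumed padded layout with 16-byte pointers */

    printf("nodes fitting in L1, 64-bit pointers:  %zu\n", l1_bytes / size64);
    printf("nodes fitting in L1, 128-bit pointers: %zu\n", l1_bytes / size128);
    return 0;
}

Roughly half as many nodes fit in the cache, so pointer-chasing code suffers roughly twice as many cache misses once its working set outgrows the cache.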

Unless you change to an entirely different memory model (e.g. one with non-uniform pointer sizes), 128-bit addressing would kill performance.
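For what a non-uniform scheme could look like, here is a small C sketch of one common trick: keep objects inside a single large region and store references between them as 32-bit offsets from the region's base, rather than as full-width pointers. The names (heap_base, cref, deref) are made up for illustration:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a non-uniform pointer scheme: objects live inside one large
 * region, and references between them are stored as 32-bit offsets from
 * the region's base instead of native pointers. The offsets stay 4 bytes
 * no matter how wide the machine's native pointers are, at the cost of
 * limiting the region to 4 GiB. */

static uint8_t *heap_base;   /* base of the managed region */

typedef uint32_t cref;       /* "compressed reference": offset from heap_base */

static void *deref(cref r)     { return heap_base + r; }
static cref  make_ref(void *p) { return (cref)((uint8_t *)p - heap_base); }

int main(void) {
    heap_base = malloc(1 << 20);          /* 1 MiB demo region */
    if (!heap_base) return 1;

    int *obj = (int *)(heap_base + 128);  /* pretend-allocate at offset 128 */
    *obj = 42;

    cref r = make_ref(obj);               /* 4-byte reference, regardless of
                                             the machine's pointer width */
    printf("value via compressed ref: %d\n", *(int *)deref(r));

    free(heap_base);
    return 0;
}

Shifting the offset to exploit alignment (e.g. left by 3 bits for 8-byte-aligned objects) would stretch the reachable region to 32 GiB; some managed runtimes use essentially this compressed-pointer trick internally.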
