Linked by Thom Holwerda on Mon 22nd Oct 2007 13:48 UTC
Earlier today, OSNews ran a story on a presentation held by Microsoft's Eric Traut, the man responsible for the 200 or so kernel and virtualisation engineers working at the company. Eric Traut is also the man who wrote the binary translation engine for the earlier PowerPC versions of VirtualPC (interestingly, this engine is now used to run Xbox 1 [x86] games on the Xbox 360 [PowerPC]) - in other words, he knows what he is talking about when it comes to kernel engineering and virtualisation. His presentation was very interesting to watch, and it offered a little more insight into Windows 7, the codename for the successor to Windows Vista, planned for 2010.
RE[2]: my dream
by markoweb on Mon 22nd Oct 2007 20:56 UTC in reply to "RE: my dream"
markoweb Member since: 2006-11-30

sbergman27

Don't underestimate the future. In 5 years someone might come up with something so revolutionary that it will require that address space, or maybe even more. God knows, maybe we'll all be living in full-HD worlds (sound, music, video, etc.) and memory will come in TB sticks. So to say that >64 bits is unnecessary is like IBM asking in the early 80's - "who needs personal computers?!?"

Creating a 128-bit or larger processor is a piece of cake anyway. All you have to do is enlarge the instruction size and tinker with the microcode. If I'm not mistaken...
The only reason no one is making these is that there is no market for them yet.
But if you are starting anew and using a larger address space doesn't seriously hurt performance, then why settle for less? Why not embrace the future right now?


And for those who still can't see the point of 64-bit processors, all I've got to say to you is: memory - there is never enough of it.

Reply Parent Score: 1

RE[3]: my dream
by sbergman27 on Mon 22nd Oct 2007 21:29 in reply to "RE[2]: my dream"
sbergman27 Member since: 2005-07-24

"""

Don't underestimate the future. In 5 years someone might come up with something so revolutionary that will require that address space or maybe even more.

"""

Then it would be totally infeasible to implement, because available physical memory would fall short of that requirement by many orders of magnitude. At the rate of exponential increase we have seen over the last 20 years, which has remained fairly constant, 2^52 bytes of memory - the limit for future versions of x86_64 processors - would cost about 100 million dollars in 5 years' time. (It would require 262,144 16GB memory sticks, which are likely to be the largest available at that time.) Do you have some reason to think that the rate of *geometric* expansion will increase? It hasn't over the last few decades.
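
For what it's worth, the stick count falls straight out of the arithmetic. A quick sanity check in Python (the $400-per-stick price is purely my assumption, picked only to show how a ~$100 million figure could arise):

    stick_bytes = 16 * 2**30      # one 16GB stick, the assumed largest size
    target_bytes = 2**52          # the physical addressing limit discussed above
    sticks = target_bytes // stick_bytes
    print(sticks)                 # 262144 sticks
    print(sticks * 400)           # 104857600 -> roughly $100M at an assumed $400/stick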

Your terabyte sticks of memory would actually be scheduled for about 2023-2027, BTW.
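
And that window drops out of a simple doubling model. A rough sketch, assuming the largest common stick in 2007 is ~2GB and capacities double every ~1.8-2.2 years (both figures are my assumptions, not anything claimed above):

    import math

    two_gb = 2 * 2**30                           # assumed largest stick in 2007
    one_tb = 2**40
    doublings = math.log2(one_tb / two_gb)       # 9 doublings to reach 1TB
    for years in (1.8, 2.0, 2.2):                # assumed doubling periods
        print(round(2007 + doublings * years))   # 2023, 2025, 2027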

"""
And for those who still can't see the point in 64-bit proccessors, all I've got to say to you is - memory, there is never enough of it.

"""

Precisely. There is no reason in the world to think that memory will be available in large enough quantities to require >64-bit processors for another 40-60 years.

BTW, I should take this opportunity to correct my previous posts now that I've refreshed my memory on the topic. The physical addressing limit of current AMD 64-bit processors is 2^40 bytes (not 2^48), giving us about 16 years of respite. This can be increased to 2^52 (not 2^64), which would give us a total of about 40 years.
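
Both horizons follow from the same doubling model. A minimal sketch, assuming ~4GB (2^32 bytes) as a typical 2007 machine and a two-year doubling period (again, my assumptions):

    base_bits = 32                 # ~4GB typical in 2007 (assumed baseline)
    years_per_doubling = 2         # assumed growth rate
    for limit_bits in (40, 52):
        years = (limit_bits - base_bits) * years_per_doubling
        print(limit_bits, years)   # 2^40 -> 16 years, 2^52 -> 40 years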

My statement is not at all like "who needs personal computers". It is more like "whether people need this much memory or not, it is unlikely to be available in such quantities for at least 40-60 years."

My statement is somewhat *more* like "Nobody will ever need more than 640k of ram". But that statement, if it was ever actually made back then, was *demonstrably* short-sighted and wrong at the time. Can you provide actual *evidence* that my statement is demonstrably short-sighted and wrong?

Edited 2007-10-22 21:42

Reply Parent Score: 2

RE[3]: my dream
by Morin on Tue 23rd Oct 2007 02:29 in reply to "RE[2]: my dream"
Morin Member since: 2005-12-31

> Creating a 128-bit or larger processor is a piece of cake anyway. All
> you have to do is enlarge the instruction size and tinker with the
> microcode. If I'm not mistaken...

The details are a bit more complex, but yes, it would be a piece of cake if there were any market for a 128-bit CPU.

> But if you are starting anew and using a larger address space doesn't
> seriously hurt performance, then why settle for less? Why not embrace
> the future right now?

Increasing the address space size *does* hurt performance. Modern programming involves a lot of passing pointers around, even more so in reference-heavy programming languages such as Java or C#. All those pointers would now be twice the size, meaning the CPU cache, measured in the number of pointers it can hold, effectively halves, resulting in more actual memory accesses. And those are *really* bad for performance. Similar arguments apply to instruction size.

Unless you are changing to an entirely different memory model (e.g. non-uniform pointer size), 128-bit addressing would kill performance.
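
To put numbers on the cache argument, here's a tiny illustration, assuming a 32KB L1 data cache and flat (uniform-size) pointers - the cache size is just an example figure:

    l1_bytes = 32 * 1024                       # assumed 32KB L1 data cache
    for pointer_bits in (64, 128):
        fits = l1_bytes // (pointer_bits // 8)
        print(pointer_bits, fits)              # 64-bit: 4096 fit, 128-bit: 2048 fit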

Reply Parent Score: 3

RE[3]: my dream
by Soulbender on Tue 23rd Oct 2007 12:02 in reply to "RE[2]: my dream"
Soulbender Member since: 2005-08-18

"""
So to say that >64 bits is unnecessary is like IBM asking in the early 80's - "who needs personal computers?!?"
"""

Where's my flying car, personal robot butler and hologram?

"""
Creating a 128-bit or larger processor is a piece of cake anyway. All you have to do is enlarge the instruction size and tinker with the microcode. If I'm not mistaken...
"""

Speaking from experience as a chip designer, right?

"""
memory - there is never enough of it.
"""

Sure, but at any given point in time there's a size beyond which adding more gains you nothing.

Reply Parent Score: 1