Linked by Thom Holwerda on Wed 5th Jan 2011 22:09 UTC
And this is part two of the story: Microsoft has just confirmed that the next version of Windows NT (referred to as NT here for clarity's sake) will be available for ARM - or more specifically, for SoCs from NVIDIA, Qualcomm, and Texas Instruments. Also announced today at CES is Microsoft Office for ARM. Both Windows NT and Microsoft Office were shown running on ARM hardware during a press conference at CES in Las Vegas.
RE[2]: enough bits?
by jwwf on Thu 6th Jan 2011 21:46 UTC in reply to "RE: enough bits?"

"I sure hope they are planning to target only a future 64 bit ARM.


Considering the Windows codebase is already portable across 32 or 64 bit addressing, it seems like it would be a (pointless) step backward to disable that capability just to spite people.
"

It's not "just to spite people", why would you think that? It is to allocate development resources efficiently, both for OS developers (fewer builds to test) and for all application developers (same reason). In case you haven't noticed, 2008 R2 is already 64 bit only for this very reason. It is a question of "I have X dollars to spend on this project, how can I most efficiently use them?"

Furthermore, there is no 32 bit application base on NT/ARM now, so there is no one who could be spited. My point is, if you are starting with a clean slate, make it clean!


"Just sounds like lazy development to me - what assumptions in your software would implicitly fail on 32 bit addressed systems? Don't you use portable pointer types in your code? Is your code going to simply fail on 128 bit systems someday? The proper use of abstraction goes both ways, my friend..."


Of course it's lazy! Do you test all your C on VAX and M68K? How could somebody be so lazy as to not do that? ;)
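To be fair, the discipline the parent is describing is not exotic. A minimal C99 sketch of what "portable pointer types" looks like in practice (the variable names are just illustrative):

/* Minimal C99 sketch of pointer-size-agnostic code: nothing below
   hard-codes a pointer width, so the same source builds for 32 bit
   and 64 bit targets alike. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *p = &value;

    size_t ptr_width = sizeof(void *);       /* 4 on ILP32, 8 on LP64/LLP64 */
    uintptr_t addr = (uintptr_t)(void *)p;   /* integer wide enough to hold a pointer */

    printf("pointer width: %zu bytes, address: 0x%" PRIxPTR "\n",
           ptr_width, addr);
    return 0;
}

The standard types cost nothing to use; whether anyone actually tests the result on hardware that no longer ships is the part that costs money.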

I own a couple of UNIX boxes from the early 90s. I like playing with them. But I wouldn't actually expect anybody writing software in 2011 to worry about it being portable to OSF/1 or Solaris 2.3. My personal belief is that 32 bit x86 is on its way down that road; others are free to disagree, but as time goes on, I think fewer and fewer people will.

One other thing, just for fun: Let's say that the biggest single system image machine you can buy now can handle 16TB of RAM (e.g., the biggest Altix UV). To hit the 64 bit addressing limit, you need twenty doublings* (16TB is 2^44 bytes, and 2^64 / 2^44 = 2^20), which, even if you assume they happen once per year (dubious), puts us around 2030. Obviously it is possible to hit the limit. But the question is, will the programming environment in 2030 be similar enough to UNIX now that thinking about 128 bit pointers today would actually pay off? On the one hand you could cite my early-90s UNIX machines as evidence that the answer is "yes", but on the other, modern non-trivial C programs are not usually trivially portable to those machines. So it's hard to say how much I should worry about 128 bit pointers; they may be the least of my problems in 2030. Or maybe not. Who knows.

* OK, disk access issues like mmap will make it useful before then. Maybe we'll even want (sparse) process address spaces bigger than that before then. But it doesn't change the core question of whether you can anticipate the programming environments of 2030.
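For anyone who wants to check the back-of-the-envelope math, here is a rough sketch in C. The 16TB (2^44 byte) starting point and the one-doubling-per-year rate are just the assumptions stated above, not predictions:

/* Rough sketch: starting from a 16 TB (2^44 byte) machine in 2011 and
   doubling capacity once a year, when does a flat 64 bit address space
   run out?  Purely illustrative arithmetic. */
#include <stdio.h>

int main(void)
{
    int log2_bytes = 44;   /* 16 TB = 2^44 bytes */
    int year = 2011;

    while (log2_bytes < 64) {   /* 2^64 bytes = the 64 bit limit */
        log2_bytes++;
        year++;
    }

    printf("2^64 bytes reached around %d, after %d doublings\n",
           year, year - 2011);
    return 0;
}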
