Linked by Thom Holwerda on Wed 5th Jan 2011 22:09 UTC
Windows And this is part two of the story: Microsoft has just confirmed the next version of Windows NT (referring to it as NT for clarity's sake) will be available for ARM - or more specifically, for SoCs from NVIDIA, Qualcomm, and Texas Instruments. Also announced today at CES is Microsoft Office for ARM. Both Windows NT and Microsoft Office were shown running on ARM during a press conference at CES in Las Vegas.
Thread beginning with comment 456079
To read all comments associated with this story, please click here.
enough bits?
by jwwf on Thu 6th Jan 2011 00:27 UTC
jwwf
Member since:
2006-01-19

I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.

Reply Score: 3

RE: enough bits?
by umccullough on Thu 6th Jan 2011 01:47 in reply to "enough bits?"
umccullough Member since:
2006-01-26

I sure hope they are planning to target only a future 64 bit ARM.


Considering the Windows codebase is already portable across 32 or 64 bit addressing, it seems like it would be a (pointless) step backward to disable that capability just to spite people.

It would be annoying if 32 bit addressing gets a new lease on life due to this.


What a strange thing to be annoyed at... especially given that if you never need 64bit addressing, you're saving the overhead of carrying pointers twice as wide as you actually need.

Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.


Just sounds like lazy development to me - what assumptions in your software would implicitly fail on 32 bit addressed systems? Don't you use portable pointer types in your code? Is your code going to simply fail on 128bit systems someday? The proper use of abstraction goes both ways, my friend...

Reply Parent Score: 6

RE[2]: enough bits?
by jwwf on Thu 6th Jan 2011 21:46 in reply to "RE: enough bits?"
jwwf Member since:
2006-01-19

"I sure hope they are planning to target only a future 64 bit ARM.


Considering the Windows codebase is already portable across 32 or 64 bit addressing, it seems like it would be a (pointless) step backward to disable that capability just to spite people.
"

It's not "just to spite people", why would you think that? It is to allocate development resources efficiently, both for OS developers (fewer builds to test) and for all application developers (same reason). In case you haven't noticed, Windows Server 2008 R2 is already 64-bit only for this very reason. It is a question of "I have X dollars to spend on this project, how can I most efficiently use them?"

Furthermore, there is no 32 bit application base on NT/ARM now, so there is no one who could be spited. My point is, if you are starting with a clean slate, make it clean!


Just sounds like lazy development to me - what assumptions in your software would implicitly fail on 32 bit addressed systems? Don't you use portable pointer types in your code? Is your code going to simply fail on 128bit systems someday? The proper use of abstraction goes both ways my friend...


Of course it's lazy! Do you test all your C on VAX and M68K? How could somebody be so lazy as to not do that? ;)

I own a couple of UNIX boxes from the early 90s. I like playing with them. But I wouldn't actually expect anybody writing software in 2011 to worry about it being portable to OSF/1 or Solaris 2.3. My personal belief is that 32 bit x86 is on its way down that road; others are free to disagree, but as time goes on, I think fewer and fewer people will.

One other thing, just for fun: Let's say that the biggest single system image machine you can buy now can handle 16TB of RAM (e.g., the biggest Altix UV). To hit the 64 bit addressing limit, you need twenty doublings*, which, even if you assume they happen once per year (dubious), puts us around 2030. Obviously it is possible to hit the limit. But the question is, will the programming environment in 2030 be similar enough to UNIX now that thinking about 128 bit pointers today would actually pay off? On the one hand, you could cite my 1990 UNIX machines as evidence that the answer is "yes"; on the other, modern non-trivial C programs are not usually trivially portable to those machines. So it's hard to say how much I should worry about 128 bit pointers; they may be the least of my problems in 2030. Or maybe not. Who knows.

* OK, disk access issues like mmap will make it useful before then. Maybe we'll even want (sparse) process address spaces bigger than that before then. But it doesn't change the core question of whether you can anticipate the programming environments of 2030.

Reply Parent Score: 2

RE: enough bits?
by lemur2 on Thu 6th Jan 2011 12:19 in reply to "enough bits?"
lemur2 Member since:
2007-02-17

I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.


The ARM Cortex-A15 MPCore CPU architecture, which is the one aimed at desktops and servers, is a 32-bit architecture. Nevertheless, it does not suffer from a limitation of 4GB of main memory: it can address up to one terabyte (1TB) of physical memory via a 40-bit physical address space.

http://www.engadget.com/2010/09/09/arm-reveals-eagle-core-as-cortex...

The Cortex-A15 MPCore picks up where the A9 left off, but with reportedly five times the power of existing CPUs, raising the bar for ARM-based single- and dual-core cell phone processors up to 1.5GHz... or as high as 2.5GHz in quad-core server-friendly rigs with hardware virtualization baked in and support for well over 4GB of memory. One terabyte, actually.


I believe the Cortex-A15 MPCore achieves this feat through ARM's Large Physical Address Extension (LPAE), which extends the MMU's page tables so that 32-bit virtual addresses map onto 40-bit physical addresses.

Edited 2011-01-06 12:21 UTC

Reply Parent Score: 2

RE[2]: enough bits?
by oiaohm on Thu 6th Jan 2011 12:53 in reply to "RE: enough bits?"
oiaohm Member since:
2009-05-30

"I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.


The ARM Cortex-A15 MPCore CPU architecture, which is the one aimed at desktops and servers, is a 32-bit architecture. Nevertheless, it does not suffer from a limitation of 4GB of main memory, it can in fact address up to one terabyte (1TB) of main memory.

http://www.engadget.com/2010/09/09/arm-reveals-eagle-core-as-cortex...
"

Also, an important note: the 4GB limit of a 32-bit OS on most x86 chips is not a real hardware limit either. PAE mode raises the physical address space to 64GB (36 address bits), or more on some chips, while the CPU stays in 32-bit mode.

So 32-bit Windows being capped at 4GB is mostly a market-segmentation decision by Microsoft, nothing more.

So we can expect MS to treat ARM the same way they treat x86: different editions with different memory limits that have nothing to do with the real hardware limits.

Reply Parent Score: 1