Linked by Thom Holwerda on Wed 5th Jan 2011 22:09 UTC
Windows And this is part two of the story: Microsoft has just confirmed that the next version of Windows NT (referring to it as NT for clarity's sake) will be available for ARM - or more specifically, for SoCs from NVIDIA, Qualcomm, and Texas Instruments. Also announced today at CES is Microsoft Office for ARM. Both Windows NT and Microsoft Office were shown running on ARM at a press conference at CES in Las Vegas.
BluenoseJake Member since:
2005-08-11

Windows 8, like Windows 7 and every version of Windows since XP, is Windows NT. Do you think that MS rewrites Windows every time they release a new version?

Reply Score: 5

phoenix Member since:
2005-07-11

Windows 8 will just be another evolution of Windows 7, which was an evolution of Windows Vista, which was an evolution of Windows XP, which combined the Windows 9x GUI look with the Windows NT core, evolved from Windows 2000 and, originally, from Windows NT 4.0 and 3.5 before it.

They aren't going to chuck all the code from Windows 7, start from scratch, and create an entirely new Windows OS. That's like saying Ubuntu 11.04 is completely new and not an evolution of Debian.

Reply Score: 2

mfaudzinr Member since:
2008-02-13

Think Midori. It is a research OS being brewed at Microsoft Research and rumored to be the successor to Windows 7 (http://www.zdnet.com/blog/microsoft/goodbye-xp-hello-midori/1466). Nothing can be substantiated at the moment. All speculation...

Reply Score: 1

Bill Shooter of Bul Member since:
2006-07-14

A new file system does not mean a completely different operating system.

With any operating system there is always stuff that changes, but a lot remains the same. Determining when to call something a completely new operating system versus just a new version of one is sometimes difficult, but I don't think Windows 8 will have enough differences to qualify as "completely new". Now, if it's really just Midori (a kernel written in C#) with a compatibility API for legacy Win32 stuff, that would be a "completely new" operating system. And there has been some speculation that something of that calibre is or has been considered.

Reply Score: 4

kaiwai Member since:
2005-07-06

You, and the people that mod me down, are not very up to speed, are you?

Win 8 will have an overhauled filesystem.
http://www.geek.com/articles/news/windows-8-coming-in-2012-20090422...

And a lot of other references I'm not even going to link.

Do you think I just make my comments up?


That has to be the world's crappiest article; in one paragraph the author says: "Windows 8 will see a radical rehaul of the file system", and in the next the author says: "One job posting looks for someone to help program the next generation of Windows’ Distributed File System Replication storage technology, with “new critical features… including cluster support and support for one way replication” and performance improvements a big plus."

How on earth does the author leap from a job advertisement specialising in clustered file systems to the conclusion that there is a 'radical overhaul' of the file system coming in Windows 8? Pie-in-the-sky circle jerks may be fun, but I'd sooner read an article with real substance instead of pie-in-the-sky promises.

Windows 8 will not be a revolution; it will be an evolution. The foundations are there; it is a matter of Microsoft making the changes to take advantage of them.

Reply Score: 5

Tuishimi Member since:
2005-07-06

Agreed. Evolution. But it should be good evolution. No, I don't have proof of that, but it just makes good business sense at this point... use what you have that is obviously working well for them at the moment. They can continue to work on "midori" or whatever else might be in the oven in the background, using fewer resources, until the time comes when they really NEED to put it out.

Reply Score: 2

kaiwai Member since:
2005-07-06

Agreed. Evolution. But it should be good evolution. No, I don't have proof of that, but it just makes good business sense at this point... use what you have that is obviously working well for them at the moment. They can continue to work on "midori" or whatever else might be in the oven in the background, using fewer resources, until the time comes when they really NEED to put it out.


I'd say Midori is more a 'playground' for the future, once there is a movement away from Win32, but that won't be for at least another 5-10 years at the earliest. There are lots of projects that never really turn into complete end products - Microsoft has many projects on the go where the end result is not necessarily a product, but what they learnt during the project gets put into existing products.

Here is another cool article over at Neowin:

http://www.neowin.net/news/what-jupiter-means-for-windows-8

Now that is awesome; I hope that when they do open their 'application store' they put restrictions on it, such as having to use the latest Visual Studio and the latest APIs - forcing developers to upgrade their code so that applications look gorgeous on the desktop rather than the epitome of fugly, as many do today.

Reply Score: 2

BluenoseJake Member since:
2005-08-11

The filesystem is not related to the kernel, or any other part of Windows; it is just a subsystem. NT could work with NTFS, FAT, or, if anybody wanted to, it could use ext3, XFS, really anything.

Does Linux change to some other OS depending on what filesystem it is currently booting with? No, it does not. FreeBSD does not change to Solaris just because you are using ZFS.

Reply Score: 2

oiaohm Member since:
2009-05-30

The filesystem is not related to the kernel, or any other part of Windows; it is just a subsystem. NT could work with NTFS, FAT, or, if anybody wanted to, it could use ext3, XFS, really anything.

Does Linux change to some other OS depending on what filesystem it is currently booting with? No, it does not. FreeBSD does not change to Solaris just because you are using ZFS.

To be correct: yes, Linux sometimes does have to be changed depending on what file-system it is booting on. For example, the SELinux Linux Security Module requires particular features from the file-system to operate. This gives an OS that behaves differently. Also, booting a real-time Linux imposes particular requirements on which file-systems you can and cannot use.

Yes, you stay in the same family, but distributions cannot always remain the same across different file-systems, to the point of, in some cases, not being installable with particular file-systems as the root file-system.

Also, "subsystem" has a very particular meaning when talking about NT, and it is not related to file-systems. A file-system in NT is an "Installable File System": http://en.wikipedia.org/wiki/Installable_File_System

Linux is simpler to boot on an alien filesystem than Windows.

Linux has a kernel image and initrd that are loaded by the bootloader; these do not have to be on the filesystem that the OS will boot into.

Now let's move over to NT. The bootloader of NT-style OSes reads the Registry to work out what drivers to load with the kernel. The issue here is that the bootloader must be able to read the filesystem the OS is on. So Windows boot loaders carry a file-system driver independent of the Installable File System.
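To illustrate the Linux side of this, here is a sketch of a GRUB menu entry: the bootloader only needs to read /boot (here a separate partition), while the root filesystem named on the kernel command line can be anything the initrd carries a driver for. The device names and paths below are illustrative, not taken from any real system.

```text
menuentry "Linux on an unusual root FS" {
    # GRUB reads only this partition; it can stay plain ext2.
    set root=(hd0,1)
    # The root= filesystem (XFS here) never has to be readable
    # by the bootloader itself; the initrd brings the driver.
    linux  /vmlinuz root=/dev/sda2 rootfstype=xfs
    initrd /initrd.img
}
```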

So it is not true that anybody can use anything with NT. They would have to rewrite the bootloader and write an IFS, whereas with Linux a person only has to write a filesystem driver for Linux.

Now, the nasty part: with Windows, replacing the bootloader could put you on the wrong side of an update. So really, MS is fully in charge of which file-systems you can boot Windows on.

The claim that the filesystem is not related to the kernel or any other part of Windows is invalid. It is related to the bootloader, which loads the kernel and also loads the drivers the OS needs to boot.

Linux can claim that the file-system is not related to the kernel, since a file-system driver can be bundled into the initrd and loaded by any Linux-supporting boot loader, for starting up on any file-system you like. But then you have to remember the other points above: not all Linux distributions will operate after that, due to the limitations of the file-system drivers.

Yes, Linux distributions can be broken down into groups by security design. So Linux is not a uniform system; talking about it as one item is foolishness.

FreeBSD is a distribution from the BSD class of OSes; yes, there is more than one OS in the BSD class, and some of them have file-system limitations. Solaris, again, is a distribution from the Solaris class of OSes, but at this stage Solaris has not branched into having file-system support differences within its class. Comparing a single distribution against a whole class is really a mistake.

Reply Score: 1

BluenoseJake Member since:
2005-08-11

There is nothing stopping MS from supporting other file systems. If MS put out a version of Windows 7 that used ext3, IT WOULD STILL BE WINDOWS. All the arguments in the world would still not make it something other than Windows.

You can continue to argue, but that doesn't make you right. And your arguments about Linux and BSD do not make sense, because regardless of what filesystem they are using (and they can use several at the same time), THEY ARE STILL LINUX AND BSD.

Edited 2011-01-06 14:18 UTC

Reply Score: 2

fretinator Member since:
2005-07-06

Actually, I've used EXT3 on Windows, and it worked well.

Reply Score: 3

BluenoseJake Member since:
2005-08-11

Where'd you find such a beast?

Reply Score: 2

phoenix Member since:
2005-07-11

OMG! You made your own version of Windows? What did you call it? ;)

Reply Score: 2

TheGZeus Member since:
2010-05-19

Did you just say that the filesystem has nothing to do with the kernel?

Reply Score: 2

Tuishimi Member since:
2005-07-06

When I think "kernel" I think process scheduling, memory management, etc. I'm sure that is all he meant by that. I/O drivers and file systems can vary, as long as the kernel can manage them and they meet its criteria.

Reply Score: 2

Neolander Member since:
2010-03-08

Did you just say that the filesystem has nothing to do with the kernel?

I don't know how it's done in the Windows NT kernel, but in most other modern kernels the drivers used to read extN, NTFS, FAT, etc. are removable modules of the kernel, not a core part of it. So it's not too far-fetched to say that the kernel is not tied to a specific file system, as long as the drivers for all popular FSs can be loaded into it.

Now if you're talking about the virtual file system (VFS) - that is, the hierarchical directory-based organization of files which applications and users see - that's another story. It's a core part of most monolithic kernels.

I admit that the distinction is subtle.
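That subtle distinction can be sketched in a few lines of toy Python: the "kernel" side below knows only a driver interface and a mount table, while the on-disk formats live in interchangeable driver classes. Every class and method name here is made up for illustration; this is not how NT or Linux actually spell any of it.

```python
# Toy model of the VFS/driver split described above. The Vfs class
# (core, like a monolithic kernel's VFS layer) never inspects on-disk
# formats; the drivers (removable modules) do.

class FilesystemDriver:
    """Interface every filesystem driver must implement."""
    def read_file(self, relpath: str) -> bytes:
        raise NotImplementedError

class Ext3Driver(FilesystemDriver):
    def __init__(self, disk: dict):
        self.disk = disk                      # stand-in for on-disk data
    def read_file(self, relpath: str) -> bytes:
        return self.disk[relpath]

class NtfsDriver(FilesystemDriver):
    def __init__(self, disk: dict):
        self.disk = disk
    def read_file(self, relpath: str) -> bytes:
        return self.disk[relpath]

class Vfs:
    """The hierarchical view applications see: core and driver-agnostic."""
    def __init__(self):
        self.mounts = {}                      # mount point -> driver
    def mount(self, point: str, driver: FilesystemDriver):
        self.mounts[point] = driver
    def read(self, path: str) -> bytes:
        # Longest-prefix match picks the responsible driver.
        for point in sorted(self.mounts, key=len, reverse=True):
            if path.startswith(point):
                return self.mounts[point].read_file(path[len(point):].lstrip("/"))
        raise FileNotFoundError(path)

vfs = Vfs()
vfs.mount("/", Ext3Driver({"etc/motd": b"hello"}))
vfs.mount("/win", NtfsDriver({"boot.ini": b"[boot loader]"}))
print(vfs.read("/etc/motd"))      # the caller never learns which driver ran
```

Swapping Ext3Driver for NtfsDriver under a mount point changes nothing above the interface - that is the sense in which the kernel is not tied to any one filesystem - while the Vfs object itself is the part that stays core.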

Edited 2011-01-06 17:53 UTC

Reply Score: 1

TheGZeus Member since:
2010-05-19

It still happens in kernel space in NT, Linux, BSD...
If it's not a microkernel, then it's part of the kernel.

The distinction is technical, but should be made.

Reply Score: 2

BluenoseJake Member since:
2005-08-11

From the standpoint of filesystems, sure, you can have 3rd-party filesystems that run as drivers, and if MS added support for ext3 to Windows, for example, it would still be the NT kernel, regardless of filesystem support.

Reply Score: 2

kovacm Member since:
2010-12-16

Windows 8, like Windows 7, and every version of Windows since XP, is Windows NT.

since Windows 2000...

Reply Score: 2

Johann Chua Member since:
2005-07-22

Windows Me was released after Windows 2000.

Reply Score: 4

BluenoseJake Member since:
2005-08-11

When Windows 2000 came out, it was the "business OS". WinME had also just come out, and it is not NT.

Reply Score: 3

Carewolf Member since:
2005-09-08

There was a Windows 2000 Home version too. Windows 2000 was the first version designed to cater to both business and home users; somehow they just didn't think home users were quite ready, or maybe multiple teams were competing inside Microsoft. It is still a mystery today WTF was up with the ME thing.

Reply Score: 2

BluenoseJake Member since:
2005-08-11

There certainly was not a Windows 2000 Home version. There was Windows 2000 Server, and Windows 2000 Pro. No other versions.

Edited 2011-01-06 19:06 UTC

Reply Score: 2

Bobthearch Member since:
2006-01-27

There was no Home version, as far as I recall. Here's what Wikipedia says:

Four editions of Windows 2000 were released: Professional, Server, Advanced Server, and Datacenter Server. Additionally, Microsoft sold Windows 2000 Advanced Server Limited Edition and Windows 2000 Datacenter Server Limited Edition...

Reply Score: 1

bornagainenguin Member since:
2005-08-07

Carewolf mentioned...

There was a Windows 2000 Home version too. Windows 2000 was the first version designed to cater to both business and home users; somehow they just didn't think home users were quite ready, or maybe multiple teams were competing inside Microsoft. It is still a mystery today WTF was up with the ME thing.


Nice to know I'm not the only one who remembers Windows Neptune! ;)

http://en.wikipedia.org/wiki/Windows_Neptune

--bornagainpenguin

Reply Score: 2

BluenoseJake Member since:
2005-08-11

But Neptune was never released. Therefore it never happened.

Reply Score: 2

dylansmrjones Member since:
2005-10-02

It is without doubt very much NT. The likelihood of Microsoft completely rewriting every single subsystem to such an extent that they are no longer the least bit compatible is very low. Particularly considering the portability of the NT kernel and the Windows API.

But of course... Microsoft may have ported the Windows API to Haiku, OS/2 or even the Linux kernel. I doubt it though.

Reply Score: 2

umccullough Member since:
2006-01-26

I can't see a reference to Windows NT in the press release?


You read Thom's summary wrong - he was the one clarifying this, to make sure nobody was confusing it with Windows CE/Phone/Mobile/Pocket (whatever they call it now).

Thus, Windows 8 for ARM will not be based on NT the same way that Apple's iOS was not based on OS X.

Reply Score: 2

sj87 Member since:
2007-12-16

No, he reads the summary as it's written. Thom also claims the demonstrated Windows version has already been released. The truth is Microsoft demoed some Windows version that Thom himself refers to as 'Windows NT', and that particular version has not been released yet.

Therefore we still have to wait some time for a repeat of 1996's multi-platform Windows releases.

Edited 2011-01-06 06:39 UTC

Reply Score: 2

bhtooefr Member since:
2009-02-19

And there's a (potentially doctored? it looks a bit odd) screenshot of "Windows 6.2.7867" - which fits right into the Win7 lineage - as well as a screenshot of it running Office 2010 on ARM.

This is NT, no doubt about it. (As opposed to CE, 9x, or anything like that.)

Reply Score: 1

BC
by vivainio on Wed 5th Jan 2011 22:20 UTC
vivainio
Member since:
2008-12-26

If this turns out to be successful, it will be the biggest binary break in the history of mankind. Microsoft delayed this for a very good reason, and that's not big/little endian ;-).

It means there is no longer value in all the "legacy" crap that runs only on Windows (shareware, etc.), and it means there will be a bunch of Windows computers running around that are immune to computer viruses (for a while).

Reply Score: 5

RE: BC
by Laurence on Wed 5th Jan 2011 23:34 UTC in reply to "BC"
Laurence Member since:
2007-03-26

If this turns out to be successful, it will be the biggest binary break in the history of mankind. Microsoft delayed this for a very good reason, and that's not big/little endian ;-).

It means there is no longer value in all the "legacy" crap that runs only on Windows (shareware, etc.), and it means there will be a bunch of Windows computers running around that are immune to computer viruses (for a while).

Viruses (in the strictest sense of the term) - perhaps; that depends on how MS handles x86 emulation (if at all).

Malware - definitely not. So long as shell scripting and other interpreted code can still execute, malware can still be written. In fact, with Office being ported to ARM, you instantly open up the problem that the same malicious VBA macros from x86 Office will work on ARM NT too. The same would be true for WSH, PowerShell and even DHTML et al. content.

Edited 2011-01-05 23:39 UTC

Reply Score: 3

RE[2]: BC
by lemur2 on Thu 6th Jan 2011 09:27 UTC in reply to "RE: BC"
lemur2 Member since:
2007-02-17

"If this turns out to be successful, it will be the biggest binary break in the history of mankind. Microsoft delayed this for a very good reason, and that's not big/little endian ;-).

It means there is no longer value in all the "legacy" crap that runs only on Windows (shareware, etc.), and it means there will be a bunch of Windows computers running around that are immune to computer viruses (for a while).

Viruses (in the strictest sense of the term) - perhaps; that depends on how MS handles x86 emulation (if at all).

Malware - definitely not. So long as shell scripting and other interpreted code can still execute, malware can still be written. In fact, with Office being ported to ARM, you instantly open up the problem that the same malicious VBA macros from x86 Office will work on ARM NT too. The same would be true for WSH, PowerShell and even DHTML et al. content.
"

Also, if Windows NT and MS Office can both be recompiled for ARM, so too can any viruses or other malware.

Just about the only thing that needs to be retained (in order for Windows malware to still work on ARM) is that the OS API is still Windows. That way, the same source code can still be recompiled for a different machine architecture.

That is probably exactly what Microsoft themselves did to make MS Office for ARM.

In the short term Windows on ARM won't have any malware, but if Windows on ARM reaches significant usage numbers, Windows malware for ARM will very soon start to appear.

The essential features for malware are: (1) the API must be consistent (so that source code can be recompiled), and (2) trade-secret source code with binary-only executables must be routinely distributed and installed by end users.

Windows for ARM will faithfully retain those two essential elements from Windows for x86.

Edited 2011-01-06 09:31 UTC

Reply Score: 2

RE[3]: BC
by lucas_maximus on Thu 6th Jan 2011 10:22 UTC in reply to "RE[2]: BC"
lucas_maximus Member since:
2009-08-18

The essential features for malware are: (1) the API must be consistent (so that source code can be recompiled),


1) This is also necessary for 3rd parties to write good software for a platform - software that can run on multiple versions of the same operating system on multiple platforms.

(2) trade-secret source code with binary-only executables must be routinely distributed and installed by end users.


Which isn't really a problem if people download the closed-source executables from a reputable source, i.e. the distributor.

If you downloaded a shell script for Unix/Linux from a random site without understanding how it worked, and just ran it, it could wreak havoc on your system as well.

Ergo, the problem is user education, not the fact that it is closed source. Funnily enough, as an educated user I have no problems with viruses and malware, even though I use both open and closed source applications.

But you will continue to push your anti-Windows/anti-closed-source agenda at every opportunity.

Edited 2011-01-06 10:24 UTC

Reply Score: 1

RE[4]: BC
by lemur2 on Thu 6th Jan 2011 11:58 UTC in reply to "RE[3]: BC"
lemur2 Member since:
2007-02-17

"The essential features for malware are: (1) the API must be consistent (so that source code can be recompiled),


1) This is also necessary for 3rd parties to write good software for a platform - software that can run on multiple versions of the same operating system on multiple platforms.

(2) trade-secret source code with binary-only executables must be routinely distributed and installed by end users.


Which isn't really a problem if people download the closed-source executables from a reputable source, i.e. the distributor.

If you downloaded a shell script for Unix/Linux from a random site without understanding how it worked, and just ran it, it could wreak havoc on your system as well.

Ergo, the problem is user education, not the fact that it is closed source. Funnily enough, as an educated user I have no problems with viruses and malware, even though I use both open and closed source applications.

But you will continue to push your anti-Windows/anti-closed-source agenda at every opportunity.
"

There is indeed a great deal of closed-source software, distributed as binary executables only, which is perfectly good and functional.

The problem is that almost all malware is also distributed as closed-source binary executables only, and (being closed source) there is no way that anyone other than the creators of a given piece of such software can tell the difference. No amount of user education will change the fact that no-one other than the authors can tell whether a given closed-source binary executable does or does not contain new malware.

This fact is only relevant to this topic because someone stated that Windows for ARM would initially be free of malware, which is true; my point is that there is nothing about ARM that would keep this true for long.

It is "made for Windows" and "distributed via closed-source binary executables" that characterise 99% of existing malware; x86/x86_64 versus ARM really doesn't come into the picture. Just as Microsoft can fairly readily make a version of MS Office for ARM, so can malware authors rapidly make an ARM version of their trojan malware in a similar fashion. It merely has to become worth their while.

BTW ... my agenda is merely to point out facts such as these to everybody, so they can make good decisions for themselves regarding which software they choose to run on their hardware. I make absolutely no apology for this agenda.

What exactly is your agenda in trying to disparage mine?

Edited 2011-01-06 12:10 UTC

Reply Score: 2

RE[5]: BC
by lemur2 on Thu 6th Jan 2011 12:34 UTC in reply to "RE[4]: BC"
lemur2 Member since:
2007-02-17

This fact is only relevant to this topic because someone stated that Windows for ARM would initially be free of malware, which is true; my point is that there is nothing about ARM that would keep this true for long.

It is "made for Windows" and "distributed via closed-source binary executables" that characterise 99% of existing malware; x86/x86_64 versus ARM really doesn't come into the picture. Just as Microsoft can fairly readily make a version of MS Office for ARM, so can malware authors rapidly make an ARM version of their trojan malware in a similar fashion. It merely has to become worth their while.


Actually, it occurs to me that if Windows on ARM does gain appreciable market share, such that it becomes worthwhile for malware authors to port their Windows malware (which is almost all malware) to ARM, then existing virus databases will be useless: any recompiled-for-ARM malware will have a different binary "signature" than the x86/x86_64 malware does.
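A toy sketch of why a byte-level signature database goes stale across a recompile; the "builds" below are just stand-in byte strings, not real machine code:

```python
import hashlib

# Hypothetical malware compiled for two architectures: same source,
# different machine code, therefore different hashes.
x86_build = b"\x55\x89\xe5" + b"shared payload logic"
arm_build = b"\x04\xe0\x2d\xe5" + b"shared payload logic"

# A scanner whose signature database only knows the x86 build.
signature_db = {hashlib.sha256(x86_build).hexdigest()}

def flagged(binary: bytes) -> bool:
    return hashlib.sha256(binary).hexdigest() in signature_db

print(flagged(x86_build))   # True: known signature
print(flagged(arm_build))   # False: the ARM recompile slips past
```

Real scanners also use heuristics and partial signatures, so this overstates the gap somewhat, but exact-hash databases do have to be rebuilt per build.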

This will open the beginning of a "golden age" for Windows-for-ARM malware, until some lengthy time later when the antivirus and anti-malware scanner authors have built up a similar signature database for the new for-Windows-for-ARM malware binaries.

Edited 2011-01-06 12:39 UTC

Reply Score: 1

RE[5]: BC
by Neolander on Thu 6th Jan 2011 13:24 UTC in reply to "RE[4]: BC"
Neolander Member since:
2010-03-08

The problem is that almost all malware is also distributed as closed-source binary executables only, and that (being closed source) there is no way that anyone other than the creators of any given piece of such software can tell the difference. No amount of user education will change the fact that no-one (other than the authors of the software) can tell if a given closed-source binary executable does or does not contain new malware.

At least a part of malware can be blocked without knowing how a program works internally, by using a capability-based security model. If the binary blob is sandboxed, it can only do the amount of harm it has been allowed to do.

Most desktop applications, as an example, don't need full access to the user's home folder. Really, they don't. Most of the time, they use this access to open either private config files or user-designated files. Thus, if we only allow desktop apps to access their own config files and user-designated files, we get rid of the part of malware that used this universal access to the user's home folder for privacy violations or for silently deleting and corrupting files without the user knowing.

It's exactly the same tactic as preventing forkbombing by not allowing a process to fork an infinite number of times by default. Seriously, what kind of non-system software would require that with honest intent?

This doesn't block the "please enter your facebook password in the form below" kind of malware, though... But at least the user is conscious of what he's doing now. Only then may user education work.
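The capability idea can be mocked up in a few lines. Real systems enforce this in the kernel or a broker process rather than inside the app, and everything named below (the Sandbox class, the paths) is purely illustrative:

```python
# Capability-style check: an app may only open paths it was granted.
class Sandbox:
    def __init__(self, granted_prefixes):
        self.granted = tuple(granted_prefixes)

    def open_path(self, path: str) -> str:
        if not path.startswith(self.granted):  # str.startswith accepts a tuple
            raise PermissionError("no capability for " + path)
        return "opened " + path

# Grant only the app's own config dir and a user-picked directory.
app = Sandbox(["/home/user/.config/myapp/", "/tmp/user-picked/"])
print(app.open_path("/home/user/.config/myapp/settings.ini"))
try:
    app.open_path("/home/user/Documents/passwords.txt")  # privacy grab
except PermissionError as e:
    print("blocked:", e)
```

A malicious blob granted only those two prefixes simply cannot trawl the rest of the home folder, however unreadable its binary is - which is the point being made above.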

Edited 2011-01-06 13:32 UTC

Reply Score: 1

RE[5]: BC
by lucas_maximus on Thu 6th Jan 2011 15:10 UTC in reply to "RE[4]: BC"
lucas_maximus Member since:
2009-08-18

The problem is that almost all malware is also distributed as closed-source binary executables only, and that (being closed source) there is no way that anyone other than the creators of any given piece of such software can tell the difference. No amount of user education will change the fact that no-one (other than the authors of the software) can tell if a given closed-source binary executable does or does not contain new malware.


And that is why you get the software from the original author, and guess what ... if you educate someone to always get the software from the original author ... mmmmm.

Furthermore, if someone is so uneducated as to how to avoid threats, how will it being open source help??? A malware author can just offer an "alternative download source" and stick a keylogger in there, for example... having the source won't help, because the uneducated simply won't know any different.

Also, you obviously haven't heard of a checksum then? They use them on Unix/Linux binary packages as well, and they can be used on any file to validate its integrity.

For example, I remember Windows XP Service Pack 1 having a checksum in the installer properties... if this didn't match what Microsoft published, you had a duff/dodgy download.
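Checksum verification of that kind is a few lines in any language. A sketch in Python, where the installer bytes and the published digest are invented for the demo:

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digest the vendor publishes alongside the download
# (computed here from the "official" bytes for the demo).
official = b"official service pack bytes"
published_digest = sha256_hex(official)

def verify(downloaded: bytes, expected_hex: str) -> bool:
    # compare_digest avoids leaking information via timing.
    return hmac.compare_digest(sha256_hex(downloaded), expected_hex)

print(verify(official, published_digest))                     # True: intact
print(verify(b"tampered installer bytes", published_digest))  # False: duff download
```

Note the limitation raised elsewhere in the thread: a matching checksum only proves you received exactly what the author published, not that what the author published is benign.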

BTW ... my agenda is merely to point out facts such as these to everybody, so they can make good decisions for themselves regarding which software they choose to run on their hardware. I make absolutely no apology for this agenda.


The thing is, your "facts" aren't facts. They are opinions from someone who IMO doesn't really have any practical experience of developing or deploying software.

Unless you work directly in the software industry as a developer or a manager of a development team, you simply don't understand the landscape and the issues that developers face.

Also, you are biased in thinking that open-sourcing everything is a cure for all software problems. This IMO couldn't be further from the truth.

What exactly is your agenda in trying to disparage mine?


Because I think you are biased and do not present the facts fairly.

Reply Score: 2

RE[6]: BC
by lemur2 on Fri 7th Jan 2011 01:18 UTC in reply to "RE[5]: BC"
lemur2 Member since:
2007-02-17

"The problem is that almost all malware is also distributed as closed-source binary executables only, and that (being closed source) there is no way that anyone other than the creators of any given piece of such software can tell the difference. No amount of user education will change the fact that no-one (other than the authors of the software) can tell if a given closed-source binary executable does or does not contain new malware.


And that is why you get the software from the original author, and guess what ... if you educate someone to always get the software from the original author ... mmmmm.
"

The point is that if the original author is a malware author, then even going to the trouble of getting software directly from the original author won't prevent it from containing malware.

Furthermore, if someone is so uneducated as to how to avoid threats, how will it being open source help???


It is a matter of adopting a self-imposed policy. Linux distributions all maintain repositories of source code, and parallel repositories of binary executables compiled from that source code. Anyone at all can download the source code and verify that compiling it produces the corresponding binary executable. This means that people who did not write the code can nevertheless see what is in the code, compile it for themselves to verify its integrity, and use that code on their systems.

Any user adopting a self-imposed policy of only installing software directly from such repositories is guaranteed to never get a malware infection on his/her system. There is a very long history of vast amounts of open source software delivered via this means which proves this claim.

A malware author can just offer an "alternative download source" and stick a key logger in there for example ... having the source won't help because the uneducated simply won't know any different.


Yes, it will make a difference. Not every single user needs to know how source code works; just one user needs to download the source code, discover the keylogger within it, and "blow the whistle" on that code. It can then be added to a blacklist for all users. It only takes one person, out of millions of users, to spot the malware.

Also, you obviously haven't heard of a checksum then? They use them on Unix/Linux binary packages as well, and they can be used on any file to validate its integrity.


Certainly. If you use a checksum to verify that you have downloaded a closed-source binary package correctly (even directly from the original author), and the original author did deliberately include malware within that software, then all you have managed to do is confirm that you have a correct copy of the malware-containing package.

For example, I remember Windows XP Service Pack 1 having a checksum in the installer properties... if this didn't match what Microsoft published, you had a duff/dodgy download.


Fine. I don't claim that this is not the case, and I do acknowledge that there is a great deal of perfectly legitimate closed-source non-malware software out there for Windows. Windows XP Service Pack 1 would be one such piece of software; no argument from me. So?

"BTW ... my agenda is merely to point out facts such as these to everybody, so they can make good decisions for themselves regarding which software they choose to run on their hardware. I make absolutely no apology for this agenda.


The thing is, your "facts" aren't facts.
"

Oh yes they are. Each and every one of the claims I have made in this discussion is a verifiable fact.

They are opinions from someone who IMO doesn't really have any practical experience of developing or deploying software.


I am a project engineer by profession, leading projects which develop and deploy bespoke software. I have many years of experience. We supply source code to our customers.

Unless you work directly in the software industry as a developer or a manager for a development team you simply don't understand the landscape and the issues that developers face.


OK, so? I do happen to have many years of engineering experience at leading development teams.

Also you are biased in thinking that open sourcing everything is a cure to all software problems. This IMO couldn't be further from the truth.


You are of course as entitled to your opinion as I am to mine.

BTW, I have made no claim that "open sourcing everything is a cure to all software problems". That is your strawman argument. My claim here is only that users who stick to a self-imposed policy of only installing open source software will be guaranteed that their system never is compromised by malware. If you are going to argue against what I am saying, then this is what you must argue against. Friendly advice ... don't make up something I did not say, and argue against that ... doing that will get you nowhere.

"What exactly is your agenda in trying to disparage mine?


Because I think you are biased and do not present the facts fairly.
"

And I think you are even more biased, you have no idea how to assess technical matters, and you simply do not heed what experienced people are telling you. How does this help the actual discussion?

Edited 2011-01-07 01:34 UTC

Reply Score: 2

RE[7]: BC
by lucas_maximus on Fri 7th Jan 2011 12:03 UTC in reply to "RE[6]: BC"
lucas_maximus Member since:
2009-08-18

It is a matter of adopting a self-imposed policy.


And you need to be educated, trained, whatever you want to call it, to do that. You won't do it if you don't understand that you need to.

Stop making circular arguments.

Reply Score: 2

RE[7]: BC
by lucas_maximus on Fri 7th Jan 2011 22:02 UTC in reply to "RE[6]: BC"
lucas_maximus Member since:
2009-08-18

Oh yes they are. Each and every one of the claims I have made in this discussion is a verifiable fact.


No they are not ... they are an opinion. You make circular arguments. Circular arguments have a fundamental problem and you just don't see it.

I am a project engineer by profession, leading projects which develop and deploy bespoke software. I have many years of experience. We supply source code to our customers.
OK, so? I do happen to have many years of engineering experience at leading development teams.


Don't believe it for a second. You pointed me (in another discussion) to a C# binding for GTK when I said I would use Visual Studio and .NET because it works. This is crazy ...

You also said "What is so special about source code" (in another discussion) ... if you led software development teams you would know the sweat, blood and tears it takes to make a decent product, and the amount of money too.

I also give my source code to my customers ... however, my contract states they may not disclose it to third parties unless they ask for my permission. If they have their own developers they can work on it. Most customers are happy with this ... they pay extra if they want to own it.

BTW, I have made no claim that "open sourcing everything is a cure to all software problems". That is your strawman argument. My claim here is only that users who stick to a self-imposed policy of only installing open source software will be guaranteed that their system never is compromised by malware. If you are going to argue against what I am saying, then this is what you must argue against. Friendly advice ... don't make up something I did not say, and argue against that ... doing that will get you nowhere.


It is implied in every post you make ... most people "read between the lines". It is certainly obvious to me, and to others I have spoken to about your posts on OSNews.

And I think you are even more biased, you have no idea how to assess technical matters, and you simply do not heed what experienced people are telling you. How does this help the actual discussion?


I assess technical matters every day. I think through decisions on a logical basis almost every day of my life.

However you have an "open source" agenda that skews your thinking.

Also, in software engineering, experience only counts for so much ... and it's not only me who thinks this ... the author of Code Complete, one of the best books on software engineering ever written, agrees with me.

Edited 2011-01-07 22:04 UTC

Reply Score: 2

RE[4]: BC
by lemur2 on Thu 6th Jan 2011 12:49 UTC in reply to "RE[3]: BC"
lemur2 Member since:
2007-02-17

If you downloaded a shell script for Unix/Linux from some random site, didn't understand how it worked, and just ran it, it could cause havoc on your system as well.


True (providing one goes through the step of making the script executable after downloading it).

This is an excellent reason to avoid the practice of simply downloading software from some random site, making it executable, and then running it.

Fortunately, it is entirely possible to install and run a complete Linux desktop (open source) software ensemble without ever once having to do such a thing.

Sticking to such a process as a self-imposed policy is the one known and well-proven way to be utterly certain to completely avoid malware and yet still be able to run a complete desktop software ensemble.

Reply Score: 2

RE[5]: BC
by lucas_maximus on Thu 6th Jan 2011 15:15 UTC in reply to "RE[4]: BC"
lucas_maximus Member since:
2009-08-18

This is an excellent reason to avoid the practice of simply downloading software from some random site, making it executable, and then running it.


You need to be educated not to do this. What if someone, for example, was following commands from a website and one step was to run rm -rf ~/ ... their home directory would be blown away ... though the system itself would be safe.

I see incorrect advice given to new users on various Linux forums every day; just look at the Ubuntu forums. For example, I saw this on there:

dd if=<somefile> of=/dev/sda

Which would blow away someone's whole hard drive.

Fortunately, it is entirely possible to install and run a complete Linux desktop (open source) software ensemble without ever once having to do such a thing.


It is also possible with Windows, Mac OS X, Solaris, FreeBSD, Haiku, Amiga, OpenBSD, etc., as well.

Sticking to such a process as a self-imposed policy is the one known and well-proven way to be utterly certain to completely avoid malware and yet still be able to run a complete desktop software ensemble.


Which again requires a certain level of competence in the first place, i.e. a certain set of specialist knowledge ... you have been educated in this particular area of expertise.

Edited 2011-01-06 15:19 UTC

Reply Score: 2

RE[5]: BC
by malxau on Thu 6th Jan 2011 15:20 UTC in reply to "RE[4]: BC"
malxau Member since:
2005-12-04

True (providing one goes through the step of making the script executable after downloading it). This is an excellent reason to avoid the practice of simply downloading software from some random site, making it executable, and then running it. Fortunately, it is entirely possible to install and run a complete Linux desktop (open source) software ensemble without ever once having to do such a thing.


Really?

1. It is possible, but very, very difficult, to get a booting system without taking binary code from a source other than yourself. Typically people use distributions as a starting point. But just like binary code on Windows, this relies on a chain of trust - that the binaries are not malware infested. If I want to create my own distribution tomorrow, users can't know whether to trust me or not. In the end, users have to decide trust by word of mouth - what works, what doesn't - just like Windows.

2. Even when compiling from source, it's common to blindly execute code. Consider how autoconf/configure scripts work. Do you really read configure scripts before running them? Source availability gives a means to ensure trustworthiness, but that is only as effective as user habits. As the volume of source running on people's machines increases, and assuming a human's ability to read code does not increase, the practicality of reviewing code decreases over time. Again, this relies on others reviewing the code, and building up communities based on which code is trustworthy and which isn't, which isn't that different from the binary components above.

Reply Score: 2

RE[6]: BC
by Nth_Man on Sun 9th Jan 2011 02:10 UTC in reply to "RE[5]: BC"
Nth_Man Member since:
2010-05-16

With the source code, you can see what is going to be done, study it, modify it, etc. If you can't do it by yourself now, you can study how to do it or you can contract someone to do it, etc.

Without the source code it's not you who has the control; you don't control the software or your computing. As Stallman says: without these freedoms, the software controls the users.

Reply Score: 1

RE[6]: BC
by lemur2 on Sun 9th Jan 2011 13:38 UTC in reply to "RE[5]: BC"
lemur2 Member since:
2007-02-17

Typically people use distributions as a starting point. But just like binary code on Windows, this relies on a chain of trust - that the binaries are not malware infested.


It is not like binary code on Windows, because people who did not write the code can nevertheless download the source code, compile it themselves, and verify that it produces the binary as distributed.

It is not just one isolated instance of one person doing this that builds a trust in the code ... the trust comes from the fact that a program such as gcc, and repositories such as Debian's, have existed for well over a decade, through countless upgrades and versions of the code, downloaded by millions upon millions of users over the span of that decade, with the source code visible in plain sight to millions of people the entire time, and not once has malware been found in the code.

Not once.

We can trust Debian repositories by now.

Edited 2011-01-09 13:39 UTC

Reply Score: 2

RE[3]: BC
by Tuishimi on Thu 6th Jan 2011 16:47 UTC in reply to "RE[2]: BC"
Tuishimi Member since:
2005-07-06

That's an interesting point.

MS has talked (in the past) about continuing efforts to cleanly break the Win32 libs from any system-level entanglement.

Maybe this will also be an opportunity to move a little farther in this direction.

Reply Score: 2

RE: BC
by rif42 on Thu 6th Jan 2011 08:52 UTC in reply to "BC"
rif42 Member since:
2005-11-20

@Vivainio

It means there is no longer value in all the "legacy" crap that runs only on windows (shareware, etc), and it means there will be a bunch of windows computers that are immune to computer viruses running around (for a while).

Microsoft thinks there is lots of value in their legacy software. That is why they are porting Windows over to ARM. And remember, this is not the first time Microsoft has worked on ARM CPUs; they have been making Windows CE/Mobile for a long time. Maybe they will succeed this time, maybe not.

I do not understand your complaint about shareware. The ability to test before you buy is much more in the user's favour. Much better than novelty apps ("I Am Rich", 999 USD) that you must pay the Apple App Store for before even testing.

Reply Score: 1

Very interesting
by BluenoseJake on Wed 5th Jan 2011 22:39 UTC
BluenoseJake
Member since:
2005-08-11

A company I used to work for ran NT servers on Alpha. They were fast, really fast, and being servers, app compatibility was mostly irrelevant.

While I was there, we tested NT on an Alpha desktop, and any x86 apps were emulated (very slowly) under a layer called FX!32 (I think). Hopefully that won't be an issue here, and a native version of Office is encouraging.

Reply Score: 5

Comment by fran
by fran on Wed 5th Jan 2011 22:42 UTC
fran
Member since:
2010-08-06

Don't see how you tie this in.

The security improvement in the next version of Windows relates to running all applications except the OS itself in a virtual environment.

http://www.zdnet.com/blog/microsoft/more-windows-8-hints-this-time-...

It is not going to be exclusive to ARM.
Nowhere is Windows NT mentioned anyway.

Reply Score: 0

Surely not
by Vanders on Wed 5th Jan 2011 22:42 UTC
Vanders
Member since:
2005-07-06

Also announced today at CES is Microsoft Office for ARM.


Clearly they're confused. Did they not read the previous OSNews story about Windows on ARM where everyone claimed Windows on ARM would never happen because it would be impossible to run Office? Silly Microsoft.

What's that? A compiler you say? Damn you Microsoft!

Reply Score: 20

Nice to see the code's still portable
by madcrow on Wed 5th Jan 2011 22:50 UTC
madcrow
Member since:
2006-03-13

NT was written to be cross-platform from the start, so it's nice to see that the code is still portable. I wonder if NT for ARM will get a port of FX!32, too...

Reply Score: 4

jello
Member since:
2006-08-08

In this article: http://www.osnews.com/story/24165/Windows_NT_on_ARM_It_s_a_Server_T... it was mentioned that Windows on ARM would only make sense on servers.

Microsoft seems to disagree on this, as they are also developing MS Office for ARM; and normally you don't run this type of program on a server.

Either way I think it's a bold move from Microsoft, but on the other hand they might think that if consumers accept new platforms (hardware/software), then they might even accept Windows on a new hardware platform.

What might also have triggered this decision is the push from Intel towards Meego - it's payback time...

Please take all my comments with a grain of salt...

Reply Score: 3

Bill Shooter of Bul Member since:
2006-07-14

Yeah, I'm trying to be modest about my skill at predicting an obvious future. There will not be a version for servers just yet - probably the version after it hits the desktop.


In the linked story, I kept asking why Windows on ARM. I never got a good response from anyone. The announcement from NVIDIA provides the answer I was looking for. ARM in servers? Doesn't make sense. ARM + NVIDIA GPU in servers? Makes a huge amount of sense. The GPU does all the hard work; the main processor just sits there and looks pretty without consuming much energy.

Reply Score: 2

umccullough Member since:
2006-01-26

ARM in servers? Doesn't make sense. ARM + NVIDIA GPU in servers? Makes a huge amount of sense. The GPU does all the hard work; the main processor just sits there and looks pretty without consuming much energy.


Actually, in my experience, straight ARM in servers makes plenty of sense... the same way that Atom-based servers do. Power efficiency is always a major concern for large server farms.

It ultimately depends on the server load - for disk-heavy servers, a dual-core Atom (or ARM) may be plenty, and the low power utilization is a major bonus.

At the moment, both of my servers at home now utilize Atom processors each consuming ~30-40w of power at full load. ARM would be even better, but purchasing/building commodity-hardware-based ARM machines isn't terribly easy to do yet ;)

Reply Score: 2

Bill Shooter of Bul Member since:
2006-07-14

Yeah, heavy disk loads might also make sense, assuming the analysis of that data is minimal and disk I/O is the bottleneck.

I was thinking that any low CPU loads would be better served off virtual machines, but disk I/O from virtual machines stinks.

Reply Score: 2

rif42 Member since:
2005-11-20

Power efficiency is always a major concern for large server farms.

If power efficiency matters, do not use server applications implemented in scripting languages.

Reply Score: 3

Lennie Member since:
2007-09-22

Why not have Office on the server? Many people/companies use some kind of terminal-server-like solution running on Windows.

Reply Score: 2

amadensor Member since:
2006-04-10

Actually, we do run Office on servers. We use it to generate spreadsheets for end users. They can request a process, the output of which is a spreadsheet in Excel format with nice formatting, pivot tables, etc. This is done, at least sometimes, via actually running Office.

Reply Score: 1

Everyone remember we have seen this before.
by oiaohm on Wed 5th Jan 2011 23:27 UTC
oiaohm
Member since:
2009-05-30

Windows on PPC and other platforms did exist. PPC had MS Office and most of the MS product line as well. It's just that no one else was making applications for it.

Name the number one problem with people migrating their desktops to Linux: legacy Windows programs that will never be ported.

Windows 8 on ARM will suffer the same problems. I do support for Wine, and we regularly get requests to port Wine to Windows to run old legacy games that work in Wine but not in modern Windows.

The legacy issue is a huge roadblock for Linux, but it is also a reason why some users use Linux.

With Windows 8 on ARM I cannot see how it will be anything other than a full roadblock. I have not seen .NET take off enough to counter this.

Final major question: what is going to happen to Windows CE? Is Windows Phone 7 going to be the last CE release? It would make sense from a cost-cutting point of view.

Reply Score: 3

viton Member since:
2005-08-09

Legacy windows programs that will not be ported.

I'm not sure how many legacy programs I have used in the last several years. Probably zero?
Anything that is no longer supported is not worth supporting.

Reply Score: 2

.NET
by Zifre on Wed 5th Jan 2011 23:29 UTC
Zifre
Member since:
2009-10-04

Now Microsoft's strategy with .NET makes quite a lot of sense.

Their hope is to get as much software as possible running on .NET so that it will work on x86 desktops and newer ARM computers.

Reply Score: 4

RE: .NET
by lucas_maximus on Wed 5th Jan 2011 23:59 UTC in reply to ".NET"
lucas_maximus Member since:
2009-08-18

Pretty sensible from the outset ...

For devs it is a nice toolkit to use and it makes developing for Desktop, Web and Mobile nice and familiar.

Also, considering how bad .NET 1.0 and 1.1 were compared to .NET 2.0 and above ... there has been a nice steady improvement in .NET since version 2.0.

Edited 2011-01-06 00:00 UTC

Reply Score: 2

Comment by marcp
by marcp on Thu 6th Jan 2011 00:06 UTC
marcp
Member since:
2007-11-23

A whole new ground for mass infections. Yay!

[despite the fact that CE was not infected and ARM is not x86, but you never know]

Reply Score: 2

enough bits?
by jwwf on Thu 6th Jan 2011 00:27 UTC
jwwf
Member since:
2006-01-19

I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.

Reply Score: 3

RE: enough bits?
by umccullough on Thu 6th Jan 2011 01:47 UTC in reply to "enough bits?"
umccullough Member since:
2006-01-26

I sure hope they are planning to target only a future 64 bit ARM.


Considering the Windows codebase is already portable across 32 or 64 bit addressing, it seems like it would be a (pointless) step backward to disable that capability just to spite people.

It would be annoying if 32 bit addressing gets a new lease on life due to this.


What a strange thing to be annoyed at... especially given that if you never need 64bit addressing, you're potentially saving the overhead of having to address it that widely to begin with.

Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.


Just sounds like lazy development to me - what assumptions in your software would implicitly fail on 32-bit addressed systems? Don't you use portable pointer types in your code? Is your code going to simply fail on 128-bit systems someday? The proper use of abstraction goes both ways, my friend...

Reply Score: 6

RE[2]: enough bits?
by jwwf on Thu 6th Jan 2011 21:46 UTC in reply to "RE: enough bits?"
jwwf Member since:
2006-01-19

"I sure hope they are planning to target only a future 64 bit ARM.


Considering the Windows codebase is already portable across 32 or 64 bit addressing, it seems like it would be a (pointless) step backward to disable that capability just to spite people.
"

It's not "just to spite people", why would you think that? It is to allocate development resources efficiently, both for OS developers (fewer builds to test) and for all application developers (same reason). In case you haven't noticed, 2008 R2 is already 64 bit only for this very reason. It is a question of "I have X dollars to spend on this project, how can I most efficiently use them?"

Furthermore, there is no 32 bit application base on NT/ARM now, so there is no one who could be spited. My point is, if you are starting with a clean slate, make it clean!


Just sounds like lazy development to me - what assumptions in your software would implicitly fail on 32 bit addressed systems? Don't you use portable pointer types in your code? Is your code going to simply fail on 128bit systems someday? The proper use of abstraction goes both ways my friend...


Of course it's lazy! Do you test all your C on VAX and M68K? How could somebody be so lazy as to not do that? ;)

I own a couple of UNIX boxes from the early 90s. I like playing with them. But I wouldn't actually expect anybody writing software in 2011 to worry about it being portable to OSF/1 or Solaris 2.3. My personal belief is that 32 bit x86 is on its way down that road; others are free to disagree, but as time goes on, I think fewer and fewer people will.

One other thing, just for fun: Let's say that the biggest single system image machine you can buy now can handle 16TB of RAM (eg, the biggest Altix UV). To hit the 64 bit addressing limit, you need twenty doublings*, which even if you assume happen once per year (dubious), puts us around 2030. Obviously it is possible to hit the limit. But the question is, will the programming environment in 2030 be similar enough to UNIX now such that thinking about 128 bit pointers now would actually pay off? On the one hand you could cite my 1990 UNIX machines as evidence that the answer would be "yes", but on the other, modern non-trivial C programs are not usually trivially portable to these machines. So it's hard to say how much I should worry about 128 bit pointers; they may be the least of my problems in 2030. Or maybe not. Who knows.

* OK, disk access issues like mmap will make it useful before then. Maybe we'll even want (sparse) process address spaces bigger than that before then. But it doesn't change the core question of whether you can anticipate the programming environments of 2030.

Reply Score: 2

RE: enough bits?
by lemur2 on Thu 6th Jan 2011 12:19 UTC in reply to "enough bits?"
lemur2 Member since:
2007-02-17

I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.


The ARM Cortex-A15 MPCore CPU architecture, which is the one aimed at desktops and servers, is a 32-bit architecture. Nevertheless, it does not suffer from a limitation of 4GB of main memory, it can in fact address up to one terabyte (1TB) of main memory.

http://www.engadget.com/2010/09/09/arm-reveals-eagle-core-as-cortex...

The Cortex-A15 MPCore picks up where the A9 left off, but with reportedly five times the power of existing CPUs, raising the bar for ARM-based single- and dual-core cell phone processors up to 1.5GHz... or as high as 2.5GHz in quad-core server-friendly rigs with hardware virtualization baked in and support for well over 4GB of memory. One terabyte, actually.


I believe the Cortex-A15 MPCore architecture includes a built-in memory management unit to achieve this feat.

Edited 2011-01-06 12:21 UTC

Reply Score: 2

RE[2]: enough bits?
by oiaohm on Thu 6th Jan 2011 12:53 UTC in reply to "RE: enough bits?"
oiaohm Member since:
2009-05-30

"I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don't even consider whether it would be portable to a 32 bit machine, just like I don't consider whether it would be portable to a 16 bit machine. I'd like it to stay that way.


The ARM Cortex-A15 MPCore CPU architecture, which is the one aimed at desktops and servers, is a 32-bit architecture. Nevertheless, it does not suffer from a limitation of 4GB of main memory, it can in fact address up to one terabyte (1TB) of main memory.

http://www.engadget.com/2010/09/09/arm-reveals-eagle-core-as-cortex...
"

Also, an important note: the 4GB limit for a 32-bit OS on a lot of x86 chips is garbage as well. PAE mode allows 64GB to 128GB of physical memory in 32-bit mode.

So 32-bit being limited to 4GB is mostly market segmentation by Microsoft, nothing more.

So we can expect MS to treat ARM the same way they treat x86: different versions with different limits, nothing to do with real hardware limits.

Reply Score: 1

RE[3]: enough bits?
by Thom_Holwerda on Thu 6th Jan 2011 13:02 UTC in reply to "RE[2]: enough bits?"
Thom_Holwerda Member since:
2005-06-29

So 32 bit being limited to 4GB is mostly a market bending nothing more by Microsoft.


Lolwut?

Windows' 32-bit client versions support PAE but limit the *operating system* to 4GB anyway, due to stability problems it caused with some drivers (32-bit Windows Server does support more than 4GB).

"However, by the time Windows XP SP2 was under development, client systems with more than 4GB were foreseeable, so the Windows team started broadly testing Windows XP on systems with more than 4GB of memory. Windows XP SP2 also enabled Physical Address Extensions (PAE) support by default on hardware that implements no-execute memory because it's required for Data Execution Prevention (DEP), but that also enables support for more than 4GB of memory.

What they found was that many of the systems would crash, hang, or become unbootable because some device drivers, commonly those for video and audio devices that are found typically on clients but not servers, were not programmed to expect physical addresses larger than 4GB. As a result, the drivers truncated such addresses, resulting in memory corruptions and corruption side effects. Server systems commonly have more generic devices and with simpler and more stable drivers, and therefore hadn't generally surfaced these problems. The problematic client driver ecosystem led to the decision for client SKUs to ignore physical memory that resides above 4GB, even though they can theoretically address it."


http://blogs.technet.com/b/markrussinovich/archive/2008/07/21/30920...

However, *applications* in 32bit Windows can access more than 4GB if they want to using AWE (Address Windowing Extensions).

In other words, you're talking out of your ass.

Edited 2011-01-06 13:03 UTC

Reply Score: 1

RE[4]: enough bits?
by malxau on Thu 6th Jan 2011 15:12 UTC in reply to "RE[3]: enough bits?"
malxau Member since:
2005-12-04

"So 32 bit being limited to 4GB is mostly a market bending nothing more by Microsoft.
Lolwut? Windows' 32bit client versions do PAE but limits the *operating system* to 4GB anyway due to problems it caused with instability with some drivers (Windows Server 32bit do support more than 4GB).
...
However, *applications* in 32bit Windows can access more than 4GB if they want to using AWE (Address Windowing Extensions).
"

A Windows client SKU won't address more than 4GB of physical memory. This means that applications can't use those physical pages either. If an app can use those physical pages, you'll have those pages going through device drivers, which is what the article claims is not supported. If an application attempts to address more than 4GB of memory, this can only be achieved by paging (i.e., giving more than 4GB of virtual address space but without more than 4GB of physical pages). So if you want to put 8GB of RAM in a machine and actually use it, you have to choose between a 64-bit client SKU or a 32-bit server SKU; a 32-bit client SKU will not use half of that memory.

In other words, you're talking out of your ass.


Please don't include this kind of discourse. It's not constructive, helpful, or informative.

Reply Score: 0

RE[5]: enough bits?
by Carewolf on Thu 6th Jan 2011 15:58 UTC in reply to "RE[4]: enough bits?"
Carewolf Member since:
2005-09-08

I think the confusion is that 32-bit Windows is usually limited to using only 3GB of RAM, but PAE allows it to use up to 4GB (even in client versions). The AWE API can also be used in the client versions to access that extra 1-2GB of memory if needed.

Reply Score: 2

RE[6]: enough bits?
by umccullough on Thu 6th Jan 2011 17:58 UTC in reply to "RE[5]: enough bits?"
umccullough Member since:
2006-01-26

I think the confusion is that 32-bit Windows is usually limited to using only 3GB of RAM, but PAE allows it to use up to 4GB (even in client versions). The AWE API can also be used in the client versions to access that extra 1-2GB of memory if needed.


Perhaps that was how they did it with WinXP (I don't know, I've never used PAE mode on XP) because they needed a reason for people to upgrade later on, but on Windows 2000 Advanced Server/Datacenter Edition (yes, Win2k), I've seen PAE enabled to provide 16GB of RAM to the OS *and* SQL Server (via AWE) - so I know you are wrong.

Edited 2011-01-06 17:59 UTC

Reply Score: 2

RE[5]: enough bits?
by Thom_Holwerda on Thu 6th Jan 2011 19:34 UTC in reply to "RE[4]: enough bits?"
Thom_Holwerda Member since:
2005-06-29

A Windows client SKU won't address more than 4Gb of physical memory. This means that applications can't use those physical pages either.


This is simply wrong.

"Address Windowing Extensions (AWE) is a set of extensions that allows an application to quickly manipulate physical memory greater than 4GB. Certain data-intensive applications, such as database management systems and scientific and engineering software, need access to very large caches of data. In the case of very large data sets, restricting the cache to fit within an application's 2GB of user address space is a severe restriction. In these situations, the cache is too small to properly support the application.

AWE solves this problem by allowing applications to directly address huge amounts of memory while continuing to use 32-bit pointers. AWE allows applications to have data caches larger than 4GB (where sufficient physical memory is present). AWE uses physical nonpaged memory and window views of various portions of this physical memory within a 32-bit virtual address space."


http://msdn.microsoft.com/en-us/library/aa366527(v=vs.85).aspx

Reply Score: 1

RE[6]: enough bits?
by Neolander on Thu 6th Jan 2011 20:02 UTC in reply to "RE[5]: enough bits?"
Neolander Member since:
2010-03-08

It's not this simple.

I don't know how Microsoft implemented this in practice, but the way I see it they only have a few choices:
-Having application developers swap data in and out of their application's address space all by themselves (cumbersome)
-Having applications not directly access their data, but only manipulate it through 64-bit pointers which are sent to the operating system for every single operation. They could do that e.g. by having the extra RAM manipulated as a file (slow because of the kernel call overhead)

Really, PAE is only good for multiple large processes which each use less than 4GB. Having an individual process manipulate more than 4GB on a 32-bit system remains a highly cumbersome operation.

Edited 2011-01-06 20:04 UTC

Reply Score: 1

RE[7]: enough bits?
by oiaohm on Thu 6th Jan 2011 21:59 UTC in reply to "RE[6]: enough bits?"
oiaohm Member since:
2009-05-30

It's not this simple.

I don't know how Microsoft implemented this in practice, but the way I see it they only have a few choices:
-Having application developers swap data in and out of their application's address space all by themselves (cumbersome)
-Having applications not directly access their data, but only manipulate it through 64-bit pointers which are sent to the operating system for every single operation. They could do that e.g. by having the extra RAM manipulated as a file (slow because of the kernel call overhead)

Really, PAE is only good for multiple large processes which each use less than 4GB. Having an individual process manipulate more than 4GB on a 32-bit system remains a highly cumbersome operation.

32-bit applications on Linux don't know whether they are running on a PAE kernel or not, so application developers don't need to know about it.

A simple trick: virtual memory, i.e. swap-out. To a 32-bit application, that's what appears to have happened to the memory; in fact the block has just been placed in a PAE memory region outside the 4GB address space. It's far faster to get memory back from PAE than from swap.

Simply put, treat PAE as a RAM-based swap file and all the complexity is solved, since 32-bit applications have to cope with swap files in the first place.

PAE is a good performance boost on a 32-bit system running large applications that would otherwise be thrashing swap against the 4GB limit. Hard drives are massively slow.

Drivers and anything running in kernel mode are a different case. A lot of Windows drivers are not aware of PAE's larger memory range, but even so there are ways around this issue while still taking advantage of PAE. Drivers have to be swap-aware anyway or they cause trouble; being PAE-aware just avoids having to pull a page back into low memory for the driver to place its data. Even without that, it's still far better than if the page had been sent to disk and had to be pulled back.

Basically, there is no valid technical reason to limit a particular version of Windows to 4GB of memory. Heck, Windows Starter has an artificial limit of 1GB. The 4GB limit was just convenient to blame on 32-bit addressing.

Can't MS write a simple RAM-based swap system using PAE?

Yes, it gets trickier with 32-bit PAE on a dual-core machine, since to get the most advantage out of PAE you have to use it for NUMA. Is that something applications need to worry about? No.

Everything needed to support PAE is kernel-based. It just has to be done right.
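The RAM-backed-swap idea above can be sketched as a toy pager. This is purely illustrative Python, not how any kernel implements it; the class and names are invented. Pages evicted from a small "low memory" area are parked in a fast "high memory" store (like PAE pages above 4GB) instead of going to disk:

```python
# Toy pager: evictions go to a fast RAM store rather than a disk swap
# file, so a "page fault" is just a dictionary move, not a disk read.

from collections import OrderedDict

class RamBackedPager:
    def __init__(self, resident_limit: int):
        self.resident = OrderedDict()   # page_no -> bytes, in LRU order
        self.high_mem = {}              # evicted pages parked in fast RAM
        self.limit = resident_limit

    def touch(self, page_no: int) -> bytes:
        if page_no in self.resident:
            self.resident.move_to_end(page_no)          # LRU refresh
        else:
            # "Page fault": recover from high memory, no disk involved.
            data = self.high_mem.pop(page_no, bytes(4096))
            self.resident[page_no] = data
            if len(self.resident) > self.limit:
                victim, vdata = self.resident.popitem(last=False)
                self.high_mem[victim] = vdata           # evict to RAM, not disk
        return self.resident[page_no]

pager = RamBackedPager(resident_limit=2)
pager.touch(1)
pager.touch(2)
pager.touch(3)                  # evicts page 1 to high memory
assert 1 in pager.high_mem
pager.touch(1)                  # fault: pulled back from RAM, fast
assert 1 in pager.resident
```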

Reply Score: 1

RE[6]: enough bits?
by malxau on Thu 6th Jan 2011 22:18 UTC in reply to "RE[5]: enough bits?"
malxau Member since:
2005-12-04

"A Windows client SKU won't address more than 4Gb of physical memory. This means that applications can't use those physical pages either.


This is simply wrong.
"
Have you looked at my Bio? I work on Windows full time. If you want me to go over memory management, I can bore you to tears, but it's very unlikely that you'll be able to dismiss me that easily.

Firstly, here's the page that describes limits on physical addressing:
http://msdn.microsoft.com/en-us/library/aa366778(v=VS.85).aspx#physical_memory_limits_windows_7

Second, this section might be helpful (the part that talks about how PAE, /3Gb, and AWE are related and not related):
http://msdn.microsoft.com/en-us/library/aa366796(v=VS.85).aspx



That's all well and good, but it's still subject to the physical memory limits described in the link I gave above. See the part where it says "The physical pages that can be allocated for an AWE region are limited by the number of physical pages present in the machine, since this memory is never paged..." What this link really shows is that a single process, which has 2Gb of VA, can use greater than 2Gb of physical pages on a 32-bit client system. It cannot use more than 4Gb of physical pages, since that's the absolute maximum the client system will ever use.

It also shows that MS' implementation of AWE requires physical pages and is therefore unsuitable to extend addressing. On client systems it's only useful to get from 2Gb to (some value less than) 4Gb.

Reply Score: 1

RE[7]: enough bits?
by Thom_Holwerda on Thu 6th Jan 2011 22:37 UTC in reply to "RE[6]: enough bits?"
Thom_Holwerda Member since:
2005-06-29

It also shows that MS' implementation of AWE requires physical pages and is therefore unsuitable to extend addressing. On client systems it's only useful to get from 2Gb to (some value less than) 4Gb.


I'm sorry, I think we were talking past one another - I thought you meant that client versions of 32bit Windows had zero options to go past the 2GB limit for applications. We actually meant the same thing, except I was unaware of the 4GB limit of AWE. Thanks for clarifying!

Reply Score: 1

RE[3]: enough bits?
by Neolander on Thu 6th Jan 2011 13:14 UTC in reply to "RE[2]: enough bits?"
Neolander Member since:
2010-03-08

PAE allows you to have more than 4 GB of addressable physical memory, but you can only map them in a 32-bit address space*. A single process thus still cannot hold more than 4 GB of data easily.

PAE is just fine for running lots of small processes on a big machine, for example lots of small virtual machines on a server. But for the power-hungry desktop user who wants to crunch terabytes of data in some video editing software, I don't think it'll ever be that useful. Unless we start writing multi-process video editing software, but since developers already have trouble with multiple threads, I don't see that happening soon...

* AMD64 Vol2, r3.15 (11/2009), p120

Edited 2011-01-06 13:16 UTC

Reply Score: 1

RE[4]: enough bits?
by malxau on Thu 6th Jan 2011 15:25 UTC in reply to "RE[3]: enough bits?"
malxau Member since:
2005-12-04

PAE allows you to have more than 4 GB of addressable physical memory, but you can only map them in a 32-bit address space*.


A single process can use AWE to map subsets of data in its limited address space at a time while still using more than 32-bits of physical memory. Or, in many cases, it can delegate that job to the operating system (eg. by using the OS file cache, which is not limited to the process' 4Gb limit.)

...but since developers already have issues with multiple threads I don't see this happening soon...


So perhaps we're moving to 64-bit for simplicity, not necessity.

Reply Score: 1

RE[5]: enough bits?
by Neolander on Thu 6th Jan 2011 17:41 UTC in reply to "RE[4]: enough bits?"
Neolander Member since:
2010-03-08

...but since developers already have issues with multiple threads I don't see this happening soon...

So perhaps we're moving to 64-bit for simplicity, not necessity.

Well, I wouldn't bother going 64-bit if it weren't
1/ A priori faster, and certainly easier to use, than 32-bit + PAE (no need to have the OS juggle your data so that it fits in a 32-bit address space)
2/ Much, much more convenient on x86 (AMD took the opportunity of AMD64 to clean up part of x86's legacy mess, so x86 processors are easier to play with in 64-bit mode ^^)

Edited 2011-01-06 17:42 UTC

Reply Score: 1

RE[3]: enough bits?
by phoenix on Thu 6th Jan 2011 23:39 UTC in reply to "RE[2]: enough bits?"
phoenix Member since:
2005-07-11

Also important note the 4GB limit on 32 bit OS on a lot of x86 chips is garbage as well. PAE mode. 64gb to 128gb. 32 bit mode.

So 32 bit being limited to 4GB is mostly a market bending nothing more by Microsoft.


PAE allows the *kernel* to access more than 4 GB of RAM. However, *processes* can only see 4 GB of RAM, period. Each process can be given its own 4 GB chunk of memory, though. But they are still limited to 4 GB.

And the kernel has to do a lot of thunking and bounce buffers and hoop jumping and whatnot to manage PAE accesses. And all your drivers need to be coded to support PAE. And all your low-level apps need to be coded to support PAE. And on and on.

PAE is a mess, and should be avoided like the plague unless there's absolutely no way to run a 64-bit OS/apps.

The only way for an app/process to access more than 4 GB of RAM (on x86) is to use a 64-bit CPU with a 64-bit kernel.

Reply Score: 2

RE[4]: enough bits?
by Panajev on Fri 7th Jan 2011 09:27 UTC in reply to "RE[3]: enough bits?"
Panajev Member since:
2008-01-09

A 64-bit kernel is not necessary; see OS X. x86-64 CPUs can switch between two modes of operation at run-time, allowing 64-bit processes on top of a 32-bit kernel and drivers.

Reply Score: 2

RE[5]: enough bits?
by Neolander on Fri 7th Jan 2011 11:08 UTC in reply to "RE[4]: enough bits?"
Neolander Member since:
2010-03-08

Maybe you could, but once you get enough 64-bit support in the kernel to be able to run 64-bit processes, it's just weird to keep the rest of the kernel 32-bit.

Moreover, drivers and kernel could only write in the first 4GB of RAM without being PAE-aware, which could be problematic for things like DMA.

Reply Score: 1

RE[5]: enough bits?
by phoenix on Fri 7th Jan 2011 17:04 UTC in reply to "RE[4]: enough bits?"
phoenix Member since:
2005-07-11

Yeah, I know about the hybrid mode that AMD CPUs can work in.

But, I thought it was only the other way around. You could run 32-bit userland on a 64-bit kernel. Not that you could run a 64-bit userland on a 32-bit kernel.

Reply Score: 2

RE[4]: enough bits?
by oiaohm on Fri 7th Jan 2011 12:10 UTC in reply to "RE[3]: enough bits?"
oiaohm Member since:
2009-05-30

"Also important note the 4GB limit on 32 bit OS on a lot of x86 chips is garbage as well. PAE mode. 64gb to 128gb. 32 bit mode.

So 32 bit being limited to 4GB is mostly a market bending nothing more by Microsoft.


PAE allows the *kernel* to access more than 4 GB of RAM. However, *processes* can only see 4 GB of RAM, period. Each process can be given it's own 4 GB chunk of memory, though. But they are still limited to 4 GB.

And the kernel has to do a lot of thunking and bounce buffers and hoop jumping and whatnot to manage PAE accesses. And all your drivers need to be coded to support PAE. And all your low-level apps need to be coded to support PAE. And on and on.
"
Really, name a Linux program that has to be changed between PAE mode and non-PAE mode. Answer: zero.

PAE does not have to have anything to do with userspace.

PAE thunking is far lighter than hitting swap space.

What tricks do 32-bit programs that need more than 4GB of space already use? Memory-mapping to a file. PAE provides more cache space, so it can reduce the number of disk operations on a memory-mapped file.
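The memory-mapped-file trick mentioned here can be sketched in Python: the process maps only a fixed-size window of a larger file at a time, and slides the window by remapping at a new offset instead of holding everything in its address space at once. mmap and its offset parameter are standard-library features; the sizes here are tiny just for illustration:

```python
# Windowed memory-mapping: touch bytes far into a large file while
# keeping only a small mapping in the process's address space.

import mmap
import os
import tempfile

PAGE = mmap.ALLOCATIONGRANULARITY   # mmap offsets must be multiples of this

fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 16 * PAGE)                        # the "big" data set
    with mmap.mmap(fd, PAGE, offset=4 * PAGE) as window:
        window[:4] = b"data"                           # write deep into the file
    with mmap.mmap(fd, PAGE, offset=4 * PAGE) as window:
        assert window[:4] == b"data"                   # remap, data persisted
finally:
    os.close(fd)
    os.remove(path)
```

A 32-bit process can use the same pattern on a multi-gigabyte file; the OS page cache (which PAE enlarges) then decides how much of it actually stays in RAM.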

So don't quote trash. A 64-bit system is not the only way to exceed the 4GB limit.

Yes, a program running on a non-PAE 32-bit machine can already be using methods like these to use more space than the 4GB limit allows, at a cost in performance. PAE reduces the cost of those tricks.

Shock horror: using PAE just for swap, disk cache, and backing mapped files to reduce disk accesses doesn't require all your drivers to be PAE-compatible, since most drivers should not be touching this stuff anyway.

And here is the best bit of all: PAE used this way is not even new. It's basically the same style as expanded memory. Yes, breaking this kind of addressing limit goes back to 1984.

The limit is 4GB of memory addressable at one time on 32-bit x86. With memory mapping and other methods, a program's data can in reality be many times larger than that, with or without PAE active.

The difference is that PAE can remove the speed hit from the methods programs use to exceed the 4GB limit.

That is the big mistake here: you are presuming that programs will not use more than 4GB of memory, on the assumption that the OS gave programmers no way around the problem. That assumption is incorrect.

Reply Score: 1

No downside for Microsoft.
by Ravyne on Thu 6th Jan 2011 01:13 UTC
Ravyne
Member since:
2006-01-08

I've been claiming that ARM is the biggest competitive threat that x86/x64 has ever seen pretty much since ARM's A8 core, when it was clear to me that they were eyeing ever-higher performance. Intel confirmed the threat with Atom, and now Microsoft has endorsed ARM's march into the x86 stronghold.

We've already seen niche thintop and netbooks based on ARM, servers are announced and nVidia says their high-end ARM design will find its way into fuller-featured laptops and even the desktop. Exactly as I was predicting around the time Atom was announced.


Windows 8 "for SoCs", as they are saying, is actually a pretty interesting product, and supporting a limited number of SoCs means that the number of hardware permutations is much lower, and a known quantity. They can also throw away tons of cruft -- no BIOS, no AGP, PCI or ISA -- but more importantly, no need to support every device ever conceived and built.

On the software side, port Windows itself, Microsoft's first-party stuff, and .NET, get some primary ISVs involved, and most people will be happy -- particularly users of iPad-like tablets or i-ified netbooks, whose usage/input model essentially demands new apps anyhow -- people who live on the net and enjoy a few focused, snack-size apps.

Even if Windows 8 on ARM SoCs fails to oust the PC from its traditional space, Microsoft still wins, because they'll have succeeded in migrating mobile devices off a Windows CE-based OS and onto an NT-based OS. One code base to move forward, mostly-overlapping APIs -- CE will hang on for a while longer, but ultimately be relegated to industrial-type uses, probably even morphing into an RTOS. Windows NT will be the only consumer- and business-facing OS.

On the other hand, if it succeeds, Microsoft gains an exit strategy if x86 ever tops out, or programming models change so drastically anyhow that it no longer makes sense to be tied down to the legacy processor.

Reply Score: 2

RE: No downside for Microsoft.
by vivainio on Thu 6th Jan 2011 01:29 UTC in reply to "No downside for Microsoft."
vivainio Member since:
2008-12-26


On the other hand, if it succeeds, Microsoft gains an exit strategy if x86 ever tops out, or programming models change so drastically anyhow that it no longer makes sense to be tied down to the legacy processor.


I don't see how ARM would deal with upcoming programming models better than x86.

For now, ARM systems are cheaper and have more mature power management, but they are far away from x86 in performance. Calling x86 "legacy" in this light is a bit preposterous.

Reply Score: 8

galvanash Member since:
2006-01-25

Totally agree there. It's natural that everyone gets excited by announcements like this, but ARM is still ARM - it's got a lot of things going for it but has a LONG way to go to compete with x86 on pure performance.

ARM is good enough at what it does that I think it could easily become a serious player for systems where power use is critical, but it will have to undergo quite a lot of changes to compete on pure horsepower. I'm not an EE, but I would bet that the mechanics of making a built-for-speed chip use less power (i.e. Atom/Bobcat) are a lot easier to tackle than the other way around.

Reply Score: 2

collinm Member since:
2005-07-15

I think for a lot of people that's not so important...

ARM, now with dual cores and soon tri- and quad cores, is powerful enough for web, office...

Reply Score: 2

galvanash Member since:
2006-01-25

I think for a lot of people that's not so important...

ARM, now with dual cores and soon tri- and quad cores, is powerful enough for web, office...


Yes, I agree again. I'm simply saying that if you are one of those who DO care about performance, and power usage is not your primary concern, then ARM is not going to be very attractive now, and maybe never will be.

Reply Score: 2

merkoth Member since:
2006-09-22

Here's an idea: don't depend on the CPU for raw performance. CPUs are very "smart" when it comes to logic, but they choke easily on big data sets. GPUs, on the other hand, are vector-oriented, kinda "dumb" processors which excel at parallel number crunching. So why strain the CPU trying to make it decode one video frame as fast as it can, when you can use the GPU to decode 10 frames simultaneously? Maybe in a 1-vs-1 comparison some CPUs can beat GPUs, but when it comes to parallel processing the difference is abysmal, in favor of the GPUs.

For now, we developers depend on tools like CUDA or OpenCL to tell the computer what kind of hardware we want to use for each task, but eventually we will have toolchains smart enough to figure that out for themselves.

It's just a theory, of course, but this looks very similar to what AMD wants to do with Fusion. Maybe we should call this APU instead of CPU+GPU.

Reply Score: 1

Ravyne Member since:
2006-01-08

Totally agree there. It's natural that everyone gets excited by announcements like this, but ARM is still ARM - it's got a lot of things going for it but has a LONG way to go to compete with x86 on pure performance.

ARM is good enough at what it does that I think it could easily become a serious player for systems where power use is critical, but it will have to undergo quite a lot of changes to compete on pure horsepower.


ARM may not be directly competitive now in performance, but they've had 20+ years of evolution towards low-power, embedded applications. A single current-gen ARM core alone draws maybe 500mW at load. Intel's most frugal Atom draws, IIRC, 4W at idle and twice that under load. You could lay down 16 ARM cores in the same thermal envelope as a single Atom core (though doing so would be of dubious use). My point, though, is that the comparison isn't all that fair, since all current ARM processors are fighting with both hands tied behind their back.

Even so, ARM performance has grown by leaps and bounds in the last 5 years, coming from PII levels of performance with the ARM9 and ARM11 to being nearly on par with Intel's Atom with the A9. Thus far they've made those advances without throwing power consumption to the wolves, but imagine if someone came along with the 'radical' idea of even a 10 or 20W power envelope for an ARM implementation. Imagine indeed -- this is exactly what nVidia promised to do today, aiming at the desktop and server markets.

The ARM ISA isn't what's holding ARM back -- it's been the power/thermal requirements of their core markets (SoCs, embedded). Given power and die size to burn, there's no reason ARM couldn't make a processor just as beastly as AMD's or Intel's (experience in doing so notwithstanding).

ARM has a similar problem to Intel in that they utterly dominate all the current markets where they compete -- this is why ARM is eyeing intel's turf and vice versa.

Intel may have a massively larger market cap, but ARM has volume that Intel can only dream about -- to give you an idea, it took McDonald's 21 years to sell a billion hamburgers, and 3 billion ARM cores were produced last year alone. When ARM itself (the A15) and others (nVidia) push ARM to the limits, they'll find the market waiting.

I'm not an EE, but I would bet that the mechanics of making a built-for-speed chip use less power (i.e. Atom/Bobcat) are a lot easier to tackle than the other way around.


Which is easier, to take a rich man who drives a fast car and convince him to drive a run-of-the-mill sedan, or to put a poor man into that same sedan?

Define easier.

Reply Score: 2

galvanash Member since:
2006-01-25

Which is easier, to take a rich man who drives a fast car and convince him to drive a run-of-the-mill sedan, or to put a poor man into that same sedan?

Define easier.


By "easier", I mean less technically challenging... Atom and Bobcat are fundamentally closer to their ancestors' designs than current high-end parts are. Intel and AMD are taking the route of removing complexities in order to achieve lower power use (Intel by going in-order, AMD by sharing execution resources behind a single frontend). The point is that the complexities they are removing are, in many cases, what make their higher-end parts perform so well.

ARM has never had those types of complexities in the first place - most of the special sauce in ARM (Thumb, for example) is there to optimize for the embedded space - smaller binaries, better performance with smaller caches, etc. No one has ever tried to make an ARM core where performance was the primary goal - all existing cores were designed for power envelopes an order of magnitude or more smaller than high-end x86 parts.

I'm not at all saying you can't make a very fast ARM core - I'm just saying it isn't as simple as ramping up the clock speed and doing some minor reworking. A 3GHz ARM might be possible with current designs, but even at 3GHz it would have a long way to go to reach the performance of a similarly clocked i5/Phenom core, let alone match them when they can legitimately run at 4GHz or more themselves. It will take a lot of work to make ARM competitive if you factor out power use, and I have no reason at all to believe that nVidia could accomplish such a feat.

Also, I want to stress that I am talking about single-threaded performance, i.e. performance per core, not overall performance. ARM can scale up very well by just throwing more cores at the problem, but that is not the same thing.

Reply Score: 2

RE[2]: No downside for Microsoft.
by Ravyne on Thu 6th Jan 2011 07:46 UTC in reply to "RE: No downside for Microsoft."
Ravyne Member since:
2006-01-08

ARM doesn't deal with them better, and that's kind of the point -- no traditional CPU does, or likely ever will. The closest paradigm shift on the horizon is GPGPU, and specifically heterogeneous on-chip computing (AMD's Fusion, NVIDIA's Tegra 2 and Project Denver announcement). The first of these products look like CPUs with little GPUs attached, but over time that will shift towards big GPUs with little CPUs attached.

Ultimately there's a limit on how many 'serial' processors (hereafter "CPUs") are useful in a system. Parallel processors (hereafter "GPUs"), on the other hand, are happy to spread the load across as many computing elements as they have available. Tasks for the GPU are high-throughput and data-parallel, while tasks suitable for the CPU are, comparatively, low-throughput and data-serial or I/O-bound -- there's only so much actual compute work to be spread around. Parallel tasks are also the 'sexy' ones -- graphics, gaming, HPC -- and the serial tasks are not. Eventually, the CPU will become little more than a traffic cop routing data into and out of the GPU.

Now, this in and of itself is not good for ARM -- they're in no better position than x86, MIPS or SPARC. What makes it a good thing for ARM is that we are nearing an inflection point where traditional hardware ISA compatibility isn't going to amount to much. It's not actually true that SPARC or MIPS has as good a chance as ARM -- neither is a 'consumer-facing' architecture. Yes, only geeks know or care about ARM vs x86, but by consumer-facing I mean that ARM runs what the typical user desires (email, Facebook, Flash content, streaming video) and does it in form factors that are popular, while undercutting the competition on price. When the CPU architecture no longer matters a great deal, the x86 (and specifically Intel) market share is so high that it can only decline. My argument is that only ARM will be there to pick up the pieces if and when that happens.

There's something of a perfect storm aligning against the traditional lock-in Intel and x86 have enjoyed -- heterogeneous computing (Fusion, CUDA, OpenCL), the 'cloud', a shift away from desktops to laptops (and eventually smaller iPhone-like devices) -- and ARM is ready for it.

Reply Score: 1

RE[2]: No downside for Microsoft.
by Kivada on Sat 8th Jan 2011 11:21 UTC in reply to "RE: No downside for Microsoft."
Kivada Member since:
2010-07-07

Initially it'll be the low-raw-crunch servers: http://www.linuxfordevices.com/c/a/News/ZT-Systems-R1801e-/

Think that, but with 8x 2.5GHz Cortex-A15-based quads with 256GB+ of RAM running Windows Server 2012 for ARM.

It's a scary thought, but that's what we'll be seeing.

It'll probably be another 2+ years before Win8 on ARM is all that useful for general consumers, though, as it'll take some time for the non-business apps to filter in.

Reply Score: 1

And the plot thickens
by Poseidon on Thu 6th Jan 2011 02:09 UTC
Poseidon
Member since:
2009-10-31

Wow, this is a very interesting turn of events. I heard about the new multi-core ARM CPUs and thought it would be interesting if they gave Intel a run for its money, but now, with Windows on ARM, this is a whole new can of worms.

I can't wait to see what happens. I am sure we'll see nothing but incredible performance gains on both the x86 and ARM platforms.

Reply Score: 1

Fantastic news!
by WereCatf on Thu 6th Jan 2011 03:56 UTC
WereCatf
Member since:
2006-02-15

I've been hankering for a good ARM desktop for ages now, but so far the biggest hurdle seems to have been the lack of a major software company delivering anything for one, and thus no one has been bold enough to start producing ARM desktops. With Microsoft now openly embracing ARM, such desktops will definitely come out during the next 2-3 years.

And hell, having an NVIDIA GPU in addition to low-power, low-heat ARM processor core(s) means it'll even be able to support gaming, very multimedia-rich applications and all that.

Companies will have no choice but to aim for easily portable code so as to reach Windows users on both architectures, and that _could_ also spawn more Linux versions, though I suppose the chances of that are still somewhat small. Something is still better than nothing.

As for Windows itself... well, I am strongly for open-source, free software -- not necessarily free as in beer, mind you -- but ever since I got myself Windows 7 I've noticed myself booting into Linux less and less. Thus I'm slightly ashamed to admit it, but I will most likely be a Win8 user on my ARM system unless they manage to screw it up in some really major way.

Reply Score: 5

RE: Fantastic news!
by Mellin on Sat 8th Jan 2011 22:11 UTC in reply to "Fantastic news!"
Mellin Member since:
2005-07-06

And I will have to pay for Windows if I want a computer, even if I never ever use it.

Reply Score: 2

Comment by kaiwai
by kaiwai on Thu 6th Jan 2011 07:40 UTC
kaiwai
Member since:
2005-07-06

Having had a read over at a few other websites, I saw this interesting piece:

http://www.neowin.net/news/rumor-windows-8-to-feature-tile-based-in...

Apparently Silverlight is going to play a greater role in the future of Windows application development:

"The Windows and Office teams are betting very heavily on this new app type, according to my source, and development has already begun using a beta version of Visual Studio 2012."


With the mixing and matching of native and managed code/Silverlight, as shown by the improvements coming in Silverlight 5, are we going to see a migration away from the Win32 GUI components to using Silverlight for all the visual presentation? Silverlight is native for both mouse and touch, so a migration to a Silverlight interface would provide the sort of flexibility that allows Windows 8 to run on touch handheld devices, laptops, desktops, etc. I just hope that Windows 8 has a backbone and pushes through, instead of compromising for the whingers and whiners demanding that their 40-year-old punch card application work flawlessly with Windows.

Reply Score: 2

pica
Member since:
2005-07-10

By "pure" I mean without any unmanaged C/C++ legacy code.
These should run out of the box on the ARM variant of Windows 8.

Any numbers ???

pica

Reply Score: 1

Tuppence worth
by ameasures on Thu 6th Jan 2011 14:31 UTC
ameasures
Member since:
2006-01-09

My suspicion is that Microsoft realised that ARM is a dynamic architecture, and that if they didn't run with it then... others would, as ARM dominates tablets and potentially other new forms of system.

The key driver here, I suspect, is the consumers who will (asap) become intolerant of anything less than (say) a 12 hour battery life.

If ARM takes less silicon to make and consumes less power then that battery life is easier to achieve.

Microsoft being a massive corporation will have ported NT (all versions) behind closed doors in any case.

Reply Score: 2

Ate my hat
by ARUmar on Thu 6th Jan 2011 15:24 UTC
ARUmar
Member since:
2009-10-08

This was one of the announcements I expected to see in the same place as Duke Nukem Forever getting released (oh wait...). IMHO it's a positive if Windows can get a clean slate and leave behind all the crummy baggage of legacy code lurking under its hood, and maybe, just maybe, they can try to be a tad more open this time. Secondly, it should give Linaro and the other Linux-on-ARM aficionados a run for their money; complacency stifles innovation, and with this, hopefully we'll see more value in the Linux-on-ARM space. Not to belittle what's already going on in that sphere of development, but with the head start they already have, now would be a good time to prove their worth before the 900-pound gorilla gets in on the act.

Reply Score: 1

too little, too late
by garyd on Thu 6th Jan 2011 18:02 UTC
garyd
Member since:
2008-10-22

Microsoft have many obstacles to overcome in this category of the market -- not the least of which is time. How long have ARM netbooks and mobile devices been around? How long has Windows CE had an ARM port? And how much long-term success has Microsoft had with its NT family of operating systems running on architectures other than x86-32/64? Sing the doom song, GIR.

-Gary

Reply Score: 1

Sabon
Member since:
2005-07-06

What would be REALLY interesting is OS X, which probably would already run on it.

Windows on ARM? There isn't enough NoDoz in the world for that.

This should really wake up Intel though.

Speaking of Apple (I was; I didn't read the other posts because they are probably about Windows), they probably already know about this. I wonder if they know enough about it that they will base their low-end computers on it in the beginning, and then later maybe all of their computers.

Reply Score: 2

steve_s Member since:
2006-01-16

Apple were rumoured to be running full-blown OS X on Intel machines years before they made that transition. I expect that they've also had full-blown OS X running on ARM for nearly as long. I also expect that they also have 10.6 running on PPC.

It seems to me that this move by Microsoft validates Apple's pre-Intel position - where they perpetually claimed to mostly deaf ears that PowerPC was better and that it didn't matter that Macs didn't run on x86 chips.

It also seems to me that this opens up the prospect for Apple to not only produce low-end Macs based on ARM chips, but also should they desire to do so frees them up to produce high-end Macs based on PPC chips. They did after all buy PA Semi whose PA6T was pretty danged quick... Whilst IBM kinda failed a bit with their G5 roadmap, they do now make POWER7 chips with 8 cores, running at over 4GHz, and their PowerXCell 8i chips (as used in the IBM Roadrunner supercomputer, based on the Cell in the PS3) is pretty nifty and would work nicely with OpenCL and Grand Central....

Reply Score: 2

Did you take a close look at the device?
by eantoranz on Thu 6th Jan 2011 20:07 UTC
eantoranz
Member since:
2005-12-18

Probably it was a Windows virtual machine running on top of QEMU (or something like that) on top of Linux on top of ARM :-)

Reply Score: 2

MS can do whatever they like ....
by fithisux on Fri 7th Jan 2011 12:31 UTC
fithisux
Member since:
2006-01-22

Having closed-source drivers is the problem. With the PowerVR situation, the VIA graphics situation and the NVIDIA situation, they just take the hardware-supported-only-by-proprietary-drivers problem to another platform. The unfair war on open source by vendors continues.

Reply Score: 2