The ARM version of Windows 8 might have just become the most desired version of Windows in our hearts and minds. After years of talking about legacy code and backwards compatibility in Windows, an Intel senior vice president, Renée James, has just stated that Windows 8 on ARM will not have any form of compatibility for legacy applications whatsoever. Update: Microsoft has responded to Intel's claims. "Intel's statements during yesterday's Intel Investor Meeting about Microsoft's plans for the next version of Windows were factually inaccurate and unfortunately misleading," the company said. "From the first demonstrations of Windows on SoC, we have been clear about our goals and have emphasized that we are at the technology demonstration stage. As such, we have no further details or information at this time."
It's no secret that Microsoft is working on a tablet user interface experience for Windows 8. This new user interface will run on both Intel and ARM chips, but Intel has confirmed that while the x86 version of Windows 8 will obviously be able to run the vast collection of legacy applications, the ARM version will not. "Our competitors will not be running legacy applications. Not now. Not ever," James said.
The setup seems to be that Windows 8 will come in two trees, if you will: Windows 8 ‘traditional’, as Intel puts it, runs on x86 and includes a Windows 7 mode to run legacy applications. The ARM version of Windows 8 will not have this Windows 7 mode for legacy applications. What intrigues me is this: does this mean that on x86, the Windows 7 mode is optional? That you can simply not install it and only run new applications?
If that's the case, then we could be looking at a significantly leaner Windows 8, where much of the legacy stuff has been relegated to an optional package. This would be the culmination of a large, ongoing project within the Windows team to componentise and restructure the Windows operating system, a process which started somewhere in 2002 or 2003.
In any case, if Intel is telling the truth here, it will at least mean that the ARM version of Windows 8 will not include any Rosetta-like technology (not surprising, since ARM is not (yet) fast enough to properly emulate x86), and that the recently demonstrated ARM version of Office is indeed fully native.
This shouldn’t be particularly shocking to anyone, and honestly it’s probably for the best. Less headaches for everyone in the long run with a clean break.
Well fucking said. It's about damn time someone broke with this ridiculous "legacy" stuff. Piling brand new stuff on top of a shitty foundation is never a good idea in the long run.
The only thing Windows has had going for it is “all the apps” vs. the competition – now they’re taking that away. Guess it will give people a chance to switch away from it more cleanly.
“All the apps” aren’t going away. Their developers will just re-build them for the ARM version of Windows.
The only programs this will affect are the ones that aren’t being actively maintained.
That's actually not that easy. A lot of applications and software use optimized assembly code which cannot simply be recompiled.
Examples are hardware drivers, the JITs in Google Chrome, Firefox, Flash Player and Java, multimedia codecs, etc.
People still write assembly nowadays, and for a reason, mind you!
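To make that concrete, here is a rough sketch (hypothetical function names, purely for illustration) of the kind of hand-tuned SIMD code you find in codecs. The portable C version recompiles anywhere; the SSE2 intrinsics version is tied to x86 and has to be rewritten (e.g. with NEON) for ARM:

    #include <stdint.h>
    #include <stddef.h>
    #include <emmintrin.h>   /* SSE2 intrinsics: x86/x86_64 only */

    /* Portable C: compiles for x86, ARM or anything else. */
    static void add_saturate_c(uint8_t *dst, const uint8_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            unsigned s = dst[i] + src[i];
            dst[i] = (uint8_t)(s > 255 ? 255 : s);
        }
    }

    /* Hand-optimized x86 path: 16 saturating byte-adds per instruction.
       This is the part that cannot simply be recompiled for ARM. */
    static void add_saturate_sse2(uint8_t *dst, const uint8_t *src, size_t n)
    {
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m128i a = _mm_loadu_si128((const __m128i *)(dst + i));
            __m128i b = _mm_loadu_si128((const __m128i *)(src + i));
            _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(a, b));
        }
        add_saturate_c(dst + i, src + i, n - i);   /* scalar tail */
    }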
Adrian
As long as the underlying HAL is well optimised (a crucial job for OS developers), application developers shouldn't be so engrossed with the performance of the architecture. I think that is his/her perspective.
But of course, the race to productivity is another issue; essentially, my opinion is that no amount of assembly optimisation can replace bigger monitors, a better desk and chair, and a good keyboard and mouse for work requiring interactivity.
Otherwise, non-interactive work should be relegated to something like Itaniums, x86, PPC or SPARC…
Sorry, but this is nonsense. Everyone is interested in A/V codecs and JIT compilers with decent speed; this has got nothing to do with productivity.
If you want to find out what difference hand-optimized assembly code makes, just compare the JavaScript performance of Internet Explorer (32 bit) with the JavaScript performance of its 64 bit counterpart which does not yet have a JIT compiler.
Ask yourself why it took Sun ages to port the Java plugin to x86_64, or why Adobe still hasn't been able to push Flash on 64 bit past beta status. It's because they have to keep up with the assembly.
You're probably also going to tell me that one never needs more than one CPU core because one person cannot perform more than one task at the same time, right? (Completely neglecting high-performance applications like data centers, rendering farms, compute clusters, etc.)
Adrian
You made a point of "performance", which is the argument of the previous poster, but not the crux of my argument.
I take it your argument is something along the lines of: if people want a faster internet browser on ARM, no amount of 'simple porting' of IA-32 optimised code to ARM will work.
My argument is:
1) However, if people want just a simple application, e.g. an internet browser, on ARM, there are alternatives. E.g. Chromium initially required SSE2 to compile, but such dependencies were removed in order to run on ARM and pre-Pentium 4 computers.
2) If productivity does not depend on CPU performance (e.g. word processing, emails, stuff), one should never put performance as a priority.
3) Keeping hardware optimisation at the application level is absurd (IMO) for most applications; such optimisations should remain hidden under the OS. Only when very high performance/accuracy is required should developers 'peer' closer to the hardware.
4) You trying to ‘put the point about CPU cores’ does not make any sense to me, I assume you know more about CPUs than I do.
So, name the app from your list that isn’t working on ARM.
Name any app in the above list that isn't already cross-platform. Bad examples.
The possibly problematic apps (I’m no expert here)…
Photoshop, AutoCAD, 3D Studio Max.
If you get Windows 8 on ARM and already paid for any of the above, do you expect to get the ARM versions for free??
Those who are on the Adobe/Autodesk heroin supply are usually paying for these apps again every year.
You selected two of the slowest and most bug-making companies.
I don't expect anything from them. It took Adobe several years to port Flash. Autodesk can't even fix annoying bugs in their $6k product, and they are well on their way to finally screwing up Maya.
Let's not forget where ARM processors would typically be used. We're talking iPad-style tablets and some laptops for now, not replacing the workstations that heavy hitters like CAD and 3D modelling software typically need. At least not in the short term.
Sorry, grammar is seriously off
I wrote it in a hurry 😉
This is one of the most prominent examples. Flash on x86_64 still hasn’t reached stable status and it has been in development for years!
And that's exactly the point: it's not just a matter of recompiling. If it were, I highly doubt that Intel would have been so successful with their x86 platform, or that Microsoft would have stopped development of NT on MIPS, Alpha and PPC.
Adrian
It's not about running them, it's about PERFORMANCE. It makes a huge difference whether you use (highly portable) C code or assembly for A/V codecs and JIT compilers.
Ever used “mplayer” and wondered why it tries to detect the CPU you are using?
Ever compared the JS performance of IE 32 bits vs. 64 bits?
http://www.cnkeyword.info/javascript-performance-competition-64-bit…
Just have a look at the benchmark results and you will understand.
Furthermore, you won't be able to run any plugins unless you have something like "nspluginwrapper".
http://support.microsoft.com/kb/896457/en-us?fr=1
http://en.wikipedia.org/wiki/NSPluginWrapper
Trust me, I'm running Linux on loads of platforms: x86, x86_64, m68k, SPARC and PPC. I know what can easily be ported and what can't =).
Adrian
I understand it perfectly. I work with multiplatform high-performance code. But at some point these apps were not optimized.
There is a big difference between a slow app and a non-working app. So getting them running is 80% of the deal.
x86 assembly code can be machine-transcoded to ARM without much problem, and the NEON ISA is actually more flexible than SSEx.
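To illustrate (a minimal sketch with made-up function names, not anyone's actual code): for intrinsics-based code, most SSE operations have a direct NEON counterpart, so the port is largely a renaming exercise:

    #include <stddef.h>
    #if defined(__SSE__)
    #include <xmmintrin.h>
    #elif defined(__ARM_NEON)
    #include <arm_neon.h>
    #endif

    /* Add two float arrays four lanes at a time (n assumed a multiple of 4).
       The load/add/store intrinsics map almost one-to-one between the ISAs. */
    static void add_f32(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
    #if defined(__SSE__)
            _mm_storeu_ps(dst + i,
                          _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    #elif defined(__ARM_NEON)
            vst1q_f32(dst + i,
                      vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    #else
            dst[i]   = a[i]   + b[i];      /* plain C fallback */
            dst[i+1] = a[i+1] + b[i+1];
            dst[i+2] = a[i+2] + b[i+2];
            dst[i+3] = a[i+3] + b[i+3];
    #endif
        }
    }

Hand-scheduled assembly is harder to move than intrinsics, of course, but the register-level operations translate in much the same way.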
Do you mean MS didn't optimize the 64-bit version?
Linux @ x86_64, ARM, Cell (in the past)
Actually, that’s a good point.
Does anyone know what a ‘legacy’ application is in this context? Is it just a binary that hasn’t been targeted to ARM when it was compiled, or is it more than that – no support for older APIs or 32-bit mode?
Hi,
In this context, I think “legacy” means Windows 7 applications (as opposed to native Windows 8 applications for either x86 or ARM).
– Brendan
Ok, but what is a Windows 7 application? Is it anything that runs on Windows 7 today? If I take a Win32 API exe targeted for x86, it won’t run on Windows 8 ARM – fair enough because it’s not built for that chip. But what if I use a new version of Visual Studio to target it to ARM and re-compile – is the resulting exe able to run on Windows 8 ARM?
If anyone knows of a document that provides a formal explanation of what legacy means w.r.t. Windows 8 on ARM, I’d like to see a link to it posted. As it stands, the statements being made so far by Intel are so vague as to be almost meaningless.
If I understand it correctly it will be similar to the OSX switch.
OS9 apps did not run on OSX. There were some Carbon (fat) apps that were compiled to run on both, but these tended not to be able to take advantage of functionality a native (Cocoa) application could in OSX. CoreAudio, for example.
I can see MS introducing something similar (fat apps), as I doubt they want to force their whole dev community to rewrite their apps from scratch.
If MS were going to have a clean break codebase-wise, then they'd more likely just drop stuff like Win32 but keep the .NET framework.
This would mean that the majority of .NET apps should recompile easily, as well as tie into MS's current trend of promoting .NET as their "platform independent" one-stop shop for MS platforms (Xbox, mobile platforms, etc.).
Wrong. I was there during the transition period. OSX in its initial form had a Classic mode which allowed running OS9 applications in OSX; it was dropped in later revisions. The PowerPC to Intel transition was similar: they added Rosetta for that task, which was dropped a while ago. Apple usually provides an emulation layer for such things, which gives a grace period of a handful of years; after that, a clean cut is performed.
I was there too. Even there during the move to PPC. You shouldn't second-guess the age of others.
Classic mode was an emulation layer. If you remember, it used to load up the whole OS in a window first. OSX itself did not run OS9 apps. Apple did a good job of making them look like OSX apps, though, so it was as seamless as possible.
Given what you say below, you don’t understand the OS9 to OS X Switch, so I’m doubtful…
Wrong. There were two ways in which OS9 apps ran on OS X:
1) Carbon. A Carbon app was built with a subset of the “Classic” Mac OS API (called, um, Carbon or Carbonlib if you are pedantic) and was compatible with *both* OS9 *and* OS X. The same exe. No “fat” involved. In OS9 it ran in a slightly more restricted way (like, no protection and co-operative multitasking.)
2) "Classic", which was based on the earlier BlueBox technology found in Rhapsody. Mac OS9.x ran in a VM as a process and the apps ran in that VM instance fullscreen, in a way that appeared enough like OS X to make it feel sort of "okay".
There were no such apps. A FAT app is from an even earlier transition. Fat apps were apps with both 68000 and PowerPC code forks. They came about when Apple transitioned to PowerPC in the early 1990s. They have nothing to do with Carbon. Carbon was a common subsystem that ran the same PEF (Preferred Executable Format, the standard PowerPC exe format used for Mac OS classic apps, aka CFM, which IIRC stands for "Code Fragment Manager") executables on both Mac Classic OS 8.6 and up (with varying degrees of success with the earlier versions of Carbonlib) and Mac OS X up till 10.5 on PowerPC (and I guess on Intel through Rosetta).
Yes and no. Carbon apps were not meant to dip outside of their sandbox without verifying the API existed. I also seem to recall that if you compile a Carbon app that uses the OS X API, you will end up with a non-PEF/CFM exe (as all OS X native Cocoa apps and "libs" (frameworks) are in Mach-O format), and that won't run on OS9 anyway. This bit I'm a little hazy on.
Somehow, I doubt that Intel is privy to Microsoft's internal decision making on this. Also, while NT for ARM may need multiple different releases for different ARM platforms, the whole talk about one version of ARM Windows not being able to run stuff compiled for another seems shaky: ARM Linux, after all, has binary compatibility across different sub-architectures, as does NetBSD/ARM. Why would NT/ARM be any different?
It says it won't support x86 apps. It makes no mention that this is a super slimmed-down, optimized version, free of legacy code and whatnot. Only that it won't run x86 applications. It's still Windows, after all.
My guess is that "pure Windows 8 applications" will be .NET only. That way, the .NET runtime will emit ARM or x86 assembly depending on the platform it's running on.
So by "legacy Windows 7 applications" I guess they mean non-.NET software that will only be able to run on x86.
Only time will tell, though…
I agree on the .Net part. One of the initial goals was to replace the Win32 API with the .Net CLR. So this makes sense.
Hope they don't do a WinMo 6.5/WinMo 7 on us and deprecate the current .NET GUI frameworks, though. That left no way to target both platforms at once.
So as long as I can continue to run my WinForms, WPF, and Silverlight apps, I'd be very happy.
I will be very happy if they drop Win32 and MFC for good. They are ugly from a programming point of view.
And a native implementation of the C# compiler and WPF would be nice, too. It would be more elegant than using P/Invokes.
Agreed, but for starters, it would mean MS would have to rewrite Office, Internet Explorer, and all of their other apps that haven't yet been converted to .NET. Ain't gonna happen. Shit, we just got a 64-bit Office last year.
Maybe a quick dirty hack using some win32 .NET bindings?
Sure they can rewrite all major apps in .NET. I don't think it would be harder to implement Office or IE in .NET than it is to implement Visual Studio 2010.
After all, they put all their money on .NET.
IIRC, only parts of VS 2010 were converted over to .NET, such as the code editor. I don't think they rewrote the compilers and the entire UI.
You simply cannot, unless you want to run into serious speed hits. .NET probably has less of a problem from a UI standpoint because it can hook natively into the Windows controls, but as soon as you write custom controls, expect some speed hits. Bearable, but they are there. Entire IDEs have been written entirely in Java and people can work with them quite well (I am one of those who uses Java tools day in, day out), but you cannot neglect the fact that the underlying VM takes its toll to some degree. The same goes for .NET, which has similar performance.
Only WinForms hooks into the native Windows UI.
The Visual Studio 2010 editor uses WPF, which manages and draws the entire UI itself. At the bottom layer, of course, it goes into DirectX and hits native code. But everything above that is managed. All the built-in controls, from buttons to scrollbars — all written in C#.
Maybe a quick dirty hack using some win32 .NET bindings?
Sure they can rewrite all major apps in .NET. I don’t think it would be harder to implement Office or IE in .NET than is to implement Visual Studio 2010.
After all, they put all their money on .NET.
It won't happen that everything will run 100% in .NET. I personally think they will add a dual-code bundle exe format in the next Windows as well, so that they can bundle ARM and Intel code side by side; the same goes for the DLL format. And they will add the compiler options in Visual Studio.
.NET itself will be a no-brainer: once the VM runs, everything pure .NET runs out of the box.
They actually have Office and IE running on this thing. There was a long feature on Engadget, with a shot of Office running as well:
http://www.engadget.com/2011/01/08/editorial-windows-on-arm-is-a-bi…
Thus they might be keeping some of Win32, or they might have already converted the application.
Microsoft already demoed Office 2010 on ARM, so no it’s not just .NET.
Interesting that they demo’d Word, though.
Excel is known to include a bunch of x86 assembly code.
Assembly code is not a terribly big problem. Just use an assembly -> C translator, and you get a compilable file to use. It'll work the same, maybe slightly slower, but it'll allow you to actually get a working executable for now, and you can optimize it again for the new platform later when all the more important tasks are done.
Uh, I highly doubt that this will work in all cases.
When people code in assembly, they do it for a reason and not for fun. It’s about SIMD instruction sets like MMX, MMX2, SSE etc. I want to see a decompiler which can create usable C code from such highly optimized assembly code.
If all of this were so easy, it wouldn't have taken Adobe and Sun so long to get their plugins (Flash and Java) running on x86_64.
I'm not saying that it is impossible to port these applications, but it takes a lot of resources, and any software company/developer will always have to decide whether it's worth the effort to port to a completely new architecture.
Why would they now switch again after just having switched to x86_64?
I actually highly doubt that Windows on ARM will have real chances on the desktop market. The only markets that would be eligible are tablets and smartphones, where competition is very, very strong thanks to Android and iOS =).
Adrian
assembly->C is hard
x86 asm -> ARM asm is not
It’s only hard if you want to get a higher level view of what’s going on (ie. “decompile”).
For mere translation it's easy: create a struct representing all registers, then replace "mov %eax, 0" with "registers.eax = 0" and so on. x86 and ARM even share endianness and register size (in the common configurations).
It will look ugly and it will be full of "goto" statements, but a C compiler will happily create acceptable ARM assembly from that.
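A minimal sketch of the idea (hypothetical register struct, purely illustrative; flags, the stack and calls are ignored here):

    #include <stdint.h>

    struct x86_regs { uint32_t eax, ebx, ecx, edx, esi, edi, ebp, esp; };

    /* Original x86:
           mov  eax, 0
       top: add  eax, ecx
           dec  ecx
           jnz  top
       Translated, one C statement per instruction: */
    static void translated_block(struct x86_regs *r)
    {
        r->eax = 0;                      /* mov eax, 0   */
    top:
        r->eax += r->ecx;                /* add eax, ecx */
        r->ecx -= 1;                     /* dec ecx      */
        if (r->ecx != 0) goto top;       /* jnz top      */
    }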
You'll need to create a fake stack and emulate calls. That will be a real mess (and a very slow one).
Microsoft had tablets running Windows long before Apple. Their only mistake, but a fatal one, was that they used a desktop OS for the tablets instead of developing a new UI for tablets. I hope that this time they will do it right.
The nicest thing for devs will be that the software we write will run on PCs, clouds (Windows Azure), Windows 8 tablets and Windows Phone.
Write once, deploy on four platforms. And all that, supposedly in 2012.
Just do the math:
Quad-core 2 GHz ARM chips with dynamic recompilation of x86 code would be sufficient for most programs that handle mainly text and numbers.
Multimedia and games are a different thing, but most enterprises would be happy with that as long as .NET and Java have native support.
Seriously, I wonder if some posters on this site even know what a computer does. Other than "handling numbers", a microprocessor really doesn't do much (characters in text are nothing but numbers).
Once Intel gets their new 3D process going, their new Atoms will have a very good power/performance envelope. In fact they will be rather competitive with those mythical “quad 2GHz ARMs.” Given the binary compatibility they offer, it is going to be really hard for ARM to break into the data center.
And yes, binary compatibility is still an issue. That is why SPARC still has a market.
Intel and Microsoft have a long history of playing these games and playing hardball when it comes to negotiating. Microsoft always dangles the hardware abstraction angle to force Intel to submit, and Intel always threatens Microsoft with their next big HW platform not based on MS. It is a dysfunctional dynamic duo.
Yep, see here: http://www.osnews.com/comments/24753
Well, MS doesn’t depend on Intel and Intel doesn’t depend on MS (at least theoretically).
I meant office programs like Excel (numbers) or Word (text); maybe I butchered that a little. Sorry, not a native speaker.
And sure, Intel will maybe get really good, maybe even better than ARM, but still, a quad-core ARM with good emulation could reach speeds similar to today's slowest Atoms, which would be sufficient for a lot of office work.
To run Office on Windows 8 on ARM, MS will only need to recompile. But they may implement Office in .NET, so it will be pretty much hardware-independent.
Implementing one architecture on top of another requires a large amount of computing power, and ARMs aren't that powerful.
It doesn’t have to be done on .NET.
NT-derived OSs have been based on a Hardware Abstraction Layer from the get-go, so even most code using the Win32 API only needs a recompile to be portable.
Windows has run on MIPS, PPC, Alpha, and as CE it has even run on ARM platforms before.
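As a rough illustration (a minimal sketch, nothing architecture-specific assumed): plain Win32 code like this builds unchanged for x86, x64 or ARM with the matching compiler, because everything below the API is the kernel's and the HAL's problem:

    #include <windows.h>

    int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrev, PWSTR pCmdLine, int nCmdShow)
    {
        SYSTEM_INFO si;
        GetNativeSystemInfo(&si);        /* reports the processor architecture */

        WCHAR msg[64];
        wsprintfW(msg, L"Processor architecture id: %u",
                  (unsigned)si.wProcessorArchitecture);
        MessageBoxW(NULL, msg, L"Same source, any NT architecture", MB_OK);
        return 0;
    }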
The issue when it comes to the desktop is that ARM will have a hard time convincing users to buy their platform to run a limited subset of Windows apps, when for a few dollars more, or even at the same price, they could have access to an x86 platform which runs the entire Windows SW catalogue.
That is the gimmick that has kept Intel at the forefront of the mid and high ranges in processor sales. Their massive inertia of legacy x86 SW has been both a blessing and a curse for them. Given their financials, I'd say mostly a blessing.
And why would anybody buy a processor to run emulated code slower than the native Intel alternative, which performs better and will probably have a similar price point running that code non-emulated?
Transmeta, DEC, and all the companies which have tried to sell processors doing dynamic x86 translation have failed miserably. In the handheld/embedded/mobile market, ARM has a clear value proposition vs. x86. In the desktop and data center not so much.
Running emulation for the sake of emulation is a headache that few people want to pay for.
I agree about the situation on the desktop (too much inertia), but in the data centre it's a whole different kettle of fish. Most companies running Windows Server these days usually run the products provided by MS, like Active Directory and MSSQL. If those are ported to ARM, then I don't see an issue with data centre adoption of the platform.
Furthermore, the FLOSS Unix and *nix-like OSs out there are mostly a recompile away from running on ARM (and often already do). With physicalization (surprisingly) taking off, there are a whole bunch of applications where ARM would fit in quite snugly and with the ARM virtualization code being implemented currently, I’ve already spoken to some people expressing interest in running the platform as a test bed in the hope of further reducing their current power costs.
When it comes to tablet based systems, an area MS has said it’s getting back into in a serious way with Windows 8, the market has shown that as long as you have apps that run on the platform people will buy it. Let’s see if WP7 actually takes off and if so, MS can leverage what they’ve learned from Apple.
Frankly, I think that what’s happening at the moment, in that ARM chips are forcing Intel to work towards ULV chips that can compete, is a very good thing. Intel with X86 has had far too much control of the market for far too long and although AMD has given them the odd run for their money, this is the first time I’ve seen Intel sweat for as long as I can remember. Competition truly is a good thing.
Why do you think ARM is a beast from a computing point of view? It can have 8 GHz and 16 cores and it will still suck. It needs some architectural changes to bring some performance. And if you do some architectural changes, it won't be an ARM anymore.
http://www.geek.com/articles/chips/calxeda-to-offer-480-core-arm-se…
Servers, power savings, virtual machines, cloud computing? Ever heard of them? Money talks, and if Microsoft didn't hear the train coming, they wouldn't touch ARM with a ten-foot pole. Intel is putting on a very brave face, but they are getting their a$$ handed to them on two fronts: smartphones and tablets. Are they going to make that servers and workstations, also?
I would wager that Intel, not ARM, is the one that needs to change their architecture.
Also, CentOS was given Microsoft's blessing this week, so more money could be put into things like virtualization and cloud computing. Methinks Intel should be sharpening their pencils if they want to stay alive. IBM got complacent, and history has a habit of repeating itself if you don't learn from others' past mistakes.
And this is why MIPS missed the boat. It's already been proven to be scalable to high performance. The license holders just allowed ARM to outmaneuver them.
Hi,
Intel wasn’t even trying when they created the first generation Atom (the one that the article you linked to compares to ARM Cortex A9 – look at the dates). First generation Atom used an old power hungry chipset that used more power than the CPU, old 45 nm fab, etc. For performance/watt it got beaten by just about everything (Nehalem, VIA’s Nano, AMDs CPUs, etc). More recent Atom isn’t much better – they improved the chipset (but didn’t do much for the CPU itself). It’s like Intel were just toying with the idea as a way of getting more use out of manufacturing plants that had become too old for their main product lines.
If Calxeda’s 480-core server was actually good, they would’ve compared it to Intel CPUs intended for servers (Core 2, Nehalem or Sandy Bridge Xeons) instead.
I’d wager that Intel will actually start trying; Microsoft won’t continually keep making new distributions of Windows for each new ARM SoC (and ARM notebooks and smartphones will be stuck with the 4 existing SoCs that Windows 8 will support, which will become obsolete fast); after about 3 years (when Intel has caught up on the low power/low performance market) Microsoft won’t be able to see why they bother with the hassle of many different ARM distributions; Windows 9 will be 64-bit 80×86 only; and ARM will go back to embedded systems.
The only real hope for ARM is if they get together and create a standardised platform (rather than just a standardised CPU/instruction set); so that it’s possible for Microsoft to create one version of the OS for all ARM systems rather than having to customise the OS to suit each different ARM system. It really wouldn’t take too much either – just slap something like OpenFirmware on top and fill out any of the missing features. Sadly, it’d be like herding cats, and I can’t see it happening quickly enough.
– Brendan
The way it worked in the NT 3.1 to NT 4 days was, Microsoft made the port to your architecture, but you, as the motherboard manufacturer, were responsible for making the HAL.
So, every single DEC Alpha motherboard that supports NT has a unique HAL for it.
Yes, they all use cheap x86 hardware which provides a lot of mips per dollar.
Just check top500, the vast majority there run x86 (64 bit) for a reason:
http://www.top500.org/stats/list/36/procfam
The strongest non-x86 architecture is the Power architecture (not to be confused with PPC), and it's at just 10% of x86_64 alone.
No company that still has all their senses together will invest in anything but x86 hardware. Yes, I agree that Power and SPARC are incredibly nice architectures and probably beat x86 in many fields.
But NO architecture will ever be able to beat x86 when it comes to mips per dollar, and that's what primarily counts for the applications you mentioned. Even Intel dropped its non-x86 architecture, Itanium, at some point, just because x86 was way more successful (even though many claimed that Intel introduced Itanium just to "clean up" the market by kicking out all the other architectures and then dropping it once only x86 and Itanium were left).
On the other hand, ARM beats x86 when it comes to mips per watt, and that's why ARM is so widely adopted on mobile platforms. Intel actually had a very low-power architecture with the Pentium M, which was based on the Mobile Pentium III. Not sure how much of that platform is still present nowadays; Intel's processor genealogy is quite complex and ramified.
Adrian
Core and Core 2 were direct descendants of the Pentium 3/M architecture. That’s why they had such good performance per clock compared to P4. Atom is actually based on an even older version of x86: it’s almost pure P5 (Pentium 1) in terms of its basic design.
I never said that it could compete with a modern x86 CPU. It would need huge caches, etc. Not going to happen in 2012.
BUT if the emulator is really advanced it could handle simple Office work with ease. That is what I said…
Calm down with your x86 fanaticism. Atom is highly inferior to the Cortex A9 in all areas except memory bandwidth (in popular SoCs). BTW, there are 6-core ARM A9s with a 512-bit memory bus (Toshiba CEVO).
The A15 is a new arch with triple-issue OoO. A 4-core A15 will rival a Core 2 Duo with much, much lower power consumption.
I read a rumor that VMware was working on getting x86 apps to work on ARM with ThinApp.
Take that info with a grain of salt.
Wouldn't it be nice for Intel to start cutting out some old, unsupported instructions and greatly enhancing x86 with this new version of Windows? I mean, if Windows 8 is breaking backward compatibility by requiring a Windows 7 layer, that layer could support some kind of emulation on the new chips. Dropping real mode and old, unused instructions could free up some nice space on the silicon.
Take that with a grain of salt, but I think I’ve read somewhere that on modern x86 chips, real mode is emulated anyway, so you’d only save some kB of ROM.
Plus, you still need real mode for some fairly useful BIOS instructions and extensions.
I guess you are talking about the “Virtual 8086 mode”?
http://en.wikipedia.org/wiki/Virtual_8086_mode
This is not the real "real mode" but a virtual real mode on top of protected mode. The real "real mode" still exists natively on any x86 CPU, and it actually takes a reset cycle to get from protected mode back into real mode =).
If you want to know the details (and got the time), I recommend the programmer’s handbook for the 386, which covers everything you need to know about x86 processors.
Linus Torvalds mentions somewhere in the early kernel sources that he read this manual as well, besides Tanenbaum's famous book on operating systems, of course.
Adrian
No, I don't mean unreal mode. I have read the system programming part of the x86 references from Intel and AMD (especially AMD) enough times to know what I'm talking about.
What I think I’ve read somewhere is that real mode instructions do not use any dedicated hardware execution units anymore on modern x86 processors. That they are fully implemented using microcode and the hardware used by higher-level modes.
There's a story over on The Register:
http://www.theregister.co.uk/2011/05/19/microsoft_contradicts_renee…
where a Microsoft rep said that "recent comments from Intel software chief Renée James on the next version of Windows were "factually inaccurate and unfortunately misleading.""
Interesting, I think.
Reminds me of the Intel FUD (Fear, Uncertainty and Doubt) campaigns of old when it looked like AMD or Motorola were going to eat Intel’s lunch.
What Microsoft said is this:
“What Intel says is true, but we would like them not to talk about it. We, at Microsoft, can spin that stuff in a more positive way.”
For about 10 years now, Microsoft has been pushing developers to write code in .NET for Windows… Many have… I figured the reason behind .NET, beyond just pissing on Java, was so Windows could run software outside the traditional model. For example, moving from a 32-bit OS to a 64-bit OS, changing platforms, and mobile devices.
Did Microsoft really just make a development platform that runs slower than a straight compile yet only works on one platform and hardware standard?
If I were Microsoft, one thing I would really, really not like is developers rewriting their ARM versions on a cross-platform development platform. Which many might do, with Linux's user base growing and a good corporate customer base.
Would the majority buy a $350 Windows tablet if the $200 Linux version runs all their applications as well?
And for the developers themselves… I can't even imagine what Adobe thought about rewriting Creative Suite.
…at least native support in Windows could probably get blocked by Intel as MS probably doesn’t have the right licensing to x86 technology to make the virtual machines.
That said, even Win7 is not backwards compatible with all WinXP applications, and the older the version of Windows required, the less likely it is to be compatible. That's why they introduced the XP Mode virtual machine in Win7: it runs WinXP under a special virtual machine instance (complete with desktop) to run those legacy applications in.
However, to put it on ARM, Microsoft would have to get licenses to the x86 instruction set, licenses they likely don't have. Further, they would need licenses not just from Intel, but also from AMD and likely a few other smaller players too (e.g. Transmeta) if they want to do certain things, like supporting AMD64 (which utilizes Transmeta technology).
So yeah, they may try, but it'll likely be irrelevant anyhow. Just one more thing to break the elephant's back. Didn't expect them to topple a tree onto themselves, though.