The Future of Computing Part 2: The Hard Road Ahead

In Part 1 I discussed how the software development world is about to be turned on its head. Now in Part 2 I look at how the hardware world may be about to undergo even bigger changes, and why it won’t be a hardware manufacturer leading the way.

I’ll get royally slagged off for this so I’ll get this bit over with first…

PowerPC takes the lead from x86

The IBM PowerPC 970 (aka G5) is close to the x86 in performance, with differences in compiler and code quality currently holding it back. The new 90 nanometer 970FX will push clock rates well beyond 2.0GHz, and IBM’s compiler has already shown gains of over 30% compared with GCC on existing 970s. When the updated CPUs ship and software built with the IBM compiler gets into production, I expect the PowerPC will finally pull ahead of the x86 again after a long period in the doldrums. With 3GHz PowerPCs, long rumoured to be based on the more advanced POWER5 architecture, due later in the year, I expect both Intel and AMD will remain behind.

I do expect that Intel could stay ahead in SPEC marks (or perhaps AMD [1]), but it’ll become increasingly obvious that even the best intentioned benchmarks can be gamed by a sufficiently determined marketing department. Intel’s newly announced Prescott core is showing little if any performance improvement over the existing Northwood core at the same clock speed, yet its SPEC scores are up markedly [2].

That said, Prescott is only starting out and it has clearly been designed for high clock rates; the first CPUs shipping are not even beginning to show what this core is capable of. However, Intel are currently facing problems with high transistor leakage current in their 90 nm process, which increases power consumption dramatically and could hold the Prescott core back from its true potential. In any case we can expect top end Intel CPUs to climb well past 100 Watts of power consumption this year. Intel themselves have talked about water cooling, and given the power consumption of early Tejas samples [3] that might not be such a bad idea.

On the other hand, both IBM and AMD are using Silicon-On-Insulator technology, which reduces leakage current; consequently the 970FX already has considerably lower power consumption than the 970. We can expect both the Opteron and the Athlon64 to follow the same pattern. Intel will get its power consumption back to sensible levels when 65 nm manufacturing comes on line sometime around 2005-06.

What happens in 2005 is anyone’s guess, but with a third player in the performance desktop CPU scene things will be very good for consumers. Many readers no doubt will have a different prediction for performance levels this year but one thing we can all be sure of is the performance levels of the different CPUs will be discussed, discussed and discussed…

Will the Itanic sink?

For many years the dominant computer system has been Windows running on an x86 CPU.
Both Microsoft and Intel have tried to change this, to no effect. Windows NT initially ran on multiple platforms, but this is no longer the case today. Intel wants us all to switch to the Itanium, but its initial performance was weak and even today its promised performance gains over RISC CPUs have never materialised [4]. With Intel themselves now looking likely to produce a 64 bit x86 CPU, the Itanium and its 13-year, multi-billion-dollar development could potentially turn out to be the biggest mistake in commercial history.

On the other hand, Intel now have most of the team who developed the Alpha CPUs, and they are working on the third generation Itanium, called “Tanglewood”, due in 2005. Perhaps Itanium has only had a difficult childhood and may yet blossom into a highly successful adult.

x86 will die

One of the x86 providers sells x86 chips which are not x86 chips. They do have some hardware to address the oddities of the x86 instruction set, but nevertheless they cannot natively run x86 code. If you are into chips you’ll know that it is Transmeta I am talking about.

The AMD Opteron / Athlon64 has more registers than the standard x86 instruction set defines, but software has to be recompiled in order to use them. I’m curious to know what would happen if AMD were to include “code-morphing” software which recompiled 32 bit x86 code to use these extra registers: would 32 bit performance be as good as it is in the hardware compatibility mode? Could performance even go up? If performance was comparable, the 32 bit hardware mode could be largely removed, simplifying the CPU and making it both faster and cheaper.
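To make the idea concrete, here is a minimal sketch in C of the dispatch loop at the heart of any code-morphing scheme: each block of guest x86 code is looked up in a translation cache, translated once on a miss, and thereafter executed straight from the cache. Everything here is illustrative and all names are invented; translate_block() is a stub standing in for a real decoder/recompiler that would re-emit each block using the wider register file. This is not AMD’s or Transmeta’s actual design.

```c
/* Toy code-morphing dispatcher: purely illustrative, not a real translator. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef void (*native_block)(void);      /* a translated, directly executable block */

static void dummy_block(void) { }        /* stands in for emitted native code */

/* Hypothetical translator: a real one would decode the x86 block at guest_pc
 * and re-emit it using the wider register file. Here it just logs and returns
 * a dummy. */
static native_block translate_block(uint32_t guest_pc)
{
    printf("translating block at 0x%08x\n", (unsigned)guest_pc);
    return dummy_block;
}

#define CACHE_SIZE 4096

static struct {
    uint32_t     guest_pc;               /* address of the original x86 block */
    native_block code;                   /* its cached translation */
} cache[CACHE_SIZE];

static native_block lookup_or_translate(uint32_t guest_pc)
{
    size_t i = guest_pc % CACHE_SIZE;
    if (cache[i].code == NULL || cache[i].guest_pc != guest_pc) {
        cache[i].guest_pc = guest_pc;    /* miss: translate once... */
        cache[i].code     = translate_block(guest_pc);
    }
    return cache[i].code;                /* ...then hot code hits the cache */
}

int main(void)
{
    lookup_or_translate(0x08048000u)();  /* first visit: translated */
    lookup_or_translate(0x08048000u)();  /* second visit: straight from the cache */
    return 0;
}
```

The cache is the whole trick: hot code pays the translation cost only once, which is why schemes like this can approach native speed in loops.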

What is more interesting is that if such a technique worked on an AMD processor, it could also work on something completely different. Well, Transmeta have already proved this, but none of the Transmeta designs to date are high performance CPUs. x86 code has run on non-x86 performance platforms before, though, and sometimes at very high performance [5].

Advantage Alpha

In the 1990s DEC produced Alpha processors and these ran Windows NT. However, being incompatible with the x86 instruction set meant most of the existing software base would not run. In order to address this DEC produced a technology called “FX!32” which could emulate x86 CPUs at reasonable speed, using a technique not unlike the one Transmeta uses. In addition, the Windows NT libraries were native Alpha code and programs spent a lot of their time in them; this, combined with the Alpha’s then huge speed advantage, meant that x86 programs ran faster on the Alpha than they did on the x86.
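The library trick deserves a moment’s thought. Below is a hedged sketch, again in C, of how an emulator can intercept calls to known system library entry points and hand them to native code, so the slow instruction-level emulation is paid only for application code. The thunk table, the address and the function names are all invented for illustration; this is not DEC’s actual FX!32 implementation.

```c
/* Illustrative "call into native libraries" thunking, FX!32-style in spirit. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Native routine standing in for a system-library function. */
static int native_strlen_thunk(const char *s) { return (int)strlen(s); }

/* Map from a guest library entry point to its native replacement.
 * The address is made up for the example. */
struct thunk {
    uint32_t guest_entry;
    int    (*native_fn)(const char *);
};

static const struct thunk thunks[] = {
    { 0x77f01234u, native_strlen_thunk },
};

/* Called by the emulator whenever guest code executes a CALL.
 * Returns 1 if the call was handled natively, 0 if it must be emulated. */
static int try_native_call(uint32_t target, const char *arg, int *result)
{
    for (size_t i = 0; i < sizeof thunks / sizeof thunks[0]; i++) {
        if (thunks[i].guest_entry == target) {
            *result = thunks[i].native_fn(arg);  /* fast path: no emulation */
            return 1;
        }
    }
    return 0;  /* unknown target: fall back to instruction-level emulation */
}

int main(void)
{
    int r;
    if (try_native_call(0x77f01234u, "hello", &r))
        printf("handled natively, result = %d\n", r);
    return 0;
}
```

The more time a program spends inside the system libraries, the closer this approach gets to native speed, which is exactly the effect described above.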

Performance is no longer important for the vast majority of computing needs. As such, when CPUs can be made which execute x86 code at high performance and / or low cost compared to native x86 designs, there will be little point making them native any more. When x86 performance is “good enough” and the cost sufficiently low, we could see the end of the native x86 processor.

Raise the Itanic

The Itanium had a hardware x86 emulator but its performance has always been lacking, to say the least. Intel now intends to remove this hardware and use the very same FX!32 technology to run x86 binaries on Itaniums with better results. It can already match a low end Pentium 4, and it’ll certainly be interesting to see how well Tanglewood will handle x86 code. It’ll be even more interesting to see how it handles 64 bit x86 code, which should be a lot simpler than x86-32; emulators should do a much better job when the machine they are emulating is simpler. Once a large body of software has moved to 64 bit and 32 bit performance is “good enough” in software, I expect we will see a lot of the original x86 ISA removed from the native x86 CPUs – the primary idea extolled by the RISC movement, the simplification of the ISA, will have finally won.

I expect Intel will use this enhanced performance to move the PC world to Itaniums. Some see the apparent pending announcement of an Intel x86-64 as a major climb down for the world’s largest CPU company; Intel may view it rather differently. They may be using x86-64 as nothing more than a stepping stone to the Itanium, and in that case AMD may have unintentionally saved them a lot of effort. Moving into the x86 domain will, however, make the Itanium’s future as a high end CPU rather questionable; perhaps Itanium will replace the x86 as the main desktop CPU and become the major low end server CPU, as the Xeon is today. Of course it’ll be up against the Opteron and its successors – oddly enough, also designed by ex-Alpha designers.

The x86 ISA will eventually become a purely software problem. CPU architects will be free to be truly creative once again, and we should see some interesting designs as a result. With Transmeta, AMD’s x86-64 extensions and Intel’s x86 emulation on Itanium we are seeing the beginning of this process: the x86 is being killed, not by competition from other processors or ISAs, but by the very companies who make it.

Whatever happens, this bodes well for the future of microprocessors. With the x86 instruction set changing into a purely software problem, we could see a wider range of CPUs moving into the PC field. Transmeta, Opteron and Itanium all look to be contenders for your future PCs. One has to wonder what the results would be if IBM were to run Transmeta style “code-morphing” software on a PowerPC 970FX. Could PowerPC also be a future PC processor?

The future on the CPU front certainly looks like it is going to be active and interesting. Unfortunately not everything looks good in the future of the PC.

Microsoft will attempt to take over your computer

You may think they have control of most computers already, but this is not the case: they only control the software. I expect Microsoft to try to control the hardware as well. Why would they want to do such a thing? Simple: if they control the hardware they can make money from it.

There was a time when Microsoft could insist on companies paying a license fee for every computer shipped, whether Windows was included or not. They can’t do things like that any more, but you can bet they would sure like to. One way to do this again would be to get something they hold patents on included in PC hardware; they could then take a share of the profits from all PCs – even the ones not running Windows.

However, that involves adding something to the PC. Intel and AMD are not going to want to share their profits, so they won’t agree to this, and not even Microsoft can force something to be added against their wishes. Microsoft need to find a way to force them to agree to add Microsoft IP, and I think they’ve found it.

With the X-Box 2 Microsoft can switch to any CPU they want, and indeed they are doing exactly that.
But Microsoft are going much further than just swapping parts. With the X-Box 2 Microsoft are becoming a hardware company, or to be more precise, a semiconductor company [6].
The change of CPU in the X-Box 2 may be only a taste of what’s to come. Microsoft could also use the same hardware to produce an “Office-Box”: it’ll be plenty fast enough for the majority of applications, and with a reasonable emulator it can run the huge back catalogue of x86 applications, most of which do not need massive computing speed.

Microsoft already have much of the technology they need to do this. The .Net CLR (Common Language Runtime) allows them to host .Net applications on different hardware. The VirtualPC emulator they purchased last year allows them to run the existing x86 software base. It can all run on Windows, or perhaps on the newer cut-down version of XP they’ve just announced. This new version of XP is important because it means they can build a low cost thin client, which will make them difficult to compete with. Of course all this would mean a serious amount of rewriting of the OS. Oddly enough, Longhorn is taking a remarkably long time for an OS update; indeed some rumours have suggested it could debut as late as 2007-08. This is a very long time, and the resulting development cost will be enormous – it’ll run to several billion dollars.

Microsoft now have the option of switching to another processor architecture altogether. If AMD or Intel do not want to bend to their wishes, Microsoft can threaten to dump x86, and with the X-Box 2 they can prove their point. IBM or Motorola may not be keen on adding Microsoft IP either, but I doubt either would turn down the opportunity of producing over a hundred million high margin processors a year.

It could be that the widely rumoured multi-CPU, PowerPC based X-Box 2 is real and that this is the weapon they intend to use against AMD and Intel. That the X-Box 2 is PowerPC is almost beyond doubt – what other “state of the art CPU technology” does IBM have? A multi-CPU design not only allows Microsoft to run applications at speed, it also removes the need for various hardware devices (sound chip etc.). This is useful in a consumer games machine, but the sheer power will also enable emulated x86 applications to run alongside native apps at speed. A multi-PowerPC X-Box 2 makes a powerful – and low cost – competitor to a PC.

One company this could affect, almost by accident, is Apple. They will have more powerful PowerPC processors by then, but a low cost box with multiple processors will bring their pricing structure into sharper focus than ever before. There are other ways to boost computing power – potentially massively – and I expect Apple will be looking at these to differentiate future Macs.

Tax .Net

If Microsoft can control the hardware, they make money from it. It doesn’t matter if you are running Windows or Linux; MS will still get their pound of flesh. Microsoft are, and always have been, in it for the money, and this ensures that even if Microsoft lose their Operating System dominance they’ll still make plenty of money. Of course other Operating Systems will need to access the hardware to operate, and this will be arranged by licensing a piece of Microsoft code. Run Linux and you’ll not only pay the hardware tax, you’ll also pay Microsoft more for the pleasure of running Linux.

Having said that, this may prove problematic for Linux. If running an OS involves adding NDA’d code to the OS, BSD licensed software will not have a problem; it can work with closed source code.
Linux, on the other hand, is covered by the GPL, and the GPL, like any other license, has terms and conditions which you are not free to break. If running on MS hardware requires including NDA’d closed source code, this may break the GPL, and Linux may not legally be able to run on a Microsoft computer [7]. This will not be the fault of Microsoft; it will be the fault of the GPL – free software is not quite as free as one might think. If the GPL were truly free there would be no terms and conditions to break [8] and there would be no problem. I expect this will not really be a problem, however: the answer will be a binary driver of some form, but again you’ll have to pay Microsoft for it.

Could this work?

If Microsoft actually tried this, would it work? This is a difficult question, but Microsoft are probably the one company who could do it. Microsoft’s attempts to get into other markets have to date come up with pretty feeble results, but in the computer industry when Microsoft say “jump” the only answer they get is “how high?”. Microsoft have the power, determination and staying power to actually do something like this. But is even their power enough?

IBM, Intel and even Microsoft have all tried to get the market to change and have never succeeded. However, conditions are different now, and Microsoft have the technology to force this on the industry whether it wants it or not. The combination of the .NET CLR, VirtualPC and the lessening need for performance means the industry can be made to jump. The CPU itself is becoming irrelevant; what is relevant is Palladium, because this is the part Microsoft own and most likely the part they want to put in every desktop computer on the planet. It’s meant to be a protection system, but it’ll probably be hacked, so it doesn’t matter whether it works or not. What matters is that your PC includes it and you pay the tax.

And then of course you’ll have to pay next year, and the year after that. With their hardware in the PC, Microsoft will be able to charge you rent for your own PC. Who needs upgrade fees when people are not upgrading as they used to? Charging them rent instead is equally lucrative, and you don’t have to pay for R&D.

Now I don’t know if this is what Microsoft are actually planning, but consider the facts:

.Net CLR

VirtualPC

X-Box 2 with non x86 CPUs

Longhorn’s highly expensive extended development.

Palladium & Microsoft’s apparent openness with it.

A lessening dominance of the industry.

Slowing upgrade rates.

Microsoft’s money supply is increasingly under threat and they need to do something about it.
Microsoft have motive and opportunity.

Then, Chaos

If Microsoft can get the hardware they want into standard PCs and switch to another processor architecture, the effect on the PC industry will be chaotic to say the least. Other companies will attempt to jump in with alternative solutions using different processors, most likely running Linux. Linux however still has the problem of being overly and unnecessarily complex; it’s ready for many users’ desktops but clearly not ready for all of them, and it needs to get a move on to be ready in time. It’s not necessary to remove the complexity, just to hide it from casual users: a user should be able to use Linux without ever having to touch the command line. If OS X can do it…

There is the distinct possibility such a drastic move could backfire on Microsoft and give Linux and other Operating Systems the chance they’ve been waiting for. It could also backfire in another way: if entire PCs can be emulated, couldn’t the Microsoft hardware also be emulated? Would you need to pay the tax then?

It seems there are some battles ahead in the computer industry. Could we end up some day running Windows on a PowerPC PC, part of which we’ve leased from Microsoft?
I can’t say I know the answer to that, but what I do know is that the answer is irrelevant.

Even if Microsoft win their battles, there is another battle they will lose.
This one will change the technology industry forever. The challenge will come out of nowhere, and it will be laughed off by most of the industry – the very same industry it will go on to replace.
It will win because it will strike at the very economic rules and assumptions that this entire industry is built upon.

Stay tuned for Part 3…

—————

References & Notes

[1] Opteron does very well on SPEC with the Intel compiler, but requires a flag Intel have disabled.

http://www.realworldtech.com/forums/index.cfm…

[2] Prescott improves on SPEC but not much else – according to Intel!

http://www.theinquirer.net/?article=13851

[3] Early samples of future Intel CPU “Tejas” use 150 Watts.

http://www.xbitlabs.com/news/cpu/display/20040111115528.html

[4] A Register reader points out how Itanium isn’t so great outside of SPEC marks.

http://www.theregister.co.uk/content/61/35154.html

[5] Wolves in CISC clothing – x86 CPU Emulation

http://www.realworldtech.com/page.cfm?ArticleID=RWT122803224105

[6] Microsoft and Chips

http://www.mdronline.com/publications/mpw/issues/mpw114.html#item2

[7] That’s not quite true: you would be able to run Linux but would not be able to distribute it.

[8] The GPL is Free as in “out in the open”; it is not Free as in “unrestricted”. This difference is not made clear by the FSF.

Copyright (c) Nicholas Blachford February 2004

Disclaimer:

This series is a purely personal work about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.
