With the benefit of hindsight, [iAPX432] seems misconceived on just about every level. Six years in development, it was repeatedly delayed, and when it was finally launched it was too slow and hardly sold at all. It was officially cancelled in 1986, just five years after it first went on sale. It’s not an exaggeration to call it a commercial disaster.
[…] So whilst it’s interesting to look at the reasons why the iAPX432 failed, it’s also useful to consider why Intel’s senior management thought it would work and why they got it wrong. If they could make these mistakes, then anyone could.
We’ll look at the story of the iAPX432, examine some of its technical innovations and failures, and then try to understand why Intel got it wrong.
An excellent deep dive into iAPX432, an architecture most of us will have zero experience with. Considering the recent passing of Gordon Moore, take some time to understand one of his company’s major bets that didn’t work out.
With the Motorola 68000 available since 1979, it never stood a chance.
That’s nonsense: the 68000 was superior to the 8088/8086 too, and yet here we are…
jalnl,
Perhaps if the iAPX432 could have been released sooner, before x86 became entrenched, but that didn’t happen, and once x86 became dominant everything else would end up facing significant barriers to entry. For better or worse, software compatibility was often the deciding factor in which hardware would succeed. x86’s early lead has carried it through to today. Newer hardware designs may be better, but chicken-and-egg market challenges pose major hurdles to adoption. At least now, with the rise of platforms using byte code and the development of good-enough emulation, switching to alternative architectures isn’t as much of a non-starter as it used to be. Although we’re still in monopoly territory.
@ jalnl
The 68000 also had its own set of issues, which is why just about every 68k vendor eventually dropped, or was in the process of dropping, the platform by the end of the 80s.
Most notably its 16 MB (24-bit) memory limit. And clock speeds that never evolved very much (the line never reached 100 MHz).
Kochise,
These were upgradable specs though. The memory limit was 4 GB by the end of the series. And though 75 MHz clocks look bad today, they were reasonable for the time compared to early Pentiums of the same era.
https://www.cpu-world.com/CPUs/68040/index.html
https://www.cpu-world.com/CPUs/68060/index.html
Obviously they discontinued the 68k line in favor of PPC, but I think jalnl’s point is quite valid: x86’s design was even more limited. I believe intel owes a lot of its early success to having strong business partners despite having a somewhat clunky architecture. There’s no crystal ball that can definitively show what would have happened if PCs with DOS had standardized on 68k instead of x86, but I do think there would have been way fewer memory problems.
Consider that x86’s architecture lay at the root of many PC software pain points for so many years. Not only did x86 segmented memory modes make developers miserable, but end users would have to stumble their way through autoexec.bat and config.sys to configure HIMEM, EMS, XMS, DPMI, etc. to get different games, applications, and Windows running, and they didn’t always run under the same configuration. Microsoft eventually added a DOS feature to select configurations at boot, but the fact that this was even necessary speaks to how clumsy x86 was for DOS users in those years. All the way through DOS 6.22, the 16-bit x86 limits were still rearing their ugly heads, such that Microsoft was still writing tools to help users deal with them.
https://dfarq.homeip.net/optimizing-dos-memory/
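To make the segmented-memory pain point concrete, here’s a minimal C sketch of how real-mode segment:offset pairs map to physical addresses (the function name is mine, for illustration). Because the physical address is just segment * 16 + offset, many different pairs alias the same byte, which is part of what made real-mode programming so error-prone:

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode x86 forms a 20-bit physical address as segment * 16 + offset. */
    static uint32_t linear_address(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* Two different segment:offset pairs that alias physical 0x12345. */
        printf("0x%05X\n", (unsigned)linear_address(0x1234, 0x0005));
        printf("0x%05X\n", (unsigned)linear_address(0x1000, 0x2345));
        return 0;
    }

That 20-bit ceiling (1 MB of address space, and only 640 KB of it conventional memory) is exactly where the HIMEM/EMS/XMS workarounds came from.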
Of course, it’s all ancient history now.
@ Kochise
Almost all the 68k series had some serious issues, which most people don’t remember because the internet told them that x86 was ugly.
The 68020 and ’030 had serious issues with their MMU and cache, which is why many vendors made their own. There were also functional bugs which Motorola never addressed. Plus, Motorola’s process technology lagged consistently.
Motorola ended up pissing off most of their customers, which is why they were leaving in droves by the late 80s.
The 68000 was better than the 8086 at a lot of things. But Motorola was not able to build on that lead and execute at the level intel did during the 80s.
By the time the 386 was out, the 68k had no significant advantage over x86.
javiercero1,
Granted, most people won’t remember much about the 68k; it was before my time. However, to be fair, a lot of the x86 criticism is actually from first-hand experience. While it’s true that the 386 introduced 32-bit addressing, keep in mind that many PC users and programs still depended on 16-bit segmented memory well into the 90s, and many people were still experiencing the memory limitations on a regular basis because it hadn’t yet become an archaic CPU mode.
True, memory segmentation in x86 real mode was not the most elegant. The 68k also had issues with lack of orthogonality in its addressing modes between the 68000/68010 and the rest of the 68k family.
People compare the 68000 vs the 8086/88, when its direct competitor was the 286, which came only a few years after. By then intel offered the same 24-bit (16 MB) address space, and with protected mode x86 was doing proper multi-user memory protection, which the 68000 couldn’t. And as I said, by the time the 386 came out, the 68k didn’t have any significant edge.
People love to rail against x86 as being “ugly,” but that’s just an arbitrary, qualitative aesthetic opinion. intel managed to both execute better and keep proper backwards compatibility, and the issue, by the way, was mostly with Microsoft’s weird OS strategy in the 80s, where they actually initially expected Xenix to be their flagship OS, of all things.
In the end, straightforward compatibility and a steady cadence of execution are things that the market favored more than a slightly cleaner programming model. Also, as I said, Motorola managed to alienate a lot of their customers because there were lots of brain-dead design decisions between the 68000, the ’020, and the ’030, for example, which broke certain compatibility and made the ABI a bit wonky. And they left some serious bugs unaddressed… which is why, by the end of the 80s, every 68k systems vendor had either jumped to another architecture or was in the process of doing so.
x86 has managed to be extremely resilient, and has proven that people buy machines to run existing software, not on the promise of enabling future applications. Even intel tried to kill it 3+ times, and those attempts never went anywhere.
javiercero1,
Backwards compatibility is a two-edged sword. On the one hand we can say compatibility is good, but on the other hand each generation accumulates cruft from the past that has to be carried forward. At least Moore’s law enabled CPUs to introduce more CPU modes to overcome previous limitations, but each new mode also introduces more complexity than a more straightforward architecture would have, which is bad. Because of this, PC software and operating systems became more complex too, often ending up with conditional code that was difficult to understand, optimize, and maintain. This affects assembly and high-level languages alike. Memory segmentation is a big one. Early intel engineers were quite wasteful with opcode encodings too, which probably didn’t matter at the beginning because they weren’t planning for the future. But many instructions (think AAA) were completely useless on PCs, while more frequent and useful instructions would have to get longer encodings. The x86 floating point unit was notoriously bad and buggy. Poor implementations of some x86 opcodes even made the C standard worse, like the undefined behavior for edge cases of SHL and SHR…
(uint32_t)i >> X, where X=32, mathematically equals zero, yet C cannot define it that way because intel CPUs mask the shift count in silicon (count mod 32), so the hardware returns the value unchanged instead. It comes up often in bit-shifting algorithms and can easily trip developers. I believe AMD finally defined the behavior for AMD64. Incidentally, AMD cleaned up a lot of intel’s original cruft, though that’s a whole new topic.
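Here’s a minimal C sketch of that pitfall (the function names are mine, for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Undefined behavior in C when n >= 32: on typical x86 builds the
       hardware masks the count (32 % 32 == 0), so v often comes back
       unchanged rather than as 0. */
    static uint32_t shift_right_ub(uint32_t v, unsigned n)
    {
        return v >> n;
    }

    /* Portable fix: handle the full-width shift explicitly. */
    static uint32_t shift_right_safe(uint32_t v, unsigned n)
    {
        return (n >= 32) ? 0 : (v >> n);
    }

    int main(void)
    {
        printf("ub:   %u\n", (unsigned)shift_right_ub(0xFFFFFFFFu, 32));   /* often 4294967295 on x86 */
        printf("safe: %u\n", (unsigned)shift_right_safe(0xFFFFFFFFu, 32)); /* always 0 */
        return 0;
    }

Since it’s undefined behavior, the compiler is free to do anything at all with the first version, which is exactly why it trips people up.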
Maybe it’s a bit subjective, but some of us argue that intel PCs also had many brain-dead design decisions. Still, they had a substantial business-software advantage and the hardware was still “good enough”.
Yeah, I know, even the monopolists have a hard time reinventing themselves. I do wonder what we could have gotten if AMD had invented a new 64-bit CPU architecture instead of breathing new life into x86. With neither intel nor AMD investing in x86 anymore, something else would have replaced it, though we can’t say what the replacement would have been. The 64-bit transition probably would have been the best opportunity to replace x86. Instead of Apple migrating to x86-64, maybe both MS and Apple would have migrated to something better. Oh well.