I was wondering what the IBM Personal Computer would have been like if they had chosen the Motorola 68000 instead of the Intel 8088, so I used my MCL86+ to emulate the 68000 and find out!
The MCL86+ is a board that uses a Teensy 4.1 to emulate a microprocessor in C code while using its GPIOs to emulate the local bus of the Intel 8088. It can be used as a drop-in replacement for the Intel 8088 and can run cycle-accurately as well as in accelerated modes.
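To give a flavor of the approach, here is a minimal hypothetical sketch (not the actual MCL86+ source) of what such an emulator core looks like: an opcode-dispatch loop in plain C, with the Teensy GPIO bus code stubbed out as a simple array read so the sketch compiles and runs anywhere.

#include <stdint.h>
#include <stdio.h>

static uint8_t memory[65536];   /* toy 64 KiB memory standing in for the PC's RAM */
static uint16_t ax = 0, ip = 0; /* just enough 8088 state for the demo */

/* On the real board this would bit-bang an 8088-style bus cycle on the
   Teensy 4.1 GPIOs; here it is a plain array read so the sketch runs. */
static uint8_t bus_read(uint16_t addr) { return memory[addr]; }

int main(void) {
    /* Toy program: MOV AL,0x42 (B0 42) followed by HLT (F4). */
    memory[0] = 0xB0; memory[1] = 0x42; memory[2] = 0xF4;

    for (;;) {
        uint8_t opcode = bus_read(ip++);
        switch (opcode) {
        case 0xB0: /* MOV AL, imm8 */
            ax = (uint16_t)((ax & 0xFF00u) | bus_read(ip++));
            break;
        case 0xF4: /* HLT */
            printf("HLT: AL=%02X\n", (unsigned)(ax & 0xFFu));
            return 0;
        default:
            printf("unimplemented opcode %02X\n", (unsigned)opcode);
            return 1;
        }
    }
}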
That’s a neat trick.
I’m not surprised, and I’d add that if IBM had used the 8086 with 16-bit paths on the bus, performance would likely have been similar in character, just faster. The main speed killer was really the 8-bit bus, which dragged the PC down to the level of a good Z80-based system. The two stayed competitive speed-wise until RAM prices fell enough to exploit the larger address space of the 8086/8088 architecture, at which point 64K started to feel rather cramped. 8-bit-only machines like the Z80 and 6502 had of course had bank switching forever, but software had to be hardware-specific to take advantage of it.
According to Gates, they didn’t go with the 68000 because it wasn’t ready. But it’s interesting to see it wouldn’t have been that different performance-wise. Also, they wanted a low-end machine to avoid cutting into their other business segments.
https://web.archive.org/web/20010823113747/http://www.pcmag.com/article/0%2C2997%2Cs%3D1754%26a%3D11072%2C00.asp
Yup. The 68000 was a better choice from a technical standpoint. But a lot of people forget that companies sell products, not technology 😉
IBM required second sourcing for their parts, which Motorola didn’t want to do. Also, Intel had a whole development ecosystem that Motorola lacked, so IBM internally was more acquainted with x86 development than 68k. And time to market was of the essence, which is why they outsourced the OS to Microsoft.
The PC was supposed to be a stopgap product to get something into a booming market where they had no presence. And I don’t think IBM themselves thought they were creating an industry standard in the least. Alas…
The product that most resembles what a PC with a 68000 would have looked like is the AT, as the 80286 was the direct counterpart to the 68000.
If only…
Kochise,
Haha. Many of us think the 68000 had the better architecture at the beginning. It’s really quite remarkable how IBM’s choice of partners at the dawn of the industry would end up shaping the hardware and software players decades later. With IBM backing the 68000, it could have become dominant, whereas Intel may not have survived. Same with Microsoft. It’s the butterfly effect; we could spend hours contemplating how different things could be when you change variables here and there 🙂
Although I loved it, I am not sure about the “conclusiveness” of this test.
The 68000 is a hybrid 16/32-bit processor with a boatload of registers relative to Intel. The 8088 was an 8-bit chip (closer to the 6800).
As the video says, the performance of the ASCII-to-screen scrolling is almost certainly limited by all the other 8-bit chips involved in this task. IBM chose the 8088 over the 8086 largely so that these cheaper 8-bit components could be used (as I understand it).
So, ya, when you are basically just using these other 8-bit chips, not much difference. What if you were doing math, though, or even more complex application logic? I would think that the 68000 would pull away pretty quickly. How much memory could each system conceivably access?
I think this choice could have been much more consequential than indicated.
That said, the 8086 was also an option and they did not even use that.
If the 68000 was a 16/32-bit CPU, then the 8088 was an 8/16-bit processor. At least be consistent 😉
Totally fair. I should have made this point myself.
They did not use the 8086 because Intel couldn’t sell it for the price IBM wanted, due to contracts with other customers. But the 8088, being a “totally different” product (even though it’s like 99% the same), didn’t fall under those contracts, so Intel could sell it cheaply to IBM.
Here’s some “proof” for what I wrote above: https://www.eejournal.com/article/how-the-intel-8088-got-its-bus/
The 68000 was 16 bits internally but could handle 32-bit math. Think of how much more powerful Lotus 1-2-3 (THE killer application for the IBM PC) would have been on a processor like that. That would be the benchmark to run if you want to do a “what if” between the 8088 and the 68000.
The 68000 had 24 bits of address space (16 MB) instead of 20 bits (1 MB). How much different would the industry have been if the PC could have been expanded to 15+ MB of RAM for applications instead of 640 KB?
With a 32-bit ISA and 16 MB of RAM, the PC world would have looked a whole lot more modern right from the start.
Now, not only was the 68000 not ready yet but I am sure it cost a fair bit more. I am not questioning the IBM team at all here. It just seems a bit crazy to say “I tested what an IBM PC with a 68000 in it would have been like and concluded that it would have been about the same experience as the 8088”.
The 68008 is maybe the processor to compare the 8088 to, as it also had an 8-bit data bus and 20-bit addressing (1 MB of RAM), although it still had a 32-bit ISA, and there was a 22-bit (4 MB RAM) version. It came much later though, powering computers like the Sinclair QL in 1984.
The 68000 did not handle 32-bit math. Its ALU was 16-bit only, BTW.
Yet it did 32-bit math transparently, without needing an ISA extension. It only took an ALU upgrade, as in the 68020, plus an external 32-bit bus, for it to reach its true performance. Before the 386.
@ Kochise
You’re right, the ISA was 32-bit orthogonal.
The 68000 has 32-bit registers, a 32-bit program counter, a 32-bit address space, and a 32-bit ISA. It had instructions that operated on 32-bit values and it could do 32-bit math. Pointer arithmetic (memory addresses) was always 32-bit. It had a flat 32-bit address space that could refer to up to 4 GB of memory.
You are correct that the 68000 had a 16-bit ALU, but it had two of them. It also had a 16-bit data bus, so 32-bit values required two reads (and the same for writes).
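To make the two-step arithmetic concrete, here is a purely illustrative C model (not how either chip is actually wired) of a 32-bit add performed with 16-bit ALU operations: low halves first, then high halves plus the carry. Internally the 68000 spends extra cycles doing roughly this; on an 8086 the programmer had to spell it out as an ADD/ADC pair.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a 32-bit add done the way 16-bit ALU hardware
   (or an explicit ADD/ADC instruction pair) has to do it. */
static uint32_t add32_on_16bit_alu(uint32_t a, uint32_t b) {
    uint32_t lo = (a & 0xFFFFu) + (b & 0xFFFFu);              /* low halves */
    uint32_t carry = lo >> 16;                                /* carry out of the low add */
    uint32_t hi = ((a >> 16) + (b >> 16) + carry) & 0xFFFFu;  /* high halves + carry */
    return (hi << 16) | (lo & 0xFFFFu);
}

int main(void) {
    printf("%08X\n", (unsigned)add32_on_16bit_alu(0x0001FFFFu, 1u)); /* prints 00020000 */
    return 0;
}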
From a programmer’s perspective, the 68000 was a 32-bit CPU (like the Intel 386SX). The 68020 and newer were true 32-bit CPUs, and the 68000 ran the same software.
The 8088, by contrast, not only had an 8-bit data bus but was a 16-bit CPU. Even though it had 20 address lines, it could only address memory 16 bits at a time ( 64 KB ), which made memory management much more difficult. It had a single 16-bit ALU and could do 16-bit math. The ISA was 16-bit and operated on 16-bit values.
The Intel CPU that first supported 32-bit instructions was the 386. The 386SX had 24 address lines (16 MB max RAM) and a 16-bit data bus, just like the 68000. In “protected mode” the 386 also had a flat 4 GB address space like the 68000 did. The difference between a 386SX and a full 386 is like the difference between the 68000 and a 68020.
“it could only address memory 16 bits at a time ( 64 KB )” – that’s a bit confusing. Like you said before, it has a 20-bit address space, and it’s byte-addressable, so it can address 1 MiB. Since the 8088 has an 8-bit data bus, it can read a single byte at a time, as opposed to the 8086, which can read 16 bits at once if aligned on a word boundary. The 64 KiB limit has to do with the index registers being 16-bit, so you can only use 16 bits to address memory, which is indeed 64 KiB. To allow the full 1 MiB to be addressed, the 8088/8086 has a kind of built-in bank switching, but with a 16-byte granularity. This allows applications to have a separate 64 KiB code space, a 2×64 KiB data space, and a 64 KiB stack. It also allows, via “far jumps”, extending the code space.
jalnl,
As an x86 developer, I don’t find that confusing. It could only access 64KB at a time; access to the high address bits required changing the segment registers. That was obviously possible, but only 64K was reachable at once. You could switch out the segment register on every memory access to arbitrarily reach the full 1MB, but this would have been extremely slow.
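For anyone who hasn’t done real-mode programming, the address arithmetic is simple to model in C (the helper name below is made up, but the math is exactly what the 8086/8088 does): physical = segment * 16 + offset, so every access goes through some 64K window, and many segment:offset pairs alias the same byte.

#include <stdint.h>
#include <stdio.h>

/* Real-mode 8086/8088 address formation: a 20-bit physical address built
   from two 16-bit values (the real chip wraps at 20 bits; omitted here). */
static uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* Two different segment:offset pairs hitting the same physical byte. */
    printf("%05X\n", (unsigned)real_mode_addr(0xB800, 0x0000)); /* B8000 (CGA text RAM) */
    printf("%05X\n", (unsigned)real_mode_addr(0xB000, 0x8000)); /* B8000 again */
    return 0;
}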
tanishaj,
I agree, the ISA is 32-bit. While there are many ways to implement it efficiently or not, it’s still a 32-bit ISA, and this is unquestionably better than a 16-bit ISA for future-proofing. The x86 ISA had very little future-proofing, and it would require more hacks and operating modes to extend it throughout the years. Obviously the extensions were technically doable, but they resulted in an architecture with a lot more complexity, inefficient instruction prefixes, and friction on the compiler and software side, with unpleasant and unnatural long jumps and near and far pointers for software developers. And it wasn’t just bad for developers; x86 went on to be downright frustrating for millions of users fighting with memory extenders and limits. I wouldn’t be surprised if the world lost billions of dollars of productivity for having gone with the x86 architecture and its inferior specs.
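Here’s a rough modern-C model of one of those frictions (the struct is invented; the semantics are the real-mode ones): two far pointers can reference the same byte yet compare unequal field-by-field, which is why "huge" pointer arithmetic on DOS compilers had to normalize segment:offset pairs.

#include <stdint.h>
#include <stdio.h>

/* Invented struct modelling a real-mode far pointer; the 20-bit wrap on
   the real chip is ignored for simplicity. */
typedef struct { uint16_t seg, off; } far_ptr;

static uint32_t phys(far_ptr p) { return ((uint32_t)p.seg << 4) + p.off; }

/* "Huge"-style normalization: canonical form with the smallest offset. */
static far_ptr normalize(far_ptr p) {
    uint32_t a = phys(p);
    far_ptr n = { (uint16_t)(a >> 4), (uint16_t)(a & 0xFu) };
    return n;
}

int main(void) {
    far_ptr a = { 0x1234, 0x0010 };
    far_ptr b = { 0x1235, 0x0000 }; /* same physical byte as a */
    printf("same byte:     %d\n", phys(a) == phys(b));                   /* 1 */
    printf("fields equal:  %d\n", a.seg == b.seg && a.off == b.off);     /* 0 */
    far_ptr na = normalize(a), nb = normalize(b);
    printf("normalized eq: %d\n", na.seg == nb.seg && na.off == nb.off); /* 1 */
    return 0;
}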
Yeah but “time to market” defines the winner, once the consumers are locked-in.
@ Tanishaj
The 386SX and 386 were internally the same, with different data bus widths.
The 68000 had a different internal structure, especially in terms of ALU width, compared to the 68020. Also, the ’020 had a lot more extensions, especially in terms of memory protection, compared to the 68000.
Good points. I am not suggesting that the 68000 was a head-to-head 386SX competitor. The 386 had about four times as many transistors as the 68000. The parallels are interesting though, given the 6 years between when they were introduced.
My point was that the relationship between the 68000 and the 68020 is not equivalent to that between the 386SX and the 386.
Emulating another processor in C code on a microcontroller is fun. I did that with a 6809 emulator on an Adafruit Metro M4 board (an Arduino-like board with an ARM CPU), and on a RasPi. They ran 3 and 15 times faster than a real 6809, respectively.
To really see what a 68000 would run like as an IBM PC machine, you would need something real, like a Peripheral Technology PT68K-2. It’s a 68000 (8, 10, or 12 MHz) motherboard which replaces the PC motherboard in a real PC case. It was designed with 1980s 68K-ish support chips, 1MB RAM, ancient floppy disk and hard disk controllers, and can use IBM video cards. So it’s very much of a 68K hardware equivalent to the PC. The OS is another story. It came with a system monitor named MONK and an OS named REX, similar to SK-DOS. Both were pretty atrocious in my opinion. I’ve always intended to write a better OS for the thing, but it’s never going to happen.
Very cool. I have never heard of the PT68K-2 but I agree that it looks as close to a 68000 based IBM XT as you could hope for.
It seems it also ran OS-9. I found one link that said Minix had been ported to it as well. I do not see why a Linux port would not be possible; m68k Linux exists. It would all be drivers, but maybe some of these XT cards are already supported in Linux.
There is also a PT68K-4, it seems, that comes with 4 MB of RAM.
IBM also had the System 9000, which is probably the closest thing to a 68000-based PC we got. The problem was that it was nothing out of the ordinary for the price, thanks to it being crippled by design just to comply with the multiple IBM standards and regulations they had at that time. The same thing happened with the IBM 5150 when it was designed.
If I remember correctly, the entire reason IBM chose the Intel 8088 over the 68000 microprocessor was the cost associated with the 16-bit-wide DRAM required for the 68000 vs. the 8-bit-wide DRAM for the 8088. Back in the day, RAM costs were much, much higher than nowadays. I remember back in ’83 that a 512K RAM board for my TRS-80 Color Computer 3 cost almost twice as much as the MC6809E processor used in that system.

While emulating a 68000 processor is kind of neat, as described in this article, it doesn’t actually tell us much of anything: the 68000 was cycle-for-cycle faster across the board relative to the 8088. If 16-bit-wide memory had been used and all the address lines wired up, you would have had at least twice the bandwidth available compared to the 8088.

And the software tested in this article is also rather useless. The only OS the article mentions that was remotely serious was OS-9, originally written for the MC6809 and later ported to Motorola’s 68k series. It was a true multi-user, multi-tasking OS with real-time applications, and was used extensively by NASA on multiple Space Shuttles. And although the original OS used by Apple on the Lisa was, like MS-DOS, neither multi-user nor multi-tasking, comparing the Apple Lisa to the original IBM PC would reveal far more about performance than what the author of the article did.
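A quick back-of-the-envelope calculation backs up the bandwidth point. Assuming the PC’s 4.77 MHz clock and the 8088’s four-clock minimum bus cycle (the 16-bit figure is the hypothetical wide-bus machine, ignoring wait states and DMA refresh):

#include <stdio.h>

/* Peak bus bandwidth under the assumptions stated above: one transfer
   per four-clock bus cycle, one or two bytes per transfer. */
int main(void) {
    const double clock_hz = 4772727.0; /* 4.77 MHz */
    const double clocks_per_bus_cycle = 4.0;
    const double xfers_per_sec = clock_hz / clocks_per_bus_cycle;
    printf("8-bit bus:  %.2f MB/s peak\n", xfers_per_sec * 1.0 / 1e6);
    printf("16-bit bus: %.2f MB/s peak\n", xfers_per_sec * 2.0 / 1e6);
    return 0;
}

That works out to roughly 1.19 MB/s versus 2.39 MB/s peak, before wait states.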
I always wonder what the computer landscape would look like if Atari had not had such a wild culture, as IBM visited them first to talk about building/designing the IBM PC.
leech,
Me too. It’s easy to see how alternate realities might have happened, like if someone else had made a better impression on IBM and MS + x86 hadn’t won, but the big question is how would today be different? The further out we project, the fuzzier things get. I think market consolidation would have happened anyway because that’s just the influence that markets have. Different leaders and technologies might have drastically changed the path of evolution, but it’s difficult to make definitive conclusions. Statistically, it is likely our events are somewhere in the middle of the bell curve; some alternate outcomes would be better and others would be worse. It does make me wonder how much further humanity could be in the case of nearly perfect evolutionary choices. I imagine we’d be working with higher-level abstractions, and I’m certain we wouldn’t be programming in C, haha.