The microprocessor changed the world: how did we get from the first 4-bit models in the 1970s to today’s 64-bit multicore monsters? This article covers the history of the micro from the vacuum tube to today’s dual-core multithreaded madness.
Great Moments in Microprocessor History
2004-12-27 Hardware 28 Comments
Some of that info I didn’t know until now. IMO a nice bit of info, and a good read too.
Interesting to see how many good chip designs died for reasons other than lack of merit. Marketing, cost, compatibility, and luck play such a big role. It will be interesting to see where things head from here. There doesn’t seem to be as clear a path as there was for the last 10 years.
In 1974, I was a student who had the great fortune to get my hands on one of the first (if not the first) 16-bit microprocessor systems. Yes, 16 bits, when Intel was still mucking about with the 4004.
OK, it was not a single-chip 16-bit system (it used four 4-bit bit-slice processors), but it did 16-bit arithmetic and had a 16-bit address space. This was the IMP-16, made by National Semiconductor.
It had a cross-assembler written in Fortran, which I got running on a PDP-11/40 (serial 00240) under the DOS V6 OS.
The object output from the cross-assembler was sent directly to the paper tape punch for onward loading into the IMP-16.
Arrgghhh, the memories of the error:
F342 – Odd Address or Other Trap 4
This was equivalent to the BSOD in Windows…
I wish histories like these were more accurate. In 1976 I saw a single-chip 16-bit CPU. Where was Intel at this time? This was in the labs of Ferranti in the UK.
That was a valuable piece of information, thanks a lot. ASHLB, thank you too for the update.
“Interesting to see how many good chip designs died for reasons other than lack of merit. Marketing, cost, compatibility, and luck play such a big role. ”
Well, that’s how survival of the fittest works. In nature it’s not survival of the better-designed creature; it’s usually the one that mates more, and so forth. Rarely, in fact, is it truly the most fit in the sense of performance that wins, since that creature has usually made the most tradeoffs in its design, which dooms it.
As far as chips go, I don’t care if you make some super chip: if it’s too expensive, or you can’t deliver enough of them (hello IBM), or it’s hard to work with, you should die, since you have missed reality.
Agreed. Survival of the fittest includes many attributes, and in nature “jack of all trades, master of none” is often good enough.
But this isn’t nature. Some of these architectures were impractical at the time because of resource restrictions or cost, or were just unpopular (under-marketed). Here’s the difference: extinction in the technology world doesn’t have to be permanent. Some of these ideas ought to be resurrected. Instead, it seems that many improvements come from squeezing the last drops of performance from already archaic architectures.
Not to downplay the importance of all that, but maybe software is the real problem. All the bloat from backwards compatibility and eye candy seems to be the only thing driving the need for faster equipment; there certainly don’t seem to be a lot of desktop apps that really require much computing power.
They seem to only think of stuff that was commercially or militarily used.
They leave out some far more important stuff, like the Setun (http://en.wikipedia.org/wiki/Setun). Also, I find it weird that IBM doesn’t even mention its own ZISC processor.
As far as I remember, the C64 always used a 6510 CPU, not a 6502. I always thought the 6502 was in the PET and VIC-20 models, but I could be mistaken of course. I don’t have time to take the casing off my aging C64s here to have a look, so if anyone here could enlighten me…
The article states that AMD’s early x86 clones such as the 386dx outperformed Intel because they translated instructions to RISC instructions internally. This is not correct. AMD’s 386 and 486 chips were direct clones of Intel, as far as I know. The implementations diverged starting with the Pentium/586 class of CPUs.
Was it a microprocessor though?
It’s amazing how far CPUs have come. My first one was a 6502 (Apple IIe, II+), then a 6809 (CoCo, CoCo 2), then finally the x86 series (8086, 286, 386, 486, Pentium, Pentium III, K6-300, Athlon K7 2200XP, and next month a Pentium 4 3.5 GHz).
I worked on all of these platforms. It’s really gone beyond what I would have imagined. I’d like to see more, though.
Yes, you’re right, the C64 was a 6510. The difference between that and a standard 6502:
The MOS 6510 is an extended version of the 6502. It has a built-in 6-bit I/O port, but apart from that it is identical and fully compatible with the 6502. It was used exclusively in the Commodore C64. In later models, “8500” is printed on it, but the chip is identical.
Ah, sticking your assembly code in the memory above the cassette buffer, those were the days …
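Since the comments above mention the 6510’s on-chip I/O port, here is a rough Python sketch of what the C64 actually used it for: the port bits at address $0001 (commonly called LORAM, HIRAM, and CHAREN) select whether the CPU sees ROM, RAM, or I/O in certain address ranges. This is an illustration, not an emulator; the rules are simplified (cartridge GAME/EXROM lines are ignored) and the function name is made up.

```python
# Simplified model of C64 memory banking via the 6510's I/O port at $0001.
# Bit 0 (LORAM) and bit 1 (HIRAM) together control BASIC ROM at $A000;
# HIRAM alone controls KERNAL ROM at $E000; CHAREN picks I/O vs. character
# ROM at $D000. Cartridge lines are ignored in this sketch.

LORAM, HIRAM, CHAREN = 0x01, 0x02, 0x04  # bits of the port at $0001

def visible_at(addr, port):
    """Return which chip answers a CPU read at addr for a given port value."""
    if 0xA000 <= addr < 0xC000:
        return "BASIC ROM" if (port & LORAM) and (port & HIRAM) else "RAM"
    if 0xD000 <= addr < 0xE000:
        if not (port & (LORAM | HIRAM)):
            return "RAM"
        return "I/O" if port & CHAREN else "Character ROM"
    if addr >= 0xE000:
        return "KERNAL ROM" if port & HIRAM else "RAM"
    return "RAM"

print(visible_at(0xA000, 0x37))  # $37 is the usual power-on value: BASIC ROM
print(visible_at(0xA000, 0x36))  # clear LORAM and BASIC banks out: RAM
```

Clearing those bits is exactly how machine-code programs reclaimed the RAM hiding under BASIC and the KERNAL.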
In high-end UNIX, DEC has phased out Alpha, SGI uses Intel, and Sun is planning to outsource production of SPARC to Fujitsu (IBM continues to make its own chips).
When did this happen? Sun outsourced SPARC to Fujitsu? IBM needs better research. Sun is still developing SPARC; Niagara and Rock are SPARC CPUs wholly developed by Sun.
Only one project is being co-developed with Fujitsu AFAIK.
You’re right. AMD’s 386s were exact copies of Intel’s 386s, only clocked higher (AMD’s 40 MHz 386DX vs. Intel’s 33 MHz, and AMD’s 33 MHz 386SX vs. Intel’s 25 MHz). It was only when the Pentium came out that AMD couldn’t use the Intel core anymore, so they came up with the 5x86 processor (which used an enhanced 486-class core). It was only with their purchase of NexGen that the K5 (a brainiac CPU on integers only; FP performance was poor) and eventually the K6 and K6-2 were brought to life.
I wonder why IBM posts this information. Hmm oh yeah, it seems that IBM wins overall.
They only mention SPARC two times – the most popular 64-bit processor in the PC market today.
Why does IBM post it? Because it is in an online journal and they want to attract readership, and because IBM has a tendency to record bits and pieces of the industry regardless (usually as part of documenting its corporate history). Unlike a lot of companies, who stuff history in their endnotes, IBM seems to stick interesting tidbits in the margins. And as biased as it may be, some of us do appreciate it.
So what is the best architecture for general PC performance, and should Apple migrate to it?
I would think a compact 16-bit RISC processor would be best.
They’ve left out the Inmos transputer series parallel computing architecture (http://en.wikipedia.org/wiki/Transputer).
Shame, as the current vogue for SMP and multicore architectures for running multithreaded apps was exactly what the T-series chips were targeted at, way back in the late ’80s.
Mind, programming in Occam was a pain in the arse, even with its explicit threading built into the language…
Anyone remember the Atari ATW workstations running Helios? That used the Inmos chips.
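For those who never touched Occam: its model was explicit processes communicating over synchronous channels, which mapped directly onto the Transputer’s hardware links. Here is a loose Python sketch of that style, with threads and a bounded queue standing in for Occam’s PAR construct and its channel ! (send) and ? (receive) operators; all names here are made up for illustration.

```python
# CSP-style sketch: two concurrent processes joined by a channel,
# roughly what Occam expressed with PAR and "chan ! x" / "chan ? x".
import threading
import queue

def producer(chan):
    for i in range(3):
        chan.put(i * i)      # Occam: chan ! i*i
    chan.put(None)           # end-of-stream marker

def consumer(chan, out):
    while (v := chan.get()) is not None:   # Occam: chan ? v
        out.append(v)

chan = queue.Queue(maxsize=1)  # bounded, so sends block: nearly synchronous
results = []
t1 = threading.Thread(target=producer, args=(chan,))
t2 = threading.Thread(target=consumer, args=(chan, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4]
```

The point the comments above make still holds: the concurrency is explicit in the program’s structure, rather than bolted on with locks after the fact.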
“They only mention SPARC two times – the most popular 64-bit processor in the PC market today.”
SPARC is a workstation/server CPU, not part of the PC market..
Besides, the Athlon 64 probably already has the biggest installed base (SPARC is a low-volume CPU). Granted, while these CPUs are 64-bit capable, they currently run in 32-bit mode mostly, but this may change soon…
(A non-technical and rather potted history, just for fun)
I often wonder how it happened, just why am I sitting here using a clunky old architecture, when the one I used to use was sooooo much better.
I started out programming in Z80 assembly, a right royal pain in the arse of an instruction set, very limited in scope. No matter, I was learning, and promptly learnt to hate it 😉
I upgraded from my old ram-pack-wobbly Z80 to a shiny new ZX Spectrum when they first came out, I remember getting it for Christmas and opening the box…ahhhh the nice manuals, the coolness of the black box and how SMALL it was, and the rubber keys!
I regressed to writing in BASIC, it seemed a hell of a lot easier and more logical than all that pushing and prodding in assembly language, and it helped me to write extremely messy and buggy code, so I was happy with it 😉
More power Igor!
I switched platforms to a C128. Well, the games were better, and I’d heard good things about the CPU, which you could run at double speed (compared to a C64), and, well, the games were better, hehe. No, seriously, I was interested in computer-based music, so the SID chip was a very interesting prospect for me.
After writing my own controller for the SID, messing around with its wonderful filtering system, and getting to grips with an entirely different assembly language, in instructions and concepts, I got bored…
Guess what? I needed more power!
Then came the Amiga!…with better games! My best friend bought one of those hideous Atari ST things, and we used to have coding-fights where I would show him that my slightly slower Amiga with added custom blitter assistance could whoop the ass of his ST, anytime. (let the flamewar begin, lol)
We were happy, we were young, we were coding in (the sun?) 68000 and enjoying life……
Then came the Archimedes!… Eureka! With no games! We’d read about this “thing” called an Archimedes in the press here in the UK, with great excitement. It pretty much kicked the ass of anything and everything: a supercomputer in a (quite hideous looking) cream box. It had great graphics, it had great sound capability, it had an ARM RISC CPU. We needed it!
After many months of saving, and the announcement that Acorn were releasing an Amiga-style version called the A3000, we took the plunge and got one each.
Wow! From that day on RISC ruled, I’d never experienced such power! I could write assembly language so quickly and it executed soooooo quickly, neat!
I became obsessed. ARM assembly is an interesting language, there are often many ways of doing the same thing, but only one is the quickest! …more competitions! My friend and I coded many blazingly fast exercises; line-drawing, masked plotting, polygons, blah…. all much faster than the ones the OS provided, sometimes by as much as 500x!
When I left college, I started writing code for a living, I became “realistic” and started to understand why Acorn’s unacceptably slow OS routines were actually “ok”.
Call it market pressure, call it “a job” but something changed which made me view things differently.
I witnessed the rise of Microsoft. At the time, whilst I was working on stuff for the Acorn machines, the developers looked upon Microsoft’s offerings as something of a joke. Ironic, eh? Windows 3.1 was a thrown-together mess of an OS which didn’t even do multitasking of any kind. My new RiscPC kicked the ass of desktop 386/486 PCs in performance, style and IDE.
The BASIC interpreter on the RiscPC was the fastest I’ve experienced before or since, with wonderfully useful enhancements to move memory, to write inline assembly, to act as a multitasking application.
In addition to the excellent BASIC, the C compiler written by Acorn worked pretty well, and if you needed assembly-language speed you could write inline code or create “Modules” which resided in module space, pretty much akin to threads on XP and Linux, but with the concept of SHARING resources. So I could write a new module which enhanced the OS and which other apps could access. Great.
Now this is the part where everything goes foggy:
I remember seeing all the ads for OS/2 Warp on TV and wondering what all the fuss was about; it looked like DOS and behaved like DOS, so what was the point? Microsoft knew it. They upped the pace during the ’90s, and Windows 95’s release brought them back onto the same playing field as RISC OS and MacOS; rumour control at Acorn suggested that many features had been lifted from RISC OS and MacOS.
A zillion copies of 95 were sold, given away to businesses, or thrust upon unsuspecting passing children (but only the kids with satchels big enough to carry all the floppies!). So, due to demand, the cost of PCs slid dramatically, so much so that Acorn and Apple could no longer compete on price. Intel and AMD had finally produced a CPU in the Pentium (and clones) which could at least get somewhere near the ARM CPU’s performance, and 95 wasn’t so bad, really. Or was it? (No drag & drop, no smooth fonts, no window theming, no OS-integrated development tools, sloooooow redraw.)
So PCs were cheap, the 386 architecture finally had something of a reasonable OS to run on it, and suddenly non-geeks wanted a computer. Uh oh, consumerism, the death knell of innovative computer advancements! Yikes!
Acorn gave up, IBM too (de-warped?), Be tried but were squished by the might of Bill’s left Reebok, and now Apple switch to selling lossy music just to make ends meet (and annoy the Beatles).
But there’s always Open Source and Linux, the cause of many a late night and much infuriation….an OS in text files, built for the internet.
Sigh… Meantime, our PCs get faster and we buy more memory because it’s so cheap, but alas Cagey’s law kicks in, and the apps get fatter and the OS slower, almost exactly compensating. What to do, when there are so many contradictions? Oh well, when the Pentium 5 10 GHz is released it’ll all be OK… at least when it’s running Haiku v2 😀
Actually, the descriptions of the contributions of Digital’s series of microprocessors, from the original LSI-11 (four chips, if I recall correctly without looking it up) to the at least three later single-chip implementations through VAX, contained serious omissions.
Alpha has not been abandoned; what was dropped by Compaq was the development of the next generation of the Alpha chip, referred to as EV8. EV7-series Alpha processors continue to be sold and will be sold through 2006, if I recall correctly.
The comments about Itanium, as I pointed out in a posting the other week, are factually incorrect. Intel and HP shuffled roles; HP did not “abandon” Itanium. My earlier article can be found at http://www.osnews.com/story.php?news_id=9191.
– Bob Gezelter, http://www.rlgsc.com
…the newest Pentium 4s? I mean… really… who wants a proc that is THAT hot? But all I have read from your comments is “Intel this and Intel that”. Well, I think that is how the problem known as x86 came about: that sort of mentality.
And I suppose the Xeon is the highest-performing microprocessor for lower-end servers ever…
for those who want more details…
I agree this article is heavily slanted towards IBM. The title is just horribly misleading.
SPARC is a workstation/server CPU, not part of the PC market..
Huh, so the Zilog Z80 and the Alpha were PC chips then, right??? The article talks about those.
The article is heavily biased and factually incorrect as many have already pointed out. Sorry, Just because IBM published it doesn’t make it true.
Yes, the Transputer. The T800 was many, many years ahead of all competition when released (design-wise). OK, it was never a big player, but the unique design and abilities ought to earn it a mention. The Transputer-to-Transputer interconnection system (I don’t remember the name right now, sorry) was unheard of, and very useful, but tricky to handle. A shame it did not go further, really. And yes, Occam was a real pain in the b*tt, especially for interconnect. Too bad. Went to Hitachi and was killed, right? Sigh!
And no, I’m not from the UK… No hidden agendas.
>They’ve left out the Inmos transputer series parallel computing architecture (http://en.wikipedia.org/wiki/Transputer).
>Shame, as the current vogue for SMP and multicore architectures for running multithreaded apps was what the T-series chips were targeted at way back in the late ’80s.
>Mind, programming in Occam was a pain in the arse, even with its explicit threading built into the language…
>Anyone remember the Atari ATW workstations running Helios? That used the Inmos chips.
Well, Transputers will likely come back as half-baked multi-core CPUs without any proper scheme for building parallel systems. Right now some Transputer folks use StrongARMs plus an FPGA for the link HW, somewhat of a drag just for one node.
Anyway, I’ve been building a new RISC Transputer in an FPGA, composed of multiple PEs sharing an MMU. The PEs do the basic processing; the MMU is where the Transputer architecture fills in the good stuff (process swapping, links, communications, caching). It’s heading towards about 100 MIPS per PE (each 4-way threaded) for about $1 of FPGA per PE, which means you can stuff quite a few copies into a bigger FPGA. Each nPE+MMU group can be replicated into any Transputer network you’d like. The split into PEs and MMU allows each half to be designed, debugged, and rearchitected independently. So far the PE is nearing HW debug; the MMU is still in planning.
The likely language to program this is a mix of C with Occam extensions, some asm, and possibly Verilog for the heavy code, which can later be separately spun back into HW. The compiler is on hold till the PE HW is running.
BTW I was at Inmos so I feel there’s unfinished business to bring a better Transputer back. It will probably end up in embedded parallel computing, can’t wait to use it myself for several projects:-)
Even if I fail, I am sure others must be following the same idea; FPGAs are just getting to be very good ways to build lower-perf RISCs that can later be ASICed at much higher perf. Right now most FPGA CPU architectures are very simple and not very interesting (throwbacks).
As for the article, pretty fluffy reading; the wiki and other “great µPs” links are more informative!
Almost every CPU mentioned, especially those still in use, can be or has been FPGAed too, i.e. the 68k, Z8000, 6502, etc.
BTW, IBM did some work with Transputers as well, for some research super-micros.
I also remember the ATW; perhaps with a new Transputer we’ll see that sort of thing again, although these won’t be big on FP math.
Well, the article still mentions SPARC, so it is a workstation chip, just like POWER and Alpha, etc.
They just don’t mention it much because it has been much more successful than the POWER stuff (heavily biased). But every company releases stuff like this that is biased, and IBM appears to be doing it the smart way.
If NASA chooses POWER over SPARC, I would assume that POWER would be better for mission-critical situations.