You’re probably familiar with modern processors made by Advanced Micro Devices. But AMD’s processors go back to 1975, when AMD introduced the Am2901. This chip was a type of processor called a bit-slice processor: each chip processed just 4 bits, but multiple chips were combined to produce a larger word size. This approach was used in the 1970s and 1980s to create a 16-bit, 36-bit, or 64-bit processor (for example), when the whole processor couldn’t fit on a single fast chip.
The Am2901 chip became very popular, used in diverse systems ranging from the Battlezone video game to the VAX-11/730 minicomputer, from the Xerox Star workstation to the F-16 fighter’s Magic 372 computer. The fastest version of this processor, the Am2901C, used a logic family called emitter-coupled logic (ECL) for high performance. In this blog post, I open up an Am2901C chip, examine its die under a microscope, and explain the ECL circuits that made its arithmetic-logic unit work.
A very detailed, technical look at this processor.
36-bit? Simple mistake or an interesting story all by itself?
I can’t comment on whether this particular chip was used in 36-bit configurations, but some early systems before the standardization of things like ASCII and EBCDIC did use a 36-bit word size. The idea was to match the precision of 10-digit BCD machines (such as the IBM 7070), which in turn were 10-digit designs because they tried to match the precision of older mechanical calculators.
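A quick sanity check on that, as a few lines of Python (my own illustration, assuming the 36-bit word is split as 1 sign bit plus 35 magnitude bits):

    # Does a signed 36-bit word cover the +/- 10-decimal-digit range
    # of a 10-digit BCD machine?
    max_magnitude = 2**35 - 1       # 34,359,738,367 (35 magnitude bits)
    ten_digits = 9_999_999_999      # largest 10-digit decimal value
    print(max_magnitude >= ten_digits)   # True: 36 bits is enough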
There have also been a number of other ‘exotic’ word sizes, including 12-bit (the PDP-8 being a famous example), 18-bit (from the same era as 36-bit, on smaller systems), 24-bit (seen in a number of Control Data Corporation machines), 26-bit (seen mostly in addressing in the IBM System/370 and early ARM chips), 31-bit (also an addressing thing), 48-bit (baseline addressing for 64-bit x86, as well as Burroughs mainframes), and 60-bit (the CDC 6000 series).
24/48/96-bit words are often used in DSPs.
Kochise,
Yeah, it’s kind of funny that those are chosen because they’re byte multiples that play nicely with computers rather than because they’re physically significant. In cases where 16 bits is marginally insufficient, they’ll bump it up to 24 even though that gives a lot more range than needed.
I tried looking up what might use 96 bits…
https://www.xilinx.com/support/documentation/data_sheets/ds889-zynq-usp-rfsoc-overview.pdf
With microcontrollers like ATmegas that are built to be cheap and practical, you often find 10-bit ADCs. 8 bits is clearly insufficient for most projects, but they didn’t bother to go all the way to 16, which would provide a lot more resolution than 10 bits.
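To put rough numbers on the resolution difference (my own sketch, assuming an ideal ADC and a 5 V reference just for illustration):

    # Step size (LSB) of an ideal ADC for a given bit width and reference voltage.
    vref = 5.0                         # assumed reference, volts
    for bits in (8, 10, 16):
        steps = 2**bits
        print(f"{bits}-bit: {steps} steps, ~{vref / steps * 1000:.3f} mV per step")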
My guess is it was a typo, but it’s interesting to dig deeper anyways.
Since you’re including address line sizes, the 20-bit / 1 MB addressing of the original x86 should be noted as well.
Obviously BCD was much easier for a calculator (or human) to decode into decimal for display on 7-segment displays or the older nixie tubes:
https://en.wikipedia.org/wiki/Nixie_tube
(how classy is that!)
But BCD is wasteful on a binary computer that can perform base conversions.
36 bits / 4 = 9 nibbles = 9 BCD digits.
(2^36) - 1 = 68,719,476,735
So using the exact same sized register, BCD wastes quite a bit of range while adding complexity to the ALU over straight binary.
Binary: 0 through 68,719,476,735
BCD: 0 through 999,999,999
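Here’s the same comparison worked out in a few lines of Python (just my own restatement of the arithmetic above):

    # A 36-bit register as straight binary vs. packed BCD (one digit per nibble).
    bits = 36
    binary_max = 2**bits - 1       # 68,719,476,735
    bcd_digits = bits // 4         # 9 nibbles -> 9 decimal digits
    bcd_max = 10**bcd_digits - 1   # 999,999,999
    print(f"binary: 0..{binary_max:,}")
    print(f"BCD:    0..{bcd_max:,}")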
26-bit (seen mostly in addressing in … early ARM chips)
This is not an ‘exotic word size’, but an address bus limit.
Similarly, the 68000 was limited to a 24-bit address range, despite having 32-bit address registers.
Error-detecting RAM usually has one extra bit per byte for parity, giving 9-bit memory. Words become 18 bits, longs become 36 bits, and so on. Some computers and consoles took advantage of this memory to squeeze a little more power from existing parts. Example: the Nintendo 64 used 18-bit ECC RDRAM for its memory. The CPU didn’t use the extra two bits (not even for parity), but the GPU could use them for extra alpha bits when rendering, for better transparency effects.
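For anyone wondering where that ninth bit comes from, here’s a rough sketch of per-byte even parity (my own illustration, not how any particular memory controller actually wires it up):

    # Even parity: the extra bit makes the total count of 1s across
    # the 9-bit group (8 data bits + parity) come out even.
    def parity_bit(byte):
        return bin(byte).count("1") & 1   # 1 if the byte has an odd number of 1s

    data = 0b10110010                         # four 1s -> parity bit is 0
    stored = (data << 1) | parity_bit(data)   # the 9 bits that go to memory
    print(f"{stored:09b}")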
JLF65,
Good observation; however, that doesn’t seem to be the case with the Am2901.
To the best of my knowledge, parity isn’t implemented at the CPU register level even in high-end servers. It gets added on by the memory controller or I/O circuitry. I’d be very interested in learning about any counterexamples, though!
I was somewhat implying that in the case of the 2901, someone might make a custom processor 36 bits wide to take advantage of the 9-bit RAM: nine slices, with four RAM chips/SIMMs.
That’s not really how 18- and 36-bit word lengths came about. In reality, it was to do with character sets.
Many early computers used 6-bit character sets, which gave you uppercase letters, numbers, and a good smattering of punctuation and control characters. These 6-bit characters fit well in a 12-bit or 18-bit word, with several characters per word.
Therefore, it was sensible to make word lengths that reflect this length of character code.
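As a concrete illustration of that packing (my own sketch; the three codes below are arbitrary placeholders, not a real 6-bit character set):

    # Pack three 6-bit character codes into one 18-bit word.
    chars = [0o27, 0o05, 0o14]            # any values 0..63 will do
    word = 0
    for c in chars:
        word = (word << 6) | (c & 0o77)   # shift in 6 bits at a time
    print(f"{word:018b}")                 # one 18-bit word, three characters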
Of course, when ASCII was standardised, IBM standardised on an 8-bit byte for all their machines, as it supported the 7 bits needed for ASCII quite well. Other companies wanted in on the ASCII compatibility, and that meant it was easier to just adopt the 8-bit byte as well. DEC did this when it released the 16-bit PDP-11, and many other manufacturers, like CDC and Data General, also went down the ASCII path and adopted the 8-bit byte.
Eventually, 8-bit bytes were adopted by everyone, giving us the familiar 8, 16, 32, and 64 bit lengths we know and love today.
However, it’s worth noting that there is no inherent reason 8-bit bytes should be used, other than ASCII support. Computers can be implemented with practically any bit length, though things get pretty impractical under 4 bits.
I never said anything about how 8-bit bytes came about. I said that AFTER 8-bit bytes came about (however that was), 9-bit memory came about to provide error checking for systems that needed it. 9-bit memory became the standard for that purpose, which led to 18-, 36-, and eventually 72-bit memory widths.
The123king,
As you probably know already, IBM needed to support 8-bit EBCDIC as well…
https://en.wikipedia.org/wiki/EBCDIC
Side note: before reading this Wikipedia page, I had never studied the character values in EBCDIC. Go take a look and notice that the character ordinals are not sequential. Also notice how ‘~’ comes between ‘r’ and ‘s’. Wow, that looks so stupid. I always knew EBCDIC was a different character set, but I never realized quite how bad it was.
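You can see it for yourself with Python’s cp037 codec (one common EBCDIC code page; I’m assuming it matches the table on that Wikipedia page closely enough for this point):

    # Byte values around the oddity: '~' really does land between 'r' and 's'.
    for ch in "qr~st":
        print(ch, hex(ch.encode("cp037")[0]))   # q 0x98, r 0x99, ~ 0xa1, s 0xa2, t 0xa3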
The Am2901 was used in a lot of machines, including late-model mainframes and minicomputers of the 1970s. Many of these machines used strange (to us modern users) bit lengths.
It was used to implement the 12-bit PDP-8, as well as the 36-bit PDP-10. Various 16-bit PDP-11 models also used them.
I’m sure they were used for 24-bit machines too, but I can’t find any data on that.
There’s a list of machines using it on the wiki page: https://en.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips