Linked by Thom Holwerda on Tue 15th Nov 2011 22:32 UTC
You may not realise it, but today one of the most important pieces of technology celebrates its 40th birthday. On November 15, 1971, a company called Intel released its Intel 4004 processor - the first single-chip microprocessor, and one of the most important milestones in computer history.
Thread beginning with comment 497340
Comment by MOS6510
by MOS6510 on Wed 16th Nov 2011 09:26 UTC

I read that it would take 360,000 4004 CPUs to match the performance of a regular desktop we use today.

We've come a long way.

Reply Score: 2

RE: Comment by MOS6510
by gpsnoopy on Wed 16th Nov 2011 17:48 in reply to "Comment by MOS6510"

Actually, it's probably several orders of magnitude higher than that. I'd roughly estimate 10^9 to 10^12 4004 units to reach the performance of a current CPU.

The thing is that current CPUs do in hardware things that have to be emulated (i.e. slowly) on the 4004:

- The 4004 works on 4 bits at a time, while x86-64 can work directly on 64-bit registers. Hence operations on normal 32- or 64-bit integers take several instructions on the 4004 but only one on x86-64.

- The number of cycles needed to execute complex integer operations such as division and multiplication has been dramatically reduced on modern CPUs.

- The 4004 has no hardware support for floating point math. Compare that with a Core i7 Sandy Bridge, which can operate on 8 floats in a single instruction via AVX.
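To illustrate the first point, here is a rough Python sketch (my own illustration, not actual 4004 assembly) of what a 32-bit add looks like when your ALU is only 4 bits wide - eight nibble adds chained together by a carry:

```python
def add32_in_nibbles(a, b):
    """Emulate a 32-bit add on a 4-bit ALU: eight 4-bit adds with carry."""
    result = 0
    carry = 0
    for i in range(8):               # 32 bits / 4 bits per nibble
        na = (a >> (4 * i)) & 0xF    # extract nibble i of each operand
        nb = (b >> (4 * i)) & 0xF
        s = na + nb + carry
        carry = s >> 4               # carry ripples into the next nibble
        result |= (s & 0xF) << (4 * i)
    return result & 0xFFFFFFFF

print(hex(add32_in_nibbles(0x89ABCDEF, 0x12345678)))
```

One x86-64 `add` instruction does all of that in a single step, which is a big part of why the raw instruction-count comparison undersells the gap.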

If you throw things like GPUs (CUDA/OpenCL) into the mix, it just gets ridiculous.

Yet it would take only a few years for the 4004 to do more computation than civilization had ever done before the advent of electronic computers in the 1940s.

Reply Parent Score: 1

RE[2]: Comment by MOS6510
by transputer_guy on Wed 16th Nov 2011 22:09 in reply to "RE: Comment by MOS6510"

That is certainly one way of looking at it if you want to maximize the difference into the gazillions.

I prefer to compare the productivity of PCs built with useful processors, even if one has orders of magnitude more devices and clock speed than the other.

An early 68000 had around 46,000 devices running near 8 MHz, while the 4004 had 2,300 devices at 0.8 MHz. That's an overall difference of about 200 times the power for the first nice CPU you could actually use to work on graphical documents with windows and a mouse. The 486 would be similar.

A modern x86 may have 10,000 times as many devices and 400 times the clock speed, which means a PC today has maybe 4 million times the theoretical performance of the 68K Mac.

Somehow the quad core I am using today does not feel like it has 4M times more performance or responsiveness than my first Mac. Maybe it feels 100 times faster, but it still has too many odd latencies. It does have 20 times the screen real estate in full color, so it needs at least 1,000 times the power for software-based graphics. But then again the graphics is done by nVidia/ATI, so what the heck is the CPU doing? It's mostly idle, as was the 68K.

For me, the usefulness of a processor seems to follow more the log of devices × clock speed.

So 4M × 200 is close enough to the 1 billion suggested. Of course these arguments don't really make much sense - like comparing an amoeba to higher life forms.
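Plugging the rough figures quoted above into a quick back-of-the-envelope script (these are the commenter's approximate device counts and clocks, not precise specs):

```python
from math import log10

# Rough figures from the comment: (transistor count, clock in MHz)
chips = {
    "4004": (2_300, 0.8),
    "68000": (46_000, 8.0),
    "modern x86": (46_000 * 10_000, 8.0 * 400),  # 10,000x devices, 400x clock
}

raw_4004 = chips["4004"][0] * chips["4004"][1]
for name, (devices, mhz) in chips.items():
    ratio = (devices * mhz) / raw_4004   # raw devices-times-clock ratio vs 4004
    print(f"{name:>11}: {ratio:>14,.0f}x raw  (log10 = {log10(ratio):4.1f})"
          if ratio > 1 else f"{name:>11}: baseline")
```

The 68000 comes out at 200x and the modern x86 at 8 × 10^8 - right around the "close enough to 1 billion" figure, while the log10 column (0, 2.3, 8.9) shows how much flatter the *perceived* usefulness curve is.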

Reply Parent Score: 2