The IRS has a lot of mainframes. And as millions of Americans recently found out, many of them are quite old. So as I wandered about, meeting different types of engineers and chatting about their day-to-day blockers, I started to learn much more about how these machines worked. It was a fascinating rabbit hole that exposed me to things like “decimal machines” and “2 out of 5 code”. It revealed something I had never considered:
Computers did not always use binary code.
Computers did not always use base 2. Computers did not always operate on just an on/off value. There were other things, different things that were tried and eventually abandoned. Some of them predating the advent of electronics itself.
I’ve often wondered why computers are binary to begin with, but it was one of those stray questions that sometimes pops up in your head while you’re waiting for your coffee or while driving, only to then rapidly disappear.
I have an answer now, but I don’t really understand any of this.
I remember learning about (but never using) non-binary computers in college. The basic idea seemed to be that you could store multiple voltage levels and use those levels to represent base 3, base 8, base 10, or whatever you wanted. The problem tended to be that multi-level voltage systems were not always reliable (they tended to burn out) and were error prone (you might store a “3” and get back a “4”). The binary on/off approach was, as I understand it, more reliable and won out over the other systems.
jessesmith,
The on/off bits are the easiest form of discrete logic to implement because there’s only one critical threshold. At the other extreme are analog computers. Obviously many algorithms need exact discrete values, for which analog is unsuitable, but for those where an estimate will do, analog adding circuits can be made significantly faster & denser than binary ones.
Many operations can be done directly in analog form. Instead of large bit buses and full adder circuits, which add values bit by bit while cascading carry bits, we might just use a single wire carrying an analog voltage, adding and subtracting signals with opamps.
http://www.circuitstoday.com/half-adder-and-full-adder
https://www.electronics-tutorials.ws/opamp/opamp_1.html
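To make the “cascading carry bits” part concrete, here’s a minimal Python sketch (my own illustration, not taken from those links) of a ripple-carry adder, where each stage’s carry output feeds the next stage; an analog adder instead produces the whole sum at once as a single voltage.

# One full-adder stage: two input bits plus a carry in, yielding a sum bit and a carry out.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

# Add two integers by cascading full adders, least significant bit first.
def ripple_carry_add(x, y, width=8):
    carry = 0
    result = 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert ripple_carry_add(23, 42) == 65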
Division, which is a relatively time consuming operation in binary, can be achieved in analog using voltage divider networks.
https://en.wikipedia.org/wiki/Voltage_divider
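As a back-of-the-envelope illustration (mine, not from the link), the ideal divider relation is Vout = Vin * R2 / (R1 + R2), so picking a resistor ratio effectively divides a voltage by a constant with no iterative digital division step at all.

# Ideal (unloaded) voltage divider output for resistors r1 (top) and r2 (bottom).
def voltage_divider_out(v_in, r1, r2):
    return v_in * r2 / (r1 + r2)

# Dividing a 10 V signal by 5 means keeping 1/5 of it: choose r2/(r1+r2) = 1/5.
print(voltage_divider_out(10.0, r1=4000.0, r2=1000.0))  # -> 2.0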
Obviously analog was used for high-speed circuits long before digital circuits became fast enough to handle them with discrete logic. Now virtually all analog computers have been replaced:
https://en.wikipedia.org/wiki/Analog_computer
We’ve focused heavily on digital algorithms and problem solving. Yet I wonder if there might be some useful applications in the future for analog computation. Theoretically an analog graphics card could be faster and use less power than a digital one. Also, our brain’s neurons are not digital but analog. The neural nets that we use in AI applications use floating point, but conceivably could run faster on an analog computer.
Apparently some people in the research community are trying it:
https://insidehpc.com/2018/02/mit-helps-move-neural-nets-back-analog…
Nicely explained.
Here’s Scanimate. An analogue video processor. Awesome!
https://www.youtube.com/watch?v=ispW6-7b2sA
Any synth from the ’70s stuffed full of CEM/SSM chips is pure analogue “computation” heaven, especially if controlled by something like the Korg SQ-10 (analogue step sequencer). Even the SID chip is part analogue with digital control.
Apparently there have been a few attempts at balanced ternary computer systems. Never, ever seen one.
kuiash,
That’s really fascinating. Here are a couple more videos with the engineers explaining Scanimate.
https://www.youtube.com/watch?v=i1aT_CqhyQs
https://www.youtube.com/watch?v=UHjkMThH0aE
Scanimate reminded me of Atari Video Music, which apparently was also analog:
https://www.youtube.com/watch?v=-NWwtZCpC2M
https://en.wikipedia.org/wiki/Atari_Video_Music
Best part of that Wiki article:
Check out the Wiki links I gave in a reply to Alfman just below.
Balanced ternary, as in Setun ( https://en.wikipedia.org/wiki/Setun ; and from https://en.wikipedia.org/wiki/Ternary_computer : “it had notable advantages over the binary computers which eventually replaced it, such as lower electricity consumption and lower production cost.[3]” …so it might have merit), or negabinary (base -2, as in the Polish Elwro computers: https://en.wikipedia.org/wiki/UMC_(computer) ), seem easy enough that they could have been implemented quite early in the computer revolution, when every electronic component was at a premium; and -1,0,1 seems about as reliable as binary (the reliability angle jessesmith was focusing on in his comment).
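For anyone who hasn’t met these representations, here’s a small Python sketch (my own illustration of the number systems zima mentions, not anything from Setun or Elwro) converting ordinary integers into balanced ternary (digits -1, 0, +1) and negabinary (base -2, digits 0, 1).

# Return the balanced-ternary digits of n, most significant first (-1, 0, or 1).
def to_balanced_ternary(n):
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # digit 2 becomes -1 with a carry into the next place
            r = -1
            n += 3
        digits.append(r)
        n //= 3
    return digits[::-1]

# Return the base -2 digits of n, most significant first (0 or 1).
def to_negabinary(n):
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:           # keep the remainder non-negative
            r += 2
            n += 1
        digits.append(r)
    return digits[::-1]

print(to_balanced_ternary(8))   # [1, 0, -1]   i.e. 9 - 1
print(to_negabinary(-3))        # [1, 1, 0, 1] i.e. -8 + 4 + 0 + 1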
As for analog computers: aren’t they harder to “reprogram”? Reprogramming requires rewiring them; there’s nothing like an Arduino for them, I think…
(BTW, Setun aside, the Soviets were also very much into analog computers, IIRC)
zima,
If you’ve got an article, submit it to osnews!
This ~electronic stuff seems way over my head… Maybe one day.
It was just a tidbit I read somewhere once…
I do know that Soviet/Russian rockets were analog for the longest time, even with some curious “limitations” of sorts stemming from that simplicity. For example, the R-7 family of rockets (the most successful rocket family ever, with close to 2000 launches, starting with the R-7 itself, the first ICBM, through the Sputnik launcher, the Vostok and Voskhod rockets, to the current Soyuz launchers: https://en.wikipedia.org/wiki/File:Roket_Launcher_R-7.svg …yes, first launch over 60 years ago; considering a new launch complex was built just a few years ago in French Guiana, a century of service seems well within the R-7’s grasp) originally couldn’t change inclination in flight, couldn’t “turn”/roll …what turned to set up the desired orbit was the launch platform itself (though the latest-gen digital Soyuz can turn; the Soyuz launch complex in Kourou in French Guiana has a fixed launcher).
But the rockets and their control systems are very robust, able to launch in almost hurricane-level gusts of wind (common on the steppes of central Asia) and in all extremes of temperature and humidity (summer and winter in Kazakhstan, the far north of the Plesetsk cosmodrome, Kourou near the equator).
And the R-7 control system allows for a marvellously simple way of jettisoning the booster rockets: they aren’t attached by any hardpoints, the rocket simply “hangs” on them as they push it forward, and at the end the control system cuts the flow of propellant to the booster engines at a precisely synchronised moment, so they all fall off at exactly the same time, without a glitch (forming the Korolev Cross: https://en.wikipedia.org/wiki/R-7_(rocket_family)#Korolev_Cross ).
Though all that is not really OSNews material…
One of the more interesting bugs I remember encountering was as a junior developer sitting with an experienced guy… the database had added a bunch of numbers and, instead of summing to 3.0, had somehow produced a total of “2.10”… that is, the second ‘digit’ was more than 9.
This was my introduction to binary-coded decimal…
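Here’s a small Python sketch of the kind of thing that can produce that result (my guess at the mechanism, not the actual code involved): in packed BCD each decimal digit lives in its own 4-bit nibble, and plain binary addition of two BCD values can leave a nibble holding something greater than 9 unless a “decimal adjust” correction step runs afterwards.

# Pack a two-digit number into one byte, one decimal digit per nibble.
def bcd_encode(n):
    return ((n // 10) << 4) | (n % 10)

# Read the two nibbles back out as "decimal" digits (possibly invalid ones).
def bcd_digits(byte):
    return byte >> 4, byte & 0x0F

# Add 1.5 + 1.5 as BCD-encoded tenths (15 + 15) using plain binary addition:
raw = bcd_encode(15) + bcd_encode(15)   # 0x15 + 0x15 = 0x2A
print(bcd_digits(raw))                  # (2, 10) -- the low "digit" is 10, not 0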
I actually used an analog computer in a work setting. I performed pulmonary function tests, and our test for airway resistance used an analog computer to analyze the waveforms produced by panting in a closed box (plethysmography).
fretinator,
That’s awesome. I’ve never seen an analog computer myself; it would be great to play with one to understand it better. I’ve never even used a slide rule, which is a sort of analog multiplier/divider, but my parents did.
https://en.wikipedia.org/wiki/Slide_rule
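A tiny Python sketch (my own illustration) of why a slide rule multiplies: its scales are logarithmic, so sliding one scale along another adds lengths, and adding logarithms multiplies the underlying numbers.

import math

# "Slide" log-scaled lengths together, then read the product back off the scale.
def slide_rule_multiply(a, b):
    return math.exp(math.log(a) + math.log(b))

print(round(slide_rule_multiply(2.0, 3.5), 6))  # 7.0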
I cannot recommend reading the book Code: The Hidden Language of Computer Hardware and Software by Charles Petzold more highly. Go, now.
MLC, TLC, QLC etc… and 3D XPoint all have more than 2 levels of information storage per cell.
It would be interesting if things like error correction were done with non-binary computation, but I doubt that they are.
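For illustration, here’s a rough Python sketch (mine, not from the comment above) of how a multi-level flash cell packs more than one bit per cell: an MLC cell distinguishes four charge levels, so each level encodes two bits; TLC uses eight levels for three bits, QLC sixteen for four, and so on.

BITS_PER_CELL = 2                    # MLC; 3 for TLC, 4 for QLC
LEVELS = 2 ** BITS_PER_CELL          # four distinguishable charge levels for MLC

# Map a tuple of bits (MSB first) to one of the cell's charge levels.
def bits_to_level(bits):
    level = 0
    for b in bits:
        level = (level << 1) | b
    return level

# Read the stored bits back out of the sensed charge level.
def level_to_bits(level):
    return tuple((level >> i) & 1 for i in reversed(range(BITS_PER_CELL)))

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    level = bits_to_level(bits)
    assert level < LEVELS and level_to_bits(level) == bits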