Honestly, I don’t think the Atari 2600 BASIC has ever had a fair review. It’s pretty much reviled as a horrible program, a horrible programming environment, and practically useless. But I think that’s selling it short. Yes, it’s bad (and I’ll get to that in a bit), but having used it for the past few days, I’ve found some impressive features on a system whose RAM can’t hold a full Tweet and where half the CPU time is spent Racing The Beam. I’ll get the bad out of the way first.
Input comes from the Atari Keypads, dual 12-button keypads. If that weren’t bad enough, I’m using my keyboard as an emulated pair of Atari Keypads, where I have to keep this image open at all times.
Okay, here’s how this works.
An older story – it’s from 2015 – but fascinating nonetheless.
This was an interesting read, however the author seems unaware that dual BCD / binary support wasn’t actually unique to Atari: x86 worked the same way. The fact that engineers back then felt the need to implement two different internal numerical representations seems wasteful to me, but they felt it was important, and so x86 continued to support it until AMD did some housecleaning with AMD64.
Also, we simply don’t need special support for negative numbers in complement notation. Like the Atari, the x86 doesn’t have special instructions to add/subtract/multiply negative numbers. In assembly you use the same opcode regardless of signed types; separate signed instructions would be mathematically redundant in 2’s complement. It’s kind of neat to plug in numbers and see how it works. x86 does expose some overflow and sign flags, which can be handy for conditional jumps, but the bits stored in a variable don’t change.
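A quick sketch of that point in Python (my own example, using 8-bit values): the same wrap-around addition produces one bit pattern, and it is correct whether you read it as signed or unsigned; only the flags would differ.

def add8(a, b):
    return (a + b) & 0xFF              # 8-bit add, wraps around like the hardware does

def as_signed(x):
    return x - 256 if x & 0x80 else x  # reinterpret the same 8 bits as a signed value

r = add8(0xF6, 0x0F)   # 0xF6 is 246 unsigned, or -10 in two's complement
print(r)               # 5: 246 + 15 wraps around to 5 (only the carry flag would note the overflow)
print(as_signed(r))    # 5: exactly -10 + 15, same bits, no special signed instruction needed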
BCD had two nice features:
1. It was easy to present a number as text. You just had to do
AND #$0F   ; keep the low nibble (the low decimal digit)
ORA #$30   ; turn it into the ASCII character '0'..'9'
and you have the second character of the number. LSR four times, then the same AND/ORA, and you’ve got the first character. It is much faster than div 10 and mod 10 on a binary number. (There’s a Python sketch of the same trick after this list.)
2. It was accurate for financial operations. You won’t have the additional error that occurs during conversion from decimal to binary and vice versa. That’s why you should not use JavaScript numbers for financial operations.
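For anyone who doesn’t read 6502, here is a rough Python equivalent of the trick from point 1 (my own sketch, not from the original post): each nibble of a packed BCD byte already holds one decimal digit, so ORing with $30 turns it straight into an ASCII character, with no division by 10 anywhere.

bcd = 0x47                       # packed BCD for the decimal number 47
low  = chr((bcd & 0x0F) | 0x30)  # the AND/ORA step: low nibble  -> '7'
high = chr((bcd >> 4)  | 0x30)   # four LSRs, then ORA: high nibble -> '4'
print(high + low)                # "47", no div 10 / mod 10 required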
No, not exactly; there is no error converting from decimal to binary. Did you mean to say floating point? If so, that’s true but there’s no reason one has to use floating point at all. Fixed point is easy enough to use and doesn’t have that problem.
Obviously none of this matters now; it’s all ancient history… But if I were at the drawing board I’d have told them not to waste their time making microprocessors compute BCD numbers in hardware. If I were to guess, I suspect the engineers who did this may have had experience building early digital calculators, in which case it would have been much easier to directly decode BCD digits into a 7-segment display without the need for a prohibitively expensive microprocessor to perform a base conversion.
More importantly though, I would have told them to watch out for artificial hardware and protocol word size limits as those will cause endless headaches down the line, haha!
IIRC the initial reason for Intel making the first microprocessor was to use it in a calculator…
As for the article: I always thought coding for the 2600 must be insane, with only 128 bytes of RAM! (There was even a chess game; the positions of the pieces on the board alone must take a non-trivial amount of RAM.) Also, before long it will be half a century since the launch of the 2600… a good occasion to make something for it (even if only in BASIC).
PS. And Thom, older articles are perfectly fine; without them we could end up in a http://www.osnews.com/story/30164/Google_memory_loss world…
zima,
I agree; in fact, I tend to prefer them to the barrage of Google/Apple/MS-centered news.
Of course I’m right! ;P …from https://en.wikipedia.org/wiki/Intel_4004
Also some other pages…
https://en.wikichip.org/wiki/intel/mcs-4/4004
https://www.intel.com/content/www/us/en/history/museum-story-of-inte…
http://gunkies.org/wiki/I4004
The 2600 Basic cart was written by Warren Robinett, who also wrote Adventure for the 2600 (amongst others).
I had 2600 Basic back in the day, and it is very cool and an amazing accomplishment, but let’s face it: it is a toy. There is just not enough RAM available to the cart to do anything interesting, other than run it.
Even an additional page (256 bytes) would have been enough to write some simple games or such.
Still, it is amazing.
I could be wrong, but isn’t the reason that BCD math was included in early CPUs a carry-over from the mainframes, where BCD math was a requirement for some financial programming languages? I am thinking of COBOL in particular.
I first learned to program the 6502 in the late 70s and what I was told is that BCD mode was there for “business math” like accounting applications and using the 6502 in embedded apps like calculators.
The point being it wasn’t a holdover, but a needed and used feature of the day.
jockm,
Ah, how funny is this: you posted the exact opposite of what I just posted, haha. I’d like to press you further on this, though: why would this have been a technically needed feature for a microprocessor?
To do decimal-accurate math. Sure, you could do it in software, but it would be slower. The 6502 was no slouch, but we are still talking about a 1 MHz processor. It didn’t take that many gates to implement BCD, so easy peasy.
If you want to do fixed point math, BCD is the way to do it. If you are doing financial math then BCD is what you need. It wasn’t some holdover from big iron because big iron had it; it is a feature that was needed. This is why you see BCD in many microprocessors of the time. It was there because customers needed it.
Does that help? I am happy to answer anything you want.
You know, it occurs to me it might not be obvious why you might want BCD. I mean, I have tossed out phrases like “fixed point math” and “financial applications”, but that may not be clear.
Floating point math is based on the idea that we divide the number into two parts: the mantissa and the exponent. I won’t go into a lot of detail, I highly suggest checking out: https://en.wikipedia.org/wiki/Floating-point_arithmetic.
Crudely, the mantissa represents the number without the decimal point, and the exponent says where the decimal point is.
The advantage of this approach is that you can represent very large numbers and very small numbers using a fixed number of bits (these days normally 32, 64, or 80). The downside is that it isn’t an exact representation of the number with all of its digits, so over multiple calculations you may not have the exact result, but an approximate one. Most of the time this doesn’t matter, and you use a predictable amount of storage every time.
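As a small illustration (my own sketch in Python, not from the comment above), math.frexp splits a float into exactly those two parts:

import math

m, e = math.frexp(6.25)     # 6.25 == 0.78125 * 2**3
print(m, e)                 # 0.78125 3  -- the mantissa and the (binary) exponent
print(math.ldexp(m, e))     # 6.25, put back together from mantissa and exponent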
However, if you are dealing with money, compound calculations, and the like, saying “here is the approximate amount of money you owe” doesn’t go over well. Instead you store the number much like we were taught to do math as kids: using decimal digits, with a specific number of digits behind the decimal point always present.
Then you can do things like use industry-standard rounding practices, and always have a precise result. The downside is that if you want a 20-digit number with 4 digits after the decimal point, that is going to consume 10 bytes (80 bits) in packed BCD every single time.
Of course you can do all of this in software (and most languages have fixed point libraries to do this very thing), but back in the day every cycle was precious, as was code space. So having BCD support in the CPU made a difference.
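To make the “fixed point libraries” remark concrete, here is a small sketch using Python’s decimal module (my example; strictly speaking it is arbitrary-precision decimal rather than true fixed point, but it shows the explicit, named rounding rule the comment is talking about):

from decimal import Decimal, ROUND_HALF_UP

price = Decimal("19.99")
rate  = Decimal("0.0825")                       # hypothetical 8.25% tax rate
tax   = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)                                      # 1.65, rounded by an explicit rule
print(price + tax)                              # 21.64, exact to the cent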
Fixed point math doesn’t have to have a fixed number of digits before the decimal, just a fixed number behind*.
I wrote a fair amount of arbitrary precision fixed point math back in the day. Little endian systems had the edge here, but that is a conversation for another day.
*: You could use BCD with arbitrary placement of the decimal point. It required a bit more code, and had its uses, but it was less common.
jockm,
Yes, I’m aware of what floating point arithmetic is and I agree with what you say about it.
jockm,
No, I’m afraid that does not help. As you know, BCD essentially emulates a base 10 digit on a binary machine. This allows a computer to perform base 10 arithmetic as humans do. That’s all fine and dandy, however I strongly disagree with the notion here that BCD is “what you need” for financial or fixed point math. Integer arithmetic in base 10 will always result in the same value as the computation performed in base 2, 16, or even 7. There’s nothing special that requires us to use base 10 other than convention.
All integers have a finite 1:1 equivalent representation in any other number base. It’s not until we get to fractions that the notorious floating point errors arise.
For example, in base 3:
1 / 10 = 0.1 (that is, one divided by three)
However, the same division in base 10:
1 / 3 = 0.33333333…
There is no finite 1:1 equivalent number in decimal that equals 0.1 in base 3. At some point the decimal representation of this number is truncated and results in an error, which I’m pretty sure everyone will agree is the source of floating point error.
However floating point error does not occur with fixed point arithmetic because we are always working with integers, and all integers have a 1:1 equivalent in every other base, so the number base is mathematically irrelevant.
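A quick Python check of that argument (my own sketch): an integer survives a round trip through any base unchanged, while a fraction like 1/10 has no finite expansion in base 2, just as 1/3 has none in base 10.

n = 12345
print(int(format(n, "b"), 2) == n)   # True: the binary round trip is exact
print(int(format(n, "x"), 16) == n)  # True: same story in hexadecimal

print(float.hex(0.1))                # 0x1.999999999999ap-4 -- 1/10 never terminates in base 2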
I think part of the confusion is that people are accustomed to thinking of pennies as fractions of dollars, like $0.43. However, with fixed point you don’t consider them fractions at all; you count them as 43 whole pennies.
Representing fixed point numbers is a common enough problem that SQL has a DECIMAL datatype that can be used to represent dollars and change with complete accuracy. The details are completely abstracted from the user, but I’m willing to bet that you won’t find a database like MySQL implementing its decimal types using BCD; it will use normal binary numbers!
Well, if you use an INT, BIGINT, FLOAT, or DOUBLE (or the like) then yes, they are probably stored as what we are calling binary numbers, though this is up to the database in question.
However, if you use DECIMAL(len,dec) numbers then it is using BCD or something BCD-like. Again, this is up to the database to determine.
In the case of MySQL we know that it stores DECIMALs in a packed binary format (see: https://dev.mysql.com/doc/refman/5.7/en/precision-math-decimal-chara…).
The other thing to remember here is that while BCD can be used as a storage format, that isn’t its real use; it is an encoding for doing the actual calculation. So you could easily store and retrieve a double, but do all the intermediate math in a BCD-like format (like BigDecimal in Java, for example).
jockm,
If you look at BigDecimal in Java, I’m pretty sure that’s implemented in binary too. The only time BCD should come into play is when a binary number is being converted to or from displayable text. It doesn’t make much sense for intermediate calculations on a binary computer to be done in BCD. I was hoping my last post would illustrate why there isn’t a reason to prefer one base over another for fixed point math, but I feel you skipped that.
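For what it’s worth, the Java documentation describes BigDecimal as an arbitrary-precision unscaled integer plus a scale, i.e. ordinary binary integers under the hood. A rough Python sketch of that representation (hypothetical names, positive values only, just to illustrate the idea):

# "1.20" stored as unscaled value 120 with scale 2 -- plain binary integers inside.
class TinyDecimal:
    def __init__(self, unscaled, scale):
        self.unscaled = unscaled          # arbitrary-precision int (binary internally)
        self.scale = scale                # digits to the right of the decimal point
    def __add__(self, other):
        assert self.scale == other.scale  # keep the sketch simple: same scale only
        return TinyDecimal(self.unscaled + other.unscaled, self.scale)
    def __str__(self):
        s = str(self.unscaled).rjust(self.scale + 1, "0")
        return s[:-self.scale] + "." + s[-self.scale:]

print(TinyDecimal(120, 2) + TinyDecimal(43, 2))   # 1.63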
Anyways, it doesn’t much matter. I’ve enjoyed chatting about this and reminiscing, thank you. I wish we had more technical discussions!
Please note that:
in decimal and BCD:
30/5 = 6, and it is an “exact” result.
In binary:
30/5 = an irrational number, so no matter whether you use decimals or integers you no longer have an exact result; you have lost something.
So I am sorry, but your example about dollars and cents does not hold here, and for money you always need BCD.
mgiammarco,
Why would you say 30/5 is an irrational number? It exactly equals “6” in decimal and “110” in binary.
All numbers that can be expressed as ratios of integers are, by definition, rational numbers (regardless of number base):
https://en.wikipedia.org/wiki/Irrational_number
1/3 = 0.333333… is a rational number because its digits repeat.
PI = 3.141… is an irrational number because its digits do not repeat (and it cannot be represented as a ratio of integers).
I didn’t expect so much pushback on the topic of BCD
Why would you say 30/5 is an irrational number? It exactly equals “6” in decimal and “110” in binary.
Sorry, I picked the wrong example.
Please do the following calculation:
6/5 = 1.2 in decimal.
Now do 110/101 in binary and you will understand why you need BCD (the number obtained will need to be truncated and you lose “a little bit”).
Please do not cheat: do the calculation in binary.
Use a binary calculator with floating point support, like:
http://www.exploringbinary.com/binary-calculator/
Let me try to explain better.
If with a pocket calculator you do:
1/3 = 0.3333
then 0.3333 + 0.3333 + 0.3333 = 0.9999, and it is obvious there is an error: you have lost 0.0001.
More subtle is that if you do, in decimal:
1/5 = 0.2
0.2 + 0.2 + 0.2 + 0.2 + 0.2 = 1 (you have lost nothing).
But if you do it in binary, you risk that the result of the sum is not 1 but something like 0.999999.
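To see the binary version of that effect without a calculator, here is a quick Python check (my own example): 0.1 has no finite binary expansion, so adding it up repeatedly drifts, while the same sum done in decimal does not.

from decimal import Decimal

total = 0.0
for _ in range(10):
    total += 0.1                   # 0.1 is not exactly representable in binary
print(total)                       # 0.9999999999999999 -- not quite 1.0

print(sum([Decimal("0.1")] * 10))  # 1.0 exactly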
mgiammarco,
With fixed point you simply scale everything by 100 and work with whole numbers. This way we have an exact 1:1 representation for 2 digits past the decimal point. And as I tried illustrating earlier, this has many uses, such as the SQL DECIMAL data type.
So for your example,
6/5 = 1.2
would turn out to be 120 in our fixed point arithmetic illustration, which can be represented in whatever number base is convenient (aka binary).
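Put as code (my sketch of the same illustration): scale by 100 first and the whole thing is exact integer arithmetic, in whatever base the machine happens to use.

SCALE = 100                             # two digits past the decimal point
q = (6 * SCALE) // 5                    # 600 // 5 = 120, exact, plain binary integers
print(f"{q // SCALE}.{q % SCALE:02d}")  # 1.20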
I hope this makes sense to everyone now
Earl C Pottinger,
Who knew this would be such a fun little discussion!
You are absolutely right about mainframes using BCD. In fact, if you look at VSAM files and copybook formats, BCD is still used on mainframes to this day. It makes it a cinch to look at the raw data and understand what it represents in decimal.
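A tiny illustration of why that is so readable (my own sketch, ignoring COBOL sign nibbles and the like): with packed BCD the hex dump of the bytes is the decimal number.

amount = 1234
packed = bytes([0x12, 0x34])          # packed BCD: one decimal digit per nibble
print(packed.hex())                   # '1234' -- the raw hex dump reads as the decimal value
print(int(packed.hex()) == amount)    # True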
I think that design made it easier to program the mainframes using punchcards and to inspect the register debugging lamps.
Today we take binary<->decimal conversion for granted; even Windows Calc can do it. But back then they would have needed to use another computer just to debug and convert the values being presented by the mainframe.
While there’s nothing special about BCD in terms of mathematical requirements, it could well have been a practical concession to the primitive user interfaces of the time. Back when the human<->machine interaction was as raw as it was with the mainframe, BCD would have made things much easier for human operators to understand. It was lights, switches and punchcards at the beginning!
By the time interactivity became more sophisticated the BCD data format was probably well established and it just stuck around.