Linked by Thom Holwerda on Wed 3rd Apr 2013 21:45 UTC
"Thanks to 35-year-old documents that have recently surfaced after three-plus decades in storage, we now know exactly how Apple navigated around that obstacle to create the company's first disk operating system. In more than a literal sense, it is also the untold story of how Apple booted up. From contracts - signed by both Wozniak and Jobs - to design specs to page after page of schematics and code, CNET had a chance to examine this document trove, housed at the DigiBarn computer museum in California's Santa Cruz Mountains, which shed important new light on those formative years at Apple."
RE[2]: Not me
by xiaokj on Sat 6th Apr 2013 10:35 UTC in reply to "RE: Not me "


"Taylor is slow to converge, so I've heard. I'm sure there was a standard way to do trig by 1975."

It's very easy to recognize on a plot after even just 3 Taylor terms, and each additional 2 terms covers roughly another sine cycle. It takes shape pretty quickly around the origin, but I hadn't measured the actual accuracy.

It's not a great illustration, but Wikipedia does have a picture:
http://en.wikipedia.org/wiki/Taylor_series
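To put rough numbers on "takes shape pretty quick around the origin", here's a quick sketch (mine, not from the thread) that sums partial Taylor series for sine and compares them against math.sin:

```python
import math

def taylor_sin(x, terms):
    """Partial sum of sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# Near the origin even 3 terms track sin(x) closely; farther out the
# polynomial tail takes over and the partial sum diverges badly.
for x in (0.5, 2.0, 5.0):
    print(f"x={x}: 3-term error = {abs(taylor_sin(x, 3) - math.sin(x)):.2e}")
```

The divergence away from the origin is exactly why the range-reduction tricks below matter.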

Anyway, I was just curious whether you knew what algorithms the early computers actually used; not that it matters much.

If you read up on numerical algorithms, you might find some gems. For example, I do know that lookup tables are small and very worthwhile. Also, there are trigonometric identities to exploit -- keep copies of pi and pi/4 as constants somewhere, and map everything to the first quadrant. Then use the double-angle formulae and so on to make the initial value really small, do one good sine or cosine Taylor series expansion, and finally manipulate the result algebraically into the value you want.
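That reduction scheme can be sketched as follows. The helper names and term counts are my own choices, not anything from the documents, but the quadrant-folding identities are the ones described above:

```python
import math

def _taylor_sin(x, terms=8):
    # sin(x) = x - x^3/3! + x^5/5! - ..., accurate for small |x|
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def _taylor_cos(x, terms=8):
    # cos(x) = 1 - x^2/2! + x^4/4! - ...
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

def my_sin(x):
    # 1. Reduce to [0, 2*pi) using the stored pi constant.
    x = math.fmod(x, 2 * math.pi)
    if x < 0:
        x += 2 * math.pi
    # 2. Fold into the first quadrant via identities.
    sign = 1.0
    if x > math.pi:            # sin(x) = -sin(x - pi)
        x -= math.pi
        sign = -1.0
    if x > math.pi / 2:        # sin(x) = sin(pi - x)
        x = math.pi - x
    # 3. Split at pi/4 so the series argument stays small.
    if x > math.pi / 4:        # sin(x) = cos(pi/2 - x)
        return sign * _taylor_cos(math.pi / 2 - x)
    return sign * _taylor_sin(x)
```

With the argument never exceeding pi/4, eight series terms are already at double precision, whereas the raw series at x = 5 would need far more.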

There are also continued fractions, rational function approximations and other algorithms, all interesting in their own right.

For example, the tangent or arctangent (I cannot remember which) is particularly inefficient as a Taylor expansion. A continued fraction truncated at some depth tends to give a FAR BETTER calculation.
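For the arctangent specifically (the usual example of a slowly converging Taylor series), Gauss's continued fraction illustrates the gap. This sketch is mine, not something from the documents:

```python
import math

def arctan_taylor(x, terms):
    # Leibniz-style series: x - x^3/3 + x^5/5 - ...; near x = 1 the
    # error shrinks only like 1/terms.
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1)
               for n in range(terms))

def arctan_cf(x, depth):
    # Gauss continued fraction, evaluated bottom-up:
    # arctan(x) = x / (1 + (1x)^2/(3 + (2x)^2/(5 + (3x)^2/(7 + ...))))
    acc = 2 * depth + 1
    for n in range(depth, 0, -1):
        acc = (2 * n - 1) + (n * x) ** 2 / acc
    return x / acc

# At x = 1 (arctan = pi/4), 20 series terms are still off in the second
# decimal place, while the depth-20 continued fraction is essentially exact.
print(abs(arctan_taylor(1.0, 20) - math.pi / 4))
print(abs(arctan_cf(1.0, 20) - math.pi / 4))
```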

Do note, however, that all of these methods suffer from accumulated error -- the error terms do add up, and they can end up bigger than the actual value if you are not careful. One must stop further calculation once the improvement in accuracy is more than destroyed by the loss of precision.
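A common way to encode that stopping rule is to quit as soon as the next term can no longer change the running total at machine precision. The sketch below (mine) also shows how the scheme falls apart without range reduction:

```python
import math

def sin_adaptive(x):
    # Sum Taylor terms until the next one is too small to change the total.
    term = x
    total = 0.0
    n = 0
    while total + term != total:
        total += term
        n += 1
        # Recurrence: next term = previous * (-x^2 / ((2n)(2n+1)))
        term *= -x * x / ((2 * n) * (2 * n + 1))
    return total

# Fine for small arguments...
print(abs(sin_adaptive(1.0) - math.sin(1.0)))
# ...but at x = 30 the intermediate terms reach roughly 1e11, and the
# round-off they leave behind swamps the digits we actually wanted.
print(abs(sin_adaptive(30.0) - math.sin(30.0)))
```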

It is no joke that the standard sine and cosine calculations are hundreds of instructions long, if not a lot more. It is also why computer games have all sorts of crazy approximation methods that run far faster, and part of why computer games use up more and more resources -- people just stop bothering with the older approximation methods, so resource usage creeps upward for no real gain.
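As one concrete example of those game-style shortcuts: a well-known trick replaces sine on [0, pi] with the parabola through its zeros and peak. This is a generic version of the idea, not anything attributed to a particular game:

```python
import math

def fast_sin(x):
    """Parabolic approximation of sin(x) on [0, pi]: 4x(pi - x) / pi^2.

    Exact at x = 0, pi/2 and pi, and within about 0.06 everywhere in
    between -- plenty for many visual effects, and only a couple of
    multiplies instead of hundreds of instructions.
    """
    return 4.0 * x * (math.pi - x) / (math.pi ** 2)
```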