The CDC 6600 and its family members are part of computer industry history. A decade before the Cray-1, the members of the CDC 6000 family were not only expensive and the most powerful systems of their day; they were also lean and wonderful architectures! That elegance carried over into several publications (many by CDC) in which all the necessary knowledge was explained from the ground up. Everything you needed to know was supplied and clearly laid out, not just hints for efficient programming; you could practically rebuild the computer yourself by reading these books. Fifty years later they remain invaluable reminders and tools: we can see where the computer industry comes from, and realise that it is not that hard to do it ourselves, too.
Via HackerNews. This is an amazing resource.
The CDC was a little before my time. At uni I used DEC KL10s fronted by older ICL 1904s for I/O support, i.e. card readers and line printers, and programmed in Algol 60, BCPL, Fortran and, especially lovely, assembler.
Not long after graduating I heard rumors of a KL10 in a suitcase: some company other than DEC had reduced a large room of ECL hardware to a custom chip. Today that would be an almost trivial exercise for any computer-architecture graduate with FPGAs; the harder question is the software – what would be interesting enough to run on it?
The other thing to appreciate is code size. Today I notice that several Mac apps, like iTunes and Acrobat Reader, run to hundreds of MB of code, while back in the day the entire mainframe memory would have been a MB or so, if that, and possibly not even semiconductor – ferrite core or wire, etc. Even the large platter drives might only have stored a few MB each.
Knowing all this sort of holds me back now; I can’t stomach to think why code is so bloated. What is going on – is it all just 512^2 pixel maps to make it look darn pretty?
Perhaps programs were short because they were coded to do one thing and do it well, since computing time was expensive and you had to book time on terminals or card punches. I remember people trying every crazy optimization technique they could devise.
Things have changed a lot: most coders don’t try to optimize anything, and frequently just ask for faster computers so that their code, which sometimes should simply have been improved, can run at an acceptable pace.
There is also the code for the GUI and for input validation. Where invalid data once simply produced garbage output, we now have all kinds of checks, and of course they have a cost.
And last, but not least, people put all kinds of things, even barely justified ones, into their code for the sake of “completeness”, whatever that may mean in their minds.
That’s because there is nothing to stomach. Modern software does orders of magnitude more things than trivial manipulations of very limited data sets requiring just a little sequential I/O.
Increased functionality leads to increased productivity, with increased complexity as well… since nothing is “free” in this universe 😉
Yes, plenty of modern software sucks. But so did plenty of binary fodder from “the good ol’ days.”
I do not think that in the ol’ days anyone could have built so many features into one application like the ones you mention, whatever the resources. You can still program in assembly language and write a program that does only a couple of things at a fraction of the size. But do you think it would be worth it?
give me a break.
Well, in the case of iTunes for OS X, the actual executable plus bundled libraries is about 64 MB – and contains a boatload of features: all the different views, support for every iPod (even going back to the original), support for the store, managing the music database, music encoding and decoding, error handling, parental controls, a dynamic and customizable interface supporting a number of view modes, and more. And it also supports two architectures – i386 and x86_64.
Is it bloated? And, if so, is it bloated from poor code density, or is it bloated because it contains features you don’t use?
The bulk of iTunes.app is localization: 34 languages are supported, totaling 139 MB.
There’s a ton of graphics – device icons have dark and light versions, and file-type icons go to extremely high resolutions – about 25 MB of graphics in all (maybe more; I didn’t look too hard).
The iTunes help file alone is 24MB.
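(A breakdown like the one above is easy to reproduce. Here is a minimal Python sketch – my own illustration, not from these posts – that walks a macOS .app bundle and sums the bytes under each top-level entry of Contents/ (MacOS, Frameworks, Resources, and so on). The iTunes path is just an assumed example.)

```python
import os
import sys
from collections import defaultdict

def bundle_breakdown(app_path):
    """Sum file sizes under each top-level entry of a .app bundle's Contents/."""
    contents = os.path.join(app_path, "Contents")
    totals = defaultdict(int)
    for root, _dirs, files in os.walk(contents):
        rel = os.path.relpath(root, contents)
        # Attribute each file to its first path component under Contents/.
        top = rel.split(os.sep)[0] if rel != "." else "(top level)"
        for name in files:
            try:
                totals[top] += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip broken symlinks and the like
    return totals

if __name__ == "__main__":
    app = sys.argv[1] if len(sys.argv) > 1 else "/Applications/iTunes.app"
    for top, size in sorted(bundle_breakdown(app).items(), key=lambda kv: -kv[1]):
        print(f"{size / 2**20:8.1f} MB  {top}")
```

Running it against any large app makes the point concrete: the executable is usually a small slice next to Resources (localizations, artwork, help files).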
So better code optimization could bring down the actual executable size – but is that important? For an app like iTunes, where 99% of what the application does is wait for the user to do something, would there be a noticeable benefit?
With the processing power and memory of modern systems, especially relative to their cost, is absolute code efficiency even that important? It certainly was when a megabyte of RAM cost thousands of dollars; but at pennies per megabyte, and when the application isn’t performance sensitive?
I’d say no.
You still pay for it… more paging activity, more I/O, slower startups…
And a LOT more error-prone.
The problem is that mixing user interface and application logic makes for a very bug-hospitable environment.
And that translates into system vulnerabilities.
That is what modularity is supposed to address.
Unfortunately, throwing everything and the kitchen sink into an application makes for a poorly performing program.
Is it really? Or, do you just think it is?
Is Office ’95 more stable than Office 2012?
No. It isn’t. Not even close. It is also far more capable and better in pretty much every way, given that it also runs on far more capable hardware.
When people are so bothered by perceived inefficiency, without giving a thought to what is actually happening behind the scenes – when “bloat” is, 99% of the time, just a euphemism for “features I don’t use” – it makes me think they would have preferred software to stand still for the past 20 years despite the massive increase in hardware capacity.
There’s always been crummy software, but at least now, there’s a lot more usable software, too.
And usability is what’s actually important – not some imaginary utopia of efficiency or correctness.
Software is only useful if it’s usable.
The CDC 6600’s scoreboard and Tomasulo’s algorithm from the IBM System/360 Model 91 are staples of computer architecture history for out-of-order execution. I go over these before moving on to more modern techniques involving register renaming.
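Since register renaming comes up here, a toy sketch may help (my own illustration, not from the comment or any particular course). The core idea: give every architectural destination a fresh physical tag, so write-after-read and write-after-write hazards vanish and only true data dependences remain – essentially what Tomasulo’s reservation-station tags accomplish.

```python
from itertools import count

class Renamer:
    """Toy register renamer: architectural registers map onto an
    unbounded pool of physical tags (p0, p1, ...)."""

    def __init__(self):
        self._fresh = count()   # endless supply of physical tags
        self._map = {}          # architectural register -> current tag

    def _tag_for(self, reg):
        if reg not in self._map:            # first use: invent a tag
            self._map[reg] = next(self._fresh)
        return self._map[reg]

    def issue(self, dst, *srcs):
        # Read sources under the *current* mapping before renaming the
        # destination, so "r1 = r1 + r2" still sees the old r1.
        reads = tuple(self._tag_for(s) for s in srcs)
        self._map[dst] = next(self._fresh)  # fresh tag: no WAR/WAW hazards
        return self._map[dst], reads

rn = Renamer()
# r2 = r1 + r1 ; r1 = r3 * r3
# In program order the second write to r1 is a WAR hazard against the
# first instruction's reads of r1; renaming gives it a distinct tag.
for dst, srcs in [("r2", ("r1", "r1")), ("r1", ("r3", "r3"))]:
    tag, reads = rn.issue(dst, *srcs)
    print(f"{dst} -> p{tag}, reads {['p%d' % t for t in reads]}")
```

A real scoreboard or Tomasulo machine adds the bookkeeping this sketch omits: a finite tag pool, wakeup when results broadcast, and in-order retirement.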
I learned programming on a CDC 6600 at Northwestern University. Remember Nuchess? http://chessprogramming.wikispaces.com/Nuchess