Linked by Kroc Camen on Sun 7th Nov 2010 19:43 UTC
Hardware, Embedded Systems "I had reduced the size of my ongoing Z80 project down to something more wieldy by using CPLD chips, but it was still a bit too bulky to fit into an acceptably sized case. The next step was to look into FPGAs and see where they could take it. One thing led to another and I ended up building a self contained post modern home computer laptop.. thing." Kroc: Can I haz port of BBC BASIC plz?
Thread beginning with comment 449049
RE[4]: um why?
by burnttoys on Mon 8th Nov 2010 09:48 UTC in reply to "RE[3]: um why?"
burnttoys Member since:
2008-12-20

Indeed. By the time the 386 had finally supplanted our old 286 machines (sometime around 1989), most of us already saw the writing on the wall for 68K. The holdouts (Amiga, Atari, Apple) were fading, with the possible exception of Apple, but we weren't in DTP software so it meant little to us!

The compiler thing: there seemed to be performance parity between x86 and 68K until around the 386. This is a tricky one... Many 68K applications were written directly in assembler, whereas x86 code tended to be C (albeit non-portable C full of segmented-addressing constructs, as in the sketch below) with core routines in asm.
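For flavour, here is a sketch of the kind of non-portable segmented C I mean (a hypothetical example, assuming a 16-bit real-mode DOS dialect such as Borland/Microsoft C with the "far" keyword; it won't build on a modern flat-memory compiler, which was rather the point):

    /* Hypothetical example, not from any real codebase. A far pointer
       carries an explicit segment:offset pair; B800:0000 is the
       colour text-mode video buffer on a PC. */
    char far *video = (char far *)0xB8000000L;

    void put_char(int row, int col, char c)
    {
        /* 80x25 text mode: two bytes per cell (character, attribute) */
        video[(row * 80 + col) * 2] = c;
    }

Porting that sort of thing to a flat 68K address space meant unpicking every far/near distinction by hand.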

After the 386 it became much easier to compile code, and by the 486 things were ticking along nicely for Intel. Nearly all the cruft (notably those pointer issues I mentioned) had gone. The Pentium was a big change in optimising, and some of us went back to asm to rebuild our graphics routines etc. Of course, by that point I hadn't seen a 68K in years. Only the hard core had STs or Amigas pimped with 68040s. I certainly never saw them viewed as a platform for commercial software development beyond a few niches.

How much 8086 code did you have to write?

It was fun, but I wouldn't want to go back there. To be frank, I'm not sure I'd want to go back to 68K either. For the time it seemed packed with features, but the reality was that it probably didn't need to be! Instruction decoding for a 68K is pretty heavy; how many of those addressing modes do we _really_ use?
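To make that concrete, here is a rough, hypothetical mapping of a few 68K addressing modes onto everyday C; the asm in the comments is what a compiler might plausibly emit, and the scaled-index form is 68020+ syntax:

    /* Hypothetical illustration of 68K addressing modes via C. */
    static long x;                       /* reached via absolute addressing */
    struct node { long field; long arr[8]; };

    long demo(struct node *p, long i)
    {
        long a = x;            /* move.l (x).l,d0        -- absolute */
        long b = p->field;     /* move.l (a0),d0         -- register indirect */
        long c = p->arr[2];    /* move.l 12(a0),d0       -- indirect + displacement */
        long d = p->arr[i];    /* move.l 4(a0,d1.l*4),d0 -- indexed, scaled (68020+) */
        return a + b + c + d;
    }

Compilers of the day leaned on the first three constantly; as far as I know, the more exotic memory-indirect modes of the 68020 and later were rarely emitted at all, which is rather the point.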

Reply Parent Score: 2

RE[5]: um why?
by ricegf on Mon 8th Nov 2010 12:49 UTC in reply to "RE[4]: um why?"
ricegf Member since:
2007-04-25

The 68k series was doing fairly well commercially, particularly in the Unix workstation, workgroup server, and high-end embedded (remember VMEbus?) markets, I believe. It shipwrecked on the RISC tsunami due to bad management.

My team built custom high-end embedded computers, and went to Motorola to preview their next-generation processor. What we found were two teams in heated competition bordering on open warfare - think the Lisa and Macintosh teams at Apple. The 68040 team championed the legacy line, and the 88000 team had a new RISC design with a truly awful 3-chip implementation that seemed optimized for high cost. (We used the 88000 anyway for a very demanding real-time application, and it did at least have the performance we needed.)

In the end, neither team won - the 68040 was the end of a proud line, while the (single-chip at last) 88110 morphed into the IBM PowerPC architecture favored by Apple. Of course, Apple then deep-sixed PowerPC for Intel...

A final note: National Semiconductor's 32000 series was originally favored for Atari's "Amiga-killer", until the prototypes ran dog slow. It seems that, in practice, the greater the orthogonality of the architecture, the slower the performance, so they chose the 68000 family for their very forgettable design (especially compared to the Amiga's awe-inspiring one).

Never get an old engineer reminiscing. :-D

Reply Parent Score: 4

RE[5]: um why?
by tylerdurden on Tue 9th Nov 2010 01:02 UTC in reply to "RE[4]: um why?"
tylerdurden Member since:
2009-03-17

I had to write a ton of code for 68K, x86, MIPS, SPARC and other less-known archs.

The thing is, there is no such thing as a "pretty" ISA; there are just people who know what they are doing and people who do not. :-)

Reply Parent Score: 2