Linked by Kroc Camen on Sun 7th Nov 2010 19:43 UTC
Hardware, Embedded Systems "I had reduced the size of my ongoing Z80 project down to something more wieldy by using CPLD chips, but it was still a bit too bulky to fit into an acceptably sized case. The next step was to look into FPGAs and see where they could take it. One thing led to another and I ended up building a self contained post modern home computer laptop.. thing." Kroc: Can I haz port of BBC BASIC plz?
Thread beginning with comment 449043
RE[2]: um why?
by burnttoys on Mon 8th Nov 2010 08:05 UTC in reply to "RE: um why?"
burnttoys
Member since:
2008-12-20

I'm not trying to argue with you, I really don't know what's so great about the M68K series.


Compared to Intel it was remarkably clean and, frankly, powerful (a massively overused word). The word everyone used at the time was "orthogonal", which made it easy to program in asm and easy to write compilers for. I wrote large amounts of 68K and built a compiler for a Forth-like language - it was fairly simple.

8086 on the other hand was a mess of segment registers (even C code became non-portable on 8086 due to having at least 2 pointer types, often more). Registers had special meanings (counter: CX; index: SI, DI, etc.) so writing compilers became a tedious mess of register allocators (or lots of wasteful stack operations). Worse than that, the same operation could take a different amount of time depending on which register you used - I seem to remember a memory read using SI as the register taking more time than one using SP. Optimisation was damn painful!

Those were the big ones for the programmer - the 68K's flat 32-bit memory model versus Intel's 16+16=20-bit segmented model, and the register set (the 68K had a much bigger set as well as orthogonal instructions). The 68K also had a rudimentary supervisor mode.

There were others - I had a brief hack on the 32000 series (National Semiconductor) - they were almost 68K goodness but the market never picked up on them (lazy marketing?). The Z80000 was "ok" but far too late, and Zilog also made a bit of a mess of the chip packaging.

The most astonishing CPU of that era was probably the first-gen Transputers (the T414 32-bit CPU and the T212 16-bit I/O CPU). I saw a data logger built using them some 20 years back - it was a brilliant piece of design - I've still got my OCCAM book somewhere.

Having said all that, I saw guys building high-end data loggers using 6800s well into the 90s. It's amazing what good hardware design and an excellent algorithm can accomplish. The 6800 was lovely, and so was the 6303 super-clone.

The 6502 was nice, but that damn page-1 stack always got in the way (the stack is limited to 256 bytes). Still, making a BBC Model B multitask BASIC was a delightful trick... Recursive image compression is a bitch with a tiny stack!

The worst thing about all the early 16/32 bit CPUs was the speed of multiply.

Sorry... nostalgia :-D

Soooo... for its time the 68K had everything - a very good instruction set, excellent hardware interfacing (including being able to use older peripherals), reasonable price, plenty of supply. Sadly we know where the market went, but by the time x86 had hauled its backside through to the Pentium it wasn't _too_ bad a CPU.

Reply Parent Score: 5

RE[3]: um why?
by tylerdurden on Mon 8th Nov 2010 09:25 in reply to "RE[2]: um why?"
tylerdurden Member since:
2009-03-17

You mean, compared to the 8086. Yes, the 68000 was clearly a superior microprocessor/architecture. But by the time the 386 came around, most of the issues you mentioned had been addressed.

Then there was also the "tiny" issue that x86 compilers were managing higher performance than their 68K counterparts. Which is ironic, since people like to wax poetic about how much easier the 68K was supposed to be to program. I see purely qualitative arguments on this sort of matter as a red flag - they usually mean the argument isn't based on first-hand experience.

Reply Parent Score: 2

RE[4]: um why?
by burnttoys on Mon 8th Nov 2010 09:48 in reply to "RE[3]: um why?"
burnttoys Member since:
2008-12-20

Indeed. By the time the 386 had finally supplanted our old 286 machines (sometime around 1989), most of us already saw the writing on the wall for the 68K. The holdouts (Amiga, Atari, Apple) were fading - except maybe Apple, but we weren't in DTP software so it meant little to us!

The compiler thing - there seemed to be performance parity between x86 and 68K until around the 386. This is a tricky one... Many 68K applications were written directly in assembler, whereas x86 code tended to be C (albeit non-portable C with segmented address capabilities) with core routines in asm.

After the 386 it became much easier to compile code, and by the 486 things were ticking along nicely for Intel. Nearly all the cruft (notably those pointer issues I mentioned) had gone. The Pentium was a big change in optimising, and some of us went back to asm to rebuild our graphics routines etc. Of course by that point I hadn't seen a 68K in years. Only the hard core had STs or Amigas pimped with 68040s. I certainly never saw them viewed as a platform for commercial software development beyond a few niches.

How much 8086 code did you have to write?!

It was fun, but I wouldn't want to go back there. To be frank, I'm not sure I'd want to go back to the 68K either. For the time it seemed packed with features, but in reality it probably didn't need to be! Instruction decoding for a 68K is pretty heavy - how many of those addressing modes do we _really_ use?

Reply Parent Score: 2