Linked by Kroc Camen on Tue 22nd Jun 2010 12:46 UTC
Amiga & AROS The fabled Amiga X1000 has been spotted in the wild, in the homeliest of places--Station X, a.k.a. Bletchley Park. "The AmigaOne X1000 is a custom dual-core PowerPC board with plenty of modern ports and I/O interfaces. It runs AmigaOS 4, and is supported by Hyperion, a partner in the project. The most interesting bit, though, is the use of a 500MHz XCore co-processor, which the X1000's hardware designer describes as a descendant of the transputer - once the great hope of British silicon." With thanks to Jason McGint, 'Richard' and Pascal Papara for submissions.
Thread beginning with comment 431201
renox Member since:
2005-07-06

The 80s are calling and they want their 'the more registers the better' assumption back.


Uh? AFAIK nobody ever held that assumption; it's the classic trade-off of size vs. speed. Let's just say that x86 chose badly (x86-64 is better).

First off, most x86 designs are out-of-order, making the number of registers exposed to the programmer irrelevant.


Not true: 1) if your code wants to use more variables than there are registers exposed, those hidden rename registers don't help; you have to spill to the cache..
2) Ever heard of Intel's Atom? I hear it's quite popular in netbooks, and it's in-order..
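To make the spilling point concrete, here is a minimal C sketch (the function name and the number of accumulators are made up for illustration): a loop that keeps more values live than there are architectural registers forces the compiler to spill to the stack, and the extra rename registers inside an out-of-order core cannot be used for this because the code has no way to name them.

/* Hypothetical example: ~20 live values vs. 8 GPRs on x86 (16 on x86-64). */
long sum_of_products(const long *a, const long *b, int n)
{
    long t0 = 0, t1 = 0, t2 = 0, t3 = 0, t4 = 0, t5 = 0, t6 = 0, t7 = 0;
    long t8 = 0, t9 = 0, t10 = 0, t11 = 0, t12 = 0, t13 = 0, t14 = 0, t15 = 0;
    long t16 = 0, t17 = 0;   /* more live accumulators than registers */

    for (int i = 0; i + 17 < n; i += 18) {
        t0  += a[i]      * b[i];       t1  += a[i + 1]  * b[i + 1];
        t2  += a[i + 2]  * b[i + 2];   t3  += a[i + 3]  * b[i + 3];
        t4  += a[i + 4]  * b[i + 4];   t5  += a[i + 5]  * b[i + 5];
        t6  += a[i + 6]  * b[i + 6];   t7  += a[i + 7]  * b[i + 7];
        t8  += a[i + 8]  * b[i + 8];   t9  += a[i + 9]  * b[i + 9];
        t10 += a[i + 10] * b[i + 10];  t11 += a[i + 11] * b[i + 11];
        t12 += a[i + 12] * b[i + 12];  t13 += a[i + 13] * b[i + 13];
        t14 += a[i + 14] * b[i + 14];  t15 += a[i + 15] * b[i + 15];
        t16 += a[i + 16] * b[i + 16];  t17 += a[i + 17] * b[i + 17];
    }
    return t0 + t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9
         + t10 + t11 + t12 + t13 + t14 + t15 + t16 + t17;
}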


Second, x86-64 doubles the number of general-purpose registers anyway. So you get basically the same number of registers exposed to the programmer as in most modern RISC processors.


"Basically the same" as in 16 vs 32??
That said, I agree that the difference in performance is much, much lower..

Now, for real, what is so awful about x86, or at least x86-64, according to people who actually program in assembly on both (RISC and CISC)?


It sucks because the lack of regularity, the lack of registers, and stupid choices like little-endianness make it more difficult to program in assembly than it should be..

Reply Parent Score: 2

Zifre Member since:
2009-10-04

stupid choice like little endianness

Are you joking? How is little-endian a stupid choice? Admittedly, the way numbers are written in many human languages is big-endian, but for computers, little-endian is simply technically superior.

Personally, I think programming in assembly with little-endian architectures is easier. For example:

mov dword [eax], 5   ; store 5 as a 32-bit value (the operand size must be explicit)
mov ebx, [eax]       ; 32-bit load: ebx = 5
mov cx, [eax]        ; 16-bit load from the same address: cx = 5

ebx and cx will both be 5, because the least-significant bytes sit at the lowest address. This makes many things simpler.
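A little C analogue of the same property, as a sketch (assumes a little-endian host such as x86; on a big-endian machine the narrow re-read would see 0 instead):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t wide = 5;
    uint16_t narrow;

    /* Read the first two bytes of the 32-bit value. On a little-endian
     * host these are the least-significant bytes, so narrow is also 5. */
    memcpy(&narrow, &wide, sizeof narrow);
    printf("32-bit: %u, 16-bit re-read: %u\n", (unsigned)wide, (unsigned)narrow);
    return 0;
}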

All of this is not that important, but the world would be better off if we settled on one endianness. Little-endian has small technical advantages, and big-endian has cultural advantages that only apply to certain languages. Therefore, little-endian would be the logical choice.

Reply Parent Score: 2

chmeee Member since:
2006-01-10

Given that IP, and most of the network protocols built on top of IP, are big-endian, little-endian dominance is quite stupid, IMHO.
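That mismatch is why little-endian hosts have to swap bytes at the network boundary; a minimal sketch using the standard htonl()/ntohl() conversions from POSIX <arpa/inet.h>:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t host_value = 0x0A000001;        /* 10.0.0.1 as a host-order integer */
    uint32_t wire_value = htonl(host_value); /* big-endian "network order" */

    /* On a little-endian machine the two differ (bytes swapped);
     * on a big-endian machine htonl() is a no-op. */
    printf("host: 0x%08X  wire: 0x%08X  round-trip: 0x%08X\n",
           host_value, wire_value, ntohl(wire_value));
    return 0;
}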

Reply Parent Score: 1

corto Member since:
2005-08-30

I don't see any demonstration here about endianness or the hypothetical strength of the little-endian model. To me, loading a value that is byte-swapped compared to how it is stored is not natural. And not logical.

Otherwise, I am pleased to see this new computer, and the new Sam460 as well. New PowerPC products are automatically good news for me.

Reply Parent Score: 1

renox Member since:
2005-07-06

"stupid choice like little endianness

Are you joking? How is little-endian a stupid choice?
"

No, I'm not joking.
You gave the answer yourself below as to why big-endian is better: it makes analysing a memory dump much easier, and yes, humans matter; otherwise why would we use HTML/XML protocols instead of binary ones?
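The memory-dump point in a nutshell, as a small sketch: dumping the bytes of a 32-bit value in address order reads like the written number on a big-endian machine (12 34 56 78) but comes out reversed on a little-endian one (78 56 34 12).

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0x12345678;
    const unsigned char *bytes = (const unsigned char *)&value;

    /* Print the bytes in increasing address order, like a hex dump would. */
    for (size_t i = 0; i < sizeof value; i++)
        printf("%02X ", bytes[i]);
    printf("\n");
    return 0;
}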

Admittedly, the way numbers are written in many human languages is big-endian


I'm not English, yet I use English to program, to communicate on the Internet, etc. The human language which matters for computers is English, not Arabic..

but for computers, little-endian is simply technically superior.


No: some algorithms work better in little-endian, other algorithms work better in big-endian; there's no big benefit either way..
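One concrete example each way, as a hedged sketch: if fixed-width keys are stored big-endian, a plain byte-wise memcmp() orders them numerically (handy for sorted on-disk indexes), which fails with little-endian storage; conversely, multi-precision arithmetic tends to prefer little-endian, because carries propagate upward from the lowest-addressed (least significant) limb.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Store a 32-bit value as 4 big-endian bytes (helper made up for this sketch). */
static void store_be32(unsigned char out[4], uint32_t v)
{
    out[0] = (unsigned char)(v >> 24);
    out[1] = (unsigned char)(v >> 16);
    out[2] = (unsigned char)(v >> 8);
    out[3] = (unsigned char)v;
}

int main(void)
{
    unsigned char a[4], b[4];
    store_be32(a, 300);
    store_be32(b, 7);
    /* Positive result matches 300 > 7 only because the keys are big-endian. */
    printf("memcmp(a, b, 4) = %d\n", memcmp(a, b, 4));
    return 0;
}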

Reply Parent Score: 2

tylerdurden Member since:
2009-03-17

Most of the stuff you're talking about is taken care of directly by almost any competent assembler.

Again, do some of you have actual experience in programming at the assembly level or are we just going to hear the same qualitative arguments repeated ad infinitum?

Reply Parent Score: 2

renox Member since:
2005-07-06

Most of the stuff you're talking about is taken care of directly by almost any competent assembler.


Sure, a good assembler may help, but I don't think register allocation is handled by an assembler; otherwise I'd call it a compiler.

And little-endian makes 'memory dump' analysis more difficult every time you need to do it (unless you have a tool, of course), not only when programming in assembly..

As for the rest, I never said that x86's weaknesses are new..

Reply Parent Score: 2