Linked by Hadrien Grasland on Thu 13th Jan 2011 11:23 UTC, submitted by vodoomoth
Hardware, Embedded Systems "Despite the progress, ARM, which licenses its designs to chip makers, is keeping its focus on smartphones and tablets. The company's CEO, Warren East, sat down with the IDG News Service to discuss Windows, the PC market and future architecture developments."
Thread beginning with comment 458107
RE[5]: INITALLY
by Kochise on Sat 15th Jan 2011 09:43 UTC in reply to "RE[4]: INITALLY"
Kochise
Member since:
2006-03-03

Nope, memory segmentation is NOT what I would call a "gift". Sure, legacy x86 is pretty well documented, because it would be absolutely unusable otherwise. New ARM chips (Cortex, MP cores), perhaps with the help of nVidia (Tegra 2), will leverage this "per-device basis" development process.

If ARM becomes more broad and mainstream, sure, wikis will cover the issues, OSDev will open a dedicated ARM section, and then what? ARM chips offer so much more than x86 (registers, memory access, power consumption, ...); otherwise everyone would have an x86 in their DSL box, phone, game console, whatever...

ACPI is just of little interest, don't tell me otherwise; if it mattered, someone from the "community" would have shredded the 700+ page specification into well-explained and/or documented pieces, just like the 250,000+ leaked cables from Wikileaks. One just wonders why the latter happened, but not the former?

Kochise

Reply Parent Score: 1

RE[6]: INITALLY
by Neolander on Sat 15th Jan 2011 11:43 in reply to "RE[5]: INITALLY"
Neolander Member since:
2010-03-08

Nope, memory segmentation is NOT what I would call a "gift".

In 16-bit mode it's just an idiotic way to address more than 64 KiB of memory, and in 64-bit mode you essentially set it to identity mapping once and forget it forever. I don't know much about 32-bit mode, though; I didn't code on it long enough, but it seems segmentation played a bigger role in those long-gone days.

Sure, legacy x86 is pretty well documented, because absolutely not usable otherwise.

System programming with only the architecture's official documentation is, generally speaking, extremely hard. Those docs are written for people already experienced with the architecture who want a reference manual, not for learning it. In that regard, the ARM manual is no better than the x86 one.

New ARM chips (Cortex, MP) perhaps with the help of nVidia (Tegra 2) will leverage this "per-device basis" development process.

That would be great news!

If ARM become more broad and mainstream, sure Wikis will consider the issues, OSDev will open a dedicated ARM section, and then what ?

Well, ARM is already much more mainstream than x86; it's rather the "broad" part that matters ;) Once I can get my hands on a reasonably powerful ARM-based desktop or laptop, with the same guarantees that current IBM PC compatibles offer for system-level development, and with the same broad documentation base that x86 enjoys, I'll have strictly nothing against ARM taking over the world.

ARM chips provide so much more than x86 (registers, memory access, power consumption, ...)

I'd sure love to see a good comparison of the two that goes into more detail. At the moment I only know scattered details about the differences, but don't have a broad view of the thing.

otherwise anyone would have instead a x86 in their dsl box, phone, game console, whatever...

Actually, some old Intel chips are used in embedded devices. I don't remember which, though; I'll check later if you want. In a lot of areas you don't have to provide a powerful processor; it just has to be dirt cheap, and the rest is a bonus. See how long Motorola's 68xx and 68xxx managed to survive.

ACPI is just of little interest, do not tell me wrong, otherwise anyone from the "community" would have shred the 700+ pages papers into well explained and/or documented pieces, just like the 250000+ leaked notes from Wikileaks. Just wonders why the former happened, but not the first ?

On the contrary, it's very interesting as soon as you use a laptop and want it to stop blowing its fans at full speed and lasting one hour on battery. APM is now deprecated, making ACPI the only supported power management system on x86. Apart from power management, it's also the only way to control some peripherals and gather some information, and it's supposed to fully replace the MP specification someday, so those interested in multicore chips and the APIC will have to look at it sooner or later.

In my opinion, the main reason ACPI is so poorly documented is that it would be near impossible to properly document the bloated mess it has become. It's like graphics chips: we should have stopped vendors from using proprietary hardware interfaces and binary drivers from the very beginning and started a standardization effort instead, but now it's too late, and all we can do is hope they will open up their specs sooner or later, as Intel and AMD did, so that we can fully reimplement (!) the binary blobs in open-source form. On a nearly per-device basis.

Edited 2011-01-15 11:47 UTC

Reply Parent Score: 1

RE[7]: INITALLY
by Kochise on Sat 15th Jan 2011 19:45 in reply to "RE[6]: INITALLY"
Kochise Member since:
2006-03-03

In 16 bit mode it's just an idiotic way to address more than 64KB of memory

Completely stupid, I do agree with you. Providing a full-blown 20-bit address register would have done the trick. The 68000 could address 24 bits (16 MiB) and, while internally 32-bit, used only the 24 LSBs of each address register (some programming tricks stored data in the 8 unused MSBs, which led to portability problems when the fully 32-bit 68020 came out).

System programming with only the arch's official documentation is generally-speaking extremely hard.

Nope; you mentioned the 68000, and that one is gifted by the gods: completely orthogonal (you can perform any operation on any data register), very easy to use, very easy assembly language. Even the original datasheets and programming manuals are a breeze; see here: http://www.freescale.com/files/archives/doc/ref_manual/M68000PRM.pd...

I'd sure love to see some good comparison of both which goes in more details.

Well, since Ubuntu 10.10 also runs on ARM, it would be cool to compare a PandaBoard and a netbook running an Atom N550, just to see how a 1 GHz machine outperforms a 1.8 GHz one :p

In a lot of areas, you don't have to provide a powerful processor or something, it just has to be dirt cheap and the rest is a bonus.

Sure, the 68000 had so many bonuses that they granted it such a long life that its descendants are still used inside ColdFire processors.

we should have stopped vendors from using proprietary HW interfaces and binary drivers from the very beginning, and started a standardization effort instead, but now it's too late and all we can do is hope that they will open up their specs sooner or later, like Intel and AMD did, so that we can fully reimplement (!) the binary blob in an open-source form. On a nearly per-device basis.

Well, Gallium3D is an attempt to standardize the blobs and specs around a newly crafted interface. Hope it'll succeed ;)

Kochise

Edited 2011-01-15 19:46 UTC

Reply Parent Score: 1