Linked by Hadrien Grasland on Thu 13th Jan 2011 11:23 UTC, submitted by vodoomoth
Hardware, Embedded Systems "Despite the progress, ARM, which licenses its designs to chip makers, is keeping its focus on smartphones and tablets. The company's CEO, Warren East, sat down with the IDG News Service to discuss Windows, the PC market and future architecture developments."
puzzling
by fran on Fri 14th Jan 2011 12:03 UTC
fran
Member since:
2010-08-06

This interview is a bit puzzling.
Earlier interviews with ARM execs were a lot more forthcoming and less reserved.

First, ARM has a 64-bit chip lined up:
http://www.eetimes.com/electronics-news/4210884/ARM-64-bit-CPU-comi...

In another report, an ARM exec predicted ARM dominance in the desktop arena with hybrid devices: tablets/laptops with docking stations.
http://www.pcworld.idg.com.au/article/369198/arm_co-founder_intel_d...

He played the cards very close to his chest here.

Reply Score: 2

INITIALLY
by judgen on Fri 14th Jan 2011 17:17 UTC
judgen
Member since:
2006-07-12

ARM on a low-power desktop is going to happen; whether it will be a success or not is up to the future to tell... I hope that it will be a step forward and not back.

Reply Score: 2

RE: INITIALLY
by Kochise on Fri 14th Jan 2011 20:49 UTC in reply to "INITIALLY"
Kochise Member since:
2006-03-03

It will, on mini/nano/pico-ITX boards. The new ARM line-up will provide a great boost in performance and a decrease in power usage, while getting rid of the horrible x86 legacy ;)

Kochise

Reply Score: 1

RE[2]: INITIALLY
by Neolander on Fri 14th Jan 2011 22:11 UTC in reply to "RE: INITIALLY"
Neolander Member since:
2010-03-08

What do you call the "horrible x86 legacy"?

Reply Score: 1

RE[3]: INITIALLY
by Kroc on Fri 14th Jan 2011 23:28 UTC in reply to "RE[2]: INITIALLY"
Kroc Member since:
2005-11-10

Like the fact that an x86-64 processor still starts in 16-bit mode before switching up, purely for legacy reasons: http://www.pagetable.com/?p=460

Reply Score: 1

RE[4]: INITIALLY
by Neolander on Sat 15th Jan 2011 06:31 UTC in reply to "RE[3]: INITIALLY"
Neolander Member since:
2010-03-08

Well, actually this is a gift to OS hobbyists (albeit not to many others).

It wouldn't take many changes to tweak existing, well-established kernels so that the CPU could start up in 32-bit mode directly. On the other hand, in that case you'd lose most of the (just as legacy) nice functionality provided by the BIOS (since many BIOS functions work only in 16-bit mode).

What does that mean? That years of existing, well-established documentation about low-level development on x86 would be left unusable. For hobbyists, x86 would become something like ARM: all you've got is the vendor's big paper brick and the unreadable Linux code, there is no tutorial or well-written wiki doc whatsoever, and most hardware is managed through either nonstandard or poorly documented interfaces. You more or less have to code on a per-device basis.

There is at least one example of such technology on x86 already: ACPI. When you want to implement it, all you've got is an indigestible (700+ page) pile of paper and the Linux code. Well, try starting a thread on OSdev.org about what people think of it.

X86's 16-bit mode is a reminder that there was a time when HW manufacturers actually cared about OS developers. As long as it costs next to nothing to implement (16-bit support is just a little program in the processor's microcode nowadays), I do think that we should keep it as a playground for newcomers to the hobby OS market. Others can ignore it through the use of bootloaders like GRUB anyway.

Edited 2011-01-15 06:33 UTC

Reply Score: 1

RE[5]: INITIALLY
by Kochise on Sat 15th Jan 2011 09:43 UTC in reply to "RE[4]: INITIALLY"
Kochise Member since:
2006-03-03

Nope, memory segmentation is NOT what I would call a "gift". Sure, legacy x86 is pretty well documented, because it would be absolutely unusable otherwise. New ARM chips (Cortex, MP), perhaps with the help of nVidia (Tegra 2), will leverage this "per-device basis" development process.

If ARM becomes more broad and mainstream, sure, wikis will cover the issues, OSDev will open a dedicated ARM section, and then what? ARM chips provide so much more than x86 (registers, memory access, power consumption, ...); otherwise everyone would have an x86 in their DSL box, phone, game console, whatever...

ACPI is just of little interest, don't tell me otherwise; if it weren't, someone from the "community" would have shredded the 700+ page papers into well-explained and/or documented pieces, just like the 250,000+ leaked notes from Wikileaks. One wonders why the latter happened, but not the former.

Kochise

Reply Score: 1

RE[6]: INITIALLY
by Neolander on Sat 15th Jan 2011 11:43 UTC in reply to "RE[5]: INITIALLY"
Neolander Member since:
2010-03-08

Nope, memory segmentation is NOT what I would call a "gift".

In 16-bit mode it's just an idiotic way to address more than 64KB of memory, and in 64-bit mode you essentially set it to identity mapping once and forget it forever. I don't know about 32-bit mode, though; I didn't code on it long enough, but it seems that segmentation played a bigger role in those long-gone days.
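(As a purely illustrative sketch of what's being criticized here: in 16-bit real mode the physical address is computed as segment * 16 + offset, which stretches the reach to roughly 1 MiB but also means many different segment:offset pairs alias the same byte. The Python below is just arithmetic, not real hardware code.)

```python
def real_mode_phys(segment, offset):
    """Physical address generated by a 16-bit x86 in real mode:
    the 16-bit segment is shifted left by 4 bits and added to the
    16-bit offset, yielding a 20-bit (1 MiB) address space."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    # Mask to 20 bits: addresses wrap at 1 MiB when the A20 line is masked.
    return ((segment << 4) + offset) & 0xFFFFF

# The classic BIOS boot-sector load address, 0x7C00, can be named by
# many different segment:offset pairs -- hence the "idiotic" part:
print(hex(real_mode_phys(0x0000, 0x7C00)))  # 0x7c00
print(hex(real_mode_phys(0x07C0, 0x0000)))  # 0x7c00, same byte
```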

Sure, legacy x86 is pretty well documented, because it would be absolutely unusable otherwise.

System programming with only the arch's official documentation is, generally speaking, extremely hard. These docs are designed for people already experienced with the architecture who want a reference manual, not for learning the thing. In this regard, the ARM manual is no better than the x86 one.

New ARM chips (Cortex, MP), perhaps with the help of nVidia (Tegra 2), will leverage this "per-device basis" development process.

That would be great news!

If ARM becomes more broad and mainstream, sure, wikis will cover the issues, OSDev will open a dedicated ARM section, and then what?

Well, ARM is already much more mainstream than x86; it's rather the "broad" that matters ;) Once I can get my hands on a reasonably powerful ARM-based desktop or laptop, with the same guarantees that the current IBM PC compatible offers in terms of system-level development, and with the same broad documentation base that x86 offers, I'll have strictly nothing against ARM taking over the world.

ARM chips provide so much more than x86 (registers, memory access, power consumption, ...)

I'd sure love to see a good comparison of the two that goes into more detail. At the moment I only know scattered details about the differences, but don't have a broad view of the thing.

otherwise everyone would have an x86 in their DSL box, phone, game console, whatever...

Actually, some old Intel chips are used in embedded devices. I don't remember which, though; I'll check later if you want. In a lot of areas you don't have to provide a powerful processor; it just has to be dirt cheap and the rest is a bonus. See how long Motorola's 68xx and 68xxx managed to survive.

ACPI is just of little interest, don't tell me otherwise; if it weren't, someone from the "community" would have shredded the 700+ page papers into well-explained and/or documented pieces, just like the 250,000+ leaked notes from Wikileaks. One wonders why the latter happened, but not the former.

It's very interesting, on the contrary, as soon as you use a laptop and want it to stop blowing its fans at full speed and only lasting an hour on battery. APM is now deprecated, making ACPI the only supported power management system on x86. Apart from the power management part, it's also the only way to control some peripherals and gather some information, and it's supposed to fully replace the MP specification someday, so those interested in multicore chips and the APIC will have to look at it sooner or later.

In my opinion, the main reason ACPI is so poorly documented is that it'd be near impossible to properly document the bloated mess it has become. It's like graphics chips: we should have stopped vendors from using proprietary HW interfaces and binary drivers from the very beginning and started a standardization effort instead, but now it's too late, and all we can do is hope that they will open up their specs sooner or later, like Intel and AMD did, so that we can fully reimplement (!) the binary blob in open-source form. On a nearly per-device basis.

Edited 2011-01-15 11:47 UTC

Reply Score: 1

RE[7]: INITIALLY
by Kochise on Sat 15th Jan 2011 19:45 UTC in reply to "RE[6]: INITIALLY"
Kochise Member since:
2006-03-03

In 16-bit mode it's just an idiotic way to address more than 64KB of memory

Completely stupid, I do agree with you. Providing a full-blown 20-bit address register would have done the trick. The 68000 was able to address 24 bits (16 MiB) and, while internally 32-bit, used only the 24 LSBs of each address register (some programming tricks used the 8 unused MSBs, which led to portability problems when the 68020 came out, being fully 32-bit).
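(The portability trap described above can be sketched like this; Python again, purely illustrative: a 68000 only drives 24 address lines, so the top byte of a 32-bit address register is ignored and was often abused to store a tag, while a fully 32-bit 68020 suddenly takes those bits seriously.)

```python
def bus_address(reg, address_lines=24):
    """Address actually seen on the bus for a 32-bit address register.
    A 68000 drives 24 address lines; a 68020 drives all 32."""
    return reg & ((1 << address_lines) - 1)

# Old trick: stash an 8-bit tag in the "unused" top byte of a pointer.
tagged = (0xAB << 24) | 0x00123456

print(hex(bus_address(tagged, 24)))  # 68000: tag ignored -> 0x123456
print(hex(bus_address(tagged, 32)))  # 68020: tag becomes part of the address!
```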

System programming with only the arch's official documentation is, generally speaking, extremely hard.

Nope; you mentioned the 68000, and this one is gifted by the gods: completely orthogonal (you can perform any operation on any data register), very easy to use, very easy assembler; even the original datasheets and programming manuals are a breeze, see here: http://www.freescale.com/files/archives/doc/ref_manual/M68000PRM.pd...

I'd sure love to see a good comparison of the two that goes into more detail.

Well, since Ubuntu 10.10 also runs on ARM, it would be cool to compare a PandaBoard and a netbook running an Atom N550, just to see how a 1 GHz machine outperforms a 1.8 GHz one :p

In a lot of areas you don't have to provide a powerful processor; it just has to be dirt cheap and the rest is a bonus.

Sure, the 68000 had so many bonuses that they granted it such a long life that it's still used inside ColdFire processors.

we should have stopped vendors from using proprietary HW interfaces and binary drivers from the very beginning and started a standardization effort instead, but now it's too late, and all we can do is hope that they will open up their specs sooner or later, like Intel and AMD did, so that we can fully reimplement (!) the binary blob in open-source form. On a nearly per-device basis.

Well, Gallium3D is an attempt to standardize the blobs and specs around a newly crafted interface. Hope it'll succeed ;)

Kochise

Edited 2011-01-15 19:46 UTC

Reply Score: 1