Linked by Thom Holwerda on Mon 5th Nov 2012 23:40 UTC
Another Apple-to-switch-Macs-to-ARM post. "Apple engineers have grown confident that the chip designs used for its mobile devices will one day be powerful enough to run its desktops and laptops, said three people with knowledge of the work, who asked to remain anonymous because the plans are confidential. Apple began using Intel chips for Macs in 2005." No idea when Apple will make the switch, but they will do it. I'm thinking 5-10 year timeframe.
Thread beginning with comment 541117
Laurence
Member since:
2007-03-26

That sounds like making something that's slower go faster than something that goes faster.

Why not optimize for the faster option and make it even faster?

Actually, that's how x86 has been operating for the last decade. Intel has had to employ a number of 'cheats' to keep up with Moore's law, and as a result x86 CPUs have gotten very long in the tooth.


iOS devices can do impressive stuff speed-wise, but when it comes to certain desktop applications it's hard to beat raw power.

Few people run applications that need that kind of raw power, and on the few occasions it is required, a switch to multiple RISC cores over fewer CISC cores might pay dividends in the long run. Admittedly there's still a software hurdle to overcome there (teaching developers to write good multi-threaded code isn't a small task). But for DAWs, video editors, and image manipulation software, having dedicated RISC cores for filters and FX makes a lot of sense for low-latency work.


I genuinely think that if we want to sustain the exponential growth in processing power we've enjoyed thus far, then we need to learn to parallel-process better rather than relying on clever processor tricks to mimic such gains (e.g. out-of-order execution). And to do that, it makes more sense to have more dedicated RISC cores: it's easier to stack cores on one die, and the lower power draw means the CPUs run cooler (cooling top-end multi-core monsters is always a bit of a battle).
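A minimal sketch of the kind of embarrassingly parallel filter/FX work described above, in Python. The function names and the moving-average "filter" are made up for illustration, and a process pool stands in for the dedicated cores being discussed; the point is only the shape of the work: independent chunks fanned out across cores.

```python
# Sketch: spreading a filter across cores by splitting the signal into
# independent chunks. The moving-average "blur" is a stand-in for any
# per-chunk DSP effect.
from multiprocessing import Pool

def apply_filter(chunk):
    # Hypothetical per-chunk effect: a simple 3-sample moving average.
    out = []
    for i in range(len(chunk)):
        window = chunk[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def parallel_fx(signal, n_workers=4, chunk_size=1024):
    # Split into chunks, filter each chunk on its own core, reassemble.
    chunks = [signal[i:i + chunk_size] for i in range(0, len(signal), chunk_size)]
    with Pool(n_workers) as pool:
        results = pool.map(apply_filter, chunks)
    return [sample for chunk in results for sample in chunk]

if __name__ == "__main__":
    signal = [float(i % 10) for i in range(4096)]
    print(len(parallel_fx(signal)))  # prints 4096
```

Real DAWs and video editors do this with far more care (chunk boundaries matter for stateful filters), but the scaling argument is the same: more cores, more chunks in flight.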

Reply Parent Score: 5

MOS6510 Member since:
2011-05-12

(teaching developers to write god multi-threaded code isn't a small task).


Yes, I can imagine!

Reply Parent Score: 3

Tuishimi Member since:
2005-07-06

I wonder how many threads God had to use to create the universe... Talk about multi-threaded. And I wonder if He went with agile processes or good old-fashioned up-front design?

Reply Parent Score: 3

WereCatf Member since:
2006-02-15

video editors, and image manipulation software; having dedicated RISC cores for filters and fx makes a lot of sense for low latency work.


I echo this guy's sentiment: ARM may not be as powerful a general-purpose CPU as x86/x64, but it can be tailored for certain kinds of tasks, after which it can beat x86/x64 hands down. Video and image manipulation are already handled quite well on ARM by cores designed specifically for those tasks; many video cores these days can handle both decoding and encoding of video in real time, for example. It's just a matter of adding support for effects in the core and updating the software to make use of it, and you'll likely get better performance and better battery life than with x86/x64 solutions.

That is to say, x86/x64 is good for all-purpose tasks where general, raw power is important. ARM is much worse at that, but a whole lot better at more specific tasks.

About the topic itself: I could certainly see Apple going for ARM in the future; they have a lot to gain from such a shift in architecture. It would possibly start with only MacBook Airs going ARM, in an effort to see how the public reacts, to let developers start the transition on their end, and to prepare the public for a bigger push a few years after that.

Reply Parent Score: 3

Tuishimi Member since:
2005-07-06

Didn't the Amiga try to offload various types of processing onto a variety of dedicated units in its desktop hardware? Or am I smoking something?

Reply Parent Score: 2

kovacm Member since:
2010-12-16

Few people run applications that need that kind of raw power, and the few times it is required, a switch to multiple RISC cores over fewer CISC cores might pay dividend in the long run.

x86 CPUs have been RISC since the Pentium Pro ;)

The real speedup comes from the massive parallelism of GPUs.

Reply Parent Score: 0

viton Member since:
2005-08-09

x86 CPUs have been RISC since the Pentium Pro

That is simply wrong. It never was RISC (except for the RISC86 core), and these days most x86 instructions translate into a single micro-op anyway. There's also the purely CISC-y Atom.

Edited 2012-11-07 04:15 UTC

Reply Parent Score: 2

henderson101 Member since:
2006-05-30

The only RISC that was ever part of x86 was buried so deep in the processor core and pipelines that nobody programming the processor for general use ever noticed it. Only in intensive applications did any special kind of optimisation make much difference. The whole point of the x86/IA32 architecture was to remain backward compatible with previous generations.

Again, RISC core + CISC userspace interface != RISC processor

Edited 2012-11-07 11:06 UTC

Reply Parent Score: 3

zima Member since:
2005-07-06

it makes more sense to have more dedicated RISC cores: it's easier to stack cores on one die and the lower draw on power means the CPUs run cooler (as cooling top end multi-core monsters is always a bit of a battle).

Intel can do pretty much the same with Atoms? Actually, Intel might be furthest along WRT such many-core solutions (and the future software support), with the post-Larrabee http://en.wikipedia.org/wiki/Intel_MIC

Reply Parent Score: 2

Laurence Member since:
2007-03-26

Intel can do pretty much the same with Atoms? Actually, Intel might be furthest along WRT such many-core solutions (and the future software support), with the post-Larrabee http://en.wikipedia.org/wiki/Intel_MIC

Intel are far from alone. MIPS (I think it was) and IBM have both been doing this for years. In fact, one such joint venture between IBM and Sony sits in many people's homes: the PlayStation 3's Cell processor.

So I wouldn't say Intel are ahead of the game on this one, though they're certainly not sitting idle either.

Reply Parent Score: 2