Linked by Thom Holwerda on Wed 13th Sep 2017 16:40 UTC
Apple

With the iPhone X revealed, we really have to start talking about its processor and SoC - the A11 Bionic. It's a six-core chip with two high-power cores, four low-power cores, and this year, for the first time, includes an Apple-designed custom GPU. It also has what Apple calls a Neural Engine, designed to speed up tasks such as face recognition.

Apple already had a sizeable performance lead over competing chips from Qualcomm (what Android phones use) in single-core performance, and the A11 blasts past those in multicore performance, as well. Moreover, the A11 also performs better than quite a number of recent desktop Intel chips from the Core i5 and i7 range, which is a big deal.

For quite a few people it's really hard to grasp just how powerful these chips are - and to a certain extent, it feels like much of that power is wasted in an iPhone, which is mostly doing relatively mundane tasks anyway. Now that Apple is also building its own GPUs, it's not a stretch to imagine a number of mobile GPU makers feeling a bit... uneasy.

At some point, these Apple Ax chips will find their way to something more sizable than phones and tablets.

Permalink for comment 648859
ahferroin7
Member since:
2015-10-30

"With either wasm or electron, it's trivial to use the full extent of a platform's power"


Haha, no.

You can't even use the full extent of a platform's power in Java (at least, if you plan on compiling to JAR files instead of native executables), using it in Electron is a complete joke, and anyone trying to say otherwise is either being paid to do so, or has no idea what that many levels of indirection do to performance. Electron is why VS Code and Discord's desktop app get such crap performance compared to natively compiled tools. The same has conventionally applied to things built on Adobe AIR and Mozilla's XULRunner.

WebAssembly makes things better, but it's still limited in performance because of the translation overhead.

Portability is the enemy of performance. Portable builds of software written in Java, or a CIL language, or even things like WebAssembly, ship as bytecode that has to be interpreted or JIT-compiled at runtime, not as native machine code. That hurts performance. In fact, the only case I've ever seen where that can perform better is Lua, and that only applies in very specific situations where the algorithm being used happens to be more efficiently implementable in Lua's interpreter runtime than in whatever native runtime you're comparing it to.
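As a rough, hedged illustration of the interpreter-overhead point (not a claim about any specific runtime): the snippet below times a loop executed by the Python bytecode interpreter against the same reduction done by `sum()`, which CPython implements in native C. The function name `py_sum` and the data sizes are my own choices for the sketch.

```python
import timeit

data = list(range(1_000_000))

def py_sum(xs):
    # Same reduction as the builtin, but every iteration runs
    # through the bytecode interpreter's dispatch loop.
    total = 0
    for x in xs:
        total += x
    return total

# Both compute the identical result; only the execution model differs.
t_interp = timeit.timeit(lambda: py_sum(data), number=5)
t_native = timeit.timeit(lambda: sum(data), number=5)

print(f"interpreted loop: {t_interp:.3f}s  native builtin: {t_native:.3f}s")
```

On a typical CPython build the native-code path wins by a large factor, which is the general shape of the overhead being described - though real JITs (HotSpot, wasm engines) close much of that gap, which is why the comment hedges with "limited", not "unusable".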

By the same virtue, iOS is so efficient because it targets a single platform. macOS does as well. Conversely, Android supports at least 3 CPU architectures, not accounting for varying bit width (x86, MIPS, and ARM), and it runs on a kernel that supports a lot more (SPARC, POWER, SH, Alpha, HPPA, M68k, S/390, RISC-V, and more than a dozen architectures most people have never heard of).

Note that I'm not saying that portability is bad, just that it's not always the best answer (especially if it's not a client node you're running on).

Reply Parent Score: 4