Today, Apple announced its first three ARM-based Macs – the MacBook Air, the MacBook Pro 13″, and the Mac Mini. They are all equipped with the Apple M1 system-on-a-chip, which was, of course, the main focus of the unveiling. Apple made a lot of bold claims about its first ARM-based Mac chip, but sadly refused to show any real-world use cases, benchmarks, or other verifiable data, making it very hard to assess the company’s lofty claims about performance and battery life.
That being said, AnandTech has done some deep diving into the A14, found in the latest iPhones and iPad Air, and it would seem they boast excellent performance figures.
What we do know is that all of these machines – including the MacBook Pro, which definitely has Pro in its name – cap out at a mere 16GB of RAM, which seems paltry, especially since that 16GB is shared with Apple’s integrated GPU. This RAM sits on the M1 package itself, and since there’s no DIMM slot on any of the new machines, it cannot be expanded. On top of that, the base models of all of these machines ship with only 8GB of RAM, which should be a crime.
Just like on the latest iPhones, the two laptop models also do not ship with high-refresh rate displays, so you’re stuck with a paltry 60Hz display – it’s not even available as an option. Much like the 8GB of RAM, shipping such expensive machines with mere 60Hz displays is inexcusable.
The MacBook Air is fanless, but the MacBook Pro and Mac Mini are not. This most likely allows the latter two models to sustain their peak performance for longer than the MacBook Air can, which makes sense considering their price points and marketing.
The new machines will ship a week from today.
It’s disappointing that the Mini won’t have upgradeable RAM – again! That said, as with iOS versus Android, we may find out in the coming weeks that macOS on ARM manages memory so differently that a direct comparison between these new machines and those that came before them isn’t useful.
Let’s see how the real-world testing turns out.
I think that the fact that the old Macs were just PC compatibles with some extra hardware, while the new ones are completely new designs from the ground up, means you can’t directly compare Intel Macs with ARM Macs.
On the other hand, they present exactly the same APIs and user interface, so running the same software (just compiled for different CPU architectures) to see how it performs should be trivial – there are many sensible ways to compare apples and oranges.
dnebdal,
I agree it shouldn’t be too difficult given a portable code base. Although there may be some more variables involved that open up different interpretations of the data. Differences in compilers, operating systems, drivers, etc. may cause performance differences that aren’t necessarily a result of the CPU itself.
They kept the high-end Intel-based Mini for those who need it, and you can still bump it up to 64GB.
Mine has 32GB and I don’t expect to need any more for years as a media centre.
Considering how long it will be before I buy another computer, I’m more eagerly awaiting Big Sur. Or, more than that, a big review of Big Sur
I’m afraid the phone and laptop markets have all become a bit incremental for me; they get a bit faster, a bit thriftier and a bit lighter, but the one thing they never seem to get is significantly more reliable. They always operate with the performance dialled in at a level that delivers the very fringe of acceptability.
Whatever Apple, MS or Google offer is now far, far away from the areas of technology that I’m actively interested in. Their products are now mostly completely utilitarian and of little new interest, just a better hammer.
While I’m not an advocate of change for the sake of change, I do think they need some more innovative channel for those who are genuinely interested in innovation, other than painting a faster version of the same as innovative. Most of the remainder of the public, just want things to work as they are supposed to work.
cpcf,
Me too. As the market has matured, there’s less and less to differentiate one slab of glass from another other than incremental specs that, to be honest, most of us don’t need in a mobile device.
What I really want is a long-lasting, repairable mobile that I don’t have to replace regularly due to stupid engineering goals like software obsolescence and vendor lock-in that create upgrade incentives that harm the environment. E-waste is a huge problem that is placing a large burden on our children and the planet. Long lifespans, end-user-replaceable software, and reusable components are by far the best things we can do to minimize our environmental impact, yet greedy-ass corporations like Apple deliberately make repairs more difficult with an eye on raking in the next trillion dollars. We need to get back to long product lifespans for sustainability’s sake, but Wall Street has cursed us to ever shorter lifecycles and greed over all else 🙁
Yes, I’m fully on board with this perspective. It’s more likely that in the future I’ll have a phone I replace once a week than one I recharge once a week!
Maybe they’ll spit them out of vending machines and you will use and then discard them, never to be recharged, little disposable clones that load your life from the corporate-controlled cloud every time you swipe to buy a fresh one!
RAM size, upgradability and design doesn’t bother me, other things do. If I may be so bold, I formulated it here: http://vivapowerpc.eu/20201111-0850_Apple_is_back_on_RISC
Unfortunately there is a big error in that article: there aren’t nine binary subtypes for ARM, but ten, because it forgets ARM64. And the point is that the last one is the only one that Apple ARM devices will support.
Even if there were just one ARM64/AArch64 subtype now, I don’t believe it will stay that way. Maybe across Apple products, but there are other manufacturers, who won’t have access to Apple technology and will make their own extensions. The 32-bit ARM mess is evidence.
I take offence at the article somehow claiming ARM is still RISC these days.
RISC died long ago, beyond the real processor underneath many CPUs that you don’t get access to.
Maybe there is a real RISC CPU still out there I am unaware of? I doubt it though. In-betweens at most. Not that the distinction ever made much sense.
What about ARM (or SPARC or Power, for that matter) makes them disqualified as RISC designs?
Different people define “RISC” in different ways.
For some people, “RISC” means “load/store architecture” (e.g. you can’t do something like “add reg1, [memory]” and have to load/store separately). For these people, ARM is still RISC.
For some people, “RISC” means “no translation of instructions into micro-ops”. For these people, modern ARM is definitely not RISC.
For some people, “RISC” means “reduced number of instructions” (fewer simpler instructions rather than lots of more complex instructions). For these people, modern ARM is definitely not RISC.
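A toy sketch may help make the load/store distinction concrete. The machine below is entirely hypothetical – it models neither real x86 nor real ARM encodings – but it shows the difference: a register-memory architecture can add straight from memory in one instruction, while a load/store architecture needs an explicit load first.

```python
# Toy model of the load/store distinction. The "machine" here is
# hypothetical and for illustration only - it is neither real x86
# nor real ARM.

def add_register_memory(mem, regs):
    # Register-memory style (x86-like): one instruction can read
    # memory directly, e.g. "add r1, [0x10]".
    regs["r1"] += mem[0x10]
    return regs

def add_load_store(mem, regs):
    # Load/store style (ARM-like): memory is only touched by explicit
    # loads and stores, so the same addition takes two instructions.
    regs["r2"] = mem[0x10]       # ldr r2, [0x10]
    regs["r1"] += regs["r2"]     # add r1, r1, r2
    return regs

if __name__ == "__main__":
    mem = {0x10: 5}
    print(add_register_memory(mem, {"r1": 2})["r1"])      # 7
    print(add_load_store(mem, {"r1": 2, "r2": 0})["r1"])  # 7
```

Both end up with the same result; the difference is purely in how many instructions are allowed to touch memory, which is the criterion under which ARM still counts as RISC.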
16GB is pretty much a no-go for me for any serious work today – with all the containers, VMs and Java. I just recently upgraded both my home and work workstations to 64GB, and for now I’m happy with it.
But what will you run in your VM? Raspbian, RISC OS, Android perhaps? Windows on ARM?
None of them needs a lot of RAM afaik.
For work purposes, it’s typically the same OS in each, running different applications isolated from each other apart from some network ports. It makes it easy to automatically do clean (re-)installs on top of specific library versions without worrying about affecting anything else. Kind of the same reasons people use docker.
NaGERST,
16GB is a good amount for a mid range computer IMHO. However all the computers I use for development have been 32GB/64GB for years. It helps for VMs & big builds. I don’t mess around when it comes to RAM anymore, it’s just not worth the small cost savings when you have large workloads. Obviously not everyone’s going to need that much, which is fine, but to be perfectly candid 16GB doesn’t look good as a maximum capacity.
Other than that, I would be very interested in trying out the new ARM CPU. The A13 had superb single-threaded performance but lousy multithreaded performance (compared to x86 at the time). This new M1 chip adds two more fast cores, which obviously should help with multithreading. To be honest, I wish they were all fast cores… but I understand Apple’s reason for compromising between slow and fast cores. It’s just unclear how well these are going to perform against the latest-gen x86 components, particularly AMD, since AMD has the top spot now. Apple’s marketing material is so vague that it’s virtually useless for comparison. As usual I don’t give marketing departments the benefit of the doubt; we won’t know how well these perform until products are on the shelves and available to 3rd-party reviewers.
I very much hope that these are NOT boot-locked and will be able to run Linux. I worry there’s going to be a lot of iOS-style restrictions and vendor locking. It saddens me. Apple had built a fairly nice Unix platform with OS X, only to have it ruined by corporate-greed-based restrictions making macOS worse over the years… ugh. I am very concerned that incremental restrictions keep chipping away at owner freedoms. Some people are willing to look the other way, but man, I have to call out that crap for what it is regardless of who’s trying to shovel it down our throats. Anyways, we’ll see where things land on restrictions when the newest Mac products are out.
I’m also using devices with more and more RAM; 32GB is the bare minimum now.
Given that my daily tasks are mostly developing or debugging IIoT/IoT-type sensors and devices – gadgets designed with thrift and cost in mind, using less and less energy and fewer resources with each edition – the irony is not lost on me that I need more and more RAM to build and manage these few kilobytes efficiently.
Something feels wrong, the tools have become bloated, like I’m using the might of the military to build a better dog kennel!
Has brute force supplanted beauty?
Speed of development has overtaken optimisation. Because everyone has access to a plethora of RAM, there’s little incentive to optimise every megabyte an application uses.
Java probably isn’t the M1’s forte just yet either.
We’re talking about the MacBook Air and the entry-level MacBook Pro, both mainly used by students, and the entry-level Mac mini, mainly used as a cheap desktop computer or a media centre.
We need to wait for the four-port MacBook Pro 13, the MacBook Pro 16, the iMac, and the Mac Pro replacement before judging.
I am pretty sure these limitations will be gone by then and a lot of the software will have caught up too.
I am a software developer in a big company where 90% of laptops are MacBook Pros; the big “pro” of the x86 Mac is the easy portability of code from Linux/Unix environments, which makes it an ideal Java/C++ development environment.
An ARM-based Mac is actually of very little use to me, with a huge performance gap due to x86-to-ARM runtime translation.
I’m curious to see what our IT will decide, but surely an ARM MacBook Pro right now is a NO GO; Apple will lose a big customer.
The battery life does look appealing, for those times when I need a laptop to take with me to conferences, etc. and I’m not really doing my “proper” work. But, like you, an ARM chip isn’t really for me when I need to do “proper” work.
It will have its niches – e.g. if servers start going the way of ARM. And it provides a better virtualization environment for developers targeting mobile devices.
But I suspect the same as you – loyal Mac users, especially those who primarily use Apple’s own software, will stick with it. People who bought Macs because they had x86 chips and were suitable for all sorts of virtualization as a result won’t. While it will probably do OK in its own right, I think that in terms of the broader installed base, this is going to cost Apple market share.
In fact I am surprised. I understand the new MacBook Air – finally with a powerful video card, compatibility with iOS apps, and completely silent – is a winner.
But I supposed Apple would not use ARM in the Pro line, at least for now. Apple had a different idea… but it seems not so good to me.
Or perhaps the performance of M1 is so good that…
The gap is not huge. The M1 will run x86 code faster than any Mac laptop 🙂
At least whatever can be translated statically (runtime recompiling is slow).
For Java development, why would you ever care about the arch?
But I don’t know the state of Java on ARM Macs.
Well, if the fastest and most energy-efficient laptop is not for you,
you can keep your overheating and throttling x86 legacy while it lasts 🙂
viton,
Native can be expected to do very well based on benchmarks of apple’s existing ARM products…
https://www.pcmag.com/news/how-might-apples-arm-silicon-perform-in-future-macs-we-ran-some-benchmarks
But converting code from one architecture to another is a big unknown. Historically it adds a lot of overhead, both in terms of memory and CPU. I don’t think we should be making any claims here until real world benchmarks are out.
They’re making very good inroads, but we must be careful about making performance claims before we have benchmarks in hand. Let’s assume it ends up being the fastest apple laptop, it doesn’t strictly follow it will be the fastest laptop overall since apple didn’t have the fastest laptops overall to begin with.
As an aside, I kind of wish that sites like osnews would refrain from covering the marketing fluff. I know this is futile because all media outlets want to cover the PR material, but I find it annoying to always be debating the merits of products in an information vacuum over and over again. I’ve always been more interested in fact based discussions once properly reviewed and benchmarks are out.
x86 emulation performance has been known for a while, since the day the DTK was received by developers.
GB5 results are pretty reliable.
For GB5 tests, emulated code runs just 25% slower on the A12Z.
Unlike the Snapdragon, the A12Z has a memory-model switch to deal with x86 memory ordering.
This is why I specifically said “mac laptop”.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive/
Well, M1 benchmarks are out.
https://browser.geekbench.com/v5/cpu/4671689
So this passively cooled chip has single-core performance topping the best Intel chip (Tiger Lake) and the multi-threaded performance of a mobile 8-core AMD chip (4800U).
viton,
That was in a separate paragraph, thanks for the clarification though 🙂
This is the first benchmark I’ve seen… good find! Although, to be fair, it was just posted a few hours after my comment. I for one don’t know where this benchmark came from. These laptops aren’t even shipping yet. I’ll be more comfortable once 3rd-party reviewers are performing benchmarks to independently validate the results in the field. Regardless, it does help add missing data to the discussion, thank you very much!
The single-core scores are great, as anticipated. The multi-core GB5 scores reveal that the M1 is quite slow at multithreading; I expected better, to be honest. Maybe this is to be expected with a MacBook Air, and the MacBook Pro could have better multithreading performance… Let us know if you find any more benchmarks. Do you have any clue what the RAM is clocked at? I’m also curious whether the integrated GPU will perform like a typical integrated GPU or if they’ve managed to get better performance out of it. Naturally I expect we’ll have a lot more data once these laptops actually ship.
viton,
I’m searching for results on geekbench and suddenly a lot of new benchmarks are showing up.
https://browser.geekbench.com/v5/cpu/search?q=virtualapple
The results suggest that x86 versions of Geekbench running on Apple silicon carry about a 51% single-core emulation overhead compared to native. Additionally, the data suggests that x86 emulation only supports 4 cores rather than 8? Other than that I don’t think there are any surprises here: as usual the emulation is inefficient, but useful if you have to be able to run x86 software on an ARM computer.
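For what it’s worth, here is the arithmetic behind an “overhead compared to native” figure like that 51%. The scores in the example are made up to land in the same ballpark; the real Geekbench numbers vary from run to run.

```python
# Sanity-check the emulation-overhead arithmetic. The scores below are
# hypothetical placeholders, not actual Geekbench 5 results.

def overhead_pct(native_score, emulated_score):
    """Extra cost of emulation relative to native throughput,
    computed as (native / emulated - 1) * 100."""
    return (native_score / emulated_score - 1) * 100

if __name__ == "__main__":
    native, emulated = 1700, 1126   # hypothetical single-core scores
    print(round(overhead_pct(native, emulated)))  # 51
```

Note that the direction matters: the same pair of scores reads as only a ~34% slowdown if expressed as (1 − emulated/native), so it pays to check which ratio a headline figure uses.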
Also, geekbench result for macbook pro was posted just a moment ago…
https://browser.geekbench.com/v5/cpu/4663687
Ironically, it actually scored marginally lower than the MacBook Air on both single-core and multi-core. Hmm. I guess this first generation isn’t going to be very good at multithreaded workloads. And that’s OK, but it’s why I never like taking marketing hype at face value.
This may be a VERY silly idea – but could Apple make a computer that had an M1 *and* x86? Of course, it would probably have to be *at least* 4-AND-A-HALF WHOLE MILLIMETERS thick because of the increased cooling requirements.
They lost the “pro” focus several years ago.
My work laptop was a MBP too. However in their type-C refresh, they dropped many useful ports, reduced battery size, and replaced the keyboard with what is essentially a gimmick.
I switched platforms and did not look back. Okay, I admit, I look back every year or so to see whether they fixed the issues. But they seem to be going in the wrong direction.
I assume these are overpriced to avoid cannibalizing existing x86 Mac inventories. From what I’ve heard, the DTK hardware (basically an iPad Pro in a Mac Mini case, using the A12Z CPU) is about half as fast as a current i7 Mac Mini. Of course, the CPU inside is probably 1/10th the price and using 1/10th the power so it’s a win. Lots of benchmarks will show up as soon as these hit the streets.
Shame about the anemic base RAM and storage though.
It’ll be interesting when they release some actual Pro hardware, and software catches up.
I have been considering the pricing as well – before the reveal, quite a few people were saying that ARM Macs would be cheaper. We should have remembered that Tim Cook’s Apple never feels it has to make an effort to get a sale. They assume their customers will buy version 1 of this new platform regardless of any disadvantages, like the loss of Boot Camp. Many will, too.
The ARM-powered Mac mini is cheaper than the Intel one was, so yes, those people were right.
I don’t know a single person who uses Boot Camp, but the VMware users are indeed concerned. I am using Citrix Workspace on my Mac for my Windows “needs”. It’s sufficient for at least 80% of use cases.
PhilB,
At Linux user group meetings, quite a few people had MacBooks, and I think all of them were running Linux on bare metal. I don’t know whether they used Linux full time, but clearly they did use it sometimes, and the Mac hardware allowed them to do it. I had a coworker who was running Ubuntu on a MBP for work.
I’ve got a question for you guys who imply that Linux will not work on these new ARM Macs. I understand there’s no Boot Camp, which is frustrating, but this has never stopped Linux from supporting devices in the past. Has there been any word or evidence that Apple will be actively blocking Linux? Or is it just a case of not supporting it out of the box, while still leaving the option for the Linux community to add support on its own?
I re-read the comments and it seems you guys were only talking about Windows specifically, but if you’ve got any information on whether Linux will run, I’d like to know.
I certainly wouldn’t buy an ARM machine to run a Windows OS or software; it wouldn’t make much sense. Even with the ARM versions of Windows you’d incur too much emulation overhead. But some of us are interested in running Linux on ARM hardware, which is why I’m asking whether new MBP owners will be blocked from running Linux.
The problem for Linux support is device drivers. On x86 platforms the support is more ubiquitous as Apple was using regular off-the-shelf CPUs, GPUs, wifi chips, LTE chips, standard SATA/PCIE interfaces et cetera. Now the “Apple Silicon” integrates everything into a single proprietary chip.
It is basically up to Apple to release some sort of detailed hardware specification in order for the community to even have the slightest chance at scraping up some sort of basic support for the SoC – with abysmal performance and battery life, of course.
Though, upon the initial announcement, Apple did say something about the possibility of running Linux on top of some sort of (light?) emulation layer, probably to overcome the issues related to the proprietary SoC design with zero native software support. I hope that support will be generic and allow us to run any Linux distro out there, because IIRC the initial talk only mentioned running Ubuntu…
Also, there will be no standard UEFI/BIOS on these ARM Macs, therefore it will be virtually impossible to use any non-Apple-sanctioned bootloader, making native Linux a pipe dream anyway.
sj87,
Obviously things would be far better if they were fully documented, but the lack of documentation doesn’t mark the end of the road. If it did, then Linux wouldn’t support anywhere close to the amount of hardware that it does. Drivers can be effectively reverse engineered even without documentation. Linux devs often have to do this when there is a lack of manufacturer cooperation. So while it could take some time, I believe the Linux community would be both willing and able to do it, especially with computers as popular as Apple’s.
Are you referring to VirtualBox-style system emulation or running on raw metal? I’m talking about running on raw metal.
It isn’t a pipe dream assuming they haven’t built restrictions that block owners from installing their own changes. If they do impose restrictions, then a jailbreak would be necessary. It tends to be a pain in the butt and a jailbreak may or may not happen…
The other article “Booting a macOS Apple Silicon kernel in QEMU” already provides some clues as to how the existing kernel works and how it might be changed to chainload other operating systems. It’s certainly feasible IMHO.
I am not talking about reverse engineering something trivial akin to printer drivers or webcam drivers, jesus christ.
We can look at Nouveau, the open-source driver for Nvidia GPUs, for comparison. For years they did not even have proper power management. Their progress WRT new hardware is also problematic due to how Nvidia has been hampering them with its firmware policies as of late.
Drivers for GPUs and LTE modems, at the very least, are super complex to implement by pure virtue of reverse engineering, and will never reach a level that would justify paying those Apple premium prices for a device that then performs like a $200 My Little Pony laptop.
sj87,
There’s no need for that tone, I’m just pointing out the fact that it’s possible for the linux community if they’re so determined.
I agree the full GPU is the toughest, because you’re not merely following a well-defined command protocol as you would with storage, networking, modems, etc. A GPU is a fully programmable processor unto itself, where you’re actually sending programming instructions (kernels) to the GPU for execution, which means you need a special compiler and all that. So we can agree that doing GPGPU with 3D acceleration will be very hard without proper specs. On the other hand, I’m positive they could reverse engineer the bootloader and kernel just to get raw framebuffer access, which is a great start and for some may be all they really need. Yes, ideally full GPU acceleration would be there, but realistically a lot of Linux applications don’t require or even use it at all.
Another thing to keep in mind is that we don’t know how well these Macs with shared-memory iGPUs are going to perform at graphical tasks. Until we have benchmarks in hand, it remains unclear whether these ARM laptops will be as suitable as their x86 counterparts for demanding GPU tasks in the first place.
Anyways, most other peripherals don’t work this way, and it should be relatively straightforward to reverse engineer them using conventional means. It isn’t rocket science for those with reverse engineering experience.
For everyone complaining about the RAM limit on these, consider that this is a limitation of the packaging and board design, not necessarily Apple being too frugal with RAM. For example, the Raspberry Pi wasn’t able to break the 1GB RAM barrier until they moved to a completely new chip with the Pi 4 and separated RAM into its own package. I wouldn’t be surprised if Apple is working on 32GB, 64GB, and 128GB designs for future “Pro” Macs like the iMac Pro and the Mac Pro when they bring the M1 to those devices.
I was thinking the same thing about ARM and memory, that it’s probably a limitation there in some way. But then I remembered what an Apple fan told me when I said I wanted a model with 32GB of RAM: basically that Apple has always, throughout its entire history, charged a whole lot more than it should for RAM upgrades. Like they don’t understand why you’d need more or something…
As a side rant, when people reviewing phones say they have 32GB of memory when they mean storage, it makes me want to stab the reviewer…
leech,
What’s the difference?
You ever meet people who refer to the tower as “the hard drive”?
Haha
Even better was in the late 80s/early 90s when many people still had computers without hard drives; they referred to the computer as the “modem” because the phone line plugged into a slot on the back.
Yeah, I’m waiting for the 2nd-gen pro parts to see how this develops, and I’m guessing the M1 is a repurposed mobile chip, which is why some things are odd.
I can’t imagine trying to get any sort of yields with a processor with 128GB of RAM on die. How big would that thing need to be anyway? XD
RAM on-die??? That was unexpected and wasn’t really clear from the keynote. I guess if it makes things more performant, OK, but has this architecture even been seen anywhere before now?
It isn’t clear at all, and I wonder if it’s PoP (RAM stacked on/under SoC) or if the SoC and RAM are discrete but on a common module that is shared between all three form factors, rather than living further away on the system board like most modern ultrabooks with soldered RAM.
Apple does refer to it as “Unified Memory” so that could mean it’s deeply tied to the SoC, or it could be marketing gibberish.
Unified Memory Architecture is not about the placement of memory on SoC.
You’re right, not sure where I was going with that.
With that said, after seeing a teardown of the new Mac mini, it’s obvious the RAM is indeed on-package and not on-die like some assumed (archive link because the host is down):
https://web.archive.org/web/20201118135839/https://egpu.io/forums/desktop-computing/teardown-late-2020-mac-mini-apple-silicon-m1-thunderbolt-4-usb4-pcie-4/
It is for reasons like those mentioned in the specs and Thom’s commentary that proprietary “workstations” are on the decline. Now come on… who IS satisfied with the shipped or base model of anything? I guess Apple answered that with an impelled exclamation point. On top of that, with their price points, I guess they took the Jesse James approach: “Pay up, sucker!”
There have been 2 main reasons why I didn’t choose Mac for my last purchase.
1. I personally think Apple hardware is overpriced. It used to be easy to justify the extra cost, as I could develop using open web standards, create/run containers to deploy to the cloud, and still have access to decent graphics tools, all on the same system. Time has now moved on, and Apple’s focus on iOS has given Microsoft the chance to not only catch up in this area but also overtake it (a Windows-native version of Wayland is being built into WSL).
2. There is no hardware upgrade path. I have to purchase all the RAM and storage I need for the lifetime of the machine up front, as there is no option for a mid-life upgrade. The cynic in me says this was a choice, not a constraint.
To that I now have to add 2 more:
3. Maxing out at 16GB is shocking for a machine that’s supposed to be targeting Pro level users, especially when it’s shared with the graphics.
4. I have to cede control of my upgrade cycle to Apple. When they decide not to support the hardware, I would be left with an expensive door stop. Apple’s firmware has been getting increasingly hostile to anything not macOS for years. Dream on if you think this will be an open system.
I was sad to leave the Mac behind, but I haven’t come across anything that makes me regret it either.
3. Apple has only updated the base 13” MBP, with the same available specs as the Intel machines they’re replacing. You can expect at least 32GB and presumably more cores in the higher-end 13” update expected next year. As such, there’s nothing shocking here.
Good point well made. Intel promising cooler chips and not delivering has mucked things up a bit. Trying to fit a hot chip into a smaller package and still stay close(ish) to the promised power and TDP envelope, has meant leaving no corner left uncut.
j-mac,
It makes me wonder why Apple didn’t/doesn’t offer AMD CPUs. I think many customers would be interested in buying AMD-powered x86 laptops. Maybe it’s possible that Apple underestimated AMD.
Understandable. When Apple moved to Intel, AMD was dead in the water. The company was losing money hand over fist, and its chip lineup was a generation or two behind Intel in terms of performance and heat (I’m being charitable here).
It wasn’t until the Zen cores came into play around 2017 that AMD really turned things around. A truly amazing recovery, aided in no small part by Intel. However, by that stage Apple was already sick of being constantly let down by third-party suppliers over-promising and under-delivering, and had already sunk a lot of money into taking full control of the Mac’s future.
I admire Apple for having the guts to take the business risk, and for their technical achievement, but I’m really disappointed that, for all there is to admire, the business model behind it is such a massive turn-off for me. I’m not anti-Apple. I love my Apple TV and my iPad, and I’m about to happily buy an iPhone 12, but when I buy a PC I have completely different expectations of what that platform is, and a more powerful iPad with a keyboard isn’t it.