The Mac Pro is one of the few remaining Intel Macs with no Apple Silicon replacement ready to go, even though we’re a little past the two-year deadline that CEO Tim Cook originally set for the transition in summer 2020 (and to be fair, it has been a hard-to-predict couple of years).
Bloomberg’s Mark Gurman reports that Apple continues to work on a new version of the Mac Pro, alongside other as-yet-unreplaced Intel Macs like the higher-end Mac mini and the 27-inch iMac, but that a planned “M2 Extreme” chip that would have powered the Apple Silicon Mac Pro has “likely” been canceled.
[…]Waiting for news in the face of uncertainty isn’t new to Mac Pro holdouts; it has been a constant for the last decade-plus. It has been a very long time since the Mac Pro was updated on anything close to a predictable cadence, especially if you don’t count partial refreshes like the 2012 Mac Pro tower or the addition of new GPU options to the 2019 model. And each of the last two updates—the “trash can” Mac Pro in 2013 and the reforged “cheese grater” version from 2019—has reflected a total shift in design and strategy.
At this point, I’d like Apple to decide: either commit to a consistent strategy or vision for the Mac Pro and its place in the lineup or retire it.
It sure has been a rough time for Mac Pro buyers. The reality is that desktop PCs – Apple or otherwise – just aren’t all that popular anymore compared to laptops, and that goes doubly for the very high end. Selling Mac Pros by the thousands simply doesn’t make a whole lot of sense compared to the numbers Apple’s other computers ship at.
The tower form factor remains popular among computer enthusiasts, and I definitely think the demand is there, but Apple seems content to let consumer laptops displace the Mac Pro. Apple’s latest Mac Pro workstation just wasn’t competitive: at the starting price of $6k it had rather embarrassing specs, and you’d need to spend more like $10k to get a decent one. With so many devs using Linux (including on Apple hardware), the Mac Pro is kind of a hard sell. And there’s no CUDA option, which is a big deal for AI.
As for an M1-based Mac Pro, the problem is the M1’s unified CPU/RAM/GPU. It just isn’t as scalable or customizable as discrete components, and it never will be. It’s possible they could build an M1 cluster, with several CPUs in an SMP configuration. They could try to write drivers that distribute CPU/GPU workloads across said cluster, and they could build an interconnect akin to other scalable NUMA systems. This would get around the M1’s scaling limitations. It would be an interesting option, but it goes against the M1’s design, where everything uses unified memory. Even if an SMP M1 non-unified-memory cluster were built, given that NUMA typically has to be optimized for by application developers, I question whether many software developers would actually go through the trouble of supporting it.
@Alfman, I was thinking the very same thing about clusters, and came to the conclusion that if you are heading down that architectural path, using Apple hardware just doesn’t make sense: you could do far more for far fewer dollars using pretty much any widely available alternative!
I concede that as an academic exercise, a proof of concept, or even a hobbyist project it may be worth the effort, if you could afford it, but if a workstation were being built for purpose, there’s not a chance you’d use Apple hardware!
I would have assumed they would still use discrete AMD GPUs in an ARM-based Mac Pro, unless of course they are trying to build their own.
My MacBook Air M1 (8GB/512GB) at work is revelatory. It kicks ass.
I am using an M1 Mac mini (16GB/1TB) at home, and it… destroys the competition. Granted, I am not doing hardcore gaming on it. I imagine that would show its limitations, but it is simply an amazing engineering marvel. So quiet, too. I think a Mac Pro really needs to advance the conversation. How is it better than a Mac Studio (which I desire)? What expandability will it offer? Those questions need to be answered, and I hope Apple answers them.
NathanJHill,
It certainly beats all other ARM competition on specs. I do want a good ARM system to run Linux natively; for me macOS would be a con (obviously subjective), but in the absence of widely available SystemReady products for consumers I’d say the ARM experience has been lousy for FOSS. I’d drop Qualcomm products in a heartbeat. Hypothetically, if Apple would sell M1 chips to other vendors producing SystemReady computers, it could be a win in my book.
High-end x86 desktops with discrete GPUs generally offer more horsepower than the M1 for those who need it. Commodity x86 computers offer more RAM, more performance cores, more GPU, more expandability, etc. Obviously the tradeoff is a much bigger footprint and more power consumption. But if you’re doing something particularly intense, like Blender or large builds, I’d still opt for an x86 workhorse over the M1. Consider that this M1 Pro owner became tired of waiting for renders to finish, even for the purpose of benchmarking, so the quality was lowered…
https://osong.art/benchmarking-m1-pro-with-blender-3-1a-cycles-with-metal-backend/
I rendered this scene at the original sampling of 800 in 61s on an i9-9900K with a 3080 Ti, which is 2.46X faster than his “optimized” render.
Reducing it to 100 samples on my system took 15s. That’s a full 10X faster than the M1 Pro. Unless I were married to specific macOS software, I honestly wouldn’t even consider an ARM Mac for 3D modeling & rendering until they’re able to support much faster discrete graphics.
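For what it’s worth, the speedup claims above can be checked with back-of-the-envelope arithmetic. A rough sketch (the M1 Pro’s ~150 s render time is inferred from the 2.46X figure, since the comment doesn’t state it directly):

```python
# Back-of-the-envelope check of the render-time claims above.
# Assumption: the M1 Pro's render time is inferred from the 2.46x
# figure rather than stated directly in the comment.

i9_full_render_s = 61        # 800 samples on the i9-9900K + 3080 Ti
speedup_vs_m1 = 2.46         # claimed speedup over the "optimized" M1 Pro render
m1_render_s = i9_full_render_s * speedup_vs_m1   # ~150 s inferred

i9_reduced_render_s = 15     # 100 samples on the same i9 system
overall_speedup = m1_render_s / i9_reduced_render_s

print(f"Inferred M1 Pro render time: {m1_render_s:.0f} s")
print(f"Speedup at reduced samples: {overall_speedup:.1f}x")
```

The two claims are consistent: 61 s × 2.46 ≈ 150 s for the M1 Pro render, and 150 s / 15 s gives the quoted 10X.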
This is why I think the author is wrong. Apple’s achievements with ARM are great, especially for portable laptops, but at the moment they’re inadequate for replacing high-end workstations and gaming rigs.
I don’t have one myself, but I keep hearing very mixed things on this topic. Apparently the fan spins at a constant 1700rpm even at idle, which is actually pretty high. Some people say it’s louder than expected while others say they don’t hear it.
https://forums.macrumors.com/threads/brand-new-m1-mac-mini-has-fan-on-continously.2271339/
https://www.reddit.com/r/macmini/comments/u6589q/yes_the_m1_mini_makes_some_fan_noise/
I was testing out RTX ray-tracing titles on this machine, and the noise is pretty unbearable with the fans ramping up to max. I use a fairly aggressive fan profile to lower temps, but it makes me wonder if people are actually going to play games this way. However, on another computer in a full tower, I’ve filled up all the fan slots and it remains fairly quiet even under full load, because the fans just don’t have to spin as fast to get good cooling. Of course not everyone wants a full tower. Also it seems rather wasteful for most people browsing the web and running a couple productivity apps, haha. 🙂
Alfman,
I am not sure whether you have an Nvidia card (Founders Edition) or an OEM one (EVGA, ASUS, etc.). However, many of Nvidia’s stock cooler designs are not good enough to handle their own GPUs. To save on space, and probably for aesthetics and price, they go with a smaller cooler assembly and fewer fans.
Unfortunately those cards require at least a 2.5-slot cooler with two very efficient fans, or even three. Better yet, a custom AIO liquid cooling solution. Dissipating ~400W of heat is not going to be easy.
sukru,
I haven’t been paying much attention to the 40xx generation, but historically I agree the Founders Edition cards are worse than OEM versions if you push them hard.
I find that three-fan coolers handle themselves pretty well and don’t have to ramp to the max, assuming you have a cool case. I could try measuring actual RPM numbers if that’s of interest. I might have considered a water-cooled GPU to take up less PCI slot space, which would have benefited me, but given the price and supply chain problems I didn’t feel it was a good time to go that route. I’ve been happy using AIOs for CPUs. Then again, I’ve seen 100% price inflation in the few years I’ve been doing it, which is painful.
The M1 Max in my MBP is AMAZING. It’s specifically amazing in the ways the Intel machines always sucked – it stays cool with minimal fan use (the fan only really runs when I game on it, or run a lot of Docker builds), and the battery lasts forever. The GPU is STRONG – I can run Final Fantasy XIV (using “XIV on Mac” – not running even remotely “native”) at 90+ FPS at 1440p, for example. Nothing is optimized about that pipeline, and it’s still that fast.
The M1 is AMAZING. BUT, there’s no reason to believe it would beat out a top AMD or Intel chip with a dGPU. There’s literally no reason to put an M1 in a plugged-in desktop workstation. All the benefits (other than performance, which I addressed) are benefits for a laptop or SFF PC, not for a *professional* level desktop.
Apple would, IMHO, be smarter to keep their platforms diversified. I get that there is a lot of benefit in using in-house silicon, but there’s also benefit in using commodity hardware. (Marketing and “comms” aside.)
CaptainN–,
I think we can all safely agree the M1 is a great processor, especially considering power consumption. In the context of this article specifically, though, we are talking about M1 computers replacing high-end desktops, including the x86 Mac Pro, and here we cannot ignore that current M1/M2 CPUs are a substantial downgrade in MT and GPU performance, to say nothing of being able to expand/replace/upgrade components yourself. IMHO it would be kind of insulting for Apple to drop the x86 Mac Pro and declare that M1/M2 are all that Apple professionals need (*).
* To be fair, it was the author who suggested this, not Apple themselves.
Yes. I think there are M1 users who could benefit from eGPU support today if Apple gave them the option. This does make me wonder if Apple boxed themselves in with unified memory, such that software written for the M1 is no longer compatible with discrete GPUs. It would be kind of a bummer to find that one’s software has become tightly coupled to a proprietary iGPU.
NathanJHill,
Depending on the workload, I am sure the M1 would be a very good choice. For memory-light tasks using power-efficient components, it could be ideal.
However, even the i7 versions of recent Intel chips run circles around it:
https://www.cpu-monkey.com/en/compare_cpu-intel_core_i7_13700k-vs-apple_m1
Yes, they are engineered very well. Task focused, and everything is integrated. Yet if you need more RAM, add-on PCIe cards (10GbE?), or more storage (non-soldered NVMe, stacks of spinning disks), unfortunately it will not scale up. Unless, of course, there’s another major design revision.
I suspect more people would have bought the Mac Pro, or the PowerMac before it, if it had been priced for normal computer users rather than for Hollywood studios that might previously have bought SGI. A lot of people wanted an expandable Mac in a normal-sized case, tower or otherwise, such as PC users have had access to for decades, but Apple insisted on selling us laptops, all-in-one iMacs, or Minis. What we wanted was the freedom to upgrade things like the hard drives or memory; Apple were more interested in selling fancy boxes than in offering power or space for the money.
It would have sold more, but at a lower margin. Which is not attractive for Apple.
Even on the Windows/Linux side, workstations are not cheap and tend to follow similar premium pricing to the Mac Pro. These segments are not intended for the home.
Professional/organizational purchases are less price sensitive than consumer/home ones, so workstation vendors tend to price them with as much margin as they can get away with. For a business, a $10,000 hardware investment may lead to a significantly higher return in terms of the revenue it generates, whereas for a personal/home purchase a $10K computer makes no sense whatsoever, since it’s not generating any income (unless someone is doing trading/crypto/etc. with it, but by then you’re dealing with a business purchase as well).
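The business-purchase logic can be sketched as a simple payback calculation. All the figures here are illustrative assumptions, not data from the thread:

```python
# Hypothetical payback calculation for a workstation purchase.
# Every number below is an assumption for the sake of illustration.

hardware_cost = 10_000        # workstation price in dollars
hours_saved_per_day = 0.5     # e.g. less waiting on renders/builds
billable_rate = 100           # dollars per hour of the employee's time
working_days_per_year = 250

annual_saving = hours_saved_per_day * billable_rate * working_days_per_year
payback_years = hardware_cost / annual_saving

print(f"Annual saving: ${annual_saving:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
```

Under these made-up numbers the machine pays for itself in well under a year, which is why a $10K price tag that looks absurd at home can be routine for a business.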
They should just keep shipping all of the above CPUs, treat them as commodities, and end the silliness about architecture purity. Make Rosetta 2 go in the other direction, or just add an additional ARM chip for ARM apps. Heck, they could even work with AMD to get a custom package with both x64 and ARM on one package. For a high-power, always-plugged-in machine, there’s really no reason they need to go ARM (and the “need” to go ARM outside of that is questionable, too). Commodity hardware is the way to go.
For Apple it makes sense to go all ARM for the entire product line because it allows them to amortize the design and production cost of their SoCs across all their products. I.e., they get to reuse the same core/GPU/NPU/etc. building blocks from mobile all the way to desktop. Especially since they are a vertical integrator, they want to keep as much of the SoC design in house as possible.
Working with AMD on a custom part would end up being just as costly as doing their own high-end ARM part anyway.
The Mac Pro isn’t a personal computer; it’s a workstation. It’s mostly designed for people who do video editing with Final Cut. Their choice of GPU and the proprietary Afterburner card, dedicated to faster video encoding, show this.
They should have stuck with this formula. Right now they could update the Intel version and wait a bit longer before releasing the ARM one.
Now, does Apple want to remain in the workstation market? They already left the server one….
IMHO “workstation” means server-grade fault tolerance/reliability/redundancy and server-grade extensibility (e.g. the ability to have 2 TiB of RAM with 4 fibre channel cards); but designed for a local user (with graphics, keyboard, etc) instead of the “barely enough graphics to allow remote login” you’d get with a server.
It’s the kind of system where (e.g.) if a RAM chip fails, you don’t know until a DevOps person arrives at 11:50 AM and tells you they’ll be finished replacing the faulty hardware before you get back from your lunch break.
Chances are there is simply little demand for the Mac Pro at this point, since most of the use cases it was intended for (video editing/content creation) are met by the Mac Studio at a much better price/performance ratio.
There are a few corner cases where the Mac Pro still has a clear edge over the Studio: large-RAM and GPU-compute workloads. But the demand may just not be large enough to justify a very expensive SKU of Apple Silicon.
So I assume the whole project may have a hard time getting prioritized in their pipeline.
Mac Pro prices are business prices, and businesses don’t wait around 5-10 years to make a purchase. Chances are some $1000+ GPU with hardware-accelerated video encoding is faster than an M-series CPU for what they need. Then there’s the fact that the current Xeon Mac Pro only made sense for a few video editing use cases. The Mac Pro is pretty much a shiny trophy to display on your desk at this point, and it mainly displays the foolishness of the buyer.
Maybe there is a benefit to x86, namely that it has a range of CPUs covering any use case, including the Xeon workstation CPUs that the Mac Pro uses, plus a rich ecosystem of insanely powerful GPUs (and CPUs with enough PCIe lanes to drive them). Somehow I don’t see the M2 scaling up to cover what the Mac Pro is today. Which is why I am really interested to see what Apple will replace the Mac Pro with, if they replace it as they’ve promised. But unfortunately, they taunt us about a replacement at every WWDC and then don’t deliver at the next one.