Apple’s M2 Ultra-powered Mac Pro is the final step in its Apple Silicon transition. But without GPU support or meaningful expansion, is it worth nearly double the price of a comparable Mac Studio?
It really seems like high-end computing is simply no longer possible on the Mac. The Mac Pro is a joke, the memory limits on the M2 chips make them useless for high-end work, there aren’t enough PCIe lanes, the integrated GPUs are a joke compared to offerings from AMD and NVIDIA, and high-end x86 processors completely obliterate the M2 chips.
At least ARM Macs use less power, so there’s that. But if you have to wait longer for tasks to finish – or can’t perform your tasks at all – does that really matter on a stationary, high-end workstation?
The PCIe expansion slots in the Mac Pro are not meant for GPUs but for special-purpose cards, mainly I/O, needed in video production. It targets a rather narrow audience that would otherwise be perfectly happy with a Mac Studio but needs one card or another to connect to other equipment in their production flow.
The price reflects both the comparatively low number of machines that will be produced and the targeted customers, who probably don’t care much about a couple thousand dollars for a special tool they (think they) need.
The M3 is delayed because of difficulties at TSMC, so we will probably only see an M3 Ultra option for the Mac Pro in 2024.
AMD or NVIDIA GPUs will not come back to the Apple ecosystem any time soon – if ever.
cybergorf,
it’s still a regression though. The macpro’s “bread and butter” used to be professionals who did need more GPU horsepower. To them it must be disappointing that discrete GPUs are no longer supported. Even eGPUs have been taken off the table. I bet many “old school” macpro users would still benefit from expandable GPUs.
I do hope the m3 delivers stronger gains than the m2 did, for the sake of progress. But I suspect apple engineers are experiencing the scalability problems we predicted years ago from having memory and cpu/gpu/compute on the same physical chip. Now that all these resources are under the same thermal umbrella, it’s harder to increase the performance of one subsystem without penalizing another. This is borne out by m1/m2 applications and games experiencing bottlenecks when running CPU and GPU simultaneously. Sure, some say the macpro is not built for games, but I would suggest that the need to use CPU and GPU at the same time isn’t really limited to games. Discrete GPUs offer more compute performance, an easy upgrade path, and can even be combined to double or triple performance in applications that support it, like blender.
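The shared-envelope argument can be sketched with a toy model. Every wattage below is a made-up, illustrative number, not Apple’s actual power limit; the point is only the shape of the trade-off when two blocks share one budget:

```python
# Illustrative model of a shared package power budget: when the CPU
# and GPU both run flat out, they must split one thermal envelope.
# All numbers are hypothetical placeholders.

PACKAGE_CAP_W = 90.0   # hypothetical total package power limit
CPU_ALONE_W   = 60.0   # hypothetical CPU power when running alone
GPU_ALONE_W   = 60.0   # hypothetical GPU power when running alone

def combined_scale(cap, cpu_w, gpu_w):
    """Fraction of its standalone power each block keeps when both run."""
    demand = cpu_w + gpu_w
    return 1.0 if demand <= cap else cap / demand

scale = combined_scale(PACKAGE_CAP_W, CPU_ALONE_W, GPU_ALONE_W)
print(f"each block throttled to {scale:.0%} of its solo power")  # 75%
```

With these placeholder numbers, both blocks lose a quarter of their solo power the moment they run together, which is the kind of contention a discrete GPU with its own power and cooling avoids.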
I consider the m1/m2’s biggest advantage to be energy efficiency. But if it were put to a vote and apple professionals were given a direct choice between efficiency and faster render times, I wonder how many would vote for more efficient CPUs knowing they’d have to wait longer?
How’s that Kool-Aid taste? Apple has left the high end to the big boys. It’s obvious to anyone.
I don’t have one and I am not going to buy one. I am just pointing out the obvious:
Apple did not build the new Mac Pro for more GPU power but for more connectivity. Hence it is for customers that would otherwise be satisfied with a Mac Studio but need these PCI-e slots.
I am sure Apple did some market research and found there is some money to make in exactly this niche.
Terms like “high end” or “professionals” are relative criteria, not absolutes.
Nobody who, for whatever reason, needs the latest GPU from AMD or NVIDIA (including optimized drivers and application support) is buying a Mac Pro. And not just because of the latest iteration – this was already the case with the older model.
From Apple’s point of view, offering the fastest GPU or the most RAM is a game they can’t win anyway. Trying to do so would cost them more than the profit such a product would generate.
cybergorf,
I doubt it was apple’s intention going into the m2 transition that the macpro would become an underwhelming system. They went from specs and upgrade potential that most of us could be envious of to “meh, there’s not much here”. I’m pretty sure that wasn’t part of the plan, but rather a byproduct of the m2 not being ready for prime time. I still believe apple has every intention of pursuing the high-end market, but they need more time and work to deliver a more compelling product.
I am sure their agenda involved the m3, and the Mac Pro would have come as the first computer equipped with it. As the delays in the roadmap became visible, they had to adjust the strategy a little. The choice was either to wait with the new Mac Pro for the new CPU to materialize, or to bring it early with the same CPU as the Studio. The latter path was taken.
“Underwhelming” is again very subjective, as it only refers to expectations. Are we sure our expectations, or the testers’, are the same as those of the actual target group?
The alternative would probably not have been a better Mac Pro but no Mac Pro at all – at least not in 2023.
We can probably assume there is some infighting within Apple, and the “Pro Team” needed some product to justify its existence and not be dissolved altogether. Waiting another year might have convinced leadership of its irrelevance.
We have to acknowledge that no matter what pro machine they release, it is only a tiny fraction of Apple’s total business – even the whole Mac division is absolutely dwarfed by phones and tablets.
Whether the path they took was the “right” one will become evident next year, or by 2026 at the latest: if we see a successor with higher specs, the Pro Team won – if not, it was probably the last Mac Pro.
(and crazy collectors will pay insane amounts of money for this model in 20 years….)
I believe it is underwhelming for traditional macpro users. To them, the macpro was uncompromising on power and expandable, and by far the most interesting things to do with its PCI slots were GPU/compute and video capture. Now that apple has done away with GPUs, I think apple fans are trying to retroactively suggest it was always part of the plan for the macpro to target an increasingly narrow market, but I simply don’t buy this argument.
Apple still wants to target the creative user base who were the original macpro power users. I’m sure many still want macos. But if apple is unwilling or unable to step up its GPU game, I think it risks losing them to competitors with more powerful systems.
As you can see, I am not saying anything like “this was always the plan”.
And I am not justifying anything, but explaining the business decisions a big company like Apple is making.
The “traditional macpro users” who expect the things you refer to are long gone. Apple’s past decisions led to that.
So this offer is targeting a different audience. This was of course not the plan 10 years ago, but now they are probably just trying to make the best out of the situation as it is.
Overall it looks like the Mac Studio is quite successful, and many reviewers even put it in the category “more powerful than I need”.
The latest incarnation of the Pro is targeting those who miss PCIe on that machine – nothing more.
In the longer term, sales of the Studio and the new Pro might convince the leadership that there is still some monetary value in pursuing this avenue.
But the customers will be a new crowd, not traditional macpro users.
cybergorf,
Maybe I misread you then, but when you say things like “It is targeting a rather narrow audience” or “From Apple’s point of view, offering the fastest GPU or the most RAM is a game they can’t win anyway” and so on, it sounds like the macpro’s lack of GPU horsepower was always intended by apple. But I really don’t think that was the case. It’s just that their in-house technology didn’t scale as much or as quickly as apple was hoping.
Again, I feel this is retroactive justification for what is generally a weak showing by the latest macpro. I think fans might feel better about the shortcomings by claiming the macpro didn’t fall short – that where it landed is what apple was targeting to begin with, nothing more. However, I suspect apple themselves must be disappointed with where the macpro landed. They did not set out for it to have such shortcomings.
So are you suggesting apple has already lost their traditional macpro users and that those users won’t be interested in future apple products? This seems to be implied, but I just wanted to make sure I understood.
Yes, they lost them already with the trashcan model.
A PCIe Gen 4 x16 SSD could help remedy the low ceiling on memory.
cosmotic,
That’s true, this could/does help, although the limited write endurance of SSDs is always at the back of my mind, so I’m a bit hesitant to use one as a scratch medium.
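To put the endurance worry in numbers, here is a rough sketch. The drive capacity, TBW rating, and daily write volume are all hypothetical, chosen only to show the arithmetic:

```python
# Rough scratch-disk endurance estimate. Assumes a hypothetical
# enterprise-class NVMe drive rated for 5,000 TBW (terabytes
# written), hammered with 2 TB of scratch writes per day.

def endurance_years(tbw_rating, tb_written_per_day):
    """Years until the drive's rated write endurance is exhausted."""
    return tbw_rating / tb_written_per_day / 365.0

years = endurance_years(5000, 2)
print(f"~{years:.1f} years of rated endurance")  # ~6.8 years
```

Under those assumptions the drive outlives a typical workstation refresh cycle, though a heavier write load shortens it proportionally, which is why the hesitation is reasonable.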
What I think would be interesting is a CPU with a few gigs of on-package memory like the M1, but unlike the M1, that memory would be used as a fast cache in front of expandable DDR5 memory modules. It could get us the best of both worlds: very low latency memory on the CPU package, backed by large, expandable DDR on the motherboard.
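The two-tier idea can be sanity-checked with the standard average-memory-access-time (AMAT) formula. Every latency and the hit rate below are hypothetical placeholders, not measured figures for any real part:

```python
# Back-of-the-envelope AMAT for a two-tier memory hierarchy:
# a few GB of on-package memory acting as a cache in front of
# expandable DDR5 DIMMs. All numbers are hypothetical.

def amat_ns(hit_rate, fast_ns, slow_ns):
    """Average access time for a two-tier memory hierarchy."""
    return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns

# e.g. 95% of accesses served on-package at 50 ns, the rest
# going out to DIMMs at 120 ns:
print(f"{amat_ns(0.95, 50.0, 120.0):.1f} ns average")  # 53.5 ns
```

With a decent hit rate the average lands close to the on-package latency, which is exactly the “best of both worlds” the comment describes.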
I wonder if it will be possible for Asahi Linux to add and use an AMD GPU and get better graphics performance than macOS on the same hardware.
It should work just fine. As Linus Tech Tips showed in a recent video, the AMD 5700 GPU is detected by macOS on the new m2 ARM macpro, but there is no driver in macos for it.
I’m fairly certain they were also “detected” on the M1 as well, but that’s just the device identifying itself on the bus. Reports vary on why, but it doesn’t appear the M chips can support add-in or external GPUs. I’ve heard no memory mirroring operations, the GPU on the chip isn’t connected to the PCIe bus, etc. Asahi might eventually be able to hack together some solution because “Linux”, but I wouldn’t get optimistic.
dark2,
Given that linux already supports discrete GPUs (both amd and nvidia), I imagine it would be possible to make this happen on the M line of CPUs with enough work. Proprietary drivers are probably out of the question, but the open source drivers ought to be portable.
The problem for apple is that they went all in on the unified/shared memory model, whereas discrete GPUs obviously have their own dedicated memory. We can force a discrete GPU to operate in unified/shared memory mode, but at what cost?
I performed a test on my 2021-era system with a 3080ti and i9-11900k with DDR4 3200.
The M2 specs claim memory bandwidth of 100GB/s, which I’ll take at face value, but a discrete GPU won’t be able to use that bandwidth. It will be limited by the bandwidth and latency of PCIe (or worse, Thunderbolt for an eGPU). Apple has painted itself into a corner with unified memory, and herein lies the problem of adding discrete GPUs to increase performance of m1/m2 macs: it can break compatibility with software designed for unified memory. They would have no choice but to backtrack on unified memory to add the performance and expandability of a dedicated GPU. Of course apple could support both, but software developers who built and optimized their software around unified memory might not be interested in doing that work again to add support for discrete memory cards.
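As a rough illustration of the gap, here are transfer-time estimates for moving a working set over each link. The bandwidth figures are theoretical peaks (Apple’s quoted 100 GB/s for the base M2, ~31.5 GB/s for PCIe 4.0 x16, ~5 GB/s for a 40 Gb/s Thunderbolt link); real-world throughput is lower:

```python
# Rough transfer-time comparison for moving an 8 GB working set
# over each link. Figures are theoretical peak bandwidths.

LINKS_GBPS = {
    "unified memory (base M2 spec)": 100.0,  # Apple's quoted figure
    "PCIe 4.0 x16": 31.5,                    # ~theoretical peak
    "Thunderbolt 4 eGPU": 5.0,               # 40 Gb/s link
}

WORKING_SET_GB = 8.0

for link, gbps in LINKS_GBPS.items():
    secs = WORKING_SET_GB / gbps
    print(f"{link}: {secs:.2f} s to move {WORKING_SET_GB:.0f} GB")
```

Even at peak rates, shuttling the same data over PCIe takes roughly 3x longer than reading it in place, and an eGPU link 20x longer, which is the cost hidden behind “force a discrete GPU to operate in unified/shared memory mode”.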
If I’m wrong please correct me, but doesn’t the M GPU cluster have access to all available memory? With macOS pulling slightly under 4GB of ram at boot on my M1U, that leaves 124GB of ram for video from that point.
Now granted, in reality we’re actually looking at 80-100GB. But that’s far more than the top-line cards from the big 2.
It’s not meant for games, though last-gen AAA titles appear to play well based on YouTube videos.
A look at POM and Parallels shows Overwatch and GTA V work smoothly. COD. Doom.
But what to do with the Pro? The Studio does it all. If not GPUs, there’s not much use for that much expansion. Everything a normal pro user would use expansion for is baked in.
I mean, you can use a bunch of SSDs if that’s your thing. A file server.
Hardware video encoders, like AV1?
I mean, it’s not 1995, where the dumb motherboard is a backplane for NIC, modem, video, audio, I/O, and drive controllers.
I’d love to see someone say what the hell they put inside one of these.
So far the best I’ve seen was a guy who actually grated cheese on the cheese grater pro.
lostinlodos,
You bring up a very interesting topic! Considering that the m2 ultra is two physical m2 max chips, I would guess that there are at least two, possibly more, memory controllers, which likely means there’s an access penalty for crossing controllers. However I’ve searched and cannot find any information from apple about the NUMA configuration or what the performance impact is. Apple abhors publishing detailed specs and benchmarks, so while it’s an excellent question, I don’t have an answer.
That’s true, 128GB is a ton of memory. It’s even higher than nvidia’s top of the line quadro cards that only have 48GB…
nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/quadro-rtx-8000-us-nvidia-946977-r1-web.pdf
nvidia.com/en-us/design-visualization/rtx-a6000/
It’s worth noting that high end cards including the A6000 have nvlink, which is an insanely fast interconnect allowing GPUs to access each other’s memory to increase both capacity and power.
club386.com/nvidia-reveals-next-gen-hopper-gpu-architecture/
I am very curious how m2 ultra bandwidth compares to nvlink, but these marketing numbers only give us a rough idea. We really need side-by-side benchmarks. Still, I agree with you that the fact that the m2 ultra has this much memory for so “cheap” is darned impressive… if only it had the GPU performance to match.
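For what the marketing numbers are worth, here they are side by side. All figures are vendor peak claims, not benchmarks (Apple’s 800 GB/s memory bandwidth and 2.5 TB/s UltraFusion figures, NVIDIA’s Hopper-generation NVLink and A6000 bridge figures), so treat the ratios as rough at best:

```python
# Side-by-side of the vendors' own peak bandwidth claims (GB/s).
# These are marketing figures, not measured benchmark results.

MARKETING_GBPS = {
    "M2 Ultra memory bandwidth (Apple)": 800.0,
    "UltraFusion die interconnect (Apple)": 2500.0,
    "NVLink 4 per-GPU (NVIDIA Hopper)": 900.0,
    "RTX A6000 NVLink bridge (NVIDIA)": 112.5,
}

base = MARKETING_GBPS["M2 Ultra memory bandwidth (Apple)"]
for name, gbps in MARKETING_GBPS.items():
    print(f"{name}: {gbps:,.1f} GB/s ({gbps / base:.2f}x M2 Ultra memory)")
```

On paper the numbers are in the same league, which is exactly why side-by-side benchmarks rather than spec sheets would be needed to settle it.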
Yeah, it depends on the features those games use. Personally I have no issue disabling features and turning down detail and resolution, but obviously m1/m2 aren’t good for those who want the latest games to run with specs maxed out.
I’d say GPUs and capture cards are the big ones. My own system has more ethernet ports, but I guess an ethernet dongle wouldn’t be the end of the world. Personally I’d rather have two HDMI interfaces rather than have to use a thunderbolt dongle. Still at least there’s a way to do that. I use large hard drives for bulk storage, on the mac studio I’d have to offload this to a NAS.
The two things I would complain about with the mac studio hardware are the GPU, which is a fairly steep downgrade for me, and the non-replaceable storage, to which I want to say “f*ck you apple for doing that”.
I agree. and this seems to be the widespread consensus among professionals.
https://www.theverge.com/23770770/apple-mac-pro-m2-ultra-2023-review
@dark2,
being able to detect a PCIe device is the “easy” part, as it is just a basic query.
The main issues are that a) the memory controller for the M1/M2/M3 is not going to play nice with a dGPU, and b) there is no support in the M-SoC’s internal limits management engine for a big external dGPU. So it would likely not be able to do a proper boot bring-up of the device.
Those two big-ticket items are either deep inside the SoC or part of the firmware, which is not accessible to the linux folk (or any non-Apple programmer).
Asahi linux is a nice project, but it would be silly to invest the money in an ARM Mac Pro to run a dGPU on linux when you can get a much better supported x86 system for less.
That Mac Pro announcement was pretty weird. No adding a video card, way less RAM. Feels like people are being sold empty chassis.
Still bummed about new M2’s not being Compute Modules. lol
Apple has such nice stuff yet feels like it is not used to its potential.
ronaldst,
They’ll have to do something about this to catch up. I agree there would be merit in breaking up the M2s into several separate compute modules running concurrently under one OS. The problem is that apple leaned heavily into the shared/unified memory of the M1 over the dedicated memory used by rivals. Going back to discrete components at this point means apple would have to backtrack on the unified/shared memory model. It could be worth it, as discrete GPGPU opens up expansion possibilities using PCI cards, and users can install many of these in one system. After all, this is exactly what nvidia/amd discrete GPUs offer today, and the industry clearly believes in this model.
The problem for apple specifically, though, is that they’ve been promoting shared/unified memory APIs that don’t match how dedicated GPUs work. Apple would have to convince software developers to go back to supporting dedicated GPGPUs. But if these upgrades are only going to be available on the macpro, how many software developers will even bother? The macpro has become an increasingly niche market, which makes it quite hard to justify investment from developers who are targeting the masses.
Edit: The easiest way to add support for dedicated GPUs on m1/m2 would just be for apple to support existing products… not that they’d want to.
Apple is mainly a consumer and prosumer company now. The mac is now an appliance, and aligned with the mobile side of things.
The high end desktop market is just not big enough for them to justify the investment in designing and fabricating systems to target it, given just the ridiculous margins and revenue they get from the mobile-derived devices. With x86, they could leverage a lot of the ecosystem for desktop parts. So the Mac Pro was still a more “traditional” workstation, and even then it has traditionally stagnated compared to Windows and Linux workstations. With the move over to Apple Silicon, that ecosystem is not there and they can’t leverage it.
In any case, if you were doing compute-heavy use cases, Apple hasn’t been an attractive option for a very long time. So I assume most of those customers fled the mac long ago.
javiercero1,
I would say the market for high end GPUs and compute is not only giant, but responsible for some of the stock market’s largest movers in recent years. Apple would love to get a cut of this. The problem for them is that, as LTT concludes, their macpro doesn’t really deliver the goods.
I agree with this. Apple implemented a company-wide directive to sell products with apple CPUs exclusively. I’d say it was a success on mobile, but it came at the expense of eliminating partners whose technologies are better suited for high end computing.
There’s a lot of overlap between high end GPUs for compute and high end GPUs for the VFX industry. A large chunk of creative users grew up on mac os and still prefer a mac os environment. Presumably they would benefit from more high end GPUs. Maybe the next generation will be better.
The high end compute and GPU market are mainly running on datacenter infrastructure, and are mostly CUDA workloads. Apple does not do datacenter nor CUDA.
The mac pro is still a competent 3D machine. The market for GPU-centric use cases on Macos was clearly not large enough for apple (Or AMD) to bother with. Most of those use cases are on windows/linux for a long time, and apple long conceded that segment.
Apple since Jobs came back is a very focused company in terms of the markets it serves. Old “traditional” divisions have a hard time justifying their existence within apple when they have to compete with the revenue/margins that the mobile/phone/tablet side of the organization brings. The current mac pro is likely the best that division could come up with given the (tiny) resources they were given to develop it, so it’s basically a mac studio with PCIe capabilities fitted into an old chassis.
javiercero1,
A lot of work is done by individual developers on consumer GPUs before scaling up to a data center. And you may not realize this, but there are many software developers who prefer macos over windows and linux. So the market is clearly there. Same deal with videographers rendering on their own personal computers; these two groups make up a good portion of apple’s professional users. There’s no doubt the demand for a GPU workhorse is there. I honestly don’t think apple set out to leave these professional users hanging, but apple needs to deliver the goods.
I am simply telling you what apple is doing. I am not responsible for their strategy, so maybe you should direct your insights to them.
People developing on consumer GPUs before scaling up to the datacenter is a market that almost exclusively uses CUDA/NVIDIA to do so. And Apple hasn’t had much collaboration with NVIDIA in ages.
For videographers, the media/video engines in the M-series are some of the best in the industry, and these systems seem to do quite well as Resolve/Final Cut Pro seats.
Apple is clearly not interested in targeting the high end desktop GPU compute market any more. Likely because the market size to development investment ratio is just not there. The Mac Pro line has been consistently lagging for a decade, so it’s clear that segment of the market hasn’t been a priority for them for just as long.
javiercero1,
Well, obviously they haven’t been on good terms, but that doesn’t mean apple has no interest in the market.
Depends what they do. If they’re just mixing videos then sure, but if they’re doing video effects or even raytracing then a good GPU will make the difference between rapid rendering/turn around time and constantly waiting on the hardware.
Do you have any direct evidence for this claim, or is it all circumstantial? Let me remind you that simply pointing out that their macpro fails at the high end is not evidence that apple didn’t intend to be at the high end of the market. It could also mean that their silicon fell short this time ’round.
The Mac Pro and Mac Studio represent exactly the markets that Apple is currently targeting. A pretty basic concept to grasp. Alas, once again, here you are…
javiercero1,
Apple’s own marketing material specifically says the m1 chip is designed to excel at machine learning, breakthrough performance for color grading 6k video, graphically intensive games, and machine learning inference. They say the mac mini is a great machine for developers, scientists, and engineers utilizing deep learning technologies like tensorflow or createml. When it comes to machine learning, performance is spectacular: thanks to the neural engine, ml is up to 11 times faster than the previous generation.
youtu.be/5AwdkGKmZ0I?t=742
youtu.be/5AwdkGKmZ0I?t=1914
youtu.be/5AwdkGKmZ0I?t=2244
By apple’s own admission, they are interested in marketing towards GPU intensive applications for graphics, engineering, science, and machine learning. Yet all of these GPU tasks that apple themselves are promoting get held back by the architecture’s weak iGPUs.
Apple’s m1/m2 processors have excellent application-specific accelerators, which make video codecs smooth and fast. But once you actually need to use the GPU execution units, the shortcomings of the M1/M2 GPU architecture become far more apparent. Common applications like Davinci Resolve will peg the GPU at 100% and playback stutters, 3D renders crawl, and so on. It’s really not just the compute market that benefits from better performing GPUs.
youtube.com/watch?v=CYIZ1Fw_-Tw
youtu.be/vEkjupEC6gw?t=24
And again, you’re conducting a debate that only exists in your head.
The point is that apple is not targeting the very high end of the pro desktop compute market, which is basically cornered by NVIDIA at this point. Not that Apple is completely ignoring anything that requires GPU or tensor processing.
javiercero1,
Sorry but those quotes were from apple themselves. I just reiterated what they said. So your beef is with them and their marketing, not me.
And again, you’re projecting what you are doing on to me. Yawn.
javiercero1,
Apple’s own marketing explicitly mentions how the m1 architecture is designed for machine learning inference, tensorflow, create ML, and GPU intensive workloads used by scientists and engineers. That contradicts your claim that apple isn’t interested in competing with nvidia. Now you are deflecting, and I suppose your next comment will be an ad hominem attack.
Instead of that nonsense though, just admit it: apple are interested in apple silicon enabling professional compute applications.
Listen, I made a pretty basic point. Might as well have claimed that water is wet. Alas, here you are.
Take care. Cheers.
javiercero1,
You appeared to be completely unaware that apple themselves are promoting the M architecture for AI. Contrary to your point, apple is competing with nvidia on compute, and they even have their own machine learning SDKs.
https://developer.apple.com/documentation/createml/
https://developer.apple.com/documentation/coreml
That’s fine, you learn and you move on. But rather than being flippant with me about it, you might try a “thanks man, I didn’t know that”. Cheers.
LOL. Are you seriously trying to pass off that strawman-ish debate that is only going on in your head, while projecting onto me what you should have done over and over, the many times you’ve found yourself out of your depth in some of these rabbit holes you love to dive into?
I guess I must repeat myself again:
“And again, you’re conducting a debate that only exists in your head.
The point is that apple is not targeting the very high end of the pro desktop compute market, which is basically cornered by NVIDIA at this point. Not that Apple is completely ignoring anything that requires GPU or tensor processing.”
javiercero1,
The fact that nvidia has the market cornered was never a source of disagreement, though. You said, “Apple is clearly not interested in targeting the high end desktop GPU compute market any more”, yet it does not follow from nvidia dominating compute that apple isn’t interested in high end desktop compute, and that’s what I corrected you on. If you actually agree with me that apple isn’t ignoring the compute market, then you should have just said so. What is your problem that you can’t admit when you agree? Sheesh.
LOL. You’re so textbook. This is the part where you keep goading, so that you can focus on the reaction in order to play the victim, and thus “win” another imaginary debate… to prop up that fragile ego that has still not recovered from that narcissistic injury sooo many moons ago.
Etc, etc, etc. Yawn.
javiercero1,
That is very hypocritical. When I provide evidence for apple’s interest in compute and you agree with me, then just say it directly without beating around the bush. You always turn everything into ego for reasons that are beyond me. I am stubborn, but reasonable, and I always try to be friendly. So I’d appreciate the same from you.
LMAO. So predictable! I am only engaging with you out of pure clinical fascination; you follow the patterns, almost textbook, of a very specific disorder.
Your “proof” is just your own very subjective interpretation of a marketing blurb. I am simply pointing out that Apple’s lineup’s capabilities are correlated with the markets they are targeting. Apple is targeting GPU and NPU use cases that can be served by the capabilities of their SoCs’ GPUs and NPUs. Given that they don’t support dGPUs for these products, that is a clear indication of which markets apple is targeting, and it excludes/prunes out whichever use cases require a dGPU.
Really simple stuff, alas here you are, on another rabbit hole to nowhere…
javiercero1,
It means your portrayal of apple as being uninterested in demanding desktop compute use cases was factually wrong, but hey at least now you know.
Well, we can all agree that the hardware they’ve delivered so far is behind the market. I know this, you know this, and apple surely knows it too. But they have to start somewhere, and the fact that the m2 ultra was the best they managed to build so far does not tell us that they don’t have higher ambitions for AI and compute in the future! Apple knows this market is important to them even if you won’t admit it. It’s why they invented createML and are encouraging developers, scientists and engineers to use apple instead of buying a windows machine and using cuda.
Certainly not for extremely high-end general computing, but for specific tasks (video editing, live video, music production, etc.) I can see this Mac Pro being quite competitive (though that’s also true of the Studio in many cases).
As for those markets it can’t serve, isn’t it fair to say they had already lost them? With Nvidia dominant for the last decade, who in their right mind doing ML/AI/etc. used Macs?
Still, I must admit it’s sad to not see Apple at least *try* to play in the *general* extreme high end market.
Don’t disagree – it’s worth noting that high end PCs compete in ways that didn’t used to be true, as you said. Even for high-end video editing and music I’ve seen people recently that prefer PCs.
torb,
We as consumers think of compute/machine learning and video editing/graphics as completely unrelated markets, because they are. But as far as the technology goes, they actually have a lot in common in terms of GPU hardware being used to accelerate workloads. Consider things like graphics programs that use AI effects, 3d raytracing, davinci resolve using GPU acceleration, etc. These all benefit from high end consumer grade GPUs: faster render times and smoother operation. When the GPU gets close to 100% utilization, interactive applications can suffer.
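Amdahl’s law makes the point concrete: a faster GPU only speeds up the fraction of the job that actually runs on it. The 80% fraction and 5x figure below are illustrative assumptions, not measurements from any real workload:

```python
# Amdahl's law sketch of why GPU speed matters even in "creative"
# apps: only the GPU-accelerated fraction of a render speeds up.

def overall_speedup(gpu_fraction, gpu_speedup):
    """End-to-end speedup when only gpu_fraction of the job
    runs gpu_speedup times faster (Amdahl's law)."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / gpu_speedup)

# If 80% of a render is GPU work and a faster GPU does that part
# 5x quicker, the whole job still only gets ~2.8x faster:
print(f"{overall_speedup(0.8, 5.0):.2f}x overall")  # 2.78x
```

Even so, a near-3x cut in render time is exactly the kind of gain creative users notice, which is why GPU horsepower matters outside of pure compute.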
Well, apple are promoting their m cpus for things like tensorflow and create ML, but I agree with you they have a long way to go to make their hardware competitive.
Editing performance on m1/m2 benefits from m1/m2 application-specific accelerators. If all your intensive tasks can be offloaded from the CPU/GPU cores in this way, then the mac studio is going to perform extremely well. However, when software starts utilizing the GPU cores, the M1’s GPU bottlenecks become much more apparent, as this video highlights.
http://youtube.com/watch?v=CYIZ1Fw_-Tw
Creative software is moving in the direction of GPGPU. It needs to, due to higher resolutions, more intensive effects, and AI. So IMHO apple cannot afford to ignore the importance of powerful GPUs long term without holding back creators. Many creators still prefer macos, but when push comes to shove they’re going to migrate to the platforms with the best productivity software. I imagine apple knows the m1/m2 architecture won’t cut it, and behind closed doors I suspect they’re working on something more powerful than an iGPU for future products.
Ars shows the real-world use of the standard M2 Max puts Intel’s best part to shame. The Ultra is on par with high-end Ryzen.
And here’s your close comparison:
https://pcpartpicker.com/user/Lostinlodos/saved/#view=pKsfpg
lostinlodos,
Do you mind providing a link please? I didn’t find an Ars article comparing these.
https://www.tomshardware.com/news/apple-m2-ultra-geekbenched
https://hothardware.com/news/apple-m2-ultra-cupertino-vs-intel-amd-nv
To be clear, I think m2 ultra CPU performance is plenty good even though it doesn’t top the charts. It’s the GPU speed that is disappointing. Especially for a workstation with no upgrade options. Even a mid range consumer GPU beats it.
Looks like I got slightly ahead of myself. Looking at m1 vs 12. I can’t edit the post though.
https://arstechnica.com/gadgets/2023/06/m2-ultra-mac-studio-review-who-needs-a-mac-pro-anyway/
Shows the m2 slightly below the 13.
CPU monkey has a nice breakdown too. At https://www.cpu-monkey.com/en/compare_cpu-apple_m2-vs-intel_core_i9_13900k
Sadly I can’t find any real world software tests for the ultra vs 139 in list form.
YouTube is full of single-app comparisons. And the m2 tends to top out on multi core. Clearly the 139 is the current single core king.
What is really irksome: why are there no ultra vs 139 real-app comparisons? Same thing last generation, until PC Mag did a spread months later! Everyone uses the Max. Like that’s fair?