When AMD announced that its new Zen 3 core was a ground-up redesign and offered complete performance leadership, we had to ask them to confirm if that’s exactly what they said. Despite being less than 10% the size of Intel, and very close to folding as a company in 2015, the bets that AMD made in that timeframe with its next generation Zen microarchitecture and Ryzen designs are now coming to fruition. Zen 3 and the new Ryzen 5000 processors, for the desktop market, are the realization of those goals: not only performance per watt and performance per dollar leaders, but absolute performance leadership in every segment. We’ve gone into the new microarchitecture and tested the new processors. AMD is the new king, and we have the data to show it.
AMD didn’t lie – these new processors are insanely good, and insanely good value, to boot. If you’re building a new PC today – AMD is the only logical choice. What a time to be alive.
Congrats to AMD. These scores show AMD overtaking Intel CPUs at low thread counts. Many here will recall me pointing out the significance of low-thread performance for many games and desktop use cases. This was Intel territory. Now AMD wins. While they don’t have 100% of the benchmark wins, AMD is clearly ahead on average if you don’t cherry-pick the benchmarks that favor Intel. If you want the best performance, you go with AMD. This is significant.
In what scenario do Intel’s offerings outperform AMD, then? I watched the new Linus Tech Tips video and Intel lost in all workloads, single- as well as multi-threaded.
NaGERST,
LTT puts out entertaining videos, but frankly he’s sub-par as a source, as he’s not the most competent or thorough when it comes to these things, haha. Anyways, that’s not what this is about; we’re talking about the AnandTech article that OSNews linked to. AMD does not win all the benchmarks. For example: the GIMP benchmark, some of the simulations, some of the legacy tests, games, etc. There are many options for cherry-picking.
https://i.postimg.cc/KjcbL9CY/benchmark.png
This is why I’m so picky about claims that are made without the data to back them up; it happens way too much, unfortunately. Oh well.
Provided it gets the job done without warming up the globe, I can live with it not being #1 in every field.
Good to know I can still rely on AMD after almost 20 years with them. Heck, I started with an Athlon XP in 2002, still have an A8-3500 from 2011, an E-350 from 2013, and now a Ryzen 2500U laptop since 2019. They all still work flawlessly; that’s all I need.
Kochise,
Same.
I built an Athlon MP system back in the day, when you could use metallic paint to convert an Athlon XP into a dual-CPU Athlon MP, haha. It worked for several years, until I upgraded the power supply one day and it killed absolutely everything in the computer. One expects things to fail by simply not working, but you know it’s a crappy day when one component takes everything else out with it.
It looks like they tested GIMP’s fresh-install startup time, not even something like processing a layer, just the first-launch startup time. Not very interesting, imo. I’d rather they benchmark doing something heavy in GIMP, not just starting it for the first time.
Gargyle,
Here’s what they say specifically…
I agree that it would be interesting to subdivide this benchmark to see where the time goes more precisely.
As an aside, I use the GIMP a lot, but I often lament its performance. I don’t know what part is responsible for the overhead, whether it’s the heavy use of scripts or something else, but IMHO it is in desperate need of software optimization. For better or worse, the modern software developer creed is that optimization isn’t important because better hardware will take care of it, haha.
How many of the legacy tests are using ICC? Remember, Intel got busted a few years back for paying companies to use ICC (I remember Tek Syndicate exposing Cinebench for using ICC to compile their benchmark after entering an “advertising partnership” with Intel), and the Intel “cripple compiler” automatically slaps a 20-30% penalty on anything running on a chip Intel hasn’t approved.
Oh, and before anybody says “Oh, Intel just knows how to optimize for their CPUs”… uhh, no, afraid not. In fact, what forced them to pay out over a billion dollars to AMD in the lawsuit was that it was shown in court that the first chip targeted by the crippler was NOT an AMD chip… it was an Intel one. Apparently, when the first Netburst CPUs came out they were getting their asses handed to them by the cheaper Pentium III, so folks were buying those instead. Intel put out ICC, paid benchmark companies to use it, and hey, whaddaya know, suddenly the Netburst chip was beating the P III by nearly 20%!
This is why I only trust benchmarks from groups like Phoronix: they not only tell you the compiler, you can download the code itself and see there are no shenanigans.
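For anyone curious how the trick actually works: the dispatcher keys off the CPUID vendor string instead of the feature flags the chip actually reports. Here’s a minimal C sketch of that kind of vendor-keyed dispatch (not ICC’s real code, just the widely reported CPUID vendor-string check; builds with GCC/Clang on x86):

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>  /* GCC/Clang wrapper around the x86 CPUID instruction */

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        /* CPUID leaf 0 returns the vendor string in EBX, EDX, ECX. */
        __get_cpuid(0, &eax, &ebx, &ecx, &edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        /* Dispatching on the vendor string rather than on feature
           flags penalizes non-Intel chips that support the very
           same instruction sets. */
        if (strcmp(vendor, "GenuineIntel") == 0)
            puts("fast path: vectorized SSE2 routines");
        else
            puts("slow path: generic x87 routines"); /* even if SSE2 exists */

        return 0;
    }

The honest way to do it is to check the actual SSE2/AVX feature bits (CPUID leaf 1 and up) and ignore the vendor string entirely.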
bassbeast,
+1 agreed.
I am very cautious about relying on data from a single source.
Although, for the time being, Geekbench is all we have, because some insiders appear to be benchmarking the M1 CPU prior to its release.
Congratulations to Dr. Su. She’s managed to match Intel and Nvidia with a fraction of the resources.
Just amazing focus and flawless execution.
Yeah, AMD has done very well in the desktop, workstation, and server markets. All the desktops I’ve built over the last three years, including what I buy for my office, have been Ryzen based. However, what I’d finally like to see is a good range of laptops to choose from. Unfortunately, we don’t see a lot of nice laptops with AMD inside.
I hope this will change; the Ryzen 4000 series already started to show interesting things, and I think the Zen 3 offering will be amazing. However, that’s only the CPU: adding USB-C and Thunderbolt to the mix implies having Intel on the parts list, and in a prominent position at that.
Outside of building a PC to play UHD discs, there’s zero need for an Intel CPU (and that’s only because UHD software requires SGX and Intel integrated GPUs).
All good in the performance area, less so in the price area (maybe AMD knows they’re good?! 🙂 )
The Ryzen 5 5600X has an MSRP of 300 USD. If that’s true, it’s around 140 USD more than the Ryzen 5 3600.
I certainly am not prepared to pay that premium for opening spreadsheets 2% faster and 3% faster frame rates in games.
This happened in the early Athlon64/Opteron days too… Whoever leads on performance gets to charge more, while whoever is behind has to target the budget market.
The framerate improvements in games are much higher than just 3%; if you read the reviews, you’ll find the gains they made are actually genuinely baffling.
Running a 1st gen Zen CPU myself, I’m really compelled to upgrade, because I’d almost DOUBLE my framerates in most games where the situation isn’t GPU-bound.
Looking back at the figures from AnandTech, I can’t follow it all.
There are sometimes 20+ frames-per-second differences, but most of the tests are done at low quality settings? Why?
I want to see tests with a Ryzen 5 3600 + medium-quality GPU at 1080p, 1440p, and 4K on max settings (maybe medium for 4K) and then a straight swap. No low-quality-settings nonsense. Nobody who is interested in Zen 3 games at low settings.
And it seems most of the differences happen when the frame rate is already > 100 frames per second. I am not a hardcore gamer, and quite frankly I don’t care whether a game runs at 120 frames per second or 160.
Wondercool,
You make a valid point that people will be running higher settings, but if your game is hitting GPU bottlenecks due to those higher graphics settings, the CPU is likely not the bottleneck, and if the CPU never hits max load it’s not a very meaningful CPU benchmark. So the purpose of lowering graphics quality is to remove the GPU bottleneck and maximize the CPU load, in order to measure CPU-related bottlenecks.
Hypothetically some games may make heavy use of the CPU to compute game state independently of FPS, but I would think that most first-person shooters are generally GPU-limited.
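A toy model makes this concrete. Assume a pipelined renderer where frame time is roughly max(CPU time per frame, GPU time per frame); that’s a simplification, and the numbers below are made up:

    #include <stdio.h>

    /* Simplified model: FPS is limited by whichever of the CPU or
       GPU takes longer per frame. */
    static double fps(double cpu_ms, double gpu_ms) {
        double frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
        return 1000.0 / frame_ms;
    }

    int main(void) {
        /* Two hypothetical CPUs: B is ~33% faster per frame than A. */
        double cpu_a = 8.0, cpu_b = 6.0;

        /* 4K ultra: the GPU needs 25 ms, so both CPUs look identical. */
        printf("4K ultra:  A=%.0f fps, B=%.0f fps\n",
               fps(cpu_a, 25.0), fps(cpu_b, 25.0));

        /* 1080p low: the GPU needs only 3 ms, and the CPU gap appears. */
        printf("1080p low: A=%.0f fps, B=%.0f fps\n",
               fps(cpu_a, 3.0), fps(cpu_b, 3.0));
        return 0;
    }

At heavy GPU settings both hypothetical CPUs land on 40 fps; drop the GPU load and A gives 125 fps while B gives 167 fps. That’s exactly why reviewers turn the quality down for CPU benchmarks.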
Hi Alfman,
You are making my point. On paper 30 percent extra CPU seems like a lot, but in practice it doesn’t matter unless you are compiling or doing scientific calculations. For mainstream desktop use and gaming, it’s not worth the 40 percent price premium.
Wondercool,
Well, I was answering your question. But it’s also true that if you run programs where the CPU doesn’t reach 100%, then the CPU doesn’t matter much. I do agree with you that most gamers don’t need powerful CPUs, since most of the heavy lifting is done by the GPU.
This can highlight a difference between synthetic versus practical usage benchmarks. Arguably when a benchmark starts achieving 120-200FPS, that’s no longer meaningful to normal use cases. That’s total overkill. But it can continue to have comparative value much like a synthetic benchmark.
I think game engines can make good synthetic benchmarks, but they need to be put in context. CPU B may be 1.2X faster than CPU A, but if CPU A is already 4X faster than what you need due to GPU bottlenecks, what’s the point of CPU B? It’s a fair question. It’s entirely possible that some users don’t really benefit from the performance of either A or B. Some gamers want high-end systems just because they can (like it or not, a lot of the high end is driven by people in this category). Some users may have applications that do benefit, and others may be most interested in future-proofing. And tangential to performance benchmarks, efficiency is another consideration; AMD does well here thanks to its smaller process node.
Personally, rather than buy midrange systems more frequently, my choice is to buy a high-end desktop with the intention of keeping it a very long time, so future-proofing is important to me.
Hardware Unboxed used 1080p ultra quality settings to measure the difference in gaming performance across existing and these new CPUs, not low quality. Even then the difference between CPUs is quite noticeable.
The only criterion in these tests is that they should not be GPU-limited. How do they get there? By using graphical quality settings that are so light on the GPU that it could deliver more fps than the CPU is capable of driving.
There is something to be said about the real-world value of such tests, because who in their right mind would ever run settings where the framerate is _not_ GPU-constrained? Except for pro gamers, maybe? Absolutely nobody. However, since high-framerate gaming is the new trend, it might be (or become) more important after all.
Yes, it’s certainly good for AMD, and it’s good for customers in the 12c/16c market, but to me things have regressed a lot. A couple of years back I could have picked up a decent 6c R5 2600 for $150 or an 8c R7 2700 for $200. That’s what made many people enthusiastic about AMD products. Last year AMD abandoned their cheaper (non-X) R7 3700; now they’ve done the same with the R5 5600.
This is a window of opportunity for Intel. The strength of the Ryzen platform is also its biggest weakness: they are using the same 8c dies for their high-end and low-end products. This means that when they are supply-constrained, they are virtually forced to abandon the low/mid-range segment. Products like the 4c R3 3300X can only be classified as “paper releases”, and the new 6c/8c processors are no longer cheaper than Intel’s offerings.
Given that the 5600X is around 10% slower in multicore performance than the 3700X, the current price isn’t all that insane if you consider prices falling in the coming months and cheaper SKUs being introduced to undercut the current line-up.
Yes, there isn’t an 8c/16t R7 2700 successor in Zen 2 or Zen 3 for the same kind of money, but the lower-core-count models are SO much faster that they have caught up with or even surpassed the 2700 in multicore performance, to say nothing of the monstrous gains in single-threaded performance. When the 5600 non-X lands for €230-260, value-oriented consumers will have found their new “budget”-friendly choice.
This is awesome, but what I’m still holding out for is AMD mobile chips with integrated next-gen graphics. Apparently these won’t be coming out till 2022, in the “Rembrandt” APUs. Until then, for thin-and-light laptops with integrated graphics, it looks like Intel still takes the win with its new Xe graphics architecture.
https://uk.pcmag.com/chipsets-processors/128668/deep-dive-intel-tiger-lake-vs-amd-renoir-which-laptop-cpu-wins-on-performance
Like pre-release game reviews, I don’t see any reason to trust these. The game benchmarking section seems pretty limited, and of course the ancient, CPU-bound CS:GO is in there. That being said, the main reason I wouldn’t be an early adopter right now is that one of the YouTube reviewers showed GTA V crashing during the benchmark. Benchmarks might mean nothing if it turns out to be unstable for actually playing games.
dark2,
That would be concerning if it turns out to be a widespread problem. The bathtub curve is real. It sucks to get a bad unit, but statistically it can happen to anybody; it’s mostly dumb luck and not necessarily indicative of a broad trend. If I see a product where 1/5th or more of the reviews complain about quality issues, that makes me hesitate on the product. I wish return rates were published for all products to help inform consumers; in the long run it would encourage manufacturers to raise their quality standards. Alas, I suspect many vendors are actually prohibited from disclosing defect stats through NDAs, and nothing short of regulation forcing disclosure will change that.
My first own computer was running with an AMD 386DX40 and I really wish I could go back to AMD now. But my current Intel/nvidia PC is still recent enough…
Ironically, I would argue AMD is in the same place now that Intel was in during Bulldozer, in that their real competition is their previous offerings. I know several people who lucked into the 1600AF when it was so cheap, and they have no intent of upgrading anytime soon, just because that chip makes for a great budget workstation. And my R5 3600, running totally stock, just blows through my games and video renders so damn well that the thought of OCing it has barely crossed my mind, much less upgrading to Ryzen 5xxx.
I would argue this is actually a good thing, though, as the 5xxx parts are probably gonna be in limited supply for quite a while (hell, some of the reviewers were never able to get their hands on a Ryzen 3300X or that last APU they released), so this will let those who have been sitting on old hardware have a chance to enjoy Ryzen, while many of us on previous gens just sit back and enjoy our PCs.
BTW, if you are upgrading to Ryzen and want a good cheap cooler? Look at the Gammaxx 400 V2. I picked one up for just $23, and even slamming all cores for nearly an hour on a long render I never went above 76°C and spent most of the time around 72°C with no throttling, and it wasn’t even loud. I was quite impressed.