This morning AMD is taking the wraps off of their Ryzen 7000 Threadripper CPUs, announcing the chips ahead of a November 21st launch. These high-end chips are being split up into two product lines, with AMD assembling the workstation-focused Ryzen Threadripper 7000 Pro series, as well as the non-Pro Ryzen Threadripper 7000 series for the more consumer-ish high-end desktop (HEDT) market. Both chip lines are based on AMD’s tried and true Zen 4 architecture – derivatives of AMD’s EPYC server processors – incorporating AMD’s Zen 4 chiplets and a discrete I/O die. As with previous generations of Threadripper parts, we’re essentially looking at the desktop version of AMD’s EPYC hardware.
With both product lines, AMD is targeting customer bases that need CPUs more powerful than a desktop Ryzen processor, but not as exotic (or expensive) as AMD’s server wares. This means chips with lots and lots of CPU cores – up to 96 in the case of the Threadripper 7000 Pro series – as well as support for a good deal more I/O and memory. The exact amounts vary with the specific chip lineup, but both leave Ryzen 7000, with its 16 cores and 24 PCIe lanes, in the dust.
I’m hoping these will eventually find their way to eBay, so that around five years from now, I can replace my dual-Xeon workstation with a Threadripper machine.
I remember when these were targeted at high-end desktops and enthusiasts. And yes, having lots of RAM, especially ECC RAM, was a nice thing.
Today, the lowest-end one seems to be priced at $1,499, and the “X” series parts are now “WX” (workstation) ones. Most people’s entire desktop machines will cost less than that. And it only comes with 24 cores. That’s just 50% more cores than the 16-core 7950X3D, at more than double the price. (And we may well see 24-core CPUs in the regular segment soon, given that the previous Threadripper generation’s low end was the 12-core 5945WX.)
And, just like that, Intel’s upcoming 14th-gen desktop chips offer 24 cores: https://www.intel.com/content/www/us/en/products/sku/236773/intel-core-i9-processor-14900k-36m-cache-up-to-6-00-ghz/specifications.html – and support ECC RAM! (Though I have yet to see any motherboards supporting that.)
Yeah, but regular AM5 already supports 128GB of ECC RAM via UDIMMs. And as you said, it’s fast enough.
The main segmentation here isn’t RAM or CPU cores, it’s I/O… all the Threadripper systems have vastly more expansion I/O. By contrast, the regular desktop systems are quite cramped for lanes these days, even though the per-lane speed is very high.
cb88,
I would like to see more PCIe lanes as well. My Ryzen has 16 PCIe lanes from the CPU feeding two PCIe slots (either one at x16 or two at x8). The other slots are forked off the PCH, at a maximum of x4.
I intended to have 3 GPUs for compute, but since newer GPUs won’t run in anything less than an x8 slot (at least with NVIDIA), I am unable to run more than 2 now. You might argue it’s time to go Threadripper, but considering that we used to be able to run 7 or so GPUs on very low-end CPUs, it seems wasteful to require a Threadripper when you don’t intend to use the CPU cores for any heavy lifting.
If anyone knows how to fix the PCIe lane issue short of buying a Threadripper that would largely be wasted on pure GPU loads, please let me know!
Alfman,
In theory a recent Intel CPU on a Z790 chipset should be able to do that:
https://www.intel.com/content/www/us/en/products/docs/chipsets/desktop-chipsets/z790-chipset-brief.html
In practice no motherboard manufacturer does that; instead they divide the bandwidth among other devices.
There seems to be one on the AMD side though:
https://www.anandtech.com/show/14657/the-asus-pro-ws-x570ace-motherboard-review
This particular one is said to run the slots at x16/x8/x8 speeds.
(The CPU has direct PCIe lanes, and also DMI lanes to talk to the chipset, which map to further PCIe lanes.)
sukru,
That’s a good find…
My Ryzen board is an X570 too. I didn’t know ahead of time that it wouldn’t work, but had I known a motherboard with x8/x8/x8 was available, I would have made a different purchase.
I am looking at the Intel Z790 “features at a glance” and it isn’t clear to me that it supports three x8 slots.
The “Chipset Block Diagram” shows lots of PCIe lanes, but it doesn’t add up with what the text or specs are saying. If anything, the “including 20 PCIe 4.0 lanes” isn’t enough; but maybe the difference is between direct PCIe lanes and those shared with motherboard peripherals, as you say…
I’ve always been curious what the performance difference is for these PCIe slots that don’t have dedicated lanes.
Alfman,
DMI is the proprietary interface Intel uses to talk to the PCH. As far as I know it maps 1:1 to PCIe signals (speeds and versions might not be the same), and the only likely difference is latency between lanes directly connected to the CPU and those that have to traverse one additional chip.
Some discussion here:
https://forums.tomshardware.com/threads/can-dmi-lanes-be-as-performant-as-pcie-ones-when-they-arent-being-multiplexed-saturated.3803610/#post-22980695
I have seen various numbers, but in total there would be between 28 and 40 PCIe lanes (a combination of 4.0 and 5.0), plus another x8 of PCIe 3.0 that could be used for onboard devices.
The motherboard makers are free to distribute these themselves. Some will be dedicated to NVMe, some will go to their own Ethernet/WiFi/audio chipsets, and some will be wasted.
But if the upper range is correct, there is nothing stopping them from offering one x16 PCIe 5.0 slot along with another three x8 PCIe 4.0 slots on the same system – 16 + 3×8 = 40 lanes, exactly that upper figure. (Except none of them seem to go to the trouble.)
One more diagram (from previous gen 600 series):
https://www.audiosciencereview.com/forum/index.php?threads/dmi-umi-and-you-a-guide-to-your-motherboards-chipset-bandwidth.35521/
They clearly show 32 PCIe 4.0 and 5.0 lanes.
And for reference, DMI 4.0 runs at PCIe 4.0 speeds, so roughly 2GB/s per lane; the CPU → chipset connection is up to x8 DMI 4.0 in the 700 series, for about 16GB/s in total.
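For anyone who wants to sanity-check those figures, here’s a quick back-of-the-envelope calculation in Python (the per-lane rates are the standard approximate PCIe numbers; treating DMI 4.0 as running at PCIe 4.0 per-lane rates is my assumption from above):

# Rough one-direction PCIe/DMI bandwidth arithmetic. Assumes DMI 4.0
# runs at PCIe 4.0 per-lane rates, as discussed above.
PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # GB/s per lane

def link_bandwidth(gen, lanes):
    """Approximate one-direction bandwidth of a link in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

print("PCIe 4.0 x1 : %.1f GB/s" % link_bandwidth(4, 1))   # ~2.0
print("DMI 4.0 x8  : %.1f GB/s" % link_bandwidth(4, 8))   # ~15.8
print("PCIe 4.0 x16: %.1f GB/s" % link_bandwidth(4, 16))  # ~31.5

Which is really the point: everything hanging off the chipset ultimately shares that single ~16GB/s uplink, no matter how many lanes the PCH fans out on its side.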
sukru,
Yes, I know this, but when all the other motherboard and chipset specs explicitly count those PCIe lanes and this one does not (unless I missed it), it makes me uncertain what the real specs are.
I saw that in the diagram, but given that the specs don’t add them up this way, it’s unclear to me whether these are all available at the same time.
sukru,
I should say thank you, because I really do appreciate you finding this information. I’m not ready to replace anything yet though; I usually keep my systems for a long time. The ASUS Pro WS X570-Ace looks like it would do the trick if I just wanted to swap the motherboard, but I can’t justify another $600 right now.
I figure it’s best to wait; the next time I’m in the market there may be more options.
“I can’t justify another $600 right now.”
That was the Newegg third-party price, but elsewhere I see it comes up for $390 on Amazon (though I don’t like buying from them).
amazon.com/ASUS-Pro-WS-Workstation-Motherboard/dp/B07SYWKXJV/ref=sr_1_1?qid=1697817634
ASUS’s online store shows $380, but it’s out of stock.
https://shop.asus.com/us/90mb11m0-m0aay0-pro-ws-x570-ace.html
I just wish the GPUs would run in the x4 slot I’ve already got. I’m not sure what’s enforcing the restriction: driver, firmware, or hardware? Not that it would help me here, but I wonder if the nouveau drivers would let the card work in an x4 PCH slot. Another idea is to map the card into a VM and see if the NVIDIA drivers react differently there. I’ve never tried this with a GPU, but I am curious how well a physical GPU works under a VM.
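On Linux you can at least see what link the card and slot actually negotiated before blaming the driver. A minimal sketch, assuming the standard sysfs layout (the device address is a placeholder; substitute your GPU’s address as reported by lspci):

# Read the negotiated vs. maximum PCIe link width/speed for a device
# from Linux sysfs. The address is a placeholder; use your GPU's
# address as reported by lspci.
from pathlib import Path

DEVICE = "0000:01:00.0"  # hypothetical GPU address

def attr(name):
    return Path("/sys/bus/pci/devices/%s/%s" % (DEVICE, name)).read_text().strip()

print("current link: x%s @ %s" % (attr("current_link_width"), attr("current_link_speed")))
print("maximum link: x%s @ %s" % (attr("max_link_width"), attr("max_link_speed")))

If the card enumerates and negotiates a link in the x4 slot but the driver still refuses to drive it, that would point at a driver-level restriction rather than a hardware one.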
What is wacky is that AMD should be able to have boards with setups like
8+8+8+8+4+4+2+2
or 8+8+8+4+4+1+1+1+1, etc… anything that adds up to 44 (24 of which can be PCIe 5.0, i.e. the first 3 PCIe slots).
They usually, weirdly, allocate x8 of the PCIe 5.0 lanes to NVMe… when really you don’t need that much bandwidth there; you typically just need low latency.
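Checking whether a hypothetical layout fits the budget is trivial arithmetic; a quick Python sketch (the 44-lane total is the figure from my comment above, not an official platform spec):

# Sanity-check hypothetical slot layouts against a lane budget.
# TOTAL_LANES is the figure claimed above, not an official spec.
TOTAL_LANES = 44

def fits(layout):
    """True if the slot widths sum to no more than the lane budget."""
    return sum(layout) <= TOTAL_LANES

for layout in ([8, 8, 8, 8, 4, 4, 2, 2], [8, 8, 8, 4, 4, 1, 1, 1, 1]):
    print(layout, "-> %d lanes, fits: %s" % (sum(layout), fits(layout)))

The first layout sums to exactly 44 lanes and the second to 36, so both would fit under that assumption.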