Linked by Thom Holwerda on Tue 20th Jun 2017 21:39 UTC
AMD

The big news out of AMD was the launch of Zen, the new high-performance core that is designed to underpin the product roadmap for the next few generations of products. To much fanfare, AMD launched consumer level parts based on Zen, called Ryzen, earlier this year. There was a lot of discussion in the consumer space about these parts and the competitiveness, and despite the column inches dedicated to it, Ryzen wasn't designed to be the big story this year. That was left to their server generation of products, which are designed to take a sizeable market share and reinvigorate AMD's bottom line on the finance sheet. A few weeks ago AMD announced the naming of the new line of enterprise-class processors, called EPYC, and today marks the official launch with configurations up to 32 cores and 64 threads per processor. We also got an insight into several features of the design, including the AMD Infinity Fabric.

For the past few years, the processor market was boring and dominated by Intel.

This is the year everything changes.

Odd lineup
by laffer1 on Tue 20th Jun 2017 21:51 UTC
laffer1
Member since:
2007-11-09

I think AMD could do some serious damage against Intel's E5 chips, and maybe the E7s. What's odd here is that these prices and specs don't really align with the E3 line. I was expecting some entry-level chips close to the Ryzen consumer models. The EPYC 7251 is the closest we see to that, but its power consumption is not competitive with Intel. I realize it has more PCIe lanes and optimized memory, but for entry-level servers it doesn't make a lot of sense.

This complicates any upgrade plans I had for my webserver, which is sitting on an aging Ivy Bridge-era E3 chip.

Reply Score: 2

RE: Odd lineup
by bassbeast on Tue 20th Jun 2017 23:25 UTC in reply to "Odd lineup"
bassbeast Member since:
2007-11-11

Unless you are really slamming that webserver, the question is: do you REALLY need to upgrade it?

If you simply need more cores, just get one of the Opteron G34 boards. The 12-core Opterons only pull 80W on average, and you can get a dual-socket board WITH the 12-core CPUs (giving you 24 cores/threads) for around $150 USD; I've seen the single-socket boards (again with the CPU) going for less than $75.

I'll admit things may have changed since I supported network back ends, but webservers really didn't need a whole lot of horsepower; your webserver should run quite easily on a single Magny-Cours chip with cores left over.

Reply Score: 2

RE[2]: Odd lineup
by Morgan on Wed 21st Jun 2017 10:35 UTC in reply to "RE: Odd lineup"
Morgan Member since:
2005-06-29

I'm no expert in this space so this is a layman's observation, but these days everything is in containers and you need more cores and more RAM running as fast as possible to support all those containers running concurrently. I've dabbled a bit with Docker, and I've found that older quad core chips will (in most cases) outperform faster, newer dual core chips when I have several containers running at once from one machine.

If you can boost the operating frequency, bus speeds, and core counts all at once for a decent price as it seems AMD is doing here, it makes sense to go with their new chips for production servers.
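A toy model of that cores-versus-clock tradeoff (all numbers are hypothetical, and it assumes CPU-bound, single-threaded containers):

```python
def effective_throughput(cores, clock_ghz, containers):
    """Aggregate compute available to CPU-bound, single-threaded
    containers: once containers outnumber cores, total throughput is
    capped at cores * clock. Purely illustrative, not a benchmark."""
    return min(cores, containers) * clock_ghz

# Older quad core vs. faster, newer dual core, with 8 busy containers:
old_quad = effective_throughput(cores=4, clock_ghz=2.4, containers=8)  # 9.6
new_dual = effective_throughput(cores=2, clock_ghz=3.5, containers=8)  # 7.0
```

With only one container busy, the faster dual core wins in this model, which matches the caveat above that the quad only pulls ahead "when I have several containers running at once".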

Reply Score: 2

RE[3]: Odd lineup
by WereCatf on Wed 21st Jun 2017 11:37 UTC in reply to "RE[2]: Odd lineup"
WereCatf Member since:
2006-02-15

The one thing that really separates the EPYC lineup from everything else is the 64 PCIe lanes, which let you e.g. slap 8 GPUs in one rig. If your workload runs almost entirely on the GPUs, the lowest-tier EPYCs look like absolutely effing stunning value, especially since Intel has nothing to compete with that.
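The lane arithmetic behind that, as a quick sketch (the 64-lane figure is the commenter's; the per-device widths are hypothetical):

```python
def pcie_budget(total_lanes, devices):
    """Check whether a set of PCIe devices fits a CPU's lane budget.
    `devices` maps a label to the lane width that card is given.
    Real boards also route lanes to the chipset, NICs, and NVMe, so
    the usable budget is lower in practice."""
    used = sum(devices.values())
    return used <= total_lanes, total_lanes - used

# Eight GPUs at x8 each against a 64-lane budget: fits exactly.
fits, spare = pcie_budget(64, {"gpu%d" % i: 8 for i in range(8)})
```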

Another thing they give you is absolutely massive I/O bandwidth for storage. There are fields, bioinformatics for example, where this is extremely useful.

Reply Score: 2

RE[4]: Odd lineup
by Morgan on Thu 22nd Jun 2017 22:53 UTC in reply to "RE[3]: Odd lineup"
Morgan Member since:
2005-06-29

See, that's what I'm talking about. I need a 64 core, 128 thread machine with 256GB of RAM, running 8 GPUs at full PCIe bandwidth.

Ok, no, I don't need all of that, but I can get more bang for the buck with AMD than with Intel. I was looking at upgrading my gaming rig to a Kaby Lake Core i7 (my motherboard, a Z170 chipset board, supports it). But for the price of that CPU alone, I could buy a new AMD-based board and a CPU that would outperform the i7, and get more USB 3.1 ports and better audio.

Reply Score: 2

RE[2]: Odd lineup
by laffer1 on Wed 21st Jun 2017 13:17 UTC in reply to "RE: Odd lineup"
laffer1 Member since:
2007-11-09

I don't need to upgrade for CPU performance so much as I/O. Calling it a webserver was not accurate, although that is its primary function. In reality it runs many services, including sendmail, dovecot, rsync, ftp, subversion, apache, and GNU mailman. Some of these are in jails.

The legacy AMD server platforms are quite behind on storage tech, so they don't fit the bill. I really only need 6-8 cores of modest speed, 32GB of RAM, and faster SSDs. My current board doesn't even do SATA 6Gb/s on all the ports.

I realize my use case is a bit odd, but I think it's pretty common in the small business space to buy a few of these smaller servers. I'm sure many are using google apps or aws for most things now, but there's still need for file servers and things of that nature.

Before anyone asks, the bandwidth costs for my server make it cost prohibitive to run in AWS. It's far cheaper to run on a comcast business package. (open source project)

Reply Score: 2

RE: Odd lineup
by ahferroin7 on Wed 21st Jun 2017 12:24 UTC in reply to "Odd lineup"
ahferroin7 Member since:
2015-10-30

The Ryzen 7 models (just like most of AMD's high-end desktop parts since at least 2010) do support ECC RAM (assuming you can find an AM4 motherboard that supports it), so is there actually all that much need for an E3 equivalent EPYC part? For an equivalent price point, they outperform E3 parts in almost every respect except memory bandwidth and cache (and they don't lose by much there either), so it seems to me like AMD has just decided to not waste resources on selling two different brandings for what would almost certainly be the exact same hardware.

Reply Score: 2

RE[2]: Odd lineup
by laffer1 on Wed 21st Jun 2017 13:10 UTC in reply to "RE: Odd lineup"
laffer1 Member since:
2007-11-09

That's the problem, though: there are no motherboards that actually support ECC memory, and RAM support in general is TERRIBLE on Ryzen so far.

I have a Ryzen 7 1700 with an Asus Prime X370-Pro and it's been terribly buggy. I was hoping the new server platform chipsets would be more stable and at least have UEFI that works with ECC RAM.

Not to mention most of the Ryzen consumer boards are designed for gamers, with LED lights and the like. Not very server friendly.

As for the other comment about using yesteryear's AMD server platforms: I want something that supports PCIe SSDs.

I stand behind my claim that AMD has nothing for the low end server market right now.

Reply Score: 2

RE[3]: Odd lineup
by ahferroin7 on Wed 21st Jun 2017 14:03 UTC in reply to "RE[2]: Odd lineup"
ahferroin7 Member since:
2015-10-30

"That's the problem though, there are no motherboards that actually support ECC memory and the RAM support is TERRIBLE on ryzen so far.

I have a ryzen 7 1700 with an asus prime x370-pro and it's been terribly buggy. I was hoping the new server platform chipsets were more stable and at least had UEFI that worked with ECC RAM."

Yes, but support for almost any new hardware is rough at first. You likely won't see any reasonable boards for DIY EPYC servers for at least a few months as it is, and even then they're likely to be just as buggy as Ryzen boards are now. This is perfectly normal, and I really don't get why people assume everything will just work perfectly on release.

That said, I'm surprised you've had issues to that degree, considering that I've dealt with two different Ryzen systems with boards from two different manufacturers (one ASUS, one MSI) that have shown zero issues beyond the poorly documented on-board sensors (though that is normal on almost any board these days).

"Not to mention most of the ryzen consumer boards are designed for gamers with LED lights and things. Not very server friendly."

How exactly is LED lighting not server friendly? You can turn it off in the firmware on the board you mentioned, and even on boards where you can't, it's not enough of a power draw to make much of a difference. I did some testing with the X370 Prime we've got in one of the new systems at work, and the difference in power consumption was so low that it had zero measurable impact on the draw through the system's 80 Plus Platinum certified PSU.

"As for the other comment about using yesteryear AMD server platforms, I want something that supports pciE SSD.

I stand behind my claim that AMD has nothing for the low end server market right now."

Considering that no board, server or otherwise, from the last decade should have issues with any PCIe-connected SSD other than an NVMe device, I'm going to assume you're talking about NVMe storage. Given that, and the fact that you apparently have enough load on your webserver to warrant that kind of upgrade, you really aren't looking at what most people would consider a 'low-end' system, even if you only need an E3 equivalent to power it.

Put another way, even high-end web servers can get by with a baseline E3 if they're serving static content or farming the content generation out to other systems, but they need good RAM and good storage regardless of what processor they are using (unless all the content is dynamically generated, in which case they generally don't need particularly amazing storage).

Reply Score: 2

RE[4]: Odd lineup
by Alfman on Wed 21st Jun 2017 15:57 UTC in reply to "RE[3]: Odd lineup"
Alfman Member since:
2011-01-28

ahferroin7,

"Put another way, even high-end web servers can get by with a baseline E3 if they're serving static content or farming the content generation out to other systems, but they need good RAM and good storage regardless of what processor they are using (unless all the content is dynamically generated, in which case they generally don't need particularly amazing storage)."

Obviously it depends; some webservers are CPU bound and others are I/O bound.

On my webservers I find that my clients' workloads are bottlenecked not by parallelism but by single-threaded performance, because some modern frameworks like Magento can be relatively slow even when nothing else is running. I haven't really seen 100% CPU utilization across all cores on my clients' web servers, so adding more cores at the expense of single-core speed would be counterproductive. Faster cores matter more than core counts, in their cases anyway.

As far as disk I/O goes, it's dominated by fetching product images; most of the web pages and database tables just stay in RAM, with occasional writes.

There are maintenance jobs on these webservers that are definitely I/O bound, and that's when we see I/O bottlenecks, although they run at night when there isn't traffic anyway.

While we could come up with a generalized profile for webservers, I tend to approach every client's needs on a case by case basis - one size does not fit all.
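The latency-versus-throughput distinction here can be sketched with a toy model (it assumes a page render is a fixed lump of single-threaded work; all numbers are made up):

```python
def render_latency(work, clock_ghz):
    """Seconds to render one page when the render is single-threaded:
    only clock speed helps here, extra cores do nothing for this number."""
    return work / clock_ghz

def peak_throughput(cores, clock_ghz, work):
    """Pages/sec once every core has a request to chew on: here core
    count and clock trade off evenly."""
    return cores * clock_ghz / work

# Per-page latency: the 4 GHz part halves it, core count is absent.
fast_page = render_latency(work=8.0, clock_ghz=4.0)  # 2.0
slow_page = render_latency(work=8.0, clock_ghz=2.0)  # 4.0
```

Under light, bursty traffic the user-visible number is the latency, which is where single-threaded speed dominates; only under sustained concurrent load does the throughput formula, and with it the core count, take over.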

Reply Score: 2

RE[5]: Odd lineup
by lancealot on Thu 22nd Jun 2017 00:38 UTC in reply to "RE[4]: Odd lineup"
lancealot Member since:
2007-02-25

"ahferroin7,

Put another way, even high-end web servers can get by with a baseline E3 if they're serving static content or farming the content generation out to other systems, but they need good RAM and good storage regardless of what processor they are using (unless all the content is dynamically generated, in which case they generally don't need particularly amazing storage).

Obviously it depends, some webservers are CPU bound and others are IO bound.

On my webservers I find that my client's workloads are not bottlenecked by parallelism but by single threaded performance because some modern frameworks like magento can be relatively slow even when nothing else is running. I haven't really seen 100% CPU utilization across all cores on my client's web servers, so adding more cores at the expense of single core speed would be counterproductive. Having faster processors is more important than core counts, in their cases anyways.

As far as disk IO goes, it's dominated by fetching product images, most of the web pages and database tables just remain in RAM with occasional writes.

There are maintenance jobs on these webservers that are definitely IO bound, and for us that's when we see IO bottlenecks, although they run at night when there isn't traffic anyways.

While we could come up with a generalized profile for webservers, I tend to approach every client's needs on a case by case basis - one size does not fit all."


How are core counts not important for the example you gave (Magento) if the website is getting hit concurrently? If you have it set up correctly, Magento, which runs on PHP, will hand the PHP processing out to processes/threads (for example in an FCGI setup), and each of those processes/threads will run on a different core. So yes, cores are important for any PHP site generating dynamic pages under high load: as visitors increase, the PHP processing spreads out over the cores (or at least it should). For a web server serving only static content, the CPU matters little, so that's a case where, as you said, you could put a lower-end CPU in. Another option is to use containers, where you can mix and match things and apply QoS limits: build a more generalized system profile, then tailor it for specific purposes (containers), all on shared hardware.

Now, if you are forced to use PHP software that is a hog (such as Magento) and it causes single-thread performance spikes, rather than throwing more CPU MHz at the issue, you'd best start with optimization. For example, use a Magento version that supports PHP 7.x (one of the fastest interpreted language runtimes there is currently), and make sure you're using OPcache (on first run it puts the bytecode into shared memory for future execution). If after that you still have single-threaded performance spikes, consider other caching techniques (web-server or software-based). If all that fails, then consider a bigger CPU for the needed single-thread performance; what I would do, if single-thread performance was that big of a hit, is switch to software that is not so bloated. It is counterproductive, money-wise, to throw hardware at very poorly written software. If there is something more efficient, use it if possible. Magento is not the only PHP e-commerce package out there, and I consider it one of the most bloated.

I agree with you that most of the systems mentioned in this thread (sendmail, dovecot, rsync, ftp, subversion, apache, GNU mailman) are mostly doing reads, and will benefit most from a lot of RAM (I go with ECC RAM for reliability) and a good filesystem that uses that RAM to cache reads. Reads served out of RAM will usually be faster than any SSD (PCIe or otherwise). SSDs come in handy as a read/write cache in front of spinning drives for whatever RAM can't hold; I/O is a layered approach. The only negative of using RAM as cache is that it needs a warm-up period, though I know there are more advanced methods around that.
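The RAM-first, layered read path described above can be sketched as a tiny LRU cache in front of a slow backing store (a dict stands in for the disk; the filenames and capacity are made up):

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache: serve repeat reads from RAM, fall back to
    the (slow) backing store on a miss. A sketch of the layering the
    comment describes, not a real page cache."""

    def __init__(self, backing_store, capacity=2):
        self.store = backing_store        # a dict standing in for disk
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)   # mark as recently used
            return self.cache[key]
        self.misses += 1                  # cold read: go to "disk"
        value = self.store[key]
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

disk = {"index.html": "<html>...</html>", "logo.png": b"\x89PNG"}
cache = ReadCache(disk)
cache.read("index.html")   # miss: this is the warm-up period
cache.read("index.html")   # hit: served from RAM
```

The first read is the warm-up cost the comment mentions; every repeat read after that never touches the backing store.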

In the end, the bottleneck for these types of systems will usually be network bandwidth, except for expensive filesystem tasks (like running find over a large filesystem), which you mostly want to avoid when possible or, as you said, run at night when I/O activity is lower. That is why people set up front-end systems that hand off to back-end processing systems in a load-balanced fashion: it spreads the network load and can route tasks to purpose-built systems (like one with a TPU, in the case of AI processing at Google).

So I agree with a lot of what was said minus the few objections I had above.

Edited 2017-06-22 00:39 UTC

Reply Score: 1

RE: Odd lineup
by tylerdurden on Wed 21st Jun 2017 21:06 UTC in reply to "Odd lineup"
tylerdurden Member since:
2009-03-17

It seems AMD is initially targeting EPYC at the data center space, so small entry-level server applications seem to be MIA.

I think AMD is currently missing a bit of the high-end workstation/low-end server space. We'll see.

Reply Score: 2

It's About Time
by Pro-Competition on Wed 21st Jun 2017 01:00 UTC
Pro-Competition
Member since:
2007-08-20

"For the past few years, the processor market was boring and dominated by Intel.

This is the year everything changes."


And thank goodness for it!

Reply Score: 2

RE: It's About Time
by _txf_ on Wed 21st Jun 2017 01:37 UTC in reply to "It's About Time"
_txf_ Member since:
2008-03-17

"And thank goodness for it!"

Name checks out

Reply Score: 2

Infinity Fabric
by Chrispynutt on Wed 21st Jun 2017 12:04 UTC
Chrispynutt
Member since:
2012-03-14

Infinity Fabric is the kind of solution only a smaller player could be forced to invent.

AMD can't afford loads of unique wafers, unlike Intel. So basically every Zen chip we have seen so far uses the same module: Ryzen uses one, Threadripper two, and EPYC 7000 up to four, all connected via IF, either internally or on the package.

Where this has a major benefit is that getting Threadripper to 4GHz requires just two working modules, with 8 cores per module. So basically you have to strike lucky only 8 times, twice, which is much easier than Intel's 16 in a row.

I am really butchering my meaning, but I hope you get what I mean.

It's why higher-core-count chips traditionally have lower clocks: it's harder to keep winning the silicon lottery across more and more discrete cores, i.e. getting them all good to 4GHz, so you end up selling a CPU with only one 4GHz core and 15 3GHz cores. AMD, however, doesn't have to enter the 16- or 32-core lottery, just the 8-core one, time and time again. The odds are much better.
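The lottery arithmetic can be made concrete (the 90% per-core pass rate is an invented number, purely to illustrate):

```python
def fully_good(p_core, cores):
    """Probability that every core on one die bins at the target clock,
    assuming each core passes independently."""
    return p_core ** cores

p = 0.9  # hypothetical chance that one core hits 4GHz

mono_16 = fully_good(p, 16)   # one monolithic 16-core die: ~18.5%
module_8 = fully_good(p, 8)   # one 8-core module: ~43%

# Per 16 cores of silicon: a monolithic design yields mono_16 sellable
# 16-core parts per die, while two independently binned 8-core modules
# average module_8 sellable pairs, more than double the yield here
# (ignoring pairing granularity at the packaging step).
```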

Intel's crazy high prices might have been margin, but also cost: both unique production lines and lots of goes at the lottery.

Edited 2017-06-21 12:06 UTC

Reply Score: 2

RE: Infinity Fabric
by tylerdurden on Wed 21st Jun 2017 21:13 UTC in reply to "Infinity Fabric"
tylerdurden Member since:
2009-03-17

Intel does not make as many different dies as people think. Most differentiation in the Xeon and Core lines is due to binning.

But you're correct, AMD's multi-module strategy seems to have paid off this time, and it seems they're getting great yields. You're missing, though, that the cost of packaging those multi-module parts is not trivial.

Reply Score: 2

RE[2]: Infinity Fabric
by The123king on Thu 22nd Jun 2017 09:38 UTC in reply to "RE: Infinity Fabric"
The123king Member since:
2009-05-28

But at least there's a large supply of silicon to package! Each die that's packaged is another saleable product. Sure, if you're not packaging them you're not spending on packaging, but you also don't have a product to sell, which means you're not making any profit.

I'm sure the slightly higher packaging costs are more than outweighed by the high yields anyway, so I expect AMD is making a tidy profit on these chips.

Reply Score: 2

RE[3]: Infinity Fabric
by tylerdurden on Fri 23rd Jun 2017 06:46 UTC in reply to "RE[2]: Infinity Fabric"
tylerdurden Member since:
2009-03-17

I sure hope AMD is making some profit on their x86 side. Intel needs to be kept honest, or else we end up with the stagnation of the past half-decade.

I wonder if AMD can translate the multi-module tech to their GPUs as well. That would be an interesting strategy versus the huge monolithic dies of the past decade of high-end GPUs.

Reply Score: 2