Linked by Thom Holwerda on Fri 28th Jul 2017 19:49 UTC
AMD

So far all the products launched with Zen have aimed at the upper echelons of the PC market, covering mainstream, enthusiast and enterprise customers - areas with high average selling prices about which a significant number of column inches are written. But the volume segment, key for metrics such as market share, is in the entry-level products. So far the AMD Zen core, and the octo-core Zeppelin silicon design, has been battling at the high end. With Ryzen 3, it comes to play in the budget market.

AnandTech's review and benchmarks of the new low-end Ryzen 3 processors.

Thread beginning with comment 647251
RE[2]: Comment by raom
by raom on Sat 29th Jul 2017 03:17 UTC in reply to "RE: Comment by raom"
raom (Member since 2016-06-26)

You mean x86 has already been optimized about as much as it can be? Or is it just Intel being lazy due to a lack of competition?

Reply Parent Score: 1

RE[3]: Comment by raom
by kriston on Sat 29th Jul 2017 03:58 in reply to "RE[2]: Comment by raom"
kriston (Member since 2007-04-11)

I think Centaur/VIA and NexGen would argue with that statement. Centaur/VIA shipped both designs that ran x86 instructions directly on CISC cores and designs that emulated x86 instructions on RISC cores. The latter approach won out.

Think of all the energy we could save by not emulating x86 and x86-64 on RISC cores. We're talking ARM-level energy savings and a physical footprint reduced by more than a third. Think of where we'd be if Itanium had succeeded.

But, backward compatibility, fam.

Reply Parent Score: 2

RE[4]: Comment by raom
by Brendan on Sat 29th Jul 2017 11:18 in reply to "RE[3]: Comment by raom"
Brendan (Member since 2005-11-16)

Hi,

"I think Centaur/VIA and NexGen would argue with that statement. Centaur/VIA shipped both designs that ran x86 instructions directly on CISC cores and designs that emulated x86 instructions on RISC cores. The latter approach won out."

Both Transmeta and NexGen did "dynamic translation" to convert 80x86 code into RISC-like instructions, while everyone else used either direct execution or micro-ops. Because of this, Transmeta and NexGen were among the earliest 80x86 clone vendors to die.
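
To make the distinction concrete, here's a minimal sketch of how a decoder might "crack" one memory-operand CISC instruction into RISC-like micro-ops. This is my illustration only - the MicroOp format, opcode names and register numbers are all invented, not taken from any vendor's actual design:

    #include <stdio.h>

    /* Invented micro-op format, purely for illustration. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } UopKind;

    typedef struct {
        UopKind kind;
        int dst, src;   /* register numbers (address register for LOAD/STORE) */
    } MicroOp;

    /* Crack a CISC-style "add [addr_reg], src_reg" (a read-modify-write
     * on memory) into three RISC-like micro-ops: load, add, store. */
    static int crack_add_mem_reg(int addr_reg, int src_reg, MicroOp *out)
    {
        int tmp = 99; /* a renamed temporary register, made up for the sketch */
        out[0] = (MicroOp){ UOP_LOAD,  tmp, addr_reg };  /* tmp <- mem[addr]  */
        out[1] = (MicroOp){ UOP_ADD,   tmp, src_reg  };  /* tmp <- tmp + src  */
        out[2] = (MicroOp){ UOP_STORE, addr_reg, tmp };  /* mem[addr] <- tmp  */
        return 3; /* one CISC instruction became three micro-ops */
    }

    int main(void)
    {
        MicroOp uops[3];
        int n = crack_add_mem_reg(5, 2, uops);
        printf("1 CISC instruction -> %d micro-ops\n", n);
        return 0;
    }

The difference between hardware micro-ops and Transmeta/NexGen-style dynamic translation is mainly where this step happens: in the decode stage on every fetch, versus in software that translates once and caches the result.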

For the others, from memory (potentially wrong in places): AMD still lives (and is well known); SiS/Rise still lives (it was taken over by an embedded systems company that continues to produce low-power/fanless systems under the name "Vortex86"); Centaur/Cyrix moved to VIA (and nobody seems to have heard anything since the dual- and quad-core variants of VIA Nano several years ago); NSC got taken over by AMD (who replaced "NSC Geode" with "AMD Geode" - mostly an Athlon-based core - and no trace of NSC was seen after that). I can't quite remember what happened to UMC, but I think they died early too.

"Think of all the energy we could save by not emulating x86 and x86-64 on RISC cores. We're talking ARM-level energy savings and a physical footprint reduced by more than a third. Think of where we'd be if Itanium had succeeded."

Professional CPU designers (not me) have estimated the difference as "less than 1%". The real difference lies in design goals (fast single-thread performance, and all the power consumed by higher clocks, larger caches, branch prediction, wider datapaths, higher bandwidth for RAM and interconnects, etc.) that have nothing to do with the instruction set.

"But, backward compatibility, fam."

Yes; backward compatibility is far more important than theoretical gains that turn out to be insignificant in practice. It's the reason 80x86 managed to push every other CPU out of the "mainstream desktop/server" space (including Intel's own Itanium), become dominant (despite IBM designing the "PC" as a short-term thing), and remain dominant for multiple decades and still counting.

ARM actually started out in home computers (Acorn) and, like all of the others that were around in the 1980s (Commodore, BBC, Tandy, Apricot, ...), got beaten severely by 80x86 PCs. The CPU part got spun off (becoming "ARM Holdings") and survived in the embedded market (where the major deciding factor is price, not quality, performance or power consumption) for a few decades because it couldn't compete otherwise. It's only recently (due to smartphones, and especially with ARMv8) that ARM has started being more than "bargain basement"; and that's mostly because ARM CPUs have grown to become significantly more complex than "CISC" originally was.

Mostly, RISC was guaranteed to fail from the beginning. "Let's do twice as many instructions, where every instruction does half as much work" hits clock frequency and TDP constraints (and instruction fetch problems) much sooner than CISC does. Unless the only thing you care about is design costs, it only really seemed to make sense for a brief period in the early 1990s, during the "clock frequency wars" of 80x86 that ended when Intel released the Pentium 4/NetBurst and everyone finally realised that clock frequency alone is worthless if you can't sustain "effective work done per cycle".
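
The "effective work done per cycle" point reduces to simple arithmetic: throughput is clock frequency times instructions retired per cycle, divided by the number of instructions the task needs. A rough back-of-the-envelope sketch, with numbers invented purely for illustration:

    #include <stdio.h>

    /* Throughput model: tasks/second = (freq * ipc) / insns_per_task.
     * All figures below are made up for the example, not measurements. */
    static double throughput(double freq_ghz, double ipc, double insns_per_task)
    {
        return (freq_ghz * 1e9 * ipc) / insns_per_task;
    }

    int main(void)
    {
        /* "CISC-ish": fewer instructions per task, modest clock. */
        double cisc = throughput(2.4, 1.5, 100.0);
        /* "Twice the instructions, half the work each", higher clock. */
        double risc = throughput(3.2, 1.5, 200.0);
        printf("cisc-ish: %.2e tasks/s\n", cisc);  /* 3.60e+07 */
        printf("risc-ish: %.2e tasks/s\n", risc);  /* 2.40e+07 */
        /* The higher clock doesn't compensate unless it can scale ~2x,
         * which is exactly where TDP and fetch bandwidth bite first. */
        return 0;
    }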

- Brendan

Reply Parent Score: 2

RE[4]: Comment by raom
by tylerdurden on Sat 29th Jul 2017 23:11 in reply to "RE[3]: Comment by raom"
tylerdurden (Member since 2009-03-17)

The decode overhead in a modern x86 machine is in the single-digit percentage range of the total, close to a rounding error at this point.

It is not much of an issue at this point, and in some cases CISC instructions end up being more power efficient (denser code means fewer instruction memory accesses).

I keep repeating this on this site, since most people don't seem to understand computer architecture: ISA and microarchitecture have been decoupled concepts for eons. Most of the power efficiency of ARM is due to microarchitectural characteristics, not to its ISA (except for extreme cases, like Thumb parts for deeply embedded markets).
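
To put a toy number on the instruction-memory-access point: a variable-length encoding can pack the same work into fewer bytes than a fixed 4-byte-per-instruction encoding, so the fetch path moves fewer bytes. The encoding lengths below are invented for the sketch, not taken from any real ISA manual:

    #include <stdio.h>

    /* Hypothetical encoded lengths (bytes) for the same short task:
     * a variable-length "CISC" stream vs a fixed 4-byte "RISC" stream. */
    static const int cisc_insn_bytes[] = { 2, 3, 4, 2, 6 };          /* 5 insns */
    static const int risc_insn_bytes[] = { 4, 4, 4, 4, 4, 4, 4, 4 }; /* 8 insns */

    static int total(const int *lens, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += lens[i];
        return sum;
    }

    int main(void)
    {
        printf("cisc stream: %d bytes\n", total(cisc_insn_bytes, 5));  /* 17 */
        printf("risc stream: %d bytes\n", total(risc_insn_bytes, 8));  /* 32 */
        /* Fewer bytes fetched means fewer instruction-cache accesses
         * for the same work - the power-efficiency edge noted above. */
        return 0;
    }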

Reply Parent Score: 2

RE[3]: Comment by raom
by Alfman on Sat 29th Jul 2017 04:43 in reply to "RE[2]: Comment by raom"
Alfman (Member since 2011-01-28)

raom,


"You mean x86 has already been optimized about as much as it can be? Or is it just Intel being lazy due to a lack of competition?"

I don't believe it has so much to do with Intel as with the easy engineering being behind us; the costs of moving forward are greater than ever.

But I do agree with you that there isn't enough competition. One of the reasons could be that the costs of modern semiconductor facilities are absurdly high:


https://www.technologyreview.com/s/418576/the-high-cost-of-upholding...

http://www.economist.com/news/21589080-golden-rule-microchips-appea...

This resulted in mass consolidation, with most companies exiting the chip fabrication business (e.g. AMD outsourcing its CPU manufacturing).


Also, any would-be competitors would likely have to contend with many billions of dollars' worth of patent lawsuits and/or royalties, which obviously favors the old incumbents and protects them from new competition.

Another reality is that the consumer PC market is steadily shrinking year over year.
http://www.idc.com/getdoc.jsp?containerId=prUS42214417

This is not to say people don't want faster computers, but the cost of building ever more expensive fabs may be difficult to justify given the weak market.

Reply Parent Score: 3

RE[4]: Comment by raom
by Sidux on Sat 29th Jul 2017 05:52 in reply to "RE[3]: Comment by raom"
Sidux (Member since 2015-03-10)

Prices generally went up due to data centers, which grew like mushrooms as soon as cloud operations became a thing.
The funny thing is that everyone lured customers in with "unlimited" plans for storing and processing data, but now that they've gained momentum they're removing that option, starting with individual accounts.
As for consumers, it's no secret that everything was supposed to run via the web at some point.
Smartphones will see the same market stagnation as PCs, for much the same reasons.

Reply Parent Score: 2