Linked by Thom Holwerda on Fri 28th Jul 2017 19:49 UTC
AMD

So far all the products launched with Zen have aimed at the upper echelons of the PC market, covering mainstream, enthusiast and enterprise customers - areas with high average selling prices about which a significant number of column inches are written. But the volume segment, key for metrics such as market share, is in the entry-level products. So far the AMD Zen core, and the octo-core Zeppelin silicon design, has been battling at the high end. With Ryzen 3, it comes to play in the budget market.

AnandTech's review and benchmarks of the new low-end Ryzen 3 processors.

Binning
by Treza on Fri 28th Jul 2017 22:52 UTC
Treza
Member since:
2006-01-11

I find it quite remarkable that all the Ryzen 3/5/7 chips are made from the same die, IIRC "Zeppelin", with different binning, disabled cores or multithreading.

The production cost of the lowest-end R3 is the same as that of the highest-performance R7, except for the bundled cooler.

Reply Score: 2

RE: Binning
by kriston on Sat 29th Jul 2017 03:56 UTC in reply to "Binning"
kriston Member since:
2007-04-11

Well, it's a well-known practice to do this, both for marketing reasons and for practical ones. Intel has done this at least as far back as differentiating Celeron SKUs from Pentiums. If one of the cache modules has a flaw but the rest of the chip is fine, sell it as a feature-reduced chip for less money. The same goes for multiple cores and heat-related failures in certain CPU modules.

At least in the old days, most of the time the flaws were in the cache, so reduce the cache and turn off Hyper-Threading. I wonder what AMD's official word is on this practice for modern Ryzen CPUs.

Reply Score: 1

RE[2]: Binning
by grat on Sat 29th Jul 2017 04:55 UTC in reply to "RE: Binning"
grat Member since:
2006-02-02

I seem to recall the 486SX was a 486DX that failed QC, so they burned out the links to the Floating Point Unit.

Reply Score: 3

RE[3]: Binning
by JLF65 on Sat 29th Jul 2017 18:58 UTC in reply to "RE[2]: Binning"
JLF65 Member since:
2005-07-06

Motorola wouldn't even burn out the links - if an FPU or MMU failed, they sold the chip AS-IS with a different label and told programmers to simply not use FPU or MMU instructions. There was no way in code to tell if the FPU or MMU was working or not.

Reply Score: 3

RE: Binning
by bassbeast on Sat 29th Jul 2017 06:42 UTC in reply to "Binning"
bassbeast Member since:
2007-11-11

AMD has done this for years. For example, the Zosma Phenom quad was a Phenom hexacore with two cores turned off, and there are Athlon X4s on which you can attempt to reactivate the cache, which turns them into Phenoms.

And IIRC the FX chips are all a single chip... the FX-8350. They are either binned faster or slower or have some cores disabled, but they are all just 8350s; even the "e" series were just "gold binned" FX-8350s that would run at 95 W with a slightly lower clock speed.

You really have to give AMD credit for doing that. Their yields must be insane from making just one chip and then disabling cores/cache to fit different markets: no wasted chips, no need for multiple lines. Quite smart.

Reply Score: 3

RE[2]: Binning
by Alfman on Sat 29th Jul 2017 08:28 UTC in reply to "RE: Binning"
Alfman Member since:
2011-01-28

bassbeast,

AMD has done this for years. For example, the Zosma Phenom quad was a Phenom hexacore with two cores turned off, and there are Athlon X4s on which you can attempt to reactivate the cache, which turns them into Phenoms.


I once bought two Athlon XP desktop processors and used conductive ink to repair the broken traces that differentiated them from the Athlon MP, which could be used in dual-CPU boards.

You can see those tiny traces in these pictures:
http://www.cpu-world.com/CPUs/K7/TYPE-Athlon%20MP.html

Edited 2017-07-29 08:30 UTC

Reply Score: 2

RE[3]: Binning
by Drumhellar on Sat 29th Jul 2017 18:05 UTC in reply to "RE[2]: Binning"
Drumhellar Member since:
2005-07-12

Hah. I did that to unlock the multiplier on my 700 MHz Duron.

Was able to get that processor up to 1 GHz and keep it stable.

Reply Score: 2

RE: Binning
by unclefester on Sun 30th Jul 2017 01:08 UTC in reply to "Binning"
unclefester Member since:
2007-01-13

I find it quite remarkable that all the Ryzen 3/5/7 chips are made from the same die, IIRC "Zeppelin", with different binning, disabled cores or multithreading.


Car makers have been doing the same thing with their engines for a century. They deliberately downgrade performance (lower compression, rev limiting, smaller displacement, etc.) on cheaper models to encourage buyers to upgrade. Now that turbo engines are common, they can tweak the engine management software to alter the power output by a huge margin for almost zero cost.

Reply Score: 2

RE[2]: Binning
by Alfman on Sun 30th Jul 2017 01:59 UTC in reply to "RE: Binning"
Alfman Member since:
2011-01-28

unclefester,

Car makers have been doing the same thing with their engines for a century. They deliberately downgrade performance (lower compression, rev limiting, smaller displacement, etc.) on cheaper models to encourage buyers to upgrade. Now that turbo engines are common, they can tweak the engine management software to alter the power output by a huge margin for almost zero cost.


This is probably also why they lobby for the DMCA to apply to cars.


http://www.autoblog.com/2015/04/20/automakers-gearheads-car-repairs...

In any other "legit" copyright case I'd say making your own changes to your own legal copy would not reasonably be construed as an infringement, but the DMCA made it illegal merely to circumvent copyright protections even if you aren't otherwise violating any copyrights. And this is why manufacturers became fond of the DMCA, but the fact that this keeps happening just highlights how this was always just bad legislation, we should be repealing that!

Reply Score: 2

RE: Binning
by Lennie on Mon 31st Jul 2017 16:25 UTC in reply to "Binning"
Lennie Member since:
2007-09-22

Much more remarkable is why you would call something your company makes "Zeppelin". They didn't have the best name in the past.

Especially for a company (AMD) which is known for having had products with heating problems.

Reply Score: 2

Comment by raom
by raom on Sat 29th Jul 2017 01:18 UTC
raom
Member since:
2016-06-26

I just want Intel to make significantly faster processors for each generation again, like in the Nehalem days.

Reply Score: 1

RE: Comment by raom
by Alfman on Sat 29th Jul 2017 01:59 UTC in reply to "Comment by raom"
Alfman Member since:
2011-01-28

raom,

I just want Intel to make significantly faster processors for each generation again, like in the Nehalem days.


I'd like that too, but unfortunately it seems we've reached the point of diminishing returns. The problem is that while having X times more speed is more useful than having X times as many cores, even a small increase in speed now requires an exponential increase in cost, which consumers are reluctant to pay for. This is why chip vendors have shifted towards pushing more cores instead.
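
To put some rough numbers on that, here's a quick sketch using Amdahl's law; the 80% parallel fraction is just an assumed figure for illustration, not a measurement:

# Amdahl's law: if a fraction p of the work can run in parallel,
# N cores give a speedup of 1 / ((1 - p) + p / N),
# while an N-times-faster core speeds everything up by N.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.8       # assumed parallel fraction of a typical desktop workload
n = 4         # compare 4 cores vs a hypothetical 4x faster single core
print(amdahl_speedup(p, n))   # ~2.5x from 4 cores
print(n)                      # 4x from a 4x faster core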

At the higher price points, CPUs would have to compete with other technologies like FPGAs that are both faster and more efficient, so that's probably the direction the industry will be moving in once the economies of scale for those alternatives kick in.

Edited 2017-07-29 02:02 UTC

Reply Score: 2

RE[2]: Comment by raom
by raom on Sat 29th Jul 2017 03:17 UTC in reply to "RE: Comment by raom"
raom Member since:
2016-06-26

You mean x86 has been optimized about as much as it can be already? Is it not just Intel being lazy due to no competition?

Reply Score: 1

RE[3]: Comment by raom
by kriston on Sat 29th Jul 2017 03:58 UTC in reply to "RE[2]: Comment by raom"
kriston Member since:
2007-04-11

I think Centaur/VIA and NexGen would argue with that statement. In the case of Centaur/VIA, they had both chips that ran x86 instructions directly on CISC cores and chips that emulated x86 instructions on RISC cores. The latter approach has won out.

Think of all the energy we could save by not emulating x86 and x86-64 on RISC cores. We're talking ARM-level energy savings and reducing your physical footprint by more than a third. Think if Itanium succeeded.

But, backward compatibility, fam.

Reply Score: 2

RE[4]: Comment by raom
by Brendan on Sat 29th Jul 2017 11:18 UTC in reply to "RE[3]: Comment by raom"
Brendan Member since:
2005-11-16

Hi,

I think Centaur/VIA and NexGen would argue with that statement. In the case of Centaur/VIA, they had both chips that ran x86 instructions directly on CISC cores and chips that emulated x86 instructions on RISC cores. The latter approach has won out.


Both Transmeta and NexGen did "dynamic translation" to convert 80x86 into RISC-like instructions, while everyone else either used direct execution or micro-ops. Because of this, Transmeta and NexGen were among the earliest 80x86 clone vendors to die.

For the others, from memory (potentially wrong in places); AMD still lives (and is well known), SiS/Rise still lives (got taken by an embedded system company that continues to produce low power/fanless systems under the name "Vortex86"), Centaur/Cyrix moved to VIA (and nobody seems to have heard anything since the dual and quad core variants of VIA Nano several years ago), NSC got taken by AMD (who replaced "NSC Geode" with "AMD Geode" - mostly an Athlon based core - and no trace of NSC was seen after that). I can't quite remember what happened to UMC but I think they died early too.

Think of all the energy we could save by not emulating x86 and x86-64 on RISC cores. We're talking ARM-level energy savings and reducing your physical footprint by more than a third. Think if Itanium succeeded.


It's been estimated by professional CPU designers (not me) as "less than 1% difference". The primary difference is design goals (fast single-thread performance, and all the power consumed by higher clocks, larger caches, branch prediction, wider datapaths, higher bandwidth for RAM and interconnects, etc.), none of which has anything to do with the instruction set.

But, backward compatibility, fam.


Yes; backward compatibility is far more important than insignificant theoretical gains that don't actually exist in practice. It's the reason 80x86 managed to push every other CPU out of "mainstream desktop/server" (including Intel's own Itanium), become dominant (despite IBM designing "PC" as short-term thing), and remain dominant for "multiple decades and still counting".

ARM actually started out in home computers (Acorn) and, like all of the others (Commodore, BBC, Tandy, Apricot, ...) that were around in the 1980s, got beaten severely by 80x86 PCs. The CPU part got spun off (becoming "ARM Holdings") and continued in the embedded market (where the major deciding factor is price and not quality, performance or power consumption) for a few decades because they couldn't compete otherwise. It's only recently (due to smartphones, and especially with ARMv8) that they've started being more than "bargain basement"; and that's mostly only because ARM CPUs have grown to become significantly more complex than "CISC" originally was.

Mostly, RISC was guaranteed to fail from the beginning. "Let's do twice as many instructions where every instruction does half as much work" hits the clock frequency and TDP constraints (and instruction fetch problems) much sooner than CISC; and (unless the only thing you care about is design costs) only really seemed to make sense for a brief period in the early 1990s (during the "clock frequency wars" of 80x86, that ended when Intel released Pentium 4/Netburst and everyone finally realised clock frequency alone is worthless if you can't sustain "effective work done per cycle").

- Brendan

Reply Score: 2

RE[5]: Comment by raom
by JLF65 on Sat 29th Jul 2017 19:05 UTC in reply to "RE[4]: Comment by raom"
JLF65 Member since:
2005-07-06

Mostly, RISC was guaranteed to fail from the beginning.


Then why is EVERY x86/x86-64 internally a RISC processor? Because RISC WON the processor war, but businesses still demand backwards compatibility.

RISC makes pipelining and superscalar execution FAR easier to accomplish than CISC, and THAT is what makes modern processors so much faster.

Reply Score: 4

RE[6]: Comment by raom
by Alfman on Sat 29th Jul 2017 19:48 UTC in reply to "RE[5]: Comment by raom"
Alfman Member since:
2011-01-28

JLF65,

Then why is EVERY x86/x86-64 internally a RISC processor? Because RISC WON the processor war, but businesses still demand backwards compatibility.

RISC makes pipelining and superscalar execution FAR easier to accomplish than CISC, and THAT is what makes modern processors so much faster.



Yes, it depends very much on how we use the terms. The RISC cores have won out, but I think maybe you and Brendan might agree that by converting CISC into RISC-like micro-ops, the runtime performance of CISC and RISC becomes very similar, and in this sense the overhead of having complex instructions is minimized. Of course it's still there in an absolute sense, but those transistors are operating in parallel and are not the main CPU bottleneck, although they could account for more power consumption.

Reply Score: 2

RE[7]: Comment by raom
by Kochise on Sat 29th Jul 2017 20:27 UTC in reply to "RE[6]: Comment by raom"
Kochise Member since:
2006-03-03

Out-of-order execution and hyper-threading also really improved things.

Reply Score: 2

RE[7]: Comment by raom
by JLF65 on Sat 29th Jul 2017 21:47 UTC in reply to "RE[6]: Comment by raom"
JLF65 Member since:
2005-07-06

Yes, the CISC to RISC translation layer gives the processor a chance to optimize the instruction sequence in a way that can't be matched by any existing compiler. It tailors the instructions to the hardware in a way that is nearly impossible outside the processor. Out-of-order execution, fully utilizing parallel execution units, etc. It's kind of the best of both worlds. I just wish they did this with the 680x0 instead of the x86 - I don't know ANYONE who likes the x86 ISA. AMD did a good job in making the 64-bit spec, but it's still just polishing a turd.

Reply Score: 2

RE[6]: Comment by raom
by Treza on Sat 29th Jul 2017 22:09 UTC in reply to "RE[5]: Comment by raom"
Treza Member since:
2006-01-11

Then why is EVERY x86/x86-64 internally a RISC processor?


I'm getting tired of this nonsense.

There is no "internal RISC processor". There is microcode. Microinstructions do not make an instruction set, Intel CPUs used to execute micro-ops sequentially (8086...80386), now it is pipelined, out of order, speculative, ... but this do not make a RISC.

RISC/CISC is about instruction sets, and one of the ideas behind RISC was discarding the microcode and replacing it with a straightforward instruction decoder.

Reply Score: 2

RE[7]: Comment by raom
by tylerdurden on Sat 29th Jul 2017 23:39 UTC in reply to "RE[6]: Comment by raom"
tylerdurden Member since:
2009-03-17

I think part of the confusion also comes from the fact that originally there were many research teams working on RISC designs, and each team had a different definition of what the term meant.

Some RISC projects were indeed about exposing both the microcode and the pipeline, thus passing the programming complexity on to the compiler. Ironically, most RISC designs originally were not intended to be programmed by hand...

Reply Score: 2

RE[7]: Comment by raom
by Alfman on Sun 30th Jul 2017 00:33 UTC in reply to "RE[6]: Comment by raom"
Alfman Member since:
2011-01-28

Treza,

I'm getting tired of this nonsense.

There is no "internal RISC processor". There is microcode. Microinstructions do not make an instruction set, Intel CPUs used to execute micro-ops sequentially (8086...80386), now it is pipelined, out of order, speculative, ... but this do not make a RISC.


The problem is that "RISC" is associated with different meanings, and whether we like it or not we now have to be more concise than just saying "RISC" and automatically expecting everyone to be on the same track. And before you disagree with me, I want you to take note that your own post used two of the differing meanings:

1) "RISC/CISC is about instruction sets"
and
2) "one of the ideas behind RISC was discarding the microcode and replacing it with a straightforward instruction decoder."

I'm personally not at all bothered by this, but some posters do become frustrated when "RISC" is associated with this idea of a simple implementation even though it's commonly used in that context. Many of the arguments in years past have started with semantic differences, so I'm hoping maybe we can all explicitly move beyond that and steer the discussion to something more substantive instead ;)

With that in mind, I found this paper on the topic quite interesting:
http://research.cs.wisc.edu/vertical/papers/2013/hpca13-isa-power-s...

Edited 2017-07-30 00:33 UTC

Reply Score: 2

RE[6]: Comment by raom
by tylerdurden on Sat 29th Jul 2017 23:33 UTC in reply to "RE[5]: Comment by raom"
tylerdurden Member since:
2009-03-17

Then why is EVERY x86/x86-64 internally a RISC processor?



That's a common misconception. The reality is not that modern CISC processors are RISC internally, but rather that both CISC and RISC processor families ended up converging on the same microarchitectural principles (e.g. pipelining, superscalar execution, out-of-order execution, simultaneous multithreading, decoupling, aggressive prediction, etc.) to achieve high performance.

All modern out-of-order PowerPC, SPARC, and ARM designs do break their "RISC" instructions into simpler sub instructions internally, just like a modern out-of-order x86 does.


RISC basically means two things: fixed instruction length and register-to-register operations. A lot of stuff has been erroneously labelled "RISC" because many people keep confusing ISA with microarchitecture.

Reply Score: 2

RE[7]: Comment by raom
by JLF65 on Sun 30th Jul 2017 14:18 UTC in reply to "RE[6]: Comment by raom"
JLF65 Member since:
2005-07-06

Yes and no. Yes, the internals are not TECHNICALLY a RISC processor as such, but they DO meet your definition - those micro-ops are all the same length, and internally they only do (shadow) register-to-register operations. The microarchitecture has standardized around RISC principles, which is why so many call it RISC internally - it's closer to the truth than calling it CISC internally.

Reply Score: 2

RE[4]: Comment by raom
by tylerdurden on Sat 29th Jul 2017 23:11 UTC in reply to "RE[3]: Comment by raom"
tylerdurden Member since:
2009-03-17

The decode overhead in a modern x86 machine is in the single-digit percentages of the total, close to rounding error at this point.

It is not much of an issue at this point, and in some cases CISC instructions end up being more power efficient (instruction memory access).

I keep repeating this on this site, since most people don't seem to understand computer architecture: ISA and microarchitecture have been decoupled concepts for eons. Most of the power efficiency of ARM is due to microarchitectural characteristics, not to its ISA (except for extreme cases, like Thumb parts for deeply embedded markets).

Reply Score: 2

RE[3]: Comment by raom
by Alfman on Sat 29th Jul 2017 04:43 UTC in reply to "RE[2]: Comment by raom"
Alfman Member since:
2011-01-28

raom,


You mean x86 has been optimized about as much as it can be already? Is it not just Intel being lazy due to no competition?


I don't believe it has so much to do with Intel as with the fact that the easy engineering is behind us and the costs of moving forward are greater than ever.

But I do agree with you about there not being enough competition; one of the reasons could be that the costs of modern semiconductor facilities are absurdly high.


https://www.technologyreview.com/s/418576/the-high-cost-of-upholding...

http://www.economist.com/news/21589080-golden-rule-microchips-appea...

This resulted in mass consolidation, with most companies exiting the chip fabrication business (i.e. AMD outsourcing its CPUs).


Also, any would-be competitors would likely have to contend with many billions of dollars worth of patent lawsuits and/or royalties, which obviously favors the old incumbents and protects them from new competition.

Another reality is that the consumer PC market is steadily shrinking year over year.
http://www.idc.com/getdoc.jsp?containerId=prUS42214417

This is not to say people don't want faster computers, but the economic realities of building even more expensive fabs may be difficult to justify given the weak market.

Reply Score: 3

RE[4]: Comment by raom
by Sidux on Sat 29th Jul 2017 05:52 UTC in reply to "RE[3]: Comment by raom"
Sidux Member since:
2015-03-10

Prices generally went up due to data centers, which grew like mushrooms as soon as cloud ops became a thing.
Funny thing is that everyone lured customers with "unlimited" plans for storing and processing data, but now that they've gained momentum they're removing this option, starting with individual accounts.
As for consumers, it is no secret that everything was supposed to run via the web at some point.
Smartphones will share the same market stagnation as PCs, for very much the same reasons.

Reply Score: 2

RE[5]: Comment by raom
by Alfman on Sat 29th Jul 2017 06:30 UTC in reply to "RE[4]: Comment by raom"
Alfman Member since:
2011-01-28

Sidux,

As for consumers, it is no secret that everything was supposed to run via the web at some point.
Smartphones will share the same market stagnation as PCs, for very much the same reasons.


Yea, I agree smartphones will stagnate as well.

Although they have more planned obsolescence going for them (unlike my ancient PCs, I can't update or fix my damn phone!)

Reply Score: 2

RE[2]: Comment by raom
by _txf_ on Sat 29th Jul 2017 04:33 UTC in reply to "RE: Comment by raom"
_txf_ Member since:
2008-03-17


At the higher price points, CPUs would have to compete with other technologies like FPGAs that are both faster and more efficient, so that's probably the direction the industry will be moving in once the economies of scale for those alternatives kick in.


FPGAs are not more efficient, at least not in a way that one can state without qualification. FPGAs are generally less efficient, as they have a lot of redundant hardware, which is why some will prototype on them and then develop ASICs. The advantage of an FPGA is the ability to retarget, and massive parallelism.

Edited 2017-07-29 04:34 UTC

Reply Score: 2

RE[3]: Comment by raom
by Alfman on Sat 29th Jul 2017 06:00 UTC in reply to "RE[2]: Comment by raom"
Alfman Member since:
2011-01-28

_txf_,

FPGAs are not more efficient, at least not in a way that one can state without qualification. FPGAs are generally less efficient, as they have a lot of redundant hardware, which is why some will prototype on them and then develop ASICs. The advantage of an FPGA is the ability to retarget, and massive parallelism.


I don't quite understand your criticism, since CPUs have lots of redundancy too, with many features going unused in various use cases. Anyway, it's like saying a graphics card is not more efficient than a CPU because it has lots of redundant hardware. When you factor in the nature of the work you're computing, the value of parallel processors can make a lot of sense. Massively parallel processing is becoming more relevant to all aspects of computing: graphics/compression/AI/gaming/etc. Most FPGAs come with a traditional processor to program and control the FPGA, and the heavy work gets done by the FPGA.

Of course the technologies aren't directly comparable, but still, in terms of energy and performance this is generally true:

ASIC > FPGA > CPU.





Take bitcoin mining as a well-studied example:
https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison
https://en.bitcoin.it/wiki/Mining_hardware_comparison

I tried to take a few fair examples from various categories for illustration (the per-watt figures are worked out in the short script after the list)...

Intel:
Core i7 3930k: 66.6 Mhash/s at 190 W = 0.35 Mhash/s/W

ARM:
Cortex-A9: 0.57 Mhash/s at 0.5 W = 1.14 Mhash/s/W

AMD:
2x Opteron 6172: 55 Mhash/s at 230 W = 0.24 Mhash/s/W

Nvidia:
GTX 570: 157 Mhash/s at 219 W = 0.72 Mhash/s/W

FPGA:
X6500 FPGA Miner: 400 Mhash/s at 17 W = 23.53 Mhash/s/W

ASIC:
AntMiner S9: 14,000,000 Mhash/s at 1,375 W = 10,181.82 Mhash/s/W
BFL Monarch 700GH/s: 700,000 Mhash/s at 490 W = 1,428.57 Mhash/s/W
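
As a sanity check on those per-watt figures, here's a quick Python sketch that just divides the quoted hash rates by the quoted power draw; the names and numbers are simply the ones listed above, taken from the bitcoin wiki tables:

# Hash rate per watt for the examples quoted above (Mhash/s and watts as listed).
examples = [
    ("Core i7 3930k (CPU)",          66.6,    190.0),
    ("Cortex-A9 (CPU)",               0.57,     0.5),
    ("2x Opteron 6172 (CPU)",        55.0,    230.0),
    ("GTX 570 (GPU)",               157.0,    219.0),
    ("X6500 FPGA Miner (FPGA)",     400.0,     17.0),
    ("AntMiner S9 (ASIC)",     14_000_000.0, 1375.0),
    ("BFL Monarch (ASIC)",        700_000.0,  490.0),
]

for name, mhash_per_s, watts in examples:
    print(f"{name:26s} {mhash_per_s / watts:12.2f} Mhash/s/W")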


Obviously the FPGA and ASIC solutions have been optimized for this purpose, whereas a generic PC is not. But an efficient ARM processor does not (and cannot) match the efficiency of an FPGA, much less an ASIC, because CPUs fundamentally have to go through many more steps (i.e. transistors) to compute an algorithm. This is both slower and takes more energy.

Note that the Core i7 processor above had the benefit of a 32nm fab process whereas the Spartan-6 FPGA used a 45nm fab process, so the FPGA would have been even more efficient if it had the benefit of a 32nm fab. Intel CPUs have traditionally benefited from better fabs and not necessarily better architectures.

You and I can agree that the ASIC is better still, however ASIC chips are obviously hardcoded and cannot be reprogrammed, which is why I think FPGAs would make more sense in CPU evolution. This may be a bit controversial today, but to me it seems inevitable as we hit the performance boundaries of traditional CPU architectures.

Edited 2017-07-29 06:20 UTC

Reply Score: 2

RE[4]: Comment by raom
by unclefester on Sun 30th Jul 2017 02:18 UTC in reply to "RE[3]: Comment by raom"
unclefester Member since:
2007-01-13


I don't quite understand your criticism, since CPUs have lots of redundancy too, with many features going unused in various use cases. Anyway, it's like saying a graphics card is not more efficient than a CPU because it has lots of redundant hardware. When you factor in the nature of the work you're computing, the value of parallel processors can make a lot of sense. Massively parallel processing is becoming more relevant to all aspects of computing: graphics/compression/AI/gaming/etc. Most FPGAs come with a traditional processor to program and control the FPGA, and the heavy work gets done by the FPGA.

Of course the technologies aren't directly comparable, but still, in terms of energy and performance this is generally true:

ASIC > FPGA > CPU.


A CPU is like a Swiss Army Knife - it does most things badly and nothing particularly well.

Edited 2017-07-30 02:20 UTC

Reply Score: 2

RE[5]: Comment by raom
by tylerdurden on Sun 30th Jul 2017 05:14 UTC in reply to "RE[4]: Comment by raom"
tylerdurden Member since:
2009-03-17

Is this sarcasm?

Reply Score: 2

Ryzen 3 VS FX-8
by bassbeast on Sun 30th Jul 2017 18:53 UTC
bassbeast
Member since:
2007-11-11

OzTalksHW did a series of benches placing the FX-8 against the Ryzen 3. The result? If you have an FX-8, it's a side upgrade at best.

https://www.youtube.com/watch?v=azX4Qs7n2_Q

This is why I'm gonna stick with my FX-8320e, and why I've been saying for ages that hardware is frankly OP compared to the software we have to run on it: he's getting very good framerates even on the latest games with a chip that came out in... what, 2012? And that was with strictly stock clocks on the FX-8370; I can attest that with good cooling you can usually get a full GHz or more of OC out of an FX-8 series CPU.

So while I'm happy we have competition again (just waiting for Intel to put out a "super cripple" compiler or start bribing OEMs again like they did when Netburst was stinking up the place) and to see prices going down and cores going up, until I can actually find a job I do that the FX-8 cannot handle, I'll stick with what I have. And I have a feeling the same is gonna be true of a lot of folks, because if their software runs just fine on what they have, why buy a new system?

Reply Score: 2