Linked by Thom Holwerda on Fri 6th Apr 2018 00:13 UTC
Intel

Intel first launched its 8th-generation branding last year. In the mobile space, we had the U-series Kaby Lake-R: four-core, eight-thread chips running in a 15W power envelope. On the desktop, we had Coffee Lake: six-core, 12-thread chips. In both cases, the processor lineup was limited: six different chips for the desktop, four for mobile.

Those mobile processors were joined earlier this year by Kaby Lake-G: four-core, eight-thread processors with a discrete AMD GPU on the same package as the processor.

Today, Intel has vastly expanded the 8th generation lineup, with 11 new mobile chips and nine new desktop processors, along with new 300-series chipsets.

Intel's naming scheme is a bit of a mess, isn't it? At this point I really have no idea what is what without consulting charts and tables. Can all the bright minds at Intel really not devise a more sensible naming scheme?

Still faulty
by _LC_ on Fri 6th Apr 2018 08:32 UTC
_LC_
Member since:
2017-12-16

One thing the author of this advertisement forgot:
Those chips still contain the bugs that were found about a year ago. Meltdown can cost you 40% performance if the shit hits the fan. The other ones (Spectre) are usually below the 5% margin and therefore barely noticeable.

Reply Score: 4

RE: Still faulty
by avgalen on Fri 6th Apr 2018 08:44 UTC in reply to "Still faulty"
avgalen Member since:
2010-09-23

Didn't Meltdown and Spectre give barely noticeable performance differences on 8th-gen chips?
It does seem Intel is getting more aggressive and successful at packing more cores/performance into the same power envelope, effectively competing with AMD.
I don't see any of that scaling down to the ARM power envelope, though.

Reply Score: 3

RE[2]: Still faulty
by _LC_ on Fri 6th Apr 2018 09:32 UTC in reply to "RE: Still faulty"
_LC_ Member since:
2017-12-16

"Intel’s own tests on 8th-gen and 7th-gen laptops put the performance drop at 14 percent, while 6th-gen Skylake takes a hard 21-percent fall. Our tests put the 5th-gen Broadwell at 23 percent in the hole."

It is safe to assume that Intel did everything to make themselves look good here, and it's still a punch in the stomach. Moreover, those weren't the worst benchmarks; the impact can be much bigger in other scenarios, and these include real-world situations:

"The results for the SYSMark 2014 SE Responsiveness test are particularly worrying, showing that, as I expected, the biggest effect that the Spectre/Meltdown patching will have is on web browsing and overall system responsiveness, and that means that many of us will feel that our computers are running more sluggishly after applying the patches."

Reply Score: 3

RE[3]: Still faulty
by avgalen on Fri 6th Apr 2018 20:50 UTC in reply to "RE[2]: Still faulty"
avgalen Member since:
2010-09-23

"Intel’s own tests on 8th-gen and 7th-gen laptops put the performance drop at 14 percent". It is safe to assume that Intel did everything to make them look good here

No, it is safe to assume that you are trying to make them look bad here. You started with 40%; now you've dropped to 14%, and that is actually the worst-case scenario that Intel mentioned in that statement.

Intel published some post-patch benchmark results on best-case PCs like this on its blog. The tests showed an average performance loss of between 2 and 7 percent in the SYSMark 2014 SE benchmark, which simulates productivity tasks and media creation. Its responsiveness score—which measures “‘pain points’ in the user experience when performing common activities”—plummeted by a whopping 14 percent, though. In web applications that use heavy amounts of JavaScript, Intel saw a 7 to 10 percent performance loss post-patch. These tests were performed on SSD-equipped systems; Intel reports the performance loss is less noticeable if you’re using a traditional hard drive. (source: https://www.pcworld.com/article/3245606/security/intel-x86-cpu-kerne..., based on https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Blog-...)


Spectre and Meltdown have had 0 impact on regular people. There have been no known exploits, and the performance impact has been negligible. In this case the hype and (failed) patching have been far worse than the actual issue.
(for shared/virtual servers in datacenters the story is different, but since this article is about 8th gen, 6-core mobile CPU's we don't have to discuss that here)

Reply Score: 2

RE[4]: Still faulty
by Alfman on Sat 7th Apr 2018 03:49 UTC in reply to "RE[3]: Still faulty"
Alfman Member since:
2011-01-28

avgalen,

"Spectre and Meltdown have had 0 impact on regular people. There have been no known exploits, and the performance impact has been negligible. In this case the hype and (failed) patching have been far worse than the actual issue.

(for shared/virtual servers in datacenters the story is different, but since this article is about 8th gen, 6-core mobile CPU's we don't have to discuss that here)"


I don't own any 8th gen chips, so I can't comment on that.

However, I certainly wouldn't say that Spectre and Meltdown have had zero impact on regular people, or that there are no known exploits, or even that the performance impact is negligible. Working exploits have been published, even in JavaScript! These are not theoretical: they work, and anybody can use them to attack unpatched systems on multiple operating systems.

Unfortunately, the performance impact of the mitigations is quite significant for workloads that exhibit a high rate of syscalls. Workloads with high IOPS fare much worse than workloads that spend their time performing computations without syscalls. So the performance loss really depends on the workload.

https://spectreattack.com/spectre.pdf

All eyes are looking at the OS, which makes sense, but individual applications can also be vulnerable to the indirect jumping code patterns even on a patched OS. All critical application code should be carefully audited too, but that's a huge undertaking. So IMHO we're going to be haunted by spectre for a long time, there's no trivial fix short of disabling speculative execution, which is fundamentally responsible for these exploits.
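The vulnerable code pattern is easiest to see in the variant-1 "bounds check bypass" example from the Spectre paper linked above. A simplified sketch (array names follow the paper; the sizes and values here are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Spectre variant-1 gadget, simplified from the paper's example. */
size_t array1_size = 16;
uint8_t array1[16] = {1, 2, 3, 4};
uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */

/* Architecturally this is safe: an out-of-bounds x returns 0. But if
 * the branch predictor has been trained on in-bounds values of x, the
 * CPU may speculatively execute the two dependent loads for an
 * out-of-bounds x anyway, leaving a cache footprint in array2 that
 * depends on the byte at array1[x]. A timing side channel (e.g.
 * Flush+Reload) can then recover that byte after the fact. */
uint8_t victim_function(size_t x) {
    if (x < array1_size)
        return array2[array1[x] * 512];
    return 0;
}
```

This is why auditing is such a huge undertaking: the gadget is perfectly ordinary-looking bounds-checked code, and nothing architecturally visible goes wrong when it runs.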

Reply Score: 5

RE[5]: Still faulty
by avgalen on Sun 8th Apr 2018 20:36 UTC in reply to "RE[4]: Still faulty"
avgalen Member since:
2010-09-23

Do you have any link to a non-theoretical exploit?
All I could find was things like this: https://www.virusbulletin.com/blog/2018/02/there-no-evidence-wild-ma... or https://meltdownattack.com/ ("Has Meltdown or Spectre been abused in the wild? We don't know.")

And yes, for professionals working with special software on old OSes with old CPUs on shared virtual servers, Spectre and Meltdown are a real concern.

But for people using their computer at home for editing, games, browsing, taxes, hobby, video/audio consuming ... they never noticed anything

Reply Score: 4

RE[6]: Still faulty
by Alfman on Mon 9th Apr 2018 18:38 UTC in reply to "RE[5]: Still faulty"
Alfman Member since:
2011-01-28

avgalen,

Do you have any link to a non-theoretical exploit?
All I could find was things like this: https://www.virusbulletin.com/blog/2018/02/there-no-evidence-wild-ma..... or https://meltdownattack.com/ ("Has Meltdown or Spectre been abused in the wild? We don't know.")


Non-theoretical means the exploit is proven to work (as opposed to a bug which has not been successfully exploited yet). The Spectre and Meltdown vulnerabilities are both proven to work.

Here's the source code for one that breaks the kernel barrier on linux:
https://github.com/mniip/spectre-meltdown-poc


I think what you actually meant was an attack which has been detected in the wild. I did a little digging, and researchers have seen a rise in exploit code, but it seems that many of the samples were just tests of the proof-of-concept code, which no longer works in up-to-date browsers.

https://www.networkworld.com/article/3253898/security/researchers-fi...

Obviously, due to the passive nature of these attacks, it's hard to know when you've been attacked, since there are no logs to record the evidence ;)

And yes, for professionals working with special software on old OSes with old CPUs on shared virtual servers, Spectre and Meltdown are a real concern.

But for people using their computer at home for editing, games, browsing, taxes, hobby, video/audio consuming ... they never noticed anything


Well, the most dangerous vulnerabilities are those that succeed without the users ever noticing, wouldn't you say? It's not easy to detect because the vulnerability is using totally legitimate functionality to snoop remote address space leaked by the hardware.

We might say that computers are already fast enough for most user needs, so the performance lost to mitigation is irrelevant. If this is the way a user feels, then that's fine.

Reply Score: 3

RE[7]: Still faulty
by avgalen on Mon 9th Apr 2018 20:27 UTC in reply to "RE[6]: Still faulty"
avgalen Member since:
2010-09-23

"Non-theoretical means the exploit is proven to work"
The problem with all the samples that I saw was that they could read random data, but not exploit anything. Of course, if you attack enough systems for a long enough time you will eventually catch something interesting, but that is a theoretical exploit to me.

I did a little digging and researchers have seen a rise of exploit code, but it seems that many of them were just testing the proof of concept code, which no longer works in up to date browsers.

I wouldn't call that a rise: 77 exploits after the first 2 weeks, 40 more a week later, 20 more the week after that, and after that it seems everyone has lost interest.

It's not easy to detect because the vulnerability is using totally legitimate functionality to snoop remote address space leaked by the hardware.
That is legitimate functionality, but it is easy to detect, because there is no normal purpose for it, so all software that does this is suspect. This is also why it was easy to fix in browsers. The only good uses for this functionality that I can think of would be "decompiler/debugging-hackery" and system diagnostics.

We might say that computers are already fast enough for most user needs, so the performance lost to mitigation is irrelevant. If this is the way a user feels, then that's fine.

This is exactly what seems to have happened. Most people just won't notice anything like a 10% drop in CPU performance. I would consider myself an experienced power user, and of course I want "the latest i7 instead of an old i5"... but if you gave me a machine and asked me whether it feels like "the latest i7 or an old i5", I would very likely fail to identify it without running some test/benchmark.

Reply Score: 3

RE[4]: Still faulty
by _LC_ on Sat 7th Apr 2018 10:46 UTC in reply to "RE[3]: Still faulty"
_LC_ Member since:
2017-12-16

"Spectre and Meltdown have had 0 impact on regular people. There have been no known exploits and performance impact has been Negligible."

Yes, of course! And the monkeys who were breathing in the exhausts for Volkswagen & friends were really only enjoying their trip to this health resort...

Apparently Intel's advertising department is taking us for idiots.

Reply Score: 2

RE[5]: Still faulty
by avgalen on Sun 8th Apr 2018 20:27 UTC in reply to "RE[4]: Still faulty"
avgalen Member since:
2010-09-23

"Spectre and Meltdown have had 0 impact on regular people. There have been no known exploits and performance impact has been Negligible."

Yes, of course! And the monkeys who were breathing in the exhausts for Volkswagen & friends were really only enjoying their trip to this health resort...

Apparently Intel's advertising department is taking us for idiots.

I am happy you got downvoted. You were basically calling me part of Intel's advertising department which is ridiculous (just look at my post history) and you back up your wild talk with 0 facts or links.

I support 50-100 of my own users at our company and we do the support for about 50 other companies as well. Outside of our support department and "the tech media" I have never heard anyone talk about Spectre/Meltdown and I haven't heard anyone about "suddenly my pc feels a lot slower" when the patches started to roll out.

https://www.virusbulletin.com/blog/2018/02/there-no-evidence-wild-ma...
There is some sample code, that got copied and incorporated into even more Proof-of-Concept code, but there hasn't been anything dangerous going around in the wild.

Remove yourself from the tech-bubble, look around, and you will see that normal people never noticed

Reply Score: 3

RE[6]: Still faulty
by _LC_ on Mon 9th Apr 2018 09:07 UTC in reply to "RE[5]: Still faulty"
RE[7]: Still faulty
by avgalen on Mon 9th Apr 2018 09:48 UTC in reply to "RE[6]: Still faulty"
avgalen Member since:
2010-09-23

"It's annoying to reply to a guy who is lying bluntly:
http://www.tomshardware.com/news/meltdown-spectre-malware-found-for...
'February 1: ... Security company Fortinet announced that it has found dozens of malware samples that have started taking advantage of the proof-of-concept (PoC) code for the Meltdown and Spectre CPU flaws released earlier last month. ... Malware Makers Are Adapting Quickly' [sic]
And this hasn't even started yet."

If you had read the first paragraph of the article that I linked to, you would have seen this:
"The use of the word 'samples' here, rather than 'malware', is deliberate: AV-Test confirms that it believes that at least the majority of these samples are proof-of-concepts rather than actual malware." (source: https://www.virusbulletin.com/blog/2018/02/there-no-evidence-wild-ma...)
And from that same article "In fact, I doubt we will ever see a lot of in-the-wild malware using the Meltdown or Spectre exploits. Memory-read attacks simply aren't that attractive to most attackers: they don't allow an attacker to run arbitrary code on a targeted system, nor do they give the attacker access to stored data they are interested in. It is telling that Heartbleed, an unrelated attack that also allowed access to large chunks of memory, was not exploited widely in the wild, if it even was at all."
The article you linked to and the article I linked to are based on the same facts. However 1 of them is spreading panic and the other is analyzing the situation with a calmer mind and takes similar historic facts into account.
As you should have noticed by now, 2.5 months later there hasn't been any known outbreak in the wild. Basically a bunch of people got really interested in the theoretical problem, started playing around with the POC and .... nothing much happened after that, just like with rowhammer and other hardware based failures.
You continue to hype up this issue, but never react to all your refuted claims.

Of course, users get to feel it. In a multitude of ways:
"Root Cause of Reboot Issue Identified: Microsoft issues emergency Windows update to disable Intel's buggy Spectre fixes" ...

That is not a multitude of ways, also, it is exactly what I said in my 2nd post "In this case the hype and (failed) patching have been far worse than the actual issue."

Reply Score: 4

Comment by gedmurphy
by gedmurphy on Fri 6th Apr 2018 11:23 UTC
gedmurphy
Member since:
2005-12-23

Can all the bright minds at Intel really not devise a more sensible naming scheme?


Those bright minds don't get to come up with public naming schemes, marketing teams do...

Reply Score: 3

RE: Comment by gedmurphy
by nitrile on Fri 6th Apr 2018 12:09 UTC in reply to "Comment by gedmurphy"
nitrile Member since:
2010-05-06

I think, in fairness, it's not a completely straightforward ask; but doing so 'successfully' depends rather on what you consider the goal. It's assuredly not clarity - it's shovelling more, higher-margin parts.

Those that want or need to know will still find out what they need and then choose accordingly.

The rest get bamboozlement; to degrade the choice into the bigger number their wallet can handle. I won't call that 'brighter minds' but it's definitely marketing.

Reply Score: 4

RE: Comment by gedmurphy
by The123king on Tue 10th Apr 2018 16:11 UTC in reply to "Comment by gedmurphy"
The123king Member since:
2009-05-28

It's a well-known fact that marketing people are as bright as a 40W bulb in a brownout.

Reply Score: 2

Comment by ahferroin7
by ahferroin7 on Fri 6th Apr 2018 12:16 UTC
ahferroin7
Member since:
2015-10-30

It's good to see that Intel is finally realizing that there actually is practical demand outside of the server and workstation markets for high levels of parallelization. A bit late for me to actually consider them for a DIY build for the time being, but still good to see.

Also, regarding the naming, last I checked, for the 8th-generation CPUs Core i3 means 4 cores with no HT, Core i5 is 6 cores with no HT, and Core i7 is 6 cores with HT (so 12 threads), with Pentium and Celeron being cheap-arse crap that's not even worth what they sell for.

Reply Score: 0

RE: Comment by ahferroin7
by hornett on Fri 6th Apr 2018 14:42 UTC in reply to "Comment by ahferroin7"
hornett Member since:
2005-09-19

Just curious, what's your workload for high core count on mobile?

Reply Score: 1

RE[2]: Comment by ahferroin7
by ahferroin7 on Fri 6th Apr 2018 14:55 UTC in reply to "RE: Comment by ahferroin7"
ahferroin7 Member since:
2015-10-30

Well, multimedia work for one. Most audio and video processing stuff can benefit from parallelization very well (either by processing multiple channels in parallel, or running multiple effects passes simultaneously). There's a lot of laptops out there that have very good displays and audio hardware, but still are inferior to a real multimedia workstation because they have a sub-par CPU.

There's still a pretty big market for gaming laptops, which quite often will benefit from higher core counts.

Increased core counts are also a good thing for software developers; builds of anything beyond a trivial piece of software are very easy to parallelize.

Put differently, pretty much anything that a normal workstation system would run that benefits from increased core counts. A lot of that type of stuff is not done much on mobile not because there's some issue with doing it on a laptop or similar device, but because it's far less efficient due to the reduced core counts.

Reply Score: 3

That's the point of the names
by dark2 on Fri 6th Apr 2018 15:03 UTC
dark2
Member since:
2014-12-30

That's the whole point of the i3/5/7 naming scheme: so you can feel like you made a good purchase and got a quality chip. I'm typing this from my "ultrabook" i5, which is really just a dual-core CPU, which gives it a 6-hour battery life. Without the i5 in the name, not many people would justify the purchase of an ultrabook.

Reply Score: 2

d3vi1
Member since:
2006-01-28

The fastest laptop CPU one could buy in the winter of 2010-2011 was the Core i7-2820QM, a quad-core running at 2.3GHz with a 3.4GHz Turbo Boost. The fastest laptop CPU available 7 years later, at the end of the 2017-2018 winter, is the Core i7-7920HQ, a quad-core running at 3.1GHz with a 4.1GHz Turbo Boost.

They used to double CPU performance every 1.5 years. It's been 7 years and they are close to doubling it, but not there yet. Moore's Law has been dead and buried for 4 cycles already.

Meanwhile, Apple's A11 2W CPU is already at 50% of the performance of Intel's 45W CPU. I can't wait for benchmarks of the A11X or A12.

Intel has been downhill since they got beat by AMD with the K7 and Hammer architectures. Here's how I remember it:
* When the Pentium 4 came out it got beat seriously by the K7 and the RAMBUS memory.
* They decided to patch the Pentium 4 with the EM64T architecture (AMD64 to be more precise).
* They moved to LGA775 which they promised would be the last socket you'll need for a while, with Core and Core2 around the corner. That sort of screwed some people that bought Pentium 4 Laptops with 1hr battery life (like the ZD8000) only to discover that the Core 1 is not available on LGA775 and that Core2 or the dual-core Pentium 4 require small updates to the motherboard (profile 5a and 6 vrm, etc.). That was low even for their standards. Keep in mind, that up until then people were actually used to upgrading the CPUs and Intel even offered OverDrive CPUs.
* The Core 2 architecture proved to be successful, it moved from dual-core to quad core easily, it was a lot less power hungry, and it finally beat AMD. AMD was getting dangerous since they also bought ATi.
* Then came the Intel Core iX architecture that screwed everything. Server-wise it scaled from quad-core to 18-core, but for low TDP and high volume it was a complete mess. 8 Generations of Core iX with over 300 cpu models and they only doubled the performance.
* It was even worse for clients. You were guaranteed that you wouldn't be able to upgrade your CPU, since newer CPUs depended on the newer chipsets. So you had to buy a new computer for that 20% performance improvement instead of just upgrading the CPU.

Upgrades mattered, they would allow you to extend the life of a computer from 3 years to 5 years:
* You would start with a 33MHz 486 on Socket 3 and upgrade that to a 100MHz Pentium
* Buy a 66MHz Pentium and be able to upgrade it to 233MHz or even the crazy 550MHz K6-2?
* Buy a 180MHz Pentium Pro and upgrade it to a 333MHz Pentium 2
* Buy a 233MHz Pentium 2 and upgrade it to a 600MHz Pentium III (not more since the power reqs changed).
* Buy a 500MHz Pentium III and extend it to a 1GHz Pentium III.

Companies like Evergreen gave you even bigger performance jumps: you could go from a 16MHz 286 to a 48MHz 486, or from a 25MHz 386SX to a 75MHz 486. Those were 4-6 fold performance improvements. They don't exist anymore since the sockets and the buses are proprietary. Back in the day, the bus designs were implemented in chipsets by Chips&Technologies, ALi, VIA, SiS, AMD, Intel, and NVidia, so you had competition.

VIA Technologies produced one of the most important chipsets in history, the MVP3, used for Super Socket 7. It was revolutionary: it even supported DDR about 2 years before the first DDR products appeared on the market. No motherboards implemented it, but VIA included it. The AMD K6-III+ and K6-2+ could have been paired with up to 768MB of DDR200 memory in the era of 32-64MB PC100. Only in the later K7 motherboard designs and late P4 designs did we see DDR.

I can't understand why Intel still exists. They produce shit CPUs that are full of bugs, they innovate at a snail's pace, and they engage in dubious practices to keep selling new boards without actually delivering an improvement.

Reply Score: 1

bassbeast Member since:
2007-11-11

"I can't understand why Intel still exists."...that one is easy, they got away with rigging the market for half a decade and only got slapped on the wrist!

Look at the transcripts or the coverage from the Intel vs. AMD trial some time and you'll see that the 2-billion-dollar payout to AMD was a sick joke. They had 1) bribed OEMs not to sell AMD, 2) paid benchmark companies to use their compiler, because 3) they had rigged their compiler to detect non-Intel chips and bork their performance... and they got away with it scot-free. They paid less than what they made in 9 months of the big early-00s PC boom while getting to keep all the profits they made from the rigging.

It would be like robbing a bank of a million bucks and being told you have to hand 10k back and then you can go on your way... who wouldn't take that deal? Which is why I was amazed when people were shocked that Intel is refusing to patch their own bugs in a lot of their older chips. I mean, why should they? They have already seen the law isn't gonna do anything about them no matter what they do; hell, they could refuse to patch anything that isn't under their 3-year warranty and I doubt they would even get a warning from the EU or USA. It's a joke: once companies get to that size they are "too big to fail" and too big to bust.

Reply Score: 6

tylerdurden Member since:
2009-03-17

I can't understand why Intel still exists.


Because your competence in the computer architecture field seems to be stuck at the level of fluff articles from old 90s 'puter ad magazines.

Reply Score: 3

d3vi1 Member since:
2006-01-28

" I can't understand why Intel still exists.


Because your competence in the computer architecture field seems to be stuck at the level of old 90s 'puter admagazines fluff articles.
"

You are absolutely right. My expectations from Intel are completely unrealistic. I can understand that lithography has stalled. I can understand that the original quad core i7 from 2010 had 700M transistors and while Moore’s law dictated that we should have chips with 22B transistors by now, we are TDP limited to about 3B in a laptop friendly 45W.

But Intel should publicly acknowledge that the x86 architecture has reached its limits and adopt another CPU architecture that can scale beyond what they are offering currently.

They barely doubled the performance in the past 90 months and I haven’t seen them providing alternatives and it doesn’t look like they have any medium or long term solutions. They crushed competition with anti-competitive practices. That competition could have provided the innovation needed to take us through this slump.

AArch64 can buy us at most another 3 years.

Reply Score: 0

tylerdurden Member since:
2009-03-17

You should understand the problem before you rush with an uninformed critique of the solution.

I recommend you start by learning about the differences between micro-architecture and ISA.


A LOT has changed in this field since the early 90s. Trust me.

Reply Score: 3

zima Member since:
2005-07-06

competition could have provided the innovation needed to take us through this slump

There still exist quite a few architectures competing with x86. In raw performance, they aren't faster...
And don't assume it's even likely "to take us through this slump" ...all technologies eventually plateau (be happy it happens now when computers are fast enough for most needs and not in, say, Pentium 100 times)

Reply Score: 3

zima Member since:
2005-07-06

They used to double CPU performance every 1.5 years. It's been 7 years and they are close to doubling it, but not there yet. Moore's Law has been dead and buried for 4 cycles already

It probably _is_ faster, there were other improvements beside clockspeed... (though I wouldn't care about benchmarks, PC CPUs are to me well within "good enough" for a good few years)
And Moore's Law doesn't say anything directly about speed, but about density of integrated circuits (though yeah, it's close to dead; physics is a bitch)

Here's how I remember it:
* When the Pentium 4 came out it got beat seriously by the K7 and the RAMBUS memory.

K7 and RAMBUS? I think you meant K7 and SDR or DDR... ;)

* They moved to LGA775 which they promised would be the last socket you'll need for a while, with Core and Core2 around the corner. That sort of screwed some people that bought Pentium 4 Laptops with 1hr battery life (like the ZD8000) only to discover that the Core 1 is not available on LGA775 and that Core2 or the dual-core Pentium 4 require small updates to the motherboard (profile 5a and 6 vrm, etc.). That was low even for their standards. Keep in mind, that up until then people were actually used to upgrading the CPUs and Intel even offered OverDrive CPUs.

Surely negligible numbers of people upgraded laptop CPUs anyway; I never stumbled on such an example. Desktops, sooner.

* Buy a 66MHz Pentium and be able to upgrade it to 233MHz or even the crazy 550MHz K6-2?

Hm, IIRC not quite: later/MMX Pentium models also required a different motherboard (dual voltages or some such spec?); likewise Super Socket 7 CPUs such as the 550MHz K6-2, which also needed a faster FSB.

* Buy a 233MHz Pentium 2 and upgrade it to a 600MHz Pentium III (not more since the power reqs changed).

Such early Pentium 2 as 233 would likely be on LX chipset, which is limited to 66 MHz FSB, so no Pentium III 600...


And if you loathe Intel for bugs, you should absolutely hate VIA... ;) I'm sort of glad about what happened to them, after all the buggy chipsets we suffered.

Reply Score: 3

d3vi1 Member since:
2006-01-28

"They used to double the CPU performance every 1,5years. It's been 7 years and they are close to doubling it, but not there yet. Moore's Law is dead and buried for 4 cycles already

It probably _is_ faster, there were other improvements beside clockspeed... (though I wouldn't care about benchmarks, PC CPUs are to me well within "good enough" for a good few years)
And Moore's Law doesn't say anything directly about speed, but about density of integrated circuits (though yeah, it's close to dead; physics is a bitch)
"

I agree perfectly when it comes to speed. We've been toying with 3GHz for 13 years and it's OK since we're scaling with the number of cores. But the density hasn't moved either. The i7-840QM had 0.7B transistors. The 2860QM had 1.1B. The 3840QM had 1.4B, and all the rest after it (4980, 5950, 6970, 7920) are stuck at 1.4B. So over 7 generations (since 2010), Intel barely managed to double the number of transistors at the same speed and TDP. It's as dead as it gets.

"Here's how I remember it:
* When the Pentium 4 came out it got beat seriously by the K7 and the RAMBUS memory.


K7 and RAMBUS? I think you meant K7 and SDR or DDR... ;)
"

K7 hit Intel very hard and their decision to go with Rambus didn't help either. I was unclear, my bad.

Reply Score: 1

Comment by kurkosdr
by kurkosdr on Sun 8th Apr 2018 11:24 UTC
kurkosdr
Member since:
2011-04-11

Well, the naming scheme has to be confusing, because they have to somehow hide the fact that most advances over the last couple of years have been glacial. Also, to hide the fact that they artificially restrict chips by fusing off features.

Reply Score: 0

RE: Comment by kurkosdr
by zima on Fri 13th Apr 2018 21:21 UTC in reply to "Comment by kurkosdr"
zima Member since:
2005-07-06

It's how they make money... you sound like you'd prefer for Intel to not try to extract as much as they can from buyers, and to give all the features they can. That seems kinda... ~communist ;) (ekhem ;) http://www.osnews.com/comments/30130 )

Reply Score: 2
