“Intel was surprisingly talkative when it came to future technologies and products this year. As a result, most of the technical audience is up to date regarding the upcoming micro architecture based on the 65 nm Merom design. We discovered that all of these announcements are the top of a hot iceberg only, because the chip firm intends to deliver almost 20 new processor designs within the next eight quarters; all for the sole purpose of dominating the desktop, mobile and enterprise segments.”
And people thought Apple was crazy…
Of course they were lured by the roadmap, just not the public one; then again, Steve Jobs himself never really cared about the PowerPC. Fat/Universal binaries have been his cup of tea since the early NeXT days.
— meianoite
I still think they are. I'll believe this when they actually deliver these processors. A roadmap is just paper. I think Intel has about as much chance of delivering on this promise as Microsoft did with "Cairo." Product problems will likely be the order of the day.
Until AMD shows its roadmap, which will once again blow Intel's plans away.
Let's see what they unveil in January.
From the article:
most of the technical audience is up to date regarding the upcoming micro architecture based on the 65 nm Merom design.
Are they? As far as I remember the pronouncements were all fairly vague and much of the public information is just educated guesswork, e.g. regarding pipeline length or whether it’s gonna have a P4-style trace cache or more traditional instruction cache.
Intel competes with PR, promising it will make the next great thing while currently delivering nothing. When I see an Intel processor showing comparable performance to AMD's offerings, I'll believe it. The article tries to stress Intel's advantage in manufacturing process, which is irrelevant to the user's bottom line (which is performance and price/performance).
So far, the latest Yonah benchmarks show that Intel's processors are still overpriced pieces of junk: http://anandtech.com/cpuchipsets/showdoc.aspx?i=2627
You do realize Yonah is a notebook chip and the other chips in that review were not. That it performed reasonably close to an equally clocked desktop dual-core processor from AMD does nothing to support the opinion that it's an overpriced piece of junk.
>So far, the latest Yonah benchmarks show that Intel's processors are still overpriced pieces of junk:
Good for you: so for twice the price, you get a slight advantage… and one is a laptop proc while the other is a desktop one.
So if Intel's "pieces of junk" are overpriced, what are the dual-core AMD ones?
Itanium would still be here; Intel had been working on Merced ever since they came out with the P6 (Pentium Pro) core.
While you are more or less correct about most of what you have said, I have to be nitpicky about three things:
1. Efficiency is just as important for desktop CPUs as for mobile CPUs; the Pentium 4 is the result of Intel ignoring this, and now look at what's happening.
2. The Pentium M might be more or less even, clock-for-clock, with the Athlon 64, but …
2a. The Athlon 64 has an on-die memory controller, and much, much lower memory latencies than the Pentium M (and Yonah). This is a major benefit when it comes to certain applications.
2b. The Pentium M (and Yonah) has absolutely no 64-bit capability. This is an automatic win for the Athlon 64 when it comes to 64-bit code; an A64 running 64-bit code wins most benchmarks against a Pentium M running 32-bit code.
3. The K8 is much, much more than a “tweaked slightly” K7 core. Come on, you’re a smart guy … do some research please.
2b) You’re very right about this one. 64-bit code gives the K8 another 10-15% usually. That margin would make AMD win almost all of the benchmarks, and lead in the others. The difference may not be huge, but a 20% margin at the same clockspeed* is nothing to scoff at.
* It should be noted that Yonah is 2.16 GHz at 65nm, while the X2 goes up to 2.4 GHz at 90nm. If AMD doesn’t muck up the 65nm transition (that’s not a trivial ‘if’), the X2 is going to have a significant clock-speed edge over the Intel chips as well, in addition to performance per-clock. Either way, Conroe should be an interesting matchup.
I am seriously looking forward to Conroe and Merom … they'll be the first Intel processors worth getting excited about in five years.
The derision of the P4 is undeserved. Prescott might have been silly, but the original P4 wasn’t exactly a bad design. It was simply one that was optimized for scenarios that the market moved away from. Raw multimedia performance became less important, as GPUs offloaded geometry-processing and video decoding. Integer performance remained important, because game engines moved to more complex physics, and more complex culling algorithms to keep the GPU fed. Most importantly, the bottom fell out of the power situation.
The P4 managed to keep Intel on top for quite awhile during the K7 era. If the leakage current situation hadn’t become so bad, the P4 would be at 5GHz as they expected, and still be a performance leader. If AMD64 hadn’t taken off so well, Intel wouldn’t have had to invoke the sub-par 64-bit support in the Prescott core. Really, the only thing bad about the P4 that its designers could have anticipated at the time was that it never really made sense as a server processor.
Are all of these based off the Pentium M (PIII) design or the P4? Or are they a new core? Are any of these using an integrated memory controller? If 8 cores all have to share one memory bus, that would rot.
The current dual-core Xeon chips are so hot I could probably heat every house on the block with an 8-core Xeon.
I saw the "Itanium inside" tag. Anyone know any of Intel's plans for this chip?
How is it uncovered when they themselves admit to it? This seems more like marketing, seeding public interest with "oops, secret memos," Microsoft-style.
If you look at Moore's law you'll realize that in the last several years nothing has really changed drastically in the processor world.
We should be at processors 10x as fast as today's offerings; in fact, the Cell processor is supposed to be the wave of the future, but too bad it's hot as hell.
I'm not impressed at all with the way Intel is going. They keep banging their heads trying to make smaller and smaller dies so they can extract the most profit per process, when in fact they should be thinking about keeping the dies the same size and going with more cores. Even 128-bit processing.
Think about it: look at all the room in a computer, even a laptop. There's plenty of space in there to put dozens of cool chips, but no, they want to hog all the space with a giant heat sink or water-cooling apparatus.
If chip makers would produce volume at a large enough process size (nm) and low enough power requirements, we could have three dozen processors in a laptop and it would run just as cool as, or even cooler than, today's machines.
The computer industry is totally on the wrong track: bigger is better, cooler, and cheaper in volume and ease of manufacturing.
What exactly do we need a 128-bit processor for? As it stands, 64-bit is widely available but not widely needed.
Shrinking the die also means less power consumption and room for more things on the chip, like more on-die cache, or even a processor with two dual-core dies on it, which is something you suggested.
Also, did you read the article at all? They have plans for 8-core dies.
“Think about it: look at all the room in a computer, even a laptop. There's plenty of space in there to put dozens of cool chips, but no, they want to hog all the space with a giant heat sink or water-cooling apparatus.”
That’s because most software still isn’t designed to handle multi-CPU well. One 3 GHz CPU will perform better than four 2 GHz CPUs (when 3 of those CPUs are idle and the CPU that is doing something has the extra overhead of SMP synchronization).
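As a rough illustration of that point, here is a back-of-the-envelope Amdahl's-law sketch; the parallel fraction and clock speeds are purely illustrative assumptions, not measurements:

#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
 * the work that can run in parallel and n is the number of cores.
 * Illustrative numbers only: a mostly serial workload (p = 0.2) on four
 * 2 GHz cores versus a single 3 GHz core. */
int main(void)
{
    double p = 0.2;                              /* assumed parallel fraction */
    double speedup = 1.0 / ((1.0 - p) + p / 4.0);
    double quad_equiv = 2.0 * speedup;           /* GHz-equivalent of 4 x 2 GHz */

    printf("4 x 2 GHz (p = %.1f): ~%.2f GHz-equivalent\n", p, quad_equiv);
    printf("1 x 3 GHz           :  3.00 GHz-equivalent\n");
    return 0;
}

With those assumed numbers the four slower cores come out around 2.35 GHz-equivalent, i.e. behind the single fast core, which is the point being made.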
On second thought, though: there are plenty of OSes and applications that are multi-CPU aware. Word and Outlook may not be examples of such, but then again, that's not where the extra horsepower is needed (e.g. scientific computing, cancer research, manufacturing, etc.).
One thing, though, is that Windows (I don't know if Linux does or not; I'm sure it does) will distribute different apps onto the different processors, and you can even say that you want X processor to run on processor 2.
Should be “X process to run on processor N”
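For what it's worth, here is a minimal sketch of doing exactly that on Linux with sched_setaffinity(); on Windows the analogous call is SetProcessAffinityMask(). The CPU index used below is just an example:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Pin the calling process to one particular processor. */
int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(1, &mask);      /* ask for CPU 1, i.e. the second processor */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Pinned process %d to CPU 1\n", (int)getpid());
    return 0;
}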
If you look at Moore's law you'll realize that in the last several years nothing has really changed drastically in the processor world.
The processor is undergoing incredible changes right now. The goal of maximum single-threaded performance, which they've been chasing for 34 years, has all but gone.
We should be at processors 10x as fast as today's offerings; in fact, the Cell processor is supposed to be the wave of the future, but too bad it's hot as hell.
Huh? At 3.2 GHz it's not far above laptop CPUs.
I'm not impressed at all with the way Intel is going. They keep banging their heads trying to make smaller and smaller dies so they can extract the most profit per process, when in fact they should be thinking about keeping the dies the same size and going with more cores. Even 128-bit processing.
Every processor maker wants smaller dies; they cost less to make.
Intel's next Itanium will have one of the biggest dies in the industry, if not the biggest.
128-bit processing would be useless in all but the most esoteric apps.
Think about it: look at all the room in a computer, even a laptop. There's plenty of space in there to put dozens of cool chips, but no, they want to hog all the space with a giant heat sink or water-cooling apparatus.
You haven't been following Intel very closely; they're almost completely focused on low power now.
If chip makers would produce volume at a large enough process size (nm) and low enough power requirements, we could have three dozen processors in a laptop and it would run just as cool as, or even cooler than, today's machines.
That is planned but it’ll take some radical re-design to achieve it. Software will need to be completely redesigned to make use of such a chip.
“That is planned but it’ll take some radical re-design to achieve it. Software will need to be completely redesigned to make use of such a chip.”
Indeed. Why does anyone think AMD64 killed the Itanic? Poor design or performance? No, it was the simple fact that the Itanium sucks at running legacy code. AMD64 chips don't.
In the end, it’s all about software.
“Even 128-bit processing.”
LOL. Intel can’t even get 64 bit right (do some reading into the details of EM64T and you’ll see that it really is nothing more than a cheap knock-off of AMD64, and not a proper implementation).
I doubt that Intel will get EM64T on par with AMD64 before Merom, and based on their track record so far, I’m unsure I’d trust them to do it right even then.
If you want a good consumer processor, buy AMD. They aren’t the ones who are going to be playing catch up for the next third of a decade.
LOL. Intel can’t even get 64 bit right (do some reading into the details of EM64T and you’ll see that it really is nothing more than a cheap knock-off of AMD64, and not a proper implementation).
I doubt that Intel will get EM64T on par with AMD64 before Merom, and based on their track record so far, I’m unsure I’d trust them to do it right even then.
EM64T in Prescott was a quick-and-dirty hack, as was the dual-core support. With Merom these things have been designed in from the start.
If you want a good consumer processor, buy AMD.
Fair enough. But in a year’s time the situation will look quite different, with Intel at least back on par.
They aren’t the ones who are going to be playing catch up for the next third of a decade.
Hopefully AMD themselves aren’t that complacent. They did well while Intel went down the Netburst cul-de-sac, but they should be very worried about Merom, especially now that Intel has more or less given up on Itanium and is properly committed to x86-64.
“Hopefully AMD themselves aren’t that complacent. They did well while Intel went down the Netburst cul-de-sac, but they should be very worried about Merom, especially now that Intel has more or less given up on Itanium and is properly committed to x86-64.”
Here’s hoping.
Try programming a system with three dozen processors. There is a reason we're only at dual-core now, and programmers are still freaking out!
I second that. We're at a point where we can advance software and hardware to where they're insanely fast and efficient, but so hard to maintain and work with that it just creates more problems rather than solving them.
Intel plans an octo-core processor for 2008. Three years after Sun…
Getting down to 45 nm, and even 65 nm, is no small feat. At that small size, and at the high speeds people demand nowadays, power consumption has to go through the roof. The smaller the transistor and connector sizes, the larger the voltage swing must be in order to keep the signal-to-noise ratio at a reasonable level. Since speeds will only increase, current will increase, or at least stay the same.
Why not rip out the x86 translation layer and implement it in software instead?
Then work with the GCC people to make efficient optimizations in software and provide MS with an x86 VM.
If the future is managed, x86 should be dead soon anyway.
Why not rip out the x86 translation layer and implement it in software instead?
Because these days it’s pretty insignificant compared to other things on a processor, particularly caches. And in spite of its rather baroque design x86 has one advantage over 32-bit RISC encodings: it’s significantly more compact, thus saving valuable cache space.
SSE/SSE2/3 anyone?
SSE/SSE2/3 anyone?
Vector processing is overrated. On an out-of-order processor it doesn't make all that much difference whether e.g. the multiplication of two four-element vectors is expressed in one instruction or four; you still need to schedule and execute four multiplications. Vector instructions can reduce code size though, and the dedicated SSE registers are nice to have as well.
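To make that concrete, here is a small sketch (illustrative values only) showing the same four-element multiply written once with the SSE intrinsic _mm_mul_ps and once as four scalar multiplies; either way the hardware still has to perform four multiplications:

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 5.0f, 6.0f, 7.0f, 8.0f };
    float vec_result[4], scalar_result[4];

    /* One vector instruction expresses all four multiplications... */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(vec_result, _mm_mul_ps(va, vb));

    /* ...versus four explicit scalar multiplies. */
    for (int i = 0; i < 4; i++)
        scalar_result[i] = a[i] * b[i];

    for (int i = 0; i < 4; i++)
        printf("%g  %g\n", vec_result[i], scalar_result[i]);
    return 0;
}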
Interesting times indeed. This is good news for consumers who are looking for the best out there. I was planning on creating a DCC rig but now I think I should just wait it out a bit more since I am in no rush. Thing is as much as I like AMD to win out and they probably will I would still go for what offers me the highest advantage. More technology is better for us end users…period
I don't know if Intel's roadmap is hype or real, but I am definitely willing to wait and see. My last computer (an Athlon 1.2 GHz) has done me so well that I don't even remember when I purchased it. While I was initially excited about the x86-64 chip revolution and intended to buy a new PC, I have since decided that it is not quite time to upgrade and will instead wait to see what late 2006 offers me.

Part of the reason is that AMD has pretty much dominated Intel technologically, and I have seen their prices skyrocket to be even more expensive than Intel's. The days when you could buy a top-of-the-line AMD CPU for the same price as a budget Intel Celeron PC are long gone (this is coming from an Athlon and a K6-2 owner). At the moment, Intel can sell expensive because of its name and AMD can sell expensive because of its technology. I'd rather wait until the Intel name means nothing and Intel catches up technologically to AMD, since that would mean the two companies would have to compete on price.

The other reason I will wait is that there is a CPU revolution quietly taking place. Some technologies exist already (like 64-bit and multicore processors) but the software that takes advantage of them is immature, so having the capability currently offers very little advantage, while other technologies (like virtualization) are coming soon. For me it makes a lot of sense to wait and see what comes up. If I buy a processor now, I know I'll wish I had waited another 8 months. I think both AMD and Intel will have incredible roadmaps throughout 2006.
Sorry, I just don't see the need for such a processor given today's usage patterns. There was a time when upgrading yearly brought you noticeable speed increases. My current P4 2.4 GHz HT is more than enough to run all the latest games and applications. In other words, it hasn't shown its age yet.
I can easily use a 2 Ghz CPU. I can easily use 2. I could easily max out 4 or 8 cores, too, but I might have to do more than one thing with all those extra cycles. Usually I max out a 2 Ghz CPU for 2 to 4 hours at a time encoding video or compiling software. If I had more CPUs I’m sure I could find a use for them too.
I’m sorry you just don’t see the need for such a processor. I have friends who do all their work on a Pentium 133 on Linux in a text console. They won’t see a need for such a processor either. There’s no need for graphical email and web. If you really get down to it, from a client perspective, nobody really needs more than a couple hundred Mhz of CPU and about 25 lines of text at 9600+ bps. So why waste our time with your post?
I’m sorry you just don’t see the need for such a processor. I have friends who do all their work on a Pentium 133 on Linux in a text console. They won’t see a need for such a processor either. There’s no need for graphical email and web. If you really get down to it, from a client perspective, nobody really needs more than a couple hundred Mhz of CPU and about 25 lines of text at 9600+ bps. So why waste our time with your post?
You don't represent the average Dell PC-buying crowd. You are probably in the top 1% of power users who just buy the latest and greatest hardware because you think it makes you look cool. The best way to measure the PC power market is to look at the latest and greatest game out today and see what its minimum system requirements are. I'll give you a hint… they haven't changed much for more than a year or so. People like you suck up the hype and believe people need 4-core CPUs on their desktop; ultimately it just amounts to Intel selling processors.
Today's CPUs are already translating x86 into microcode; the CPUs aren't running straight x86 binary asm code. I think the future of computing lies in using different technologies to create the next-gen CPUs; I'm not even sure electricity is going to be powering them. The reason we use the current tech is fast-switching electronic gates; if some other tech comes along that emulates switches and is faster and cheaper to make, then that will do it.

Also, I think folks are falling for the multi-processor hype these days. They think it's a magic-bullet formula that is automatically applicable everywhere, pretty much like OOP was thought to be back in the mid '90s. But it turned out that OOP is a niche tool, pretty much like multi-processing is. Multi-processing is an old topic; academia has been using it for ages. It's just that regular folks now have access to it and don't know any better than to be swept up in the sensationalism.
I think Intel is getting desperate, like Microsoft, after the realization of losing ground.
AMD is 'IT' for the foreseeable future. Founded in 1969, yet not getting into the CPU industry until *relatively* recently, it's only going to continue to get better.
I think Intel blew it with the totally new code required by the IA-64 architecture. AMD was smart with the 32/64-bit split; it moved in and provided a real transition path.
I think the ‘cores’ are going to get a lot more interesting over time.
I look for AMD to continue providing 'multiple' solutions there as heat issues subside; AMD has the cooler chips compared to Intel.
I think Intel is almost as over as Microsoft is. Deadpan, Weakest Link style: “Goodbye.”
–EyeAm
(author and programmer of the world’s next big nightmare)
Genius Insight. Rebel Thinking. Finger To The Status Quo.
Probably you are right: "Intel is almost as over as Microsoft is." In fact, I don't see either of them being dead anytime soon. And this comes from a long-time Mac user who has had his fair share of reasons to hate Microsoft.
Even in browsers, a market where Microsoft is losing ground, I doubt anyone believes Firefox will win the battle, if by winning you mean being the most used browser in the world. I think even the Firefox programmers know that: they will hit their target and have their success if they manage to get, say, 15-20% of the market. That alone will push everyone to consider Firefox a reality, for example pushing web developers to use true W3C standards.
Similarly Apple will “win” if they manage to get 10-15% of the marketshare, which will push many more developers to program for it.
On a processor implementing something like AltiVec, you've got a 128-bit multiplier, so you only need to schedule one multiplication. Even on SSE, you usually have a pair of 64-bit FPUs, which can do two 32-bit multiplications in one clock cycle.
Intel CPUs are heat-inefficient (roughly 3x hotter than AMD when idle or fully loaded), thanks to the MHz and GHz they were addicted to. Now AMD is king in CPU design, performance, thermal efficiency, and quality, but unfortunately not in volume; if a manufacturer wants a huge quantity of chips in a limited time (say, for Christmas or back-to-school), AMD cannot deliver; Intel can.
Now AMD is eating Intel's profits in the workstation and server markets (figure it out yourself: Sun, IBM, HP, and others are shipping AMD chips in high-end systems, while Intel's go into the low-end ones; proof: IBM's best IntelliStation Z is priced at $6,289, while its best IntelliStation A is priced at $11,779).
Intel CPUs are not good for the desktop, but they are excellent for OEMs who need a lot of chips. In AMD's case, OEMs don't need as many chips (workstation and server demand is lower than desktop demand); they need more performance and better heat control, which AMD excels at. (Of course, if you happen to enter a room with servers stacked ten cluster nodes high, you will notice the noise and the heat, and why AMD's Opterons are superior to Intel's Xeons.)
So, "Intel will lose more to AMD" in the workstation and server markets.
Intel, meanwhile, manufactures the best laptop chips on earth, which AMD has not been able to compete with, and which for that reason made Apple go with Intel rather than AMD. (By the way, Apple's current laptops' speed sucks!)
All the experts agree that Intel is desperate right now because of what AMD is doing to them (AMD's market cap is $11 billion, which was taken from Intel's; Intel's market cap is $162 billion).
Wow! So does the future lie in sub-nanometre processes, and is that even possible? What happens when they reach 0 nm?
At this rate, they should arrive there within 2 years, right?
Looks like they will go there: http://tinyurl.com/bt94w
That will certainly be an interesting time, when quantum effects start to affect processor performance and design. If anyone cares to pontificate on that whole scenario, please do.