Intel is dumping plans to release a Pentium 4 processor that runs at 4GHz, saying will boost performance on next year’s chips using other means that clock speed.
Intel may have played out the clock speed marketing game to its end.
“Clock speed does not matter. It’s system performance.” It’s going to be fun to watch Intel’s advertising helping Apple.
Behind the shift is Intel President Paul Otellini, who wants the company to move away from focusing on increases in chip speed, measured in megahertz, as the primary way to increase performance. Intel has talked about such a shift for years, but remained fond of the clock-speed approach until recently.
They should have thought of this before they increased the number of stages in the pipeline (used to allow faster clock speeds). Increasing L2 or L3 cache size is a cheap trick that companies like AMD have been doing for a long time to get a performance boost. It still doesn’t address poor processor design.
I think many people have cottoned on to the fact that all Intel does to increase performance is ramp up the clock speed and put ever more cache on their processors. In terms of design, AMD’s processors are way ahead – and they’re a damn sight cheaper. I can find no convincing argument to buy an Intel processor in the world today.
Well, it looks like their own marketing has bitten them in the proverbial butt. It’s gonna be fun to watch them trying to integrate a memory controller into the chip core like AMD has done.
I think the best thing that can be done is to get away from the decrepit x86 architecture and dump the legacy crap.
[i]…other means that clock speed.[/i]
…other means than clock speed.
In case some are unaware, it is possible with adequate cooling to overclock current P4 processors from 3.2 GHz to 3.8 GHz or even 4.0 GHz. So if for some reason you want more speed from your processor, it is obtainable. If you want to purchase a 4.0 GHz system that is fully tested and nicely liquid cooled, then take a look at Alienware’s ALX P4 4.0 GHz system.
So they are going to ignore the last 10 years of rants and arguments about why MHz is all that matters. This will be fun 🙂
Thanks Intel, for helping Apple and AMD out with this one
Not only help Apple, but also AMD. AMD has been saying for a while now that it’s not about MHz. Funny, Intel has been following AMD a lot lately.
Then again, Intel admitting that it’s not about MHz isn’t entirely new. Didn’t they say the same thing when promoting the Pentium M?
So it seems. Now they, and Freescale, are way ahead of the game going into 2005.
Ben
Intel promoted new chips based on their clock speed because that is something people understand. Intel never said MHz is all that matters, but when you are comparing 2 processors of the same architecture MHz is definitely a very big factor. The fanboy FUD is thick in here.
This is long overdue, IMHO.
I wonder if we will finally start to see CPUs in the style of the P3-M in “real” machines – not wasting almost absurd amounts of energy making the CPU act like a radiator and driving electricity bills through the roof in the insane (and ever more diminishing ROI) GHz race, while still giving fair bang for the buck.
As Intel has now also likely come to terms with the fact that the Itanium lived up to its “Itanic” nickname (except for the most expensive server systems, it has AFAICT so far been a *huge* loss to them), perhaps we are finally going to see some of the technologies discovered in the 1990s that made it into the Alpha 21264 migrate into the upcoming Intel CPUs at the micro-level?
How else are you going to measure speed if not by Hz? Will we forevermore be left in the dark as to actual speed, simply relying on things like “the G5 is faster than the G4, the Pentium 3 is faster than the Pentium 2”? This is stupid from a marketing viewpoint.
Then why does Apple still specify the Hz on their machines? Why don’t they have a better way to show that they are indeed faster than x86 machines, as Jobs always insists?
You can’t first claim that Hz doesn’t matter, and then say that a high Hz is bad. Come down from your high horses and join the rest of us. The only important factors are how fast it can run code, how warm it gets, and what it costs. If they can reach this with Hz, fine; if they do it by running more stuff in parallel, fine; cache, fine; architecture, fine.
Intel has just taken one avenue for some time; now they are going for the next one. This happens all the time.
When chipmakers (or designers) like IBM (and Apple), Sun, AMD and many others tried to say, in the past, that MHz doesn’t matter, the market didn’t listen much and favored Intel for its high clock speeds.
This time Intel says the same line, and the market listens.
wow.
How about a CPU with dual hyperthreading cores? If you get a dual-CPU motherboard with a pair of those, would Windows think you have 8 CPUs?
The trace cache virtually eliminates the CISC decoder penalty while allowing more instructions per byte in L2+ cache and main memory (instruction lengths vary, but the shortest are most common). Also, x86 has full support for interaction between segmentation and paging, although almost always only one segment is used. The biggest disadvantage is the vulnerability of the long pipeline to stalls and flushes, which is fairly inevitable. Had the 90nm geometry shrink panned out better thermally, Intel would have been in a much more advantageous position for frequency scaling relative to AMD and IBM. I think that the ‘ideal’ general-purpose processor would be CISC with an algorithmically optimized instruction set (like Huffman encoding) and many short integer pipelines well suited to frequent branching. Floating point code is almost always more parallelized and branch-poor, and is really better suited to GPU-type architectures nowadays.
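To illustrate the Huffman idea, here is a minimal sketch in Python with a made-up instruction mix (hypothetical opcodes and frequencies; real ISA encodings have decode and alignment constraints this ignores): the most frequent operations get the shortest bit patterns.

```python
# Minimal Huffman coding sketch over a hypothetical instruction mix.
import heapq
from collections import Counter

def huffman_code(freqs):
    # Each heap entry: (total weight, unique tiebreak, {symbol: code so far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, i, b = heapq.heappop(heap)
        # Prepend a bit as we merge, building codes from leaf up to root.
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
    return heap[0][2]

# Made-up mix: common ops (mov, add) end up with the short encodings.
mix = Counter({"mov": 35, "add": 20, "cmp": 15, "jcc": 15, "mul": 10, "div": 5})
print(huffman_code(mix))
```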
despite the bad press and unfortunate ignorance of itanium’s, in particular itanium 2’s capabilities when running *natively*, the ia64 architecture is afaik truly superior to anything else.
http://www.aceshardware.com/read_news.jsp?id=65000406
– links to stats in http://www.haveland.com/index.htm?povbench/index.php
.. the official povray benchmark site, showing the results in rendering time (hr:min:sec) for the following cpus:
– itanium 2 @ 1 ghz = 00:11:07
– athlon xp @ oc’d to 2.29 ghz = 00:26:07
– pentium 4 @ oc’d to 3.24 ghz = 00:28:31
with astounding performance at just 1 ghz, and nominally (depending on whose implementation of the ia64 spec) only an _ 8 _ stage pipeline!! .. the ia64 was the very antithesis of the pentium 4 .. highly parallel, fat and short (not to mention the other goodies like a whole lot of registers and a powerful fpu) VS long .. way long, and slim. but i guess who’s to dictate what the “market” wanted at the time, eh?
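for the curious, here’s a naive per-clock normalization of those times (a back-of-the-envelope sketch in python .. render time is treated as the only metric, which is of course a simplification):

```python
# per-clock comparison using the povray times listed above
def to_secs(h, m, s):
    return h * 3600 + m * 60 + s

results = {  # cpu: (clock in GHz, render time in seconds)
    "Itanium 2": (1.00, to_secs(0, 11, 7)),
    "Athlon XP": (2.29, to_secs(0, 26, 7)),
    "Pentium 4": (3.24, to_secs(0, 28, 31)),
}
it2_ghz, it2_secs = results["Itanium 2"]
for cpu, (ghz, secs) in results.items():
    if cpu != "Itanium 2":
        # how much more work the itanium 2 gets done per clock cycle
        advantage = (secs * ghz) / (it2_secs * it2_ghz)
        print(f"itanium 2 per-clock advantage over {cpu}: {advantage:.1f}x")
# prints roughly 5.4x (athlon xp) and 8.3x (pentium 4)
```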
but i still have hope for them seeing as they are returning to the pentium 3 based p-m architecture.. well, i hope. i hope they aren’t just going to do dual core lower clocked p4’s/xeons. anything that won’t take such a hit on branching and provides athlon/k8+ quality fpu performance.
“Intel is dumping plans to release a Pentium 4 processor that runs at 4GHz, saying will boost performance on next year’s chips using other means that clock speed.”
I think you mean
“Intel is dumping plans to release a Pentium 4 processor that runs at 4GHz, saying <they> will boost performance on next year’s chips using other means <than> clock speed.”
Intel is probably dropping the 4GHz P4 because they plan to release dual core P4s soon…
I wonder, what would be best? One CPU at 3.6GHz, or a dual core P4 running at dual 2.0GHz? Does 2 x 2.0GHz = 4GHz??
As shown by your povray benchmark – now please show us integer benchmarks which demonstrate the “superior to anything else” ia64 architecture.
This time, I predict that the results will be much less impressive..
Especially when you take the price into account!
VLIW is very good for FP code, which tends to be quite regular, but not so good for integer, where compiler writers are still wondering what HP/Intel drank when they decided to use an in-order architecture..
So much for the “superior to anything else” architecture..
And what’s the price difference?
It’s coming to stores near you, with a new socket type… :)
you forget that raytracing is one of the few areas where you can optimize for 4 parallel pipelines at compilation time.
anywhere else, where you can’t do such heavy optimizations, the itanic really sucks.
and for such massively parallel tasks there exist better architectures than the itanic.
I wish Intel would just release an Itanium for the Desktop.
Here’s a QuickTime movie that originated on Apple’s site which explains the Megahertz Myth (deprecated, refers to the old G4 design vs Pentium 4):
http://www.esm.psu.edu/Faculty/Gray/graphics/movies/mhz_myth_320f.m…
Explanation of why the G5 architecture is so efficient:
http://www.apple.com/g5processor/architecture.html
Apple doesn’t have to hide its MHz/GHz specs. It doesn’t shy away from these, even though they appear slower than Intel/AMD specs at first glance. They prove themselves by performance, a la “Silent water runs deep”.
Speeeling?
IMO Hz really is only useful when comparing against other chips in the same family, i.e. G4 to another G4, G5 to another G5, P4 to another P4 and so on…
You can’t compare AMD to P4 accurately, and certainly not G5 to P4 and so forth; there are so many other factors going on.
The most Hz gives you is a very vague idea of how to compare things. I.e., if you have some idea how much difference in speed there is between a G5 at i Hz and a P4 at j Hz, then when a P4 at k Hz comes out, you’ll have a very rough idea how much faster/slower k will be…
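In other words, a naive linear extrapolation. A sketch with hypothetical numbers (real scaling is worse than linear, since memory doesn’t speed up with the core clock):

```python
# Naive within-family clock extrapolation (hypothetical numbers).
def extrapolate(perf_at_j, j_ghz, k_ghz):
    """Given measured performance at j GHz, guess performance at k GHz,
    assuming performance scales linearly with clock within one family."""
    return perf_at_j * (k_ghz / j_ghz)

# e.g. if a 3.0 GHz P4 scores 0.9x some G5 on your benchmark, a 3.6 GHz
# P4 might land near 0.9 * 3.6/3.0 = 1.08x -- a very rough idea indeed.
print(extrapolate(0.9, 3.0, 3.6))
```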
Maybe because it provides a fairly constant scale for measuring relative performance between processors of the same line. The only reason AMD, and now Intel, are using model numbers is, IMHO, that they want to overcome the architectural differences that a common clock scale obscures.
I admit, IBM’s G5 is an incredible chip, and Freescale’s upcoming dual-core G4 is extremely exciting. And yes, Apple has been right to say that MHz don’t necessarily matter. But that still doesn’t prove that the G4 has been speed-competitive with Intel/AMD chips all along.
I can find no convincing argument to buy an Intel processor in the world today.
Stability (more specifically, motherboards & their chipsets).
I wonder, what would be best? One CPU at 3.6GHz, or a dual core P4 running at dual 2.0GHz? Does 2 x 2.0GHz = 4GHz??
Depends on what you’re doing. If your workload is primarily CPU intensive applications being run serially, then a single, faster CPU will give more benefit. If it’s many applications being run in parallel, then multiple CPUs are better.
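To put rough numbers on that, here is a minimal Amdahl’s-law sketch (made-up workload fractions, and it assumes equal per-clock performance across chips):

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n),
# where p is the fraction of the workload that can run in parallel.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Naively, 2 x 2.0 GHz != 4.0 GHz. If only 50% of the work is parallel,
# two 2.0 GHz cores deliver about 2.0 * speedup(0.5, 2) = 2.67 GHz worth
# of serial throughput, so a single 3.6 GHz chip would win that case.
for p in (0.5, 0.9, 1.0):
    print(f"parallel fraction {p}: ~{2.0 * speedup(p, 2):.2f} GHz equivalent")
```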
No, clock speed is not always applicable, even in the same family. Processor innovation often means that a first-gen CPU executes fewer instructions in a fixed amount of time than a second-gen CPU, even if the second-gen CPU is clocked slower. Things like CPU cache, increased registers, multiple pipes, branch prediction, etc.
Hz isn’t too important, but AMD has been going with just model numbers for a while. I’d like to see Intel go that way and stop this “oh look at us, we have the fastest chips” when they aren’t the best – it’s just been the biggest BS.
G5=NICE
64FX=NICE
[email protected]=ok
No, clock speed is not always applicable, even in the same family. Processor innovation often means that a first-gen CPU executes fewer instructions in a fixed amount of time than a second-gen CPU, even if the second-gen CPU is clocked slower. Things like CPU cache, increased registers, multiple pipes, branch prediction, etc.
And when that’s the case the model should be, and usually is, changed to reflect it. Remember Celeron 300 vs 300a? How about P4’s a’s and c’s and e’s? Then there are the E’s, B’s, and EB’s of the P3 world.
Note that I may be off on some of the P4 lettering, I haven’t followed it closely from a marketing standpoint.
All of the industry hit a wall this year, and they will eventually get through it. Intel is taking the biggest hit; their stock is down 30% right now. They obviously are trying to save their butt right now.
Windows XP and higher recognize HT as HT, so it “sees” them as 2 CPUs, but the licensing doesn’t get affected by HT.
I mean, rib Intel for failing to break the 4GHz barrier, but the new 2.5GHz G5 Macs are running a liquid cooling solution. Come on people, Intel aren’t perfect, but neither are the others. I believe the best CPU design we have seen was the Digital Alpha. Still, we can get a lot more out of our CPUs with decent OS designs, but I don’t see too much happening on this front, not since the likes of BeOS – and even though it wasn’t perfect, it definitely gave you more bang for the buck compared to anything MS or Apple has designed.
Linux is a work in progress but is developing well.
Maybe Haiku too. We don’t really need super CPUs if the software is designed properly to use what we already have.
Yeah, so, with XP Home wanting only 1 CPU and XP Pro using 2 CPUs… what will happen with dual core CPUs? Will XP count them as 1 or 2 CPUs? I mean:
1 dual core = 2 CPUs
2 dual cores = 4 CPUs…
And we forgot about HT… 2 dual core CPUs would mean 8 CPUs!!!
Good times…
“I mean, rib Intel for failing to break the 4GHz barrier, but the new 2.5GHz G5 Macs are running a liquid cooling solution. Come on people, Intel aren’t perfect, but neither are the others.”
I agree. The G5 and the current Intel chips aren’t bad chips but great ones, and so is AMD. Clock speeds will always go up, and makers will run into technology walls.
I agree that we shouldn’t criticize Intel for pushing back 4GHz, or for their 90nm process difficulties. As was stated earlier, EVERYONE hit a wall here, and nobody was expecting it. What I think we need to continue to criticize them for, however, is the MHz-above-all-else tactic they were using up until that point. It’s a lot of people’s opinion that this was a technique devised to fool the consumer and hurt the competition, and that just isn’t right. Even if they didn’t intend it, that’s the effect it had, and they need to be more careful.
They’re obviously getting what they have coming, but I for one think they deserve to have enthusiasts milk it for all it’s worth.
Yeah, so, with XP Home wanting only 1 CPU and XP Pro using 2 CPUs… what will happen with dual core CPUs? Will XP count them as 1 or 2 CPUs? I mean:
Current Windows CPU licensing applies to *physical* CPUs. So XP Home will use both cores of a dual-core CPU (or “four” in the case of a dual-core HT CPU). XP Pro will give you four/“eight”.
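As an aside, a quick way to see how an OS enumerates processors is something like the following sketch, assuming the third-party psutil Python package is available:

```python
# Count physical cores vs logical CPUs, assuming psutil is installed
# (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # cores x HT siblings

# On one dual-core chip with HT you would expect physical=2, logical=4;
# per-socket licensing would still count that as a single CPU.
print(f"physical cores: {physical}, logical CPUs: {logical}")
```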
The dual core CPUs are going to be based on the Pentium M, which doesn’t have hyperthreading – and isn’t likely to get it, because the P4 is the only chip with a pipeline long enough to make HT work well.
I agree – one of Intel’s top engineers said a while back that the decision to go with a high clockspeed was driven by management. They knew it would become a problem sooner or later, but MHz provided a simple number consumers thought they understood.
Now of course they’re backed well into a corner; they need to kill the MHz thing and they’re facing a very competitive AMD.
And not that it really makes much difference to their market, but Apple’s got a decent processor again too – the competition is pretty tough at the moment, and unfortunately for Intel physics is too.
The dual core CPUs are going to be based on the Pentium M, which doesn’t have hyperthreading – and isn’t likely to get it, because the P4 is the only chip with a pipeline long enough to make HT work well.
Actually, Intel has shown intent (I believe at IDF) to make dual core Netbursts and Itaniums in addition to the Pentium M. I think they’ve already demoed Netburst and Itanium dual cores, shortly after AMD’s Opteron dual core demo.
I guess we wouldn’t be running LONGHORN
The dual core CPUs are going to be based on the Pentium M, which doesn’t have hyperthreading – and isn’t likely to get it, because the P4 is the only chip with a pipeline long enough to make HT work well.
Hyperthreading is also worthwhile for architectures without crazy-long pipelines, otherwise IBM wouldn’t be using it for POWER [PC]. There was an article on Ars talking about HT and IBM a while back, but I can’t find it again now.
But HT is only a concept. So IBM, in their infinite wisdom and ability, might have found a way to get HT to work on a 20 stage pipeline, while Intel meanwhile has a technology that they have only been able to apply to 30+ stage pipelines.
This is all speculation, but it may be correct, and we all know how much Intel hates licensing tech (SOI, anyone) from anyone else..
WAY overdue. While they’ve consistently produced CPUs at higher clock speeds at higher prices, the mere fact that AMD’s “comparable speed” processors usually run anywhere from 20-30% lower in clocks is proof they have done nothing to optimize the instructions themselves.
Personally I’m looking forward to a “best of both worlds” CPU if someone can actually make one. As someone else said here, Intel has focused on one single path – clock speed – for too long.
More clocks usually means more expensive support hardware driving up price – if you optimize the instructions and pipes to garner the same speed boost, the support hardware doesn’t change – cheaper prices.
Bottom line – Intel woke up – it’s about time.
“So they are going to ignore the last 10 years of rants and arguments about why MHz is all that matters. This will be fun :-)”
No, Debman. The point was never MHz for the sake of MHz alone. The point was the outcome in raw processing power. You can turn this any way you want, and MHz may not be the most elegant way to do so, but Intel was always waaaay more powerful than that other company… can’t quite put my finger on the name, some fruit was involved..?! Apple couldn’t increase the performance any way, not even via MHz, so the real dead end here wasn’t x86 the Intel way. The outcome counts, nothing else.
Dream on, it was only during the loong G4 phase, when Motorola couldn’t get their thumbs out of their asses, that Intel was king.
The 68k Macs stomped Intel’s offerings at the time, the 604 made the Pentium look really pale, and the G3 also kicked Intel’s butt (remember the snail ads? Apple was so provocative – the first time ever – simply because they actually had a point there). With the first G4 to come out it was a head-to-head race, but then suddenly Motorola went into hibernation and nothing happened for years, so Intel regained the speed crown.
(Which by the way they’ve lost again against the G5s)
They really are not that impressive performance-wise. The G5 is only good at a few things. Architecturally it is a very nice design, but in its current state I think it leaves a lot more to be desired. It’s not fast… it’s alright. I have used a dual 2.5 GHz machine with 4 GB RAM… my laptop boots up faster, responds faster and runs games better. Sure, the OS is spiffy, but that’s all Apple has going for it. An AMD FX at 3 GHz and above with PCI Express architecture is where it’s at… and even they are shifting to dual cores… yumm.
Is there a difference between one dual core CPU and having two normal CPUs on one motherboard?
Is one better than the other? Is it the same?
“Then why does Apple still specify the Hz on their machines? Why don’t they have a better way to show that they are indeed faster than x86 machines, as Jobs always insists?”
Why are you trying to turn Intel’s problem around and put it on Apple? Dude, that’s a very weak argument.
“Current Windows CPU licensing applies to *physical* CPUs. So XP Home will use both cores of a dual-core CPU (or “four” in the case of a dual-core HT CPU). XP Pro will give you four/“eight”.”
The way I understand this is that XP will only see one CPU due to the HAL. With that same thought in mind, I believe XP Pro will only see four CPUs with the existing HAL.
With 2000 Pro it is possible to swap HAL files to allow for additional processors. Though I have never seen this personally, it apparently can be done.
What I am wondering is if the HAL won’t screw the deal before the licensing even gets into the equation.
For example, I’m not sure XP Home would recognize a dual core chip as any more than a single CPU without MS offering an updated HAL (or selling one).
I find the occasional comment about G5 responsiveness very bizarre. I have a dual 2.5 G5 machine. I have yet to see a machine more responsive (especially in games!!!!). How a laptop is supposed to be faster than a dual G5 is beyond me.
All in all, there are obviously many things to consider, the state of the software being used for starters.
On a side note, my 733MHz G4 boots up faster than my G5 (but that’s all it does faster). Interestingly, my G4 boots up faster than anything I have ever seen, including the spangly new 3GHz+ machines at work.
http://www.theinquirer.net/?article=19105
Because this thread has brought out all the “MHz myth” people. And who decided it was a myth? Apple…
I can find no convincing argument to buy an Intel processor in the world today.
Stability (more specifically, motherboards & their chipsets).
Where do you get that omniscient info from?
Can’t be Google. You forgot to mention that AMD CPUs are cheaper, cooler, and have better performance at less voltage (AMD64).
Anyone who knows anything about methods of processing data understands that the clock of your processor doesn’t measure your processor against someone else’s.
MHz is nothing more than a clock. A 2.5GHz P4 will be faster than a 2.4GHz P4, assuming all else remains constant. However, make the pipeline 50% longer (Prescott) and up the clock speed to 2.6GHz, and you may find a slowdown, though in some data processing you may retain a speed increase.
It’s all terribly complex. MHz doesn’t even compare within a company’s own line, as early P4s demonstrated by often benchmarking slower than P3s at the same clock. Wonder why? Some estimate it had to do with a large pipeline length increase.
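To make the pipeline point concrete, here is a toy model (all numbers illustrative, not measurements) in which every mispredicted branch flushes the whole pipeline:

```python
# Toy model: longer pipeline means a bigger flush penalty per mispredict.
def time_per_instr(ghz, pipeline_stages, branch_freq=0.20, mispredict=0.10):
    cycle_ns = 1.0 / ghz
    base_cpi = 1.0  # idealized one instruction per cycle
    # each mispredicted branch costs a full pipeline's worth of cycles
    penalty_cpi = branch_freq * mispredict * pipeline_stages
    return (base_cpi + penalty_cpi) * cycle_ns

# A 20-stage part at 2.4 GHz vs a 50%-longer 30-stage part at 2.6 GHz:
print(time_per_instr(2.4, 20))  # ~0.583 ns per instruction
print(time_per_instr(2.6, 30))  # ~0.615 ns per instruction -- slower!
```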
Why do AMD CPUs shine in games? I’ve heard everything from “better memory interfacing” to “3DNow is awesome.”
There is no end-all, be-all general purpose processor that’s superior in every way to another of the same time period, unless the others were just that badly done. The thing that Intel has proved is that with 10 times the R&D investment you can hold compatibility forever and keep up with performance, as they did in the 90s against Alpha, MIPS, PPC, etc. And it paid off, because compatibility was worth a lot to many people.
Apple did not decide it was a myth. Apple tried to explain to the layman that computers are complicated, and you can’t just say “derr, mine’s got more MHz.”
I’m excited to see dual core emerge, and I’d like to see Intel do cool things with Pentium M, because it looks to me like a better processor than P4.
Also, even a 30 stage pipeline shouldn’t make your computer “unresponsive.” I don’t see how 30 stages can be noticed when the CPU completes billions of cycles in a second. That’s like noticing dust on your glasses: you only see it when you take them off (we’re talking one speck).
I can find no convincing argument to buy an Intel processor in the world today.
“Stability (more specifically, motherboards & their chipsets).”
You’re kidding me, right? The days of the chipsets for the Thoroughbred cores are well behind us, my friend. A64 and Opteron are rock solid, as are their chipsets. Every PC in my organization here is now running on A64 based machines with Nvidia chipsets. Flawless, in a word. If stability were an issue, then why is the Opteron selling hand over fist via IBM, HP and now Sun in the server arena? Stop FUD’ing from past experiences two architectures ago.
This news concerning Intel not offering 4+ GHz Pentium 4 processors shouldn’t shock anyone, considering there were rumours of a name change (ie: P5) for the upcoming dual core processors. Anyway, I’ve noticed that when people argue about which manufacturer produces a better processor, there are often comments on forums that Intel’s Hyperthreading technology is a joke. In reality it’s not, for those applications that take advantage of Hyperthreading. A single Hyperthreaded P4 is cheaper than purchasing two AMD processors or a dual G5. I’ve found that having Hyperthreading enabled during benchmark render tests with software such as Mental Ray gives me faster times, which equates to lower cost. Even though most software cannot take advantage of either Hyperthreaded processors or dual SMP processors, the real advantage goes to those using software that does. Studios, whether for game design, film or broadcast, typically run software that uses multiple processors in succession during rendering. Instead of a film or game taking months to render, technology such as Hyperthreading can cut that time down to weeks or even hours.
So the point is: don’t put down technology you don’t really need, because there are those that do need and want it. Also, make sure to purchase hardware that is necessary for your needs and not what the sales person is trying to scam you into buying. For example, if all you’re doing is using something like Adobe Photoshop, Gimp, playing games or just surfing the web, then you don’t need a dual processor system like Apple’s dual G5, or even Hyperthreading turned on, because your applications will not use more than one processor. This would be like a kid playing games trying to convince his/her parents that they need a DCC (Digital Content Creation) graphics card such as an ATI FireGL or NVIDIA Quadro. Save your cash for what’s important.
I can find no convincing argument to buy an Intel processor in the world today.
“Stability (more specifically, motherboards & their chipsets).”
You’re kidding me, right? The days of the chipsets for the Thoroughbred cores are well behind us, my friend. A64 and Opteron are rock solid, as are their chipsets. Every PC in my organization here is now running on A64 based machines with Nvidia chipsets. Flawless, in a word. If stability were an issue, then why is the Opteron selling hand over fist via IBM, HP and now Sun in the server arena? Stop FUD’ing from past experiences two architectures ago.
I think it should also be noted that not all motherboards are created equal. A lot of people seem to think that you can buy all the cheapest parts and everything will be fine. But more often than not, if you buy a bargain basement motherboard from the likes of ECS, PCchips, or Biostar, then expect a battle to keep stability issues at arm’s length. The same goes for cheap cases with poor cooling, power supplies, and RAM; really, anything can throw off the stability of your system. Time and time again I’ve seen people complaining about their ****** system being a piece of crap, crashing all the time. Then when I go to have a look, I discover everything they bought was the low-ball deal of the week.
True, chipsets can be buggy too, but this is nothing like it was in the early days of AMD and Cyrix, where they just couldn’t get a fair shake with buggy and incompatible chipsets from VIA, ALi, and SiS. These days their chipsets do have the occasional bug, but if you buy from solid brands like Asus, Albatron, Gigabyte, and Tyan (etc) then they’ll more often than not have worked around any issues.
Personally, I expect VIA to be right up there with Intel when they shed PCI. Apparently they made some bad decisions while designing their PCI logic, as evidenced by the latency and bus saturation issues that were discovered about two years ago. I’m pretty sure they just decided to work around it at the time, instead of redesigning a soon-to-be-retired bus from the ground up. Even then, though, the VAST majority of people will never see issues from it, especially since VIA addressed it.
Why don’t manufacturers rate chips using the MIPS (Million Instructions Per Second) rating like they did in the old days?
Why don’t manufacturers rate chips using the MIPS (Million Instructions Per Second) rating like they did in the old days?
Because it’s even worse than measuring by clock speed.
Maybe they are trying to get rid of Moore’s Law? 🙂
Is there a difference between one dual core CPU and having two normal CPUs on one motherboard?
Performance in general should be roughly the same (although there are specific cases when one or the other might be faster). The big advantage is in cost – a motherboard for a dual core CPU should be no more expensive than a single-core CPU motherboard, whereas dual processor boards tend to be significantly more expensive because they are much more complicated to make.
Where do you get that omniscient info from?
Experience.
Can’t be Google. You forgot to mention that AMD CPUs are cheaper, cooler, and have better performance at less voltage (AMD64).
And…? I never said there was anything wrong with AMD’s CPUs – I think they’re great.
You’re kidding me, right? The days of the chipsets for the Thoroughbred cores are well behind us, my friend.
Such short memories. I’m talking about the history of buggy, incompatible and often just plain broken chipsets VIA has had since the days of the *486*.
Maybe they’ve finally gotten their act together in the last year or two – but the numbers certainly aren’t on their side. There’s a lot of bad blood there. Personally, I doubt anything short of divine intervention will see me buying or recommending anything using a VIA chipset ever again.
A64 and Opteron are rock solid, as are their chipsets. Every PC in my organization here is now running on A64 based machines with Nvidia chipsets. Flawless, in a word.
How can you say that when they’ve only been around for a year?
If stability were an issue, then why is the Opteron selling hand over fist via IBM, HP and now Sun in the server arena? Stop FUD’ing from past experiences two architectures ago.
One architecture ago. And the one before that. And the one before that. Etc.
I don’t have a problem with AMD’s CPUs at all – I’ve owned several of them in the past – it’s just that the motherboards required to use them are, almost to a unit, buggy pieces of shit.
Because the other name for MIPS is Meaningless Instructions Per Second: it is a very bad benchmark.
AFAIK the only benchmarks that are valuable are:
1) application benchmarks
2) SPECint, SPECfp
Such short memories. I’m talking about the history of buggy, incompatible and often just plain broken chipsets VIA has had since the days of the *486*.
Maybe they’ve finally gotten their act together in the last year or two – but the numbers certainly aren’t on their side. There’s a lot of bad blood there. Personally, I doubt anything short of divine intervention will see me buying or recommending anything using a VIA chipset ever again.
So basically, what you’re saying is, “don’t trust my opinion on this because I can’t separate the present from the past.”
I completely agree with you that there were problems in the past, that’s what happens when smaller companies try to stay competitive on the same playing field as a big company like Intel. It still wasn’t forgivable, but to stay bitterly attached to the past, and tell everyone that VIA sucks and their chipsets shouldn’t be used because they bit you once 8 years ago… that’s just bullheaded and close-minded.
Is there a difference between one dual core CPU and having two normal CPUs on one motherboard?
Performance in general should be roughly the same (although there are specific cases when one or the other might be faster). The big advantage is in cost – a motherboard for a dual core CPU should be no more expensive than a single-core CPU motherboard, whereas dual processor boards tend to be significantly more expensive because they are much more complicated to make.
Well, in the case of Intel, I believe they’ll be sharing at least the L2 between cores, so there’s going to be a penalty for that. AMD won’t, and this could change either way after the initial offering (I’m guessing Intel will do the changing). Another difference is the latency between the cores, though I don’t know how much this will play out. I guess someone, eventually, will think of some shiny new tech to take advantage of this.
Not a surprise at all, as the P4/Netburst’s days are numbered.
http://www.theregister.co.uk/2004/05/07/intel_cancels_tejas/
Still, it’s always nice to see Intel eat crow now and again.
One core can talk to the other core at the speed of the clock rather than the speed of the bus.
So basically, what you’re saying is, “don’t trust my opinion on this because I can’t separate the present from the past.”
No, I’m saying that you shouldn’t infer too much from data points that, currently, are the exception and not the rule.
Oh, and that you should use more than just a bunch of numbers in a poorly-spelt “review” on some fanboy website to make decisions.
I completely agree with you that there were problems in the past, that’s what happens when smaller companies try to stay competitive on the same playing field as a big company like Intel.
Funny how other “smaller companies” manage to make quality products. Exhibit A – AMD vs Intel.
Or are you somehow going to try and blame VIA’s incompetence on Intel?
It still wasn’t forgivable, but to stay bitterly attached to the past, and tell everyone that VIA sucks and their chipsets shouldn’t be used because they bit you once 8 years ago… that’s just bullheaded and close-minded.
8 years ago? Pfft. Try less than a year ago (daily, actually, if you consider old machines I still have to deal with that have VIA chipsets). When I talk about VIA’s history from the days of the 486, I’m not talking about one experience I had back with a 486, I’m talking about hundreds – if not thousands – of experiences I’ve had – and continue to have – with machines using VIA chipsets and just about every x86 CPU released in the last 12-odd years.
*Maybe* VIA have finally managed to get it right. If they can keep getting it right for another few years, to show it isn’t just a fluke, I might just consider buying one of their products again. But it’s going to require a lot more than juvenile rantings on fanboy websites (the typical AMD motherboard “review”, after using the product for maybe a whole week) to convince me.
Funny to read so many fanboy comments against Intel (or the other way round, pro-AMD or Apple). Do you really think you would make better design/commercial decisions than Intel? You probably are better than your country’s national football team manager, too.
Everything working is good, choose the best for *you*…
Good job, except that the engineers AGREE WITH US. They said it was Intel’s management and marketing that kept pushing the GHz.
No, I’m saying that you shouldn’t infer too much from data points that, currently, are the exception and not the rule.
Oh, and that you should use more than just a bunch of numbers in a poorly-spelt “review” on some fanboy website to make decisions.
That’s just it, I don’t think it is the exception anymore. And for the record, I don’t read reviews on fanboy websites, I read them from the likes of Tech Report and Anandtech. And no, I don’t expect reviews to give me any idea of reliability, that only comes from experience.
Funny how other “smaller companies” manage to make quality products. Exhibit A – AMD vs Intel.
Or are you somehow going to try and blame VIA’s incompetence on Intel?
Tell me, how many generations did it take AMD to be competitive with more than an Intel clone? Sure, part of that had to do with available chipsets, but their architectures were never very fast until the K7, and even then I think a lot of their popularity was due to the proliferation of golden fingers devices in an overclocking-crazed enthusiast market.
I’m not trying to blame VIA’s incompetence on anyone, but I’m willing to recognize the fact that coming into a market against a giant competitor isn’t the easiest thing to do. Still, they could have focused on smaller markets and built their way up after Intel started designing chipsets, which in a sense they already did by building chipsets for AMD and Cyrix.
8 years ago? Pfft. Try less than a year ago (daily, actually, if you consider old machines I still have to deal with that have VIA chipsets). When I talk about VIA’s history from the days of the 486, I’m not talking about one experience I had back with a 486, I’m talking about hundreds – if not thousands – of experiences I’ve had – and continue to have – with machines using VIA chipsets and just about every x86 CPU released in the last 12-odd years.
*Maybe* VIA have finally managed to get it right. If they can keep getting it right for another few years, to show it isn’t just a fluke, I might just consider buying one of their products again. But it’s going to require a lot more than juvenile rantings on fanboy websites (the typical AMD motherboard “review”, after using the product for maybe a whole week) to convince me.
You know, it’s funny, I have an Asus P3V4X in my primary machine, it’s based on an Apollo Pro 133a chipset which was one of the first chipsets in what I see as VIA’s path towards quality. Even with this chipset, I have had no problems besides some finickiness over serial ports and my ISA modem, and that I strongly suspect is more the modem’s fault.
I have friends who have, or have had, VIA chipsets ranging from the AP133x, KT133x and KT266x to the KT400x. With each generation I’ve seen fewer and fewer reports of problems, and most of the bad reports I do get are due to a low quality component, most often the power supply. Luckily my friends listen to my advice and buy quality branded motherboards, so in most cases where the motherboard was the problem, it was that bad batch of capacitors. If you factor in what I see on message boards, the same trend extends to the KT600 and K8T800/Pro.
Granted, it doesn’t sound like I have as much experience as you, in general, because I don’t work in tech. But since you are in tech, I wonder: what kind of array of motherboard brands are you seeing? Limiting your set to the AP133a and beyond, what kind of phenomena do you see between bargain basement brands vs reasonable quality brands? I’ll admit, Intel chipsets may be more foolproof, but you can hardly blame VIA for those manufacturers who cut every corner they can think of.
It used to be that if you got a VIA chipset you’d be fairly lucky if it worked. Now they’re to the point where problems are rare, nearly to the point of Intel, I think. You speak of Intel like they’re perfect; do you remember that big i820 recall? Or how about their more recent 915 Grantsdale leakage problem?
http://www.theinquirer.net/?article=16840
And those are just two examples that come to the top of my mind. It still sounds to me like you have a vendetta against VIA and aren’t at all recognizing the great improvements they’ve made in the last 5 to 6 years.
I can find no convincing argument to buy an Intel processor in the world today.
“Stability (more specifically, motherboards & their chipsets).”
You’re kidding me, right? The days of the chipsets for the Thoroughbred cores are well behind us, my friend. A64 and Opteron are rock solid, as are their chipsets. Every PC in my organization here is now running on A64 based machines with Nvidia chipsets. Flawless, in a word. If stability were an issue, then why is the Opteron selling hand over fist via IBM, HP and now Sun in the server arena? Stop FUD’ing from past experiences two architectures ago.
AMD has always made awesome CPUs that suffered from crappy chipsets. The fact is, x86 is Intel; they are synonymous. Sure, AMD CPUs are 99.999% x86 compatible, but it’s that 0.001% that locks up your machine for no reason (often blamed on Windows). Chipset manufacturers that make chipsets for AMD motherboards do their best to emulate things like AGP and PCI bridges (among other devices) and fail to get 100% compatibility. Intel chipsets are engineered specifically for the processor they work with – a perfect matchup. I could never understand why AMD never produced a complete chipset to pair with their processors. If they had, I would have switched to AMD a long time ago.
That’s just it, I don’t think it is the exception anymore.
Well, as I said, they’ve got a lot of bad blood to work past with me, so we’ll just have to agree to disagree on that.
Tell me, how many generations did it take AMD to be competitive with more than an Intel clone?
Once they actually started trying? Only a couple. AMD was just cloning Intel chips until the K5, which, while it performed dismally, was actually first to market with the “modern x86” design of a RISC-ish core and an x86 decoder.
The K6 series was fairly competitive (although, again, somewhat crippled by motherboards) – it was just late to market more than anything else (but it ramped up reasonably well).
As you note, the Athlon was the first CPU AMD had that was not only competitive performance-wise, but actually hit the market early enough to give Intel a fright. That was 1999 IIRC, so it took them about 7 years.
I’m not trying to blame VIA’s incompetence on anyone, but I’m willing to recognize the fact that coming into a market against a giant competitor isn’t the easiest thing to do. Still, they could have focused on smaller markets and built their way up after Intel started designing chipsets, which in a sense they already did by building chipsets for AMD and Cyrix.
You have to remember VIA have been around building PC chipsets since the days of 20MHz 486s (and possibly even earlier). They’re not exactly new players in the market. If you want to compare with AMD (although IMHO, building chipsets is a lot easier than building CPUs), they should have “come good” around the 1997-1999 timeframe.
Nvidia have only been making chipsets for a few years now, and they’re already streets ahead of VIA in my experience. SiS are another company who used to have very buggy chipsets but are now much, much better.
But since you are in tech, I wonder: what kind of array of motherboard brands are you seeing? Limiting your set to the AP133a and beyond, what kind of phenomena do you see between bargain basement brands vs reasonable quality brands? I’ll admit, Intel chipsets may be more foolproof, but you can hardly blame VIA for those manufacturers who cut every corner they can think of.
Cheaper motherboards are definitely more likely to use cheaper, less reliable parts and have much lower QA. I must admit that I don’t keep track of exact variants anymore because I simply don’t have the interest in the lower level bits and pieces that I used to.
It used to be that if you got a VIA chipset you’d be fairly lucky if it worked. Now they’re to the point where problems are rare, nearly to the point of Intel, I think. You speak of Intel like they’re perfect; do you remember that big i820 recall? Or how about their more recent 915 Grantsdale leakage problem?
I do. They’re not the only ones either. For example, while the Intel TX (old Pentium-class chipset) didn’t have any reliability or compatibility issues, it was rather ridiculously limited to an effective max of 64MB of RAM by its L2 cacheable area. The 815 was even worse, being unable to even recognise more than 512MB. Intel have had their screwups – it just seems to me they’ve had fewer, the problems were plainly obvious, and they’ve behaved better when those problems occurred.
IME, VIA chipsets have a tendency to throw up “weird” problems that only manifest in certain circumstances – a PCI card in slot 3 with 3 IDE devices, or a particular brand of PCI card, or having a drive on the secondary IDE controller as a slave with no master, etc, etc. In particular, VIA-based motherboards really don’t seem to like being “maxed out” – lots of RAM or PCI cards, or lots of drives plugged in. They’re very frustrating problems to try and troubleshoot.
And those are just two examples that come to the top of my mind. It still sounds to me like you have a vendetta against VIA and aren’t at all recognizing the great improvements they’ve made in the last 5 to 6 years.
The last VIA based motherboard I personally bought (I think; it’s been a while and no, I’m afraid I don’t remember the brand) used an Apollo Pro chipset and had a 667MHz P3 on it. After exhibiting numerous “weird” problems in the vein of those described above, I ditched it for a second hand Asus P2B-DS (dual Slot 1 P3, Intel BX) motherboard and a pair of 700MHz Coppermine chips. It’s fairly maxed out – 4 PCI cards (2 network, 1 sound, 1 IDE), 2 IDE devices on the built-in channels, 2 IDE hard disks on the add-in card, 2 SCSI disks on the Ultrawide bus, 2 devices (tape drives) on the narrow bus, a Matrox G400 dual head and a gig of RAM. To top it all off, the CPUs are overclocked to ~800MHz (upped the bus speed from 100 to 112MHz). It’s been, without a doubt, one of the most reliable pieces of hardware I’ve ever owned – I even brought it in to my newest job and it is my current desktop, running Windows 2003.
I have, however, used many VIA based systems through work, and I still see lots of little “weird” problems that just put me off. I’ve also got a friend who rolled over to a new VIA motherboard every 12-18 months (sucker for punishment) from about 1997 until about a year ago, when he got an nForce based board (which has been trouble free) – he also got lots of “weird” problems.
Perhaps I am being a bit harsh – although I wouldn’t call it a “vendetta” – but I really have had a lot of bad experiences that have cost me a lot of time and stress because of VIA hardware. Because of that – and particularly with players like Nvidia and ATI entering the marketplace – I really don’t think I’ll be going back; there are just too many bad memories.
Well, as I said, they’ve got a lot of bad blood to work past with me, so we’ll just have to agree to disagree on that.
To be fair, I should tell you that I have the same problem with Abit. A BX6R2 with a CPU slot contact rattling around the box, a ZM6 with a bad AGP capacitor before the big capacitor problems, a friend’s USB controllers dying, etc… I like to think that if they improved their quality control I’d be able to take a step back and adapt to it, but I just don’t know if that would be the case.
You have to remember VIA have been around building PC chipsets since the days of 20MHz 486s (and possibly even earlier). They’re not exactly new players in the market. If you want to compare with AMD (although IMHO, building chipsets is a lot easier than building CPUs), they should have “come good” around the 1997-1999 timeframe.
Yes, and the reason Intel got into the chipset business in the first place is because VIA and company couldn’t make a decent chipset. Obviously they had problems and didn’t start making the necessary changes to “come good” until about 1999. Also, I’m by no means a microengineer, but I’m not so sure motherboard chipsets are much easier than a CPU. Sure it’s not as many transistors and they only really have to worry about load in server and rendering environments, but they’re the center of the system, tying everything else together, making sure they get along. That doesn’t exactly sound easy to me.
Nvidia have only been making chipsets for a few years now, and they’re already streets ahead of VIA in my experience. SiS are another company who used to have very buggy chipsets but are now much, much better.
I have to say, I haven’t played much with an Nvidia chipset, and haven’t suggested them to anyone (though a few friends have gotten them). I’m turned off by the heavy use of binary drivers in Linux. That approach is ok with a video card (though I’d still prefer source), but it’s a big turnoff for chipset features. Unfortunately SiS has been relegated to the budget market because of their performance, despite their (in my estimation) good stability. I’ve been considering picking up a Foxconn board on a SiS chipset though.
Correction: the important factors are how fast it can run code, how warm it gets, and what it costs – ADDITIONALLY, and in my view perhaps more importantly, how quiet the machine is, and let’s not forget how much electricity the whole system uses.
Why do you think PCs haven’t moved into the lounge at home long ago? Too much bloody noise, for one reason. A machine like a Mac with a proper sleep mode saves a lot of electricity when sleeping. That machine is getting closer to the ideal lounge-lizard machine.
Until the computer industry makes quiet, green (lean power usage) machines, they will never have a computer in every home and certainly not in my lounge!
Precisely why I’m waiting for the Sempron ULV before my next build. Just because the mainstream isn’t there yet doesn’t mean the rest of us can’t fall back on notebook chips. Now we just need Intel to promote the Pentium M on the desktop, or even better, use the same socket like AMD does. There’s also VIA and Transmeta for those who don’t need as much power, though I’m not sure where Transmeta is on the desktop just yet. I know they’re at least moving toward it.