Apple Computer on Tuesday unveiled souped-up Power Macs, in the first major upgrade to the professional systems in about a year and a half. The low-end model retains the 133MHz bus, while on the mid- and high-end models it's cranked up to 167MHz; the high-end model boasts 2MB of L3 cache. The creaking ATA-66 IDE controller is retained, but all models also gain an additional ATA-100 bus. The Apple Store lists the 2x867MHz model (256MB RAM/60GB HD/DVD-CDRW) at $1,699 and the 2x1GHz model (256/80/SuperDrive) at $2,499, and both are available for delivery right away. Read the reports at C|Net News.com and TheRegister. You can check out the new Macs here or order them. Our Take: The speed bump is not significant enough to compete with the high-end x86 models; however, it is welcome, and the prices seem a bit more reasonable this time. What concerns me is the fact that OSX is not exactly a multithreading beast (compared to, let's say, BeOS), however Jaguar 10.2 and especially its Finder have taken steps towards resolving this.
Problem is, the large majority of OSX applications are not multithreaded at all, so a dual machine won't do you much good if you want all this power to run a specific app as fast as it could go. Apple should educate their third-party developers on how to write proper multithreaded applications, because this is not something that most developers usually know or do. Especially now that Apple is going full speed for dual configurations in order to compensate for Motorola's complete lack of interest in the G4/G5. Educating developers properly is something that Be never did, and, well… read the rest in that discussion here. Be went the way of the dodo, and it would be a real shame to see Apple get in trouble too. Please Apple, educate your devs about multithreading. I won't argue about slow/expensive machines this time. Just let your third-party developers know what they need to know to make full use of these SMP systems!
>>Jesus… you must be a brave guy to even dare to tell us that you put such a task through a system with only 512 MB RAM. Why don't you check your system monitor/task manager (don't know what it's called in English) to see what the RAM utilisation says?! It will be maxed out, and if it is, then this means swapping, swapping, swapping to the pagefile… if you don't spend that extra money for sufficient RAM, your investment is largely wasted… even moaning about the outcome is plain — you know what…<<
I was wondering why Windows XP takes 20+ minutes to run the scandisk utility on startup 🙂
The G4 DOESN'T have to have the same MHz as an Intel chip. Hell, AMD doesn't even, but that is another story. Macs run Mac OS X, not XP; they are completely different platforms, from the OS and software to the hardware. I mean, you don't hear people bitching about Sun servers just reaching 1GHz, or SGI being slow at 600MHz (or whatever)? BECAUSE THESE ARE DIFFERENT PLATFORMS. Oh yeah, what does Apple hardware have to do with OSes? OSnews and all…
Anonymous
LOL… you can figure out how to use Linux but not Mac OS X! This sounds so made up.
You also claim that you learned Windows XP in about a week. That’s pretty sad if you ask me. OS X takes less than a day to learn.
You must have me confused with the other guy. I’ve never used XP before.
CattBeMac’s Link on Xserve vs. PowerMac Benchmark
Darn, that stinks! After seeing the Xinet benchmark, I thought the DDR RAM was going to make a very big difference. Oh well.
Well, big letdown… This “test scene” takes 5H45mins to render on the P4. Gain: 45 minutes or about 8 percent.
To his credit, the P4 only has 512 megs of RAM versus 1.5 gigs for the Mac.
Check and see if the memory was maxed out on either of the two machines. I'm not sure if that would explain the similarity in the performances either way, however. I think that rendering very large movies is more a function of disk bandwidth than memory bandwidth, since you have to buffer to local storage. If it were a memory-bound issue, I think you would have seen even worse performance on the Pentium than on the Mac.
>>CattBeMac’s Link on Xserve vs. PowerMac Benchmark
Darn that stinks! After seeing the Xinet benchmark, I thought the DDR RAM was going to make a very big difference. Oh well<<
I wouldn’t lose hope yet, supposedly the new G4s are more suited for the Xserve chipset than what was originally delivered with the Xserve 2 months ago, and with Mac OS X (Jaguar) 10.2 on the way, we might see some interesting numbers indeed. I think Eugenia summed it up well that more Mac developers need to optimize their software for MP awareness!
Quote:
I would place bets that application developers who weren’t optimizing for multiprocessor systems before sure will now.
Why do people assume that just because Apple has made their pro line of PowerMacs all dual-CPU, software companies are going to go "Oh my God! We have to start writing multithreaded software now!"?
They can write multithreaded software right now and it will still run on single-CPU systems. Take Lightwave, for example. You can configure the renderer to use up to 8 threads, but on a single-CPU system that doesn't improve performance, and from personal experience it actually hurts performance. Software for special purposes that can take advantage of multiple processors is already written to do so. All the major 3D packages and video editing software support multiple processors. What do you want, a word processor optimized for dual CPUs?
A P2 couldn’t get out of its own way, much less anything else!
Note to self start using sarcasm tags. (such as <sarcasm>A P2 mabye</sarcasm>)
There were over two gigs of footage for this render, so I would say that both machines saturated their RAM (the PC much earlier than the Mac, of course). During renders, After Effects caches everything it can in RAM frame after frame until it fills the cache, then replaces the oldest unreused data in the cache with new data. OSX's system caching also seems to have a significant impact.
The funny part, though, is that on a different, shorter render (much simpler, high-rez 2K files, around 40 minutes for 150 frames) a TiBook 667 with only 128 megs of RAM (basically just enough to run OSX) was faster than the dual G4 by about 12 percent. This only shows how poorly AE for Mac is threaded, IMHO. (I will not even speak about what it says of our valiant NT boxes 🙂
Combustion is a very different animal, a bit unstable and leaking RAM here and there, but so much faster for rendering (fully multithreaded). I will post some Combustion render results when I have a break in between projects.
“This only shows how poorly AE for mac is threaded IMHO. (I will not even speak about what it says of our valiant NT boxes 🙂 ”
Try it on Windows. NT etc. multithreads great, and many, many programs for PC are multithreaded. I guess because it's new to Mac OS, you guys are having some teething problems with learning a new programming model.
Glenn
An IT company develops a new technology for the PC market (being so large).
The product is released for PC.
Months pass.
Apple decides it's going to be one of the few current technologies included in its line. It claims it invented the technology because it's not yet bundled in Windows. (The original vendor's sales are unaffected.)
Microsoft realises Apple is advertising this technology as "brand spanking new and built into the OS", so MS realises stupid users might think you can't do this on a PC, so MS makes its own solution… sometimes killing the original vendor because it's now free.
Microsoft gets criticised for bundling the technology in windows.
Glenn
Apple hasn't invented much at all… none of their iBuyRebrandAndSell software, and not 95% of the things they claim they do.
Glenn,
You were obviously in a coma during the evolution of computing and its history… what a dumb post, unless you’re just being your usual sarcastic self!!!
🙂
Well, well… I think DP boxes are the only solution for Apple now, but besides the reason they had to go DP, DP in all the units will finally make developers write better multithreaded applications.
On Windows almost no one has DP systems; with Apple, now everyone does. It will be interesting to know whether they will stay dual or not in the future.
Are we going to have, thanks to AMD, 64-bit hardware and software in a year?
Imagine those CPU-bound tasks like video encoding once rewritten to take advantage of a 64-bit CPU.
Apple will have big trouble in a few months once the K8 (aka Hammer) is released.
Sure, building K8-based PeeCees will be expensive, but no more than a top-of-the-line Mac.
Probably Apple is waiting a few months to find the best evolutionary step, waiting for the arrival of the AMD beast.
Once the K8 is released, there will be no Apple marketing able to say that their units are faster…
Hopefully, if they get the G4 to 1.6 GHz with better FSB speeds, they will be in a safe position for 6 months at least (while waiting for 64-bit software to come out).
>>Imagine those CPU-bound tasks like video encoding once rewritten to take advantage of a 64-bit CPU.
Apple will have big trouble in a few months once the K8 (aka Hammer) is released.<<
Well, I don't think Apple has anything to worry about, since IBM will be demonstrating its new 64-bit PowerPC CPU this October!
Arnaud: To keep it short, a computer is just a tool. What does make a difference is the applications you’re running on it and the user experience.
The wonder of it all is that in all your examples, if the PC counterpart is actually faster, the difference isn't as significant as it is now (and it is getting bigger and bigger). One of your examples says that the G3 is slower than the P3… which is baloney, back in 2000.
CattBeMac: That depends on what you're running… the Xeon got its butt pounded by the G4 on the RC5 benchmarks, and a dual Xeon isn't going to come cheap!
Can I see the RC5 benchmarks then? I mean one that isn’t rigged?
Besides, Xeons do come cheap, especially if you buy Compaq-branded: a few hundred dollars cheaper for something better than a PowerMac.
CattBeMac: Actually that isn’t true except for Intel designed benchmarks (SPEC).
Intel didn't design the SPEC benchmarks. Plus, even if Intel did design them, why did they for *years* lose even to AMD on those benchmarks?
Besides, it seems that only Motorola/Apple aren’t part of the consortium of this “Intel designed benchmarks” group (which even includes Intel’s archrivals Sun and AMD)
CattBeMac: And with the Xserve smoking the Dell server in a benchmark last month (which was a PIII, which is faster than a P4 at the same clockspeed)
P3s may have been faster than P4s a year and a half ago, but right now Pentium 4s are the fastest 32-bit processors out there (at least, that's what current benchmarks show).
Plus, notice that the Dell is significantly cheaper than other servers used in the benchmark and the fact that Dell isn’t close to being the #1 Intel-based server manufacturer?
CattBeMac: here are some results from not so long ago;
Ano Nymous: Apple ships useless apps (for me, that is): iTunes, iDVD, iMovie, iWhatever. Dell ships Microsoft Office XP Professional, which I could use very much. And there is still a difference of more than a thousand bucks.
Dell also ships some of its own apps (like its Digital Photography Suite) as recommended add-ons, and on some models they cannot be turned off.
Anonymous: You also claim that you learned Windows XP in about a week. That’s pretty sad if you ask me. OS X takes less than a day to learn.
I learnt Windows XP in a day. I got used to it in a week. I don’t know about TLy, but that’s me. (Previously, I was using KDE and Windows 2000).
CattBeMac: They’re about as exciting as the flawed SPEC benchmarks we know today 🙂
Agreed. :-). But it is the only benchmark agreed upon by more than one company.
CattBeMac: You don’t think the IT (and other MS/Intel advocates) aren’t as rabid?!
How many MS advocates are there in the first place? Plus, if there are people like me, I don’t like Intel either. Sure, they have the fastest – I don’t need the fastest :-).
JOel: And following on Apple’s heels will be the x86 manufacturers. Apple has pushed usb, firewire, flat panel screens, etc down into the x86 manufacturer world.
Uhmmm, Apple didn’t push LCD screens first. Apple may be the first people to get rid of legacy ports and go all USB, but it isn’t the first to push it either. And they invented FireWire for goodness sake, obviously they would be first in pushing it.
But I don't think you will see many dual-processor systems out there. Apple did it because of one thing: speed. Something x86 doesn't have a problem with now.
Satchel Buddah: Well, big letdown… This “test scene” takes 5H45mins to render on the P4. Gain: 45 minutes or about 8 percent.
You saved 45mins. That is quite good for me.
But in the first place, if you had gotten a dual Xeon, things would be much faster than on your P4, because After Effects is heavily multithreaded, and it can use two processors better than one.
In just over 8-9 months, look at how much speed and how many improvements Apple has gotten out of OSX. What other OS company has done that? MS hasn't.
OS X was released unoptimized and bloated, as well as slow. No wonder the speed increased. Also, Jaguar may be technically superior, but only in one way: its vector window system. But currently, Apple is only using that for eye candy, so I wouldn't call that a really big improvement. Anyway, I would rather use Windows XP than OS X. On OS X, text at small sizes is almost unreadable (and no, it's not the monitor's fault).
DF: I mean you don't hear people bitching about the Sun servers just reaching 1GHz, or SGI being slow at 600MHz
Because these platforms are faster
ckristian: we gonna have thanks to AMD 64 bit HW .. and software in 1 year?
Speed improvements for 32-bit software would already be felt. Once Microsoft and the various Linux companies optimize for x86-64, it will be even faster (more native).
CattBeMac: Well I don’t think Apple has anything to worry about since IBM will be demonstrating the new 64 bit PowerPC CPU this October!
But PeeCees would have Hammer before Apple decides to use another platform.
>>How many MS advocates are there in the first place? Plus, if there are people like me, I don’t like Intel either. Sure, they have the fastest – I don’t need the fastest :-).<<
Well all those MCSE types is a good start, they figure as long as Microsoft dominates the market, they have a job in the IT world!
>>But PeeCees would have Hammer before Apple decides to use another platform.<<
True, but by the time normal consumers can afford 64-bit machines (and I mean mass adoption), Apple will already have something in place. I think the idea of Apple adopting AMD/Intel solutions has become moot since the IBM announcement.
It's ironic, but the G4 demonstrates the problems with CISC design, hence why it can't scale. Why is this? Because its AltiVec instruction set is very complete; this means *large and based on maths functions* (CISC), not *simple instructions based on what's easy in hardware* (RISC).
The fewer instructions in SSE, MMX and 3DNow! enabled the P3 to scale to 1 gig way before the G4, and without some of the memory throughput problems. (Memory throughput is not just due to memory controllers but the complexity of each instruction as well.)
Intel and AMD added instructions that would speed up NORMAL 3D vector processing, so games etc. speed up lots. RC5 demonstrates a function not supported in SSE or 3DNow! because it's rarely used (the rotate instruction).
The G4 winning on RC5 but nothing else (which is how it pretty much is, hey) demonstrates how the G4 is CISC!
The P4 has the complete floating-point instruction set in SIMD, so it adds twice as much precision as possible on any G4 using AltiVec.
Glenn
>>It's ironic, but the G4 demonstrates the problems with CISC design, hence why it can't scale. Why is this? Because its AltiVec instruction set is very complete; this means *large and based on maths functions* (CISC), not *simple instructions based on what's easy in hardware* (RISC)<<
You got to be kidding me?!
>>The fewer instructions in SSE, MMX and 3DNow! enabled the P3 to scale to 1 gig way before the G4, and without some of the memory throughput problems. (Memory throughput is not just due to memory controllers but the complexity of each instruction as well.)<<
False… the extension of the pipeline is key!
>>Intel and AMD added instructions that would speed up NORMAL 3D vector processing, so games etc. speed up lots. RC5 demonstrates a function not supported in SSE or 3DNow! because it's rarely used (the rotate instruction).<<
Actually, Intel and AMD have been reducing their instructions; that is why they have been screaming RISC-centric since! And RC5 is as respectable in the industry as SPEC, so your bad excuse doesn't fly!
>>The G4 winning on RC5 but nothing else (which is how it pretty much is, hey) demonstrates how the G4 is CISC!<<
Is this a conspiracy theory brewing?… Intel and AMD are more CISC than Motorola and/or IBM’s offerings will ever be!
>>The P4 has the complete floating-point instruction set in SIMD, so it adds twice as much precision as possible on any G4 using AltiVec.<<
P4 might have a decent SIMD implementation, but it doesn't stack up to the performance of AltiVec, plain and simple!!!
>>Can I see the RC5 benchmarks then? I mean one that isn’t rigged?<<
http://www.geocities.com/sw_perf/RC5.html
http://www.xlr8yourmac.com/systems/dual_1ghz_performance_test.html
>>Besides, Xeons do come cheap, especially if you buy Compaq-branded, a few bundred dollars cheaper for something better than a PowerMac.<<
Not as cheap as you think!
>>Intel didn’t design the SPEC benchmarks. Plus, even if Intel did design it, why did they for *years* loose even to AMD on that benchmarks?<<
Like I said before, it was sarcasm, though most folks will agree that SPEC plays in Intel's favor… as for AMD, they didn't win much before the K7 Athlon; the K6 was terrible!
>>Besides, it seems that only Motorola/Apple aren’t part of the consortium of this “Intel designed benchmarks” group (which even includes Intel’s archrivals Sun and AMD)<<
Actually Motorola is a member, though not active at this time from what the site shows!
>>P3s may be faster than P4s a year and a half back then, but right now Pentium 4s are the fastest 32-bit processors out there (at least current benchmarks shown..).<<
Well, if you look at the clock speed of that PIII in the Dell server being tested (1.4 GHz): back when the P4 came out at around those clock speeds, the PIII was faster, which is why the P4 got lots of bad press, since Intel was saying it was better than the PIII!
>>Plus, notice that the Dell is significantly cheaper than other servers used in the benchmark and the fact that Dell isn’t close to being the #1 Intel-based server manufacturer?<<
Not much cheaper than Apple’s offering if you look at the prices!
>>CattBeMac: here are some results from not so long ago;
http://www.barefeats.com/pentium4.html
A flawed benchmark. The people behind it failed to understand that 2000MHz is not the same as 2x1GHz. Why don't they use dual 1GHz P3 Xeons instead?<<
They never said '2000MHz' is the same as '2x1GHz'… where do you see that? They use what they can get their hands on… it seems that PC users don't like to lend their machines for testing, as the guy from BareFeats put it!
>>CattBeMac: I noticed that you can’t accept the fact that G4s are slower than x86 processors. Accept it. There’s nothing wrong. I’m a huge fan of Athlons even though P4s beat them.<<
Actually I do not disagree that the fastest x86 is faster than the fastest PPC, but I will argue that the PPC is faster than x86 clock for clock, which is what I only argue about… of course other factors have to be taken into consideration (the whole I/O infrastructure)!
>>(Besides, notice that the PowerMac were given a faster graphics card, especially the 1900+ Athlon.)<<
Like I said, they test what they can get their hands on!
Some benchmarks are in, comparing the SDRAM dual-1GHz PM with the new DDR dual-1GHz.
http://www.barefeats.com/pmddr.html
It appears to be a wash… the article draws the conclusion that the DDR RAM and increased bus speed do not have any added benefit… however, it would appear that they do, as the newer PMs only have half the L3 of the older ones.
Pretty smart of Apple, actually, since they will be able to justify the higher price of the 1.25GHz model based upon overall improved performance; after all, I would expect >25% (processor-intensive) improvements from a 1.25GHz/2MB L3 over a 1.00GHz/1MB L3.
“False… the extension of the pipeline is key! ”
No it doesn't; that just helps. When you can't clock it higher you increase the length of the pipeline, à la the G4+ compared to the G4. The Athlon is a 9-way superscalar design (it can do 9 instructions at once, unlike the POWER4, which can do 8).
“Actually Intel and AMD have been reducing their instructions, that is why they have been screaming RISC-ccentric since! And the RC5 is as respectable in the industry as SPEC, so your bad excuse doesn’t fly! ”
Wrong; they pretty much keep all the instructions, they just make them slower as they are used less. Any reduction of instructions proves x86 is RISC, doesn't it? REDUCED INSTRUCTION SET?
RC5, CattBeMac, don't bug me around here. RC5 uses the rotate instruction, which isn't needed in 3D graphics or often in vector maths, so it's pretty useless except for this one case. The RC5 people have whole sections on their web page explaining this; please go read it. They explain how Sun gets 0.5 million, the PC gets 5 million and the G4 gets 12 million. It's because the G4 has this rotate instruction in AltiVec that is rarely used. This instruction is so rare Sun doesn't have a rotate instruction at all, let alone in SIMD. This RC5 benchmark demonstrates the COMPLEX INSTRUCTION SET of the G4. OK? Good, now don't give me any short-sighted, non-listening responses. Read what I said again, perhaps, but no pfft answers, OK?
"Is this a conspiracy theory brewing?… Intel and AMD are more CISC than Motorola and/or IBM's offerings will ever be!"
That's just so stupid. Go read some of the articles; I'm not the only one. Everyone knows all processors use RISC technology, and the P4 and Athlon use more of these RISC techniques than the G4, and as I said, the AltiVec instruction set adds MANY COMPLEX instructions, making it a CISC processor. The complexity of each instruction alters the max clock frequency the chip will reach. (Don't agree? Please do some chip design subjects like me.)
“P4 might have a decent SIMD implementation, but it doesn’t stack up to the performance of the Altivec, plain and simple!!!”
Don't believe this, people; it's completely false. In the most hardcore maths, such as Lightwave and MPEG-2 encoding, the P4 canes the Athlon (because SSE2 is so good) and the Athlon canes the G4 using AltiVec. That's just the way it is; I encourage anyone to find one of these benchmarks to prove me wrong.
"Like I said before, it was sarcasm, though most folks will agree that SPEC plays in Intel's favor… as for AMD, they didn't win too much before the K7 Athlon, the K6 was terrible!"
This is so dumb. Intel has had bad SPEC marks for ages because SPEC isn't tailored to the x86 instruction set and makes no use of MMX or SSE etc. This benchmark shows how fast unoptimised code will run on your processor, nothing else.
“Well if you look at the clock speed of that PIII in the Dell Server being tested (1.4 GHz) back at that time the P4 came out around those clockspeeds, the PIII was faster, which is why the P4 got lots of bad press since Intel was saying it was better than the PIII! ”
A 1 gig P4 will cane a 1 gig P3 if the P4 is using SSE2. There's nothing wrong with P4 performance on native code, only on older floating-point code. Criticisms about non-native performance should be separate from native performance. A P4 can MPEG-2 encode a movie way, way faster than a G4, so Mac people have no claim on performance.
“Actually I do not disagree that the fastest x86 is faster than the fastest PPC, but I will argue that the PPC is faster than x86 clock for clock, which is what I only argue about… of course other factors have to be taken into consideration (the whole I/O infrastructure)! ”
Who cares about clock for clock? It's real-world performance that counts, and that's where the G4 falls down. It has an awful memory controller, but what's more, as I pointed out, the G4 actually cannot take memory that fast. So clock for clock I'd say a 1.25 Athlon would probably beat a G4. It would be interesting to see, but the G4 falls over at about 800 MHz and above, and it's already saturated its small busses.
Glenn
>>No it doesn't; that just helps. When you can't clock it higher you increase the length of the pipeline, à la the G4+ compared to the G4. The Athlon is a 9-way superscalar design (it can do 9 instructions at once, unlike the POWER4, which can do 8).<<
Of course now you’re forgetting that even the number of instructions being processed per clock tick is only a part of the sum… how many instructions it takes to complete a particular task is the other factor, and this is where x86 loses its strength!
>>Wrong; they pretty much keep all the instructions, they just make them slower as they are used less. Any reduction of instructions proves x86 is RISC, doesn't it? REDUCED INSTRUCTION SET?<<
Except that Intel/AMD still have to provide some sort of x86 compatibility (a decoder layer), which itself is CISC. No matter how you slice it, as long as the legacy is supported, that makes Intel/AMD more CISC than Moto/IBM!
>>That's just so stupid. Go read some of the articles; I'm not the only one. Everyone knows all processors use RISC technology, and the P4 and Athlon use more of these RISC techniques than the G4, and as I said, the AltiVec instruction set adds MANY COMPLEX instructions, making it a CISC processor. The complexity of each instruction alters the max clock frequency the chip will reach. (Don't agree? Please do some chip design subjects like me.)<<
You’re contradicting yourself Glenn!
>>Don't believe this, people; it's completely false. In the most hardcore maths, such as Lightwave and MPEG-2 encoding, the P4 canes the Athlon (because SSE2 is so good) and the Athlon canes the G4 using AltiVec. That's just the way it is; I encourage anyone to find one of these benchmarks to prove me wrong.<<
I would like to see your benchmarks, since you challenged the notion!
>>A 1 gig P4 will cane a 1 gig P3 if the P4 is using SSE2. There's nothing wrong with P4 performance on native code, only on older floating-point code. Criticisms about non-native performance should be separate from native performance. A P4 can MPEG-2 encode a movie way, way faster than a G4, so Mac people have no claim on performance.<<
You have short term memory and must have forgotten Intel’s embarrassing moment during P4’s debut!
>>Who cares about clock for clock? It's real-world performance that counts, and that's where the G4 falls down. It has an awful memory controller, but what's more, as I pointed out, the G4 actually cannot take memory that fast. So clock for clock I'd say a 1.25 Athlon would probably beat a G4. It would be interesting to see, but the G4 falls over at about 800 MHz and above, and it's already saturated its small busses.<<
I disagree, but each to his (or her) own!
Sorry for the short answers, but I'm at work and you know how that is :-)
CattBeMac: Well all those MCSE types is a good start, they figure as long as Microsoft dominates the market, they have a job in the IT world!
Ah hah! Just because they got a certification for a widely used OS doesn't mean they adore the company behind it, just like you adore Apple! 😀 There are people who just love MS: people whose profits depend on them, and employees of MS.
CattBeMac: True, but by the time normal consumers can afford 64 bit machines (and I mean mass adoption), Apple will already have something in place. I think the Apple adopting AMD/Intel solutions has become moot since the IBM announcement.
1) It's AMD making the processors, not Motorola. It probably won't get mass appeal, but that's because of AMD's marketing. Intel has a brand name, and anything they release normally gets picked up faster than AMD's products.
2) The IBM announcement, to me, seems to say that IBM wants to put the PPC in a position to compete with SPARCs and SGIs on workstations.
3) For sure, Hammer will get better mass appeal than Apple with their new PPC64 processors, which aren't even announced yet!
CattBeMac: And the RC5 is as respectable in the industry as SPEC, so your bad excuse doesn’t fly!
IIRC, RC5 isn’t supported by any processor company. I wish something like http://www.barefeats.com could be used, because what I read about the test seems pretty fair.
http://www.geocities.com/sw_perf/RC5.html
http://www.xlr8yourmac.com/systems/dual_1ghz_performance_test.html
I saw these two before. First, the sw_perf one.
– It used a 1800+ Athlon XP – not the fastest when the 1GHz G4 was released.
– It didn't tell what model of the 2.2GHz P4 was used, or what chipset and RAM (this plays a major role in performance); plus, when the 1GHz G4 was released, again, 2.2GHz wasn't the fastest.
– Both x86 processors aren’t DP models – unfair.
– Absolutely no specs on the hardware used.
The Accelerate your Mac page
– The P4 used is obviously from a different milestone, significantly slower than the highest-end model sold during the 1GHz launch.
– It uses an Athlon processor that is either overclocked or nonexistent. It didn't bother to use the fastest 1.8GHz Athlon XP T-bred available when the 1GHz G4 was out.
– The Xeon used is a P3 Xeon, not the faster P4 Xeon (2.4GHz, which was available when the 1GHz G4 was released).
– No information on the x86 systems and the SPARC systems used.
I ask for results that aren’t rigged.
CattBeMac: Not as cheap as you think!
Certainly not cheap, but for an equivalent to a new PowerMac G4, it is quite cheap. Considering it uses a Quadro4 or ATI Fire instead of GeForces and Radeons.
CattBeMac: Like I said before, it was sarcasm, though most folks will agree that SPEC plays in Intel’s favor… as for AMD, they didn’t win too much before the K7 Athlon, the K6 was terrible!
The K6 won on and off. At that time, the performance difference between AMD and Intel was so little that with every introduction of a new processor, you would have a new speed king.
CattBeMac: Actually Motorola is a member, though not active at this time from what the site shows!
According to http://www.spec.org/consortium/ – Motorola isn't mentioned as a member nor an associate. Neither is Apple. IBM is listed though.
CattBeMac: Well if you look at the clock speed of that PIII in the Dell Server being tested (1.4 GHz) back at that time the P4 came out around those clockspeeds, the PIII was faster
A lot of things have changed since then. Especially on performance.
CattBeMac: Not much cheaper than Apple’s offering if you look at the prices!
I was looking at the prices. Maybe it is cheaper here in Asia, but it is cheaper than the XServe.
CattBeMac: They never said ‘2000GHZ’ is the same as ‘2x1GHz’… where do you see that? They use what they can get their hands on… it seems that PC users don’t like to lend their machines for testing as the guy from BareFeats put it!
Let me quote: “I tried to match the clocks speeds as close as possible. (2000 MHz Pentium 4 versus 1000 x 2 = 2000 MHz G4)”. Besides, I didn’t see anywhere on the article where PC users don’t want to lend their machines… there isn’t a union of PC users banning the act of lending a PC for the use of benchmarks with a Mac, you know.
CattBeMac: but I will argue that the PPC is faster than x86 clock for clock, which is what I only argue about
If you say it that way, yup, I agree. Unfortunately, Intel has clock speeds about 2x faster than Apple's.
CattBeMac: Like I said, they test what they can get their hands on!
They could at least swap the P4's graphics card with the Athlon's when they are benchmarking the Athlon.
Anyway, read Glenn’s post about RC5. I admit I don’t know much about instructions used in RC5.
…except that Intel/AMD still have to provide some sort of x86 compatibility (a decoder layer), and x86 itself is CISC.
Windows XP makes heavier use of the newer RISC-style instructions, which makes it much faster. The legacy instructions are kept for old software, but they aren’t pushed in the developer’s face. Apple could in fact avoid all those legacy instructions in their OS if they moved to x86.
The complexity of each instruction alters the maximum clock frequency the chip can reach. (Don’t agree? Please take some chip design subjects like I did.)
If you’re doing a chip design course I’d ask for my money back!
The P4 and P3 run the same instruction set, yet one runs much faster than the other – according to your point this shouldn’t be possible. The longer pipeline and reduced complexity of each stage allow a boost in clock frequency. Because of this the P4 is vastly inefficient compared to the P3 (by about 30%). Prior to the P4, every Intel processor generation became more efficient.
The Athlon is a 9-way superscalar design (it can do 9 instructions at once, unlike the POWER4, which can do 8).
Number of instruction units is only one factor in the speed of a processor. The Athlon may have 9 instruction units but can in effect only issue 3 instructions per cycle (and only about 2.5 on average). The G4 can issue 2, but you’ll find there are differences between the instruction units, so they are not directly comparable.
The Athlon may have 9 instruction units but can’t use them all, all of the time – and even if it could, it would require a completely bizarre mix of instructions to do it.
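To see why the sustained issue rate matters more than the raw unit count, here’s a trivial back-of-the-envelope sketch in Python. The IPC figures come from the averages quoted above (Athlon ~2.5, G4 ~2); the clock speeds and instruction count are my own illustrative assumptions, not benchmark data.

```python
# Illustrative only: time to retire a fixed amount of work at a
# sustained issue rate (instructions per cycle, IPC).
def seconds_to_retire(instructions, ipc, clock_hz):
    # time = work / (instructions per cycle * cycles per second)
    return instructions / (ipc * clock_hz)

work = 1_000_000_000  # one billion instructions (hypothetical workload)

athlon_s = seconds_to_retire(work, ipc=2.5, clock_hz=1_400_000_000)
g4_s = seconds_to_retire(work, ipc=2.0, clock_hz=1_000_000_000)

print(round(athlon_s, 3), round(g4_s, 3))  # prints: 0.286 0.5
```

The 9 execution units never appear in the formula – only the sustained issue rate does, which is the point.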
http://www.arstechnica.com
has some good articles on the G4, comparing it to other CPUs.
—
SPEC Marks.
There are some (pretty bad) SPEC marks for the G4 published by Motorola a couple of years back; you’ll have to go digging into the CPU95 results.
Note CPU95 and CPU2000 results are not directly comparable so converting one into the other is meaningless.
—
Some predictions:
1) Apple will upgrade the iMac to 1GHz soon (for Christmas Season).
2) The new PowerMacs will not last 9 months like the last ones did; they will get an upgrade when Motorola get their arse into gear and get a decent-speed bus working on the next G4 revision – they’ve already got the same technology working on other PPCs, so it shouldn’t be difficult. I expect completely different results once the new bus is added, especially for AltiVec.
3) The G3 iBook will be dropped like a stone as soon as IBM start shipping their new PPC and Apple start using it.
To those people who think the G4 won’t use the extra bandwidth:
Even at 1 GHz, AltiVec could eat 32 GBytes a second, and that’s not counting writing results back to RAM!
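To show where a figure like 32 GB/s comes from, here’s the arithmetic as a tiny Python sketch. The one-vector-op-per-clock rate and the two-source-operands-per-op count are my assumptions for the sake of illustration; only the 128-bit AltiVec register width is a hard fact.

```python
# Back-of-the-envelope AltiVec read-bandwidth demand (assumptions,
# not measurements): one 128-bit vector op per clock, two source
# vectors read per op.
clock_hz = 1_000_000_000   # 1 GHz
vector_bytes = 16          # one 128-bit AltiVec register
source_operands = 2        # assumed: two input vectors per instruction

bytes_per_second = clock_hz * vector_bytes * source_operands
print(bytes_per_second)  # prints: 32000000000  (i.e. 32 GB/s)
```

Any memory bus slower than that leaves the vector unit waiting, which is why the bus speed bump matters so much for AltiVec code.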
I agree ..
Would you buy a PM now if such a big technology step is behind the corner?
Personally I would wait – I said I don’t think speed is everything, but if you know there’s a big boost coming soon, why not wait?
“Of course now you’re forgetting that even the number of instructions being processed per clock tick is only a part of the sum… how many instructions it takes to complete a particular task is the other factor, and this is where x86 loses its strength! ”
I always argued CISC was better because it gets more done in one clock cycle. That’s why x86 boxes were never as slow compared to Sun’s as people made out – because Sun’s are RISC and don’t do much in each instruction. It’s only with AltiVec that the G4 has more instructions than SSE or 3DNow!; SSE2 is more complete.
“and the p4 and Athlon use more of these RISC techniques than the g4.. and as i said the altivec instruction set adds MANY COMPLEX instructions to it.. making it a CISC processor.”
Not quite – the P4 and Athlon use RISC techniques, like out-of-order execution, but you still have to call them CISC because they are the original CISC processors. The G4 might be based on RISC, but now, with AltiVec, I’m suggesting it has so many instructions that it’s very hard to clock higher. (One of the reasons custom and application-specific chips are often clocked low is that in each clock they must complete more.) The G4 and P4 approaches are both valid, but by making AltiVec so complex Apple has limited its clock speed, and hence the speed of the other parts of the G4 (the CPU core and FPU have to remain at low clock speeds because AltiVec can’t be clocked so high).
I would like to see your benchmarks, since you challenged the notion!
Yeah, I will… just have to find benchmarks for an 800 MHz Mac and a 1 GHz Mac versus an 800 MHz P3 and a 1 GHz P3 (guess I could use a P4 if you want).
“You have short term memory and must have forgotten Intel’s embarrassing moment during P4’s debut! ”
Mac people always complain that AltiVec is really fast and that any benchmark that doesn’t use it is invalid… of course, the same thing should apply when benchmarking the P4, right?
“The P4 and P3 run the same instruction set yet one runs much faster than the other – according to your point this shouldn’t be possible.”
Did I say it wasn’t possible? I just said it’s harder… the P4 is a hell of a lot bigger to make up for all those calculations it must do each clock cycle. Yes, it does have a longer pipeline, and that does slow it down.
“The Athlon may have 9 instruction units but can only issue in effect 3 instructions per cycle (and only 2.5 on average).”
I see you’ve read the same article… it SAYS the Athlon averages 2.5–3 and the G4 gets 2. You imply the G4 is doing more with its 2 instructions, which isn’t correct. The Athlon can execute 6 instructions at once, plus another 3 in its address generation units… perhaps if we all used the SWARC compiler it might be able to use the address generation units for extra work.
Note: even a K6 is a supercomputer.
“AMD K6-2 350MHz PCs. A lot of folk were very surprised to learn that these meek little boxes can achieve 4 32-bit FLOPs/clock (1.4 GFLOPS peak per processor)”
– from the SWARC page:
http://aggregate.org/EXHIBITS/sc98.html
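The 1.4 GFLOPS number in that quote is just clock rate times FLOPs per clock; here’s the arithmetic as a quick Python check (the 4 FLOPs/clock figure is taken from the quote above, nothing else is assumed):

```python
# Quick check of the K6-2 peak figure quoted above:
# 4 single-precision FLOPs per clock at 350 MHz.
clock_hz = 350_000_000  # 350 MHz
flops_per_clock = 4     # from the SWARC page quote

peak_gflops = clock_hz * flops_per_clock / 1e9
print(peak_gflops)  # prints: 1.4
```

Note this is a theoretical peak – sustained throughput on real code will be well below it.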