Developer Jean-Baptiste Queru wrote a short editorial regarding the SPEC benchmarks and how modern CPUs compete these days on those specific benchmarks. JBQ talks about the Pentium 4, Athlon XPs, Itanium and especially about the G4. Update: JBQ just wrote a redux article.
Awww… how sweet…
Ever run rc5 on a G4 processor? Ever run it on a P4? Not only will the G4 beat the P4, it’ll crunch more than a P4 at twice the clock speed. You do the math. One benchmark doesn’t make a particular CPU better than another.
>Ever run rc5 on a g4 processor?
Oh, yeah, and this is exactly the case where you benchmark a XX-line, tight loop, possibly optimized by hand. Like the Photoshop tests. Also tight loops, and not the whole spectrum of what a CPU does. You see, when you’re actually using a G4 and OSX, the whole spectrum of the CPU is at work.
SPEC is a whole set of tests, not a tight loop. And you do not have the right to modify its code. SPEC is *the* benchmark for CPUs. And Apple knows that. It is the ONLY company that does not want to publish SPEC results. Why is that?
These are rumors!!!!
IMHO Apple would switch to POWER4 far more easily than to x86.
They might jump to AMD (remember, AMD’s engineers were mostly the fathers of Digital’s Alpha); but without the x86 front end, using a full-powered AMD core would be very cool.
—
http://islande.hirlimann.net
It all reads fine and seems like he knows what he is talking about. But does he?
…
I benched my Ultra on SPEC a few years ago, and here is what I discovered.
My Ultra is three times faster than my Ultra. How is that possible?
One SPEC run was compiled with Sun Workshop, the other with gcc.
And guess what? Workshop produces better code. No surprise.
To truly bench a CPU one needs a series of tests written in assembly, hand-optimized for that CPU; then the results are meaningful. Otherwise it only shows as much as Photoshop or whatever stupid things amateurs can come up with.
Not impressive!!!
While I happen to agree with JBQ’s assessment of the state of affairs in the Mac world, I’m not sure it really matters. Does the computer do what you want it to in an amount of time that is acceptable? Yes? Fine, use it. These benchmark number contests are good for geeks and marketing, but how much do they matter in real life?
I watch movies, play games, write code, listen to music, etc… all on a Mac. Am I upset by the poor performance of the G4 on that benchmark? Not really. I can still do everything that I want to do with cycles left over to animate the gum drop buttons of OS X.
Just use the best tool for the job and leave the number games at the door.
grovel
The whole point of SPEC is to be written in C, which is portable, which results in the code being “equal” for all platforms. If you hand-optimize a SPEC-like benchmark with assembly, you are losing the point of the benchmark. Sure, you might have better CPU-oriented results, but the real-life experience will be different from the actual results of the benchmark, which will eventually make the benchmark itself obsolete.
If a compiler sucks for a platform, applications written with that compiler will suck too. So in the end, it does not matter to the user if a CPU is really so much faster than another one. If the compiler that everyone uses for that CPU is slow, applications running on that faster CPU will still be slow. Sad but true.
I switched to the Mac 4 months ago. And you guessed it, I don’t want to go back. Yes, I knew before that PowerPCs are slower, because I have a BeBox 133 which is less responsive than BeOS on my Pentium 120 (so 2×133 vs. 120 MHz!!). So why don’t I go back to the fast PCs?
Because since I have a Mac (TiBook 550) I just work with my machine; I don’t have to reinstall the system regularly (yeah, I know it’s my fault because I like to install software, but I always thought that an OS is designed to install them 😉 ).
And I don’t have to struggle with drivers and other problems (for example, installing as a normal user not being permitted, and after installing as admin, the normal user can’t use the program!!). I can just open my laptop and work (I only shut it down after installing some stuff that needs a restart).
So for sure it’s not the fastest PC I could get for my money, but I’m a lot more productive than with my PIII-700 (and it’s not my hobby to install the system, I want to work with my PC!). That’s the reason I don’t go back. OK, I still go back to play Settlers 3, but that’s the only reason to boot my Intel box 😉
Of course if your hobby is to run benchmarks or to do some other stuff which needs a fast processor then maybe it’s not the best solution for you. But I’m just fine with my Mac.
Thoems
P.S. If I feel the desire to REALLY use a three-button mouse, then I switch on my RiscPC 😉
Today’s gcc does it: just use the -faltivec switch in order to have AltiVec code generated.
GCC 3, which comes as the default compiler in Jaguar, does it; see:
http://developer.apple.com/techpubs/macosx/ReleaseNotes/GCC3.html
and that’s why I’m an AMD zealot
well, I’ve got only Intel CPUs at home… oh wait, there is an athlon and a K6 somewhere too…. diversity, great!
And who cares about benchmarks anyway? I just watch my wallet and decide.
Businesses consider benchmarks but also many other criteria, reliability being one of the most important, where AMD doesn’t shine very well…
Ah, if only I had a few spare euros… I’ll just keep dreamin’ for now!
This article was not really needed. I mean, come on. If you want games, get a PC. If you want the cool stuff and a better computer experience, then get a Mac. Plain and simple. I don’t see why people are still arguing about it.
Wow, I think that is funny.
Buy P4 since they are best; don’t buy AMD and G4 since they lie?
OK! Tell me about the power consumption difference, please.
And keep in mind that since AMD’s and Intel’s are both x86, the fair way is to just use them at the same clock speed, or not?
article vote -1,troll
Glad to hear you’re having a good time with your Mac. My experience with my iMac flat panel 700 MHz was that it performed worse than my Dell PIII 700 MHz at work. And that Dell box only had 128 MB of RAM versus the 256 in my iMac. How does it perform worse?
1. I use Mozilla 1.0 as my main web browser. It launches faster on my Dell box which means I can get over here to OSNews quicker. On my iMac, the icon bounces a few times, then I stare at the splash screen a while longer, and finally a pause after the menubar is loaded before a blank browser window pops up.
2. Playing MP3s on my iMac sucks up more CPU cycles than I would imagine it should.
3. 3D games are not as smooth on my iMac. The GeForce2 MX comes standard in the iMac, but I have to turn down graphics detail and play at 800×600 to get decent framerates. High detail, 32-bit color, and 1024×768 was not a problem on my PC with a GeForce1 card.
4. I use Photoshop 5.5 on my Dell at work, and 7.0 on my iMac at home. I feel that I can get more work done on the Dell box running an older, unoptimized Photoshop.
So there are my “real world” comparisons. I don’t regard my opinion as the final word as other users may have different experiences, such as yourself. But now that I’m back to using PCs, I can’t think of anything productive I did on the iMac or anything that could have kept me there longer. For me, it was an expensive experiment – call it a high budget vacation – but at least I tried it.
Maybe it’s just me, but I’ve noticed that OSNews has been pretty thick in the “bash Apple” department lately. Unfortunately, I view computers and OSes as more than just speed and memory. I’ve seen nary a positive comment from Eugenia when speaking of Apple and Macs and the G4. Which is unfortunate, because I *USED* to think that OSNews was rather objective; I see otherwise now. Anyone in their right mind would admit that the G4s are turtling behind the newest Wintel chips, but this does not even come remotely close to how much REAL PROGRESS someone makes on one platform versus another. People buy Apples and G4s NOT FOR THE BLITZ SPEED but for the “FEEL” of a Mac. How many users of other OSes can say that they LOOK FORWARD to getting home to use their OS? Mostly Mac users. The guy’s article was clearly wack. A sucky compiler? GCC 3.0 is a sucky compiler? Don’t tell Richard that!
Yeah… it all comes down to what work you do on the machine. For example, if someone uses a Revo, I think it is as fast in *normal* use with a ~36 MHz ARM as these Pocket PCs with a much better processor. That’s EPOC vs. WinCE.
Similarly with Acorn machines and Amigas.
I think one major reason that people buy Macs today is not speed, but the OS. I loved the speed of BeOS, for example, but if I had bought it, it would have been because I liked the design, not the speed.
I would have thought BeOS folks would understand this? Any Mac OS X folks agree with this?
Spider, I assure you, my articles are as objective as they get. I have NOTHING against the Mac platform; in fact, I have a Mac here.
But when something is TRUE, I write it. No matter if it is going to hurt Apple or anyone else.
Same goes for Linux, BeOS or Microsoft. I write whatever I perceive as the truth. Which might not be all candy for them.
“1. I use Mozilla 1.0 as my main web browser. It launches faster on my Dell box which means I can get over here to OSNews quicker. On my iMac, the icon bounces a few times, then I stare at the splash screen a while longer, and finally a pause after the menubar is loaded before a blank browser window pops up.”
Yes Mozilla feels slow under X.
That’s why I use Omniweb when I just surf …
Yes, I and, I’m sure, most other Mac users buy Macs for the operating system, not for the hardware. If you’re buying for hardware, of course look to Intel: better bang for your buck. OS X doesn’t run on Intel, though.
>GCC 3.0 is a sucky compiler?
GCC 3.x, no. But GCC 3.x is out in a month for OSX, not today (it has been in beta since April). The SPEC benchmarks were done with gcc 2.95.x, which is the current default.
“I would have thought BeOS folks would understand this? Any Mac OS X folks agree with this?”
I do. But there are two kinds of BeOS followers: the PPC ones and the x86 ones. When BeOS was PPC-only, this piece was written:
http://www.labeille.net/items/FAQBeOSIntel.html .
They were damn right!!!!
—
http://islande.hirlimann.net
While reading the first part of the article, my sarcasm meter exploded. The second part of the article explained why. He is saying that benchmarks are good for proving your own point, because you can cheat ALL of them in your favor. If I want to prove that my 486 is faster than my P4, all I have to do is mess with the compiler and the data set. Benchmarks are there for quick and cheap comparison. They are only accurate if done very carefully.
I’ve been reading long enough to know that you run a myriad of OSes and that your Cube was a gift. I don’t argue with what your assessment of the truth is; it is your site to post what you deem fit. Nor would I say that it is even your responsibility to curb the bias of articles written by other authors. Do you truly believe that JBQ’s article wasn’t a bit Apple-bashing?
Consider this: when comparing CPUs, he insisted on calling the G4 an Apple product despite its being made by Motorola. Sure, Apple talks a lot of sh|t, but his point of view was CLEARLY PC-based.
…like not having drive letters, using slashes instead of backslashes as a file separator… He makes a big list of things that the Mac programmer would have to do to make for a cross-compatible compile. Sure, Macs don’t have drive letters. Sure, they have a different file separator. He made this “cross-compatible” compile sound more like a port than a well-thought-out project. Simply put, his viewpoint set up the Wintel system as the standard and the Mac as sub-standard because they have different conventions from a development perspective.
It really irks me. I’m sad now. =(
Eugenia: I see your point about it being ‘equal’ on all platforms, but the truth of the matter is, it isn’t. If you want to measure CPU speed, you use every trick that that CPU has to make the resulting code better. That measures CPU speed; what you describe measures CPU + OS + compiler speed, which really isn’t the speed of the CPU now, is it? If a part of the OS that SPEC relies on isn’t properly optimized, the results are false.
I stopped looking at benchmarks a while ago, because I see no real point; the only reason I bought a new box was to watch DivX movies (a P120 ain’t great at that).
>his point of view was CLEARLY PC based.
No, his point of view was developer-based. JBQ was first a Mac user and then a BeOS (BeBox) user and *then* a Windows user. But he is always the “developer”.
>Do you truly believe that JBQ’s article wasn’t a bit Apple bashing?
It was a bashing. A bashing of their marketing lies. Not a bashing of the Apple products.
> Do you think that someone with a mind would go spend
> some time hand-optimizing his/her code in assembly for a
> CPU that only has a few percent of market share?
Yes, I believe Amigans will do this when the AmigaOne-XE board with a G4 processor ships. PPC assembler code isn’t as horrible to write as x86 assembler code. But also, Paul Nolan, for example, rewrote Photogenics in x86 assembler, and you can surely see an enormous speed increase as a result in comparison with other similar graphics software.
http://www.idruna.com/
Hand-optimizing software in assembler code isn’t a bad thing per se, IMO, at least not if you have the talent to do it. Note that there is a large pool of demoscene coders in Europe who still master the knowledge to get the most out of the hardware.
However, based on a lot of information from some good developers with regard to processor performance, I believe that PPC processors perform better per clock cycle compared to x86 CPUs. However, the top x86 processors currently easily outperform the top PPC solutions available.
What matters a lot more to me, however, is the overall perceived performance, and IMO MacOS X and WinXP both perform extremely badly in comparison to what you would expect from a GHz super computer. If mainstream OSes were more optimised, I think the speed problems of many people would be solved. Although I do believe that nobody needs a GHz computer to be able to browse the internet, paint a picture or write a letter, regardless of the different impression you may get while using your current OS.
Instead of relying more and more on faster hardware, software developers should better optimize the software, and this includes the operating system as a major factor as well. Or else our computers will still feel slow when you buy your next brand new 10 GHz supersystem.
Take a look at the following movie, for example, of a 64K intro… And then you tell me which kind of computer you think is necessary to render such graphics in realtime with only 64K of total code size…
http://www.aminet.net/pix/mpg/Push-DD.mpg
This intro demonstrates the capabilities of the Amiga AGA chipset from 1992 using a mere 50 MHz 060 68K CPU. Yes, you can also impress with only limited resources, if you have the talent. IMO we need more competent people developing OSes to impress us and get the most out of our hardware!
I’m a Mac person and I think Apple has problems unless they do something – use faster RAM, a faster bus and a faster processor. However, I believe that, not because I’m comparing it to P4’s or PC’s, but because OS X is just too slow. I love OS X and I love my Macs. I have other computers too and I enjoy the OS’s on all of them. But again, Apple’s problems are not really in the area of comparisons with PC’s (although JBQ’s editorial seems pretty accurate to me), it is in the area of its own realm – OS X needs more speed, that’s all there is to it. 10.2 is supposed to be much speedier than 10.1.5, which is good. But, I can’t wait to see what the new Power Macs specs are. If they use the same RAM, same bus and just a small speed bump in the G4, they’re crazy <g>.
I don’t doubt that Apple’s G4s offer less computational power than faster Pentiums, though I don’t trust all those benchmarks much, due to the number of available variables.
Nonetheless, the speed game is over. It’s not over because Intel won it, either. It is over because 90% of all users couldn’t care less. It just does not matter that much to the average user who uses their computer for word processing, web browsing and the occasional tax return. Nor does it matter that much to most white-collar corporate desktop users.
Jobs is focusing on the right thing now: ease of use and fewer headaches for the user. Those features are more important to most people. I’d also like to point out that the audio/video professionals that Jobs continues to target frequently use DSP-powered video/audio cards that do the heavy lifting. DSPs are best for digital signal processing, after all. Studio owners will typically just use Pro Tools, and DSP cards such as those from Universal Audio and TC Electronic’s PowerCore are catching on quite well in project-studio land. Video editors will usually rely on Avid as well.
Gee whiz! Why didn’t someone tell me earlier that PPCs sucked! I’m gonna go out and buy an AMD Dual MegaNumberCruncher PowerSuck 5000+ for a scant gumball and a used toothpaste tube!
That way, instead of using only 20 watts or something to use 1/100th of my sexxy G4, I’ll be using some obscene amount of power and an ungainly OS to use only 1/10000th of my computer’s power! BRILLIANT! My web browsing and email will then enable ME TO RULE THE WORLD! Text will be more meaningful and inspire me to do greater things! Pictures! Oh yes! Pictures! They will then each dwarf the Mona Lisa in artistic might!
Why didn’t someone tell me earlier! Even my code will be 2000 times better, and in 3D! With 200 fps and pixel shaders to render the dot on the i to look more “realistic”. Ha! Stupid Apples! All this time I thought that stupid dot couldn’t be any better! BUT NO! I WAS WRONG!
Seriously, use the best tool for the job. My CPU monitor shows me, on average, even when compiling code, using about 10% of my CPU. And I like the feel (oooo… a scary unscientific word there. Maybe there is a SPEC mark for ‘good feeling’?) of OS X more than XP or BeOS or Linux or whatever. So why should I switch? I shouldn’t!
JBQ’s article wasn’t a bit Apple bashing?
Of course it was. And not just a bit. I don’t like Apple’s policy of not publishing SPEC results, especially since they claim to be “super-computers” and that SPECCPU2000 is the benchmark used to compare those. I don’t like the fact that their claims about performance don’t seem to translate into real-world speed.
I am neutral about their products. They are too expensive for me, and they are not Windows. I have to run Windows at work all day. I don’t want to incur the cost of switching OSes every morning and every evening. Eugenia witnessed that I was using BeOS almost exclusively while I was working at Be, and that I switched to Windows as soon as I wasn’t working at Be any longer, because Windows didn’t have any switching cost any longer. Apple’s products don’t do what I need, I don’t consider them when I purchase computer hardware, end-of-story.
Simply put, his viewpoint put the Wintel system as the standard
With 90+% of the desktop computer market (I don’t care exactly how much it is), if that’s not a de-facto standard, I don’t know what is.
He made this “cross-compatible” compile sound more like a port than a well thought out project.
I have been personally in the situation of the MacOS programmer, when I ported a major game, 800,000 lines of code, from Windows to BeOS. The code was supposed to be portable, but I still had to deal with the fact that it used non-standard data types, that it relied heavily on DirectSound, that it used the Windows keyboard event model, that it used drive letters and backslashes and relied on the filesystem being case-insensitive, that it used Microsoft’s C++ extensions. It took me 8 months of intense work to make it work on BeOS, and I finally got it running more than 6 months after it was released for Windows. When I wrote a report to the lead engineer, he was amazed at all the things that I had found which he would never have thought of as causing portability issues. Trust me, I didn’t have time to do any optimizations.
JBQ
Yes, I knew before that PowerPCs are slower, because I have a BeBox 133 which is less responsive than BeOS on my Pentium 120 (so 2×133 vs. 120 MHz!!)
Actually, the reason your BeBox/133 felt slower is that there was no L2 cache in the system. The controller for the PPC 603 that provided memory access for both chips had a dual purpose: you could use it with one 603 as the L2 controller, or you could use it with two 603s as the main memory controller. Unfortunately, they went with the latter. I had a BeBox/133 too and was underwhelmed by the performance. But man… what a COOL piece of hardware in every other respect!
Which brings me to the next point: Benchmarks are dependent on more than just the CPU and the compiler. Issues such as:
– How much cache does the CPU have in each level?
– What is the bandwidth of the memory subsystem?
– What is the latency of the memory subsystem?
… all have major impacts on performance. The BeBox/133 is a perfect example. There really is no definitive way to measure a system’s speed. The best that can be hoped for is a good indicator, not a wager-settler. SPEC2K is a good indicator, but bear in mind that the motherboard in that Apple is at least a couple of years behind the times in terms of technology. G4s likely aren’t nearly as fast as Apple says, but Macintoshes are crippled in other ways than just the CPU.
The whole point of SPEC is to be written in C, which is portable, which results in the code being “equal” for all platforms. If you hand-optimize a SPEC-like benchmark with assembly, you are losing the point of the benchmark. Sure, you might have better CPU-oriented results, but the real-life experience will be different from the actual results of the benchmark, which will eventually make the benchmark itself obsolete.
If a compiler sucks for a platform, applications written with that compiler will suck too. So in the end, it does not matter to the user if a CPU is really so much faster than another one. If the compiler that everyone uses for that CPU is slow, applications running on that faster CPU will still be slow. Sad but true.
There are two flaws in his argument:
1. SPEC uses gcc, which is not the best or most used compiler on OS X. Photoshop, Office, IE and just about every other major application are compiled with CodeWarrior. CodeWarrior happens to be owned by Motorola. I’ll let you guess which compiler is the best one for the PPC.
2. Code that isn’t optimized by hand doesn’t tell us anything about the performance of ‘tuned’ programs. Since a decent number of (number-crunching) applications are optimized for Altivec, you may very well be using applications that are far faster than you would guess based on SPEC.
There is another reason why SPEC sucks as ammunition for PC zealots (of course, I’m not talking about anyone in particular here ):
3. SPEC is a scientific benchmark. The results tell us how fast Fortran code runs (very important to mathematicians), the speed of Lisp execution (artificial intelligence researchers use this a lot), how fast a computer can play Go, weather-prediction speeds, etc. This is quite irrelevant for most of us. Using a Mac for a while (preferably with the apps you would be using) is a much, much better benchmark.
One of the best tech sites around, ArsTechnica, has this to say about SPEC:
“In light of the above observations, you can see that while the SPEC benchmarks are thorough and informative, aside from the occasional usenet platform flame-fest, they’re of limited use to the PC hardware enthusiast community.”
And about benchmarking:
“But if you’re looking to benchmark a complete system (which is the only sane thing to try to benchmark, anyway), then you can’t do better than real-world apps.”
“[…] But the reality is that you can’t find a better benchmark than a real app. If you’re banking on any other type of benchmark to tell you how to spend your $$$$, then caveat emptor.”
http://arstechnica.com/cpu/2q99/benchmarking-1.html
I think we disagree on certain points but share the same sentiments on others.
Apple’s products don’t do what I need, I don’t consider them when I purchase computer hardware, end-of-story
I think we have the same sentiment but towards different platforms. I feel the same way except towards Windows products.
While I can understand your inductive perspective on being in the MacOS programmer position, your initial article was not referring to a port, but to a hypothetical cross-compilable project. Anyone calling their project “portable” while using MSVC++ extensions needs to have their head examined. My point was that you cannot code something from a PC perspective (drive letters, file separators, DirectSound), call it portable, then cry that it didn’t neatly compile on another platform (Apple, Amiga, Acorn?, BeOS…)
-spider
> Photoshop, Office, IE and just about every other major application are compiled with Codewarrior.
And how do you know this?
And second, please show me benchmarks of GCC 2.95.x against Codewarrior. I do not believe that the Codewarrior compiler would be that much better. In fact, I heard the opposite.
The argument that SPEC uses only gcc is bogus. It uses whatever compiler you want: Forte on Solaris, ICC (or MSVC) on Windows…
If someone wants to run a full SPEC test compiled with Codewarrior, I’m eagerly waiting for the results.
I’d like Apple to publish their results. They could spend the time tuning which compiler they want to use, which flags they use, etc. SPEC is as much a compiler test as a CPU test (results with Intel’s ICC 4 and ICC 5 on the Pentium III differed by about 10%).
At least we now have proof that JBQ is still alive ;P
Most Mac apps aren’t compiled with gcc. That alone shows what a ludicrous article this is.
gcc sucks, especially for non-x86 CPUs. 3.1 is supposed to be better, but I won’t wager that it’ll beat any commercial compilers.
> Most Mac apps aren’t compiled with gcc.
Excuse me sir, but OSX itself is compiled with GCC. And that is the biggest app on PPC.
And yes, most mac apps for OSX *are* compiled with gcc. Not all Joe-developers have $500 to give to Metrowerks. They use the bundled gcc.
I have benched my BeOS PPC system ( http://www.beosppc.org ) using berometer, and noticed that the SPEC marks and other benches are MUCH worse than an Intel PII 233. However, as mentioned here before, my RC5 score is almost 4 times what a PII 233 does.
Real world is what matters, folks. Relax.
Benchmarks can tell you one thing: How fast the tested system will run that benchmark. I for myself would never make my decision on which CPU to buy based on SPEC benchmarks.
If you want to use Photoshop, read Photoshop benchmarks. If you want to do 3D rendering, use benchmarks of your favourite renderer. If you want to run SPEC benchmarks all day long… well, you figure it out.
BTW: Eugenia, JBQ: I’d love to have dinner with you some time again. Any chance you’re coming to Europe? I’m afraid I won’t be able to be in CA before next year.
> BTW: Eugenia, JBQ: I’d love to have dinner with you some time again. Any chance you’re coming to Europe?
Sure!
We are planning to be in France for Christmas, but it is not for sure yet…
I find this article very valuable. Those are facts, not dreams. I had always believed AMD was better than Intel because a lot of magazines told me that, but now, well, I’m sad. I’m still thinking about buying AMD and not Intel, but I’m sad because I thought I was going to buy a better machine for a better price; now I must accept that I would only get a better price for somewhat less performance. Life is hard.
But, after reading a lot of SPECs, I have a question I couldn’t answer.
All Intel (Dell included) specs are with RDRAM, and all AMD specs are with SDRAM; what would the difference be if the Intel specs were made using SDRAM?
If someone knows the answer, please enlighten me.
I think we disagree on certain points but share the same sentiments on others.
Yes. I’m glad that we can agree to disagree.
I think we have the same sentiment but towards different platforms.
I can totally understand that.
In 1995, I was an Atari-head. I needed a laptop for school, I was given the choice between Win3.1 and MacOS 7.5. I picked MacOS after testing both in a shop because it felt more like my Atari.
In 1996, I was studying a lot more CS, mostly working on Solaris and OSF/1. I bought a PC so that I could decently use linux.
In 1998, I started working at Be, and I started using BeOS exclusively.
In 2001, I was laid off from Be, went to another company, and started using Windows exclusively.
Each time I switched, the switch was pretty painless, at most a few days of feeling odd. But I can’t manage such a switch in a few minutes, and for some reason my brain can’t record “muscle memory”/“intuition” for two environments at the same time. No matter what I do, I know that I’ll only be comfortable if I can use the same environment everywhere, at home and at work.
While I can understand your inductive perspective on being in the MacOS programmer position, your initial article was not referring to a port, but to a hypothetical cross-compilable project.
The way this big project was described to me (and Be) was that the Windows-specific code was constrained to three files (graphics, sound, the rest), and that the rest was platform-independent. I didn’t even try to get the Windows-specific code to compile; it was pointless. The rest of the project (like, 780,000 lines out of 800,000) that was supposed to be portable and platform-independent, well, wasn’t. It took me maybe 2 weeks to rewrite basic BeOS versions of the platform-specific stuff. The other 7 1/2 months were spent dealing with the Windows/MSVC dependencies in the platform-independent part.
You’ll always find someone doing something stupid. You’ll see programmers making assumptions about whether a pointer fits into an int. You’ll see programmers making assumptions about the endianness (ever scanf’ed a %hu format into a 32-bit variable? works on x86, not on PPC). You’ll see programmers assuming that ints are 32-bit everywhere (constant overflows, enum overflows, bitfield overflows). You’ll see programmers that assume that all compilers use the same syntax for packed structures, and that all CPUs support packed structures. You’ll see programmers that assume that all characters fit in a single byte (UTF-8 unicode, anyone?)…
Whether you work in the same team (like my pseudo-example was in my editorial) or you work standalone from a monthly code drop (like I did), the problems you have to solve are the same.
There’s a famous quote that I like (although I can’t remember who it is from): “there’s no portable code, there’s only code that’s been ported”.
However, as mentioned here before, my RC5 score is almost 4 times what a PII 233 does.
Real world is what matters folks. Relax
If running RC5 all day long is your real world, I prefer staying in mine.
I don’t think all this Mac bashing is a bad thing. Criticism – for or against – is always useful. I just wish Apple would listen to what people are saying. Sure, they take suggestions and add requested features. But that’s peanuts compared to larger issues that have been overlooked for many generations of Macs.
What would be the difference if Intel specs would be made using SDRAMs?
Good question.
First, a number of SPEC components are memory-heavy – those that are, are tuned to use 200 MB: http://www.spec.org/osg/cpu2000/analysis/memory/
so memory can definitely have an impact. None of the test machines has a cache big enough to hold 200 MB (the biggest POWER4 from IBM has 128 MB of L3 cache).
Second, well, it all depends on what kind of SDRAM.
With PC133, I have no doubt that the P4 would post significantly lower scores.
With PC3200 or even PC2700 DDR, it’s quite possible that it would actually score higher. Or not.
If SDRAM could help P4s score higher, I’d think that Intel would be publishing their results with SDRAM. Same thing for Dell.
I’ll therefore conclude that on the current Intel (and Dell) motherboards, RDRAM makes the P4 score higher than SDRAM does.
Sure!
We are planning to be in France for Christmas, but it is not for sure yet…
Great! We’ll stay in touch – I’m visiting OSNews almost daily anyway.
Getting back on topic: if I need to buy a new x86 anytime soon (my dual 533 Celeron is currently enough for me, although it does take a while to compile 200MB worth of source code…), chances are it’s gonna be a super-slow Via C3, for the pure reason of being a CPU that will run fine with passive cooling and therefore result in a silent computer.
>Great! We’ll stay in touch
Sure. Please email me though, because I am not sure I got your email.
My 1.5 GHz Athlon box does not run my desktop (KDE 3.x) as fast as I want it to. I don’t ask much from it, just that it stay out of my way as I compile code, surf the internet, etc. With the advent of bloated desktop environments (Windows XP, OS X, KDE 3.x, GNOME) and consumer multimedia (movies, MP3s, etc) speed has become *more* important to the average home user, not less.
I find it easier on my wallet to use a slimmer window manager instead of buying a new CPU every 6 months.
I agree – and that is especially why Apple has to get off the dime, because their Digital Hub thing is their Big Thing.
> Photoshop, Office, IE and just about every other major application are compiled with Codewarrior.
And how do you know this?
These apps are carbonized. Originally they used classic API’s. On OS 9 nobody would think of using a different compiler than Codewarrior for such a project. And when you move to OS X, you don’t switch environments if you can avoid it. It would incur high costs: training, acquiring the same experience, changing processes, writing new glue code, fixing your code that barfs on the subtle differences between the compilers, etc. This is the intelligent explanation. For basic facts, you’ll find Adobe’s and Microsoft’s testimonials here:
http://www.metrowerks.com/MW/Develop/Desktop/Mac_Testimonials.htm
And second, please show me benchmarks of GCC 2.95.x against Codewarrior. I do not believe that the Codewarrior compiler would be that much better. In fact, I heard the opposite.
I can’t find any. I have seen one previous post on OSNews that claims the winner depends on your coding style (since they excel at different things). Regardless, the SPEC results we are talking about aren’t based on the most used compiler. My point stands.
The argument that SPEC uses only gcc is bogus. It uses whatever compiler you want. It uses forte on solaris, icc (or msvc) on windows…
I was unclear. I meant to say that JBQ computed his SPEC scores with gcc. Mea culpa.
If someone wants to run a full SPEC test compiled with Codewarrior, I’m eagerly waiting for the results.
Me too. The benchmark will still be mostly useless for comparing CPUs for regular users, but it will give me interesting information about gcc vs Codewarrior. Of course, I’d like to see the gcc 3.1 SPEC result then as well, not the (almost obsolete on OS X) 2.95.x.
I’d like Apple to publish their results. They could spend the time to tune which compiler they want to use, which flags they use, etc. SPEC is as much a compiler test as a CPU test (results with Intel’s ICC 4 and ICC 5 on PentiumIII had differences of about 10%).
I’d rather have them work on the usability/performance of OS X and its apps. I will notice when they are faster, more responsive, etc. The gains will be smaller for most users if they optimize the compiler for SPEC. Changes that improve the SPEC score may even come at the expense of regular software. Using SPEC as an important goal (which it will be if publicized) will certainly limit the scope of optimizations. This may be hard to understand, so I will give an example:
Scenario 1 – The Usability and performance of key apps + OS X is paramount
Apple will determine the most promising optimizations (the biggest pay-off in increased user happiness per developer hour). This may be launch time, responsiveness (better threading), GUI acceleration (Quartz Extreme and optimized Swing), C/C++/Obj-C/Java performance, I/O, VM efficiency, multi-processing, Altivec support, boot time, better tools (profiling+debugging) and probably lots more.
Scenario 2 – SPEC is the holy grail
There will be heavy optimization of C-code. Focus will be strictly on C/C++/Fortran/Lisp performance and then only on the features used by SPEC. When there is a choice between:
1. Optimizing the GUI to be as fast as BeOS and
2. Optimizing C++ performance so SPEC will run a tad faster (and apps might run a trifle faster as well),
the second option will be chosen. Of course, the first option will probably make you far more productive. Unfortunately, this doesn’t show up in SPEC at all. If you judge the performance of the platform solely on SPEC scores like JBQ did, Apple will be forced to make these kinds of decisions.
mips and mflops matter.
$ and best price/performance matter too.
Compilers or whatever, it all comes down to who is crunching binary numbers the fastest.
In MIPS, AMD is certainly the winner. The MFLOPS of a Pentium 4 are only higher if the application is SSE-optimized.
I wonder what the Linpack results are for AMD/Intel.
By “most used compiler” I’m referring to big apps. I’m sure that gcc is used more often if you count every little utility that runs on OS X, but these are rarely limited by compiler performance.
Hullo!
1. SPEC uses gcc which is not the best or most used compiler on OS X.
C’t magazine did the run of SPEC which gave the scores that Mr. Queru reported. They said that they’d tried the Metrowerks compiler and it was even slower than the version of GCC that they used.
2. Code that isn’t optimized by hand doesn’t tell us anything about the performance of ‘tuned’ programs. Since a decent number of (number-crunching) applications are optimized for Altivec, you may very well be using applications that are far faster than you would guess based on SPEC.
That is reasonably true. On the other hand, good assembly programmers are NOT common. And not every number-crunching application can make use of Altivec.
3. SPEC is a scientific benchmark. The results tell us how fast Fortran code runs (very important to mathematicians), the speed of Lisp execution (Artificial Intelligence researchers use this a lot), how fast a computer can play Go, weather prediction speeds, etc. This is quite irrelevant for most of us. Using a Mac for a while (preferably with the apps you would be using) is a much, much better benchmark.
The SPECcpu tests measure a very wide variety of codes, in various languages. The integer benchmark, SPECint, consists of 11 C programs and 1 C++. SPECfp tests six Fortran-77 codes, four Fortran-90, and four C programs.
Your application is a better benchmark than anything else. But SPEC (either the whole thing or better yet, the individual tests that best resemble your application) can be a useful predictor.
I like Macs. I’m posting this article from a Cube. But Apple does not have a lead in raw computational performance anywhere except a very small number of applications that are a) able to make serious use of Altivec b) have a programmer able to code the Altivec, as the most common compilers don’t seem to support it all that well, and c) don’t make similar use of Intel’s or AMD’s SIMD instruction sets, either for technical reasons or lack of the programmer’s skill.
MacOS X.II’s bundling of gcc-3.1 should help; it is a much better compiler than GCC-2x, and at least knows how to spell Altivec. But even that won’t improve Macintosh performance enough for them to justifiably claim supercomputer-in-a-box performance.
Yours truly,
Jeffrey Boulier
Well, the header says it all. Like most consumers, I’m more concerned about getting the most *bang for buck*, though I have been eyeing the new Xeons more and more lately.
A question to JBQ (if he’s reading this far into the thread): the game you tried porting to BeOS, that wouldn’t be SimCity3000, would it? Or was it Q3A? Oh well, now its only of sentimental value, just like the famed AAA chipset for the Amigas. I hope you’ve still got a copy of your work, might be priceless on eBay in 10 years or so (just like the AAA prototype board sold by Dave Haynie).
It’s not Apple’s fault or the Mac’s fault, it’s Motorola’s fault; it’s a stagnant chipset and they obviously don’t give a shit about it or Apple. So the lesson Motorola will learn is that Apple will leave them once the OS X transition is complete, say in about a year or two. That being said, I own a 933 G4 PMac and it’s fast enough for everything I do. I bought it to consolidate the platforms I had accumulated: one Windows machine, one Linux machine. Now I do all my development from one spot because I have the tools I need on the Mac; I even have apt from my beloved Debian. All I know is that the experience of using a Mac and OS X was worth the extra I paid for outdated hardware. It’s made my life easier. That’s not to say it will make yours easier, only that for my particular situation it was the appropriate choice for my environment.
I suspect Codewarrior is the most used compiler on the Mac for projects started on OS 9 (or earlier)–projects in which there was already a vested interest in that development environment. (The Metrowerks’ testimonial page supports that thought, upon reading it.)
Further, I suspect that as the Mac slogs forward into an OS X-targeted world, this isn’t going to remain the case indefinitely even for the biggest developers. NeXT’s cultish reputation was based almost entirely on the wonders of its RAD tools–the ones which Apple makes available for free as Project Builder and Interface Builder. Few people will argue against the superiority of them as a development environment compared to the not-very-RAD PowerPlant, and even fewer will argue against their price.
I made the comment a couple months ago that the articles on OS News occasionally seem to edge toward ranting about how Emperor Jobs has no clothes, and I got slapped down for it–perhaps rightly so. The impression of an editorial anger at people who still insist on liking, using and buying Apple products still occasionally returns to me, though, as if we need to be shown the error of our ways. Yes, Apple oversells the speed of their machines. Yes, the PowerPC 750 needs to be made to run at a much higher clock speed. No, great case designs don’t make up for a lack of high-end graphics hardware and RAM and bus speeds that have been anemic for a few years now. Isn’t it high time to acknowledge that those willing to admit this did long ago, those that aren’t willing aren’t going to anyway, and just move on?
When developers use the fastest HW, the end users always get a far poorer experience. It is a never-ending chase. If Apple & MS would force some of their developers to work on a workstation that is at the bottom of the current shipping curve (say 500MHz), they would quickly fix some of the obvious bottlenecks. Remember that much of the world is still using those old junkers that are barely Pentium class. Top-of-the-line speed should be a luxury that gives extreme performance rewards or serves big compute jobs, not a bare minimum for an acceptable experience. Cynically, I am sure Apple/Wintel will want OS developers to be on top of the hill pulling up the rest of the base.
The actual builds of course must use top-speed PCs, since they will get new releases out quicker.
As for some of the x86 bashing, well, a little knowledge can be a dangerous thing. When Apple educates non-technical users on the advantages of PPC RISC, they generally paint the x86 as that thing that draws a line back to the 4004. I used to be one of those x86 bashers too, and sure, on paper the PPC and other RISCs should produce better results with less effort than a souped-up 4004. But the BeOS FAQ referenced earlier was clearly dated even in ’98. The x86 may only have 8 general registers, but today they are orthogonal, and the x86 uses 40+ registers internally along with register-renaming hardware to detect that unrelated uses of the same register name nearby can safely be mapped to unique register names. More direct registers are always better, though. One should also remember that the level-one cache essentially stretches the register space to effective infinity with a 1-cycle penalty per access.
As a cpu designer myself, I would much rather read benchmarks written by people who are prepared to write asm optimised code for all the cpus being compared without prejudice for or against, that makes most vendor benchmarks useless. At least if C compilers are being used, look at the asm code to see if it is at least fair for each cpu.
End of rant
It’s not just compilers that make a difference.
The language also makes a difference; look at this table from here:
http://www.kuro5hin.org/story/2002/6/25/122237/078
Standard C++: 27.99
Standard C++ + SGI STL: 11.15
Standard C++ + SGI STL and hash_map: 6.04
g++ C++: 17.28
g++ C++ + SGI STL: 14.93
g++ C++ + SGI STL and hash_map: 7.29
Standard C++ compiled /clr: 34.36
Standard C++ + SGI STL compiled /clr: 25.09
Standard C++ + SGI STL and hash_map compiled /clr: 12.98
Managed C++: 111.59
C#: 93.08
Java: 65.57
It’s the result of a test written in 4 languages (C++, Managed C++, Java and C#).
Even a single language can get times between 27.99 and 6.04, and that’s on the same compiler with different options. Whereas a variation on the same language (Managed C++) takes 111.59 seconds.
There was some criticism that the author didn’t write the Java very well and the score could have been a lot better.
—
There was a time when people were overclocking Pentiums and found that lowering the clock speed allowed a different multiplier to be used, which allowed an increased memory clock speed. These systems actually outperformed the higher-clocked systems.
I remember a colleague who had a Linux system he found slow, so he upgraded to a faster dual-processor system – the difference? He didn’t notice any!
Speed is however only one measure of a system; does it matter if you have the fastest system if you can’t stand the OS it uses? It’s only going to matter if your CPU usage is 100% for long periods of time, and I know on my system that’s not very often.
The point is that CPU power is only a single variable in a complex system which includes CPU, cache (size, latency, bandwidth), memory (latency, bandwidth, capacity), compiler, language, algorithm, hard disk speed, buses, drivers, etc. Improve everything and you will get a faster system; don’t, and you might not.
JJ, your comments are exactly what I feel. In the corporate world that happens all the time. The programmers in IS seem to have the latest and greatest machines running the latest OS. But the end user is running 3-year-old computers and Win95/98. And when a program is released to the end user, the programmers have no idea why the end user is complaining about the speed of the application.
All benchmarks in my eyes are worth nothing. The last few years, benchmark testing has been questionable in its purpose, in my mind. Look at who is footing the bill and then decide.
Besides, benchmarks do no good when the customer can not get anything done with those blazing machines.
I suspect many people on this board (90% at least) probably do not have to answer to an end user EVERY STINKIN DAY like I have to. You would learn what all that speed and all those benchmarks mean to you at the end of the day.
What day isn’t complete without yet another OSNews article bashing the Mac! Please … post some more! I haven’t quite gotten the picture … for some reason I still LOVE my Mac! Yeah, the PowerPC is slower than x86, so what. I support and love the Mac because even though Apple is shackled with Motorola’s mediocre processors Apple is still doing a helluvalot of innovating. OS X is still slow but 10.1 and 10.2 are each substantial improvements. I have no doubt that in a few years OS X will be regarded as a fast OS in addition to a gorgeous and easy-to-use and standards/UNIX-based one. I also don’t worry about Apple being behind in processing speed. Everyone knows that once everyone is converted to OS X that Apple will switch to x86 if Motorola’s PowerPC or IBM’s Power4 chips aren’t good enough. But they CAN’T switch now while most of their users are still on OS 9. OS 9 is not a portable OS. OS X is. Not to mention the fact that with each passing year the GPU becomes more and more significant and the CPU more and more insignificant. In terms of GFLOPS GPUs are already 100x faster than typical CPUs. In a few years GPUs will be 1000x (yes, GPUs will do ONE THOUSAND times more processing than CPUs) faster than typical CPUs because GPUs are doubling in performance every 6 months while CPUs double every 18 months (well, except for those slow PowerPC G4’s hahaha… good joke yeah! never tire of those G4 speed jokes!) Anyway, Apple has the same GPUs that PCs do. The fact is that Apple is doing the best it can. It is not milking its customers or overcharging to pocket profits. In the past year Apple has either lost money or has made less than 5% of gross revenue in profits. In the most recent quarter Apple turned a 32M profit on about 1.5B in sales. They are not milking their customers. They are DOING THE BEST THAT THEY CAN. I for one am praying for Apple to survive. Apple is the only viable desktop competition to Microsoft. 
I like competition and I want Apple to be around to compete. I don’t want to live in a world where everything is Dell/Intel/Microsoft. Apple is also doing tons of innovation like OpenGL-driven desktops or networking à la Rendezvous. Apple is also very standards-based, with nearly every technology that they are pushing being based on a standard, unlike Microsoft, which is proprietary everything. Just about every application or OS feature has some sort of standards-based innovative idea at its core. But no … let’s ignore all that and keep talking about how slow the G4 is … SOMETHING APPLE HAS VERY LITTLE CONTROL OVER. Anyway, I used to enjoy reading OSNews back before they became obsessed with bashing the G4 performance. Jesus Christ, WE GET THE PICTURE! THE G4 IS SLOW! IT’S NOT THE ONLY ISSUE OR EVEN THE MOST IMPORTANT ONE FOR A LOT OF PEOPLE!
/me removes OSNews from his bookmarks.
I have 2 laptops: a PowerBook G3 500MHz w/ 512MB RAM, 100MHz bus, ATA66 20GB HDD, OS X 10.1.5, and an IBM ThinkPad 800MHz w/ 640MB RAM, 133MHz bus, ATA100 20GB HDD, Win2000 Pro. Although I use the ThinkPad more, because I need it to run the programs I need for my job with networking, whenever I use the Mac it feels faster to me. It may not be faster, but if I feel like it is, then that is what matters.
We could still be stuck with the classic OS. I personally think OS X 10.2 is a giant leap forward compared with classic and as such, am willing to settle for a slightly slower computer. Sure, it would be nice if Apple had some up-to-date hardware but hey, at least they now have a world-class OS coming. In fact, you PC zealots who think XP is great should be thanking Apple for giving Microsoft ideas on how to design XP.
And one more thing. I read an article in a tech magazine about 5 months ago about transistors that can switch at +200GHz. The IBM guy that was talking about it basically said (and I’m paraphrasing here) that it’ll speed up PowerPC chips such as the ones that go into Apple computers and that then the turtle will be faster than the rabbit. Those aren’t the exact words, but you get the drift.
Just be a little more patient…
– Mark
GCC 3.x? No. GCC 3.x is out in a month for OS X, not today (it has been in beta since April). The SPEC benchmarks were done with gcc 2.95.x, which is the current default.
Does anyone know if the PPC people are going to see the same types of speed increases that the x86 people have seen in their compiled code performance? That would be great news towards getting OS X faster, without fancy graphics card tricks.
On the topic of the G4 “supercomputer”, remember where that name came from. It came from a Department of Defense classification of computers for export at the time that the first G4 PowerMacs came out. Basically, if the computer could do 1 GFLOP then it was classified as a supercomputer and had to undergo different regulations for export. The G4’s peak floating-point performance at that time was over 1 GFLOP and it therefore would have been export-controlled. That is, if Apple’s version of a floating-point operation was the same as the Department of Defense’s. None of the floating-point benchmarks shown, I would guess except the SPEC benchmark, are working with double-precision floating-point numbers. Even with AltiVec enabled in the compiler by default, there won’t be much speed increase because it isn’t doing the double-precision math. Therefore the name, while technically accurate, has always been a misnomer, and the PPC will not make up any floating-point performance on the SPEC benchmark due to AltiVec being enabled in the gcc 3.1 compiler.
There are some benchmarks by the SWAR group at Purdue University for using these processors for double precision math (link below), however they aren’t published yet. That should be interesting…
http://shay.ecn.purdue.edu/~swar/
but I bought my computer to run real apps, not an artificial benchmark like SPEC. That’s why I always check out AnandTech or Tom’s Hardware. It probably doesn’t help the Mac much, but they certainly show that clock for clock (and even more important: price/performance), AMD is better than Intel.
One thing that _can_ help Macs is the fact that the GUI is now completely hardware-accelerated in 10.2, as well as the lowered audio latencies.
>Does anyone know if the PPC people are going to see the same types of speed increases that the x86 people have seen in their compiled code performance?
No. Reportedly the speed increase for the PPC will be about 5% (x86 version has seen increase anywhere from 10 to 30%.). But at least GCC 3.1 “pushes” the programmers to write better C++ code, so ultimately a good thing altogether.
A lot of the Mac faithful have been complaining about the SPEC mark not showing real world results. This is only half true. For someone trying to write cross-platform (ie not-hand optimized code) scientific code, this is as accurate a meter stick as any. This obviously won’t tell the same story for signal processing people, database people et cetera. Benchmarks in those categories have to be looked at. Basically, the programmer or buyer needs to make sure the benchmark peg fits into their usage hole.
One of the first comments quotes the performance of the G4 in an RC5 computation.
In the article, it is said that the G4 can only do fine in hand-optimised routines.
And in another comment, someone quotes the performance of the 36MHz-driven Revo versus 200MHz-driven PocketPCs.
Ever used the math and imaging libs from Intel? These are hand-optimized routines that do well.
Programmers usually rely on libraries. So, if the libraries are well optimized, performance is good.
I think that (some of) the Mac OS X libraries have yet to be optimized. Moreover, to ease the use of Altivec, there should be a lib like Intel’s. I don’t think there is one.
C’t magazine did the run of SPEC which gave the scores that Mr. Queru reported. They said that they’d tried the Metrowerks compiler and it was even slower than the version of GCC that they used.
Since they didn’t give the numbers, I expect them to have only run a few tests before giving up. What does this prove exactly? I’d like facts, hard numbers, things that can be checked. This is not much better than hearsay.
That is reasonably true. On the other hand, good assembly programmers are NOT common.
There happens to be a C API for Altivec. You don’t need to use assembly at all. If you have a bit of dough, it is certainly possible to hire an Altivec-programmer for a while to optimize your application (ask around on Apple’s Scitech mailing list). A programmer with experience in optimizing systems should have few problems learning Altivec though. It’s not rocket science. You need to know your algorithms and understand the way the system/processor works (the same things you need for regular optimizations).
The SPECcpu tests measure a very wide variety of codes, in various languages. The integer benchmark, SPECint, consists of 11 C programs and 1 C++. SPECfp tests six Fortran-77 codes, four Fortran-90, and four C programs.
I see. I don’t believe I’ve ever run a program coded in Fortran. SPECfp is thus absolutely useless. What’s the point in testing the combo compiler/CPU for Fortran? Why did you bash the Mac because of its result on this benchmark? Clearly you are biased and/or uneducated.
Your application is a better benchmark than anything else. But SPEC (either the whole thing or better yet, the individual tests that best resemble your application) can be a useful predictor.
Only if you know what the relevance of the tests is to the apps you run. Given the wide variety of codes, the single number you quoted tells us very little about the speed of a particular app. Only the separate benchmarks can be used by an expert to draw sensible conclusions. I don’t think you are such an expert (this is quite clear from your editorial). This is (one reason) why your article is such a meaningless rant.
I like Macs. I’m posting this article from a Cube. But Apple does not have a lead in raw computational performance anywhere except a very small number of applications that are a) able to make serious use of Altivec b) have a programmer able to code the Altivec, as the most common compilers don’t seem to support it all that well, and c) don’t make similar use of Intel’s or AMD’s SIMD instruction sets, either for technical reasons or lack of the programmer’s skill.
I’m not disputing that. I’m disputing the relevance of this observation to users. If you limit yourself to number-crunching applications on OS X, the percentage of Altivec-enchanced apps will be considerable. If you run Photoshop, SPEC will be flawed as a benchmark. The same for OS X, Final Cut Pro, DVD Studio Pro, Sorenson, Media cleaner, Reason, Cubase, etc, etc. Furthermore, the benchmark tells us exactly nothing about GUI performance, the most relevant benchmark for small applications (that are not limited by the CPU-speed).
MacOS X.II’s bundling of gcc-3.1 should help; it is a much better compiler than GCC-2x, and at least knows how to spell Altivec. But even that won’t improve Macintosh performance enough for them to justifiably claim supercomputer-in-a-box performance.
The supercomputer claim is based on Altivec. It was never based on the performance of raw C. Why don’t you check your facts before getting your rant published? It is really that hard to type http://www.apple.com/g4/ in your browser?
JBQ, I understand your point of view. But I believe that AMD hasn’t much of a choice with regard to their marketing.
Consumers generally don’t understand that clock rates do not correlate well with real CPU performance. Many consumers just think: the higher the MHz count, the faster the machine. That puts CPU manufacturers in a very awkward position, as better-designed CPUs offer more performance at a lower MHz count. Will they produce CPUs with higher clock rates while they know that the performance will be severely bottlenecked? Yes, if that sells them more CPUs as compared to the competition…
For example, if I were to sell an Amiga system to a general PC consumer now, I would not be surprised if I got more money for the system using a 50MHz 030 than for one with a twice-as-fast processor, namely a 25MHz 040. It’s a dilemma, and it would take a lot of effort and time before consumers are better educated with regard to this.
Further, I suspect that as the Mac slogs forward into an OS X-targeted world, this isn’t going to remain the case indefinitely even for the biggest developers.
The most recent conversion to OS X of a large application, Maya, was done using Codewarrior. So you are probably talking about a future when we have gcc 3.1+ (which Apple seems to be optimizing for PPC at a fast rate), Codewarrior 8+ and a new PowerPC. We can only judge about that future when it comes (or if you have a key position at Apple).
NeXT’s cultish reputation was based almost entirely on the wonders of its RAD tools–the ones which Apple makes available for free as Project Builder and Interface Builder. Few people will argue against the superiority of them as a development environment compared to the not-very-RAD PowerPlant, and even fewer will argue against their price.
Codewarrior 8 is integrated with Interface Builder and you can use Cocoa (instead of PowerPlant) if you want. Project Builder is nice and all, but it’s not exactly the ultimate IDE (yet). Apple is working hard at it, but Metrowerks (the company that creates Codewarrior) isn’t sitting still either. The sparks are flying which is very good for OS X developers.
I made the comment a couple months ago that the articles on OS News occasionally seem to edge toward ranting about how Emperor Jobs has no clothes, and I got slapped down for it–perhaps rightly so.
It’s true. Eugenia simply does not have the technical or business expertise to correctly judge these kinds of stories (benchmarks, OS X on x86, etc). She also reports on badly written rants, filtering that should be her expertise. I hope that I won’t get bashed for writing this. But of course, JBQ admits in his follow-up that he is a sucky writer that talked nonsense, so it seems he agrees with me.
Don’t you think you were wrong about reporting this ‘story’, Eugenia? I’m not trying to bash you, but I’m seriously interested in what you think about your responsibilities as an editor. Do you feel responsible for printing quality stories or don’t you have a problem with publishing rants if they create page views?
. Yes, the PowerPC 750 needs to be made to run at a much higher clock speed.
The G3? I assume you meant the 7540 (the newest rev of the G4). That processor is mostly limited by the bus. What we really need is a faster FSB and/or an onboard DDR controller. Coupled with incremental clock speed improvements, that would result in a system which should perform quite well. Of course, I’d also like to see the excellent engineers at IBM use their Power4 expertise to create a new ‘low-end’ PowerPC. Fairly reliable rumors tell me such a project is underway and should achieve fruition early next year.
> Don’t you think you were wrong about reporting
> this ‘story’, Eugenia?
I believe Eugenia did the right thing, as she perceives the article as being correct. Nobody claims that OSNews is always 100% right, all of the time. If JBQ, an ex-Apple AND ex-BeOS developer, perceives these statements as being correct, then believe me, others do too.
If the statements are wrong, then use the comments section to give your personal take on the situation. Regardless if right or wrong, it brings up an interesting subject which is excellent food for discussion here on OSNews.
Luckily there are all kinds of people reading OSNews, all with different backgrounds. Give your personal views. IMO the biggest advantage of OSNews is the diversity of backgrounds, that makes OSNews stories more interesting reading material than those found on most other websites, which only view things from one angle: The angle of their similar minded visitors.
A lot of the Mac faithful have been complaining about the SPEC mark not showing real world results.
Why do you try to put down Mac-users by calling them the Mac-faithful? You admit yourself that the complaint is (at least partially) valid, so why are you bashing people? I bought a HP printer the other day, does that make me one of the HP-faithful? Or could it be that I just chose the best tool for the job? I might actually be a rational consumer.
BTW, SPEC is also invalid for AMD vs. Intel comparisons, although less so, since those CPUs have more in common. So I used the PPC to make the point, but my argument is also valid (for the most part) for dismissing SPEC as a tool to bash AMD.
For someone trying to write cross-platform (i.e. not hand-optimized) scientific code, this is as accurate a meter stick as any.
And how many of the people who use SPEC to bash Macs actually write/run non-optimized, scientific code? I’d be amazed if it were more than 0.1%. I criticize the people who don’t make this distinction and just declare the PPC to be a useless piece of junk based on SPEC results.
BTW, cross-platform doesn’t imply non-hand-optimized. Photoshop is both cross-platform and hand-optimized, for instance.
I believe Eugenia did the right thing as she perceives the article as being correct.
I accept the fact that she cannot judge the technical merits of JBQ’s editorial. But the article is a badly written rant. I expect (quality) editors to refrain from publishing such stuff. I don’t know what the value of an editor is, if he/she doesn’t work as a filter to bring us the most interesting and well written stories.
If the statements are wrong, then use the comments section to give your personal take on the situation. Regardless if right or wrong, it brings up an interesting subject which is excellent food for discussion here on OSNews.
Perhaps. But if a site publishes too many crappy stories, the smart people will leave and flame fests will ensue. I have the feeling that there already has been a distinct increase in the number of run-away threads over the past few months (it might be me though). I’ve already refrained from responding to an article a few times because the comments were mostly flames. I felt that the choice of stories to publish and the comments on the stories had a lot to do with the crappy comments.
IMO the biggest advantage of OSNews is the diversity of backgrounds, that makes OSNews stories more interesting reading material than those found on most other websites, which only view things from one angle: The angle of their similar minded visitors.
OSNews has a slant to it. I certainly don’t consider it to be very impartial in its reporting. Of course, that’s why I visit it in addition to other sites that have different angles.
We will know when computers are fast enough because people will then stop arguing over benchmarks.
My guess is that that will be when they are roughly 25,000 times the speed of current models. Current models are at least 1000 times the speed of the Apple ][.
A practical aim would be to ray trace a scene with plenty of textures, using radiosity, in under 1/80 second.
> he/she doesn’t work as a filter to bring us the most
> interesting and well written stories.
For me, an interesting part of this article was *who* wrote this rant. JBQ is a well-known ex-Be engineer, as well as Eugenia’s husband.
> OSNews has a slant to it. I certainly don’t consider it
> to be very impartial in it’s reporting.
If you believe this, then why not do something about it! OSNews considers posting stories by its userbase as well. Feel free to write something interesting and well written (in your opinion) and then submit your article. Actually the people involved in editing OSNews would love insiders/computer professionals/leading community figures to do so!
Funny how Eugenia bashes MacRumors.com for publishing articles that generate hits when she does exactly the same.
The article was valid in a way. Macs are currently slow (except when crunching RC5) compared to the fastest x86 processors, but that will probably change in time. And they aren’t as slow as some people like to believe. Most normal Mac users are aware of this, and a rant editorial like this (that doesn’t really say anything interesting) is really unnecessary (if you don’t want a flame-fest in your forum).
This article was silly; can’t we just forget it and move on to something more interesting?
Spider: I don’t think gcc has been a good compiler for the PowerPC architecture until now. Apple has been working very hard on optimizing gcc to work better and give better results on PPC.
They have also put a lot of work into optimizing the libraries, which should make a big difference.
Jaguar will have gcc3, and that’s good news for all OS X users.
But you are right, Eugenia has been kind of childish lately with all the Apple bashing.
Everybody should write Apple and request that they “get with the program” and allow SPEC benchmarking to be done and published. Apple needs a good prodding to make them move their lazy asses on most issues. They’re excellent at some things, and SUCK at others. I want them to be all-around excellent, so tell them what you think!
And while you’re at it, you might wanna mention the G5. Where *IS* it? Missing in Action, or Killed in Action?
Speck.
I have 5 computers at home, 4 x86’s with different linux/bsd distros, 1 xp box for my wife and 1 G3 500mhz imac with OS X 10.1.5
My mac performs great for me. It is indeed a little slower than my other computers, but I don’t care; I enjoy using a good operating system.
It never crashes, looks great, and I can use all the Unix apps I desire within a nice environment. Everyone has their opinions, but I use every OS I can and finally I can say I found my home.
i can’t wait to get home to my 500mhz g3 to get away from this top of the line win2k dell box here at work.
mmm apples.
To whomever posted the link to the Kuro5hin article, you really ought’ve read the comments attached to it before quoting those numbers. There was a serious load of discussion regarding the poor coding practices of several of the examples, and I believe a string of diaries also discussed the evolution of those examples and some rather substantial shifts in performance of various languages.
I posted the link.
I am aware that there was quite a discussion but that wasn’t relevant to my point (though I did note that Java had been discussed).
Different languages and compiler options can make a very big difference to performance. Some of the differences in those examples absolutely dwarf the difference between performance of x86 and G4. The fact that the performance changed when other people looked at the code only serves to reinforce my point.
Last month Eugenia was called a Linux basher, even though she uses it every day. In July she is called a Mac basher, even though she uses Mac OS. This is ridiculous. None of these discussions have to be flame fests. In them are also very intelligent posts, even ones that disagree with the articles, etc. Flame fests only happen because people respond with flames instead of well-thought-out comments. And, when articles are posted, it’s not fair calling them flame bait either (well, maybe some are on purpose, like John Dvorak’s <g>). Look at the interest generated by JBQ’s article! I learned a lot in this discussion – after plowing through the flames to the useful posts. Many people here write posts that are longer than the articles themselves. Why not polish them up, try to use an objective tone and submit them?
I don’t consider Eugenia a ‘Mac basher,’ really, no–just to be kind of peculiarly zealous in pointing out flaws in the Mac, as if she feels a duty to counterbalance those who refuse to consider the possibility of flaws.
JBQ’s follow-up article is amusing–but I’d change his Camaro and a Metro to a Camaro and an Acura RSX. The Camaro will still whip the RSX’s butt in a straight-line drag race, and may well cost you less, depending on options. There will nonetheless be people who prefer the construction, ergonomics, handling–the fit and finish, or dare I say, the “look and feel”–of the RSX over the Camaro.
Handling, handling, who needs handling? The Camaro is so fun when it starts fishtailing around 100mph (getting close to redline in 3rd gear). NOT.
Plus, the Camaro is so easy to maneuver in tight underground parking garages. It is short, has little overhangs, excellent visibility, feather-like steering, and a very short first gear. NOT.
Finally, the Camaro is such a pleasure to drive in the steep streets of San Francisco. Starting at a stop sign at the top of the hill is incredibly easy thanks to the light clutch that engages progressively, and the smell of a burnt clutch is so pleasant. NOT.
I didn’t drive an RSX, so I can’t compare. But I did drive a Metro, and boy does it feel slow and frighteningly light. (“where are the brakes, where are the brakes?”).
I happen to drive a Z28 myself.. Polo Green, 6 spd. Might trade it for the last ‘SS’.
As far as processor speeds go, most software doesn’t require that kind of processing power. If yours does, then you have a reason. If software really needed it, you would be seeing much better computer sales right now. I like Apple, but their hardware is unfortunately falling behind the times, and not just in CPU MHz.
One thing with Motorola though, it seems that their processor families have very large performance improvements, where Intel always had many more small incremental releases. Hopefully, the G5(?) will help Apple regain some hardware performance.
>Funny how Eugenia bashes MacRumors.com for publishing articles that generates hits when she does exactly the same.
I wrote that for MacRumors the other day because they *LIE*.
*I do not, never did, and never will lie.*
Read my homepage about how important truth is for me.
I do not hate the Mac. As Jay very successfully pointed out, I point out flaws and problems for all platforms. Problems for ALL platforms. It is one of the jobs I got to do over here.
Deal with it. Like it or not, you will have to face the truth. Calling a true statement “bashing” makes you look like a blind zealot, at best.
Thanks a lot to the author of this article. I always suspected that Apple tries to cheat people on many issues, including its OS 8 and 9 versions, where even the simplest things cannot be done, like threading and so on.
In OS X, they overcame many problems and finally came up with a good operating system. I was thinking of buying a Mac recently, but I was always very suspicious about certain claims. The fact that Apple is the only company which doesn’t publish SPEC benchmarks shows that Apple is not serious about its claims.
Also, they make lots of things very costly: .Mac, the hardware itself, the upgrade to Jaguar and so on. You pay more and you get less.
Some people compare Macs to luxury cars, which fewer people have but which are top quality. Now that people have shown that Macs are slower than PCs, I think Macs cannot be compared to a luxury car at all, because with a luxury car you pay more but you get more. For a Mac, you pay more but get less, at least in SPEC, which is a strong indication of performance for many real-world applications. Even in the SPEC benchmark, the Mac should have been better, because we are paying more for that machine.
I was really tempted to buy a Mac, but now I never will.
Don’t believe me? I’m talking about Beetles like the one that Gene Berg ( http://www.geneberg.com ) owned a few years ago. And lots of other air-cooled VWs will beat JBQ easily on the 1/4-mile. The funny thing is, the Beetles have less displacement and less HP (OK, some have more) than JBQ’s Camaro.
What I’m trying to say: you can’t compare ANYTHING on paper. You have to test it in real life.
Peace,
LoCal
—–
Who has a fast old-school Cal Look Beetle (so JBQ, wanna mess with me?)
Who will buy a Mac soon, but will also stay with BeOS
Sergio, Apple (well, really Steve Jobs) is guilty of hot air and too much hype (lying <g>). However, there are considerations that could be looked at, although there is no excuse for how far Apple has gone over the line. Apple was on the brink, Jobs came back and really did save the company. He focused first on trying to re-invent the Mac through design – the iMac and cool looking Power Macs. Then began the road to bundled software, which Apple was very scant on – iTunes, iMovie, iDVD, iPhoto, updating AppleWorks, iTools etc. And, of course, through all of this the development and progress of OS X. So, to me, Apple has a pretty good thing going – cool computers with a friendly looking OS and lots of bundled apps that ordinary people can really use. And now there is .Mac, which is controversial right now (I think they should at least let people keep their email for free and have written to them saying so), but actually has a lot of stuff. Getting Virex just in itself is worth something. But, Apple deserves criticism for not being truthful and gouging people. To me, they only need to do two things to make buying a Mac a great thing – just tell the truth and speed those Macs up! That’s it. If they do that then buying an iMac for consumers or a Power Mac for design, DTP, etc. is once again a great thing. Now, if only they will do these things <g>.
For optimised applications, a P3 600 goes about the same as a G4 500, when using AltiVec and SSE. That’s what an average of all the optimised benchmarks I’ve seen shows.
At higher clock speeds, the G4 lacks the optimisations needed to keep scaling the CPU with clock, so it doesn’t perform linearly… unlike AMD and Intel CPUs, which almost do.
SPEC isn’t optimised… but neither is 90% of the programs people use. However, fortunately, an OS can use optimisations in the 3D engine it provides to games and applications. So in reality, some of that 90% of unoptimised applications runs some optimised code.
Code is faster if it’s written in asm… but you can use an autovectorising compiler (like the Intel compiler or MS Visual Studio .NET) to produce near-optimal code. Or you can use already-vectorised libraries.
The Intel C++ optimising compiler and MS Visual Studio .NET (see the speed improvement when autovectorisation was added) are the most common compilers for the PC. gcc cannot autovectorise… however, it can use the extended instruction sets.
If anyone can complain that the SPEC mark isn’t relevant to the usual applications, it’s PC users, who have the best compilers in almost exclusive use.
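To make the autovectorisation point concrete, here is a minimal C sketch (my own toy function, not taken from SPEC or any real benchmark) of the kind of loop those compilers look for:

```c
#include <stddef.h>

/* A textbook "saxpy" loop: unit stride, no loop-carried dependency.
   This is the shape an autovectorising compiler (such as Intel C++)
   can turn into SSE or AltiVec code that processes four floats per
   instruction; gcc of this era compiles it as plain scalar code. */
void saxpy(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The same source compiles everywhere; only the compiler decides whether it becomes scalar or vector instructions, which is exactly why compiler choice dominates these benchmark arguments.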
Eugenia knows way more than most of the people I see posting here. So many posts are like “argh, I have a Mac and I like it and it is really fast but I can’t prove it to you but I know it’s so… and also it’s so much better than Windows and anyone using Windows must be an idiot”…
for god’s sake
I chose to use Windows because it’s better.
I get 1 ms latency on audio FX with live monitoring now, and have for a year… I’m not waiting for CoreAudio and some OS X sequencer to finally be released.
I find the Mac UI designed for people who like to do one thing at a time (such as the menus for the app with focus living in what Windows uses as the Start bar! How dumb is that… how do I access the menu of an app not in focus?)
The Mac UI cannot be used with just the keyboard or just the mouse… you have to use both a lot (since the mouse only has one button), and the UI cannot be accessed fully from either one (but Windows’ can).
Mac is about style… not computing… it’s about the color of your box, not what’s in it. It’s a lifestyle if you want… I don’t care… I’m an IT professional and musician and I am VERY informed and I make my own choices.
This is not a conspiracy… I use Windows (and so do other people) because we believe it’s better.
“Mac is about style… not computing… it’s about the color of your box, not what’s in it. It’s a lifestyle if you want… I don’t care… I’m an IT professional and musician and I am VERY informed and I make my own choices.”
LOL Glenn, you blew your whole argument with that paragraph! I’m glad you’re an IT professional, but I’m afraid that doesn’t make you The Universal King of Computing <g>. The fact that you like Windows more than any other OS is great. Windows XP Pro is the first real version I really, really like. I think it’s great. Every OS, every platform has their strengths and weaknesses and that’s part of what makes it all fun. To me, when you proclaim one as supreme over Macs or anything else, you are robbing yourself of one of the great enjoyments of computing – diversity. Enjoy!
It should be pretty clear to many people by now that if you really want to see speed on a Mac, you need to vectorize. The good news is that if you do, the Mac is very competitive and usually scores much better than the other platforms.
While GCC will compile AltiVec code if you turn the -faltivec flag on, it will not autovectorize code — that is to say, you can’t just turn the -faltivec flag on and expect SPEC to be vectorized automatically for you. There is VAST, an autovectorizing preprocessor sold by a third party. I am not sure that anyone has looked at using that with SPEC. It might provide some improvement.
All in all, I am not at all bothered by these numbers, because it is fairly clear to me that high performance is no longer the domain of scalar processors. SPEC is a scalar performance benchmark. As a result, it is getting pretty irrelevant. I think we will see more vectorization on all processors in the future, not less.
Performance-sensitive code will need to be vectorized. Perhaps with luck there will be some better vector programming languages out there that can be compiled cross-platform to make a vectorSPEC. Until then, you need to rely on vectorized versions of key functions commonly used by many apps. Good examples might be BLAS and FFT:
http://developer.apple.com/hardware/ve/summary.html
Sadly, our author reveals his bias by categorically refusing to vectorize. It seems unlikely he has actually tried it himself. Developers who want to compete in the Mac market can and will vectorize. Those who do not will be at a significant performance disadvantage vs. their competitors.
Vectorizing is not really that big of a deal. You run a profiler, pull out the top N functions using CPU time, and vectorize them. You don’t have to vectorize the whole app to see a multi-fold speed improvement. Just a small percentage will do.
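As a rough sketch of that profile-then-vectorize workflow (hypothetical functions of my own, written in plain portable C rather than actual AltiVec intrinsics), the classic transformation on one hot function looks like this:

```c
#include <stddef.h>

/* Scalar dot product: one serial dependency chain through `sum`. */
float dot_scalar(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

/* The same hot function restructured with four independent partial
   sums, each of which maps onto one lane of a 4-wide SIMD register
   (an AltiVec or SSE vector of floats). Only this one profiled
   function changes; the rest of the app stays scalar. */
float dot_restructured(const float *a, const float *b, size_t n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    float sum = s0 + s1 + s2 + s3;
    for (; i < n; i++)  /* scalar tail when n is not a multiple of 4 */
        sum += a[i] * b[i];
    return sum;
}
```

The restructured version is the one you would then rewrite with vector intrinsics; the point is that the profiler tells you which handful of functions deserve this treatment.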
“The good news is that if you do, the Mac is very competitive and usually scores much better than the other platforms. ”
Argh, why do people think this? THEY ARE SO WRONG. Resistance is futile; Apple users are already so far up Steve Jobs’ butt there is no sight of any light at the end of the tunnel.
Look up Bryce, look up Adobe, look up digital video editing, look up games, look up audio benchmarks.
Look up all these things that have been optimised for SSE and AltiVec… and who wins? The PC, hands down.
AS I SAID:
a Mac G4 at 450 MHz is about the same as a P3 600 when running optimised code.
HOWEVER, the Mac cannot scale as well. A 1 GHz G4 is not twice as fast as a 500 MHz G4.
AMD and Intel CPUs scale much more linearly.
AltiVec cannot use 64-bit floating point numbers, which is why Apple is awful at REAL scientific computing. That’s why the head of Mathematica has said he cannot use AltiVec at all in the upcoming OS X Mathematica. (However, he could use SSE2 on the P4.)
If you look back a few days to another Apple article here, you will see a real scientist debunk the Apple myths. And that was particular code where the users could use 32-bit numbers AND it was heavily optimised and promoted by Apple.
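The 32-bit limitation is easy to demonstrate. Here is a small toy illustration of my own (nothing to do with Mathematica’s actual code) of why single precision, the only float width AltiVec handles, falls apart in long computations:

```c
/* Sum `n` copies of `step` in single and in double precision.
   With n = 10,000,000 and step = 0.1, the 64-bit sum stays within a
   tiny fraction of 1,000,000, while the 24-bit float mantissa
   accumulates rounding error on every addition and drifts far off. */
float sum_single(float step, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += step;  /* each addition rounds to 24 bits */
    return s;
}

double sum_double(double step, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += step;  /* 53-bit mantissa: error stays negligible */
    return s;
}
```

This is the kind of accumulated-error behavior that makes 32-bit-only vector units a non-starter for serious numerical work, regardless of how fast they are.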
“Vectorizing is not really that big of a deal. You run a profiler, pull out the top N functions using CPU time and vectorize them”
Argh, would the people from macrumors.com please return there… they know nothing about compiling or operating systems.
Go back to MacWorld and stay there, please. Or become a little more informed before posting, so you know what you’re talking about.
It is NOT always possible to vectorise code… and it certainly ain’t easy. It might be easy if you have data in a format that is easily vectorisable… but if that were the case, a simple compiler flag would vectorise it into SSE, 3DNow! or AltiVec instructions.
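To illustrate that distinction with a toy example of my own (plain C, not from any real codebase): the first loop below has independent iterations and is the easy, flag-vectorisable case; the second has a loop-carried dependency, and no compiler flag alone will turn it into SIMD code:

```c
#include <stddef.h>

/* Independent iterations: trivially vectorisable. */
void scale(float *out, const float *in, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;
}

/* Loop-carried dependency: out[i] needs the result of iteration
   i-1, so the four lanes of a SIMD register cannot compute these
   additions in parallel. Restructuring this (a parallel prefix
   sum) takes real algorithmic work, not a compiler flag. */
void prefix_sum(float *out, const float *in, size_t n)
{
    float running = 0.0f;
    for (size_t i = 0; i < n; i++) {
        running += in[i];
        out[i] = running;
    }
}
```

That is the whole argument in miniature: whether vectorising is “not a big deal” depends entirely on which of these two shapes your hot loop happens to be.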
I’m sure some of you Apple people hate me… but I’ve posted many benchmarks on these forums before, just like the people who run OSNews, who get complaints that they are bashing Apple every time. If you intend to post again saying “I own a Mac and it’s really fast but I can’t prove it but I love Steve Jobs and I know I’m right”… then please do some sort of search and find the many benchmarks already on OSNews, MOST OF WHICH USE ALTIVEC, and yet the PC wins.
The truth hurts… especially if you own a Mac.
Hehe, my friend just read my post and said it was a bit harsh… so, um, maybe it was.
I don’t mind Apples, or Apple people… in fact some of my friends own Apples.
It’s the way the Apple community responds that is annoying… I like these forums because I often learn new things and there are great debates. This one, however, was once again killed simply by the number of Apple faithful who don’t understand computers at a level that lets them see through the reality distortion field. All I would like is for Apple users to stop posting here when they simply don’t understand the topic and try to debunk some low-level RISC vs. CISC talk with something like “the Apple web page says my Mac is faster”.
Anyway, peace, love and happiness to all
Glenn
What a roast…
Glenn is right though, AltiVec ain’t that flash.
(But he does tend to get a bit biblical about these arguments.)
Glenn, I don’t hate you. I totally agree, Apple has to speed things up – or else. LOL, I don’t know what they’re going to do with all these digital companies they bought if they can’t get the OS moving. I do think consumer Macs are a good deal for people who are, well, average consumers and computer users. The suite of iApps and AppleWorks is just right for them. But even those iMacs should have at least a 1 GHz G4 – OS X is just plain too slow as it is now. And I’m only using the benchmark of my eyes <g>.
I get 1 ms latency on audio fx with live monitoring now and have for a year… im not waiting for CoreAudio and some osx sequencer to finally be released
I’d be curious to know what you are running to get this type of performance, b/c we in the r.a.p. group have not even come close to this with XP. If you are indeed getting 1 ms latency (and I am quite curious about that), I must ask if you have any additional hardware assisting in the processing (TDM? dedicated audio cards?).
What you fail to realize, since you brought it up, is that Apple *specifically* developed their CoreAudio system with 1 ms latency [as a goal] such that ANY sequencer/audio-app developer can benefit from it innately within the OS, from the Apple APIs. There exists no such equivalent in the Windows world. If a developer wants to get 1 ms latency in Windows, they have to reinvent the wheel and write that low-level code themselves; if they want to do it in OS X, they simply use the Apple APIs.
You are 100% correct that audio app development has been very slow-moving on OS X; understand that all these apps have very little portable code from their OS 9 counterparts, and it is a huge task to rewrite many of these high-end audio apps (almost) from scratch.
=)
-spider
>>The Mac UI cannot be used with just the keyboard or just the mouse… you have to use both a lot (since the mouse only has one button), and the UI cannot be accessed fully from either one (but Windows’ can)<<
WRONG! You can control the UI completely from the keyboard if you wish, and vice versa… get your facts straight before spreading FUD!
>>Mac is about style… not computing… it’s about the color of your box, not what’s in it. It’s a lifestyle if you want… I don’t care… I’m an IT professional and musician and I am VERY informed and I make my own choices.<<
WRONG! It’s a tool just like any other computer!! I am a programmer and also a musician and I am very informed and I also make my own choices… that is why I left the Wintel platform in the first place!!!
With Sonar 1 and WDM drivers designed by MS, this old reviewer was getting 2.9 ms with 72 tracks. The article is a year old and was running on a Celeron 600. Is OS X yet claiming to get 72 tracks on a G3 600 in 3 ms? I know people nowadays get 1.5 ms, but I can’t find any pro recording link online. Since you only notice live monitoring latency above 5 ms (it’s only really noticeable over 10), it’s a pretty moot point. But yes, my computer monitors in 1 ms (even less, actually), though my card is an Echo Gina and not as good as some of the others, so it drops out sometimes. If you want low latency, you have to get a good sound card.
http://www.prorec.com/prorec/articles.nsf/3fbdd95be86013f1862567f30…
WDM is a protected driver model… it means that, despite being a kernel-level (hello, speed) driver, the sound card driver CANNOT overwrite the memory of other processes (however, my x86 OS-writer friend tells me it could READ that memory and hence be a security risk). This is why I can’t crash my computer or my sound card with SONAR, no matter how hard I push it… and I’ve tried. I also use ASIO on the PC, but it isn’t as good for latency or reliability. I’d be guessing, but I’d say that since Linux and all the other operating systems don’t support WDM-type drivers, OS X doesn’t either.
What this means is that an audio device driver under OS X sits at kernel level with no memory protection (like on all operating systems except Win2k) and so can overwrite any memory. If your sound card driver crashes, it can take your whole system down.
Hey again CattbeMAc
I spent ages trying to find that IBM review of the OS X UI… in it, and on ZDNet, I read that most but not all of OS X is available from the keyboard. They had some screenshots of Sherlock (I think) and said how you can’t use just the keyboard to get to all the parts of the form. Sorry I can’t find it; perhaps they have fixed that by now.
Yup, I know it’s about the computing… it was a cheap jab. You made your informed decision and me mine… it only becomes a problem when people berate others about their choice of computers and force them to buy something with false marketing information.
>>Yup, I know it’s about the computing… it was a cheap jab. You made your informed decision and me mine… it only becomes a problem when people berate others about their choice of computers and force them to buy something with false marketing information.<<
I am in agreement with you here 🙂