New Mac OS X 10.4.2 build, Intel Mac benchmarks
Apple seeded a new build of Mac OS X 10.4.2 to developers. Build 8C27 addresses a few bugs from build 8C26 and features an “improved widget download experience.” Individuals have submitted Xbench benchmark results from Apple’s Pentium 4-based Power Mac systems. The benchmarks do not reflect native performance of the 3.6GHz systems, however, but rather provide an indication of how PowerPC-compiled applications will run under [the emulation-based] Rosetta on Intel-based systems.
About The Author
Eugenia Loli
Ex-programmer, ex-editor in chief at OSNews.com, now a visual artist/filmmaker.
“For example, a typical game (really console, not PC), to speed up loading times, may precook its data (textures, shaders, models, etc.) so it can be loaded straight off the disk and used directly (e.g. no “float x = file_read_float(…)”, but loading the full structure in one go)?”
Which is smart. Would take at most a static_cast. All in all, this kind of casting doesn’t produce any extra machine code; it’s all about compiler errors/warnings and such. It’s just there to let the naive programmer know what he’s actually doing.
“Fat/Universal binaries would provide two different executables (per platform), but data would stay the same (or that would be preferred, as otherwise double the amount must be produced, as on CD-ROMs, DVDs, etc.).”
Unless you’re a complete failure at optimizing software, your file format would include an extra field in the header that indicates the endianness of the data. If you happen to run on a platform with the other endianness, swap all relevant bytes and OVERWRITE the previous file (unless you do it in place, which is more reasonable). If it’s read-only media, then you’re screwed anyway. Still, nobody said optical media was fast, and the I/O contention will overshadow the CPU contention every time; i.e., there will be plenty of time to swap the bytes on the fly.
So you’ll have a one-time-only penalty of converting formats the first time your data is read, and it’s still portable, since all you’d have to do is swap the data back if you change platforms/endianness again.
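As a sketch of that header-flag-and-swap idea: the magic value, struct layout, and function names below are all invented for illustration, not from any real file format.

```cpp
#include <cassert>
#include <cstdint>

// Byte-swap a 32-bit word (std::byteswap is C++23; this is portable).
constexpr std::uint32_t swap32(std::uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Hypothetical file header: a magic word written in the producer's byte order.
constexpr std::uint32_t kMagic = 0x50434B44; // "PCKD", invented for this sketch

struct Header {
    std::uint32_t magic;
    std::uint32_t record_count;
};

// If the magic reads back swapped, the file came from the other endianness:
// swap every field once, then use (and optionally rewrite) the data natively.
bool fix_endianness(Header& h) {
    if (h.magic == kMagic) return true;          // already native
    if (swap32(h.magic) != kMagic) return false; // not our format
    h.magic = swap32(h.magic);
    h.record_count = swap32(h.record_count);
    return true;
}
```

After the one-time fix-up, every subsequent load is a straight read; swapping back for the other platform is the same function run again.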
I wish Apple could provide something similar to what DEC did for the Alpha version of NT 4.0 way back in 1997.
And you don’t think most ISVs were fully aware of the switch well beforehand?
No. Otherwise it would have leaked much sooner. The Mathematica guy certainly didn’t know anything about it until he was invited to do a quick port over the weekend.
Apple would have announced immediate availability of actual Intel Macs if it somehow could have got the ISVs to secretly prepare for the switch.
It would have been better if they had been able to start shipping Intel Pentium M iBooks today instead of in a year’s time, but as people have mentioned, it will probably not be the current Pentium M that appears in the first Mac, rather a dual-core 64-bit Pentium M.
I wonder how they will distinguish between iBook and Powerbook?
Celeron & M type thing?
As long as Intel Mac can boot into target mode and from Firewire, then as far as I care, it’ll still be a Mac. If they take those features away, then there’s gonna be trouble…
The upper management of big companies would have known. Such as Adobe.
Mathematica, by contrast, is small potatoes.
@James, Rayiner
Thanks for explaining. I don’t understand fully what Rayiner is saying, but I get enough of it to see the issues involved.
It’s silly to compare this new x86 move to “what might have been” with Cell (not because of the order of execution or other CPU traits, but simply because it’s not going to happen yet). Cell is something with the future very much in mind; its potential is massive, and according to the specifications it CAN be built around other POWER5/6 architectural “cores”. The particular Cell design for the PS3 is very specific to Sony’s requirements, and BOY is it impressive to a chip geek like me!
I do understand why Apple ducked out of the CPU race, but it’s kind of sad that they will never really be ahead of the game again, ever, using generic Intel CPUs. There’s a limit to what’s possible with the archaic mess that is x86; many others (like me) are tired of faffing around with SSE simply to provide the kind of facilities we should now be able to “expect” of a “modern” CPU design.
I would have loved for Apple to have bought into Cell, but sadly that’s something they could ill afford. It’s now up to others to innovate, and I’ll buy into the first company ambitious enough to take the first step with Cell.
…I’ll keep my eye on the updates happening to the Linux kernel (for Cell), and the coming announcement of the first Cell desktop at the end of the month.
There’s a limit to what’s possible with the archaic mess that’s x86
That’s what people have been saying for the past 20 years or so.
All the messy instruction set incurs is a bit of extra decoding hardware to turn those instructions into RISC-like micro ops. With every processor generation that becomes less of an issue. Furthermore, the variable-length encoding yields smaller binaries and thus saves memory bandwidth.
Besides, AMD has done a good cleaning-up job with x86-64.
many others (like me) are tired of faffing around with SSE, simply to provide the kind of facilities we should now be able to “expect” of a “modern” CPU design.
What’s your problem with SSE? That’s one of the cleaner parts of the x86 instruction mess.
It’s silly to compare this new x86 move to “what might have been” with Cell (but not for the order of execution or other CPU traits – simply because it’s not going to happen – yet).
Well, the fact that Cell isn’t actually available yet is a bit of a problem, but you can’t just dismiss the technical issues.
An in-order design with a long pipeline and no branch prediction would choke badly on desktop applications, even more so if they haven’t been specially compiled for it. And the slave processors would go unused anyway without substantial application rewrites.
On the other hand, if the Cell did have all the nifty techniques of a G5, it would also be as big as a G5 and there would be no space for the slave processors.
Lastly, if the Cell concept with a big core controlling lots of small ones should turn out to be better than having just a few big cores, there’s no reason why that couldn’t be done with x86.
“There’s a limit to what’s possible with the archaic mess that’s x86 – That’s what people have been saying for the past 20 years or so.”
Yup, and it’s just as true now as it was back then; in fact it’s kind of worse.
I used to program a number of alternative architectures in the early days, starting with the ARM1, a beautiful design to work with, very powerful yet simple. Writing assembler was quicker (for me) than writing in C (no kidding). I’ve worked with the 68xxx and PPC/AltiVec, again some nice features for the assembly coder… then there’s the x86… horrible.
No matter, I had a case of burn-out and took a career change, now I’m thinking I need to return, Cell looks very interesting to me.
“All the messy instruction set incurs is a bit of extra decoding hardware to turn those instructions into RISC-like micro ops. With every processor generation that becomes less of an issue. Furthermore, the variable-length encoding yields smaller binaries and thus saves memory bandwidth.”
I don’t care how “clever” the microcoding is; it’s still a dag-awful instruction set, derived from an ’80s obsession: all stack and no registers.
“Besides, AMD has done a good cleaning-up job with x86-64.”
True, but only for those who can be troubled with writing many instances of the same code, for each instance of AMD/Intel rattlin’ more bones and burning another onion! There’s only so much voodoo you can do with bad Juju!
x86 has always been something of a skateboard with a rocket attachment, something Wile E. Coyote would no doubt approve of.
…and yes, it DOES matter to C/C++ etc. For kicks, go over to the Gentoo forums and check out the near-infinite discussions about compiler flags and optimizations for GCC. Different optimal settings for each CPU; one man’s speed grease is another man’s engine brake… it’s all a huge mess.
One good thing about Apple controlling the hardware is that they won’t HAVE to implement or recompile for 10 different instances of essentially the same CPU base architecture.
“many others (like me) are tired of faffing around with SSE, simply to provide the kind of facilities we should now be able to “expect” of a “modern” CPU design. – What’s your problem with SSE? That’s one of the cleaner parts of the x86 instruction mess.”
Because it’s not that simple. If there were just SSE, and not also SSE2, SSE3, 3DNow! and further cleanups and band-aids here, there and everywhere leading to more variants, it would be fine; as it is, it’s a parody of itself, and a bad one at that. My point is that the enhancements are not replacements for the original instruction sets, they’re additions, and they’re not even consistent.
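One common way to cope with that sprawl of SIMD variants is a runtime dispatch table: detect the CPU’s feature level once at startup and route calls through a function pointer. A minimal sketch, with the detection stubbed out (real code would query CPUID feature bits) and every name invented for illustration:

```cpp
#include <cassert>

// One implementation per instruction-set variant; plain scalar stand-ins here.
static int add_scalar(int a, int b) { return a + b; }
static int add_sse2(int a, int b)   { return a + b; } // would use SSE2 intrinsics

enum class SimdLevel { None, SSE2 };

// Stub: a real implementation would read CPUID feature bits.
static SimdLevel detect_simd() { return SimdLevel::None; }

// Pick the implementation once, so hot paths pay only an indirect call.
using AddFn = int (*)(int, int);
static AddFn select_add() {
    return detect_simd() == SimdLevel::SSE2 ? add_sse2 : add_scalar;
}

static AddFn add = select_add();
```

The cost of the pattern is exactly the complaint above: every new extension means another implementation of the same routine.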
“It’s silly to compare this new x86 move to “what might have been” with Cell (but not for the order of execution or other CPU traits – simply because it’s not going to happen – yet). – Well, the fact that Cell isn’t actually available yet is a bit of a problem, but you can’t just dismiss the technical issues.”
Cell is a future technology, true. That’s why it’s speculation, and a bit odd to compare it to something which exists now. However, some devs have prototypes and are working with them right now.
“An in-order design with a long pipeline and no branch prediction would choke badly on desktop applications, even more so if they haven’t been specially compiled for it. And the slave processors would go unused anyway without substantial application rewrites.”
A lot of the issues with branch prediction and out-of-order implementations come down to the architecture in question. For x86 those things are essential. Cell is something of a paradigm shift, remember; it’s no “ordinary” CPU with a bunch of AltiVecs bolted on à la Intel’s methodology, it’s a bigger picture… and it has multitasking and threading very much in mind. Check the spec.
Let’s just wait for the end of the month, when there’ll be a desktop machine announced, and some real factual documentation, not CPU hyperbole.
“On the other hand, if the Cell did have all the nifty techniques of a G5, it would also be as big as a G5 and there would be no space for the slave processors.”
Not according to the information so far. Like I said, let’s wait and see… exciting, eh?
“Lastly, if the Cell concept with a big core controlling lots of small ones should turn out to be better than having just a few big cores, there’s no reason why that couldn’t be done with x86.”
See above. There are a lot of reasons, but mainly that Cell is a paradigm shift in the way CPUs operate and interact. For x86 to adopt the concept of Cell, it would have to become Cell.
x86 will have to go somewhere soon, Intel can’t keep tweaking the clock and adding cores forever, I hope they change it I REALLY do!
“They chose BSD to build MacOS X, so now they have a stable but slow OS. If they switch to Linux and build their OS on it, they will have better performance.”
I agree totally with this comment. Read this article from AnandTech: http://www.anandtech.com/mac/showdoc.aspx?i=2436. The server tasks are 10 times slower than on PCs. The Darwin kernel sometimes really stinks.
Yes, the Mach microkernel grafted into the FreeBSD base stinks due to the way in which it interacts with the base OS. Linux won’t solve the problem at all, that’s purely fanboy speculation. The problem is that the latencies involved (due, no doubt, to the emulation required to stick a microkernel in a monolithic framework such as FreeBSD) make the whole OS (not the hardware) a bottleneck… A pure FreeBSD OSX without the Mach microkernel would perform an order of magnitude faster… (Sticking Linux as the monolithic core would induce the same latency issues, and as such would make no more sense… but it might be interesting to see a Linux OSX. Assuming, of course that you could talk Apple into shooting at the moving target that is Linux development…)
Well, I’m eager to get one of those OS X Intel Preview copies, if only to see whether it runs on a “normal” computer; just to test it myself…
Huh?
Have you actually worked with widgets?
Click on the download link
Download window tells you: you’re downloading a widget, do you want that?
You say: NO => when you just clicked a link and some smartass wants to perform an unauthorized install
You say: YES => when you actually wanted to download and install the widget.
The widget goes to your personal folder [only available to you, unless you move it to the general system folder].
F12
clicky on the big PLUS sign
scroll through the available widgets
drag the new widget to the widget desktop
you’re all set
What do you mean: CREATE the widget folder?
Have you been smoking aunt Selma’s washing powder again? Didn’t we tell you not to do that?
Intel purposely weakened the FP unit in the Pentium 4 and introduced SSE2/SSE3 both as a replacement for the slow CISC FP unit and as a double-precision SIMD core. That helped them push clock speeds higher (in GHz) than the traditional Athlon XP. It may be true that the Athlon is faster at FP than the Pentium 4, but the Pentium 4 is faster in SSE-compiled programs, and as they sell more and more P4s, SSE adoption in applications grows.
So now Intel has a modern SIMD core (and urges developers to use it) alongside an archaic FP unit and integer unit. Since Apple has no legacy 32-bit or 16-bit x86 programs, Apple may only need to deal with SSE and the archaic 32-bit integer unit. But maybe in 2006 the 64-bit x86 ISA will replace the archaic 32-bit integer unit.
So by 2006/7: two modern, optimized 64-bit FP/integer units, one modern, optimized double-precision 128-bit SSE unit, and, for backward compatibility, two small, unoptimized archaic 32-bit FP/integer cores. The good thing is Apple has no legacy 16/32-bit x86 code, so the worst case is 32-bit only plus double-precision SSE.
This Rosetta things obviously needs optimizing and more G4/G5 support. Even so, I’ve got a bad feeling the new Intel Macs will have to sport some serious Intel stoves just to get to *today’s* average decent PPC performance. Which kind of sucks – one will probably need a super fast, thermally challenged CPU just to get the basics. I hope Intel comes up with something clever quick – the climate is getting warmer after all
It would be also interesting to compare MacOS X with Windows on the same hardware…finally one point of reference (I wonder also if it will become this for emulator authors, demoscene, etc…)
Overall, the Intel Macs are scoring between 65 and 70 with Xbench, a far cry from the 200+ scores higher-end G5 systems reach. The CPU test is landing in the high teens compared with scores of 100 to 200 for G5 systems, but that appears to be primarily due to lackluster FPU scores. According to a recent Macworld story, Rosetta does not support AltiVec instructions, which substantiates the results. The GCD Loop score for the Intel Mac, part of the CPU test, is a respectable 110, compared to dual-2.5GHz G5 Macs that score about 140.
That’s not so bad. The total score is about as good as an iBook’s. This is a really fast emulator. No, it’s not an emulator; it’s some kind of dynamic binary translator.
Nonetheless this seems fast enough for me. It will be mostly for the old apps anyway.
They said not to benchmark with these, and when the Intel chips are used they won’t be the current crop of P4s; they will be the dual-core chips and the next-gen laptop chip. These numbers are pointless, don’t worry.
The binary translator is very similar to qemu in behavior. It converts the code on the fly, and API calls go through natively, so only the application itself is slowed down.
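A rough model of how such a translator amortizes its cost: each guest code block is translated once and cached, so repeated execution pays only a lookup. Everything below is a toy invented for illustration; the “translated code” is just a C++ lambda rather than emitted machine code.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>

// Toy model of a translation cache: guest block address -> host code.
using HostBlock = std::function<void()>;

struct Translator {
    std::map<std::uint32_t, HostBlock> cache;
    int translations = 0; // how many times we paid the translation cost

    // "Translate" a guest block; a real translator would decode the guest
    // instructions and emit equivalent host machine code here.
    HostBlock translate(std::uint32_t guest_pc) {
        ++translations;
        return [guest_pc] { (void)guest_pc; /* run translated block */ };
    }

    // Look up the block, translating only on a cache miss.
    void run(std::uint32_t guest_pc) {
        auto it = cache.find(guest_pc);
        if (it == cache.end())
            it = cache.emplace(guest_pc, translate(guest_pc)).first;
        it->second();
    }
};
```

This is why hot loops fare much better under such a translator than code that is executed only once.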
It seems that overall the CPU/memory score is in the 30-50% range. This is the range I was expecting at best; I doubt it will go higher in the future. BTW, the score is impressive when you know how different the “mnemonics” and registers of each architecture are.
Meanwhile, I expect heavily vectorized programs to be slower in their x86 ports than their PPC equivalents, but the clock frequency should compensate very quickly. AltiVec is a very, very good instruction set, and the G4 implementation is impeccable (the G5 one is lower quality). Expect an SSE2 program on a 2.2GHz Pentium M to be put to shame by the version on a 1.67GHz G4.
For an emulator/translator the results are quite good. I’ve tried PearPC and I bet it is a lot slower.
I also think that the Intel Macs will feature new Intel CPUs and not the Pentium 4, because one year is quite a long time ;D
Hopefully there will be no “Intel Inside” sticker on the Intel Macs, ’cause that would ruin the whole Mac design XD
Another reason not to worry is that those “development kits” are something quickly glued together that works so that developers can port their software.
It won’t support AltiVec, G4 or G5 chips. So if your program needs one of those, you’d better not try to run it under emulation.
The shipping hardware will be drastically different.
First, the dual-core CPUs that Intel will furnish for Apple’s consumer-grade hardware are a radical departure from the P4s currently being used. There is also the fact that the pipelines, cache hierarchy, and decode and execution units (the internal RISC core) follow very different architectural philosophies. The performance characteristics are thus going to vary. The P4 has poor FPU capability unless you heavily leverage SSE2, but doing that in this setting isn’t really easy. Also, I haven’t looked at it, but SSE3 might contain some nice goodies, more in the details of the implementation; AFAIK, they’re not adding much feature-wise.
Second, there is software. I doubt the Intel drivers have been tuned much; that alone should substantially improve I/O performance, which should get rid of a lot of the user-perceptible slowdown.
Something that the others don’t seem to have noticed:
This is about a single P4 vs. dual G5.
So overall, that performance is really good, especially under emulation/translation. Native performance ought to be much better, especially once things are more optimized.
Well, at least it beats CherryOS (eh… PearPC).
I wonder how long it will take before this dev release starts popping up on the p2p networks.
video card (nvidia/ati) ? chipset ? hd ?
I’m building a new homebrew pc and i’ll love to run os x as well (at least the preview)
Let’s say… I’m building a PowerMac X5!
Is Apple actually shipping experimental P4 systems to developers who joined the Select or Premium ADC memberships?
Hi,
I don’t know if I’m understanding the statements from Apple correctly. But they are for sure going with Intel now for chips.
My question is this: will programs now be written only for Intel, or will they also work on PPC? I know there is mention of fat binaries, etc. But if a developer decides to only code for x86, won’t I be unable to run the program on my Mac mini?
I am pretty worried about this, since I’ve been saving up to buy the Mac mini, and I’m afraid it might become obsolete and stuck with the same software going forward. I was already saving up for a G5 or even a dual G5, but would that be a waste of money now?
Any help would be appreciated!
My question is this: will programs now be written only for Intel, or will they also work on PPC?
Well, if this Rosetta is really just Transitive’s QuickTransit then it can go both ways – theoretically. The problem for today’s Mac Mini will be perhaps future code written for newest, baddest x86 procs …
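If developers do ship for both architectures, the usual approach is a universal (fat) binary with one code slice per platform, so the same source can branch at compile time where it matters. A minimal sketch using the compilers’ predefined architecture macros; the function name and the “other” fallback label are invented for illustration:

```cpp
#include <cassert>
#include <string>

// Report which architecture this code slice was compiled for.
// Each slice of a universal binary sees only its own macro defined.
std::string build_arch() {
#if defined(__ppc__) || defined(__ppc64__)
    return "ppc";
#elif defined(__i386__) || defined(__x86_64__)
    return "x86";
#else
    return "other"; // fallback for any other host, for this sketch
#endif
}
```

Endianness-sensitive code (file I/O, network formats) would branch the same way, which is exactly why developers are urged to ship both slices rather than x86 only.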
It will be interesting to see what info we get on the motherboard. Perhaps someone will put in a dual-core Intel 820 (2.8Ghz) or faster and see what happens….
I’m looking forward to the Mac on Intel platform. As far as performance goes for those apps that need high-performance, I’m sure that either fat binaries or dual builds will be available. I’m sure that Rosetta will perform much better on Intel’s 2006 crop of dual-core processors. I don’t think its fair to benchmark its performance now when the test machines are in no way indicative of what their release system will be in mid-2006. It’s a year away.
No actually it is called Darwin and is based on MACH, NOT BSD, it just uses a BSD personality with some userland ports.
That’s funny, you claim the OS is slow; over here the old 450MHz G4 boots OS X faster than the 1.6GHz x86 boots XP. You don’t even want to know how much faster the G4 is at shutdown or suspend/resume.
You would change CPU vendors too, if IBM screwed you over as much as it did Apple. 3Ghz G5 two years ago? 8-core Cell for Playstation 3? Hello? Can you make the connection, because I’m not going to spell it out for you!
You do realise that BSD is different from Mach, and that xnu is a modified form of both, with BSD running as a Mach service in kernel space, right? (I’m being rhetorical; evidently, you don’t.)
xnu is still quite unoptimized. I estimate it will take Apple 2 more years (i.e., Mac OS X 10.7) to heavily optimize it. The recently-introduced KPIs are all about allowing these optimizations to happen.
I should have written Mac OS X 10.6, of course.
>>>I don’t know why Apple thinks that the problem in Mac’s is the processor. Their G5 is great, nothing wrong with it. If they want better performance, they should change the OS. They chose BSD to build MacOS X, so now they have a stable but slow OS. If they switch to Linux and build their OS on it, they will have better performance.
What are you talking about? First, they didn’t switch because of performance. The problem is that the roadmap for the PPC is not clear. You cannot gamble with the future when AMD and Intel have very clear roadmaps. Second, who said that BSD is slow? Their software is compiled for their hardware. This article refers to the use of an emulator to run their software on x86, which has always been a problem; that is why they are showing the performance. It has nothing to do with their reason to switch architectures.
“They chose BSD to build MacOS X, so now they have a stable but slow OS. If they switch to Linux and build their OS on it, they will have better performance.”
No. Please attempt to know what you are talking about before you post such FUD.
I don’t think you are going to see much difference in actual usage. Intel is hard at work on dual-core CPUs, so by the time Apple ships, it may be possible that Apple’s dual-CPU lineup will have quads.
Everything is speculation at this point. I am sure Jobs had a nice talk with Intel before announcing the switch.
Well, the last thing Apple wants to do at this point is show the Intel platform as superior. They still have a lot of PPC boxes to sell before the transition and poor benchmark comparisons are great for them at this point.
When their Intel hardware is in production, you’ll see a whole new set of benchmarks that show how smart they were for moving to Intel. It’s all just one big marketing game. What else is new.
If PowerPC’s future isn’t clear, but loads of companies are jumping on IBM’s Cell, why didn’t Apple go with a Cell processor instead of an Intel?
I realize I’m asking for speculation, but the jump to a chip that consumes immense amounts of power befuddles me.
The main reason the old 680x0 emulator improved over time was because the OS did (more of it got ported to PPC). The entire OS is already native on Intel Macs (Rosetta doesn’t support mixing code types in a process, according to Apple’s docs). Upgrades to the 680x0 emulator itself (there was one big one, IIRC) had a lot to do with adding JIT, which Rosetta is already based on.
The fact that it’s as fast as it is is amazing, and has caused me to back down slightly from my original prediction that software availability would quickly kill the new machines. I still don’t expect to see any significant native software for a long, long time after launch, but I think Apple has a fighting chance. Also, if they don’t speed bump the G5’s again, then Intel should have a hefty performance margin by the time anything ships.
This leaves a huge opening for lovers of Java and other non-native-compiled languages. A well written java program (since the JVM is native) is going to toast most Rosetta / PPC software, and judging by past experience there will probably be a year or two window to gain mindshare before the big companies catch up.
I know exactly what I’m talking about. Current G5 code is 50-75% slower running on similar performance P4 processors than on native G5. A single 3.6 Ghz P4 is a fast machine by any objective measure.
Suppose I have a large and expensive collection of current OSX software (FCP, Photoshop etc). I buy a new intel Mac in 2006/7 and install my expensive G5 optimised software. I find out that it runs slower than on my 3 years old G5. I am very pissed off. I now have the choice of buying new software or taking a performance hit.
The only reason Apple is switching to Intel is because they were dumped by IBM.
According to St Steve his switch from IBM is the equivalent of ‘Ugly girls have much better personalities than cheerleaders anyway’ rather than admitting he got rebuffed.
Once again people are not seeing what is right in front of them, boohoo. The “benchmarks” are indeed, as previous posters stated, taken from a program that is being run EMULATED and with no AltiVec/SIMD. It is not even a native benchmark; wake up! It’s bloody amazing, to say the least, that at this point the performance of the EMULATED binaries is so good.
Now on to the real question that no one seems to be asking: will Intel finally build a CPU with big-iron FP math? The G4/G5 are the “poor man’s big iron” of the computing world. Professionals use them for their kick-arse FP, for mathematical and simulation purposes. Speaking from personal experience, it also rocks for video, but more so for audio production, letting me put an insane amount of superb-quality realtime effects on a crazy number of streamed-in audio tracks, compared to x86 boxes, which lagged with far fewer in use. There is no question that professionals are going to DEMAND these capabilities.
Yeah, I know you x86 kiddies are going to scream bloody murder over that last part. It’s just reality, not flamebait, so wake up and smell the java, people.
“Suppose I have a large and expensive collection of current OSX software…”
That is why the G5 is still being supported. Now you’re just being daft.
And I am sure that apple will have a deal where you can upgrade to the new version for cheap when you buy an Intel Mac.
Why can’t Apple offer computers with AMD chips inside as well as Intel chips?
As much as I disagree with this stupid decision to go Intel, I’d rather have AMD than Intel inside.
Maybe we need a petition or something to get the ‘AMD inside’ ball rolling..
– Mark
The fact of the matter is that most software will simply recompile for Mac OS X for Intel, just as I can easily recompile Firefox for PPC Linux or x86 Linux. This will probably be the case for 99% of applications like Photoshop, MS Office, etc. Rosetta is just not important to the majority of OS X applications because they will be recompiled well in advance of anyone buying one of these computers. Rosetta will be important for the orphaned programs that no longer have developers to recompile them (assuming they are closed source).
Also, people seem to be taking these benchmarks as indicative of native performance of the Intel boxes. This is when things are being translated and you have to assume a huge overhead for that. I really wish someone did some sort of native benchmarking on these systems because that’s what’s going to be important to me.
Frankly, IBM/Freescale don’t make most of their business off of desktop processors, and they don’t put any of their muscle into it. No matter what you throw at Intel, they are a desktop processor company that has been consistent in its product rollouts, rather than the fits and starts of IBM/Freescale. I mean, when Apple came out with the G5 it was great, but then IBM put it on the back burner for more profitable projects, and Apple got back into the position of begging for scraps. Intel will keep moving forward because desktop chips are a huge market for them. Apple doesn’t get that security with IBM/Freescale.
And I am sure that apple will have a deal where you can upgrade to the new version for cheap when you buy an Intel Mac.
But will Propellerheads? Adobe? Quark? Mathematica? This is where the money goes in Mac software: DTP software, music sequencing and creation, scientific software…
All of these companies have no compelling reason to upgrade you for free to the Macintel version…
It is barely two days since Steve announced the change to Intel, and everybody is already jumping to conclusions. Let’s get a few things straight here:
** The existing G4 and G5 Macs are not going to explode the day the first Intel Mac hit the shelves. Your PowerPC Mac will continue to run as before. The change will only be an issue to people who buy Macs then.
** The systems on which these benchmarks ran are the first generation of reference-platform machines. They are not here to break speed records. Apple even had an anti-benchmark clause in the NDA because of all the really daft people who will assume the future Intel Macs will be just like these boxes.
** Steve Jobs is no idiot. Egomaniac, maybe. But no idiot. He did not ditch IBM in a fit. We have known for years that IBM was unable to provide fast G5 chips. Now Steve has pulled the plug. We will see if Big Blue can deliver the Cell chips they promised for the Xbox 2. Wouldn’t it be a laugh if MS had to downgrade the Xbox 2 because IBM promised them too much?
** Intel does not want to sell cheapo P4’s for cheapo boxes for the rest of the century. They probably have some super-powered new chip in the works and need an OS to run on it. Old Windows XP is no contender here; by 2006, it will be over five years old. Longhorn will not arrive before the end of 2006 (make that early 2007). Apple has Tiger, which will be a very mature and powerful system by 2006. It is the perfect platform to launch an entirely new line of Intel chips.
Apple lost market share when it switched from the Apple II/III to the Macintosh.
Apple lost market share when it switched from 680×0 to PowerPC.
Apple lost market share when it switched from OS9 to OSX.
Complete the picture.
MS has maintained backwards compatibility and a constant chip architecture since DOS.
Apple thinks it is a good idea every few years to make all your hardware and software worthless.
It will take at least 3-5 years for a complete switch to OSX for Intel to be completed.
I think a lot of companies will simply say “f*ck you Steve” when they are asked to port their software for the umpteenth time. They will simply walk away from further development.
Damn the benchmarks; I want to see what exactly separates this developer machine from a standard PC.
What bios, graphics adapter, etc … with pictures 🙂
“Frankly, IBM/Freescale don’t make most of their business off of desktop processors… Apple doesn’t get that security with IBM/Freescale.”
Exactly. It makes little sense for IBM/Freescale to be putting resources into building processors they can’t sell in sufficient quantities to be profitable. Only one major computer manufacturer would be buying them. And as great as an architecture as PPC is, it makes little sense for Apple to commit to the architecture when they aren’t able to get chips built to the specifications they require.
Intel, OTOH, as the supplier to virtually every other PC company on the planet, has the economy of scale necessary to justify developing a wide variety of chips suitable for desktop computers.
It breaks my heart to admit it, but this was really the best decision by all parties involved. I really think PPC is inherently a better architecture than x86, with a great future. But that future, unfortunately, isn’t in desktop computers. It can’t profitably compete there, unless all the world’s computer companies decided to ditch x86 and move over to PPC. And how likely is that?
OTOH, if the game consoles start “growing up” and taking over functions which were previously relegated to personal computers….
But that’s a whole different discussion. 😉
Here’s a photo of the new frankenmac:
http://img180.echo.cx/img180/7850/g5withintelchip18fs.jpg
Even if Apple has switched to Intel, I must say these benchmarks aren’t bad at all. And this is just the beginning; imagine later. PPC has been so strong, and I mean it is the greatest architecture. It’s so hard to let go, and now that this P4 is in a Mac box, all I hope is that they do well.

I will never buy a box with Intel inside. I’ve been a Mac user and fanboy for so long, and I prefer AMD a lot more than Intel. Yes, Intel has a great future in 32-bit x86, but AMD is better in the 64-bit x86 industry, and who wants to pay more for a 64-bit-only Xeon, or an Itanium box that will cost me $5000? I mean, yeah, this is a P4, not bad I must say. But what’s next? The dual-core P4 just sucks, and AMD’s chip is far from being a bad player. True, they’re weak in mobile products, but that can be fixed.

Apple, you were the best of the best; now you are just a good company. This move is great, but it has some downsides, probably more bad than good. I am not a fanatic: if Longhorn can give me great performance, I will wait. Leopard is not going to be much different from Tiger; none of the OSX releases differ much, just features that most of us don’t even use or need. Microsoft should, now more than ever, get more involved with AMD, and Dell should switch to AMD too, so they can compete head to head, with Longhorn coming and the new Mac Intel products.

This is not about what I like or what you like. Microsoft will always be around, and so will Apple. Microsoft is a monopoly not because they are a good company; I must admit it’s also because Apple has been playing too much with its customers. I am tired of dealing with a company that is not stable. RIP PPC… we will see you soon! (in the next 10 years)
“They chose BSD to build MacOS X, so now they have a stable but slow OS. If they switch to Linux and build their OS on it, they will have better performance.”
I agree totally with this comment. Read this article from AnandTech: http://www.anandtech.com/mac/showdoc.aspx?i=2436. The server tasks are 10 times slower than on PCs. The Darwin kernel sometimes really stinks.
I say we now start a version of Wine that will run the newly 86’d OS-X apps on Linux. Use Linux running KDE with an Aqua-like skin and run Mac Apps – coooool. Probably wouldn’t be hard at all.
“It will take at least 3-5 years for a complete switch to OSX for Intel to be completed.”
YOU complete the picture – no wait that’s too difficult for you I suppose, so I will.
OSX has been cross-developed for x86 during its entire lifetime. It is already a production-ready multiplatform OS.
Like the previous poster said, most programs are a simple rebuild away from running.
You obviously are not a developer, and more specifically it appears that you have never done any programming on Apple’s OSX. Applications on it are written against frameworks that are designed to be multiplatform and that keep the inner workings of the underlying system transparent. Regardless of whether the company or designer of said program ties into kernel hooks, uses their own libraries and/or has specific PPC optimizations, these are companies and corporations who already have x86 versions for Windows and obviously know (and have already dealt with in the past) the issues concerning endianness. Examples are Adobe, Macromedia, Microsoft, Native Instruments, Steinberg; the list goes on and on. These are not free projects designed by amateurs; they are professionally made products. That does not even include Apple’s own offerings of software, such as Final Cut, Logic, Motion, Shake, not to forget the iLife or iWork bundles.
In regards to your diatribe concerning MS and compatibility, you seem to CONVENIENTLY FORGET all of those applications that broke in the switch from 3.1 to 9X, and again from 9x to NT/XP.
Jack,
AFAIK Cell is power hungry, and hot. This whole Intel thing is about notebooks. The majority of Apple’s non-iPod hardware sales are notebooks. In fact, the entire industry is experiencing a notebook sales surge ( http://www.tomshardware.com/hardnews/20050527_155225.html ). People want portable power that’s light weight without too many sacrifices. AFAIK the Cell is not suitable for this application at this point in time.
Frankly, I expected to hear an announcement of Powermacs based on Cell chips and a new chip for the portable systems.
Apple seem to be after the dual-core Dothan announced for launch next year. Unlike the veritable hibachi grill that is the Pentium 4, the Dothan core runs very cool, just like the Athlon 64 CPUs. Allegedly Intel are also prepping another super-cool (thermally) technology to make chips even cooler, but that’s very much up-in-the-air. One thing is for certain: Apple certainly wasn’t looking at the P4 for their performance specs.
Since we are talking about performance, I thought I might bring up GCC.
Since all of Apple’s (not sure about 3rd party) software is compiled with GCC, does anyone know if GCC can produce more optimized code for x86 as opposed to the PowerPC architecture? I know it would be tough to compare directly, but I was just thinking that GCC may be something helping Apple… if it does a better job optimizing x86 code.
ALSO, Intel’s ICC is supposedly really really really fast, so if Apple and third parties can use ICC, that may be even better.
Just a thought…bored at work.
-Eric
The current widget implementation is horrendous, and I’m surprised it took Apple this long (10.4.2) to fix it. Tell me, how does a newbie find out how to install a widget if they don’t run “safe” files from Safari or use another browser? Here’s the answer, but I’m more interested in how the newbie finds out…it’s not anywhere in help and widget developers don’t provide instructions:
1) download widget
2) decompress it
3) Browse to your local Library folder and create a folder titled “Widgets”, ’cause it doesn’t exist yet.
4) move downloaded widget into the new folder
5) invoke Dashboard
To remove a widget, remove it from your local Library/Widgets folder
The fact that this isn’t automated is a serious joke. Most newbies simply double-click the widget on their Desktop, with adverse consequences.
Yup, Intel’s compilers are much faster. Who knows if they are going to use them, though. I would use the Intel compiler for all software under Apple.
Funny stuff.
Use an opensource program that you admit does not optimize well on the platform.
Use the workstation product as a server, and then complain that it is only good as a workstation and not as a server.
There is a reason why the server version is what it is. There are system tunables used for it that are enabled by default, DUH.
The ‘normal’ version of OSX is good as a workstation but not as a server for the same reason that Linux is by default good as a server and not as a workstation. It is tuned that way by default, and designed specifically for those purposes.
Even more funny is the inclusion of Linux 2.4, but no OSX 10.3 which uses the funnel lock.
Funnier still, there’s no mention of whether they compiled the benchmarks themselves for Tiger, and whether they used the newer gcc 3.4, which drastically improves performance (they mention 3.3).
Yet another dork article, and frankly Linux is a pathetic excuse for an OS. Be a real man and use BSD. Yes I’m serious.
“Suppose I have a large and expensive collection of current OSX software (FCP, Photoshop etc). I buy a new intel Mac in 2006/7 and install my expensive G5 optimised software. I find out that it runs slower than on my 3 years old G5. I am very pissed off. I now have the choice of buying new software or taking a performance hit.”
This is a non-argument.
That’s exactly why they encourage universal binaries from software developers. Use your old G5. Why do you “need” the Mac Intel machine?
The Apple Computer company is now dead.
They still have cash and there is a chance they’ll be able to transform into a successful consumer electronics business. (They just need to fire him again.)
Every time Apple went through a switch (68K->PPC, Classic->OSX) the company lost about half of its install base. This time they’ll lose enough people to make hardware a loss leader again.
It’s too bad, because it had started turning the corner. He did a pretty good job after Gil saved his neck, but just like last time, his ego ruins everything.
will Intel finally build a CPU with big-iron FP math huh?
What makes you think that you don’t get “big iron” FP math on Intel? You do realize that x86 has had double precision floating point vector functions since SSE2, right? You do realize that the x86-64 machines have been performing quite well in mathematics benchmarks when compared to the G5’s, right? Don’t confuse the POWER5 processors, the brains of Blue Gene et cetera, with the G5 processors. The G5 is as small potatoes as the Xeons and other non-Opteron x86 chips are.
I never code directly to the processor architecture, as do most developers, so this transition is more a matter of Apple making sure that cross compilation doesn’t introduce inadvertent hiccups for developers like me. As the migration guide states, there are some unavoidable issues to be aware of, but most software just needs to be compiled twice instead of once. Scientific software will need to be validated twice instead of once.
The battle for the desktop/laptop processors has been won, and it goes to Intel/AMD. It’s sort of sad, just as the passing of Motorola’s desktop presence was, but that’s life. Let’s move on and keep cranking out excellent work on our favorite platform–OS X.
Anonymous wrote:
“Frankly, IBM/Freescale don’t make most of their business off of desktop processors. They don’t put any of their muscle into it. No matter what you throw at Intel, they are a desktop processor company that has been consistent in their product rollouts, rather than the fits and starts of IBM/Freescale. I mean, when Apple came out with the G5, it was great, but then IBM put it on the back burner for more profitable projects and Apple got back in the position where they were begging for scraps. Intel will keep moving forward because this is a huge market for them (desktop chips). Apple doesn’t get that security with IBM/Freescale.”
This is the second time a chip supplier has done a similar thing to Apple. In 1999, Motorola delivered PowerPC chips with clock speeds around 450 MHz. At which point Motorola became more interested in making chips for printers and cell phones. Which is fine for Motorola, not so good for Apple. It’s not easy to sell 450 MHz Macs while PCs are running at 1.2 GHz.
…is that these benchmark numbers are about the same as, or even better than, the old mac i’m still using (dual G4 450MHz), which i find quite usable.
that’s pretty cool.
“They chose BSD to build MacOS X, so now they have a stable but slow OS. If they switch to Linux and build their OS on it, they will have better performance.”
I disagree. The FreeBSD portion of Mac OSX is only a part of the operating system, there is nothing to indicate that FreeBSD is responsible for the seemingly poor performance of Darwin.
I do agree that GNU/Linux is much faster than FreeBSD though, in the majority of operations.
That’s not so bad. The total score is about as good as an iBook’s. This is a really fast emulator. No it’s not, it is some kind of dynamic binary translator.
Nonetheless this seems fast enough for me. It will be mostly for the old apps anyway.
Agreed. With newer hardware available to the public at the time of the Intel/Mac release, plus 1 year to tweak Rosetta, the speed loss should be acceptable for most apps.
I expect the standard Mac will ship with a single 64-bit, dual-core, hyperthreaded CPU and an OS compiled to take advantage of that. Many whitebox systems will be dual core at the time too — though running at a modest per-core clock rate — so it’s not outlandish for Apple to do the same just to keep up.
“Yet another dork article, and frankly Linux is a pathetic excuse for an OS. Be a real man and use BSD. Yes I’m serious.”
Better, worse, … it’s all bragging over beer.
At the end of the day, *BSD (any of them) or Linux or OSX or Solaris or … — do not matter. Pick one.
In each case, you get unix as the base OS. True, with OSX and (for the short term) Solaris you can’t replace the stock kernel with a hand-tuned one, so you lose out on that. Otherwise, each of them gives you a great deal of control.
Come to think of it, just about every OS out there is or will be based on unix — even Palm OS is switching to a unix variant based on a Linux kernel. Only Microsoft’s Windows is both popular and not based on unix — yet it has a POSIX layer, and many of the default settings are quickly moving to a unix model even if the commands and tools aren’t the ones used on unix systems.
The Cell isn’t a desktop processor. It’s a two-issue, in-order design running at 3.2GHz with moderately long pipelines. That suggests performance on the order of an UltraSPARC III, which is quite a slow chip unless ganged together in a large SMP machine. If the Cell’s PPE had been a POWER5 derivative running at 4GHz like expected, that’d be a different matter. Indeed, if it had even basic OOO execution and more than two issue ports, it’d be a different matter. But it doesn’t, and there is no way a pair of Cells could take on a pair of ~2GHz G5s in a lot of real-world tasks.
To address Anonymous’ comment about the G5 being a good chip: yes, the G5 is a good chip. The question Apple faced was not whether the G5 is good enough now, but whether it’d be good enough several years down the road. Apple was faced with the prospect of an eventual 3GHz G5 competing against 3GHz+ dual-core Opterons because IBM wasn’t improving the PPC 970 fast enough.
Now on to the real question that no one seems to be asking: will Intel finally build a CPU with big-iron FP math, huh? The G4/G5 are the “poor man’s big iron” of the computing world.
1) Itanium? The fastest “big-iron FP math” CPU in existence?
2) The G4 was never big-iron. AltiVec couldn’t do double-precision FP, and the G4’s single FP pipeline and complete lack of memory bandwidth made it spectacularly slow. Even when you could use AltiVec, unless you could fit entirely in cache, it was still slow.
3) The G5 is used a lot in scientific computing, but it’s not especially faster than the Opteron, which is also used a lot in scientific computing. The Opteron has a slight advantage over the G5 in that it has three double-precision FP pipes vs the G5’s two, but the G5’s are more symmetric, which balances things out (almost; the Opteron still gets a higher SPECfp at the same clock speed). Clock for clock, the two are comparable, with the Opteron having the advantage of being available at higher clock speeds.
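Since SSE2 and its packed double-precision math keep coming up in this thread, here is a minimal sketch of what "double precision floating point vector functions" look like on x86 (the function name is made up for illustration; it just sums two arrays two doubles at a time):

```c
#include <emmintrin.h>  /* SSE2 intrinsics: packed double-precision ops */

/* Add two arrays of doubles, two lanes per instruction.
   For brevity, n is assumed to be even. */
static void add_pd(const double *a, const double *b, double *out, int n) {
    for (int i = 0; i < n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);            /* load 2 doubles */
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(out + i, _mm_add_pd(va, vb));  /* 2 adds at once */
    }
}
```

This is exactly the kind of work AltiVec could not do, since it has no double-precision support; on the G4/G5 the scalar FPU pipes had to carry that load alone.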
“By Tyr
Damn the benchmarks, I want to see what exactly separates this developer machine from a standard pc.
What bios, graphics adapter, etc … with pictures :-)”
INDEED! Seconded!
“There is a reason why the server version is what it is. There are system tunables used for it that are enabled by default, DUH.”
That’d be a great point, if it weren’t for the fact that the machines (though they were PowerMacs and not XServes) were running “OS X Server 10.3 (Panther) and OS X Server 10.4.1 (Tiger)”. The emphasis isn’t even mine, it was bolded in the original article!
“Even more funny is the inclusion of Linux 2.4, but no OSX 10.3 which uses the funnel lock.”
Yes, because showing an even poorer score for 10.3 would have helped Apple’s case.
“Funnier still is no mention of if they compiled themselves for Tiger, and if they used the newer gcc 3.4 which drastically improves performance (they mention 3.3).”
Tiger is compiled with GCC 4.0. But that’s beside the point, because SLES9 is compiled with GCC 3.3.3. And GCC 3.4 does not “drastically improve performance”; what improvements it does have are mainly for C++ code, not C code. And even if it did, you still wouldn’t notice, because these are system benchmarks, not CPU benchmarks!
Finally someone making sense here!!
I cannot believe the amount of crap that people say. Steve Jobs would not be going through such a massive change if Intel’s roadmap weren’t incredibly compelling. I bet they spoke to AMD too. And frankly, AMD have a wonderful line-up TODAY, but are you willing to bet against Intel’s enormous resources and ability to spank AMD within the next 2 years? I wouldn’t.
Apple is now partnering with a chip manufacturer whose main business is the desktop and laptop market, unlike IBM. Apple is also partnering with a manufacturer with ENORMOUS fab capacity. I guess Apple was fed up with Motorola’s (now Freescale’s) and IBM’s excuses for shipping their CPUs more than 6 months late. Do you have any idea how much this has cost Apple in lost business alone?
The other important aspect of this deal is that Intel have many times complained about how late its customers adopted new Intel technology; mainly because of the huge Windows userbase and the extra work on Microsoft to adopt them. Apple can and WILL showcase their bleeding edge technology faster than anybody. And this is good for everybody.
Regarding speed, nobody can extrapolate how fast Mac on Intel will be from these developer machines. What I have seen is the Keynote; and frankly I was surprised at the “snappiness” of the system on a single P4.
Finally, even if Apple won’t support it, being able to somehow run Windows and MacOS X on the same machine at native speed could prove irresistible to many people. The first universal machine: MacOS X, UNIX, Linux and Windows!
Potentially this is a great move for Apple and down the line for its users.
agnostic has a point. I saw the keynote and boy, that machine was snappier than any other Mac I have seen. But regarding the OS, the performance report @ Anandtech gave me the impression of the Mac being abysmal at best compared to Linux. Stevie has a lot of work to do to get that OS running as fast as Linux.
What also struck me at the keynote was… it seemed rather like a cult meeting… Steve would say something and people would just laugh… even though it was not remotely funny… kind of spooky really! 🙂
I am personally waiting for Apple to fix the performance on the OS, have top of the line multicore AMD boxes running that OS before i make the switch…but yes a laptop with the Yonah and its future generation platforms with OS X running on it sounds awesome.
I’m wondering what amount of code must be rewritten if certain apps chose to write their structures/data directly to files, instead of serializing them.
For example, a typical game (really console, not PC), to speed up loading times, may precook its data (textures, shaders, models, etc.) to be loaded directly off the disk and used as-is (e.g. no “float x = file_read_float(…)”, but directly loading the full structure).
Fat/Universal binaries would provide two different executables (one per platform), but data would stay the same (or that would be preferred, as otherwise double the amount must be produced — as on CD-ROMs, DVDs, etc.).
Either all future OS X developers should choose slow serialization (slower reading times, I guess) over in-place streamed loading, or otherwise data must be “doubled” in some cases…
Just various ramblings… Still, probably nothing to worry about, as current PC game developers tend to use the “slow serialization” mechanism instead of pre-cooked, binary, platform-specific data.
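A minimal sketch of the header-hint idea discussed here (the magic constant and function names are hypothetical): the pre-cooking tool writes a known magic word first; a reader on the opposite-endian platform sees its bytes reversed and does a one-time swap of the blob.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical magic word written by the tool that pre-cooks the data. */
#define COOKED_MAGIC 0x01020304u

static uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Fix up a cooked blob of 32-bit words in place.
   Returns 0 if already native, 1 if a swap was performed, -1 on bad magic. */
static int fixup_endianness(uint32_t *words, size_t count) {
    if (words[0] == COOKED_MAGIC)
        return 0;                        /* native order: use directly */
    if (swap32(words[0]) != COOKED_MAGIC)
        return -1;                       /* not a cooked file */
    for (size_t i = 0; i < count; i++)   /* one-time byte swap on load */
        words[i] = swap32(words[i]);
    return 1;
}
```

Real data would of course mix field widths, so a production loader swaps per-field from a schema rather than word-by-word; this only illustrates the "detect via header, swap once, then use in place" mechanism.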
I think you are referring to the Anandtech article. I am not convinced by the way they did their server benchmark. Linux and OSX have different ways of handling disk writes: OSX will not move to the next transaction before writing to disk; Linux (by default) writes to the HDD cache and moves on. OSX’s way is the safest, Linux’s way is the fastest.
I doubt very much Oracle would port its latest database to OSX if it would impose a 10X performance penalty on their customers. Idem for Sybase. If there were such a penalty you would have heard about it by now. It might just be a problem with MySQL and Apache. I don’t know, but these results are suspect to me, and I doubt that Apple personnel in Belgium could help either! No offense…
The other problem is the use of gcc 3.3, which is probably the worst compiler you could use for a G5.
It was a very sloppy review, very unusual for Anandtech.
Regarding the keynote I think most developers were very nervous. It is a momentous shift after all.
I expect Intel to come up with some kickass CPU in 2 years time. But time will tell. One thing is for sure, love him or loathe him, Jobs is not an idiot, and I think we can give him the benefit of the doubt.
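The disk-write difference described above comes down to when a write is considered durable. A small sketch, under the assumption that the benchmarked databases call fsync() per transaction (the function name here is made up): on Linux, write() returns once the data is in the page cache and fsync() is what actually waits; on Mac OS X even fsync() only pushes data to the drive’s cache, and F_FULLFSYNC is the knob that forces it all the way to the platter.

```c
#include <unistd.h>
#include <fcntl.h>

/* Write a buffer and force it to stable storage before returning.
   Returns 0 on success, -1 on error. */
static int durable_write(int fd, const void *buf, size_t len) {
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);   /* may be a partial write */
        if (n < 0) return -1;
        p += n;
        len -= (size_t)n;
    }
#if defined(F_FULLFSYNC)
    return fcntl(fd, F_FULLFSYNC);       /* Mac OS X: flush drive cache too */
#else
    return fsync(fd);                    /* elsewhere: flush the page cache */
#endif
}
```

If one platform pays this full cost per transaction and the other effectively benchmarks its write cache, the raw transactions-per-second numbers are not comparing the same guarantee.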
Five Stages of Intel Macs:
http://joyoftech.com/joyoftech/joyimages/693.gif
“Yup, Intel’s compilers are much faster. Who knows if they are going to use them, though. I would use the Intel compiler for all software under Apple.”
You couldn’t, because Intel’s compiler doesn’t support Objective-C.
But you never know, perhaps Stevieboy can get his new best buddies to add an Objective-C frontend to icc. Shouldn’t be awfully difficult.
“Every time Apple went through a switch (68K->PPC, Classic->OSX) the company lost about half of its install base. This time they’ll lose enough people to make hardware a loss leader again.
It’s too bad, because it had started turning the corner. He did a pretty good job after Gil saved his neck, but just like last time, his ego ruins everything.”
I presume “He” is supposed to mean Steve Jobs, rather than someone a bit higher up?
Jobs had nothing to do with the PPC switch. (Although it did turn out to be counter-productive and unnecessary.)
He also wasn’t responsible for the Copland failure and the following search for an external solution.
And what choice did he have regarding the Intel switch? No new PPC processors were on the horizon, and Powerbooks were going to fall ridiculously far behind Pentium M machines. (And no, Cell is not a viable alternative.)
Like the Adobe guy said: “What took you so long?”
I can see why people are concerned about Rosetta, but far more important will be the rate at which apps are ported to x86.
Rosetta is really just a phase and hopefully one that major app vendors will avoid entirely.
Incidentally, I look forward to getting myself a new Apple PC in 12 months, along with a copy of Logic Audio.
@Surya: Darwin’s performance issues aren’t something you’ll notice on a fast desktop machine. Even coarse-grained SMP and slow system calls are fast enough to keep up with the average user. Where Apple needs to spend some effort (and has been spending effort, from 10.0 to 10.4) is in the speed of the UI components. The oft-mentioned scrolling/resizing issues are a part of this, but so is application startup time (which isn’t great), and the lack of responsiveness under load.
@Platform Agnostic: Anand’s results aren’t anything new. It’s been known forever that Darwin is painfully slow at regular UNIX tasks. In lmbench in particular, a set of benchmarks that measures the latency of basic UNIX system calls, Darwin has always been slow. There is an article on IBM’s DeveloperWorks that shows the performance of Darwin 7.2 (OS X 10.3, I believe) vs Linux on a G5: http://www-106.ibm.com/developerworks/library/l-ydlg5.html
It points out the same flaws Anand noted, slow fork/exec() and context switch times that grow very fast with increasing numbers of processes.
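For reference, the fork() latency that lmbench and the DeveloperWorks article measure can be approximated with a few lines of portable C (a rough sketch, nothing as careful as lmbench itself; the function name is made up):

```c
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

/* Rough lmbench-style measurement: average microseconds
   per fork + child-exit + wait round trip. */
static double fork_latency_us(int iterations) {
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < iterations; i++) {
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);              /* child exits immediately */
        waitpid(pid, NULL, 0);     /* parent waits for it */
    }
    gettimeofday(&t1, NULL);
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    return us / iterations;
}
```

Running something like this on Darwin vs Linux on the same hardware is what exposes the slow fork/exec numbers both articles point out.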
“What also struck me at the keynote was… it seemed rather like a cult meeting… Steve would say something and people would just laugh… even though it was not remotely funny… kind of spooky really! 🙂”
Nah – it’s just a group of people with some shared interests and enthusiasm, of which you are not a member and therefore don’t ‘get it’.
Happens all the time everywhere, only with less publicity (because who would want to watch a webcast of the Annual Needlework Convention?)
The benefits of x86: there actually exists a fair bit of hand-tuned code for x86, not to mention the compilers for x86 are superior — they have more R&D behind them.
Adobe, MS and most of the other ISVs have equivalent offerings on Windows/x86 and Mac/PPC. So now they can port their hand-tuned assembly routines to Mac/x86; gee, that was hard.
Tiger also introduced a whole lot of smartness, basically they’ve created all the necessary API to abstract everything they possibly could. Even the most demanding of work can be done through calls and you can avoid machine tuning. This means, it’s portable.
None of this was by accident. This deal must have been at least a year in the making; things this big don’t happen quickly, so Apple and its close ISV friends are well prepared.
And you don’t think most ISVs were fully aware of the switch well beforehand? I’m sure Apple tried to smooth things over for their user base’s sake, such that patches to old copies of software were released or significant discounts will be handed out for the latest version of Mactel-native software.
The P4 FPU pipeline is weak, SSE2 is not. The G5 has very good FP performance. The Centrino line of systems have had pretty good FPU performance, consider their application. This is only going to improve.
Dude, how the **** could Apple lose its installed base during the transitions? The computers simply blew up? Or did they teleport themselves into the legendary, perennial void? Seriously, get real!
A bit of history:
What undermined the MARKET SHARE (a totally different measure) was that people were running 486s at 50MHz when the best Quadras were at 25MHz, and shortly thereafter came the speed madness.
Then came the 60x PPCs, which could keep up with the Pentiums only up to the release of the Pentium Pro.
Then came the G3s, and those regained the speed crown for maybe 8 months.
Then came the G4, and people thought PPC was back in the benchmarking game. Only to find that the Pentium 4 stopped yielding diminishing returns when compared to similarly (over-)clocked Pentium 3s.
Then came the G5, and it did beat the P4 for like, 3 months?
Then the fight was OVER. Intel came back to the design board with the Pentium 3 blueprints in front of them, and made the Pentium M, which finally burnt and buried the “MHz Myth” (already much debunked by AMD and IBM itself, but Intel’s own admission drove the final nail into that coffin). And I admit I’m guilty of cheerleading for the PPC all this time. I made myself forget that the last great programmer-oriented RISC was the MIPS, and that the last great compiler-oriented RISC was the Alpha. And that MIPS is now essentially relegated to the embedded marketplace, and the Alpha was ripped apart and its pieces eventually found their way into AMD’s offerings.
So the “war” was over. Then came the dualcore talk, along with AMD saying their design took dualcore into account from day one.
Then came Intel, who slapped something together to show a pair of cores on a single package, evident duct tape notwithstanding.
And off we go to 2006/2007. By then, Intel will have fixed that duct tape thing and will come up with something packed with patents to regain the speed crown (read: silicon photonics, with which the maximum FSB speed problems will be essentially gone).
I’m happy that I own a legacy iMac (the Performa is taking a sabbatical for the time being) just to be sure I’m not doing anything wrong regarding endianness (since I also have to deal with UltraSPARC and HP-PA, but can’t afford such hardware), and that I also own a pair of x86s just to be on the safe “high-volume commercial software” side.
All in all, I covered all my bases. Obviously so did the big shots.
I have spoken to a few programmers who do cross-platform work and have also read developer comments.
I think those that have followed Apple’s guidelines since the intro of OSX will have no problems with the transition. Apple has been pushing Cocoa development since OSX was released. Developers still using outdated tools, or even doing Classic apps, will feel the pain.
Almost all the big developers and the Mac-only developers are using Xcode and aren’t complaining.
I am sure Quark will have a compatible version by 2010.