The Rosetta emulation platform in 10.4.3 build 8F1111A has been upgraded to feature full G4 support, including AltiVec. This not only adds a new layer of compatibility to Rosetta, but also improves speed for AltiVec-enabled applications. Also, new ATI drivers available in 10.4.3 seem to offer much greater support for PC ATI graphics chipsets.
I’m a bit confused: is this the CURRENT build of 10.4.3 or an unreleased build?
To route the AltiVec emulation to some specific hardware that speeds things up…
“To route the AltiVec emulation to some specific hardware that speeds things up…”
That bit of “specific hardware” is already there, and it’s called SSE. Most AltiVec instructions map quite nicely onto it; only some, like shuffle, are a bit more difficult.
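Roughly what that mapping looks like at the intrinsics level – a minimal sketch (the vadd function name and file layout are just for illustration; each half only compiles on its own architecture):

    /* Minimal sketch: the same 4-float vector add with AltiVec and SSE
     * intrinsics; each half compiles only on its own architecture. */
    #ifdef __ALTIVEC__
    #include <altivec.h>
    vector float vadd(vector float a, vector float b)
    {
        return vec_add(a, b);        /* one AltiVec instruction */
    }
    #else
    #include <xmmintrin.h>
    __m128 vadd(__m128 a, __m128 b)
    {
        return _mm_add_ps(a, b);     /* maps straight onto one SSE instruction */
    }
    #endif
    /* The awkward case mentioned above: AltiVec's vec_perm takes a run-time
     * byte permute vector, while SSE's _mm_shuffle_ps wants a compile-time
     * immediate, so a general permute costs several SSE instructions. */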
This Mac OS X on x86 hardware gets more interesting by the day…
Apple is planning something big, I can smell it!
“Apple is planning something big, I can smell it!”
Nah, that’s just urine you’re smelling, after you wet your pants in Apple fanboyism.
I hope so because there are no problems with firmware any longer.
“I hope so because there are no problems with firmware any longer.”
The only thing that could hold it back would be driver support; but considering how easy it is to provide drivers due to the *STABLE NATURE OF THE API* <stares in the direction of the penguin> – unlike another operating system, not thinking of any operating system in particular – that shouldn’t be much of a problem.
Even if Apple were to go with EFI – and if they don’t, I will be immensely disappointed – EFI provides backwards compatibility for ‘legacy’ graphics cards.
You might have to edit the XML file for the driver the card should use, so that it looks for the PCI ID of your card. It’s like a one-line edit.
It makes sense that they would add AltiVec; after all, the OSS equivalent – PearPC – supports it.
I would really like to see Apple create a hardware independent (instead of, or in addition to, the platform hybrid) binary file for their applications.
This would allow them to really commoditize their hardware – and the underlying cpu architecture would become irrelevant.
“I would really like to see Apple create a hardware independent (instead of or in addition to the platform hybrid) binary file for their applications.”
What you’re asking for is for them to make software which doesn’t leverage the strengths of their own platform, but instead allows others to benefit from it… or at least doesn’t allow Apple to receive maximum benefit from their efforts.
“This would allow them to really commoditize their hardware”
Working toward commodity status is bad for a company. Too many people equate commodity status with the point at which a product achieves maximum exposure to the public. The fact that the PC has turned into a commodity is more of a stumbling block for the industry players than anything else.
Apple is wise to continue on the route they are going, achieving continued growth by offering a better product.
Their platform is tied to their software – that is where their strength is. The inclusion of the TPM module in their Intel Macs demonstrates that the Apple hardware “platform” is no stronger than any other Intel platform.
What I’m talking about here is creating a multi-hardware platform that will allow them to buy parts from a commodity market – they do not produce the commodity now, and would not under my model.
What I’ve suggested would actually improve their market strength, since they would force their chip providers to compete in a larger market. There would be competition not just between AMD and Intel, but between AMD, Intel, Motorola, IBM, Toshiba and anyone else that can come up with a fast processor that can be made to run their software.
To clarify, Apple’s computer systems would not be the commodity here – they would be using components purchased from a commoditized cpu market.
“What I’m talking about here is creating a multi-hardware platform that will allow them to buy parts from a commodity market – they do not produce the commodity now, and would not under my model.”
So, in essence, you would prefer that they use commodity *parts*? I can only assume that you’re requesting this so that you can build your own Mac. Am I correct? How would that benefit them?
You could either buy parts from other people (thus not benefiting Apple) or you can buy the computer from Apple.
I think he might be suggesting, hold it, a CHANGE to the business model.
Quick, gag him, before anyone hears what he has to say. Mod down his post. Change is one thing we do not want. Not now, not later, not ever.
“I think he might be suggesting, hold it, a CHANGE to the business model.”
Nothing wrong with change.
The problem is in the specific change he’s requesting. He’s saying that Apple should adopt a business model that takes away its primary benefit… its ability to offer a complete, integrated solution.
I know that Windows PC users desperately need a new champion, as Microsoft has repeatedly let them down, but adopting the same business model which has since caused every PC manufacturer to see dwindling returns is NOT a good business model. When a company’s primary means of differentiation is price, they end up pricing themselves right out of the market. I’m always surprised that so many PC users are too blind to see this trend amongst all the major PC OEMs.
Making Apple just another run-of-the-mill PC manufacturer with merely a different OS would cause the company to follow the same dying business model… one which may have been great in its earlier years, but is quickly fading.
Adopting this business model does nothing more than allow the PC defenders to save face after years of arguing that the proprietary platform approach is somehow inferior.
Apple is on the right track now. Let the plan unfold.
You’re actually missing the point… the suggestion is not to become another run-of-the-mill PC shop, but rather to make the underlying hardware unimportant. They can still offer a “complete, integrated solution” — but one that doesn’t require the end user to care what is under the hood. No matter what Apple box you have, the same software will run. Yet you’ll have no idea what chip is at the core. Kewl, indeed. No one would be able to touch them agility-wise.
Tell you what. Go to http://www.artofillusion.org and compare that product to 3DS MAX. Seriously. AoI will run on any hardware. Essentially what you are suggesting Apple do is re-invent Java, but at an OS level. What you seem to miss is that to do so they would still have to invent some sort of VM or interpreter that is platform specific to execute your platform-agnostic code. This would be, to be frank, a waste of time and energy. I can’t even begin to imagine how painful a full OS written in Java would be. There are so many technical issues with what you suggest that it boggles my mind that no one has bothered to really call you out on it yet.
“but one that doesn’t require the end user to care what is under the hood. No matter what Apple box you have, the same software will run.”
That’s already the case.
“You could either buy parts from other people (thus not benefiting Apple) or you can buy the computer from Apple.”
Or better still, buy a really stripped-down version of a PowerMac and upgrade the components oneself; for me, I don’t use the memory Apple has on its site because the cost is a rip-off; it’s cheaper for me to go out and purchase two high-quality, low-latency, dual-channel DDR modules than it is to purchase their generic, yet surprisingly expensive memory – which is, after all, generic Kingston memory.
“Or better still, buy a really stripped-down version of a PowerMac and upgrade the components oneself”
That’s nothing you can’t do already.
“I don’t use the memory Apple has on its site because the cost is a rip-off”
Apple’s memory used to be priced out of sync with the industry… now their RAM prices are right in line with any quality RAM manufacturer… Crucial, for example.
“Apple’s memory used to be priced out of sync with the industry… now their RAM prices are right in line with any quality RAM manufacturer… Crucial, for example.”
It is the same damn memory as the generic stuff, except with a little sticker that says, “this memory module was blessed with the holy spirit that Steve Jobs exudes from his body” – standard Kingston or Samsung.
“I would really like to see Apple create a hardware independent (instead of, or in addition to, the platform hybrid) binary file for their applications.”
Kinda like Java?
Wonder if they’re translating AltiVec directly into SSE3…
Are the AltiVec and SSE3 instruction sets in any way similar? I guess what I mean is: do we have a one-to-one ratio of operations that AltiVec can perform compared to SSE3?
No. SSE3 is just a simple add-on to SSE2 (it only adds about 13 instructions). AltiVec provides 32 128-bit vector registers and a large number of floating point and vector instructions. SSE/SSE2/SSE3 also provides 128-bit registers, but fewer of them (8, or 16 with the 64-bit extensions) and a smaller number of instructions. SSE2 combined with SSE3 and the 64-bit extensions could possibly duplicate most of the functionality of AltiVec, but there’s really nothing on the PC end that offers a 1-to-1 conversion…
I’d say there are some similar features, implemented differently; as for the suckage factor, some things in SSE suck and some things in AltiVec suck – but then again, one can use the vector maths framework in Mac OS X IIRC, which will take care of a lot of the hand optimisation that many coders cream themselves in delight over.
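For what it’s worth, that framework is presumably Accelerate/vecLib; here’s a minimal sketch of letting it do the dispatch for you (vDSP_vadd is a real call, the surrounding demo program is just illustrative):

    /* Minimal sketch: adding two float arrays via the Accelerate framework,
     * which routes to AltiVec on PPC, SSE on Intel, or scalar code otherwise.
     * Build with: gcc vadd_demo.c -framework Accelerate */
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void)
    {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float c[4];

        vDSP_vadd(a, 1, b, 1, c, 1, 4);   /* c[i] = a[i] + b[i], stride 1, 4 elements */

        printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
        return 0;
    }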
As for the previous inquiry about availability – it is an update that is available for download for x86 users on the $999 deal.
Not quite one-to-one, but similar. I wrote a bunch of SSE1 and AltiVec code a few years ago. The algorithms were almost exactly the same. The only differences were that in AltiVec I could do a multiply and accumulate in one operation, which I believe finally made it into SSE2 or 3, so it’s no longer an advantage. Additionally, I had more registers to work with, so the SSE code had like one extra load the AltiVec didn’t. In the end there was no measurable performance difference between the two.
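A small sketch of that multiply-accumulate difference (not the parent poster’s actual code; the madd function name is just for illustration):

    /* AltiVec fuses a*b+c into one intrinsic; plain SSE needs a separate
     * multiply and add. Each half compiles only on its own architecture. */
    #ifdef __ALTIVEC__
    #include <altivec.h>
    vector float madd(vector float a, vector float b, vector float c)
    {
        return vec_madd(a, b, c);               /* one multiply-add instruction */
    }
    #else
    #include <xmmintrin.h>
    __m128 madd(__m128 a, __m128 b, __m128 c)
    {
        return _mm_add_ps(_mm_mul_ps(a, b), c); /* two instructions on SSE */
    }
    #endif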
Those who have only read the marketing literature will tell you AltiVec is massively superior. The truth is they are nearly the same, and it should be pretty straightforward to automatically translate AltiVec to SSE. It may not be as efficient as a hand-translated version, but it should perform pretty well.
I thought it kind of strange that AltiVec wasn’t supported when the Rosetta announcement was made, or wasn’t going to be supported (as was implied originally a while back).
Glad to see it’s making it. This’ll speed up PowerPC versions of software like Photoshop. Still, I can’t imagine running Photoshop PPC binaries on an x86. *shudders*
We’ll see; it’s all speculation anyway. Maybe I don’t read the right places, but it seems the developers who have x86 Macs aren’t saying much about them. Do they have to sign NDAs to get one?
“Glad to see it’s making it. This’ll speed up PowerPC versions of software like Photoshop. Still, I can’t imagine running Photoshop PPC binaries on an x86. *shudders*”
Your response seems appropriate given the public’s current understanding of hardware emulation, but Rosetta is not like any emulation you’re already familiar with.
Though you do notice a difference when running apps in Rosetta, it’s certainly not anything that would cause anyone to shudder. It’s only about 20% slower than native speed.
Though I realise it will be faster than just about any other emulation platform out there, I’m still skeptical. Where’d you get your numbers?
The reason I’m doubting is that a lot of AppleFolk have claimed for a very long time that PPC is so much better than x86. If this is to be believed, then an emulation of a “greater” chip on a “lesser” chip should be *much* slower, yes?
Aside from this, how can we say that Rosetta is in any way xx% slower or faster than it is on a PowerPC? One can’t simply take any given PPC and compare it to any given x86 emulating a PPC. You can state that it’s only 20%, but that’s a metric with no bearing.
I could be running a 1.4GHz G4 Mac mini with 512MB of RAM, run Rosetta on a 64-bit Pentium 4 clocked at 3.4GHz or higher, and say it’s only “20%” slower.
You have to pick “comparable” chips to do this, and “comparable” chips don’t really exist. One x86 chip can do certain types of number crunching faster than a PowerPC and vice versa. Different chips do different things in different ways, making some faster than others. So when someone says they have “benchmarks” stating that something is “xx%” faster/slower, they’re likely using simple benchmarking apps, not real-world apps.
I think the best thing to do is to sit down with a dual 2.3GHz G5 for a week running normal, everyday apps (that people with PowerMacs run every day, that is): 3D rendering, Photoshop, Illustrator, video processing apps. Do this for a week. Then sit down with the same class of Intel-based system (we’d leave it up to Apple to decide which one is “comparable”) and try the same binaries under Rosetta and see how it “feels”.
We’ll see; it can’t be too bad. You’re right, it won’t be terrible, but right now we just can’t spit out percentage figures and generalizations. Everyday average desktop applications probably won’t show a speed difference at all, but stuff like Photoshop, 3D rendering, video processing etc. will suffer under Rosetta.
Having said that, I doubt many of those running these apps will upgrade anytime soon, and when they do they’ll be upgrading to the newest apps that are Intel-native anyway. So I guess it’s all a moot point now.
Well, since Intel makes bogus processors in comparison to AMD, how about using a similarly clocked dual 2.3GHz AMD processor?
Yonah has, surprise, surprise, turned out to be a total joke. Slow. Hot. Who here didn’t see that one coming? Running AltiVec code on a Yonah laptop will be hilarious to see.
Just think: if Jobs hadn’t botched the IBM relationship, we could all be looking forward to a PowerBook/iBook running IBM’s latest low-power 970 chip. Future desktop systems and servers wouldn’t be crippled with Intel chips. The exodus of developers who leave the platform every time there is a chip change wouldn’t happen.
Instead, Jobs’ incompetence has us reading stories about emulating Altivec code with SSE.
F-ing pathetic.
Remember, something to note here: Rosetta will not be running on current-generation hardware. Whilst dev kits and pirates are running on P4s, Apple will be using a dual-core chip.
A dual-core chip uses virtualisation to almost entirely remove the overhead in VMware 5. This could help reduce the emulation loss a bit more for Rosetta. Also, the new chips will be faster than current-generation chips.
I’m of the belief that when Apple has finished optimisation and the new hardware is out, Rosetta will emulate at 100% of the speed of *current* hardware; i.e. what we’re used to now. I of course might be wrong, but I think it’s heading that way.
With AltiVec translation, and an optimistically claimed 80% of native speed on current single-core technology, I’m hoping my guess is right.
Virtualisation won’t help – it only helps kernel level code, not the “application space” which Rosetta and PPC applications run under.
Am I the only one who thinks Apple won’t reinvent the wheel? They will just copy a strategy which works, the way it works: switch to commodity HW and let Mac OS X be pirated to gain market share. It’s that simple.
Now, they will probably avoid doing it like MS, which didn’t enter the HW market until a few years ago, since they’re already in the HW market. They will rely on successful HW products (like the iPod) to get profits that they will lose by not selling OS X copies which will be pirated but, in the end, we know how it works. Let your system be pirated and you will gain marketshare. First, use that marketshare to force HW makers to bundle (preload) your product gaining money from those licenses. Then, use that marketshare to push a few users into buying licenses and to drive sales of your other products and HW.
It’s that simple. Even if, by doing that, Apple raises its marketshare from 5% to 10% (of which maybe 4% will be pirated copies), they’re still accessing a market they were COMPLETELY out of.
Like all HW protections, Mac OS X’s HW protection will be cracked in a few weeks (do you remember Windows activation?).
If it’s not broken, don’t fix it! That strategy works; they just need to copy the leader (and perhaps, try to be original sometimes…). Apple has something Microsoft only decided to recently exploit: a strong brand. That’s why Microsoft replied by trying to improve the Windows brand’s sexiness (did you see any of their commercials? Did you EVER see ANY MS commercial since Windows 95??).
“Am I the only one who thinks Apple won’t reinvent the wheel? They will just copy a strategy which works, the way it works: switch to commodity HW and let Mac OS X be pirated to gain market share.”
Ahhh, but commodity hardware as a business does NOT work. Ask all the PC OEMs on the market how their business has to be subsidized by other products and services, and yet they still continue to lose money. Yes, the commodity PC may have been a booming business in the 80s and 90s, but these days the computer industry is a dying business unless you have something to differentiate yourself from the competition. A commodity product doesn’t allow for that.
Apple is not in the business of gaining market share at the expense of profit. Apple would have to sell at least 10 copies of OS X on average to make up for one lost computer sale by adopting your strategy… and that’s assuming those copies were even sold. You’re talking about piracy.
“They will rely on successful HW products (like the iPod) to get profits that they will lose by not selling OS X copies which will be pirated”
The Macintosh computer is a successful product too. They won’t need to depend on other successful products because they have no intentions of following your plan.
“we know how it works. Let your system be pirated and you will gain marketshare.”
Market share at the expense of profit?
“First, use that marketshare to force HW makers to bundle (preload) your product gaining money from those licenses.”
But it will take time to gain that dominance… a time during which they will not have garnered any money from products being *SOLD*, thus giving them less reason to make their products better than the competition, which in turn will result in fewer people pirating it as the profitable products continue active development – and then your plan goes in the toilet.
“It’s that simple. Even if, by doing that, Apple raises its marketshare from 5% to 10% (of which maybe 4% will be pirated copies), they’re still accessing a market they were COMPLETELY out of.”
Or Apple continues to increase its market share by selling profitable products. This way they lose nothing and gain the full benefit of their labor.
“Like all HW protections, Mac OS X’s HW protection will be cracked in a few weeks (do you remember Windows activation?).”
We haven’t yet seen hardware protection from an OS manufacturer… only software protection. I think that when you do see the two working in tandem, you’ll be upset that Apple supposedly took something away from you that you stole from Apple in the first place.
“If it’s not broken, don’t fix it! “
Agreed. Why are you trying to break it then?
“That strategy works”
You’re 100% right except for the “works” part.
“they just need to copy the leader (and perhaps, try to be original sometimes…).”
Apple is the leader… they’re just not dominant. They’re the leader because they’re original.
“Apple has something Microsoft only decided to recently exploit: a strong brand.”
Microsoft doesn’t have a strong brand.
What people should understand about Rosetta is that all API calls to the OS will be treated natively at full speed with no emulation.
For example, if a PPC application under Rosetta asks OS X to draw a window, it will do so at the native x86 speed. It applies to much more than just window drawing.
Memory management for example will be native, all Quartz drawing calls will be native, Quicktime will run natively for those apps, and the list goes on and on.
Rosetta is nothing like PearPC, which has to emulate the whole hardware and OS.
So there is no magic involved, and while the PPC emulator inside Rosetta is really more powerful than any other PPC emu, it’s not the only reason why most applications will run 20x faster (guesstimate) than they would on PearPC.
That being said, I’m a little skeptical about AltiVec emulation being implemented in Rosetta. There aren’t that many OS X apps that really require AltiVec (G4 or G5), and those that do are apps that won’t run well in emulation anyway. I don’t understand why Apple would have invested time in adding AltiVec support. Maybe it was a nice simple trick, so they decided to implement it to appease unfounded fears.
OS X already contains vector calculation APIs that will route calculations to AltiVec if present, to SSE on Intel, or to plain CPU-based code if there is neither AltiVec nor SSE.
Many PPC AltiVec-enabled apps don’t use these APIs and make direct calls to the AltiVec unit, and those would not have run in the previous version of Rosetta. What Apple has probably done is find a simple way to route AltiVec calls to these CPU-based vector calculation APIs, which use SSE on Intel, to save themselves some work.
… how easy it is to provide drivers due to the *STABLE NATURE OF THE API* <stares in the direction of the penguin>…
What penguin? I only see a guy in a red suit with a trident…
if this is to be believed, then an emulation of a “greater” chip on a “lesser” chip should be *much* slower, yes?
Not necessarily! I’m sure Rosetta does JIT (just-in-time) compilation, rather than instruction-by-instruction translation. This way you actually run **real** x86 code – it is just a matter of how well the PPC-to-x86 JIT compiler works. In theory, one could even get faster code in some pathological cases.
if this is to be believed, then an emulation of a “greater” chip on a “lesser” chip should be *much* slower, yes?
Not necessarily! I’m sure Rosetta does JIT (just-in-time) compilation.
Exactly. It’s not an emulator at all, at least not in the traditional sense of interpreting PPC instructions one by one. What it does is binary recompilation.
Rosetta turns the PPC code back into an intermediate representation, recreating the data flow and control flow graphs. It won’t be able to recover all the information that the original compiler of the program had, but it won’t be too far away either. It helps that the PPC is a fairly clean architecture with few side effects.
It then turns that intermediate representation into x86 code, much like any other compiler would, including instruction selection, register allocation, and optimisations.
Because of all that, PPC instructions don’t need to map exactly onto x86 ones and the lower number of registers isn’t much of a problem.
Of course you get the recompilation overhead at startup, although I think Rosetta uses a caching scheme, thus trading a bit of disk space for improved application startup times.
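For anyone wondering what “translate once, cache, re-run” looks like in practice, here’s a toy sketch of the idea in C (this is not Rosetta’s actual internals, which aren’t public; the guest “instruction set” and every name below are made up purely for illustration):

    /* Toy dynamic-binary-translation loop with a block cache.  A real
     * translator would decode guest (PPC) instructions, build an IR and
     * emit host (x86) machine code; here the "guest block" is a made-up
     * two-instruction toy and the "translated code" is a plain C function. */
    #include <stdio.h>

    typedef enum { OP_ADD, OP_MUL, OP_END } op_t;
    typedef struct { op_t op; int operand; } guest_insn;

    typedef int (*host_block)(int acc);          /* "translated" host code */

    /* stand-in for the code a real translator would emit for this block */
    static int translated_block_0(int acc) { return (acc + 3) * 2; }

    #define CACHE_SIZE 16
    static host_block cache[CACHE_SIZE];         /* guest PC -> host code */
    static int translations;                     /* how often we "compiled" */

    static host_block translate(const guest_insn *code)
    {
        (void)code;                              /* a real translator walks this */
        translations++;
        return translated_block_0;
    }

    static int run(const guest_insn *code, unsigned guest_pc, int acc)
    {
        unsigned slot = guest_pc % CACHE_SIZE;
        if (!cache[slot])                        /* miss: translate once... */
            cache[slot] = translate(code);
        return cache[slot](acc);                 /* ...then run at host speed */
    }

    int main(void)
    {
        guest_insn block[] = { {OP_ADD, 3}, {OP_MUL, 2}, {OP_END, 0} };
        printf("%d %d\n", run(block, 0, 5), run(block, 0, 7));
        printf("translated %d time(s)\n", translations);  /* 1, thanks to the cache */
        return 0;
    }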
This 8F1111“A” is elusive and may be a lie.