The technology behind the G4 and G5’s AltiVec (AKA Velocity Engine) has much to do with the performance advantage Apple hardware had over its x86 PC competition in certain tests. Apple co-developed the PowerPC processor along with Motorola and IBM, and each company has some rights to it.
But what about AltiVec/Velocity Engine? Does this three-way ownership extend to this technology as well? Norman Shutler submitted the following editorial to osOpinion/osViews, which theorizes that Apple may bring its AltiVec/Velocity Engine along for the company’s move to Intel processors and thus retain that same speed advantage for apps that utilized the technology.
Bogus. Numerous tests have been run and suggest no real advantage to AltiVec/Velocity Engine compared to SSE. All Apple has to do is use the SSE in Intel CPUs instead for any performance boost.
I don’t think that will happen. Most developers concerned about the switch are concerned because of the loss of AltiVec. If Apple wanted to provide AltiVec on its Intel Macs, it would have announced it. The developer documents even show how to change AltiVec code to SSE2 code.
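For what it’s worth, the translation those documents describe is largely mechanical for simple vector loops. A hedged sketch (the function name is hypothetical, and the AltiVec side is shown only in a comment since its intrinsics don’t compile on x86):

```c
#include <emmintrin.h>  /* SSE/SSE2 intrinsics */

/* Add two float arrays four lanes at a time.
 * The AltiVec version of the loop body would look roughly like:
 *   vec_st(vec_add(vec_ld(0, a + i), vec_ld(0, b + i)), 0, out + i);
 * The SSE equivalent below assumes n is a multiple of 4. */
static void add_arrays_sse(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);             /* unaligned 4-float load */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));  /* lane-wise add, store */
    }
}
```

The main practical differences are alignment handling and AltiVec’s larger register file, not the shape of the code itself.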
I can’t believe that Apple would tell its developers to switch to SSE2 only to reintroduce AltiVec later.
The last I heard of the Velocity Engine was when I got my PowerBook in early 2002. After almost four years, is this still a modern technology? It seems there was plenty of time for Intel’s offering to meet or exceed the Velocity Engine. It’s like how, at the time, the PowerBook was one of the few laptops to offer a gig of RAM, Gigabit Ethernet, and wireless networking. Today we can easily find laptops with those options.
Altivec has been around for a lot longer than that.
I concur. The author doesn’t seem aware of SSE (he only mentions MMX).
From what I’ve heard, yes, it’s still more powerful, elegant, and full-featured than SSE2.
Can’t see the advantage of spending lots of cash to have Intel build a specific Mac-centric x86 CPU. If Apple wanted to do that, they could have contracted with AMD to design a custom CPU.
This guy got a little thesaurus-happy when he wrote the article.
An Apple engineer wrote AltiVec for the PPC. I’m sure Apple already has something up their sleeves. They’ve been working with Mac OS X on Intel from its beginning.
This could be good copy protection. If AltiVec instructions were indeed part of the processor core, the OS might not run without them, thus precluding OS X from running on a generic x86 Intel processor. Much better than hardware dongles, and emulating it may be impossible, at least without a serious performance penalty.
This stuff makes The Register look almost credible.
Once again we get editorials from fanboys that have absolutely no knowledge of processors.
OSViews…give me a break.
Well, I don’t think Apple would switch to Intel unless they had SOMETHING up their sleeve… or should I say, I HOPE Apple has something up its sleeve that is really going to be a WOW factor; otherwise it’s just a yawner.
AltiVec cannot be used in a CISC architecture like the Pentium processors…
If you read documents about how the Pentium processor works, with its pipeline and its microcode-like instructions, it is not possible to implement AltiVec-like functionality.
Sad to lose a beautiful architecture like the PowerPC in favor of an architecture constructed with duct tape and a few little ideas, like the Pentium.
Itanium would be great, but…
AltiVec won’t be brought over. If it were, the development machines Apple has released would have had AltiVec on board. Otherwise Apple would have to ask its developers to rewrite (again!) their software to re-take advantage of the AltiVec instructions only a year after having written them out.
It’s just wishful thinking that AltiVec will be brought over. Business sense tells you it won’t happen.
Because Eugenia posting some fanboy review of enterprise fax software a few days ago was -so- newsworthy, right?
Apple isn’t going to bring Altivec over to the x86. Why?
1. The processors that Apple will use will have SSE3 on them. When I’ve talked with game developers, they all admit AltiVec is cool, but it’s a pain to fit your code into its instruction set. They say SSE is *much* more flexible.
2. Apple has learned not to put itself in a position that will compromise its ability to get CPUs. That’s the whole reason for the switch. They couldn’t get anyone else to produce their CPUs because there wasn’t a big enough market – and now they’ve teamed up with someone who *can* deliver the goods. They aren’t going to screw it up by asking Intel to create a custom CPU.
3. The transition for Apple is supposed to give them options. Using SSE instead of AltiVec gives many more high-end developers a choice in porting their finely tuned SSE code (i.e., now you won’t have to rewrite it). This will hopefully mean more developers will support OS X.
4. If they wanted AltiVec so badly, they would have sought out another CPU vendor to produce their G5 chips – much easier than switching the architecture and then creating a new extension for it.
5. Apple has too many other things to worry about. Right now when their sales are declining, I don’t think anyone is going to be saying “Hey, let’s convince Intel to run a new, completely untested chip with Altivec on it, and then let’s have them recode GCC to work with it, and then let’s get Intel to get their compilers to work with it, and then let’s make sure they never sell this technology to anyone else”…
It’s not going to happen.
Another thing – this article made it seem like the majority of the Mac community is really upset about it… that’s just not true. Most of the people I’ve talked to are either really for it (i.e., they see the massive benefits) or like the idea but are just a little antsy about another platform switch. The few people who really hate the idea are the large-scale tech companies that have built render farms on the Xserve (and even they realize that IBM messed up royally), and the crazy Mac zealots who bought Macs just to be snobby and different, but can’t admit that the machines Apple is currently shipping are rather underpowered.
Because….just…no.
Intel and AMD are firmly behind the SSEx families for DSP-like floating-point ops and x86-64 for 64-bit desktop computing. Personally, I would love Apple to go x86-64 exclusively. I mean, they are already shifting architectures ANYWAY, so why not go all the way? Isn’t Intel meant to have an EM64T laptop chip (based on Banias/Pentium M, NOT NetBurst) in the Mac-Tel timeframe?
I think Apple only owns the NAME and trademarks for Velocity Engine/AltiVec. IBM just calls it “VMX” and touted it as a feature of the Xbox 360’s custom CPU. On the other hand, back when there was an A-I-M alliance, Apple was a co-designer with Motorola, so maybe they have IP rights. IBM at the time was more interested in high-end servers and didn’t wish to bother. Either way, whatever it’s called, Intel doesn’t want it, and Intel isn’t getting it.
I wonder if the new “Power Everywhere” initiative of IBM’s will result in low cost PowerPC desktop systems now that Apple is gone? Freescale hasn’t gone anywhere last time I checked.
–JM
Bogus. Numerous tests have been run and suggest no real advantage to AltiVec/Velocity Engine compared to SSE. All Apple has to do is use the SSE in Intel CPUs instead for any performance boost.
AltiVec is very, very fast on optimized applications. Take, for example, the RC5-72 key-crunching competition. The 2.7 GHz Power Macintosh G5 solves over 20 million keys/sec PER CPU… the fastest x86, according to the stats page, can only do about 10 million keys/sec (Athlon 64 4100).
Why would they switch to a commodity CPU BECAUSE it’s a commodity and then customize it?
Not gonna happen.
AltiVec is a technology developed and continuously enhanced by Motorola. Apple calls it the Velocity Engine because they have no branding license for AltiVec, only a technology license. For similar reasons, IBM calls it VMX. IBM’s AltiVec implementation is actually lagging behind Motorola’s; for this reason, a G5 can be slower than a G4 at AltiVec-heavy tasks.
Apple and IBM only have licenses to use AltiVec themselves, but not to hand it to a third party, ergo, AltiVec isn’t coming to Intel.
“which theorizes that Apple may bring //its// Altivec/Velocity Engine”
It’s not Apple’s.
and april fool’s has already passed.
@Arrakis: The Pentium M and K8 have RISC cores. In the common case, neither microcodes its instructions. There are x86 instructions that do get microcoded on these processors (e.g., the BCD instructions), but compilers know this and don’t emit them. The P-M and K8, running properly optimized x86 code, aren’t much more microcoded than the G5 (which also microcodes uncommon instructions).
@Glenn: The distributed.net FAQ warns that the RC5 benchmark is a poor thing to use to characterize the performance of a CPU. It relies heavily on rotate instructions, which not all processors implement equally well (they’re not used that often). AltiVec has a vector permute unit with a 128-bit vector rotate instruction; SSE has no such instruction. That’s why the G5 performs so much better in that benchmark. It’s a nice thing if your software is really dependent on bit manipulation (crypto, stuff like that), but what most apps use the vector unit for is FMACs (multiply-accumulate instructions), and AltiVec and SSE do those equally fast.
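To make the rotate point concrete: RC5’s inner loop leans on data-dependent 32-bit rotates. AltiVec can rotate all vector lanes in one instruction, while scalar x86 spells it out one word at a time. A sketch of that scalar idiom (not the actual distributed.net core):

```c
#include <stdint.h>

/* Rotate a 32-bit word left by n bits. Compilers typically recognize this
 * shift-and-OR idiom and emit a single ROL instruction on x86, but SSE has
 * no packed rotate, so a vector of words must be emulated element by element
 * with shifts and ORs. */
static uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;                                /* keep shift count in range */
    return (x << n) | (x >> ((32 - n) & 31));
}
```

The masking of the shift count avoids undefined behavior when n is 0 or 32, which is why the idiom is written this way rather than as a bare `x >> (32 - n)`.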
Altivec on Mac is dead. Apple’s Universal Binary documents say so – and have said so since the day Macintel was announced. This is old news, already dead and buried.
I have no problem with OSNews posting articles like this, but can’t you put them in a special category or something – like “Poorly informed fanboys with no real technical knowledge blog about the way things might have been…”?
Now that Eugenia is out of the way, and this isn’t her own personal blogspace anymore, can’t we please have an effort to make OSNews more credible?
Move on, nothing to see here.
Thanks for explaining that! Since Freescale (formerly known as Mot. Semiconductor) does most of their business in the embedded space, this makes sense for them (it’s a general purpose CPU that can sometimes act like a DSP…save money on extra chips).
Of course, it is possible that similar instructions get added in SSE4 or 5, if only because crypto and other unusual algorithms are trickling down to the average user. Accelerating them significantly would keep Intel’s customers from buying supporting DSPs from other suppliers.
–JM
Use SSE(1,2,3) AND AltiVec :)
Intel’s CPUs have used RISC technology since the Pentium Pro. CISC ops are broken into smaller generic ops and then fed to the appropriate pipeline. As Rayiner Hashem said, some of the CISC ops are microcoded, meaning they’re translated into the real, simpler opcodes. And just for your information, Intel has been doing RISC cores since at least the beginning of the nineties (i860). Check out the specs and you will find some “familiar” technology, like “flexible” registers similar to those found in MMX/SSEx.
About the benchmark: that’s plain crap. There are some operations where the G5 clearly outperforms Intel (radial blur is my favourite example), but in day-to-day use you’ll find that modern Intel processors run quite a bit faster than their G5 equivalents on “well-behaved” code: constant cycle counts, predictable jumps, and so on.
I think the real advantage AltiVec had over Intel’s SIMD instructions is that developers knew they had to use it everywhere possible to compete with x86 performance. There was this mentality that they had to beat the MHz myth with SIMD coding. If that mentality carries over to the Intel world, we may just see what SSE2/3 etc. are capable of when you have motivated developers using them. Or Mac developers may just become lazy and not optimize, since they’re running on Intel now, and just wait for clock speeds to double… I sure hope not.
Can Altivec handle double precision operands yet? Forgive my semi-rhetorical question, in the off chance that it does, because it didn’t when I last looked.
I believe Jobs mentioned that if your code is heavy into AltiVec, you need to adjust for the Pentium equivalent.
So that whole article is moot.
Let’s go on the assumption that Apple will somehow require OS X to run on its own hardware. Let’s also assume that they will wish to have an Intel-based platform with technology not found on legacy PC motherboards, because they don’t need legacy compatibility. Let’s also go on the crazy assumption that they wish to provide AltiVec for better performance on certain things that SSE3 isn’t as capable of doing.
Now, let’s factor in the fact that it isn’t cost effective for Apple to have Intel make custom mutations of their processors, and that’s why Apple is using the future processors Intel is releasing. Barring the possibility that SSE3 will be implemented in a much more efficient manner (I don’t know enough details on the speed of it to comment) there’s always SSE4 that could be released on the processors Apple will START with. But let’s just ignore THAT for now.
Now let’s also assume (this is a safer assumption) that Apple doesn’t have the IP rights or means to readily have someone else stick an AltiVec unit on the same die as an Intel processor and do a Borg assimilation. Let’s say Apple is being “Enterprising” and is combining individual strengths. OK, so Apple can’t afford to put AltiVec on the same die as a Pentium, for whatever reason: but that doesn’t mean they can’t put it on a separate die, as a separate chip. Would they want to do this as a custom chip of Freescale/IBM manufacture? No, I think not: customized versions of anything cost more to start with, so it had better be worth the money. Let’s not forget how Intel processors first got speedier floating-point math: they used an external coprocessor. How much does a G4 cost? Would Apple be able to place one on the same package as an Intel Pentium? (Not likely to be wise from a cooling point of view.) But if nothing else, they *could* technically place it right next to the Pentium on the motherboard, with a dedicated bus expressly for AltiVec support. That would allow AltiVec support without requiring custom chips.
Now, technically that’s feasible, but from a business standpoint it doesn’t make enough sense for them to make dollars. An added processor that isn’t contributing enough to the typical workload will notably affect the cost per motherboard, as well as requiring more cooling than a more typical solution. How many people REALLY need the supposedly huge AltiVec performance advantage? Furthermore, even if a single current single-core Pentium with SSE3 is little more than half the speed of a G5 for those tasks, consider that a year from now Apple will likely use at least one dual-core Pentium variant per computer, making that a moot point, especially since those future Pentiums will most likely run at a higher clock speed with better overall performance (the latter being more important than clock speed, of course). Also from the business standpoint, being at the mercy of TWO processor providers for a single motherboard, especially in light of why they’re switching in the first place (the supplier couldn’t provide the required processors fast enough), makes no sense from a risk-management point of view. At least by going with Intel only (for now), if they keep a reference AMD motherboard design they can drop AMD processors into their lineup should Intel somehow not be able to deliver the goods they’ve ordered in a timely manner. Thus, even if Intel doesn’t manage to provide enough parts to keep Apple competitive with every other Intel-supplied customer, they still have the option of a comparable second supplier in AMD, which makes an ISA-compatible chip, and (in a dire emergency) there are other x86-compatible processor manufacturers (though perhaps not 64-bit variants) they could run to.
So, the overall logic is that it makes far more sense for them to go with an all-Intel solution for now, perhaps differentiating with an on-board ROM, a CPU ID in a certain range, etc., or perhaps locking it to the on-board NIC’s MAC address, using a range of addresses assigned to Apple (could happen).
Let’s go on the assumption that Apple will somehow require OS X to run on their own hardware.
This isn’t an assumption, it’s their stated intention.
Let’s also go on the crazy assumption that they wish to provide AltiVec for better performance for certain things that SSE3 isn’t as capable of doing.
This isn’t an assumption nor a crazy assumption, it’s a crazy hypothetical supposition – and the exact opposite of Apple’s stated intention. Read the Universal Binary document. Altivec on Mac is dead. It ain’t coming back. Game over. Next.
How many people REALLY need the supposedly-huge AltiVec performance advantage?
This advantage isn’t pure myth, but it was always more hype than reality. Go ask an Apple developer with one of the x86 developer-kit machines what happens when they recompile their vecLib or Accelerate framework code for x86. Guess what: it just works, faster than ever. It’s just not that big a deal.
How much does a G4 cost? Would Apple be able to place one on the same package as an Intel Pentium?
If Apple were going to stick a G4 in every box, they wouldn’t have needed to spend millions of dollars licensing technology from Transitive to run PPC code on x86. Further, the object here was, partly, to drastically reduce costs, not to increase them. This supposition is pure fantasy.
Without going further into the conclusions and why I came to them: what you’re doing is taking things out of context and twisting them around to make me look like I’m smoking something. I’m not! I point out clearly that it doesn’t make sense for Apple to do what I throw out as a concept, precisely because of the things you’re trying to beat me over the head with as pure fantasy. I had a bit of fun building up all the possibilities and then shooting them down for the reasons Apple likely wouldn’t bother with them. While I didn’t explain EVERYTHING in relation to what Apple announced, I didn’t feel the need to. It was an entertaining “what if?” rhetorical question.
And no, I’m not an Apple fanboy, or an Intel fanboy, nor an AMD fanboy: I’m a developer and a user, and I don’t care to get overly worked up over a change like this. And yes, I do own Intel PCs as well as an old Mac (G3 upgrade level, ancient thing), but I’m not running OS X or Linux on it, or even Mac OS. I use what I use because it does enough of what I want, the way I want it, enough of the time. And yes, a G4 on the motherboard of an IntelMac would be silly for many reasons. Heck, if you want to do the computations someone else posted were about twice as fast on a G5 as on a Pentium 4, has anyone looked at implementing that algorithm on one of the newer GPUs? That would be an interesting project, though not having looked at the problem enough, it might not be doable at all to make a pixel/vertex shader program do encryption.
Eugenia basically just posted flamebait, so I don’t think it’s much of a change in quality.
Eugenia basically just posted flamebait, so I don’t think it’s much of a change in quality.
True that.
Can Altivec handle double precision operands yet?
No. IBM added a second floating point execution unit to the G5 so that double precision would run at the same speed as if AltiVec had DP. It was much easier than redesigning AltiVec for an unsupported data type.
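For contrast, SSE2 does define packed double-precision operations, which is part of why Apple steers AltiVec users toward it. A minimal sketch (the function name is hypothetical):

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Sum an array of doubles two lanes at a time using SSE2's packed-double
 * add, the data type AltiVec never supported. Assumes n is even. */
static double sum_doubles_sse2(const double *a, int n)
{
    __m128d acc = _mm_setzero_pd();           /* two zeroed double lanes */
    for (int i = 0; i < n; i += 2)
        acc = _mm_add_pd(acc, _mm_loadu_pd(a + i));
    double lanes[2];
    _mm_storeu_pd(lanes, acc);                /* spill and combine the lanes */
    return lanes[0] + lanes[1];
}
```

On a G5 the equivalent work would go through the two scalar FPUs instead of the vector unit, which is exactly the design trade-off described above.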
Wow. What a dumb idea. I thought I had read everything, but this is really dumb. With SSE, SSE2, and SSE3, the x86(-64) instruction set has SIMD features that are as good as or better than AltiVec. AltiVec has more registers (32 vs. 8), but I’m not sure that’s really much of an advantage for most SIMD workloads. Adding AltiVec would just duplicate functionality with no advantage. I’m not even sure it’s possible, given the differences between the PPC and x86 instruction sets. The real kicker, though, is that anyone who thinks Intel is going to spend its engineers’ time and die space on this is smoking some pretty fine bud.
PS, I’m a Mac user. I’m just being realistic about what switching to x86 is going to entail. There’s no getting around the fact that Mac OS X developers are going to have to rewrite assembly routines. If you read Apple’s Universal Binary guide, they are pretty upfront about it.
The current position Apple has taken in addressing the vectoring problem is far from ideal. This wouldn’t be the first time Apple has taken a wrong turn and ended up going down a dead-end road. Remember Copland?
Unfortunately there is no ideal alternative. While adapting Motorola’s (now Freescale’s) AltiVec™ technology to Intel’s processors would be the best overall solution, its downside is cost. Would its addition make the MacIntel processor non-competitive? Unfortunately, no one outside of Apple, Freescale, and Intel is in a position to know the answer to that intriguing question, so let’s take a look at what we do know.
We already know that what Apple is doing and proposing to its developers will work. What we don’t know is how well the end result will perform compared to existing PowerPC/AltiVec implementations. So the question arises: will the MacIntel processors be able to make up any shortfall in vector performance with an abundance of megahertz? My guess: they probably will, but only time will answer that question with any certainty.
Either way the cookie crumbles, it’s good to know that should the MacIntel processors fall short of the mark (as did Copland), at least there exists a viable alternative, albeit a more costly one.
People need to remember that AltiVec is an ISA (or part of an ISA), not an implementation. Apple fudges the distinction by calling it the “Velocity Engine”. It makes no sense to say “the G5 is better on benchmark X, so AltiVec is better than SSE.” That compares implementations, not ISAs.
For two reasons:
1. SSE is not just a vector unit: it’s a replacement for the x87 floating-point unit.
2. Doing that would indicate that they recognize that AltiVec is superior to SSE/SSE2/SSE3. That would be suicidal from a marketing point of view.
>The current position Apple has taken in addressing the vectoring problem is far from ideal.
I don’t think you realize how difficult it would be to “add” altivec instructions to an x86 processor. No such chip will ever be produced, as it would be a waste of man-hours, die space, and power.
On the other hand, given that Apple is switching to x86, rewriting a certain amount of assembly IS the ideal solution. Some of that code probably already exists, given that some programs, such as Photoshop, already run on x86 (on Windows).
There is no Plan B here. There doesn’t need to be. I think it’s wrong to compare this to Copland. Building an operating system is a big task with lots of uncertainty. Porting assembly routines, while specialized work, is something people know how to do. By the time Mac on Intel appears, software vendors interested in selling performant software will have gotten the job done. SSE(1,2,3) is a very good SIMD ISA, and the implementations from Intel and AMD are pretty damn good too.
>1. SSE is not just a vector unit: it’s a replacement for the x87 floating point unit.
SSE isn’t a unit. It’s a set of instructions. There is a unit on the chip that executes the SSE instructions. x87 instructions still execute on current Intel chips, using the same registers as MMX. SSE supplements x87 and MMX; it didn’t replace them.
>2. Doing that would indicate that they recognize that AltiVec is superior to SSE/SSE2/SSE3. That would be suicidal from a marketing point of view.
Are you comparing ISAs or implementations? Altivec and SSE(1,2,3) are instruction sets. There is little difference between them in terms of features. Altivec has some features SSE doesn’t have, like more registers, but SSE has some features that Altivec doesn’t have, like double-precision floats. All in all, neither has compelling advantages over the other though.
The thing that has no value at all, though, is making a chip with two SIMD ISAs. It would be an immense amount of work and would just produce a kludgy chip.
@Take, for example the RC5-72 key crunching competition.
Such gains are not evident in general applications; see, e.g., Barefeats’ benchmark tests. Secondly, VMX doesn’t handle double-precision floating point, unlike SSE2.
As for RC5, a vector rotate instruction is missing from the SSEx instruction set; x86 does this with old-fashioned scalar instructions.
I can’t wait to see Wine (and wine-dx9 patch) running on an OSX desktop!
First of all, there is no way in HELL that Intel is going to make a proprietary processor for such a low-volume customer as Apple. Can anyone think of anyone else Intel has done this for? The only reason IBM ever did was so that they could use Apple as a testbed and then turn around and sell their PPC tech to the likes of MS, Sony, Nintendo, and any other interested parties. Secondly, if I’m not mistaken, it would be virtually impossible technically to attach AltiVec instructions to a core that already contains SSE.
I must say, I pretty much only read OSNews for the user comments now, not for the informed and authoritative stories that are posted.
> SSE isn’t a unit.
Ever since the Pentium III, the P6 core has had an additional SSE unit. In AMD’s K7 Athlon, the x87 ISA gets translated from a stack model into an absolute-reference model; the K7’s floating-point unit works on an absolute-reference model.
In the case of AMD’s K8 (SledgeHammer, C0 stepping), it has the following SIMD units:
1X SSE2 FADD
1X SSE2 FMUL
2X SSE1 FADD
1X SSE1 FMUL
2X 3DNow
Refer to http://chip-architect.com/
> It didn’t replace it.
In the x64/EM64T/AMD64 “long mode”, SSE replaces MMX and x87.
There’s always Intel’s “SSE4” contribution initiative.
Refer to
http://softwareforums.intel.com/ids/board/message?board.id=48&messa…
> Altivec has more registers (32 vs. 8) (SNIP)
No, Bernard. In x64(1)/AMD64(2)/EM64T(3)/x86-64 “long mode”, SSE has 16 SIMD registers (not counting register-renaming tricks(4)). The Apple developer box’s Pentium 4 6x0 processor is an EM64T-capable processor.
Notes
1. MS’s label for Long Mode.
2. AMD’s label for Long Mode.
3. Intel’s label for Long Mode.
4. A way to expand the register-limited x86 ISA transparently through micro-architectural improvements.
I’d rather see an array of unified shader units integrated into newer x64 processors, i.e., making them “Ready for Windows Longhorn”(1) designed processors (a follow-on from the “Processors designed for Windows” initiative).
1. One could include any OS that extensively uses GPGPU functions, e.g., Mac OS X 10.4.x. Note that Intel has licensed PowerVR Series 5 for an undisclosed Intel SoC (system on chip).
The dumb person who suggested that Intel was going to make PowerPC processors for Apple made the same mistake. It would be too expensive, and Apple couldn’t afford it. Plus, there would be almost no way to reconfigure a P4 or Pentium M to use AltiVec. SSE is fine. Stop crying over AltiVec.
There is no way Intel is adding AltiVec, but they might improve their existing SSE tech.
Is it possible to completely eliminate MMX and the x87 FPU and replace them with units that exclusively deal with SSE instructions?
Apple’s volume is too small for Intel to provide custom proprietary Pentium systems with AltiVec. If that were even possible, they could just license a real CPU (read: POWER) and provide Apple with a much better option. The roadmap for Pentium systems is pathetic; all next-gen CPUs will need more than 100 watts to work.
I don’t know if such a laughable post merits an answer, but here you go: AltiVec is a small-vector unit, and absolutely *nothing* in CISC prevents adding that kind of unit.
Case in point: SSE, which is an equivalent vector unit made for x86. AltiVec is prettier, sure, but does it really matter?
As for using Itanium instead of x86, you wouldn’t recognise a (slowly) dying architecture even if you saw one…
The future is x86-64 for PCs and servers, you know, ugly as it is…
Seems to me this would have been a fruitful combination (in a perfect world): Solaris with an OS X interface (both have a lot in common through BSD); SPARC + AltiVec (maybe a pipe dream, but Apple is ditching IBM partially because they said the next generation of mainstream PowerPC/Cell CPUs wouldn’t have AltiVec, and Apple won’t go back to Freescale); Apple’s form and function with Sun’s enterprise experience. AWESOME. Unfortunately the corporate egos would never get along, so now we’re stuck with crappy Mactels and JDS.
Hence the reason I visit this place less and less; the quality is nosediving to FOX levels, and the number of fanboys makes Slashdot look like the epicentre of critical thinking.
I posted in an article regarding the future Mactel Macs, and silly me, my article plays second fiddle to a fanboy rant about faxes. I mean, Jesus H. Christ, we’re in 2005; who the fudge uses faxes these days? And to an article thinking that AltiVec would magically get ported to the P4, with Intel simply throwing away its SSE/MMX technologies.
Seems to me this would have been a fruitful combination (in a perfect world): Solaris with an OS X interface (both have a lot in common through BSD); SPARC + AltiVec (maybe a pipe dream, but Apple is ditching IBM partially because they said the next generation of mainstream PowerPC/Cell CPUs wouldn’t have AltiVec, and Apple won’t go back to Freescale); Apple’s form and function with Sun’s enterprise experience. AWESOME. Unfortunately the corporate egos would never get along, so now we’re stuck with crappy Mactels and JDS.
That doesn’t sound half stupid; SPARC64 along with VIS (the SPARC world’s equivalent of AltiVec) would be a great product. But they’d face the same problem again: slow development, lack of a long-term roadmap, lack of a 3 GHz SPARC64, etc.
You read through his atrocious, convoluted prose waiting for some proof, but in the end all you get is “…Might this happen? Well, Apple isn’t saying!”
Great. Thanks for that.
And look up “acronym” in a dictionary already.
Why all this talk of x86-64/AMD64 Long Mode, Hammer? I’ve read the Apple Universal Binary document and the only thing they ever talk about is IA-32. We can talk about Long Mode when Apple decides that it’s ready to go 64-bit on x86. Until then, I don’t see the point in talking about features of processors that won’t be supported.
With regard to your claim that SSE replaced MMX for Long Mode, I’ll just refer you to page 238 of AMD’s AMD64 architecture manual, where it says that “64-bit media instructions can be executed in any of the architecture’s operating modes.”
SSE stands for “Streaming SIMD Extensions”, i.e., extensions to an ISA. I see no problem with calling the units that execute those instructions “SSE units.” But that doesn’t mean those units ARE SSE.