P.A. Semi, a 150-employee chip startup, wants to make a name for itself through attention to detail. The Silicon Valley startup, run by chip legend Dan Dobberpuhl (its CEO, who presided over the development of the Alpha processor while at Digital Equipment Corp.), lifted its veil of secrecy Monday. The company will begin offering a new family of low-power, multicore, PowerPC-architecture processors in 2006.
Now this is a reason to say to the developers of Haiku/Syllable/SkyOS: "When will it be ported to the POWER architecture?"
Quote:
“P.A. Semi’s first processor, a chip dubbed PA6T-1682M, will include two 2GHz processor cores, a pair of DDR2 memory controllers, 2MB of Level 2 cache, and an I/O subsystem consisting of eight PCI Express controllers, two 10-gigabit Ethernet controllers, and four gigabit Ethernet controllers, the company said.”
And THIS runs on 13-25 watts... ideal for media-centre-type PCs, or perhaps laptops.
Too little, too late. I don't see where this CPU fits into the market. The only successful advocate of the PowerPC CPU plans on switching to Intel next year. It doesn't matter how good the processors are.
Even so, the question is: would Apple really rely on a company that has no fabrication facilities and no track record of supplying, in volume, the chips Apple would require if it launched a product based on them?
There is also the issue of long-term viability; if PA Semi wished to get Apple onside, they would have to put a pretty damn good case to Apple's board. As for the Intel move, if Apple sees a viable alternative, they might push back the release of the Intel version of their Macs. Then again, this is all speculation.
Even after Apple goes, the PPC market will not die. It will grow as never before.
I'm willing to bet that PPC sales will exceed Apple's yearly sales within one month (or less) of the PS3 coming out. Do not forget the Xbox 360, which is PPC too. And remember IBM servers. Next in line are Cell computers (I personally can't wait to lay my hands on one or two of those). And do you even know where else Cell is planned to be used?
Yeah, cause Apple was the only company to ever use PowerPC and IBM is still crying in their soup over Apple’s decision to move to Intel. Sure…
Hate to break it to you, but Apple was a small fry in the PowerPC world. They just popularized the name to drooling home desktop users.
According to their website, these PWR chips are intended for embedded systems, high-performance ultra-low-power blade servers, and clustering systems…
So PA Semi won't need Apple: these are a whole bunch of different products for different markets.
It is becoming more and more apparent to me that Apple lied about their reasons for switching to Intel. They blamed the lack of a future roadmap, and the lack of speed.
Lack of a roadmap? Freescale has plans to build 8-core G4s! IBM *just* put out those dual-core G5s and low-power G5s. And even a minor company like this can produce these fast PPC processors. And I didn't even mention Cell!
We can only guess at the true reasons behind the switch (probably something to do with Intel being able to supply chips at every level), but it has nothing to do with a lack of progress in the PPC world.
I wish Apple would have been more honest and open about this.
Apple didn’t complain about a lack of a roadmap, but about roadmaps with the wrong priorities. IBM doesn’t seem to have any plans for notebook-friendly G5s. Freescale sticks with its G4 CPUs and bad front side bus speed (compared to AMD/Intel).
I think what many people have said, and what has become pretty apparent, is that the switch was all about the laptops. IBM was focussed on server chips and embedded chips, but not a cool, low-power notebook chip. Intel's making some good headway in solutions for laptops (including both the processor itself and accompanying chipsets), and last I heard, Apple's laptops were a key part of their business.
And yes, they’ve released a dual-core processor, but no, they still haven’t reached the 3Ghz mark that was promised for, what is it now, 2 years ago?
And yes, they’ve released a dual-core processor, but no, they still haven’t reached the 3Ghz mark that was promised for, what is it now, 2 years ago?
Why do idiots keep bringing this up? 3GHz G5s were speculation from Jobs, not a promise from IBM.
Just to put it in perspective, Intel did promise 4GHz P4s… so where are they?
???
Intel's making some good headway in solutions for laptops
???
Intel's Centrino might be good, even I have to admit that. But OS X is 64-bit, and there Intel sucks (in quality, speed, and power consumption at 64-bit). So far, not even one decent 64-bit CPU has come out of Intel, let alone a 64-bit laptop CPU.
So if they are not going back to 32-bit, Intel is the worst choice possible. The AMD Turion, on the other hand, is 64-bit.
Only the BSD subsystem of OS X ships with 64-bit support. In order for a graphical program to utilize 64-bit addressing, it needs to be broken into a 32-bit client (UI) and a 64-bit server (data processing) and use IPC to perform operations on its large data sets. The amount of 64-bit software used on the client is thus relatively small, leaving the performance of such things largely a matter of concern for server farms. That is, unless you're genuinely concerned about the overall performance of Mathematica operating on datasets >4GB.
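To make that split concrete, here's a minimal sketch of the pattern (this is not Apple's actual mechanism; the names and the toy square-a-number protocol are purely illustrative): a front-end process hands work to a separate worker over POSIX pipes, the way a 32-bit UI client could delegate large-dataset crunching to a 64-bit server process.

/* Minimal sketch only, not Apple's actual mechanism: a front-end process
 * hands requests to a separate worker process over POSIX pipes, the way a
 * 32-bit UI client could delegate large-dataset work to a 64-bit server.
 * The square-the-number "protocol" below is purely illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int to_worker[2], to_ui[2];
    if (pipe(to_worker) == -1 || pipe(to_ui) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        /* worker: the would-be 64-bit side */
        close(to_worker[1]); close(to_ui[0]);
        double req;
        while (read(to_worker[0], &req, sizeof req) == sizeof req) {
            double res = req * req;        /* stand-in for heavy data crunching */
            write(to_ui[1], &res, sizeof res);
        }
        _exit(0);
    }

    /* UI side: the would-be 32-bit client that only submits work and shows results */
    close(to_worker[0]); close(to_ui[1]);
    double req = 12345.0, res = 0.0;
    write(to_worker[1], &req, sizeof req);
    read(to_ui[0], &res, sizeof res);
    printf("worker returned %.1f\n", res);

    close(to_worker[1]); close(to_ui[0]);
    waitpid(pid, NULL, 0);
    return 0;
}

The cost of marshalling everything across that boundary is exactly why so little client-side software bothers.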
That said, the processors Intel will be shipping to Apple (Merom and beyond) will support x86-64. The quality of that implementation is indeterminate at this point (or at least I know nothing about it), but it'll certainly be more than sufficient. Even Intel's current x86-64 offering is sufficient, even if inferior to AMD's.
Nope, but I do care about accessing memory blocks larger than 4GB.
The quality of that implementation is indeterminate at this point (or at least I know nothing about it)
Based on the quality drop Intel has shown in recent years, it is quite obvious.
Even Intel’s current x86-64 offering is sufficient
That says it all about you. For me? Compared to AMD? Intel is a piece of garbage whose only usefulness shows in winter (you don't need to spend so much on heating, Intel heats the room for you; but maybe the power bill is still too high to be counted as reducing costs), and the fact that it is much slower than AMD lets you learn knitting on those winter nights.
You can care all you want about 64-bit computing. You’re in the overwhelming minority.
> Based on the quality drop Intel has shown in recent years,
> it is quite obvious.
This is perhaps the stupidest reasoning ever. Do you remember the K7 (pre-Barton) with its ludicrous power requirements? Do you remember the “burn up your AMD” video? I’m sorry, did this keep AMD from making an excellent architecture in Hammer? No, I don’t think it did. Apparently you conflate engineering with the performance history of your favorite sports team.
> That says it all about you. For me? Compared to AMD? Intel
> is a piece of garbage
Well, then you are incapable of a modicum of perspective. Not only are you ignoring that the performance differential is minor (and goes in both directions for different operations), but you rely exclusively on power consumption, an area in which AMD was the de facto loser before the K8 (and before Intel ran into the same wall as everyone else while clocking up NetBurst), and in which AMD remains the loser on the mobile platform. Intel will remedy that completely with its next architecture (which will be what's in Apple's computers), leaving you jack shit to rationalize your position with.
In terms of x86 purchases, other than for my laptop and a few other things, I've purchased AMD products almost exclusively for my own personal use for about…nine years. They've offered a better cost/performance ratio, but they've definitely had several hits and misses in technology, both in performance and in power consumption, so you can just save your hypocritical power-consumption comments for someone with no recollection of anything more than three years ago.
You can care all you want about 64-bit computing. You’re in the overwhelming minority.
I’d hardly say that’s the case. My current machine has 2GB of RAM, about as much as can comfortably fit in most 32-bit machines. That means my very next machine will need to be 64-bit. 1GB of RAM is a very common configuration for gamers these days (2x512MB sticks for Athlon64 machines), so that means within the next two product cycles (3-4 years), they’ll need 64-bit memory addressing.
Gamers as a dedicated segment are a serious minority of computer users. Almost all of them are Windows users, and Windows basically has a 3GB address-space limitation. I don't know how or why you think video games are going to jump to more than 3GB of memory in three years. There will eventually be a need for more than 32-bit addressing for the average person at some point, however that point is not now.
Gamers are a numerically and financially significant (ie. high-margin) portion of the home computer market. PC game software sales were $1.1bn last year.
I don’t know how or why you think video games are going to jump to more than 3GB of memory in three years.
Historical data. When I bought a PC in 1991, it came with 4MB. When I bought another in 1998, it came with 64MB. Looking at dell.com, machines in the same price class ($1500-$2500) come standard with 1GB. If we fit these data to an exponential curve, we get a growth constant of 1.49 in the first case and 1.41 in the second. Even taking the smaller constant, that implies that within 4 years, machines in this price class will come standard with 4GB. Since 32-bit machines can't handle 4GB (some of the address space must remain reserved for the video card and other I/O devices), that means that by that time, 64 bits will be necessary.
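For what it's worth, here's the arithmetic spelled out (a quick sketch; the 7- and 8-year spans are my reading of the dates above, so the exact constants are illustrative):

/* Quick check of the growth-constant arithmetic above (illustrative only;
 * the year spans are inferred from the dates given, not measured data). */
#include <stdio.h>
#include <math.h>

int main(void) {
    /* 4 MB (1991) -> 64 MB (1998): a factor of 16 over 7 years */
    double k1 = pow(16.0, 1.0 / 7.0);
    /* 64 MB (1998) -> 1 GB (now): a factor of 16 over roughly 8 years */
    double k2 = pow(16.0, 1.0 / 8.0);
    printf("growth constants: %.2f and %.2f per year\n", k1, k2); /* ~1.49, ~1.41 */

    /* project today's 1 GB forward four years at the smaller constant */
    printf("1 GB x %.2f^4 = %.1f GB\n", k2, pow(k2, 4.0));        /* ~4.0 GB */
    return 0;
}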
Another way to look at it is thus: the memory loadout for machines in this range will rise to 2GB when Longhorn comes out next year. Since people rarely upgrade to machines with less than twice the RAM of the previous one (768MB and 1.5GB machines are uncommon), that means that the very next upgrade cycle will necessitate 64-bits. If you consider an upgrade-cycle to be three years, you get the same 4-year figure.
> Gamers are a numerically and financially significant
> (ie. high-margin) portion of the home computer
> market. PC game software sales were $1.1bn last year.
Gamers are not a numerically significant portion of the computer market, where “gamer” indicates “has 2GB of RAM to play Quake IV and F.E.A.R.” Where “gamer” means “anyone that plays video games,” then most “gamers” by volume resoundingly do not have even 1GB of memory, nor do they possess a video card with more than 128MB of memory. If we just consider the dedicated PC gamer, then the vast majority of sales in the shrink-wrap PC gaming market are driven by games like The Sims, WarCraft 3 (before that, StarCraft), XYZ Tycoon, Everquest, World of Warcraft, huge sales of Half-Life driven by the most popular online FPS, CounterStrike, probably Half-Life 2 to follow because of its wide range of hardware support and the popularity of the previous title, and Battlefield 1942. In case you're missing the trend, these have comparatively mild requirements, and this completely ignores any volume comparison of titles aggregated by genre and system requirements.
And $1.12B sounds pretty low for PC game sales. Where have you gotten that figure from?
High-end gaming systems are a small market, and their margins are as irrelevant as the margins on the server market for determining the demand for 64-bit computing in the general population. The ability to make money from a niche market has nothing to do with the existence of necessity in a larger market. You extrapolate from your own personal life a completely imaginary reality in which people largely buy $1500-$2500 computer systems in 2005.
You further claim that systems cannot be shipped with 4GB of memory, which is incorrect. Though some of the 1-2GB (depending on the operating system and its configuration) of address space will be mapped to the operating system and I/O devices, it is not apparent why you'd consider it an inferior strategy to have physical memory partitioned out for the operating system, where it does not conflict with the physical memory available to the usable address space of the absurdly few processes that would make use of these large data sets. Memory in excess of what can be utilized when considering I/O devices will simply go unutilized, which is still a performance win with dual-channel memory configurations.
Further, your rationale for expecting larger utilization lies solely in your perceptions of common configuration strategies, and relies not on necessity or on the projection of a future necessity for datasets in excess of 3GB. What you're projecting is that, within four years, active-memory requirements will more than triple beyond the expectations of the current generation of the highest-end game engines, engines that have been in development for 1-5 years. And we're talking about the most expensive and time-consuming assets to produce in game production at this level. The next generation of FPS engines (that is, after Unreal 3, Source, and, we'll say, wherever Splash Damage takes the Doom 3 engine in the near future) are themselves more than four years away. Not only is this just a fraction of the market of gamers, and an even smaller fraction of total computer sales, but it also assumes enormous growth in datasets. How is this representative of the interest of anything but a serious minority of computer users?
Gamers are not a numerically significant portion of the computer market, where “gamer” indicates “has 2GB of RAM to play Quake IV and F.E.A.R.”
I didn’t say a gamer has 2GB of RAM. Most have 1GB. Most current PC games do *not* play happily with 512MB.
Where “gamer” means “anyone that plays video games,” then most “gamers” by volume resoundingly do not have even 1GB of memory, nor do they possess a video card with more than 128MB of memory.
Where exactly are you getting these statistics? And gamers “by volume” is a meaningless statistic anyway. Installed base really doesn’t matter here, what we’re interested in is new PCs shipped. When even a $600 emachines bargain box comes with 1GB of RAM, I find it very hard to believe that your average new gaming PC ships with less. Hell, I find it hard to believe that your average PC *period* ships with less.
You extrapolate from your own personal life a completely imaginary reality in which people largely buy $1500-$2500 computer systems in 2005.
No need to go to $1500-$2500. I simply used that figure so I could compare the same class of machine to derive my constants. Even a $750 Sony Vaio (RB43P) or $750 Dell Dimension (9100) comes with 1GB of RAM.
Though some of the 1-2GB (depending on the operating system and its configuration) of address space will be mapped to the operating system and I/O devices, it is not apparent why you'd consider it an inferior strategy to have physical memory partitioned out for the operating system, where it does not conflict with the physical memory available to the usable address space of the absurdly few processes that would make use of these large data sets.
You claim that no single process will need that much space, but it's not a claim that stands up to history. On current computers, people *do* have processes that use 512MB to 1GB of RAM. In four years, these same types of processes will use 2-4GB of RAM. Unless memory usage patterns change from their historical trends, this will be the case.
Memory in excess of what can be utilized when considering I/O devices will simply go unutilized, which is still a performance win with dual-channel memory configurations.
So future 32-bit machines will ship with memory that the user can’t utilize? That’ll go over splendidly!
Further, your rationale for expecting larger utilization lies solely in your perceptions of common configuration strategies
It relies on the reality of history, which is a function of the reality of marketing. Historically, odd-numbered configurations have not happened. Why? Because people don't perceive an upgrade from 512MB to 768MB or 1GB to 1.5GB to be significant. Manufacturers don't want a situation where people don't feel compelled to upgrade. Once we get to the 2GB mark (sometime late next year, according to Microsoft!), manufacturers will have to go to 4GB for the sake of making the machines marketable.
and relies not on necessity or on the projection of a future necessity for datasets in excess of 3GB.
Predicting future datasets is unreliable and unnecessary. Computer memories have been expanding at a remarkably constant rate over the past 15 years. I find it very difficult to believe that the trends will change significantly in the next four or five. This is especially true considering that Microsoft will basically enforce the trend for 2006-2008 (as a result of upgrades to Longhorn)!
1. Install base is precisely the thing that matters for determining whether you are in the majority or minority of users.
2. You claim but don’t establish that ‘gamers’ are a significant portion of the install base.
3. You erroneously state “You claim that no single process will need that much space” which is categorically false. My claim is that the number of people that require 64-bit addressing is the overwhelming minority.
4. You assume datasets will not only expand but expand at a rate beyond current common limitations, but your only reasoning is that they have expanded previously (though not beyond the limitations of what is normal).
5. You are incapable of differentiating between what is available and what is necessary for the user. People can, right now, buy an x86-64 with 8GB of memory if they really want. That however does not mean that they need 64-bit addressing. Until you can stop confusing availability with necessity, I’m really going to be puzzled here.
6. Longhorn in no way necessitates 64-bit addressing, nor, even if it did, would its rate of adoption be sufficient to move the need for 64-bit addressing into anything but a minority.
In short you’ve taken issue with my comment and failed completely to support your position. Not only have you been completely unable to support your position, you have clearly articulated that you don’t even remember what mine was.
“Since 32-bit machines can't handle 4GB (some of the address space must remain reserved for the video card and other I/O devices), that means that by that time, 64 bits will be necessary.”
More than 4GB of RAM is no issue with PPC, since the G4 is already capable of addressing 64GB of RAM (2^36 bytes); it has a 36-bit address bus.
It's obvious that you paid no attention to the IDF orgy that happened a few months ago; the P4 is a dead end; they're going back to the drawing board and basing their whole line-up on a single core that scales from desktop to server, from laptop to workstation. They're recycling the Centrino core, adding features, scaling it down, and doing everything possible to make it the most efficient wattage-to-performance ratio on the market.
How about, instead of bashing Intel, we wait till next year when Yonah and the others are released.
Intel is a piece of garbage whose only usefulness shows in winter (you don't need to spend so much on heating, Intel heats the room for you; but maybe the power bill is still too high to be counted as reducing costs)
My experience has been the opposite, albeit with disparate generations of hardware. I have both a dual 1Ghz P3 system and an AthlonXP 2800, and the P3 system runs cooler and quieter than the AMD box.
Some of us actually use today's CPUs. I have a G5, an Opteron 1.8, a P4 3.6, and a Centrino 1.7 (and that would be only my workstations). The Opteron runs well as it is. Intel will be replaced with AMD as soon as I replace the lower-powered machines. The same goes for the Centrino. And if I consider the fact that Windows and OS X represent 3-5% of my work, well, I think I'll be waiting for Cell (to test its performance for my use) before I decide on my upgrade (the best thing about Cell is that Linux is its native environment, and that is why Cell sounds more charming to me than AMD).
One more fact about why my view of history matters to me: in the last six years I have changed 13 notebooks (I stopped changing them after being disappointed with the Centrino 1.8; now I just plan to wait for some fancy Cell or Turion notebook). Desktop machines I don't even count. But having a perspective based on history has its good (and bad) points. A bad one, for example, was my sticking with Intel instead of opting for the Opteron based on my 32-bit AMD tests.
Your viewpoint would be much different if you were actually forced to go for speed as I am.
(On Intel sucking, and when:) The 1GHz P3 was produced in Intel's good times. No overheating, it was fast, and nobody cared about power consumption at that time. Intel's quality started dropping somewhere around the 1.4GHz P4.
(On AMD getting better, and when:) I was mostly referring to the AMD64 era; before that, AMD was only usable in the kitchen because of its overheating.
My experience has been the opposite, albeit with disparate generations of hardware. I have both a dual 1Ghz P3 system and an AthlonXP 2800, and the P3 system runs cooler and quieter than the AMD box.
A 1GHz P3 is NOT comparable to an AthlonXP, much less the 2800+ model. The AthlonXP is the same class as the P4, and a 2800+ is comparable to one clocked between 2.5 and 3 GHz.
Your comparison turned around would look like this: I have both an AthlonXP 2800+ and a 3.6GHz P4 Prescott, and the AthlonXP runs cooler and quieter than the Intel box.
To be more precise, OS X is 64-bit except for the GUI libraries. Tiger has a 64-bit kernel and 64-bit versions of the BSD libraries. It's just Aqua and the attendant GUI stuff that is 32-bit.
To be precise, libSystem and Accelerate are the only 64-bit system libraries shipped. There is no support for 64-bit Objective-C programs, and none of the other frameworks ship as anything other than 32-bit libraries. Having any 64-bit support at all without kernel support would be most impressive.
Tiger's kernel is 64-bit. Panther's wasn't, but Tiger's is. The reason you can't have 64-bit ObjC programs is that the libraries used by such programs are not 64-bit (likely because they aren't 64-bit clean, which is entertaining given that Quartz etc. are all very recently written code!)
http://www.osnews.com/permalink.php?news_id=12382&comment_id=50865
The reason you cannot have 64-bit Objective-C programs is that the runtime isn't shipped as a 64-bit library, and has nothing to do with Quartz. The other issues with OpenStep have nothing to do with Quartz, either. The NeXT Objective-C runtime is pretty old.
The overwhelming majority of OS X is not compiled with support for 64-bit addressing. I’m basically correcting your misleading interpretation “To be more precise, OS X is 64-bit except for the GUI libraries.” Only two system libraries are shipped with 64-bit counterparts. And I’m perfectly aware that Tiger ships with support for 64-bit memory addressing, which was the point of my sarcasm; if it didn’t then supporting it transparently for the process would be most interesting. You mentioned it as if it were some body of evidence that OS X largely makes use of 64-bit addressing which is not the case.
The roadmap was but ONE consideration; the other was the crappy supply issue. Like I've said before, does anyone remember the Xserve fiasco, whereby the top-of-the-range model had waiting times of WEEKS because IBM couldn't keep up with demand from Apple?!
Please; Motorola couldn't keep up and IBM can't either; the fact is, Apple is growing at a phenomenal rate. IBM would rather get instant gratification via its Cell processor than spend time looking at the long term, but hey, this is IBM; it panders to the short-term investor with the loudest, most uneducated mouth at the shareholders' AGM.
Notebooks didn't need a G5; what they needed was a G3 750GX coupled with AltiVec and a 533MHz FSB. It would give them low power and a decent level of bandwidth, and it wouldn't require re-inventing the wheel; it would merely be a successor to an existing product, one that could pay for itself not just through Apple's purchases but through others in the embedded market.
“Notebooks didn't need a G5; what they needed was a G3 750GX coupled with AltiVec and a 533MHz FSB”
Well, what about a G4 then? The 8641 springs to mind. It has a fast FSB and a big L2 cache, consumes little energy, and scales up to two cores with clock frequencies up to the 2GHz range. Those just-announced PA6T-based processors seem good for such a job too, but they will most likely appear a while after the 8641. Thus the 8641 will be the next big step for low-energy, high-performance PPC computing, and the PA6T will follow that route.
“It is becoming more and more apparent to me that Apple lied about their reasons for switching to Intel.”
Could it be that Apple got nervous about MS's selection of processors for the Xbox 360?
The first time I heard this “rumor” I dismissed it. Perhaps there really is something to it.
It is becoming more and more apparent to me that Apple lied about their reasons for switching to Intel. They blamed the lack of a future roadmap, and the lack of speed.
Not more and more apparent. It was clear from day one.
Here are a few pointers why:
1. Apple suddenly becomes one of IBM's smaller customers: no more special treatment. Going with Cell as it is would be a losing battle. You can't sell overpriced desktops if a $399 PS3 has better hardware. You don't give special treatment to a customer that accounts for such a low percentage of sales. And there goes the Apple performance myth, down the drain.
2. Apple decides on Intel (OS X is 64-bit, but Intel is the slowest of the competition in that field). It seems Apple is not sure of success, but it is a win-win situation: if their desktops fail, Intel will be glad to buy their computer department. Intel would love a brand name like Apple, if only as a PR opportunity against AMD (they lost on R&D as soon as AMD put out 64-bit, and from that point Intel started following AMD, not the other way around as before). AMD was out of consideration because that option was not win-win but win-lose: if the move failed, AMD would be too small to buy Apple.
3. The assumptions about performance per watt were wrong from day one and misled most of the Jobs fans. Centrino has low power consumption; the trouble is that it is 32-bit. Intel has yet to put out a single 64-bit CPU that doesn't suck power like an electric train and is even slightly competitive with AMD in speed. Apple is going all in on 64-bit; the trouble is that they will be riding a crippled horse.
4. A lack of a roadmap, when IBM is putting out a completely new CPU?
5. No 3GHz, but the basic Cell is 4.2GHz?
6. Supply problems? Well, I would remember if an Apple ever needed to be preordered for even one day. Stock never ran out, and every six months Apple put out a new line. Any day I chose to buy an Apple, I could just order, pay, and take it home the same day.
Now, the reason Apple could succeed in fooling people that it became faster with the move to Intel:
– If code is not PPC-optimized, well, then PPC sucks. The Anandtech benchmark should be proof enough that OS X is anything but optimized and that PPC is anything but bad. But when you put unoptimized software on a CPU like Intel's, that software runs faster (Intel and AMD are faster in some operations). The good side for Apple is that only a very low percentage of their users are now in DTP. So a higher percentage of happy users (typical home users) and a few disappointed ones (professional workflows) is still good PR.
The main trouble in this calculation, for those unhappy few:
– Software that was optimized for PPC, like Photoshop, will definitely suffer from this move, while badly written, low-cost desktop software will actually feel faster. Please do not even bring up SSEx here; PPC has already gone far beyond it in vectorization with Cell.
We can only guess at the true reasons behind the switch (probably something to do with Intel being able to supply chips at every level), but it has nothing to do with a lack of progress in the PPC world.
Wrong. Apple losing special treatment, and Intel being able and willing to buy their computer division in case of failure, is the reason.
I wish Apple would have been more honest and open about this
If they had been, they would have had to admit they are not sure about the future. Now, that would be a bad PR move, don't you think? ;)
1. The Cell as it exists is inadequate for general-purpose computing. At the very least, it would not compete favorably performance-wise with the PPC970FX or PPC970MP.
2. 64-bit computing isn't Apple's selling point. Their customers on average probably do not use or own any 64-bit programs, or even have more than 4GB of RAM. It is unclear how Merom and its descendants will perform with x86-64, but it will largely not matter. It is also ludicrous to suggest that AMD has surpassed Intel in R&D capability simply because of recent implementation strategies utilized by Intel. Intel has been beating AMD at power consumption on its mobile platform, and all signs point toward the continuation of this trend in the future. They have both simply excelled thus far in different areas, and now Intel intends to move its successes in its mobile line into its desktop platform.
3. Apple has done rather little to really transition to 64-bit computing, despite its marketing otherwise. That said, Merom and its successors will provide the low-power requirements of Intel's mobile line as well as an implementation of x86-64. By 2007, Apple will have no shortage of 64-bit x86 processors for its mobile and desktop platforms.
5. See (1).
The rest of your post seems crazy, so I’m just not going to address it. SSE-optimized Photoshop is extant and all of the filters will be dropped into place and used happily by Mac users everywhere. There is no Cell Photoshop, and there probably never will be.
I have no doubt that Apple was giving IBM a black-eye on its way out. Apple is all about image, and has been for more than a decade. Whatever their motivations were, it doesn’t really matter. There’s nothing wrong with the prospect of an OS X/x86. They aren’t losing out on anything in making the transition.
Eh? I echo the other two respondents. Apple isn’t lying about their motives. IBM still isn’t and doesn’t plan to give them what they want and need in a chip.
Apple isn’t lying about their motives. IBM still isn’t and doesn’t plan to give them what they want and need in a chip.
And that would be?
Let's summarize their claims:
– No 3GHz? Cell is 4.2.
– Short on supply? Do you remember a single day when you couldn't buy an Apple the day you decided to? I sure don't. Supply trouble would mean that demand exists but there is no product.
– Better power consumption? Intel and 64-bit? BS. Not even worth mentioning.
Did I forget something?
And yes, they are not lying, they are keeping quiet about their motives.
“Cell is a breakthrough architectural design — featuring eight synergistic processors and top clock speeds of greater than 4 GHz (as measured during initial hardware testing)”
Yeah, this is still new technology and we don’t know what it will do regarding heat output at those levels and how it will be crammed into a notebook?
What is gained/lost? Instruction set changes? Altivec? I haven’t seen anything concrete addressing this? But I just may not have seen what you have seen. Whereas Mac OS X has been running on x86 architecture from day one.
:/
As far as demand/supply. Yes. My mini was a month late. That could be Apple’s fault – but you asked and I answered.
Intel HAS been demonstrating better technologies (well, better rehashings of OLD technologies) in its M and likely the M's descendants.
Yeah, this is still new technology and we don’t know what it will do regarding heat output at those levels and how it will be crammed into a notebook?
Well, in any case it can't be worse than Intel. I was planning to move from the 1.7 to the 1.8 Centrino, but it was too hot for my taste. Even the 1.7 is.
Yes. My mini was a month late.
The mini problem was only in the first few weeks; there was simply too much demand in the first days. The last mini I needed to buy for a customer of mine took a simple phone call and 20 minutes of driving.
What is gained/lost? Instruction set changes? Altivec? I haven’t seen anything concrete addressing this? But I just may not have seen what you have seen. Whereas Mac OS X has been running on x86 architecture from day one.
I give you that one. Still, it demanded constant extra work from Apple. No pain, no gain (although I disagree with using gain here).
Intel HAS been demonstrating better technologies (well, better rehashings of OLD technologies) in its M and likely the M's descendants.
32-bit. OS X is 64-bit. Intel still hasn't put out a single decent 64-bit CPU (talking about price-performance, power consumption, and speed), and there is the little fact that it is getting worse, not better.
So, we haven't seen how Cell performs, but on the other side we've seen how Intel sucks. (The choice here is like: door no. 1 or the guillotine?)
Personally, I was done with Intel when I tried to update my last notebook to the 1.8. It will be an AMD-or-Cell decision only for me. The last year and a half of crapware has driven me nuts.
Comparing the Cell and the PPC970 makes no sense. The Cell can run at 4.2GHz because it has extremely short, optimized circuits, as well as an 18-stage integer pipeline. Part of the reason it can have such short circuits is that its in-order schedule-execute loop is dead simple, it has limited instruction-level parallelism, and it uses static branch prediction. Meanwhile, the PPC970 does a lot of work in its 16 stages to support massively out-of-order, parallel execution. The two chips are completely different.
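To illustrate the difference (a toy sketch of the two workload shapes, not a benchmark of either chip): an out-of-order core with dynamic branch prediction like the 970 exists to hide the stalls in branchy, pointer-chasing code like walk() below, while a simple in-order core with static prediction is happiest with regular streaming loops like scale(), which is the kind of work Cell's SPEs are aimed at.

#include <stdio.h>
#include <stddef.h>

struct node { int value; struct node *next; };

/* Branchy, serially dependent traversal: the kind of general-purpose code
 * that out-of-order execution and dynamic branch prediction exist to speed up. */
static long walk(const struct node *n) {
    long sum = 0;
    while (n) {
        if (n->value & 1)      /* branch decided by the data itself */
            sum += n->value;
        n = n->next;           /* serial pointer chase, little parallelism */
    }
    return sum;
}

/* Regular streaming loop: predictable branches, independent iterations,
 * easy to schedule statically and to vectorize. */
static void scale(float *dst, const float *src, float k, size_t len) {
    for (size_t i = 0; i < len; i++)
        dst[i] = src[i] * k;
}

int main(void) {
    struct node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    float in[4] = {1.0f, 2.0f, 3.0f, 4.0f}, out[4];

    scale(out, in, 2.0f, 4);
    printf("walk = %ld, scale[3] = %.1f\n", walk(&a), out[3]);
    return 0;
}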
…the next Transmeta.
…the next Transmeta.
As in the promised Next Big Thing, but ultimately failing in selling their products, making their stock plunge, yet allow investors to make a tidy sum if they buy it right before the rebound?
Nahhh, I don't think so. Embedded chips are a huge market, with a lower barrier to entry than desktop processor chips. They'll be able to get a foothold there, I'm pretty sure of that.
Well, they could only make money in the embedded market, or if they somehow make it run x86 apps as well. That is a nasty brand name too... ew.
They might have more luck in the SPARC market, which is not as crowded as the PPC market.
Well, on another note:
these chips would be made either at IBM or at some foreign chip supplier... so why would big companies like Apple choose someone whose chips are made at a facility with known production problems?
Just one thing: P.A. Semi doesn't use the PowerPC architecture but Power. Thus these chips are incompatible with the G4 and friends.
Power Architecture is PowerPC.
Wes Felter
IBM Austin Research Lab
I am not absolutely sure, but as far as I know
POWER implies PowerPC AS
PowerPC AS implies PowerPC
Thus PowerPC is the lowest common denominator.
Carsten
>>”Power Architecture is PowerPC. ”
Your logic seems to be a little faulty.
In English, when one states "A is B", they are saying that "all things that are A are things that are B" or "all things that are A are equal to all things that are B": A is a subset of B.
When you say that “Power Architecture is PowerPC.”, you are saying that “All things based on the Power Architecture are also PowerPC. ”
As I recall IBM based PowerPC on the Power architecture, but some things were changed, others left out.
Logically it is correct to say:
PowerPC is a subset (and possibly a superset) of Power.
PowerPC is Power Architecture.
Power Architecture is not PowerPC.
Historically, the PowerPC is a Power architecture based chip with some I/O taken from Motorola’s 88000.
(source) http://www.mackido.com/History/history_of_aim_hw.html
The History of AIM (paragraph 2, “The Deal”)
Kyle
Philosophy Student
The only successful advocate of the PowerPC CPU plans on switching to Intel next year.
You mean besides the dozens of companies which use Power chips in their routers?
With “two 10-gigabit Ethernet controllers, and four gigabit Ethernet controllers,” did you ever think that this might be their target market?
Of course not…
You're talking about an infinitesimally small market when it comes to Power CPU sales. Besides, do you think they use the latest and greatest Power CPUs in something like transport equipment? Or routers and switches?
Two 10-gigabit Ethernet controllers to communicate with storage devices (iSCSI), and four gigabit Ethernet controllers to communicate with the clients. The two 10-gigabit controllers allow either an active-passive or an active-active storage-device cluster.
Carsten
Cell is not going to be a good general-performance processor. It's going to be amazing for some tasks, but not for common desktop use, and not appropriate for laptops.
The 970 was derived from a dual-core chip (the POWER4), yet somehow its dual-core version came out after both AMD and Intel had been shipping dual cores for months. The G4-class processors are still waiting on some decent memory bandwidth (maybe sometime next year?), and their dual-core version just got pushed back further into "maybe next year, maybe later" territory. Apple's not really missing out on the cutting edge by leaving PowerPC behind.
There are lots of problems with Intel processors, but Intel the company needs its x86 processors to be successful in desktop/laptop use in order to remain a significant company. That's just not true for Freescale and IBM. Regardless of its proclaimed reasons, Apple has switched to a supplier that needs to make the products that Apple needs.
I guess Cell wouldn't be that bad for desktops. The "general performance" of the GPE should be sufficient for office applications, and the 8 SPEs would accelerate games as well as encoding/decoding of media content. Thus it would be sufficient for ~99% of users. As for the ~1% of users who develop applications, …
Carsten
If Apple hadn’t torpedoed them, maybe we’d have faster PPC Chips today…
I think it was an unmitigated disaster for Apple to have shut down the clones, and Exponential.
Both things would have driven PPC development past where it is today.
Apple dumping PPC chips was two things:
1. The very best thing possible to happen for the expansion of PPC into the home and other computing areas.
2. The very worst decision Apple has made to this point in its career. Now there is absolutely no reason to buy Apple hardware.
I think the person who wrote that the Xbox 360 and PS3 were among the reasons why Apple switched was right. It's as simple as Porter. Apple's position as a customer would have been a lot weaker if IBM could just say, "Yeah, guess what, we're selling more chips to Sony and Microsoft, so we won't cut prices."
Also, everyone seems to look at it from the “Apple-perspective”. Maybe Intel just came along and made an offer that was too good to resist.
Power Inc tried that.
Apple bought Power Inc. out with money Apple got from Microsquishy, to rid the earth of any competition. They would do well to just sell out now and not hurt the customers.
AMD64 isn’t the end-all people make it out to be. Will you ever see one in your cell phone? Probably not. And the other end of the scale – supercomputers – any processor can be clustered. Performance/watt and TCO is what matters.
..PCs.
Sometimes people seem to ignore the fact that there is more to the computing world than personal computers and small servers. These guys are aiming at multimillion-dollar business, including supercomputers and clustering servers. So please don't mention Mac or Apple here.
5. No 3GHz, but the basic Cell is 4.2GHz?
Cell is a simple in-order design. And it won’t run at 4 GHz.
http://tinyurl.com/7d6cr (realworldtech.com)
“At the 2005 Electronic Entertainment Expo (2005 E3) in May, Sony released the preliminary technical specifications for its next generation game console, the PS3. Sony revealed that the CELL processor will run at 3.2 GHz in the production version of the PS3. Moreover, the PS3 will use the CELL processor in its present configuration with 8 SPE’s as announced at the 2005 ISSCC, but only 7 of the 8 SPE’s in the CELL processor will be functional.”
The reason we are seeing more PPC is not because Apple is moving away from it. It is because IBM and friends made it open hardware of sorts about a year back; well, at least they said that anyone can use PPC as long as they join the club ($$$). That is why more companies than just Freescale and IBM are making PPC chips.
what?
That's it, the topic says it all: PPC rulez.
All 3 next-gen consoles are PPC:
- PlayStation 3
- Revolution
- even the crap "from the other well-known company" will use one.
Who said it's too late for PPC? It's just starting up.
Oh, and I forgot, my computer uses PPC too:
an AmigaOne.
“Lack of a roadmap? Freescale has plans to build 8-core G4s! IBM *just* put out those dual-core G5s and low-power G5s. And even a minor company like this can produce these fast PPC processors. And I didn't even mention Cell!”
IIRC, Freescale is nothing more than a pseudo-spinoff from Motorola, so nothing small there, just a decided lack of interest in selling chips to anyone outside of the embedded market, as evidenced by Motorola's complete inability to produce a >1GHz G4 until relatively late in the game. Also, IIRC, IBM made the Power architecture pretty much open a few years ago, meaning these guys got the basic architecture and only had to tinker to get their desired power utilization, extra on-chip functionality (IIRC, IBM probably supplied that as well), and layout. Additionally, since they aren't fabbing their own stuff, they don't have to waste time developing manufacturing processes or building a plant. I also noticed a distinct lack of information about the feature set of their PowerPC chips beyond the fact that they are PowerPC, presumably with an integrated basic MMU and FPU, but nothing about SIMD etc.
“We can only guess at the true reasons behind the switch (probably something to do with Intel being able to supply chips at every level), but it has nothing to do with a lack of progress in the PPC world.
I wish Apple would have been more honest and open about this.”
My guess on this is good ol' pressure, FUD, etc. from Intel, plus a nice heaping helping of "analysts" with vested interests in Intel (various analyst articles and commentary had been pushing for Apple to switch to x86 for several years), since x86, even with 64-bit extensions, is nothing other than a 30+ year pile of hacks and kludges that manages to approximately match more modern architectures. (Now, it would be different if they decided to say "see ya" to supporting anything pre-Pentium hardware-wise, or if they got off their butts and got Itanium up to its original spec. Too bad Transmeta's chips were never made/allowed to emulate other processors before they went under; I wonder what their performance would have been like emulating simpler architectures…)
I'll be surprised if these guys are still in business 5 years from now; I really think they'll go the way of Transmeta unless they REALLY have some good power-reduction methods beyond what the article makes it sound like, or unless they're uber-cheap (unlikely) and profitable. Time to go patent hunting, maybe…