Linked by Nicholas Blachford on Wed 9th Jul 2003 16:43 UTC
Talk, Rumors, X Versus Y This article started life when I was asked to write a comparison of x86 and PowerPC CPUs for work. We produce PowerPC based systems and are often asked why we use PowerPC CPUs instead of x86, so a comparison is rather useful. While I have had an interest in CPUs for quite some time, I have never explored this issue in any detail, so writing the document proved an interesting exercise. I thought my conclusions would be of interest to OSNews readers, so I've done more research and written this new, rather more detailed article. This article is concerned with the technical differences between the families, not the market differences.
good!
by steve on Wed 9th Jul 2003 17:06 UTC

Excellent article, this is what osnews needs, not the blaring fanatical opinions of ogres and trolls, just simple, factual text.

Rather than compare which processor scores better on which benchmarks, it's good to read about the actual architectures of the different processors. That's the good stuff.

Good man!

holy crap!
by KOMPRESSOR on Wed 9th Jul 2003 17:08 UTC

I just finished a 600-level class in computer architecture and that was over my head. What a great article--truly an exemplar for other submitters!

Ultimate armchair enthusiast article.
by SteveToth on Wed 9th Jul 2003 17:08 UTC

Another article for the armchair computer enthusiast that has no real information. Sorry, but you either need to explain more about what you are talking about or use more technical verbiage to raise the article to a higher level. You assume that the reader knows about registers, but don't talk about IPC (using that term). Your article also makes bold claims that are unfounded and unsupported within your article or by your references.

SMT will be twice as good as HT? Where is your reference?

Also at the end of your article you predict (as many others have in the past) that Intel will hit the 'heat wall' and that the future looks bright for RISC technologies but BAD for CISC. I can't wait to see you proven wrong again as CISC technologies continue to work as well as or better than RISC. For all of RISC's 'advantages' (many of which you state in this article), CISC still seems to come out on top.

don't let facts get into your pet article
by goo.. on Wed 9th Jul 2003 17:14 UTC

This is how far I read the article before noticing a grave mistake: The x86 family of CPUs began life in 1978 as the 8086, an extension to the 8 bit 8080 CPU.

If you don't know shit about different CPU architectures, why do you feel the need to write about them?

recommended read
by stew on Wed 9th Jul 2003 17:15 UTC

For those interested in RISC vs CISC (or especially why there is no RISC vs CISC any more), I highly recommend these articles at Ars-Technica:
http://arstechnica.com/cpu/index.html

Change
by Anonymous on Wed 9th Jul 2003 17:27 UTC

I particularly liked the reference to changing programmers.

Yes, that has been known to result in faster programs!

Re: various
by nicholas Blachford on Wed 9th Jul 2003 17:29 UTC

This is how far I read the article before noticing a grave mistake: The x86 family of CPUs began life in 1978 as the 8086, an extension to the 8 bit 8080 CPU.

If you don't know shit about different CPU architectures, why do you feel the need to write about them?


"The Intel 8086, a new microcomputer, extends the midrange 8080 family into the 16-bit arena."

Intel Corporation, February, 1979


SMT will be twice as good as HT? Where is your reference?

IBM & DEC

Both IBM and the Alpha team announced that the addition of multithreading support was expected to give a 100% boost in performance.

When HT arrived it gave only 20%-30%; I believe it is to be enhanced soon.

Also at the end of your article you predict (as many others have in the past) that Intel will hit the 'heat wall'

Microprocessor Report said that, not me.

Re: goo..
by bytes256 on Wed 9th Jul 2003 17:34 UTC

This is how far I read the article before noticing a grave mistake: The x86 family of CPUs began life in 1978 as the 8086, an extension to the 8 bit 8080 CPU.

If you don't know shit about different CPU architectures, why do you feel the need to write about them?


If you don't know shit about different CPU architectures, why POST about them?

DUMBASS

Well done
by tinfoil on Wed 9th Jul 2003 17:37 UTC

Kudos to you, Nicholas, on a well written article. It was a well researched and factual piece, a welcome change from the opinion articles that have been becoming more common here. (Not that that is a bad thing, but it's nice to have a change once in awhile.)

RE : don't let facts get into your pet article
by dc on Wed 9th Jul 2003 17:37 UTC


Instead of insults, could you provide pointers contradicting the article, and specifically that grave mistake you've noticed?

Re: good!
by bsdrocks on Wed 9th Jul 2003 17:44 UTC

I second that! Thanks for writing this article, Nicholas Blachford!

Well researched and informative
by Mike Bouma on Wed 9th Jul 2003 17:48 UTC

I believe there is a great potential future for the PPC platform. Compared to x86 CPUs it has a generally cleaner design, efficient power consumption and very well performing vector units.

With the PPC platform moving towards a solid 64-bit architecture and good multi-processing capabilities, IBM may well have a winner (hopefully it won't take long until there are good 64-bit OS solutions available). Sadly, consumers aren't well informed about the MHz myth despite Apple's efforts. Many people still buy a Celeron mainly for the higher clock rate (instead of performance). Most people simply don't understand that a 50 MHz 68030 isn't twice as fast as a 25 MHz 68040, but that it's rather the other way around. IMO emphasizing clock rates only misleads the general consumer, and third-party SPEC stats would be a far more reliable source, though of course still not ideal. Hardware companies will most likely continue to use tricks to mislead benchmarking software and so artificially produce higher figures, or will do everything to dispute results when they are not in their favour, further confusing the general public.
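The MHz-myth point above can be sketched numerically: what matters is clock rate multiplied by work done per cycle. A minimal illustration follows; the IPC figures are hypothetical, chosen only to make the arithmetic visible, not measured 68030/68040 numbers:

```python
# Illustrative sketch of the MHz myth: throughput depends on clock rate AND
# instructions per cycle (IPC), not clock rate alone. The IPC values below
# are hypothetical, picked only to illustrate the point.

def mips(clock_mhz, ipc):
    """Rough throughput in millions of instructions per second."""
    return clock_mhz * ipc

m68030 = mips(50, 0.5)   # 50 MHz part with a lower (made-up) IPC
m68040 = mips(25, 1.2)   # 25 MHz part with a higher (made-up) IPC

print(m68030, m68040)    # 25.0 30.0 -- the lower-clocked chip wins
```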

Anyway thanks to Nicholas for his IMO well researched article. ;)

Excellent article!
by Jay on Wed 9th Jul 2003 17:48 UTC

Really great article. We do need more of these on OS News!

Re: various
by whaaa on Wed 9th Jul 2003 17:49 UTC

First off, HT (Hyper Threading) is a form of SMT (Simultaneous MultiThreading), so stop all this nonsense of HT vs SMT!

Second, neither IBM nor DEC (with their EV8) ever claimed a 100% performance increase of an SMT implementation vs. a non-SMT version of the same architecture.

I also have a few bones to pick with the author, since he makes a lot of false claims, for example:

"The amount of voltage the CPU can use restricts the power available and this effects the speed the clock can run at, x86 CPUs use relatively high voltages to allow higher clock rates, "

This statement is so wrong that I do not know where to begin with the nitpicking! phew...

clearly a ppc is better article...
by lamo on Wed 9th Jul 2003 17:49 UTC

Not sure I buy his ideas about this, and I don't think the references were good enough to prove it.

Perhaps it would have been better if it were a comparison of the two architectures and not a drift off into a poorly formatted discussion of many chip architectures.

It is like trying to decide what is the best engine design for any application. If there were a best for "any", it surely would not be the best for each specific application.

It also does not consider the glue. So much of how well a computer works depends on the task, the parts surrounding the CPU, and the tools to do the work. There are many examples of crappy CPUs being very effective because the surrounding kit and code solve the problem better.

Process technology and price are important when you talk about the desktop market. But so are the artificial benchmarks.

I also question how much Linux is really cross platform. Having used both the Itanium and the Alpha versions, it becomes pretty clear that it is an x86 OS with ports that are less than optimized and stable.

Complaints aside, it is too bad they put AltiVec in and killed the double-precision multiply-add instructions. For SPEC this counts as 2 ops. So if you have two FPUs that can do double-precision multiply-add, you get 4 flops per clock. Power4 and Itanium both have this, and it is how they win the FLOPS performance benchmarks and marketeering.

If they had left it in, the PPC 970 would have been the FLOPS leader, above the Power4 and everyone else. IBM probably did not want that... But Apple would have gotten a lot of HPTC customers. Again, IBM would not like that. Instead they have a thing with a poorly optimized compiler that does low-precision floating point OK. They probably should have gotten IBM to port all the compilers to OS X at the same time. That would have helped too.
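The flops-per-clock arithmetic in the comment above (a fused multiply-add counting as 2 ops, so two FMA-capable FPUs give 4 flops per clock) can be sketched as follows; the clock rate is an arbitrary example value, not a spec of any chip mentioned:

```python
# Back-of-the-envelope peak FLOPS, using the convention that one fused
# multiply-add (FMA) counts as 2 floating-point operations.

def peak_gflops(clock_ghz, fpus, flops_per_fpu_per_cycle):
    """Theoretical peak, ignoring memory bandwidth and issue limits."""
    return clock_ghz * fpus * flops_per_fpu_per_cycle

# Two FPUs, each retiring one FMA (= 2 flops) per cycle, at 1 GHz:
print(peak_gflops(1.0, 2, 2))  # 4.0 -> "4 flops per clock"
```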

Re: Well researched and informative
by whaaa on Wed 9th Jul 2003 17:51 UTC

"Most people simply don't undestand that a 50 MHz 68030 isn't twice as fast as a 25 MHz 68040"

Well, the 040 was "double clocked" internally, i.e. the 25MHz 040 was indeed running at 50MHz internally (much like the R4000 for example)

sweet!
by tuna on Wed 9th Jul 2003 17:54 UTC

This is a great article, thanks.

Re: goo
by bobby on Wed 9th Jul 2003 17:55 UTC

True, he may have gotten it wrong - the x86 architecture actually goes all the way back to the 4004. The truth is, when the PC chose the 8088, it was already somewhat handicapped by its ties to the past.

very nice
by Boonders on Wed 9th Jul 2003 18:03 UTC

good article. Good references.

Thanks
by David Ferguson on Wed 9th Jul 2003 18:04 UTC

I am an armchair computer enthusiast. This is the first time I have read anything that even remotely understood the differences in the two different CPUs. I may not have understood everything, but I got enough.

Thanks again
by David Ferguson on Wed 9th Jul 2003 18:06 UTC

I just read my own sentence and saw I made a mistake in my post. What I meant is that this is the first time I have read something that explained the differences in a way that made some sense to me. Most of the time it is more like listening to two preachers going at it over their own particular beliefs.

a swell article
by pjm on Wed 9th Jul 2003 18:06 UTC

Thanks for the informative article. A very concise summary of a difficult topic.

Re RE Multiple
by SteveToth on Wed 9th Jul 2003 18:06 UTC


Also at the end of your article you predict (as many others have in the past) that Intel will hit the 'heat wall'

Microprocessor Report said that, not me.

Problem is that they assume Intel will not change some aspect of their technology. Yes, Intel will have to change some part of their physical design or logical layout in order to compensate for the laws of physics. However, that doesn't mean that x86 code cannot scale to higher and higher speeds - just that the way it is executed will have to change, as it already has multiple times over x86's lifetime.


Both IBM and the Alpha team announced the addition of Multithreading support was expected to give a 100% boost in performance.

When HT arrived it gave only 20% - 30%, I believe it is to be enhanced soon.

http://meseec.ce.rit.edu/eecc722-fall2002/722-9-16-2002.pdf
This class lecture (brought to you via Google with "SMT DEC alpha speedup") proves that yes, in certain cases you can get a 100% speedup using SMT. The problem is that not all applications are going to be able to achieve that speedup (YMMV), and they will have to be recoded (or at least recompiled) for SMT, as it requires the code to be processor aware.
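The range of SMT speedups being argued about in this thread (20-30% vs. the 100% best case) can be captured with a toy utilization model: a second hardware thread can only fill issue slots the first thread leaves idle. This is a sketch with made-up numbers, not a model of any real CPU:

```python
# Toy model of SMT speedup. A single thread keeps some fraction of the
# machine's issue slots busy; a second thread fills part of the rest.
# All numbers are illustrative.

def smt_speedup(utilization, fill_fraction):
    """utilization: fraction of issue slots one thread keeps busy.
    fill_fraction: fraction of the remaining slots the sibling thread fills."""
    idle = 1.0 - utilization
    return (utilization + idle * fill_fraction) / utilization

# A thread already using 75% of the slots leaves little for its sibling:
print(round(smt_speedup(0.75, 0.25), 2))  # 1.08 -> a modest HT-like gain
# Two perfectly complementary threads at 50% utilization each:
print(smt_speedup(0.5, 1.0))              # 2.0 -> the 100% best case
```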

epic architecture please?
by steve on Wed 9th Jul 2003 18:10 UTC

After finishing the article, it does seem to miss some points, but overall the article is good and one of the better reads I've had on OSNews in a long, long time.

Thinking about the x86 strategy in terms of marketing is a pure wonder - however, if Intel had actually focused on creating a better architecture rather than one with many parameters to tweak, such as MHz, cache size, bus speed, hyperthreading, etc., which some marketing guru could overstate again and again, where would we be today?
Don't get me wrong, the x86 is a true piece of engineering excellence: taking something that's not that great and inefficient and making it good enough to satisfy the current user base, to fanatical points where they berate PowerPC users on a common basis. But what if Intel were less marketing driven - could they have come up with something better than x86? I guess that's where the Alpha and EPIC architectures fall in. Makes me wonder about buying anything x86 in the future (i.e. x86-64).

Somebody please do an architecture overview on madison.

8080 and 8086 are unrelated
by goo.. on Wed 9th Jul 2003 18:17 UTC

They are neither binary nor source compatible. Their interrupt, memory segmentation/banking, and I/O modes are completely unrelated. The 8080 is completely hand coded while the 8086 uses microcode. The 8080 has no complex instructions; the 8086 has plenty of them. The 8086 is way more complicated than the 8080, with 16 bit additions (the 8080 can use 16 bit addressing with the BC, DE or HL register couples, btw.)

IOW, the 8086's only relation to the 8080 is that both were designed and produced by Intel. That is it. Intel might have said the 8086 extends their midrange to 16 bit, which was established by the 8080, but technically they are completely unrelated CPUs, designed by different people (the original 8080 designers left to found Zilog) with different philosophies.

Who is dumbass now?

Goo and the lot
by Eugenia on Wed 9th Jul 2003 18:21 UTC

>Who is dumbass now?

I suggest you all calm down in the way you talk over here, or I will mod all the rude comments down.

Clarification
by goo.. on Wed 9th Jul 2003 18:22 UTC

B,C,D,E,H and L are register names used in Z80 version of the 8080 asm. I can't recall the original, 8080 names right now.

I had to stop reading the article
by josh goldberg on Wed 9th Jul 2003 18:27 UTC

because it was making my head hurt. Can you say proofreading? spellcheck? A second grader writes with better grammar. Perhaps you need to put the pipe down a bit sooner before writing your next article.

Let me suggest looking two words up in the dictionary: effect and affect.

I have to agree that this guy doesn't really know what he is talking about. He seems to start with a conclusion and then look for ways of justifying it. The ArsTechnica article is VERY good, but it pretty much requires extensive knowledge of computer/processor architecture.

The author seems to enjoy making broad statements without providing real proof. The Power5 SMT vs. Pentium 4 HT claim is particularly blatant (though I have no doubt that Power5 SMT will provide more improvement than Pentium 4 SMT, I doubt it will double performance, and even then it will only improve parallel stuff - much more important for servers than desktops).

The benchmarking section was also given a cursory treatment. He uses an OSNews post as justification for throwing out ICC results in favor of GCC even though the post doesn't even address that. The part of the post the author refers to correctly points out that SPEC FP performance is NOT indicative of overall system performance because most applications use mainly integer code. This does not invalidate the ICC SPEC FP results or justify Apple's use of GCC. I have read in other locations (Ace's Hardware forums) that ICC does drastically improve performance on real-world floating point intensive code.

I would dismiss this article as blatant fanboyism, but the author seems to believe everything he wrote. Guess he stepped too close to the reality distortion field. Please, throw out this article and look elsewhere (Ace's Hardware and ArsTechnica are both VERY good sites for this type of stuff).

Let's see...
by roybatty on Wed 9th Jul 2003 18:32 UTC

He builds powerpc systems. I wonder what his conclusions will be....hmmm..

Very well done
by Nate Downes on Wed 9th Jul 2003 18:34 UTC

Glad to see this, good job Nicholas.

Now the world may end, Bouma and I actually agreed on something. 8)

too many hostile people around here
by sajiimori on Wed 9th Jul 2003 18:42 UTC

Anyway, I think this article is best viewed as a brief summary (and a pretty decent one), not as proof of any hypothesis or as an argument.

There was one statement that stood out as particularly wacky to me, though: the comparison of programming language, programmer, and CPU in terms of their relative importance to the resulting performance.

That's like asking: What's more important for a good driving experience, the steering wheel, the pedals, or the engine?

The obvious answer: All are critical for good performance, and a deficiency in any can bring down the whole system.

Well...
by Richard Fillion on Wed 9th Jul 2003 18:42 UTC

While I liked the article, and I'm a die-hard anti-x86 guy, I have problems taking the article as a whole very seriously. I liked what he had to say, and I don't think he made many SERIOUS mistakes, but it was obvious from the beginning that he was going to make this a PPC 0wNz x86 article.

I liked the bit about the Alpha outgunning the P4 though; god, I want an EV7 box!

Re: Roy
Yes, the Ars articles actually provide content instead of fanboyism. They are well researched and good reads.


The author seems to enjoy making broad statements without providing real proof. The Power5 SMT vs. Pentium4 HT is particularly blatent (though I have no doubt that Power5 SMT will provide more improvement than Pentium4 SMT, I doubt it will double performance and even then it will only improve parallel stuff - much more important for servers than desktops).

IMO, SMT will not speed up servers (file, web, DB, etc.) that much. The tasks that most 'servers' do are more single threaded and not prone to parallelization in the way that will reap the benefits of SMT. Not to say that the increased resource sharing SMT allows will not be good, but the 100% speedup (or more) that is possible with SMT in certain applications will not be achieved. Video games have more potential for improvement via parallel algorithms (graphics rendering can be highly parallelized).

Web serving doesn't require much CPU, and the output can't normally be generated in parallel. Even dynamic content can't usually be generated in parallel. Google is the shining example of a parallel algorithm, while BBS systems like this page really can't be generated in parallel: you can't generate one part of the page before another, and at the end of the day it's a long string of HTML. Also don't forget that dynamic content mostly uses integer math, so the floating point units are left doing nothing.

blah
by df on Wed 9th Jul 2003 18:52 UTC

well actually this article tells the reader little, if anything, about the PPC. it's all about the x86.
as for the ibm/alpha 100% SMT increase vs intel's 30%:
i gotta laugh sooo hard here. to have a 100% increase would indicate the ppc architecture is so far below the x86 at parallelizing instructions and filling the pipe with uops that it's beyond a joke. i'd bet real-world performance would be in a similar ballpark to intel's 30%.

x86 is built on a 1979 legacy, ppc on a 1993 one. so much has been learned about processor design from '79 to '93 that i would expect ppc to best x86, since it's a new clean design.

so too would i expect ia64 to best ppc, since it's again a new clean design with no cruft, postdating ppc.

and so on.

RE: Well...
by Roy on Wed 9th Jul 2003 18:52 UTC

Yeah, the Alpha rocks. Too bad it is now a dead end design. DEC just didn't know how to market it and Compaq didn't care about it. The Alpha (especially 21064 - is this right?) really was the furthest evolution of RISC.

Re: Roy
by Bascule on Wed 9th Jul 2003 18:52 UTC

The Power5 SMT vs. Pentium4 HT is particularly blatent (though I have no doubt that Power5 SMT will provide more improvement than Pentium4 SMT, I doubt it will double performance and even then it will only improve parallel stuff - much more important for servers than desktops).

HyperThreading is a hack designed to utilize execution units of the P4 which sit idle as a tainted trace cache is cleared and its pipeline is repopulated following a mispredicted branch. If you work the numbers on the Pentium 4, you'll find that the percentage of time its execution units sit idle is approximately equal to the percentage of branch instructions in the code it is executing.

SMT in the Power5, on the other hand, is designed to leverage the full power of a dual core processor by allowing the pipelines to pick and choose which execution units to send decoded instructions to, with the assumption that the entire pool of execution units on both cores won't be completely used at a given time when they are being fed by only two pipelines.

Re: Roy
by Anonymous on Wed 9th Jul 2003 18:56 UTC

Why is it that whenever we get an article on this site that praises PPC, Apple, Mac OS, etc., there are several who respond saying that it's just a Mac fanboy article... It seems that the signal-to-noise ratio on these boards gets worse by the day.

Great Article
by stingerman on Wed 9th Jul 2003 18:57 UTC

The most accurate and sincere attempt to lay out the facts in an environment that is filled with so much fear, uncertainty and doubt. I've been following the x86 processor family since the first PC was released. My first PC was a Radio Shack system; I learned to program on an IBM PC using BSD Fortran 77, then Pascal, and of course I taught myself BASIC as well. I bought a 1984 Mac and marveled at the 68000 processor. I marveled at the 80286 and remember my excitement when I got one of the first 20 MHz PCs - it made my dBase code smoke. I also had the privilege of working on the S/38, which eventually became the AS/400, and I marvelled as IBM converted it over to the Power platform. The 80486 was most excellent with its virtual 86 capability, and of course I was thrilled when Intel finally got the Pentium done right with the Pentium III. I was disappointed with the Pentium 4 and still am, because I felt that Intel sold themselves to the marketing side. We all know that the Pentium 4 was a bad deal compared to the Pentium III until it broke 2 GHz; AMD taught Intel a lesson for that blunder and took a major chunk of their market share with what is now the Athlon. The Itanium is also a big disappointment, and it appears that the Opteron and Athlon 64 will once again get more attention than Intel's parts. I believe this is because Intel's engineers strayed from their discipline when they compromised on the Pentium 4, and it has been a long road back to excellence.

In the meantime, Intel left the market wide open for IBM and their 970 processor is just amazing, it truly is one of the most exciting developments I have seen for some time in the desktop world. To think that we have a processor that is a superset of the Power4 core and even faster, makes me excited. I was also blown away with the G5's architecture, it really is a new generation of machine and not an incremental change.

I'm not surprised, really, that Intel's ICC compiler vectorizes SPEC's FP-intensive code. It really is rigging to the nth degree. And I am not surprised that journalists in general do not do their due diligence. But you are starting to restore my trust that there are still those out there who are willing to do some research before writing an article. And congrats to the OSNews editorial staff for having the courage to publish it. Great writing, and I'm looking forward to reading more from you.

Great reading!
by Bobthearch on Wed 9th Jul 2003 19:01 UTC

Great article; I'd like to see more like it. In particular I'd like to know more about the less-common processors, and their operating systems and software.

Obviously readers can find flaws with anything; that's what the "Comments" area is for after all. But there's no reason to be rude.

Best Wishes,
Bob

re: 21064
by df on Wed 9th Jul 2003 19:01 UTC

The Alpha (especially 21064 - is this right?) really was the furthest evolution of RISC

the 21064 was a REAL dog, but then again it was the first generation (EV4).

Wintel zealots need to calm down!
by Jason L. on Wed 9th Jul 2003 19:02 UTC

Man!

You Wintel guys have ZERO credibility anytime you let your blatant fanboyism for an inferior system get the best of you.

The article was accurate for the level of depth it put forward. Sorry if it doesn't jibe with your revisionist methods of viewing the history of personal computers.

iGeek columns...
by Mikey T. on Wed 9th Jul 2003 19:05 UTC

Along with Ars Technica columns, be sure to check out David K. Every's articles on the same subjects... http://www.igeek.com/articles/Hardware/Processors/

He's great at explaining how things work and why and which are better suited for specific applications.


By the way, there is no such thing as an unbiased opinion.

Funny..
by SteveToth on Wed 9th Jul 2003 19:06 UTC

I guess you get what you pay for; as this website is free, you can't really expect much from it. The comments have more true information than this article. Sigh.

Wondrous Article
by Shawn on Wed 9th Jul 2003 19:07 UTC

This is the kind of Article I would love to read on OSNews all the time. Well written, referenced, and professionally done. Not like a lot of the fetid tripe that dares call itself a "review" that gets posted here (Eugenia's articles excluded of course).

sse2 vs. altivec
by Sagres on Wed 9th Jul 2003 19:07 UTC

I've always liked 3DNow! better because it can do two operations per clock, because games don't need 128-bit vectors, and like Tim Sweeney says: "Since register-memory instructions are as fast as register-register instructions, I don't usually need to use more than 4 registers"


:-P

Its not a fair comparison
by stingerman on Wed 9th Jul 2003 19:12 UTC

I agree that it is not fair to compare the 970 to the P4 or even the XEON. Intel simply does not have a modern processor to compare against the more advanced design of the 970. The real comparisons will happen against the Athlon 64. The way I see it the categories of comparison look like this.

High-End Server:
Opteron vs. Itanium vs Power4

High End Desktop/Workstations/SMB Servers:
Athlon 64 vs 970 (G5) vs ? (nothing from Intel yet)

Legacy Desktops, legacy servers, current notebooks:
Pentium III vs. G3 vs. P4 vs. PM vs. Athlon vs. Xeon vs. Athlon MP

This is a generational shift and right now only the Athlon 64 and 970 are in play for the next generation desktop.

SPECint2000 and SPECfp2000 test methods
by Mark on Wed 9th Jul 2003 19:13 UTC


Nicholas Blachford,

Good article. I enjoyed reading it.

- Mark

RE: Bascule
by Roy on Wed 9th Jul 2003 19:16 UTC

Yup, you are probably right. My main point was that the guy was pulling numbers out of his arse. Like I said in my original statement...

"though I have no doubt that Power5 SMT will provide more improvement than Pentium4 SMT"

The 100% improvement just sounded inflated to me. There are certainly cases where SMT will provide large performance increases, but we aren't talking about a 100% improvement in most cases.

RE SteveToth: Yup, you are right too. I never meant web-servers, but it looks like SMT/HT helps more for heavy computation tasks (scientific, multimedia editing, games possibly someday). Servers (web/database) in general are more I/O bound.

iGeek agrees
by stingerman on Wed 9th Jul 2003 19:16 UTC

From a March 2003 article, it appears Apple beat even his optimistic forecast.

http://www.igeek.com/articles/Hardware/Processors/x86-64vPPC-64.txt

If you care about 64 bit, we're probably going to see it significantly effecting the Mac market around 2004-2005, and in the PC market around 2008-2009. Not because of technological issues (though there are some of those), but mostly business and market issues. I'm a technology guy, I wish the technology was all there was. (Technology is much more clean and pure than politics and business markets). But if you don't understand business and markets in this industry, then you don't understand jack.

PC advocates will talk about how they had 64 bit first; but ignore that they did it poorly, it doesn't effect much, and will take forever to actually gain momentum. Mac users will likely be seeing any benefits from 64 bit computing, far sooner. In fact, the most likely way that you'll see 64 bit x86 adoption is if it comes from Apple in the form of OS X ported for AMD.

Don't get me wrong; I hope I'm wrong for the PC markets sake. I have no problems with AMD, and I like their x86-64 implementation. It would be great if this summer AMD was ruled the winner and the entire PC market adapted x86-64, and Intel licensed it, and there was no more war or world hunger, and dogs and cats could live together in peace; but I just don't see that happening.

RISC vs. CISC
by jbett on Wed 9th Jul 2003 19:18 UTC

Many of you talk about CISC pulling ahead of RISC, but many of you forget that Intel had to basically make their processors RISC-like to compete. These days RISC is more like CISC and CISC is more like RISC; we confuse the two a lot. Take a look at the instruction set of a PowerPC processor compared to that of an x86 processor and tell me that I'm wrong.

Re: Its not a fair comparison
by Richard Fillion on Wed 9th Jul 2003 19:20 UTC

Right, cause the only people that make processors are IBM, Motorola, Intel and AMD.

Re: iGeek agrees
by goo.. on Wed 9th Jul 2003 19:21 UTC

It would be great if this summer AMD was ruled the winner and the entire PC market adapted x86-64, and Intel licensed it,

FYI, Intel doesn't have to license anything to use x86-64. They can just go ahead and implement it and lose nothing but face. That kind of x86 extension is already covered by an old cross-licensing agreement between Intel and AMD.

re: 21064
by Roy on Wed 9th Jul 2003 19:23 UTC

The early generations of Alpha really took the RISC principles (read KISS) to the extreme. Maybe I'm thinking of the 21164 rather than the 21064, but I know that later generations (21364 and possibly 21264) started using things like instruction reordering that are less in line with the principles of RISC. The 21264 and 21364 were great designs, but they weren't as "pure" RISC as the earlier generations. EPIC takes the RISC ideas of letting the compiler do the work one step further (though I haven't seen any evidence that this is paying off yet).

Roy, you're wrong
by stingerman on Wed 9th Jul 2003 19:28 UTC

Simultaneous multi-threading (SMT) is designed to convert threading into instruction level parallelism (ILP). That is its main purpose. It does very little good on processors that have a low degree of parallelism and whose OSes and development frameworks do not promote asynchronous processing. Windows and COM+ are not very well threaded. Though the COM+ environment allowed better threading, it was difficult to program in the unmanaged VS 6 environment. Only with .NET has Microsoft started to emphasize delegation of threads and asynchronous programming, but it is a very large framework and will take a couple more years to mature.

This is not the case with OS X, which is a highly threaded Unix-based OS, and the Cocoa framework is very mature, having been in development since NextStep in the late 80's as a truly object-oriented Smalltalk-type environment. This, coupled with the fact that both IBM and Apple have a long history of developing for multiprocessing systems, as well as providing a highly parallel processor in the 970 and future 980 designs, clearly shows that it is not only possible but more than likely that many operations will achieve close to a 100% performance increase with IBM's implementation of SMT. SMT thrives on ILP, and the P4 greatly lacks ILP. That's just a fact; it is not meant as a personal insult, so get your emotions out of it already. Intel may be forced to step up to the plate with a competing design, and wouldn't that be a good thing? If I were you I would be promoting competition; it's healthy and will benefit the Intel zealots in the end as well.

By the way, databases and transaction-based systems thrive on multi-threading. It's games that currently prefer single-threading, but that is changing as well; take a look at Quake on an SMP Mac - it rocks.

Why CISC isn't so bad
by Anton Klotz on Wed 9th Jul 2003 19:37 UTC

As Nicholas pointed out in this article, CISC instructions are hard to decode: they are more complex and have varying lengths. But this also means that a CISC instruction carries more information from memory to the processor than a RISC instruction does. Nicholas also stated that the bottleneck is the processor <-> memory connection. So you can regard CISC instructions as a kind of compression scheme: more information is transported to the CPU, which has time to decode it into something it can handle optimally.

I can't provide a link, but IBM is considering integrating GZIP units at the memory controller and at the processor for its zSeries, so the data are compressed before transfer.

Anton

On a side note, Panther adds another level of Parallelism
by stingerman on Wed 9th Jul 2003 19:41 UTC

Unless you have completely closed your eyes, you'll have noticed that OS X 10.2 added the GPU as another processor to offload some OS duties for the GUI, in the form of Quartz Extreme. OpenGL was already hardware-accelerated, but Quartz Extreme allows all the compositing to take place on the GPU, freeing up the CPU. In addition, since the AltiVec unit is truly orthogonal, shadows and pattern fills, along with six other desktop drawing functions, are handled by the vector unit while the rest of the processor core is free to do what it needs.

Now, with the release of Panther, Apple has added windowing and scrolling to Quartz Extreme as well. It's never been faster or smoother, and the CPU is even more free to handle the actual apps. Panther will greatly benefit all Macs with G4s and up. G3 Macs will also benefit from highly optimized scalar libraries that now outperform the well-tested OS 9 libraries (Jaguar had previously achieved parity). It should be a fun winter for hobbyists.

Re: Goo
by bobby on Wed 9th Jul 2003 19:43 UTC

Intel pushed the 8088 as the "next" 8080, while the Z-80 was Zilog's (loaded with former Intel engineers) vision of what the next 8080 processor should have been. The 8086 was just an 8088 with a 16-bit data bus. I do not know if the binaries were compatible, and I know the mnemonics were extended, but the idea was to be able to carry your 8080 software forward to the 8088/8086. If there were such drastic differences, then it would be my guess that Intel missed the mark. But from what I have seen, the 80486 and 8080 appeared very similar at the assembly level (sorry, I have not done enough Intel assembly to have a real feel for it). I only encountered 8080 code because we used a C compiler that generated 8080 assembly to run on our Z-80s 15 years ago.

But the bottom line is that Intel intended the 8088/8086 to be a 16-bit extension of their 8-bit 8080, which came from the 8008, which owed its start to the 4-bit 4004 processor used in early calculators.

They transitioned to a micro-coded architecture along with everyone else in the 80s. But the architecture was not improved by it; it just allowed the architecture to be extended one more time. The old single-accumulator design persists even in the P4. The x86 line is a 1970s architecture that has been tweaked into the future. The PPC is a 1990s architecture that is near the beginning of its life, designed from day 1 as 64-bit (the 32-bit processors are implemented as a 64-bit processor with the extra bits removed). The x86 is an 8-bit architecture that has been extended ever since: the 8-bit A register became AL, an AH was added to form the 16-bit AX, and when they went to 32 bits another 16 bits were attached to form EAX.

You still have to shuffle the registers so that all math involves the AX register. You still have all the segment-register nonsense, to maintain compatibility with the 80186/80286 attempts at protected-mode operation. Only with the 80386 did they add flat 32-bit memory.

Do not get me wrong: Intel has done a wonderful job at keeping the platform going. I have been declaring it dead since the 80286 came out, but they keep tweaking the speeds up. The Itanium, though, is their concession to the eventual death of the architecture. AMD seems to want to keep it alive by providing for efficient operation of 32-bit code.

Bottom line: the x86 is like an old 60s muscle car. They have tweaked the engine so that it has the speed of a sleek new Porsche ... but the Porsche does it with an engine that is half the size and double the gas mileage.

The x86 is bigger, requires twice the clock speed, and generates four times the heat to do the same amount of work as the PPC. They may be about the same speed, but the PPC has a lot more room to grow.

Apostrophe Catastrophe
by toddhisattva on Wed 9th Jul 2003 19:46 UTC

Dude -- an apostrophe does not mean "watch out, here comes an 's'!!" Possessive pronouns ("its, hers, yours") do not have apostrophes. Use an apostrophe in a contraction: for instance, "it's" means "it is" and the apostrophe stands for the (space and) vowel. I wanted to read the article because I'm a big-time architecture geek, and couldn't because of all these trivial errors --- whaaaaa! :-( Factually, you seem to understand x86 about as well as Hannibal over at Ars understands PPC, so this might make a good companion piece, but again I can't tell because of the frustration of de-skewing the apostrophe catastrophe --- whaaaa! :-(

Interesting article
by Nymia on Wed 9th Jul 2003 19:46 UTC

I wonder why some basic features were not covered, like out-of-order execution and branch prediction, which seem to be the major items commonly found on current IA processors.

What about Pipelining, any ideas on that one?

Context Switching Faster in PPC vs x86
by hansnyc on Wed 9th Jul 2003 19:48 UTC

Interested, I checked out the MorphOS website; a paper about MorphOS "in Detail" says what's quoted below. I think this would have been a big point in the article, but it was not mentioned. Is it true, and how is it that context switching is 10x faster? And, more importantly, is that fast enough to provide a speedy OS?!

Thanks for the good article.

Microkernel Vs Macro Kernel

A common problem encountered in the development of microkernel Operating Systems is speed. This is due to the CPU having to context switch back and forth between the kernel and user processes, context switching is expensive in terms of computing power. The consequence of this has been that many Operating Systems have switched from their original microkernel roots and become closer to a macrokernel by moving functionality into the kernel, i.e. Microsoft moved graphics into the Windows NT kernel, Be moved networking inside, Linux began as a macrokernel so includes everything. This technique provides a speed boost but at the cost of stability and security since different kernel tasks can potentially overwrite one another’s memory.

Given the above, one might wonder why Q can be based on a microkernel (strictly speaking it’s only “microkernel like”) and still be expected to perform well. The answer to this lies in the fact that MorphOS runs on PowerPC and not x86 CPUs. It is a problem with the x86 architecture that causes context switches to be computationally expensive. Context switching on the PowerPC is in the region of 10 times faster, similar in speed to a subroutine call. This means PowerPC Operating Systems can use a microkernel architecture with all its advantages yet without the cost of slow context switches. There are no plans for an x86 version of MorphOS; if this changes there will no doubt be internal changes to accommodate the different processor architecture.

Why the insults? Debate on facts, PLEASE!
by Sabrina on Wed 9th Jul 2003 19:51 UTC


Why the heavy-handed treatment of the author by some here? If you disagree, do so in a reasonable manner.

Eugenia, you shouldn't even have to warn people about their tone and language with an article like this.

Lately it seems like Windows fans are worse than Mac fans in their worship!

good article!
by the arbiter on Wed 9th Jul 2003 19:53 UTC

A good article for the entry-level (me). I'd like to see a lot more of these informational articles.

A personal note: I come here a lot less these days. Why? The level of discourse on these boards has gone to hell. Most of you sound like second-graders, and that's being really generous of me.

Nice attempt
by Harbinjer on Wed 9th Jul 2003 19:53 UTC

This article isn't all that it looks to be. Check his sources; some are just a step above marketing speak. He has very few hard facts, and mostly opinions. You should really check out Ars Technica and Ace's Hardware, as others have suggested, if you want the real story. They have much more in-depth analyses with real facts, and even benchmarks that test certain subsystems to make sure they are right. Mr. Blachford attempts to speak with authority, yet he just doesn't seem credible, especially compared to all the better sources out there. An OSNews comment is hardly an authoritative source. This is more like a college freshman's lab report.

That doesn't mean he is totally wrong. He is quite right that the x86 is highly inefficient and should probably have died years ago, but it keeps getting more complex and faster.

Regarding ICC: yes, it is somewhat biased, but it really can auto-vectorize code, which means its benchmarks are far more believable than Apple's old Photoshop tests with hand-optimized assembly. I'm not saying he's completely wrong, just that ICC can be that fast in real applications, without requiring hand-coded assembly.

He seems blatantly biased towards the G3-G5 CPUs, but just because he's biased doesn't mean he's wrong. They are highly efficient, low-power CPUs. The P4 really does seem more market-driven than engineering-driven.

Power consumption is a very complex field. More than one or two factors determine why a processor consumes more or less power. Nicholas writes that Intel uses high-speed transistors, which consume more power. It is true that faster transistors can waste more energy: first, leakage current is higher, and second, you have to overdrive the base of the transistor with a higher voltage to make it switch faster (oversaturation). But on the other hand, to reach higher clock rates you can make transistors smaller and reduce the voltage, because a smaller transistor needs fewer electrons injected into its base area to reach saturation.

A great power consumer is the clock tree. Alphas are very power hungry due to their clock tree, which is a mesh with a very high capacitance; to make the clock tree switch fast, a lot of power has to be pumped in. I don't know exactly which clocking structure Intel uses on its chips.

All I want to say is that the reasons why x86s are power hungry are more diverse than just the fact that Intel probably uses high-speed transistors.

Greetings from Anton

Okay, still somewhat biased...
by Brian on Wed 9th Jul 2003 20:03 UTC

The article was okay, but still somewhat biased, especially in concluding that RISC processors have always been faster. In my experience over the past 10 years comparing scientific programs on different architectures, especially Suns and HPs, I've consistently seen average desktop x86 machines handle more than 2x the throughput of cutting-edge RISC boxes costing more than 20x as much.

It all comes down to real competition. The funny part is that everyone always predicts that linux will fragment. What Linux has been doing is defragmenting the hardware vendors.

Freedom from vendor lockin to hardware! Down with MS! Down with Apple!

Man I wish DEC would have gotten a clue and tried to push the Alpha into the consumer arena. But in those years MS truly had a ton of lockin...

Good article
by emagius on Wed 9th Jul 2003 20:03 UTC

As a fledgling computer engineer (computer systems and architecture), I enjoyed reading this article, despite the oddly colloquial writing style. Likewise, even though it's not directly OS-related, such an article is representative of what I'd like to see more often at OSnews instead of the non-informative, highly opinionated, unresearched drivel we unfortunately seem to get so much of. Kudos!

::sigh::
by Dawnrider on Wed 9th Jul 2003 20:05 UTC

Honestly.... this entire conversation is stupid. Most of you know very little about processor design and the merits of different schools of thought.

For example, stingerman's conjecture that Quartz (Extreme)'s use of the GPU as a secondary processing engine is a great idea is frankly daft. To put it simply, you don't need your windows to warp, spin, etc. No matter which way you look at it, you are creating extra system load, and the idea of keeping an independent framebuffer per window in memory is insane and has predictable results.

Anyone who looks at PowerPC vs. x86 architectures will come to the conclusion that the RISC vs. CISC argument is a dead one. Effectively both architectures have reached a point where they rely on a RISC core with a translator and interesting caching and processing units to compensate.

Moreover, the heat output and speed of x86 and PPC architectures are much the same in mass-market products. The Pentium 4 is a high-clock, low-IPC architecture; the Athlon and PPC head in the other direction. At the end of the day, however, the actual performance is inherently similar, and so are the heat outputs. Comparing the heatsink on my Athlon XP to that on my friend's G4 indicates similar levels of heat dissipation.

At the end of the day, I do appreciate that the Mac users here (and indeed the majority of posters seem to be Mac users) would like to crow about the 970, but as the recent benchmarks and more in-depth analyses have shown, it delivers about 90% of the actual performance of the current Athlons/P4s. There is nothing wrong with that, but it is not a revolution of any kind.

Ultimately, you will always find that the PPC architecture will perform around 70-95% of current x86 architecture in the consumer market and this will remain the case, simply because processor design is admittedly complex and we've not seen massively revelatory new designs in recent years. Enhancements yield limited percentage improvements in speed, but ultimately, that is that. We haven't seen a consumer (<- I emphasise this word) development emerge in recent years which has come from nowhere and doubled the speed over current systems. It doesn't happen in design, and to be frank, it will only appear due to entire process changes to take advantage of new materials or migration to quantum computing or the like. PPC will never see a significant lead over x86 due simply to economies of scale. More x86 are being sold and more people are working on enhancements. That's life, and it may as well be dealt with.

To be frank, Mac users need to work out that their machines are more than ample for the tasks they put them to, regardless. Even migration to the 970 will yield a limited benefit over a high-end G4; in fact, it will perhaps not even be massively noticeable in many places. Software optimisations could easily be more worthwhile than the upgrade.

He's not writing a book
by stingerman on Wed 9th Jul 2003 20:08 UTC

How much do you want him to cover in a short article? He hit the salient points, and we all understood what he meant. Great article: easy for even the lay person to understand the gist of it and feel intellectually satisfied.

By the way, my OS X is automatically spell-checking everything I type in this forum, and actually allows me to context-switch to the right spelling. It is so cool!

One more thing: Apple provides its developers a very nice vector library that will automatically fall back to scalar code if a vector unit is not present. It will even optimize between the different generations of vector units (it's called vecLib). But Apple chose not to use vecLib in the SPEC test. Vector unit vs. vector unit, Apple wins hands down, so stop justifying ICC's auto-vectorization. No one uses ICC anyway unless you're an Intel engineer or an obscure developer; we use VS or Borland in the Wintel world. ICC is mainly used for benchmarking.

Wow, Dawnrider, you must be a 3-D developer!!!
by ILoveWindows on Wed 9th Jul 2003 20:15 UTC

Otherwise you wouldn't be able to make such blanket statements such as "the PPC architecture will perform around 70-95% of current x86 architecture" with a straight face, and with some facts behind you, no doubt!!!

So tell me, why did the G5 smoke x86 here:

http://www.luxology.net/company/wwdc03followup.aspx

Now, since you're a developer, you should easily be able to show why this windows-predominant shop wasn't able to correctly gauge the speed of the relevant processors.

I await your informed, technical reply with great anticipation!!!

Dawnrider, you're wrong
by stingerman on Wed 9th Jul 2003 20:19 UTC

Your Quartz Extreme observations are wrong. Offloading more and more processing to the GPU is state of the art in computer science circles, and much research is being done on it at the university level. The more work OS X directs to the GPU, the more the CPU is free to do other work, and Apple's implementation is not about waving windows around (like in the Longhorn demos) but about actually speeding up the whole system. And Quartz Extreme does.

You're referring to old PC tricks to speed up screen draws; Apple's Quartz Extreme is implementing university-level research into the future of computing. One MIT study showed that using the GPU for indexing a database can increase performance by up to 30 times with current GPUs. Apple promised at the 2002 WWDC that they had only begun to exploit the GPU for the OS, and they are showing even more of this work in Panther. Why do you think Microsoft is so loudly touting similar technology for their future Windows 2007?

More of the same old garbage...
by Megol on Wed 9th Jul 2003 20:22 UTC

Facts: CISC vs. RISC doesn't matter. All conventional processors are moving towards the "heat wall".
That a 3GHz P4 consumes over 100W (peak) and a 1GHz G4 only consumes ~10W is not relevant as:
(1) the G4 is a low power embedded processor and the P4 is a high-end workstation processor
(2) the P4 is clocked 3 times higher than the G4, has a higher-bandwidth interconnect and has more cache.

Uhhh.....Megol???
by ILoveWindows on Wed 9th Jul 2003 20:24 UTC


G5?

Hello???

Wow, that's news to me. I always thought it was a desktop processor. You must be thinking of the Xeon.

two types of people here
by Harbinjer on Wed 9th Jul 2003 20:29 UTC

There seem to be two types of people here: the electrical/computer engineer type, and the GUI/widgets/fonts/web designer type. The former may think that this author is simply wrong or incomplete. The latter don't know much detail about processor design. As an example: do you understand what pipelining is and why it is good?

Therefore the Mac folks (mostly the second type) think that the engineers are full of it and simply flaming the author (and some are), while many of the engineers are pointing out that he is just plain wrong in some of the things he says.

re: sabrina & bobby
by goo.. on Wed 9th Jul 2003 20:30 UTC

Sab: I haven't read the article past the quote I made. I like wasting my time proving people wrong, but I don't like wasting my time for absolutely nothing. That sentence was a sign of things to come, and I didn't bother to read the rest, so I didn't even know the article was slanted towards PPC until reading the comments. As such, my Windows fanboyness (of a platform I don't use and am not a fan of) is a misunderstanding on your part.

Bob: the 4004 and 8086 are not related (except in the lame MS joke that ends with "... 1 bit of competition") either; the 4004 has no architectural descendants. The 8088 and 8086 are similar (even identical at the software level), but the 8080 is a different beast. Compare:
8080: interrupts are handled via a specialized function call (1 byte long, otherwise identical to CALL)
8086: interrupts are handled via vectored INT instructions that push the flags and return address
8080: flat 16-bit addressing with 8-bit GP registers
8086: segmented 20-bit addressing with 16-bit registers; no real general-purpose registers except the accumulator
8080: no integer arithmetic except ADD and SUB; no loop, floating-point, indexed or string-handling instructions
8086: you know what the 8086 has
etc.
Of course Intel didn't start over as if the 8086 were their first CPU; there are bound to be more similarities between the 8080 and 8086 than between, say, the 6502 and 8086. However, you can't even trivially modify 8080 code to assemble on the 8086. The names 8080 and 8086 imply a stronger link than actually exists. See if you can read the following 8080 code (CP/M operating system manual, 1982 edition, pages 212-213, lines 186-199):
(cpmspt, noovf, and unatrk are labels)

inr m        ; increment sector number at (HL)
mov a,m      ; A = (HL)
cpi cpmspt   ; compare with sectors per track
jc noovf     ; jump if no overflow yet

mvi m,0      ; reset sector number to zero
lhld unatrk  ; HL = (unatrk), the current unallocated track
inx h        ; HL = HL + 1
shld unatrk  ; (unatrk) = HL

xra a        ; clear A (and the carry flag)
...

My good man, I'm convinced
by Lovechild on Wed 9th Jul 2003 20:33 UTC

My next computer will be a Mac (G5-based if I can afford it at the time), not because of Mac OS X (I couldn't care less about that), but because the CPU is low-heat and high-performance. I'm going completely insane over the immense noise level my current AthlonXP 1600+ is producing.

I considered the C3 but it seems a tad underpowered for most of my tasks but not by much, I'm however concerned that the Eden platform is locked down so I can't replace say my gfx card should it be needed at some later stage.

I've always looked at the Macs and admired their clean solutions, and now I simply must own one...

Megol...."clocked 3 times higher than the G4"
by ILoveWindows on Wed 9th Jul 2003 20:34 UTC


Listen, Megol, I'm not the very best at math, and I certainly don't want to defend Motorola's clock speeds, but the fastest G4 right now is 1.42ghz.

Now to the math:

1.42ghz x 3 = 4.26

Is there a 4.26 P4?

Is this "new math"?

Re: Harbinjer.....
by ILoveWindows on Wed 9th Jul 2003 20:36 UTC


So which type are these people?

http://www.luxology.net/company/wwdc03followup.aspx




G4@1.42GHz
by goo.. on Wed 9th Jul 2003 20:39 UTC

The best G4 Motorola produces is 1GHz; Apple overclocks them. You can *probably* overclock a P4 to 4.26GHz too, if you can cherry-pick which P4 to overclock. With exotic cooling methods much higher frequencies have been achieved.

Alternatively, you can disregard out-of-spec frequencies. Then, yes, there exist 3GHz P4s (which is 3x the G4's 1GHz).

Well...
by LAsuit on Wed 9th Jul 2003 20:42 UTC

I did enjoy the article. But, the commentary has made my day.

goo says...."Apple overclocks them"
by ILoveWindows on Wed 9th Jul 2003 20:43 UTC


Do you have any evidence for this, goo?

Apple has said that they don't, so maybe you have some reference to back up your claim?

And please show me where a non-overclocked P4 at 4.2ghz is.

Facts, not "rumor", please.

This can't be done!
by jefro on Wed 9th Jul 2003 20:44 UTC

The PPC platform is so different that this is a useless discussion. Some think of the PPC as only a Mac thing, but IBM has been selling top-of-the-line, mission-critical professional machines based on the PPC platform for many years, and even industrial machines run PPC every day. This is no contest: PPC can be one of the best processors for any task it is designed for. The x86 can never be designed for a mission-critical task; it started as a toy and should remain that way. It has very basic design flaws.

Fact: Motorola doesn't sell >1GHz G4s
by goo.. on Wed 9th Jul 2003 20:48 UTC

Fact: IBM doesn't produce plain G4s.
Fact: Apple apparently sells >1GHz G4s
Fact: Except Motorola and IBM, nobody produces G4 cpus.

Baseless speculation without evidence nor reference, fanboy yada yada yada: Apple overclocks 1GHz motorola G4s.

Feel free to draw a different conclusion from these facts. ;)

goo ;-)
by ILoveWindows on Wed 9th Jul 2003 20:56 UTC

Ok, facts are facts, right!

:-)

Hey, I think Moto has been sucking on the gas-pipe regardless of the "facts", and I am no fanboy of either platform.

The G5's however, are a whole new ballgame, and the competition is good for everyone.

Re: goo
by Nicholas Blachford on Wed 9th Jul 2003 20:56 UTC

However you can't even trivially modify 8080 code to compile on 8086.

That's not what Intel said; perhaps I should have quoted the full release...

"The Intel 8086, a new microcomputer, extends the midrange 8080 family into the 16-bit arena. The chip has attributes of both 8- and 16-bit processors. By executing the full set of 8080A/8085 8-bit instructions plus a powerful new set of 16-bit instructions, it enables a system designer familiar with existing 8080 devices to boost performance by a factor of as much as 10 while using essentially the same 8080 software package and development tools."

-- Intel Corporation, February, 1979


The way I look at it, it's really an argument about which instruction set you like to program in: ARM, MIPS, Alpha, SPARC, PPC, or x86. All of these CPU architectures are capable of reaching the Nirvana clock speed all CPU geeks seek. It's just a matter of a lot of money, 18 to 48 months of time, and good, experienced CPU micro-architects and quantum mechanics ("transistor tweakers").

What we need to recognize is that operating systems have become stable, if not boring: Windows 2000/XP and Mac OS X are based on research from about the late 80s. The memory and code-size limitations of the past are also a distant memory in the desktop and server space, which drove feature-rich, similar OS cores. And if you remember your history, Tru64 Unix (Mach), Mac OS X (Mach), and Windows NT (a Cutler design, ex-DEC) are all based on modified microkernels.

This has moved all the CPU architectures off their polar positions. They began rationalizing their ISAs to better meet the needs of software, converging on a pretty stable foundation of C/C++-based, multithreaded, multi-user operating systems, and on an almost homogenized feature set: core CPU instructions, floating-point instructions (single precision, double precision, paired single), debugging instructions, DSP-like instructions (multiply-accumulate, etc.), and vector instructions, as needed to support the market segments and applications each was moving into.

This put more pressure on CPU micro-architects to innovate, since there was going to be less innovation coming from ISA extensions. So they had two choices: fast-clock, narrow, super-pipelined architectures, or wide, slower-clocked, high-IPC micro-architectures. They had to find innovative ways to deal with memory latency (caches, larger register sets, instruction buffers, etc.) and with control-flow issues (branch prediction). Here is where the visionaries evolved, and Alpha was one of the greatest CPU experimentation environments of the last 10 years; they tried all the variations (in-order, out-of-order, dual issue, multi-issue, multithreading, on-chip memory controllers, and more). The big issue today is that all of these innovations drive gate count and chip complexity, which reduces our ability to make bigger innovations beyond waiting for the next process geometry.

When you compare and contrast the P4 and the 970, they both do something similar. If you want to crank up the clock, the best way is a super-pipelined micro-architecture, and to do that at these speeds you need something DEC invented on the MicroVAX processor: cracking the PPC or x86 instructions into simpler internal operations (micro-ops). What's interesting is that Intel has been doing this since the Pentium Pro; I would argue these micro-ops made Intel more RISC than the current classic ISA-level RISC processors. Now that IBM has made the same leap in processor design, it's back to a race over who has the best process technology and the most innovative transistors, with minor micro-architecture tweaks. Also, with the announcement of the POWER5/980 architecture, IBM and Intel are at feature parity again around SMT/HT. Here is some of the best research on the subject: (http://www.cs.washington.edu/research/smt/)

On the power issue of the x86 core, look no further than the Pentium M, one incredible x86 CPU: it matches the PPC G4's 10 watts at 1GHz, with the bonus of an amazing branch prediction unit and 1 megabyte of on-board L2 cache. So this point is moot as well, since it is a micro-architecture and quantum-mechanics ("transistor tweaker") issue.

If you want to see innovation in CPU architecture, look at the following projects, since they are truly driving innovation in CPU design, compiler research, operating systems, and application design. The two MIT projects are based on a MIPS-like instruction set.

http://www.cs.utexas.edu/users/cart/trips/
http://www.cag.lcs.mit.edu/scale/overview.html
http://www.cag.lcs.mit.edu/raw/


(...psstt...Nicholas.....)
by ILoveWindows on Wed 9th Jul 2003 21:02 UTC


(don't worry about it, "goo" doesn't know what he's talking about, but you have to give it to him, he talks a good game!!)

And hey, Nicholas, that is one of the better, factual, non-troll articles that have been here in a while.

You have to expect the heat (sorry, bad pun) from the x86's when you point out the facts to them.

:-)

there's some rude f'ing ppl here
by Anonymous on Wed 9th Jul 2003 21:10 UTC

my sister is a nun and she'd love to take a switch to you!

(you know who you are)

i don't think one person can possibly keep up on moderation chores on the miserable lot of you.

good day.

8086 vs. 8080...
by WattsM on Wed 9th Jul 2003 21:17 UTC

...this is an utterly pointless argument, since it comes down to different interpretations of what Intel's 1979 press release meant.

The two processors weren't opcode-compatible, but they were explicitly designed so that there were one-to-one translations from 8080 to 8086 opcodes, meaning machine code could be translated mechanically rather than reassembled. This is how the infamous QDOS, MS-DOS's ancestor, was created, and is part of why Digital Research eventually sued Microsoft: Seattle Computer Products (if I'm remembering the name right) ran CP/M's 8080 code through just such a translator, and then wrote a native BDOS for their development system. (The original releases of MS-DOS 1.0 actually had a Digital Research copyright string embedded in them because of this.)

So goo is technically right, but he's also being knowingly pedantic, since Blachford's point--that the 8086 was considered a descendant of the 8080 by Intel itself--is also correct.

RE:Lovechild
by Harbinjer on Wed 9th Jul 2003 21:23 UTC

You can get a quiet heatsink and power supply for that Athlon. It will cost you some money, but much less than a new Mac. Just check the decibel rating: something below 40 dB, below 30 if you can. I've heard of power supplies with a 28 dB rating, which is VERY quiet. Plus you can get anechoic tiles to absorb even more noise.

While it may not be quite as quiet as a new Mac, they can be very tolerable. I've heard (or rather, not heard) several Dells where I could barely tell if they were on or off. (How much this matters depends on where you live; in urban areas it can definitely be true.)

Fan technology is no reason to spend $3000 on by itself.

P.S. I would only get a G5 for OS X, because whether or not it's faster than an x86 CPU is probably unnoticeable, except in benchmarks.

Polarised debate
by Nicholas Blachford on Wed 9th Jul 2003 21:28 UTC

It's strange how polarised the debate here is. Not quite as bad as the Mac vs. PC threads, though. I gave up commenting on those a long time ago: although there are good and bad points to both platforms, and of course personal opinions, a lot of people obviously couldn't see this, and attempting to debate sensibly is a futile exercise.

Thank you to those who have made the very kind comments, it took a lot of time and effort and these make it worth the while. Thanks.

To those telling us that it's full of basic errors, please be aware that this is not meant to be a technical reference manual; it's an article for OSNews. I expect I made the odd error or explained things not quite perfectly, but if there are glaring errors please tell us where they are.

--

That said, I note that not many have commented on - or have downright missed - the main point of the article: that CISC processors are NOT the same as RISC, and unless Intel or AMD or someone else comes up with a *very* clever design they never will be.

RISC has the advantage and could outperform x86 CPUs if the effort was put in. That's a very big if, however. x86 has the market and is likely to keep it for some considerable time to come.

Paul DeMone explains this much better than I can here:
http://www.realworldtech.com/page.cfm?articleid=RWT021300000000

And no, realworldtech is not "just a step above marketing". It's the most technical of any of the sites I (or anyone else) have referred to.

You did a Great Job, Nicholas!!!
by ILoveWindows on Wed 9th Jul 2003 21:31 UTC

This place is always going to be a war zone in the comments, but your article was fair, balanced, and quite accurate.

Just like many things, the fact that PPC is more efficient doesn't mean that it is going to be better, or faster, or more wide-spread.

However, it seems to me that at some point x86 is going to have to have liquid-cooling to keep their processors cool enough.

Re: Lovechild, quiet Athlon etc
by Alastair on Wed 9th Jul 2003 21:34 UTC

Are you kidding mate? The new G5 is a lovely machine, but it requires *9 fans* and cannot be that quiet. Get yourself some Zalman bits for your Athlon (not expensive) and it can be completely silent. I'm running an Athlon 2400+ system with no case fans, and it runs stable and cool with virtually zero noise. It can easily be done - it's just that most white box builders don't bother, which is indeed crap!

Re:
by Dawnrider on Wed 9th Jul 2003 21:34 UTC

ILoveWindows: Without using SSE or Altivec, you are really going against the abilities of modern processors, and the results you get are not meaningful. Most especially, if you look at the P4, the classic x87 FP performance is very weak, while using SSE will literally double performance in many cases. Anandtech, Tom's Hardware, Ars Technica and other tech sites will show you that on intensive processes such as 3D rendering, for the first half year after the P4's release, software without SSE2 recompiles relied on x87 floating point and got creamed by the Athlon. Once software was recompiled, performance was better. Apple's little note about disabling SSE on the P4/Xeon benchmarks effectively crippled the floating point performance of that CPU, and they knew that.

That is not to mention that, due to the architectures of the P4 vs. the 970, they will perform differently depending on the detailed formation of the code, such as the sizes of matrices, FP precision required, the formation of loops/conditions and a whole host of other factors. And no, I'm not a 3D developer; I'm a university researcher in image processing (2D, 3D, stereo vision) and I deal with this stuff a lot. You can't just throw a piece of entirely un-optimised code at a CPU and expect the initial response to be true of the capabilities of the chip. In the case of the P4 this is incredibly pronounced, due to the design decisions that Intel took. I disagree in many places with their implementation, and prefer Athlons myself, precisely because they are better at x87 rather than requiring the SSE2 optimisation, but there it is.
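For what it's worth, the scalar-versus-SIMD gap described here can be illustrated with a toy instruction-count model in Python. This is a sketch of the idea only: real SSE/AltiVec gains depend on the code and the chip, and the 4-wide "instruction" counts below are not real cycle timings:

```python
# Toy model of why SIMD recompiles matter: a scalar loop issues one
# multiply per "instruction", while a 4-wide SIMD loop (like an SSE
# packed multiply) retires four elements per "instruction".
def scalar_mul(a, b):
    out, instructions = [], 0
    for x, y in zip(a, b):
        out.append(x * y)
        instructions += 1          # one multiply per element
    return out, instructions

def simd4_mul(a, b):
    out, instructions = [], 0
    for i in range(0, len(a), 4):  # one packed multiply per 4 elements
        out.extend(x * y for x, y in zip(a[i:i+4], b[i:i+4]))
        instructions += 1
    return out, instructions

a = list(range(16)); b = list(range(16))
r1, n1 = scalar_mul(a, b)
r2, n2 = simd4_mul(a, b)
assert r1 == r2                    # same results either way
print(n1, n2)                      # scalar needs 4x the "instructions"
```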

Stingerman: Firstly, "Dawnrider your wrong" should be "Dawnrider you're wrong". Just FYI. I won't argue that offloading processing onto the GPU is a bad thing, because it isn't, but it is only worthwhile if you intend to run some serious vector-based tasks on such a system. Current iterations of rendering software, for example, are using GPUs in that way. The trouble with Quartz Extreme, which I was wishing to highlight, is simply that rendering basic 2D forms in a windowing environment is a minor task for a modern processor. In fact, OpenGL effectively offloads a degree of that to the GPU to start with, which is why the graphics card needs memory for more than just a look-up table, as opposed to simply streaming a framebuffer out to the screen.

My point was that it is wrong-headed to the point of being moronic to take such a ripe source of processing power and then create spurious tasks, such as rendering shrinking windows, to saturate it with. Longhorn is daft in trying to do that as well. Moreover, OS X users suffer, because having a framebuffer-sized chunk of memory dedicated to each window rapidly chews through physical memory once you start using more than a few applications. You add massive overhead to the system and quickly reduce responsiveness if the thing has to start paging to disk to support your graphical excess. You might have no problem watching your windows shrink, spin, etc. when you just have one or two, but if I have >20 windows open at a time (and I do), it would absolutely destroy the performance of my system. In short, get rid of Quartz effects, save memory and use those GPU cycles for more useful work.
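A rough back-of-envelope on the backing-store complaint. The window size, pixel depth and window count below are illustrative assumptions, not measured Quartz figures:

```python
# Rough arithmetic behind the memory complaint: a compositor that keeps
# a full backing buffer per window pays width * height * bytes-per-pixel
# for each one. All numbers here are illustrative assumptions.
def window_buffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2**20

per_window = window_buffer_mb(1024, 768)           # a large window at 32bpp
print(f"{per_window:.1f} MB per window")           # 3.0 MB
print(f"{20 * per_window:.0f} MB for 20 windows")  # 60 MB just for buffers
```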

Mac/Pegasos Uses PCI
by Nymia on Wed 9th Jul 2003 21:38 UTC

I got news for you.

Please see the link below...

http://www.apple.com/powermac/architecture.html

http://64.246.37.205/tech_specs.php

Peripheral design is almost the same as the commoditized brands.

In short, you will pay too much for non-commoditized parts.

RE: stringerman
by Roy on Wed 9th Jul 2003 21:39 UTC

Where did you find out that the P4 lacks ILP? (not trying to be difficult, I just never heard that before) I agree that the programming frameworks provided by MS haven't pushed parallel processing. Still, the 100% improvement in general apps seems farfetched.

Also, please explain how ILP and SMT are related. Again, just trying to learn here. My understanding of SMT is that it basically allows a single core to execute multiple threads at once and share a pool of execution units. It would seem that threads that don't fully utilize the execution units would benefit most from SMT. I'm not sure what this has to do with ILP. I've mainly heard of ILP in relation to Intel's EPIC architecture.
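That description of SMT - threads sharing a pool of execution resources - can be sketched with a toy Python model. The slot count and per-thread issue rates below are made up for illustration; they are not P4/HyperThreading specifics:

```python
# Toy SMT model: a core has a fixed number of issue slots per cycle.
# A thread with low ILP can't fill the slots alone, so a second thread
# can use the leftovers. All numbers are illustrative, not real P4 data.
ISSUE_SLOTS = 4

def cycles_needed(thread_ipcs, total_uops_each):
    """Cycles to retire the given uop count from each thread when the
    threads share the core's issue slots every cycle."""
    remaining = list(total_uops_each)
    cycles = 0
    while any(r > 0 for r in remaining):
        slots = ISSUE_SLOTS
        for i, ipc in enumerate(thread_ipcs):
            issue = min(ipc, remaining[i], slots)
            remaining[i] -= issue
            slots -= issue
        cycles += 1
    return cycles

solo = cycles_needed([2], [100])         # one low-ILP thread alone
smt = cycles_needed([2, 2], [100, 100])  # two such threads sharing the core
print(solo, smt)                         # same cycle count, double the work
```

In this toy model a thread that can only issue 2 uops per cycle leaves half the slots idle, so adding a second such thread finishes twice the work in the same number of cycles - which is exactly why low-ILP threads benefit most.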

I knew I'd read somewhere that HT provided good server performance improvements though my memory is failing me as to what type of server. I'll take your word on the database/transaction stuff. Sounds familiar. As for games, that is pretty much what I meant by "games possibly someday". Most games are not currently written to take advantage of any sort of parallelism, but they certainly could be.

BTW, before OS X, Apple's multiprocessor experience pretty much consisted of adding an extra processor to improve Photoshop performance (the second processor was not utilized by most applications). IBM certainly does have a lot of experience here, though. NeXT certainly has a lot of multiprocessing experience too, though I'm not sure about threads, and I don't remember seeing multiprocessor NeXT boxes. Anyone know about these?

As for the Intel zealot / competition thing, I'm not for Intel and I'm not against competition. I find AMD's Athlon64 design MUCH more interesting than Intel's Pentium4 (including Prescott). AMD's system architecture (outside the processor) is very cool. I just hope they don't stumble anymore with poor execution. I also think the PPC970 is a great processor (as well as the Power4/5).

Re
by david on Wed 9th Jul 2003 21:43 UTC

"That said I note that not many have commented on or downright missed the main point of the article - that CISC processors are NOT the same as RISC, and unless Intel or AMD or someone else comes up with a *very* clever design they never will be. "

That's what bothers me a bit. Even though I am not a CPU specialist myself, all the technical articles I have ever read (Ars Technica, etc.; a good site, French only: http://www.onversity.com/cgi-bin/progdepa/default.cgi?Eudo=bgteob&N... ) say that the RISC-versus-CISC debate is dead. For example, I don't understand how you can say that the G4 is purely RISC when it has SIMD units, which are rather a heavy addition to a CPU.

"nd no realworldtech is not "just a step above marketing". It's the most technical of any of the sites I (or anyone else) has referred to. "

I really don't think so. I don't know realworldtech well, but Ars Technica is really a good reference, even if it is a bit PC-biased. To me, it seems a lot better than realworldtech.

But I liked your article, though.

Read it again, Dawnrider
by Safari on Wed 9th Jul 2003 21:50 UTC

" You can't just throw a piece of entirely un-optimised code at a CPU and expect the initial response to be true of the capabilities of the chip"

Again, I'll post the link, and even throw in a quote for you:

http://www.luxology.net/company/wwdc03followup.aspx

Quote:

"

Luxology uses a custom-built cross platform toolkit to handle all platform-specific operations such as mousing and windowing. All the good bits in our app, the 3D engines, etc, are made up of identical code that is simply recompiled on the various platforms and linked with the appropriate toolkit. It is for this reason that our code is actually quite perfect for a cross platform performance test"

"In fact, the performance tuning was done on Windows and OSX. We used Intel's vTune, AMD's CodeAnalyst and Apple's Shark."


This wasn't un-optimized code, and if they had utilized AltiVec and SSE, most likely the results would have been even more disparate.

Again, show me why these developers, who are at the very top of their discipline, did not conduct a fair test of raw CPU performance, as they claim they did.


Re: Re: Roy
by TheClient on Wed 9th Jul 2003 21:55 UTC

Why is it that whenever we get an article on this site that praises PPC, Apple, Mac OS etc., there are several who respond saying that it's just a Mac fanboy article... It seems that the signal-to-noise ratio on these boards gets worse by the day.

more like signal to noise AND distortion...:)

Mission critical..
by GSC on Wed 9th Jul 2003 21:56 UTC

"The x86 can never be designed for a mission critical task. It started as a toy and should remain that way. It has very basic design flaws."

Actually, the space shuttle computers use old x86 technology. We all know how that story ends. Maybe it's time they updated.

Great article BTW. That's the kind of stuff I like to read at OSNews. The horrid power consumption of x86 processors is really starting to make more people aware of some of its shortcomings.

Anyone know what the i960 was? Wasn't that an updated processor design?

lame ads
by mr fooz on Wed 9th Jul 2003 22:25 UTC


What is up with these lame rollover ads? Why is the word "computer" linked to a microsoft wireless ad?? These aren't even real links! What were you thinking?!

Re: lame ads
by bsdrocks on Wed 9th Jul 2003 22:38 UTC

"What is up with these lame rollover ads? Why is the word "computer" linked to a microsoft wireless ad?? These aren't even real links! What were you thinking?!"

What ads? I don't see anything.. Oh, it's because I paid OSNews.com for no ads. :-P

Here's more info about be an OSNews member:
http://www.osnews.com/story.php?news_id=3878 ;-)

i960...
by Ophidian on Wed 9th Jul 2003 22:40 UTC

If I am remembering correctly, the i960 was either Intel's attempt at RISC or an early attempt at a somewhat EPIC-like CPU (i.e., a CPU that depended extremely heavily on the quality of the compiler and not its own ability to compensate).

As far as the article goes, the bias is horrible. Yes, Ars Technica is somewhat more biased on the PC side, but I give them credit for it not screaming through in their CPU architecture articles (one also has to consider that Ars had some of the most in-depth and extensive OS X articles of any site I have seen, and IIRC none of the Ars article staff blatantly dislikes Macs or PCs). I also give kudos to realworldtech. Both of these sites are really good places for technical comparisons and reviews.

Any article written by DKE (David Every) should be regarded as marketing propaganda. He isn't able to hold his own in an actual debate on any computer architecture discussion, is shown to be extremely biased, and a lot of the time is blatantly wrong. I have had personal dealings with him in which he equated having drive letters to having "DOS underpinnings". On 2K and XP, drive letters are superficial, kept for the purpose of software compatibility. When he finally backed down from his position (in which he was obviously wrong) the backpedaling was astounding. By his logic, Mandrake 9 has DOS underpinnings as well, because Wine provides drive letters to applications in the name of compatibility. This is obviously not the case.

This article, OTOH, doesn't really say anything, nor does it really tell you anything. At one point it looks like he is equating heat dissipation measurements with power consumption measurements, as though 1W for one means 1W for the other... which is certainly not the case (although the actual relation is fairly linear). If you want to read an article with merit, go to realworldtech or Ars Technica. Skip the fluff that was this article.

As far as benchmarks go, the number of compilers used should have been increased, as well as the OSes used:

windows - gcc
linux x86 - gcc
windows - icc
linux x86 - icc
windows - vs
linux ppc - gcc
osx ppc - gcc
osx ppc - codewarrior

SPEC is CPU AND compiler dependent. They should also have noted the x86 config that is expected to be used most (windows - vs) and the expected PPC config (likely OS X and gcc, but maybe CodeWarrior; not 100% sure, I don't know any Mac developers who do it for more than a hobby).

For the record, I'm not biased towards any particular CPU architecture; I'm only anti-VIA. I have always liked the PPC architecture, if not Mac OS 9.x and below (and iMacs, I really detest the iMac look, how friggin gaudy can a computer get?)

x86 way back to 8080 and/or 4004???
by Fabio Alemagna on Wed 9th Jul 2003 22:42 UTC

> True he may have gotten it wrong - the x86 architecture
> actually goes all the way back to the 4004. The truth is
> when the PC chose the 8088, it was already somewhat
> handicapped by its ties to the past.

Guys, do you even have the slightest idea of what "x86" means? Look: 8086, 286, 386, 486, 586, 686... Do you get what "x86" means now? Now tell me, do the 8080 and 4004 fit in that scheme? Some people... ;)

RE: Mission critical
by drsmithy on Wed 9th Jul 2003 22:48 UTC

"Actually, the space shuttle computers use old x86 technology. We all know how that story ends. Maybe its time they update."
Maybe you should tell NASA about your theory that the last shuttle crash was because of x86 processors and not a chunk of foam smashing a hole in the wing...
"Great article BTW. Thats the kind of stuff I like to read at OSNews. The horrid power consumption of x86 processors is really starting to make more people aware of some of the shortcomings."
The power consumption argument is bogus.
In environments where power consumption is a critical issue, neither the PPC 970 _nor_ a plain P4 is going to be used.
In average environments, the additional electricity costs will be dwarfed by the higher hardware costs.
Added to that, there are Intel CPUs that have low power consumption, for applications that demand it (like laptops). A Pentium M @ 1.6GHz requires less than 25W. For comparison, IBM says a 1.8GHz 970 requires about 42W (1GHz G4 = 30W, 2.8GHz P4 = 68W).

Laugh of the day!!!
by ILoveWindows on Wed 9th Jul 2003 23:01 UTC



" (and imacs, i really detest the imac look, how friggin gaudy can a computer get?)"

"gaudy"???

Why? Because it's white? Because its screen is able to move in all directions? Because it has a round base?

I'm curious, because if not for its design, you would not have:

a.) an all-in-one design (and perhaps you are just against an all-in-one design, fair enough, although very subjective).

b.) the only reason the monitor CAN move in every direction is that the screen sits on a flexible arm attached to a round body. Other designs would sacrifice the screen movement.

c.) I can't come up with a c.). Does a computer have to look like a Dell or HP to please you?

Needs to look more like a COMPUTER, right?

Sheesh

re: drsmithy
by ILoveWindows on Wed 9th Jul 2003 23:07 UTC

Actually, the G4 running at 1GHz is rated at 23 Watts per CPU maximum.

Compare that to AMD's Athlon XP 2000+ (which runs at a 1667MHz clock speed despite its name) that also uses a 0.18-micron copper process, but not SOI, and it consumes up to 70 Watts.

The PPC's are efficient......is that so bad?

re: I had to stop reading the article
by Captain Chris on Wed 9th Jul 2003 23:18 UTC

A second grader writes with better grammar.

Josh Goldberg, do you sit or stand in front of your computer? I'd guess you stand, because it must be difficult to sit with that huge stick shoved up your....

Oh, forgot...
by Captain Chris on Wed 9th Jul 2003 23:19 UTC

Great job, Nicholas! This type of material, explained at a level most of us can follow, is just what sites like OSNews need. Thanks to everyone involved!

re: Its not a fair comparison
by Captain Chris on Wed 9th Jul 2003 23:27 UTC

Several here have complained about the age of x86, saying that this comparison isn't fair. It most certainly IS fair, because these architectures are what the respective companies are currently offering for sale. Now, if there were a P5 around the corner with an all-new architecture, perhaps these people would have a point. But there isn't. The 970 and the P4 are both going to be on lots of desktops running head-to-head...there really is no other comparison (right now).

RE: Mission critical
by GSC on Wed 9th Jul 2003 23:32 UTC

"Actually, the space shuttle computers use old x86 technology. We all know how that story ends. Maybe its time they update.
Maybe you should tell NASA about your theory the last shuttle crash was because of x86 processors and not a chunk of foam smashing a hole in the wing...
"
Duh... I realize that the crash wasn't because of the the processors. I only made a comment that x86 are, in fact used for mission critical applications. ANd that even the 8086 is still being used and sought after by NASA:

http://www.geek.com/news/geeknews/2002may/chi20020514011664.htm

Re: Power consumption
by drsmithy on Wed 9th Jul 2003 23:32 UTC

"Actually, the G4 running at 1GHz is rated at 23 Watts per CPU maximum."
Ars Technica disagrees:
http://www.arstechnica.com/cpu/02q2/ppc970/ppc970-1.html
"The PPC's are efficient......is that so bad?"
No, but they sacrifice performance to use less power. That's hardly surprising.

NASA and old technologies
by Captain Chris on Wed 9th Jul 2003 23:37 UTC

I'm not going to argue that the old Pentium chips that NASA uses in its shuttles are the very best for them, but they purposefully do not use the latest technologies. The reason for this is that before any processor can be trusted to run these systems, they must undergo about two years' worth of rugged testing to see if they will be reliable under such stressful conditions. Just thought I'd throw that in.

RE: Laugh of the day!!!
by Roy on Wed 9th Jul 2003 23:38 UTC

Hey, there's no accounting for taste.

I personally loved the G4 Cube design. I guess I was in the minority there. Never much cared for the iMac design because of the included monitor. I prefer them separate, but I wasn't the target market for iMacs anyway. The newer flat-panel iMacs are definitely cool looking though (just my opinion).

The G3/G4 PowerMac designs were pretty, but not really earth shattering. I kinda like the new look of the G5 PowerMacs. I'm especially impressed with their "silent" cooling.

While not exactly competing with Apple on the design front, the new Shuttle XPCs are at least a step forward in the Intel world. Personally, I'm a bit more utilitarian and am interested in some of the "quiet" cases (Noise Control Stealth Tower and Antec Sonata).

Re: Power consumption
by Joe P on Wed 9th Jul 2003 23:40 UTC

Why do you use arstechnica.com to prove your point? Try looking at http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=03C1T...

Notice that the MPC7455 @ 1Ghz is 18.5W typical and 28W max.

The MPC74xx family is the series that Apple calls the G4.

re: Roy
by ILoveWindows on Thu 10th Jul 2003 00:19 UTC


I agree, the Cube was great looking, just overpriced for what it was.

As for the older iMacs, I thought they were extremely ugly, but the new ones are not only cool, but functional as hell.



Silence is golden, and way overlooked, thus I love my 17in iMac, it is almost totally silent. If the Shuttle XPC is a winner in this regard, I will definitely take a look at it.

My Athlon spanks my iMac, and is faster than my dual G4, but for most things, the iMac is just perfect, it gets used more than the other two.



Here is why Apple does not make sense
by Mutombo on Thu 10th Jul 2003 00:32 UTC

Apple delivers a very efficient, logical and easy-to-implement CPU, yet their machines cost too much. They need to get their costs down, and they have all the tools in their hands to do it.

G 5 Cube
by spaceboy29 on Thu 10th Jul 2003 00:37 UTC

Great article! I think a G5 Cube would be rather nice. Don't know if that's possible; maybe a G6 Cube.

imac...
by Ophidian on Thu 10th Jul 2003 01:22 UTC

which model of imac looks gaudy? is it the ilamp? no
is it the translucent cased candy coloured all in one model? yes

geez, some ppl have to have it spelled out completely

I would be glad to own an iLamp, just not the original version of the iMac.

interesting article
by Kelly Samel on Thu 10th Jul 2003 01:44 UTC

Just want to say that I found this article very
interesting. It really helps clarify many of the
differences in design and some of the possible
advantages of the PPC platform...

Sheesh
by Safari on Thu 10th Jul 2003 01:48 UTC

"which model of imac looks gaudy? is it the ilamp? no
is it the translucent cased candy coloured all in one model? yes

geez, some ppl have to have it spelled out completely"

Geez, I must not have transported back to 1999 when the iMac was "candy coloured".

Maybe some of us assumed you were talking about the only iMac BEING SOLD RIGHT NOW.

Why the hell would I think it was an older model?

Most people will assume it is the one THAT IS NOT DISCONTINUED.

LOL!!!!!

Re: Power consumption
by drsmithy on Thu 10th Jul 2003 01:52 UTC

"Why do you use arstechnica.com to prove your point?"
Because it was the first decent link my search returned.
That figure is also for a 1GHz G4, which is significantly slower than both a 2.8GHz P4 and a 1.6GHz Pentium M (except for a few corner cases).
Incidentally, the page you link to is rather unclear as to which figures apply to which processor.
There are no power consumption figures for the (most likely overclocked) >1GHz G4s Apple are currently using, so it's somewhat more difficult to give comparable performance/power usage figures. Undoubtedly the P4 is still going to be higher, but a Pentium M won't be (the 25W figure is a maximum).
As I said elsewhere, the power consumption argument is largely bogus, as the tiny amount of money saved on power will be vastly overshadowed by the extra expense involved in purchasing Apple hardware (and that's not even counting the costs of a platform migration). Electricity is cheap, Macs are not.
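The "electricity is cheap" claim is easy to sanity-check. The electricity rate and usage pattern below are assumptions; the wattages are the 68W P4 and 42W 970 figures quoted upthread:

```python
# Back-of-envelope running-cost gap between a 68W P4 and a 42W PPC 970
# (figures quoted earlier in the thread), at an ASSUMED $0.10/kWh with
# the machine under load 8 hours a day. Rate and hours are illustrative.
def yearly_cost(watts, rate_per_kwh=0.10, hours_per_day=8):
    return watts / 1000 * hours_per_day * 365 * rate_per_kwh

diff = yearly_cost(68) - yearly_cost(42)
print(f"${diff:.2f} per year")   # a few dollars a year of difference
```

Under these assumptions the gap is under ten dollars a year, which is the sense in which the CPU's power draw is dwarfed by hardware price differences.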

Smooth transition, drsmithy!!!
by ILoveWindows on Thu 10th Jul 2003 02:01 UTC


I must say, I am impressed at the way that you so quickly shifted the focus from heat and energy to the vast expense involved in Mac ownership!!

A beautiful shift from one straw-man to the other.

(although I think California might disagree with you on the "Electricity is cheap" argument)

Again, a quite skillful sleight of hand, congrats!!!!

Re: Goo
by Encia on Thu 10th Jul 2003 02:22 UTC

@bobby
>x86 is bigger requires twice the clock speed, generates 4 times the heat do do the same amount of work as the PPC.

Regarding "X86 requires twice the clock speed" and the amount of work. Your generalised statement is generally not true with AMD's Opteron @1.8Ghz and nForce3**. Careful with any generalisation.

**Engineering release of ASUS SK8N. Reference:
http://www.amdzone.com/articleview.cfm?articleid=1304

Re: Power consumption
by drsmithy on Thu 10th Jul 2003 02:27 UTC

"I must say, I am impressed at the way that you so quickly shifted the focus from heat and energy to the vast expense involved in Mac ownership!!"
That would be because 90% of the post was about power consumption, while half of the last sentence pointing out why it's not really an issue happened to mention Apple, right?
"A beautiful shift from one straw-man to the other."
None of my points (that there are low-power x86 chips, that x86 chips using more power are also faster (as are ones using less power), that the figures reported on the Motorola website are difficult to interpret, and that there are no power consumption figures for the fast G4s Apple is using) could be described as strawmen.
Neither is the Macintosh cost issue, given that pretty much the only PPC systems out there are Macs. And this discussion is about an article comparing PPCs to x86s.
"(although I think California might disagree with you on the "Electricity is cheap" argument)"
Compared to the price difference between a Mac and a PC, it is.
"Again, a quite skillful sleight of hand, congrats!!!!"
Hardly. I wasn't trying to deceive anyone, so how could I have done it successfully?

"RISC vendors will always be able to make a faster, smaller CPUs"
by J. Rollins on Thu 10th Jul 2003 02:27 UTC

Interesting read, but there are a few oddball statements in it, like "RISC vendors will always be able to make a faster, smaller CPUs". Huh?

We are dumping our Sun workstations left and right (Blade 2000s, no less) and replacing them with MUCH faster x86 Linux boxes at 1/3 the price. I don't simply mean faster clocks, I mean better performance. Simulation runs that take days on Blades run in LESS than a day on one of our 2.8G P4 boxes. Sure, Sun may one day offer a CPU that is faster than a P4 or Opteron, but they will want $10K for it and it will only be 2% faster. As far as I am concerned, Sun is dead, at least as far as 99.99% of the market is concerned. Hope the other RISC vendors can figure out a way to compete with x86, because at this point what they have to offer is both slower and more expensive.

Earth Resources & Microsoft wisen up.
by John Blink on Thu 10th Jul 2003 02:27 UTC

I for one want a processor that doesn't suck up the earth's resources. Just think about the quantity of PCs out there, and that will grow with the population.

Since I read in one of Eugenia's articles that MS will break some backward compatibility in software, I hope they do this with hardware too. Hopefully .NET has been designed for this purpose.

I would love to have Apple hardware that can run both OS's extremely fast.

Nice article, but why do people complain about OS X unresponsiveness? I have run a couple of high-quality QuickTime streams and multitasking was beautiful.

Space shuttle
by jefro on Thu 10th Jul 2003 02:29 UTC

Those x86s were a very special run of a few thousand, and that simple computer was not a mission-critical computer, nor was it an IBM-compatible design. I worked at IBM when they did that, just so they could say they used those. You will never see one of those processors on the market. Every one was tested until only a few working ones were left.

NASA and old technologies
by drsmithy on Thu 10th Jul 2003 02:31 UTC

"I'm not going to argue that the old Pentium chips that NASA uses in its shuttles are the very best for them, but they purposefully do not use the latest technologies."
I doubt NASA uses Pentiums in the shuttle. I'd be amazed if it was anything more modern than a 286.

another article hijacked
by macster on Thu 10th Jul 2003 02:34 UTC

smithey, try to stay on topic. We already know you can't afford a Mac; that doesn't apply to everyone and it's not really pertinent to this topic.

The 1GHz G4 is not overclocked - a little bit of not-too-hard research will prove this - and it's a full desktop processor that works well in a laptop case less than 1 inch thick.

http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC...

Back to the topic: it's a good article that is a bit biased, but a good read nonetheless. I prefer the way information was presented in the Ars Technica articles concerning the G4 and G5.

Re: Goo
by drsmithy on Thu 10th Jul 2003 02:45 UTC

"The x86 is bigger, requires twice the clock speed, generates 4 times the heat to do the same amount of work as the PPC."
This isn't true for (at least) P3s, Athlons, Pentium Ms or P4s, except in a few *very* specific cases, so which x86 CPUs are you thinking of?
"They may be about the same speed, but the PPC has a lot more room to grow."
I remember this being said about PPC and x86 once before, about a decade ago. Since then, PPC processors have had the better performance of the two for less than a year, around the release of the first G4s.
Intel has a much better track record for delivering than any of the members of the AIM alliance.

RE: NASA and old technologies
by macster on Thu 10th Jul 2003 02:48 UTC

486 on the Space Shuttle
http://www.spaceref.com/directory/exploration_and_missions/human_mi...

486 in the AirPort
http://freebase.sourceforge.net/hardware.html

I remember that these processors can run without a heatsink!

Environmental impact
by drsmithy on Thu 10th Jul 2003 02:55 UTC

"I for one want a processor that doesn't suck up the earth resources. Just think about the quantity of PC out there and that will grow with the population."
This is another bogus issue, too. In terms of the overall environmental impact of the average computer, the amount of electricity the CPU uses is barely even measurable.
"Since I read in one of Eugenia's article that MS will break some backward compatability in software, I hope they do this with hardware too. Hopefully .NET has been designed for this purpose."
Microsoft has had a portable operating system running on multiple architectures since 1993. They've just never had a reason to consider anything but x86 since the early '90s (back then it really did look like x86 had hit the wall - then Intel released the Pentium). NT *was* designed for "this purpose", fifteen years ago.
"I would love to have Apple hardware that can run both OS's extremely fast."
Not going to happen.
"Nice article, but why do people complain about OSX unresponsiveness, although I have run a couple of high quality quicktime streams and multitasking was beautiful."
Because compared to - well, just about anything else - it is unresponsive on anything short of the fastest machines available. Your example is not going to demonstrate unresponsiveness. Try resizing some windows or switching between applications.

Re: another article hijacked
by drsmithy on Thu 10th Jul 2003 03:02 UTC

"smithey try to stay on topic. We already know you can't afford a Mac and it doesn't apply to everyone and its not really pertinent to this topic."
Heh. You're posting about how much money you think I have and apparently accusing me of "hijacking" an article. The irony.
Incidentally, nothing in the link you provided suggests Motorola endorses a G4 at >1GHz. Not to mention most of it emphasises using the G4 in embedded scenarios...

Re: NASA and old technologies
by drsmithy on Thu 10th Jul 2003 03:05 UTC

"486 on the Space Shuttle"
I don't think that's quite what people are thinking of when they talk about the computers that run the space shuttle - but still, useful info.

Pot calling the kettle black
by danwood on Thu 10th Jul 2003 03:30 UTC

I enjoy reading the Mac zealots poking fun at the x86 because it's a Frankenstein of a processor (tons of additions to make up for previous generations' shortfalls, you know).

Yet, we have Quartz Extreme, whose sole purpose in life was to make up for the shortfall in the OS X rendering system. Apparently megs of PDF are hard to draw in real time, so let's make the GPU do it! Look at that window slide down!

I'm not a fan of PCs in general, but anyone who thinks that QE was a credit to Apple's ingenuity needs to realize that 4% of the entire computer market was going to throw their Macs out the window if they couldn't move a window without the mouse leading it by 3 seconds.

re: safari
by Anonymous on Thu 10th Jul 2003 03:52 UTC

"Why the hell would I think it was an older model?"

Maybe because I also made reference to an older version of MacOS? Maybe because among the different models carrying the name "iMac" only one style was considered by MANY to be extremely gaudy?

grrr
by Ophidian on Thu 10th Jul 2003 03:56 UTC

forgot this box doesn't have my name cookie set :

VHS vs Beta
by Sikosis on Thu 10th Jul 2003 03:59 UTC

Well, Beta was better than VHS - but which won? And what's around today?

Beta is/was used in TV studios (probably more digital these days).

VHS in the home.

No!!!!
by Hakime on Thu 10th Jul 2003 04:02 UTC

"That figure is also for a 1Ghz G4, which is significantly slower than both a 2.8Ghz P4 and a 1.6Ghz Pentium M (except for a few corner cases)."

I don't think so!!!! The G4 at 1.0GHz is very effective - as effective as, and even more effective than, the Pentium M at 1.6GHz (a Pentium without a very high frequency is nothing, and here the more advanced architecture of the G4 makes the difference) - and can beat the P4 at 2.8GHz in a lot of applications.

Anyway!!!!

Interested, I checked out the website of MorphOS; in a paper about MorphOS "in Detail" it said the following. I think this would have been a big point in the article but it was not mentioned. Is it true, and how does it work that it is 10x faster? And, more importantly, is that fast enough to provide a speedy OS?!

Thanks for the good article!

From the PDF:

Microkernel Vs Macro Kernel

A common problem encountered in the development of microkernel Operating Systems is speed. This is due to the CPU having to context switch back and forth between the kernel and user processes; context switching is expensive in terms of computing power. The consequence of this has been that many Operating Systems have switched from their original microkernel roots and become closer to a macrokernel by moving functionality into the kernel, i.e. Microsoft moved graphics into the Windows NT kernel, Be moved networking inside, and Linux began as a macrokernel so includes everything. This technique provides a speed boost but at the cost of stability and security, since different kernel tasks can potentially overwrite one another's memory.

Given the above, one might wonder why Q can be based on a microkernel (strictly speaking it's only "microkernel like") and still be expected to perform well. The answer to this lies in the fact that MorphOS runs on PowerPC and not x86 CPUs. It is a problem with the x86 architecture that causes context switches to be computationally expensive. Context switching on the PowerPC is in the region of 10 times faster, similar in speed to a subroutine call. This means PowerPC Operating Systems can use a microkernel architecture with all its advantages yet without the cost of slow context switches. There are no plans for an x86 version of MorphOS; if this changes there will no doubt be internal changes to accommodate the different processor architecture.

Linuxtag and Debian Developer Conference...
by bbrv on Thu 10th Jul 2003 04:35 UTC

Good document Nicholas!

Come see Nicholas and the Pegasos at Linuxtag. There will be a Pegasos at the Linuxtag in Karlsruhe this weekend. Linuxtag is Europe's largest GNU/Linux exhibition. The Linuxtag is at the Karlsruhe Convention Center from 10-13 July. Nicholas will be joined by Genesi's newest team member Sven Luther. Sven is a developer known to both the MorphOS and Linux development communities. Sven Luther's LinuxPPC kernel was recently announced on MorphOS-News here:

http://www.morphos-news.de/index.php?lg=en&nid=371&si=1

At Linuxtag, the Genesi Team and the Pegasos will be found in the Debian Booth.

The following weekend (18-20 July 2003) Sven will also be attending the Debian Developers Conference, Oslo, Norway with the Pegasos. Details about that conference can be found here:

http://debconf.org/debconf3/

Sven has joined Genesi to manage the LinuxPPC development effort for the Pegasos. Sven is a PhD candidate at the University of Strasbourg and an Associate Professor in the Computer Sciences Department. The University of Strasbourg was founded in 1621, with a long tradition of academic excellence. Louis Pasteur, John Calvin, Marc Bloch and four Nobel Prize winners have studied or taught there. We can now add Sven to the list...;-) Welcome Sven!

Of course, Nicholas and Sven will be able to demo MorphOS too!

Have a great day!

Raquel and Bill
bbrv@genesi.lu

Beta...
by Kevin on Thu 10th Jul 2003 04:45 UTC

"Well Beta was better than VHS - but which won ? and what's around today ?

Beta is/was used in TV studios (probably more digital these days). "


Beta hasn't been used in most major TV stations in a really, really long time. It was never super popular. For low-end use SVHS was used (which came out a while after Beta, and was better quality), but Betacam (no relation to Beta) was used almost exclusively in all major stations until recently, where we now have professional DV VTRs, Digital Betacam, and HDCAM (digital high definition)...

Apple FUD
by Glenn Sweeney on Thu 10th Jul 2003 05:16 UTC

A) There is no diff between RISC and CISC now.

B) The law of diminishing returns actually means that because there are no new instructions added, the amount of space required to decode the SAME instructions gets relatively smaller and less important every year. (The article is illogical .. either because the writer is or because he's trying to manipulate the reader.)

C) Altivec makes the G4 a slow RISC computer. That's why it can't scale well: because of the COMPLEX Altivec instruction set.

D) x86 benchmarks are invalid.. gawd what a load of bull....
This article is a transparent piece of tripe .. trying to sell PPC chips and whatever else he is interested in.


Now if only PPC 970 was an open platform
by Piers on Thu 10th Jul 2003 05:21 UTC

I am wondering, would IBM ever consider making their new chips the beginning of a PPC open platform, where people can build their own systems and hardware vendors other than Apple can pump out these systems for the desktop?

This coupled with Linux or something would really give the consumer some choice and add more competition into the pot for Apple, Intel and AMD.

Just a thought.

Re: Sikosis
by Bascule on Thu 10th Jul 2003 05:33 UTC

Well Beta was better than VHS - but which won ? and what's around today ?

Did you ever watch anything on Beta? Do you remember having to switch tapes in the middle of a movie because each Beta tape could only hold an hour of video?

The difference in picture quality was virtually indiscernible. Meanwhile, any movies over an hour in length had to be provided to the viewers on multiple tapes.

So please tell me, how is Beta better than VHS again?

Beta vs VHS
by Captain Chris on Thu 10th Jul 2003 05:42 UTC

Ok, we're getting WAY off topic here, but... Beta certainly was, by all accounts (including my own), superior to VHS, in both video and audio. However, it was shorter in length (which might have been corrected had the format lived longer) and slightly more expensive because of the more complex cassette architecture. Had Beta survived longer, R&D would have come up with improvements, as is always the case with a financially successful technology. Don't compare old Beta wares with current VHS... compare them with old VHS.

RE:VHS vs Beta
by rooster on Thu 10th Jul 2003 07:33 UTC

Beta was a home video format, as was VHS. I forget what format most TV stations used, but it was these mammoth 1 or 2 inch tapes. Beta & VHS are half-inch.

Re: various
by Nicholas Blachford on Thu 10th Jul 2003 09:31 UTC

A) There is no diff between RISC and CISC now.

B) The law of diminishing returns actually means that because there are no new instructions added, the amount of space required to decode the SAME instructions gets relatively smaller and less important every year. (The article is illogical .. either because the writer is or because he's trying to manipulate the reader.)

C) Altivec makes the G4 a slow RISC computer. That's why it can't scale well: because of the COMPLEX Altivec instruction set.

D) x86 benchmarks are invalid.. gawd what a load of bull....
This article is a transparent piece of tripe .. trying to sell PPC chips and whatever else he is interested in.


Did you even read the article?
The entire point is that A) is not true.
It is true to say that CISC uses the same techniques as RISC but the inefficiency cannot be hidden by an instruction decoder and consequently CISC has to expend a great deal more energy getting up to high performance rates.

B) What are you on about?

"LAW OF DIMINISHING RETURNS
An economic principle asserting that the application of additional units of any one input (labor, land, capital) to fixed amounts of the other inputs yields successively smaller increments in the output of a system of production. (Krippendorff)"

C) Perhaps you should tell IBM that.

D) If Apple produces a benchmark it is automatically assumed to be fake - which I guess is only to be expected given their record.
However, if Intel produces a benchmark, it's "gosh look how fast they are". They should both be treated for what they are: benchmarks produced by a manufacturer of a product - no matter their intention - are marketing.

--

Heat:
Yes, there are low-power x86s, but the latest G4s (7447) go down to 7.5 Watts at 1GHz. There's nothing in the x86 world that even comes close to that level of performance at such low power consumption.

VCRs
It's quite correct that the best or most advanced technology does not always win the market, if it did we would all be Alpha users running BeOS with an Amiga AAAAA chipset...

If you want *REAL* technical reading
by Cloudy on Thu 10th Jul 2003 09:38 UTC

Just hop to: http://www.aceshardware.com/list.jsp?id=4

Till then, why don't you stop spamming the boards with irrelevancies?

Interested, I checked out the website of MorphOS, in a paper about MorphOS "in Detail" it said the below. I think this would have been a big point in the article but it was not mentioned. Is it true and how does it work that it is 10x faster? And, more importantly, is that fast enough to provide a speedy OS?!

That's what I was told, yes.
However...
I was later told that actually deciding which context switch to make takes a lot longer, so it doesn't have that much effect in the grand scheme of things.
However, if you were running a lot of tasks and were switching rapidly between them, it could then make an impact.
Also PowerPC is mainly used in embedded / real-time applications so this could be beneficial there.


Yay!
by Lennart Fridén on Thu 10th Jul 2003 09:55 UTC

Nice reading! Perhaps not always as objective as it could've been, but overall a very nice read.

Whoops
by Dave on Thu 10th Jul 2003 10:31 UTC

"C) Perhaps you should tell IBM that. "

I agree the guy is wrong but perhaps you should go look and see how many altivec equipped G4s IBM uses in its scaled up offerings. :-)

Good, solid read
by Fed on Thu 10th Jul 2003 10:36 UTC

Thanks for writing the most decent article around here in ages. About all this complaining, though - you have to wonder if the complainers could do any better. I'm putting my bets on no....

Re: Bascule
by John Blink on Thu 10th Jul 2003 13:03 UTC

So please tell me, how is Beta better than VHS again?

They didn't suffer from the Y2K problem ;) A Joke okay!!!

Why Lie?
by Matt Parsons on Thu 10th Jul 2003 13:05 UTC

With all due respect, Nicholas Blachford, your article was full of half-truths (selectively using out-of-date information!) and some really big errors.

I don't understand it!! When will you learn that you don't have to lie to show how good the PPC is.

Let the damn thing stand on its own two feet; I'm sure it will run...

Good Article
by Peter Moss on Thu 10th Jul 2003 13:27 UTC

A very good article and thoroughly enjoyable. It's just a real shame that a minority of people on OSNews seem to have massive insecurities and are nitpicking tiny little holes because they want to be the cleverest.

Well, get over it, grow up, and I await reading your absolutely-perfect-in-every-way articles in which you manage to explain the entire concept and history of the CPU in full detail, never making one mistake (not even a typo), and fit it into a 5-page document. Go on - I dare you.

Oh, and don't forget to reference all of your sources in a comprehensive bibliography making sure that none of the websites you referenced are ever going to be down!

Intel / Dell "Marketing" with SPEC
by Mike on Thu 10th Jul 2003 13:27 UTC

Just want to bring up the point that when producing their SPEC scores, Intel/Dell used the Microquill SMP heap management library - a $1200 piece of software - and Dell isn't packing that Microquill library with every P4 it ships.

So, don't expect to get those SPEC scores on your machine.
Which, in my definition of a benchmark, is cheating.
Have you been duped today?

enough Whining :)
by bigmaC on Thu 10th Jul 2003 14:17 UTC

There are always people who will concentrate on the most trivial points of a subject and distort it to their own individual certainty - as depicted by a majority of the comments on this topic.
Most comments I have read are merely personal justifications of insecurities, combined with some factual information...

To the author; a fantastic read, thank you

Re: another article hijacked
by linuxlewis on Thu 10th Jul 2003 14:23 UTC

"Incidentally, nothing in that link you provided suggest Motorola endorse a G4 at >1Ghz. Not to mention most of it emphasises using the G4 in embedded scenarios..."

http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=03C1T...

Smithey, look at the chart a little closer. It shows right there that the G4 is running at 1GHz. Apple uses the MPC7455 and MPC7457. Why is it so hard to admit you are wrong and in turn learn something? Now if you were to mention that G4s at 1.25 and 1.42 GHz are overclocked, then there might be some truth to your statement.

re: another article hijacked
by Mike on Thu 10th Jul 2003 14:42 UTC

Motorola has a long history of being months behind in their web site doc. What Moto tells the world, and what they tell their real customers, are two different things.
Only Moto can tell us why.

Cherry picking: Intel also has a problem supplying the P4 3.2 ghz processor, i.e. cherry picking. So, what's your point?

x86 = global warming
by Pondo Sinatra on Thu 10th Jul 2003 14:43 UTC

With California (and now Ontario) facing rolling blackouts due to power shortages, you have to wonder how much power-hungry PCs are contributing to the problem. Power management may help, but when today's systems gobble up 10x the power that a system from 5 years ago did, combined with more and more PCs being bought, you know we're headed for trouble.

My prediction: much like the auto industry is regulated for emissions, the PC industry will become regulated for power consumption.

Didn't like it
by Erwos on Thu 10th Jul 2003 14:55 UTC

The article is just not very good. How can you take an article seriously when it says something like "the x86 floating point unit is very weak"? Sir, it's quite possible to rig up an x86 CPU with an extremely strong fpu. No one's done it because the consumer market space doesn't need it, and high clockspeeds can disguise it in consumer apps. Don't confuse instruction set with good CPU design.

Another good one: "you can't make a low-power consumption, low-end x86 CPU". Try the C3.

Sorry, but the article is riddled with inaccuracies like those. The ArsTechnica articles BLOW IT AWAY.

-Erwos

RE: Nicholas Blachford
by Roy on Thu 10th Jul 2003 15:21 UTC

"Heat:
Yes there are low power x86s but the latest G4s (7447) go down to 7.5 Watts at 1GHz. There's nothing in the x86 world that even comes close to that level of performance at such a low power consumption."

Thermal Design Power of an ultra-low voltage Pentium M at 1GHz is 7 Watts. While TDP is not equal to power usage, it does scale fairly linearly, so I doubt its power usage is above 10 Watts (could be wrong). The Pentium M performs better per clock than a P3, so it shouldn't be too far off from the G4 (though certainly not quite as fast per clock). While not quite as fast and possibly not quite as energy efficient, this is certainly in the same ballpark as the G4.

RE: G5 Cube
I too would like to see a G5 Cube. It would probably need a fan (no convection cooling), but I think it could be kept fairly silent. BTW, I've heard that the Shuttle XPCs are NOT terribly silent. :-( You could probably mod it to be silent, but it might require lowering the clock and possibly the voltage of the CPU. Apple pretty much has the market cornered on cool looking AND quiet.

RE: Didn't like it
by Roy on Thu 10th Jul 2003 15:36 UTC

Actually, the general x86 FPU (referred to as the x87 because it used to be a separate coprocessor) pretty much sucks. The programming interface to the FPU is a REAL problem. This issue is largely fixed by SSE(2). With the P4, Intel designers pretty much decided to abandon the x87 FPU. It still works, but is REALLY slow (significantly slower than an Athlon). This is one reason that the P4 specifically is VERY dependent on SSE(2) usage.

Oh, and the C3 is able to be low power, but it sacrifices a LOT of performance to do this. I agree with your point about it being possible. The Pentium M is really the best example of this that I know of.

The instruction set DOES have an impact on the design of a processor, but that impact has been reduced through tricks such as out-of-order execution and register renaming. Also, other factors such as memory bandwidth and multiprocessor scalability have become more relevant over the years (these are more dependent on system rather than processor architecture). Designing a fast x86 processor is "harder" than designing a fast PowerPC processor. Intel has largely been able to stay ahead of Apple in the last few years because their fabrication and design capabilities were so far ahead of Motorola's. They still have an advantage in fabrication over IBM due to volume (higher volume = lower per-unit cost), but technology-wise, IBM is MUCH more competitive than Motorola.

And again, yes, the ArsTechnica article rocks. THAT writer really knows what he is talking about.

Apple pretty much has the market cornered ...
by jrj on Thu 10th Jul 2003 16:08 UTC

Roy stated: "Apple pretty much has the market cornered on cool looking AND quiet".

What? I have a dual-proc G4 here that rivals Larry Ellison's Gulfstream V jet in noise.

It is cool looking, but I don't agree that they have the market cornered.

well done
by Shane on Thu 10th Jul 2003 16:15 UTC

a very good read, and full of actual information. shockingly so for some wintel trolls i notice, but that is their trouble, not mine.

again, thanks for the great read.

G4 = G3 + Altivec
by SirDrinksAlot on Thu 10th Jul 2003 16:16 UTC

"This was added in the G4 CPUs but not to the G3s but these are now expected to get Altivec in a later revision."

The G4 is simply a G3 with Altivec. Currently the G3 is the "consumer" processor while the G4 is the "Professional" processor. This is similar to the P4 vs Celeron.

Now that the G5 is around the G4 will become the "Consumer" processor.

I'm really not sure what role the G3 will have after the G5 ships, but I wouldn't expect it to disappear until Apple gets a portable G5 on the market.

>>Dude -- an apostrophe does not mean "watch out, here comes an 's' !!" Posessive pronouns, "its, hers, yours," do not have apostrophes. Use apostrophes when you are using a contraction, for instance "it's" means "it is" and the apostrophe stands for the (space and) vowell. <<

You obviously do not understand English grammar then.

Jenny's house. The house belonging to Jenny.

An apostrophe also represents possession. I could bore you with the reasons why, but will leave it at "English was formerly an inflected language; the >'s< is a hangover from this." Suffice to say, some other languages of Germanic origin do not use the apostrophe in this case (Swedish comes to mind), but proper English does.

I only wish that some of the posters worried more about the difference between 'Your', 'You're' and such like rather than picking on an otherwise readable article.

You are correct that Nicholas has a few cases where he meant to write 'Its' rather than 'it's', but on the whole his usage is good.

>> [I had to stop reading the article ] because it was making my head hurt. Can you say proofreading? spellcheck? A second grader writes with better grammar. Perhaps you need to put the pipe down a bit sooner before writing your next article.

Let me suggest looking two words up in the dictionary: effect and affect. <<

This is simply being picky. In British English (as I believe Nicholas is originally from the UK) the difference between the words 'Affect' and 'Effect' is only in the written language. We don't subscribe to the greater English-speaking world's tendency to say 'Ah-fect' and 'Ee-fect'. Both sound like the former to my ears (actually, technically we use the schwa sound for the initial syllable's vowel, '@-fekt', with the stress on the second syllable). This would be like asking a Midwesterner to acknowledge the difference between Marry, Mary and Merry. Believe me, all three sound different to my ears ;-)

If you look at the root of the word 'effect' you'll find that it actually is directly related to 'affect', and the fact that we choose to use both words is an anomaly. Much like the use of both Dispatch and Despatch... both of which have exactly the same meaning. Whilst I realise that affect and effect have divergent meanings, it's on a similar path. This happens a lot in linguistics ;-)

Also, bearing in mind that British spelling and grammar were mostly in use, please realise that we do not speak in the same way over here.

I have gotten a card from Mary - I got a card from Mary.
I could use a cold one - I could do with a pint.
July 10, 2003 (july 10 [th]*) - 10th July 2003 (tenth of july)

* some US speakers seem to insist on dropping the ordinality of the date.

Ah well... tom-ah-to tom-ay-to.



Ugh
by Athemeus on Thu 10th Jul 2003 16:47 UTC

The article starts off declaring PPC's larger number of registers and then fails to say *why*. I find this disturbingly biased. RISC architectures are supposed to simplify code, and one of the ways they do this is by using register-to-register operations. With a CISC architecture, you can add two numbers together and put them in memory in one instruction. With a RISC architecture, you add two numbers, put the result in a register, then execute a second instruction to move the register contents to memory. This important fact was wholly omitted.

Nor is it mentioned that the reason micro-ops are used is for pipelining purposes! Pipelining is an important feature of RISC. Of course, it isn't mentioned that one of the reasons RISC and pipelining are associated is because the instructions all need to be of the same length, and how micro-ops get around the fact that long CISC instructions impede pipelining.

What's more, RISC is there to help people write compilers, but it isn't mentioned that because Intel is so large, they can throw plenty of money at the compilers. Microsoft also has no shortage of people to put to work on it. And then SMT is mentioned, but it's not explained in basic terms how it works.
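The register-to-register point above can be made concrete with a one-line C statement. For `*dst += *src`, a CISC ISA can fold the memory access into the arithmetic instruction, while a load/store RISC ISA must load, add, and store separately - more instructions, and hence more registers to hold intermediates. The commented instruction sequences below are illustrative sketches, not actual compiler output:

```c
/* A memory-to-memory add: *dst = *dst + *src */
void add_in_place(int *dst, const int *src) {
    *dst += *src;
    /* x86 (CISC) can fold the memory access into the arithmetic:
     *     mov  eax, [src]
     *     add  [dst], eax        ; read-modify-write memory operand
     *
     * PowerPC (RISC, load/store) needs explicit loads and a store:
     *     lwz  r5, 0(src)
     *     lwz  r6, 0(dst)
     *     add  r6, r6, r5
     *     stw  r6, 0(dst)
     */
}
```

The RISC sequence is longer, but each instruction is fixed-length and simple to decode and pipeline, which is the trade the paragraph above is describing.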

This article is nothing more than biased junk. And I even like the new G5s. They are good processors, speedy, I'm sure.

Give me a break...
by Athemeus on Thu 10th Jul 2003 17:15 UTC

"D) If Apple produces a benchmark it automatically assumed to be fake - which I guess is only to be expected given their record.
However If Intel produced a benchmark it's "gosh look how fast they are". They should both be treated for what they are: benchmarks produced by a manufacturer of a product - no matter their intention - are marketing. "

Exactly why you should talk about benchmarks from real-world applications instead of SPEC. But then again, Adobe is inching away from PPC, and a number of benchmarkers have found x86 to be giving higher numbers in applications, including traditionally Mac ones. But don't take my word for it.
http://www.macnn.com/news/18887

The sad thing is, I was eagerly looking at the tech paper for the G5, and Apple did a music software comparison, using Logic on the Mac and Cubase on the PC. Logic 5.1 works on the PC; in fact I use it on this very PC all the time. Why wouldn't they compare results using the *same* software, since comparing results from two different programs is useless? My only conclusion can be that it didn't give the results they wanted.

Funny to read
by sillyness on Thu 10th Jul 2003 17:18 UTC

I thought the article was great. It summed up a lot of facts that obviously offend MANY people that are at the butt end of the article. But even more entertaining was reading the comments left by those people.

I bet the president of Intel himself could publicly declare that RISC is superior to CISC and the wintellies would still be unsatisfied. But for no real reason other than the fact that they laid down hard-earned cash for their machines, and are almost obligated to defend their decisions in whichever way they can. So it's understandable.

And maybe they DO have real reasons. I know a lot of people do. They're required to run software that runs only on intel machines, or RISC machines (we'll name none specific) are too cost prohibitive. That too is understandable.

But after 12 years of working with Intel-powered machines (and a few AMD), I had nothing really tying me to them other than years of experience. So I moved on.

heh...
by Athemeus on Thu 10th Jul 2003 17:23 UTC

"My prediction: much like the auto industry is regulated for emissions, the PC industry will become regulated for power consumption."

You ever wonder how much juice those hybrid or electric cars suck up? Are you aware that the lead acid from those batteries is sure to destroy the environment?

Rehash of other articles with bold but iffy conclusions
by Comatose51 on Thu 10th Jul 2003 17:26 UTC

The article is decent as far as an overview goes. However, it seems like a rehash of common computer architecture knowledge. In other words, it doesn't give any special insight or even the depth one might expect. My intermediate-level computer systems and architecture class went into much greater depth than this. This would not be such a problem if not for the fact that the author is trying to compare two architectures and settle a very important (and heated) question. The article seems biased in many ways (one has to wonder if the author's profession has something to do with this). The author undermines the credibility of the x86 claims by championing the PPC's abilities. It is not so much a comparison as "Why the PPC is better than x86". Lastly, the conclusions at the end, even if there is a source or two to support them, are very bold. The lack of substantial evidence does not warrant such strong claims. The author would do well to consider that DEC no longer exists, Sun is having trouble in the server market, and the Linux + x86 combination is gaining market share. With these considerations in mind, one can hardly justify the claims made at the end. In conclusion, this article seems nothing more than another opinion piece, inspired by Jobs' recent claims, disguised as a comparison of the two architectures.

Ugh
by stingerman on Thu 10th Jul 2003 17:34 UTC

That is the point: the CISC instructions are fat and require the CPU to break them down further into micro-code. In addition, CISC instructions are not uniform in size, which means that scheduling, out-of-order processing, decoding, etc. are a lot more expensive. The RISC vs CISC debate is long dead. RISC won a long time ago, and x86 has been adopting as much RISC-type design over the years as possible. The problem, as the author and even Intel have pointed out, is that the current generation of processors requires compatibility with the original x86 CISC platform. Intel's Itanium design finally walks away from that, but it has been a failure, last year selling only 700 units. It is going to be a herculean effort for Intel to break free from its legacy processors. IBM does not have these kinds of handcuffs on it; the PowerPC architecture had the foresight to be designed as a 64-bit RISC instruction set from its inception.

That is why the G5 is such a clean and elegant design.

Advantage PowerPC
by stingerman on Thu 10th Jul 2003 17:52 UTC

It is important to note that the PowerPC specification does not dictate the actual design of a PowerPC processor. So Apple, IBM and Motorola are free to develop their own processor designs while being confident that binary compatibility will be maintained. Thus the G5 can have a different caching mechanism; different pipelining, scheduling, branch prediction algorithms and staging; a high degree of parallelism (almost 20 to 1); significantly more functional units; and hardwired instructions such as square root, which Motorola performs in code. Yet, although the designs are radically different, they are binary compatible. That is an incredible engineering feat, and illustrates why the PowerPC processors will continue to be advanced at a faster clip than legacy processors.

RE: heh
by Pondo Sinatra on Thu 10th Jul 2003 17:56 UTC

"You ever wonder how much juice those hybrid or electric cars suck up? Are you aware that the lead acid from those batteries is sure to destroy the environment?"

You're right we're screwed either way. So why bother doing anything.

The real problem is that there's too many friggin people. We need a return to the good 'ol days when entire generations would get wiped out by war or disease. These modern conflicts just ain't cutting it.

Re: Roy
by Bascule on Thu 10th Jul 2003 17:57 UTC

Intel has largely been able to stay ahead of Apple in the last few years because their fabrication and design capabilities were so far ahead of Motorola's. They still have an advantage in fabrication over IBM due to volume (higher volume = lower per-unit cost), but technology-wise, IBM is MUCH more competitive than Motorola.

Umm... earth to Roy...

IBM is the #1 manufacturer of integrated circuits in the world

IBM just opened the largest, most advanced chip fabrication facility in the world:
http://www.internetnews.com/infra/article.php/1437171

IBM is already manufacturing ICs using a 90nm process:
http://albany.bizjournals.com/albany/stories/2002/12/16/daily6.html

Re: stingerman
by Bascule on Thu 10th Jul 2003 18:03 UTC

Intel's Itanium design finally walks away from that, but it has been a failure, last year selling only 700 units.

I'm glad others are aware of what a dismal failure Itanium was. I always get a laugh out of people saying that the lingering big iron RISC vendors (i.e. Sun, SGI) should drop their designs and standardize on IA64.

Except SPARC and MIPS together outsold Itanium last year by two orders of magnitude. Way to go, Intel!

RE: Bascule
by Roy on Thu 10th Jul 2003 18:24 UTC

The fabrication disadvantage that I was referring to was ONLY volume, which affects cost per CPU. Are you suggesting that IBM will produce as many PPC970s as Intel produces P3s? As I said, technology-wise, IBM and Intel are competitive. The G5 is significantly smaller on an equivalent process though, so this may make up for the volume difference. Also, some pooling of resources from Intel competitors (IBM and AMD) may help alleviate this issue.

Itanium certainly got off to a bad start. Merced (Itanium 1) sucked big time. McKinley (Itanium 2 - largely an HP design, I think) is much more promising, though it certainly isn't taking the world by storm. Deerfield (a very low-cost version of McKinley - ~$800) can either be seen as interesting value or as a last-ditch effort for IA64 to gain some marketshare, depending on whether you think Itanium has a future. I personally don't know.

correction
by Roy on Thu 10th Jul 2003 18:29 UTC

"as Intel produces P3s"

oops. Meant P4s.

You haven't lived....
by Drew on Thu 10th Jul 2003 18:40 UTC

...until you've designed your own processor!!! 8-bit, all the way!

Go CS curriculum, GO!

Nice article, by the way.

opinion, not fact
by GunFodder on Thu 10th Jul 2003 19:34 UTC

There are many good facts in this article but they are all in support of the opinion that the PowerPC architecture is better than x86. I have a couple of nits to pick with the implications of this article.

The author goes into great detail about power consumption and heat. The author neglects to point out why power consumption and waste heat are important. Any modern power supply can provide for even the most power hungry processor. So the only problem with high power consumption is the generated heat. If a cooling solution exists to dissipate the waste heat of the processor then there is no problem. That is currently the case for even the highest performing x86 processors.

I don't think PPC has much of an edge in the heat problem; the new G5 case is optimized for high airflow. It is an excellent example of how engineering can work around a heat problem.

The other problem I have is that the author attempts to justify the Apple benchmarks of a Dell system. He states that the optimizations achieved by the ICC are unrealistic for normal software, and that GCC is at least as suboptimal for PPC as it is for x86. The author neglects to point out that the ICC is commonly available to software developers, who would be crazy not to take advantage of the provided optimizations.

If the author wants to argue that SPEC benchmarks are not relevant because they don't compare to modern applications or that current PPC compilers aren't very good then be my guest. These are good points, which is why SPEC is looking for new benchmarks right now. But there is no defense for how Apple crippled that Dell system to get benchmark figures they liked.

Please
by Athemeus on Thu 10th Jul 2003 19:51 UTC

"That is the point, the CISC instructions are fat and require the CPU to break them down further in micro-code. In addition CISC instructions are not uniform in size which means that scheduling, out of order processing, decoding, etc. are a lot more expensive."

No, not in addition. You've got it backwards. Micro-ops exist to get around these issues. The way micro-ops are done in hardware, the translation process is not the bottleneck slowing down the CPU.
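To picture it with a toy model (purely illustrative — real decoders are hardware, and the micro-op encodings below are invented, not actual x86 micro-ops): the front end cracks each variable-length CISC instruction into a short sequence of uniform micro-ops, and everything downstream only ever schedules those uniform pieces.

```python
# Toy illustration of a CISC front end cracking variable-length
# instructions into uniform micro-ops. The micro-op names and the
# decode table are made up for the example.

DECODE_TABLE = {
    "ADD [mem], reg": ["load tmp, [mem]", "add tmp, reg", "store [mem], tmp"],
    "ADD reg, reg":   ["add reg, reg"],
    "INC [mem]":      ["load tmp, [mem]", "add tmp, 1", "store [mem], tmp"],
}

def decode(program):
    """Expand each CISC instruction into its micro-op sequence."""
    uops = []
    for insn in program:
        uops.extend(DECODE_TABLE[insn])
    return uops

# Two CISC instructions become four uniform micro-ops; the out-of-order
# core behind the decoder never sees the irregular instruction formats.
uops = decode(["ADD [mem], reg", "ADD reg, reg"])
print(uops)
print(len(uops))  # 4
```

The decode table is the fixed up-front cost; once paid, scheduling and out-of-order execution operate on fixed-size micro-ops, which is why the translation step is not the bottleneck.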

RE: Advantage PowerPC
by Roy on Thu 10th Jul 2003 20:08 UTC

Maybe I'm missing the point of your comment, but I don't understand how this is different from the x86 world. The Pentium, Pentium Pro/2/3, Pentium 4, Pentium M, Athlon/Athlon XP, and the Via C3 all run compatible code, yet have all the same differences. This is true of ANY ISA with multiple generations of products. No doubt the PowerPC ISA is superior to the x86 ISA, but I don't see what that has to do with most of your comment.

Advantage Consumer
by Daniel on Thu 10th Jul 2003 21:15 UTC

IBM has a definite advantage in its ISA in that it doesn't have to worry about binary compatibility with ancient procs (like the i386). Yet this is the very reason why x86 is king: backwards compatibility sells chips. End users are eager to see better performance from existing applications, and developers prefer it because it doesn't disrupt the tool chain (compilers, profilers, debuggers, linkers, etc). Developers can gradually improve their apps as the user base migrates to the new iteration of chips. The P4 is heavily reliant on compiler vectorization and instruction scheduling, and as a result initial P4 performance was poor due to the immaturity of the compilers. There is an obvious tradeoff in compiling for the most common architecture, because releasing apps for specific x86 architectures is a pain (granted, Red Hat and other Linux vendors do this with RPMs).

I'm a big fan of the PowerPC architecture, but it's not the second coming. It will give Apple a new lease on life in the PC market and provide Intel with its most serious non-x86 competition in years. My biggest question about the Apple benchmarks is why Apple failed to compare its top-end workstation (and they are workstations, people) with the top-end AMD x86-64 workstations. After all, Athlon MP workstations were demolishing the very best that Apple had in G4 workstations, and I suspect the same would apply to x86-64 workstations. Apple's reluctance might be attributable to AMD's relationship with IBM. I like Apple, but its highly selective and highly suspect benchmarking techniques are discouraging (not to mention all of the Mac addicts who would do well to read some books by Patterson and Hennessy).

Finally, for a good article on the RISC/CISC debate take a look at: http://www.embedded.com/story/OEG20030205S0025

Re: another article hijacked
by drsmithy on Thu 10th Jul 2003 22:10 UTC

Smithey look at the chart a little closer. It shows righ there that the G4 is running at 1GHZ. Apple uses the MPC7455 and MPC7457. Why is it so hard to admit you are wrong and in turn learn something? Now if you were to mention that G4s at 1.24 and 1.42 GHZ are overclocked then there might be some truth to your statement.

Which part of the ">" in "[...] a G4 at >1Ghz" are you having trouble with ?

Re: Removing compiler from factors indeed
by blah on Thu 10th Jul 2003 22:29 UTC

For those defending the article, I'd like you to read these two paragraphs:

"By using GCC Apple removed the compiler from the factors effecting system speed and gave a more direct CPU to CPU comparison. This is a better comparison if you just want to compare CPUs and prevents the CPU vendor from getting inflated results due to the compiler.

x86 CPUs may use all the tricks in the book to improve performance but for the reasons I explained above they remain inefficient and are not as fast as you may think or as benchmarks appear to indicate."

Excuse me while I remove the bias from the factors effecting (sic) this post. okay, I'm done, wow that was easy. glad that's over with.

PPC 970 versus x86
by KoenigMKII on Thu 10th Jul 2003 23:42 UTC

I found that article by Nicholas Blachford a very informative read.

IBM's web site confirms your power dissipation figures for the various 970 chip frequencies. Impressive work from Big Blue.

My only problem with the article is not the answers it provides in comparing PPC 970 v x86 (IA-32) but why make such an odd comparison in the first place?

Intel knows that IA-32 is nearing the end of its useful life; that is why they spent so much time and money together with HP to create the VLIW IA-64 architecture.

The correct comparison is Itanium 2 (and especially the upcoming "Deerfield" version) versus PPC 970. The reason is that these two have more similar 64-bit addressing, memory bandwidth, SMP scalability and floating point math capabilities, and both are manufactured in 130 nanometer copper processes.

Deerfield will have a much more similar cache size to PPC 970, but we will have to wait until the Intel chip comes out before the complete system testing could be done.

Why doesn't Apple compare the Dual G5 (970) Mac against the already-shipping HP ZX-6000 Dual Itanium 2 workstation??

I also think your implication about the IA-32 software base not making an eventual migration to IA-64 does not take into account the "IA-32 execution layer" (code name: btrans) feature coming out for linux-64 and Windows XP 64-bit.

Intel claims that IA-32 software run on an IA-64 system via the "IA-32 execution layer" will provide a GHz equivalent, i.e. a 1.5 GHz Itanium 2 will run IA-32 software at the same speed as a 1.5 GHz Pentium 4. That's good enough IMHO.

When memory technology improves to the point when an 8GB memory module is the minimium size commonly produced, then that is the time for the 64-bit chips to take over in new desktop computer systems sold.

I think that by the time that happens, the current IA-64 chip together with the "IA-32 execution layer" will be very, very fast indeed. Then IA-32 software will migrate with little or no resistance.

If necessary, Intel's cash reserve will be partly used for differential pricing to boost IA-64 and kill off IA-32, the same way it killed off the 486 with aggressive Pentium pricing. Since it's a battle Intel cannot afford to lose, they WILL go all out.

Then IMHO we have a "Final Unity" in which Intel can drop IA-32, and go all out on IA-64 production in an attempt to crush the PPC with sheer economy of scale.

At the end of the day, software availability is the key factor. PPC is a great technology, but without plentiful native software it's just another Betamax.



Koenig
by stingerman on Fri 11th Jul 2003 00:08 UTC

The Itanium 2 should be compared to the Power4 not the 970. The Power4 and the Itanium 2 are the natural competitors. And they have been compared and the Power4 whips the Itanium in every test. Nothing emotional here, those are just the facts. In order for the Itanium to match the Power4's performance, they require a 128-way Server against IBM's 32-way server.

Really, AMD has not released a competitor for the 970 either. The Opteron competes against the Power4 and Itanium families. The Athlon 64 is the competing product, and that will not be out till mid-September; I hope it does provide healthy competition for the 970.

Software availability is a key factor, and since there is no Windows XP 64 yet and Microsoft has no plans to develop a 64-bit desktop OS, with Longhorn still being 32-bit, it looks like the Athlon 64 will be running some form of Linux. Even if it is capable of running a modified version of Windows XP, it will still be crippled to the IA-32 instruction set. The 970 contains a special 64-bit instruction bridge that allows OS X to access the entire 64-bit address space and all of the 970's instruction set, even the 64-bit instructions. In addition, 64-bit apps will run fine on Apple's current OS X version (10.2.7) and on 10.3 (Panther).

When it comes to software, OS X has an abundance of native software and all the Unix programs you can shake a stick at; it is also compatible with OS 9 software and, with a software emulator, runs most Windows software. And if the software emulator bothers you, that is exactly what Intel is doing with Itanium to run 32-bit Windows apps!

OS X also has some of the most innovative software around today in almost every space. Apple's development frameworks have stimulated one of the most exciting development environments, popping out software faster than you can blink. Stay up with the times.

This is perhaps the most thoughtful, knowledgeable, fair and balanced article that I have come across on this subject. This is an excellent article and a must-read.

Other fanatical, totally biased and propagandised articles don't help me as a consumer; they just lead me astray. This objective and incisive article is what I need.

Thank you for a great job on this article. I hope there will be more objective pieces like this in the future.

Marketing Monkey
by Dan on Fri 11th Jul 2003 00:31 UTC

I'm not even here to defend the Intel and MS world, but this article is ridiculous. All he did was hit up corporate websites and quote them, and even worse, he only hit up the websites of people who agreed with him in the first place (that RISC is great). "The IBM site says" - my god, man, that's called marketing; there's nothing factual about it. Writing an entire paper based on marketing is the most absurd thing I've ever seen. And the author REALLY CLEARLY has a bias going into this article; it's glaring from the beginning.

Here's a tip. Get out your stopwatch and perform common tasks on the most powerful Mac and the most powerful PC. Load a web browser, load a web page, load a document, etc., and you'll notice something: they will perform very similarly, yet one costs a good lord lot more than the other, and one runs a good lord lot more software than the other. It's just economics: the tasks are performed similarly, but one is a LOT less expensive.

And finally, those "small groups" who really need a lot of CPU power aren't small at all. Office/Internet software has run just fine since the 1GHz days. Games are driving the x86 world for the most part, and gamers are not a small group at all (games outsold the movie industry, counting just software sales).

In conclusion, I can't believe you wasted your time writing this piece of propaganda instead of sticking to the technical +'s and -'s of each one. If you had stuck to the technical differences this could have been a mildly interesting read, even if it's just a rehash of what other people have already written. Throwing in your blatantly biased opinion turned this into a grade-school report, where the child copies quotes from the internet and pastes them as fact. Shame on you.

Can I buy a G5 cpu & mobo?
by er pejcao frito on Fri 11th Jul 2003 00:39 UTC

Whatever is better, whether RISC or CISC, why can't I go to some hardware shop and buy a RISC CPU + mobo?
If I'd like to buy a RISC arch, I'll have to look at SGI (MIPS) or apple.com (IBM/Motorola), which ARE extremely expensive!
Might it happen that Motorola or IBM sell their CPUs like an AMD/Intel one? What 'bout mobos?

Bad stingerman!
by Athemeus on Fri 11th Jul 2003 01:12 UTC

"Software availability is a key factor and since there is no Windows XP 64 yet and Microsoft has no plans to develop a 64-bit desktop OS, with longhorn still being 32-bit, it looks like Athlon 64 will be running some form of Linux."

So which 64 bit Windows OS is it that's not in the making? This one for x86-64:
http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_88...

Or this one for IA-64:
http://www.microsoft.com/windowsserver2003/64bit/overview/default.m...

And the x86-64 Windows has been spotted running on a demo machine. How functional and stable it is, who knows, but it does exist.

Please stop making statements about things without doing even basic research.

CHRP
by jefro on Fri 11th Jul 2003 02:19 UTC

IBM was pushing CHRP for the PPC line. The BIOS is an open standard and the board can be an open standard. The only thing needed is a company other than IBM or Apple to build one. OSNews posted a photo of a PPC booting a few weeks ago, and the BIOS said CHRP enabled. Hummmm.

Actually, an Alpha vs G5/970 comparison would be better in terms of real numbers. I think we all suspect the PPC would trounce the Alpha. Shame; I liked Alphas, but with no native OS the design couldn't win.

who cares about architecture?
by money on Fri 11th Jul 2003 02:26 UTC

Unless you are an EE developing processors, you really shouldn't care about the architecture. To argue one way or the other is pointless. Want to know why? Because NONE of you have access to the proprietary multi-billion-dollar research and development that corporations have invested to determine the most optimal way to process information. The fact remains that different companies took different development paths, and each architecture has its strengths and weaknesses when compared. People drum up the relative importance of some strengths and weaknesses while discounting others to make one seem better than the other. The fact remains that they are both fairly equal. If one were vastly superior in real-world situations the industry would move to it, but that is not the case. That is also why Apples and x86 both perform about the same in the real world, with differences due mainly to development decisions and not architecture, i.e. heat vs power vs performance vs die size vs intended market, etc. You need to decide what to buy based on real-world performance and price. Because of economies of scale, and because Apple is 100% proprietary, x86 will remain the cost/performance leader.

Things that could've made it better
by DirtyPunk on Fri 11th Jul 2003 05:38 UTC

It really needed comparisons of things like cache efficiency, branch prediction, etc. (some of the major determinants of performance on modern CPUs). Also, it used the old model of the RISC vs CISC argument, which has long been doubtful (CISC instructions are more expensive to decode, but RISC often has a higher instruction count). It didn't go into things like register renaming (if you're going to compare register counts, covering register renaming on x86 is very important, as it gets around the architectural register limit).

Also, MMX uses 64-bit registers (it re-uses the floating point registers), but SSE and SSE2 use 128-bit registers and don't reuse them.

Finally, I would've liked to have seen a few words about VLIW.

Otherwise good and fairly well done - especially the power and heat consumption bits.
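To sketch what register renaming does (a toy model with made-up numbers, not any real CPU's renaming scheme): the 8 architectural x86 registers are mapped onto a larger physical register file, so two writes that reuse the same architectural name no longer serialize on a false (WAW/WAR) dependence.

```python
# Minimal register-renaming sketch. Architectural register ids are
# remapped onto an unbounded supply of physical registers via a
# register alias table (RAT); each write gets a fresh physical reg.

from itertools import count

def rename(instructions, num_arch_regs=8):
    """instructions: list of (dest, src1, src2) architectural reg ids.
    Returns the same instructions rewritten in physical reg ids."""
    phys = count()                                       # fresh physical ids
    rat = {r: next(phys) for r in range(num_arch_regs)}  # register alias table
    renamed = []
    for dest, src1, src2 in instructions:
        s1, s2 = rat[src1], rat[src2]  # read the current mappings first
        rat[dest] = next(phys)         # every write allocates a new phys reg
        renamed.append((rat[dest], s1, s2))
    return renamed

# Two writes to architectural reg 0 land in different physical registers,
# so the second no longer has a false dependence on the first.
out = rename([(0, 1, 2), (0, 3, 4)])
print(out)
assert out[0][0] != out[1][0]
```

This is why the raw count of architectural registers understates what a modern x86 core actually has to work with.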

RE; Koenig
by Encia on Fri 11th Jul 2003 05:45 UTC

@stingerman
>Really AMD has not released a competitor for the 970 >either.

With the x86 market, product targeting occurs at the retail end, e.g. ASUS nForce3 + Opteron 1xx series. The K8 Opteron ultimately replaces the K7 Athlon MP, i.e. AMD's multi-processor product line. The Athlon 64 ultimately replaces the K7 Athlon XP, i.e. AMD's uniprocessor product line.

As you can see, AMD's product lines don't quite map 1-to-1 onto the PPC 970 (which is multiprocessor-capable).

> Microsoft has no plans to develop a 64-bit desktop OS,

Refer to http://www.neowin.net/comments.php?id=12389
AMD64 support will be integrated in the next Windows Server 2003 X86-32 edition (a.k.a NT5.2) SP1 update**. **Note that WS2K3 includes a desktop 'Windows XP' GUI/WIMP system.

>it looks like Athlon 64 will be running some form of >Linux.

False, AMD64(X86-64) support will be integrated in the next Windows Server 2003 (a.k.a NT5.2) SP1 update**.

Try again...

My 2 cents worth
by Robo on Fri 11th Jul 2003 05:47 UTC

I have programmed both x86 and PPC in assembly language, and the PPC is by far superior.

Ultimately I think it's all about the almighty dollar. While I would prefer a nice PPC-based system, I cannot afford one.

If there was some very rich company.. they could start churning out decent spec PPC systems for cheap.. eat a huge loss for a few years.. then have a decent userbase. Further systems would become cheaper, etc.

That's how a lot of companies do business.. eat loss today.. to maximize profit tomorrow..

Anyway, agree or disagree that's my 2 cents worth.

Re: Robo
by drsmithy on Fri 11th Jul 2003 06:35 UTC

If there was some very rich company.. they could start churning out decent spec PPC systems for cheap.. eat a huge loss for a few years.. then have a decent userbase. Further systems would become cheaper, etc.

They wouldn't need to be rich. Generic PPCs could be "churned out" even in (relatively) low quantities for little more than x86s cost, as the only really different parts would be the CPU and motherboard chipset.

Macs aren't expensive because they cost (appreciably) more to make. They're expensive because it's a monopoly market.

The problem with selling "generic PPC" machines is not making them cheaply, it's finding someone to buy them.

Macintosh Propoganda
by mochz R on Fri 11th Jul 2003 06:50 UTC

"RISC may be technically better but it is held in a niche by market forces which prefer the lower cost and plentiful software for x86. Market forces do not work on technical grounds and rarely chose the best solution."

"Rarely choose the best solution..."

Just because RISC is more elegant doesn't mean it's the best solution. The economy operates on a performance/price basis. That's why SUVs sell in America. Granted, I'm not an SUV fan, but they do "perform" for soccer moms. If the inner workings are less "elegant," it doesn't make it inferior if it is cheaper to produce and, in the end and actual results, it performs.

Why would you buy a Macintosh with less available software, much more expensive hardware, and "debatable" but definitely still slower speed, when you can get a cheap, fast, and easy-to-use (tons of software) PC? It just doesn't make any sense. Apple really needs to start licensing again if they really want Macs to flourish. Cost is a major prohibitive factor in any product, so Apple must start licensing again.

How many people really need a computer that's even over 1GHz?
by mochz R on Fri 11th Jul 2003 06:58 UTC

"How many people really need a computer that's even over 1GHz?"

Is it just me or...

There is a massive difference between PCs at 1 GHz, 2 GHz, and 3 GHz. Granted, the inner software and hardware may be comparatively unsynchronized and unorganized, and some may say extremely inefficient, but the simple fact is that end-user benchmarks show there is still a huge difference.

Even without Hyper-Threading and such, there are a huge number of applications that tax today's modern computers. Games might be the first to pop up for many of today's adolescents, but what about audio/video encoding/decoding? Speech recognition? Scientific factor analysis, theoretical math, physics? The world is moving to more complicated tasks than just word processing and email. Universities lead research. Companies make breakthroughs.

Innovation gives people choices. If computers were still at 1 GHz, a 1 GHz computer would cost nearly $2000. It's unlikely that companies would shell that out for their employees. Top-level innovation trickles down to lower-rung tasks, and it only makes things more efficient.

The phrase "How many people really need a computer that's even over 1GHz?", I find, is quite naive.

And Apple, I'm sorry, those IBM 970 systems you're selling are not desktop computers; those are workstations and servers. And they're not the fastest either. Check your benchmarking ethics again.

Where do you get your information?
by Mike on Fri 11th Jul 2003 07:58 UTC

I find this article to be poorly written, researched and reasoned. If OSNews wants to be taken at all seriously they’re going to have to raise their standards for submitted articles.

computer for the user
by NIck_t on Fri 11th Jul 2003 10:27 UTC


Everyone comes on here dissing this article with their own naive opinions. It doesn't matter which is faster in theory; it matters which works best in the real world. I am an audio engineer and I have never come across an x86-based machine that runs Pro Tools as well as even my 400 MHz G4 PowerBook.

Nick's 2 cents - and mochz R, the G5 is built to run as either, and why don't you read over the benchmarks again, cause I think you'll find that the 970 (G5) performs a serious amount of whoop-arse on its x86 competitors.

PPC Overclock
by DO on Fri 11th Jul 2003 12:26 UTC

My tiny contribution :
- Great article

- Motorola can probably sell higher-clocked CPUs to selected companies (read: Apple), as well as custom extended-temperature-range PPC CPUs (70°C ambient!) to some customers (read: my corp.).
Remember that PPC is not reserved for Apple (far from it), whereas most x86 chips are used in PC-compatible platforms.

Good article but...
by Jason Lockhart on Fri 11th Jul 2003 14:17 UTC

The content of this article was great. The bad grammar, spelling and sentences that make no sense at all hurt its credibility. How hard is it to read something before you post it or send it out? I am very knowledgeable in the technical areas covered in this article and I found it painful to read.

Again, the content was fantastic. The presentation was lacking.

- Jason

______________
Jason Lockhart
Director of Research Computing
College of Engineering
Virginia Tech

Higher clock speeds and heat output...
by 6502 on Sat 12th Jul 2003 04:01 UTC

I could be wrong here, but isn't there at least some connection between higher clock speeds and higher heat output?

I seem to remember reading somewhere that if a G4 were to run at (say) 3 GHz, then its heat output would be almost as high as a 3 GHz P4's.

But hey, I might be wrong, it wouldn't be the first time and I doubt it'll be the last...
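There is a connection: the usual first-order model for dynamic power is P ≈ C·V²·f, and since higher clocks typically also require higher voltage, heat grows faster than frequency alone would suggest. A rough sketch (all constants here are illustrative assumptions, not measured values for any real chip):

```python
# First-order dynamic power model: P = C * V^2 * f.
# The capacitance, voltages, and frequencies below are made-up
# illustrative numbers, not specs of any actual processor.

def dynamic_power(cap_farads, volts, freq_hz):
    """Dynamic switching power in watts."""
    return cap_farads * volts**2 * freq_hz

base = dynamic_power(20e-9, 1.3, 1.0e9)  # hypothetical ~1 GHz part
fast = dynamic_power(20e-9, 1.5, 3.0e9)  # ~3 GHz part with a voltage bump
print(f"{base:.1f} W -> {fast:.1f} W ({fast/base:.1f}x)")
# -> 33.8 W -> 135.0 W (4.0x)
```

Tripling the clock with a modest voltage increase roughly quadruples the dissipated power in this model, which is why a hypothetical 3 GHz G4 would indeed run much hotter.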

Higher clock speeds and heat output
by JtX on Sat 12th Jul 2003 11:23 UTC

http://www.eetimes.com/story/OEG20030623S0092

Each G5 dissipates 97 watts @ 2 GHz

Intel Pentium 4 3.2 - 82 watts
AMD Barton 3200+ - 76.8 watts

And that wasn't the only mistake in this article...

Regarding Power
by stingerman on Sun 13th Jul 2003 04:51 UTC

From eWeek article reference:

Power consumption of the G5 looks to be about three-fourths that of a P4 when clocked for comparable performance—the G5's efficiency and speed may quickly spread its use from Apple's top tier throughout the rest of its line and may also make Apple a top-tier option for even the most performance-conscious users.

Bad Athemeus!
by stingerman on Sun 13th Jul 2003 05:02 UTC

>>So which 64 bit Windows OS is it that's not in the making? This one for x86-64: http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_88... >>

Read my original post: "Microsoft has no plans to develop a 64-bit DESKTOP OS" (all caps added for emphasis). Everyone knows about the 64-bit server version, but there is no Windows desktop version! And there doesn't seem to be one planned for the near future. The nice thing about the 970 (G5) processor is that IBM added bridge instructions to allow OS X to be ported quickly, and OS X can physically address the entire 64-bit memory space, though its VM will still be 32-bit for now, with the added advantage that each 4GB RAM bank can be addressed by each processor separately at the same time. Apps can use every 64-bit instruction available in the non-supervisor mode of the processor (supervisor mode being OS-exclusive), which is the only mode non-OS apps have access to anyway.

RE;Bad Athemeus!
by Encia on Sun 13th Jul 2003 10:36 UTC

>Read my original post: " Microsoft has no plans to >develop a 64-bit DESKTOP OS" (All caps added for >emphasis).
Win2K3 runs DirectX 9 games pretty well...
Its code base is actually similar, i.e. one could transfer some of Win2K3's .msc applications and run them on WinXP (due to the NT 5.x code base).

Win2K3 is a full Windows XP desktop OS with advanced server features. These features can be turned off.

>Everyone knows about the 64-bit Server version
So you think Linux is a desktop OS?

The main difference between Win2K3 (a.k.a. NT 5.2) and WinXP (a.k.a. NT 5.1) is its Internet-related services and the number of usable processors.

>but there is no Windows desktop version!
How could you know this?

Refer to "http://www.betanews.com/article.php3?sid=1048903653" for Windows XP 64bit (for Intel's Itanium 2(IA-64) processor).

Refer to "http://pc.watch.impress.co.jp/docs/2003/0506/winhec1.htm" (In Japanese) for Windows XP 64bit (AMD's AMD64/X86-64).

Focus on "http://pc.watch.impress.co.jp/docs/2003/0506/winhec03.jpg". This picture shows a beta version of Windows XP AMD64/X86-64 running UT2003.

Try again...

Why Bother!
by Azz on Sun 13th Jul 2003 12:56 UTC

Why? At the end of the day I will be playing Half Life 2 on a PC - And that as they say is THAT!

Thank You Encia
by Athemeus on Mon 14th Jul 2003 21:55 UTC

Stingerman, I just ask that you stop and think about what you are saying before you say it. Do you really think AMD would go ahead with a 64-bit desktop chip without getting Microsoft onboard from the get-go? Statements like that really hurt the credibility of the arguments you are making.

Encia and Athemeus
by stingerman on Tue 15th Jul 2003 02:26 UTC

Some sort of circular reasoning. The point was desktop, not server, and it still is. How much does Win2K3 cost? You seem to have left that out (conveniently). I like AMD and I hope they succeed with the Athlon 64. As I said earlier, we will see more about the Athlon 64 in September. However, the issue is the accuracy of this article and the conclusions the author reaches.

This article is dead on and it clearly illustrates the advantages of the PowerPC Architecture and in particular the 970 (G5) implementation. If anything, the G5 will only cause Intel and AMD to better their products.

@stingerman
by encia on Tue 15th Jul 2003 08:20 UTC

>Some sort of circular reasoning. The point was Desktop >not server, and it still is.
The mention of Linux led to my mentioning Win2K3. Note that Mac OS X was based on a server OS. What marks the divide between a desktop OS and a server OS?

>However, the issue is the accuracy of this article and >the conclusions the author reaches.
Not when you claimed "Microsoft has no plans to develop a 64-bit desktop OS". Such a claim is unsupported. From my references above (without breaching any NDAs), one can claim, "Microsoft has plans to develop a 64-bit desktop OS".

>If anything, the G5 will only cause Intel and AMD to >better their products.
Not quite; the current PowerPC 970 doesn't even register in the mainstream desktop market**. Only AMD is considered a direct threat to Intel's bread-and-butter markets.
**Not considered a serious threat, since the PowerPC market is missing the vital distribution/logistics/ISV/support/education/knowledge-base infrastructure for mainstream competition.

Apple never has the guts to post an AthlonXP 3200+/nForce2 (current product) vs PowerPC 970 (future product) comparison, let alone Opteron/Athlon64-Socket940/nForce3 vs PowerPC 970.

Refer to "http://www.overclockers.com/tips00408/" for an AthlonXP/nForce2 match against Apple's PowerPC 970.

David Cutler's statements regarding AMD64
by encia on Tue 15th Jul 2003 09:06 UTC

>Stingerman, I just ask that you stop and think about what
>you are saying before you say it. Do you really think AMD
>would go ahead with a 64-bit desktop chip without getting
>Microsoft onboard from the get-go? Statements like that
>really hurt the credibility of the arguments you are
>making.
To add to your position;

Click to
http://www.amd.com/us-en/Weblets/0,,7832_8366_7823_8718^7839,00.htm...

The statement below was given by David Cutler (Senior Engineer at Microsoft, the father of Windows NT).

"Over the last ten years, the applications we've put on PCs have grown. They've grown in size and computational demands. And 32-bits of address space just isn't enough anymore. The size of databases has grown to the point where we just can't get the performance out of the 32-bit address space that we need to get to continue to support these applications. Over the past few years, we've added a few features to extend the life of the 32-bit system, but it's not enough, and we need to move to 64-bits to continue to support these large databases and high-end desktop applications. Over the past couple of years, I've been working with AMD on their next-generation K8 processor. What's really exciting about the K8 is that it has both 32-bit and 64-bit capabilities. Furthermore, the 64-bit systems will be able to run the existing 32-bit applications so that will protect customer's investments in software and hardware. Currently, we have the 32-bit Windows XP and Windows 2000 server systems running on the K8 for silicon, which has proven to be very stable. We also have a developmental 64-bit version of Windows XP and Windows.net server running on this very same hardware system. I'm really excited about this chip." - David Cutler of Microsoft.

@stingerman

There's my proof regarding Microsoft's plans for the 64-bit desktop market. A video file tells a billion words.

encia get back on subject
by stingerman on Thu 17th Jul 2003 08:01 UTC

Still, you sidestep the issue of the desktop. That is all I'm talking about, and in the context of this article it was desktop vs desktop. So when you look at the high-end desktop processors, we are talking about the 970 vs the Athlon 64 vs the Xeon. Recent benchmarks show the 2 GHz Athlon 64 barely beating a Pentium at 2.6 GHz, so that suggests that Apple has the advantage.

I doubt we will see a 64-bit version of XP anytime soon, but we will probably see the 32-bit version shimmied to work on the Athlon64. If you want to compare server systems, the Power4 beats the Itanium and Opteron hands down. The PPC architecture is simply a more modern design and IBM engineers have a lot of flexibility in implementation while maintaining binary compatibility with other PPC implementations based on completely different designs. Intel engineers do not have the same luxury.

Code Density
by GloomY on Sat 19th Jul 2003 12:22 UTC

I would like to carry on the argument of Anton Klotz in comment 56 that x86 has better code density due to its variable-length instructions. Of course this makes instruction decoding more difficult, but it yields some other advantages for x86 which have not been mentioned by Nicholas:
First, the memory bandwidth required for instruction fetch from main memory is reduced (Anton Klotz pointed that out). Secondly, the hit rate for an L1 instruction cache and L2 cache of the same size is noticeably higher for x86, because of the greater code density. Thus, performance for RISC architectures is lower due to more cache misses when fetching instructions. This means that RISC processors need larger caches to achieve the same hit rate when executing the same (!) type of application as x86. As everybody knows, caches eat up an enormous number of transistors on the die. Therefore Nicholas' argument that you can save die space due to less complex decode circuits is simply ridiculous. The high code density of x86 is one of the greatest advantages of that architecture, not a weakness.

What I'm missing here is a fair consideration of both architectures. Only counting the advantages of RISC and the disadvantages of CISC is not what I call a good article...
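To put the cache-footprint point in back-of-envelope numbers (the average instruction sizes here are assumed for illustration, not measured values for any real workload):

```python
# Back-of-envelope illustration of code density vs I-cache capacity.
# The 3-byte x86 average and the 32 KB cache size are assumptions
# chosen only to show the mechanism.

ICACHE_BYTES = 32 * 1024

avg_insn_bytes = {
    "x86 (variable length)": 3.0,  # assumed average encoding size
    "RISC (fixed length)": 4.0,    # e.g. 32-bit PowerPC instruction words
}

for isa, size in avg_insn_bytes.items():
    insns = ICACHE_BYTES / size
    print(f"{isa}: ~{insns:.0f} instructions fit in the I-cache")
```

With these assumptions the fixed-length cache holds 25% fewer instructions for the same transistor budget, which is exactly the mechanism behind the higher instruction-fetch miss rate described above.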