Linked by Thom Holwerda on Sun 3rd Jan 2010 20:32 UTC
General Development Here's something you probably don't know, but really should - especially if you're a programmer, and especially especially if you're using Intel's compiler: Intel's compiler deliberately and knowingly cripples performance for non-Intel (AMD/VIA) processors.
Order by: Score:
After many years...
by big_gie on Sun 3rd Jan 2010 20:46 UTC
big_gie
Member since:
2006-01-04

Wow, I remember reading about this more than 5 years ago. I even used a (perl?) script that removed that check from a compiled library or the compiler itself. I still use Intel's compiler, but each time I use it I'm aware that it's a "Defective Compiler" on anything other than Intel. Luckily for me (or for Intel...) most development was on Intel machines.

I kind of gave up hope on a real fix for that. I'm pleasantly surprised to see Intel might be forced to fix it! It is good news.

On the other hand, since the time I first read about this, GCC has come back from crappy to almost on par with icc. And it's always a good thing to compile/run on many compilers; you'd be surprised to see how many errors slip through that a particular compiler accepts!

Reply Score: 2

CPUID flags
by bhtooefr on Sun 3rd Jan 2010 21:01 UTC
bhtooefr
Member since:
2009-02-19

Would it be illegal for AMD and VIA to just put "GenuineIntel" in the CPUID, and use another field for AuthenticAMD and CentaurHauls?

I look at it as comparable to Internet Explorer putting "Mozilla/x.x (compatible; MSIE x.x)" as the User-Agent. (Or Opera's "Mozilla/x.x (compatible; MSIE 6.0; Opera x.x)" user-agent from a few years back.)

Reply Score: 1

RE: CPUID flags
by WereCatf on Sun 3rd Jan 2010 21:08 UTC in reply to "CPUID flags"
WereCatf Member since:
2006-02-15

I highly doubt it'd be legal as "GenuineIntel" is most likely a registered trademark and all. Besides, doing that wouldn't help for any of the processors already out there, only for new ones.

This compiler "defect" has been a really shitty move from Intel, and it gives me yet another reason to stay away from their hardware. Just for the sake of lining their own pockets, they intentionally cripple the performance of millions of end users all around the world.

Reply Score: 3

v RE[2]: CPUID flags
by tylerdurden on Sun 3rd Jan 2010 22:45 UTC in reply to "RE: CPUID flags"
RE[3]: CPUID flags
by cerbie on Mon 4th Jan 2010 01:16 UTC in reply to "RE[2]: CPUID flags"
cerbie Member since:
2006-01-02

Microsoft's compiler supports them well.
GCC supports them well.
LLVM supports them well.

Binaries made with the Intel compiler are AMD's only issue, and making their own compiler would not fix that problem.

Reply Score: 4

RE[3]: CPUID flags
by systyrant on Mon 4th Jan 2010 04:50 UTC in reply to "RE[2]: CPUID flags"
systyrant Member since:
2007-01-18

Try getting those developers to use it.

They never should have put it in place to begin with. Personally, I think they ought to have the shit slapped out of them.

Reply Score: 2

RE: CPUID flags
by bile on Sun 3rd Jan 2010 21:09 UTC in reply to "CPUID flags"
bile Member since:
2005-07-08

It shouldn't be, though I suspect Intel would claim it was trademark or copyright infringement, or perhaps fraud. So long as AMD, VIA, or whoever was clear to the customer that they did so and/or allowed them to toggle the 'feature', it should be allowed.

Reply Score: 1

RE: CPUID flags
by Drumhellar on Sun 3rd Jan 2010 21:28 UTC in reply to "CPUID flags"
Drumhellar Member since:
2005-07-12

It would undoubtedly be a violation of the x86 licensing agreement AMD and Via have with Intel.

Reply Score: 3

RE: CPUID flags - Palm
by jabbotts on Sun 3rd Jan 2010 23:23 UTC in reply to "CPUID flags"
jabbotts Member since:
2007-09-06

I think AMD spoofing Intel's hardware identifier is much closer to Palm spoofing Apple's hardware identifier. In both cases, the reason for needing the spoof is pretty scummy.

The real solution would be to fix icc so that it's no longer leveraged to impose Intel chip lock-in. Apple at least has some grounds for bundling iTunes/iPod, though it really should be a music player separate from the media manager if it were really about the end user. Intel modifying a generic code compiler to cripple non-Intel chips; I'm not seeing any grounds for that.

Actually, I think the developers out there who do have to work with icc should all call into Intel once a week, if not once a day, until it's fixed. Overwhelming the call center should eventually get the point across.

Reply Score: 3

RE[2]: CPUID flags - Palm
by Scali on Sun 3rd Jan 2010 23:29 UTC in reply to "RE: CPUID flags - Palm"
Scali Member since:
2010-01-03

Intel modifying a generic code compiler to cripple non-Intel; I'm not seeing any grounds for that.


That's a misunderstanding and/or misrepresentation of the facts.
Intel's compilers are Intel's own work, and never were 'generic code compilers', they were always aimed strictly at Intel's own products.
Intel didn't 'modify' anything, nor did they 'cripple' anything.
Optimizing for anything other than Intel's own CPUs was just never part of the goals of the Intel compiler suite.
You can't 'modify' or 'cripple' something that has never been different anyway.

Reply Score: 0

jabbotts Member since:
2007-09-06

Intentionally using misinformation is for kids.

I'm open to my misunderstanding the situation though. Still reading along through the various discussions with an open mind.

(now, a thread between the very first poster, the one with a shovel and yourself could be interesting given the poster's history of modifying icc.)

Reply Score: 2

Scali Member since:
2010-01-03

Intentionally using misinformation is for kids.

I'm open to my misunderstanding the situation though. Still reading along through the various discussions with an open mind.

(now, a thread between the very first poster, the one with a shovel and yourself could be interesting given the poster's history of modifying icc.)


Short version:
Intel Compilers are a closed-source product, not based on any other products.
Hence they are not other products modified by Intel, and other people cannot modify them either.

What is described above, with perl scripts, is modifying some of the code that the Intel compiler generates for checking CPUs.
So it's not the Intel compiler itself that is modified, but rather some of the code that it generates.

Reply Score: 1

RE[3]: CPUID flags - Palm
by Bill Shooter of Bul on Mon 4th Jan 2010 21:51 UTC in reply to "RE[2]: CPUID flags - Palm"
Bill Shooter of Bul Member since:
2006-07-14

I understand your point now, after having read all of your other posts in this topic. You could have saved us all a lot of confusion by clearly explaining in this comment (your first one) why simply checking feature-specific flags might not be the best way to optimize for a processor.

You're actually two steps ahead of most posters, but because of that you seem like you're a step behind.

Reply Score: 2

RE[4]: CPUID flags - Palm
by Scali on Tue 5th Jan 2010 09:02 UTC in reply to "RE[3]: CPUID flags - Palm"
Scali Member since:
2010-01-03

You're actually two steps ahead of most posters, but because of that you seem like you're a step behind.


Story of my life ;)
People don't seem to be listening anyway. Agner Fog is actually saying the same thing as I am, but it seems that people skip over the details of family/microarchitecture selection and cry foul.
They just want to hate Intel, they don't want to understand the problem.

Basically I'm saying just three things here:
1) I agree that Intel's CPU dispatching isn't the best solution for non-Intel CPUs.
2) Since Intel's compiler doesn't have a significant market share, I think it's Intel's right to do what they do, and no government or other organization should have the legal pull to change that. It would basically mean that any company could sue any competitor over anything they don't like, and I don't think anyone should want that.
3) I think it's a big problem that tech/news sites such as this have editors who don't check their facts properly, and just post false accusations and lies, and even try to defend them when someone points them out. The editor of this article has clearly not understood Agner Fog's article entirely, and has drawn false conclusions, and has already convicted Intel for it.

To all editors of all tech/news sites everywhere:
Guys, lose the ego. You can't be an expert in every field, so nobody expects you to fully understand everything tech-related. It's okay to consult people who are experts in a specific field.
In fact, I think it is your RESPONSIBILITY to have your stuff checked by others who can verify the technical details in your article BEFORE you post it online.
Now if there need to be any lawsuits, I think this is where they belong. Far too many websites just throw rumours and lies around, and sling mud at various large companies (the companies people love to hate, such as Microsoft, nVidia, Intel).

Edited 2010-01-05 09:08 UTC

Reply Score: 3

v Don't like it don't use it
by bile on Sun 3rd Jan 2010 21:06 UTC
RE: Don't like it don't use it
by WereCatf on Sun 3rd Jan 2010 21:13 UTC in reply to "Don't like it don't use it"
WereCatf Member since:
2006-02-15

If you don't like the code generated by the Intel compiler... don't use it. Why should they be forced to pay attention to competitor's products and make *their* compiler compatible with them unless ICC customers demand it? Using the most generic path is the only practical option when not knowing the specifics of the architecture.

You're totally off base here. First of all, Intel themselves market ICC as being compatible with AMD and VIA processors. Secondly, even if the compiler didn't do any kind of architecture-specific optimizations, it could still choose the most appropriate path based on the CPU's reported capabilities, e.g. if it supports SSE3, choose the path which uses SSE3. ICC intentionally chooses the slowest path for anything other than GenuineIntel processors, even when the CPU reports its capabilities correctly.

Marketing it as a completely compatible compiler and then pulling off such tricks actually IS anti-competitive.

Edited 2010-01-03 21:15 UTC

Reply Score: 9

tylerdurden Member since:
2009-03-17

No such thing. Intel markets their compiler as being compatible with the x86 and EM64T (and IA-64) instruction sets of Intel processors. Have you even used icc?

Technically they still produce compatible code. Nowhere in Intel's marketing/literature do they claim to produce code optimized for AMD microarchitectures.

The continual moving of the goal posts in order to fit some narratives can be fascinating.

Intel develops compilers for Intel processors. Is that such a hard thing to comprehend? Or is there some sort of entitlement on your part that would bind Intel to spend money and effort to schedule instructions for a competitor's part?

Such attitudes are even more ridiculous if you consider that there is a perfectly viable (and in most cases quite competitive) alternative like gcc, which is completely free. Good grief....

Edited 2010-01-03 22:51 UTC

Reply Score: 1

bert64 Member since:
2007-04-23

If they developed a compiler that produced code optimized for Intel CPUs, but which would execute exactly the same code on compatible non-Intel CPUs, that would be fine.

An example would be a version of gcc where the CPU type options for non-Intel CPUs have been removed.

What the article talks about, and what people have a problem with, is the fact that the Intel compiler intentionally chooses a less optimal approach when dealing with non-Intel CPUs.

Reply Score: 5

drahca Member since:
2006-02-23

Intel only guarantees to produce optimized code for intel microarchitectures.


Where is this mentioned explicitly in the compiler documentation? I am not saying it is not there, just that I have not found it.

In their product brief they have this quote: "The Intel® compiler generated faster code than other compilers for most of our tests on both IA-32 and x86_64 platforms, which helps us deliver the performance our customers demand."

See how it mentions the x86_64 platform and not Intel EM64T processors only? Granted, this quote is not by an Intel employee, but it is in their marketing material!

For starters, AMD will most likely refuse to share a lot of internal microarchitectural info with Intel.


Why is this relevant? They do not need to! Using the right ISA extensions is all developers are asking for! You are deliberately twisting the facts here. Intel does not have to invest anything, because code generated for Intel processors which simply uses the right ISA extensions would do. Nobody is asking for microarchitectural optimizations, just that they use the right ISA extensions.

Heck, part of the reason why GCC will always lag in certain performance scheduling is because they don't have access to the same privileged internal information that intel themselves have about their microarchitectures.


GCC has very different problems than this.

All I see is a bunch of comments by people who most likely have never used ICC in a production environment.


Yeah right, the "everyone is a moron except me" mentality. I have used the ICC compiler in commercial products (care to find me another decent FORTRAN compiler?) and I really do not get why customers are standing for this. It is not as if the compiler is free or anything. Intel is locking their compiler customers into their architecture. Recompiling older software is mostly something to be avoided, so when system upgrades get discussed, guess which platform scores best?

Is it a douche move, probably, but nowhere does intel claim to produce optimized instruction schedulings for non-intel microarchitectures. Note that I used the term microarchitecture, not ISA, I assume a lot of posters in this site don't fully get the difference between the two.


I understand the difference quite well and I am not asking for optimized instruction scheduling on AMD processors, just that they use the right friggin ISA extensions.

As I said, nobody is stopping AMD from producing their own compiler optimization (although I am aware they provided a lot of support to the GCC folks) suite, or paying intel to support their architectures in ICC.


That would be great. An AMD compiler for AMD processors and an Intel compiler for Intel processors. Guess x86 is not such a great "standard" after all. The lesson to be learned here is do not use the Intel compiler and support GCC and LLVM. I have a very high regard for Intel's technological achievements but I despise their business tactics.

Reply Score: 3

RE[6]: Don't like it don't use it
by Scali on Mon 4th Jan 2010 11:53 UTC in reply to "RE[5]: Don't like it don't use it"
Scali Member since:
2010-01-03

"Intel only guarantees to produce optimized code for intel microarchitectures.
Where is this mentioned explicitly in the compiler documentation? "

In the product brief, see my first post in this thread.
They specifically mention Intel platforms.

Reply Score: 1

RE[6]: Don't like it don't use it
by Scali on Mon 4th Jan 2010 12:17 UTC in reply to "RE[5]: Don't like it don't use it"
Scali Member since:
2010-01-03

Granted this quote is not by an Intel employee but it is in their marketing material!


So a client not mentioning Intel nor AMD, but just IA-32/x86-64, is your argument?
Not very convincing. It could still be in the context of Intel CPUs only (which it most probably is).

Guess x86 is not such a great "standard" after all.


Not in the way you think it is.
x86 standardizes the ISA, but there are tons of completely different x86-compatible microarchitectures.
You can't optimize code 'for x86', you can only optimize for a specific microarchitecture, as the x86 ISA doesn't say anything about how instructions are supposed to be implemented, let alone how they should perform.
There are plenty of examples of instructions/operations that are very fast on one x86 march, but very slow on another.

Reply Score: 1

drahca Member since:
2006-02-23

There are plenty of examples of instructions/operations that are very fast on one x86 march, but very slow on another.


I am very much aware of these problems, and this is exactly one of the problems of x86. There are so many ways to do exactly the same thing, all with different performance characteristics, that in order to get good performance out of x86 you have to optimize for a specific microarchitecture much more than you have to on some other ISAs. For example, Core2 is much better at read-modify-write instructions than K8/9/10, which do better with a load/store approach.

But again, this is not what I was referring to. If a CPU reports to have SSE3, then use it. I do not think Intel should optimize their code generator for AMD CPUs but their CPU dispatcher should not look at vendor string but use the available ISA extensions as reported by the CPU.

Reply Score: 1

RE[3]: Don't like it don't use it
by cycoj on Mon 4th Jan 2010 03:23 UTC in reply to "RE[2]: Don't like it don't use it"
cycoj Member since:
2007-11-04

Please do us all a favour and read the linked article. The article clearly states that Intel deliberately disables the fast code paths on non-GenuineIntel CPUs. So they disable SSE2, SSE3... although these CPUs are perfectly capable of them, and have the respective flags set. Also, with respect to Intel only making a compiler for Intel processors:
from http://software.intel.com/en-us/intel-compilers/ :


Intel® Professional Edition Compilers include advanced optimization features, multithreading capabilities, and support for Intel® processors and compatible processors. They also provide highly optimized performance libraries for creating multithreaded applications.

(my emphasis).

So yes Intel is claiming that the compiler is compatible with AMD and VIA processors. And that does not mean deliberately crippled.

Reply Score: 7

eriwik Member since:
2009-07-31

"Please do us all a favour and read the linked article. The article clearly states that Intel deliberately disables the fast code paths on non-GenuineIntel CPUs. So they disable SSE2, SSE3... although these CPUs are perfectly capable of them, and have the respective flags set. Also, with respect to Intel only making a compiler for Intel processors:
from http://software.intel.com/en-us/intel-compilers/ :

Intel® Professional Edition Compilers include advanced optimization features, multithreading capabilities, and support for Intel® processors and compatible processors. They also provide highly optimized performance libraries for creating multithreaded applications.

(my emphasis).

So yes Intel is claiming that the compiler is compatible with AMD and VIA processors. And that does not mean deliberately crippled."

It is compatible: the code generated runs on compatible processors. What they do not claim, however, is that the compiler will optimise for those processors.

Reply Score: 1

bannor99 Member since:
2005-09-15

The following snippet is taken from http://software.intel.com/en-us/articles/intel-c-compiler-111-relea...

Click "English" under Intel C++ Compiler Professional for Windows to get the PDF

1.3 System Requirements
For an explanation of architecture names, see http://software.intel.com/en-us/articles/intel-architecture-platform-terminology/
A PC based on an IA-32 or Intel® 64 architecture processor supporting the Intel® Streaming SIMD Extensions 2 (Intel® SSE2) instructions (Intel® Pentium® 4 processor or later, or compatible non-Intel processor), or based on an IA-64 architecture (Intel® Itanium®) processor
o For the best experience, a multi-core or multi-processor system is recommended

Notice where it says "compatible, non-Intel processor"?
Compatible means supporting the instruction set, so the only check the compiler should make is for the existence of (sets of) instructions.
An intentional check for the CPU manufacturer is discriminatory unless it's a workaround for a known bug; hey, like that famous Pentium bug.
But why stop there? How about not enabling advanced features for non-Intel NICs and WLAN adapters? And what about SSDs? No need for the non-Intel ones to operate optimally, right?

Reply Score: 1

RE[4]: Don't like it don't use it
by Scali on Tue 5th Jan 2010 08:30 UTC in reply to "RE[3]: Don't like it don't use it"
Scali Member since:
2010-01-03

Notice where it says "compatible, non-Intel processor"?


Notice that these are the system requirements? That's what it takes to RUN the Intel Compiler, not the architectures it TARGETS.
Intel is perfectly right in stating that their compiler works on non-Intel CPUs.

Reply Score: 2

RE[3]: Don't like it don't use it
by mkone on Fri 8th Jan 2010 01:30 UTC in reply to "RE[2]: Don't like it don't use it"
mkone Member since:
2006-03-14

No such thing. Intel markets their compiler as being compatible with the X86 and EMT64 (and IA64) instruction sets from Intel processors. Have you even used icc?

...

Such attitudes are even the more ridiculous, if you consider that there is a perfectly viable (and in most cases quite competitive) alternative like gcc which is completely free. Good grief....


The problem is that if a company, say Oracle for example, uses Intel's compiler to create one set of binaries, these binaries will run slower on AMD machines than on Intel's. It is in companies' interest to produce one set of binaries that runs on all platforms, and Intel is now using their monopoly position to ensure that those binaries are crippled when run on AMD machines, by providing a faulty compiler.

This is like Microsoft releasing a version of Windows on which a previous product won't run. Wait, did anyone say Lotus?

One could argue Microsoft doesn't have to make sure that Lotus can run on its operating system, but once Microsoft is a monopoly, the rules change. And so they do for Intel.

Reply Score: 1

RE[4]: Don't like it don't use it
by Scali on Fri 8th Jan 2010 09:10 UTC in reply to "RE[3]: Don't like it don't use it"
Scali Member since:
2010-01-03

It is in companies' interest to produce one set of binaries that run on all platform, and Intel is now using their monopoly position to ensure that those binaries are crippled when run on AMD machines by providing a faulty compiler.


Nope, because Intel does not have a monopoly on the compiler market.
I don't think you can ever really have a monopoly on a compiler anyway.
With an OS or CPU it's different. An application for Windows/x86 simply won't run on other systems.
With a compiler... it doesn't matter. Whether I use the Intel compiler, Microsoft compiler, gcc, or some other option, as long as it generates code for the intended OS and CPU architecture, it will work.
So you can't have any kind of lock-in with compilers. Even if the rest of the world uses the Intel compiler, nothing would stop me from using gcc.
That's different from an OS or a CPU, where you will be locked out of software that isn't compatible.

Reply Score: 1

mabhatter Member since:
2005-07-17

Most importantly, AMD and VIA are LICENSED to use the SSE2/3 etc. technology from Intel, and design their processors specifically to perform well with the ICC-generated code that the largest software vendors use. Intel is basically reneging on its own cross-license agreements for the technology by sabotaging the performance of other companies' products.

Reply Score: 5

RE: Don't like it don't use it
by Praxis on Sun 3rd Jan 2010 21:27 UTC in reply to "Don't like it don't use it"
Praxis Member since:
2009-09-17

Well, there are two major issues that make this wrong. First, Intel isn't telling its customers that their compiler deliberately nerfs non-Intel performance; this is most certainly ethically wrong and may violate some consumer rights, as you do have a right not to be lied to by corporations. Second, Intel is a monopoly under anti-trust investigation. Monopolies are held to higher ethical standards so that they do not use their power to squash competition by means other than legitimate competition, and doing a vendor ID check in their compiler would fall under that kind of violation. An x86 compiler should be an x86 compiler: if their chips are better, they shouldn't have to cripple their competitors' to have an advantage.

If Intel had been honest from the beginning this may not have been an issue, but since they lied, and they are currently under anti-trust charges, forcing them to stop vendor string checking in their compiler would be a reasonable part of their settlement.

Reply Score: 3

tylerdurden Member since:
2009-03-17

Where, please tell us, where does Intel claim (or have they ever claimed) to produce optimized code for AMD processors with icc?

Reply Score: 2

cerbie Member since:
2006-01-02

If the same code happened to run slower on an AMD CPU, that would not be a problem.

What is happening is that they are giving the non-Intel chip different code to run. This is very different from just not optimizing for non-Intel platforms.

Reply Score: 2

RE[4]: Don't like it don't use it
by Scali on Mon 4th Jan 2010 14:06 UTC in reply to "RE[3]: Don't like it don't use it"
Scali Member since:
2010-01-03

What is happening is that they are giving the non-Intel chip different code to run. This is very different from just not optimizing for non-Intel platforms.


That is not entirely true.
*Certain* Intel CPUs will also run that same code.
It's not code that is *specifically* there to spite AMD.
It's just code that is there as a fallback for when the CPU is not recognized (which could just as well be an older/newer/unsupported Intel CPU).

A properly commented disassembly of Intel's CPU dispatcher should show you how it works.

Reply Score: 0

cerbie Member since:
2006-01-02

...certain Intel CPUs 10 years old, generally, that do not support the features.

That's a good excuse for having fallback code paths, but not for checking for GenuineIntel to see whether a faster one should be used.

Reply Score: 2

RE[6]: Don't like it don't use it
by Scali on Tue 5th Jan 2010 10:25 UTC in reply to "RE[5]: Don't like it don't use it"
Scali Member since:
2010-01-03

...certain Intel CPUs 10 years old, generally, that do not support the features.


Or vice versa... new Intel CPUs running code compiled with an Intel Compiler that didn't support it.
It's not about the features, but about whether the compiler recognizes the particular CPU or not.
If it doesn't recognize the CPU, it can make no decision about which codepath would be most optimal. It's that simple.
Obviously it doesn't recognize non-Intel CPUs by default.
And obviously Intel tries its best to:
1) Recognize as many Intel CPUs as possible with compiled code
2) Make sure that new CPUs remain recognizable.

It's very simple, really.
No matter how badly some of you WANT the Intel compiler to check features, this is not what it DOES. Never has, never will.
Checking features and checking microarchitectures are two different concepts.
Eg, PPro, PII and PIII share the same basic microarchitecture, but NOT the same features.
Conversely, Pentium D and Core2 Duo share (nearly) the same features, but NOT the same microarchitecture.
Code optimized for PPro will be optimal for PII and PIII as well, although in some cases you may be able to use newer extensions.
Code optimized for Pentium D will NOT be optimal for Core2 Duo, and vice versa. In fact, Pentium D has quite a few performance hazards that a Core2 Duo doesn't, so not avoiding those hazards (as in code optimized for Core2 Duo) will cripple a Pentium D. Indeed, for most of its life, the Pentium 4/D was crippled by having to run code that was compiled for PPro/II/III.

Simple, isn't it? Repeat after me: it's not about features.

Edited 2010-01-05 10:41 UTC

Reply Score: 1

slight Member since:
2006-09-10

Stop using the 'optimisation' straw man.

People have repeatedly said here that the issue is the ISA extensions being intentionally disabled. Either you can't read or you're deliberately misrepresenting people's arguments.

Reply Score: 3

RE: Don't like it don't use it
by ngaio on Sun 3rd Jan 2010 21:30 UTC in reply to "Don't like it don't use it"
ngaio Member since:
2005-10-06

Why should they be forced to pay attention to competitor's products and make *their* compiler compatible with them unless ICC customers demand it?


Your analysis seems to assume that an extreme form of selfishness is good for society and good for Intel. Fortunately for the rest of us, very few people make this assumption.

Reply Score: 8

tylerdurden Member since:
2009-03-17

And your reply constitutes a massive red herring, and it is equally as invalid.


Since when has there been such a level of entitlement regarding compilers? Do you go around trashing The Portland Group because their compiler does not produce the code you feel entitled to for your processor (even though you haven't paid for their products)? Should we trash IBM because their XL compiler suite does not produce optimal code for the latest embedded core from Freescale?

Reply Score: 0

RE[3]: Don't like it don't use it
by ngaio on Sun 3rd Jan 2010 23:40 UTC in reply to "RE[2]: Don't like it don't use it"
ngaio Member since:
2005-10-06

And your reply constitutes a massive red herring, and it is equally as invalid.


It's only a "massive red herring" if you happen to agree with the underlying philosophical assumptions and the social implications of behaving so selfishly. The rest of us are happy to see constraints on such behavior in the form of laws, social pressures, etc.

Using the analogy of an ecosystem, Intel acted as if they were dumping their pollution into another country's river system.

This kind of practice in the software and hardware industry should be both illegal and socially unacceptable no matter who does it.

Reply Score: 9

RE: Don't like it don't use it
by j.blechert on Sun 3rd Jan 2010 23:16 UTC in reply to "Don't like it don't use it"
j.blechert Member since:
2006-01-04

disclaimer: I am not a programmer.
But in the article there is a link to a benchmark (PCMark 2005), apparently compiled with icc. As far as I got it, they tested VIA's Nano against Intel's Atom, and the funny thing is that in various tests, after changing the CPUID on the Nano, the performance gain was up to about 50%, with no change to the compiler or compiled software whatsoever; simply by telling the dispatcher to use a path that was optimized for SSE3 and whatnot.
As far as I know those things are standards, and if AMD didn't implement them correctly, they wouldn't be able to report the capability of using SSE3. The argument was that in spite of reporting those capabilities correctly, with no additional work needed from the compiler, it would choose based on the CPUID rather than the actual capabilities; resulting in Intel having to tweak their CPUID to get the best performance with software compiled on earlier ICC versions. Again, no recompiling necessary: the code is all there, but which path gets executed is chosen based on the CPUID, not the actual capabilities of the CPU (there are flags for those too, as you should know).

Reply Score: 3

jabbotts Member since:
2007-09-06

The issue is not some oddity within the non-intel processors which Intel should be forced to recognize.

The issue is Intel intentionally writing icc so that it introduces incompatibility into the resulting binary, so that it runs worse on other processors.

Think of a company that sells coffee makers. They also produce coffee and recommend its use with their own makers, of course (nothing wrong so far). They add a chemical identifier to their own coffee which can be recognized by the various models of maker the company sells. One can run any brand of coffee grounds through the maker, but it's designed so that it introduces a health risk into competing brands. You want a refill on that cup of joe?

(edit): my analogy was a little off. It would be like the coffee maker intentionally taking twenty minutes longer to brew the grounds based on them being a competing brand.

Intel should just fix icc and make the issue AMD/VIA not supporting the optimized code, rather than this "we get the fast path, they get the slow path" crap.

Edited 2010-01-03 23:37 UTC

Reply Score: 2

Scali Member since:
2010-01-03

(edit): my analogy was a little off. It would be like the coffee maker intentionally taking twenty minutes longer to brew through grounds based on them being competitive brands.


Still wrong.
I think what you're looking for is something like this:
Last year's models of coffee makers took 20 minutes to brew.
The new models have a special 'turbo' mode, which cuts the brewing time down to 10 minutes.
Competing brands have also added a similar 'turbo' mode. However, the coffee of this particular brand will only recognize the 'turbo' mode of their own brand of coffee makers, and other brands, like older models of their own brand, will only use the standard 20 minute mode.

There is a difference between 'not enabling' and 'disabling'.
The former is NOT taking action, and the latter is explicitly TAKING action.
Intel only checks to see if the CPUs are their own brand, and then selects an optimized path. That's different from checking to see if they are any other brand, and specifically selecting a crippled path.

Edited 2010-01-03 23:49 UTC

Reply Score: 2

jabbotts Member since:
2007-09-06

A fair enough analogy adjustment. I'm learning as I go here and open to correction.

Reply Score: 2

cycoj Member since:
2007-11-04

"(edit): my analogy was a little off. It would be like the coffee maker intentionally taking twenty minutes longer to brew through grounds based on them being competitive brands.


Still wrong.
I think what you're looking for is something like this:
Last year's models of coffee makers took 20 minutes to brew.
The new models have a special 'turbo' mode, which cuts the brewing time down to 10 minutes.
Competing brands have also added a similar 'turbo' mode. However, the coffee of this particular brand will only recognize the 'turbo' mode of their own brand of coffee makers, and other brands, like older models of their own brand, will only use the standard 20 minute mode.

There is a difference between 'not enabling' and 'disabling'.
The former is NOT taking action, and the latter is explicitly TAKING action.
Intel only checks to see if the CPUs are their own brand, and then selects an optimized path. That's different from checking to see if they are any other brand, and specifically selecting a crippled path.
"

What you are missing is that SSE etc. are standards, and the processor has to advertise its abilities, but Intel is deliberately ignoring them unless the processor is an Intel one. So in your analogy it would be:
All coffee maker companies had agreed on a TURBO standard (actually TURBO1, TURBO2, ...) and on how the coffee tells the machine which of the TURBO standards it supports. Now the Intel coffee maker checks which TURBO standard the coffee supports (it needs to do that for its own coffees too, because some of them only support TURBO1 but not TURBO2); however, it also checks the vendor string, and only if the coffee is Intel coffee does it actually enable the TURBO standards.
Now, if I bought that machine and it told me it supports the TURBO standard, but it would only brew coffee in under 20 minutes with its own brand of coffee, I'd be seriously pissed off, because they have clearly been lying.

Reply Score: 2

Scali Member since:
2010-01-03

What you are missing is that SSE etc. are standards and the processor has to advertise its abilities, but Intel is deliberately ignoring them unless the processor is an Intel one. So in your analogy it would be: All coffee maker companies had agreed on a TURBO (actually TURBO1,TURBO2 ... ) standard and how the coffee tells the machine which of the TURBO standards they support. Now the Intel coffee makers checks which TURBO standard the coffee supports (note it also needs to do that for its own coffees because some of them only support TURBO1 but not TURBO2), however it also checks if the vendor string and only if the coffee is Intel coffee does it actually enable the TURBO standards. Now if I bought that machine and it told me it supports coffees and the TURBO standard but it would only cook coffee at a time <20min with it's own coffee I'd be seriously pissed off because they clearly have been lying.


I'm not missing anything. I think most people here are missing what the goal of the Intel compiler is. The goal is not to deliver 'standard' code that runs 'reasonably well' on all x86-compatible architectures.
The goal is to generate the most optimum code for Intel processors.

Reply Score: 2

r_a_trip Member since:
2005-07-06

The goal is to generate the most optimum code for Intel processors.

Which would be fine if Intel existed in a vacuum, but Intel has licensed its ISA and extensions to others.

It would be nice if Intel would/could look beyond their own lawn and see that well-performing code on non-Intel processors (which support the right instruction set) is beneficial to the x86 ecosystem (and, by extension, good for Intel itself).

Right now, they let their compiler collection create binaries which only run at their maximal potential on Intel processors, and as such they burden end users of non-Intel CPUs with extremely generic code. Only a problem with proprietary code you can't touch yourself, but still.

The other way around: I wonder how fast Intel would cry foul if GCC and LLVM started checking for AuthenticAMD and CentaurHauls and, based on that, used the extensions on AMD and VIA CPUs while serving only lowest-common-denominator code to Intel...

Reply Score: 2

Scali Member since:
2010-01-03

Which would be fine if Intel existed in a vacuum, but Intel has licensed its ISA and extensions to others.


Intel was FORCED to license its ISA. It wasn't and isn't an action they support.

It would be nice if Intel would/could look beyond their own lawn and see that good performing code on non-Intel processors (who support the right instruction set) is beneficial to the x86 ecosystem (and in extension good for Intel itself).


Yea, it would be nice if Microsoft also made linux ports of all their applications and libraries.
Obviously that is not going to happen.

Right now, they let their compiler collection create binaries which only run at maximal potential on Intel Processors and as such they burden end-users of non-Intel procs with extremely generic code. Only a problem with proprietary non-touch code, but still.


What is the problem with that? The Intel compiler and optimized libraries are mainly aimed at scientific research, where hopefully people are smart enough to only use these products on Intel systems.
If end users are burdened because someone uses the Intel compiler for a commercial product that is to be sold to AMD users as well, then it's the fault of whoever made the choice to use the Intel compiler rather than a neutral compiler.

It sounds like most people here try to convict the storekeeper for selling the rope that someone used to hang himself.

Edited 2010-01-05 19:43 UTC

Reply Score: 1

A new low for Intel
by Morgan on Sun 3rd Jan 2010 21:25 UTC
Morgan
Member since:
2005-06-29

Wow. Just, wow. Normally I wouldn't even bother with commenting on an article like this as I'm not a programmer. However, this affects everyone who has ever used a program compiled with ICC on a non-Intel platform. As the majority of my non-Mac systems (past and present) have been AMD, this does indeed affect me.

This is really no different than a jockey drugging the food supply of his rivals' horses. I think Intel should be punished much more severely than it seems they will be.

Honestly Intel, if you have to resort to actively hobbling your competitors, what does that really say about your confidence in your own products?

Reply Score: 8

RE: A new low for Intel
by Rugxulo on Sun 3rd Jan 2010 21:29 UTC in reply to "A new low for Intel"
Rugxulo Member since:
2007-10-09

The only problem here is that it's hard to be too outraged. Everybody's known about this flaw for years. Yeah, it's dumb, and I can't think of a good reason for it (besides sloppiness or laziness). And yeah they should want to fix it, it's just too stupid to leave in. But if you think nobody knew about this forever, you're mistaken. Some bugs (even the silly but annoying ones) take forever to get fixed, and that's IF they ever get fixed!

Reply Score: 1

RE[2]: A new low for Intel
by Morgan on Sun 3rd Jan 2010 21:36 UTC in reply to "RE: A new low for Intel"
Morgan Member since:
2005-06-29

I didn't know about it until now. I'm not a programmer, so I don't always read about things like this unless they show up in my news feed (as this did today). And I realize that this is a known issue for a lot of people.

But I'm still going to be outraged, whether I have your permission or not. It's a shitty thing for Intel to do, and a slap in the face to people like me.

Tell me the truth: If your spouse had been cheating on you for years, all your co-workers knew about it but somehow you never found out until today, would you honestly say "oh well, I can't be mad since I didn't know about it from day one"? That's just silly.

Reply Score: 5

hackus
Member since:
2006-06-28

I am for what the subject says.

This is the last straw, however. For many years I rationalized that the year-to-year performance increases from iNTEL offset its monopoly position.

This is just another example of GREED in a time when many people are already sick and tired of greedy bosses, greedy companies like iNTEL ripping them off, greedy bankers stealing little old ladies' pensions, and a government that not only looks the other way, but encourages the practice with cash rewards larger than the GDP of some countries.

What we need is a complete revolution in the areas of government, business and scientific research that adopts a GPL mantra of some sort. Open practices; peer-reviewed government, technology and business.

That being said, iNTEL is a large company. Whatever manager enforced this despicable practice should be blackballed, and never permitted to work in the computing industry again.

-Hack

Reply Score: 3

jabbotts Member since:
2007-09-06

Sadly, the guy who kept this bug in place for so many years would be snapped up by the next employer rather than blackballed after being fired from Intel. Think of it from the business side and tell me Corp2 isn't interested in a staffer from Corp1 who managed to keep a known flaw in place for so long. I'd be hard pressed not to pat that guy on the back if it were my company, and I pretty much see everything from the point of view of what benefits the end user rather than the shareholder.

Nope, the more likely outcome of such a person existing and being fired would be: "What, they let that guy go? Someone get me a conference call with him, HR, and me."

Reply Score: 2

Scali
Member since:
2010-01-03

It's right there in the product brief:
http://software.intel.com/sites/products/collateral/hpc/compilers/c...

"Each compiler delivers advanced capabilities for development of application parallelism and winning performance for the full range of Intel® processor-based platforms."

Not a word in the product brief about non-Intel CPUs, or other x86-compatibles or anything.

Reply Score: 4

WereCatf Member since:
2006-02-15

"and support for Intel® processors and compatible processors."

See the "compatible processors"? Taken straight from their own website, in the freaking front page: http://software.intel.com/en-us/intel-compilers/

The fact is, they do advertise it and sell it as a compiler for Intel and compatible processors, and of course software developers want a compiler which produces an executable which works equally well on all processors so they don't have to distribute several copies, one for each CPU manufacturer.

Intel has brought this all on themselves; if they'd clearly said that ICC has performance issues on anything other than Intel processors, then developers would have known about it and either chosen another compiler which produces acceptable performance across all CPUs, or opted to use two compilers and distribute two binaries. Now those developers who have bought ICC and used it to compile their software will have to recompile it all and somehow distribute the new, fixed binaries to their customers. That's a lot of unneeded hassle, and the bigger the product, the costlier it is to recompile.

Reply Score: 8

mrhasbean Member since:
2006-04-03

Well, the code does work on non-Intel CPUs. But Intel makes no claims about how optimized the code is for these CPUs.


Very true. Are people expecting Intel to know or work out the optimised code paths for processors they don't manufacture?

I don't know the details of this because I don't use the product, but reading about the methodology employed, it seems this is nothing more than the lowest-common-denominator method, which is an accepted way of making sure something works when you don't know the optimal method. Have you ever followed a "take apart" guide to remove a specific component that in fact had you pulling the whole thing apart when you really only wanted that one component? You got to the end and thought, "Damn, I didn't need to take half that crap out!" But it still delivered the component you wanted, didn't it?

Unless Intel used the words "Optimised for..." or something similar when referring to processors from other manufacturers, they really shouldn't have a case to answer. On the other hand, if they did make such claims, then the outcome has been appropriate.

Reply Score: 1

tylerdurden Member since:
2009-03-17

Agreed. I have used both ICC and GCC.

It is not like Intel is inserting one billion no-ops if they detect an AMD processor. They simply disable some of the optimizations because they have no clue about the micro details (or they don't want to bother) of a processor they do not manufacture.

That is why we use math kernels from AMD in AMD processors. I don't see anyone freaking out because AMD does not optimize their math libraries for Intel processors. What a concept, eh?

Reply Score: 1

Scali Member since:
2010-01-03

It is not like Intel is inserting one billion no-ops if they detect an AMD processor. They simply disable some of the optimizations because they have no clue about the micro details (or they don't want to bother) of a processor they do not manufacture.


I'll go even further than that...
In theory, enabling a certain optimized codepath without any regard to the underlying microarchitecture could actually result in worse performance rather than better performance.

I've seen it many times myself. Not just Intel vs AMD, but especially in the days of 486 -> Pentium, Pentium -> P6 and P6 -> P4... code that was optimal for one microarchitecture could be disastrous for another one.
Which is why many compilers support the concept of 'blended' code. Obviously that is not a goal for Intel's compiler. It has a very specific goal: generate the most optimal code for specific Intel microarchitectures.

There is just no way for Intel to win this. People will just claim that it's crippling AMD when it selects a different codepath, and it turns out to be suboptimal anyway.
The only way for Intel to win is to support AMD's microarchitectures specifically, but obviously that is not going to happen.

Edited 2010-01-03 23:35 UTC

Reply Score: 1

bannor99 Member since:
2005-09-15

Don't drag "microarchitecture" into this - it's about INSTRUCTION SETS.
Need I point out that IT WAS AMD who first extended the x86 arch to 64 bits while Intel was still drinking the Itanic Kool-Aid? Let's see them change the compiler to not optimize for any AMD-compatible instructions - won't that be a laugh.

Let me spell it out for you: all modern x86 CPUs perform a sleight of hand, as each x86 instruction gets broken into one or more smaller RISC-like operations.
So, since Intel has no insight into the "microarchitecture" of AMD's CPUs, how is it that ICC can create code for non-Intel processors at all?
I (and many, many others) have said it before, and it bears repeating: the only checking should be for instruction sets. If the CPU says "I do SSE3", then the compiler shouldn't need to know whether it's GenuineIntel or AuthenticAMD - unless it's a workaround for a KNOWN FLAW, and I'm comfortable with Intel not changing their compiler to accommodate AMD bugs.

Reply Score: 1

Scali Member since:
2010-01-03

Don't drag "microarchitecture" into this - it's about INSTRUCTION SETS. Need I point out that IT WAS AMD who first extended the x86 arch to 64-bits when Intel was still drinking the Itanic KoolAid? Let's see that change the compiler to not optimize for any AMD-compatible instructions - won't that be a laugh. Let me spell it out for you - all modern x86 CPUs are performing sleight of hand as a single x86 instruction gets broken into 1 or more smaller RISC-like operands. So, since Intel has no insight into the "microarchitecture" of AMD's CPUs, then how is it that ICC can create code for non-Intel procs at all? I ( and many, many others ) have said it before and it bears repeating - the only checking should be for instruction sets. If the CPU says "I do SSE3" then the compiler shouldn't need to know if it's GenuineIntel or AuthenticAMD - unless it's a workaround for a KNOWN FLAW and I'm comfortable with Intel not changing their compiler to accomodate AMD bugs.


The whole argument is useless, as Intel doesn't support its OWN CPUs any more than non-Intel ones unless it recognizes the family info.
Non-Intel CPUs are NOT a special case.
The issue here is that you, and all the other people crying 'unfair', WANT the Intel compiler to be about instruction sets and extensions, but it ISN'T. That's a simple fact.
I'm not saying that's how I think it SHOULD be, but it is how it is. Even the original Agner Fog article explains that.

Edited 2010-01-05 08:45 UTC

Reply Score: 1

Thom_Holwerda Member since:
2005-06-29

Very true. Are people expecting Intel to know or work out the optimised code paths for processors they don't manufacture?


This is different. This is Intel SPECIFICALLY writing ADDITIONAL code to REDUCE performance for AMD processors. This is not a bug or a case of "we don't understand AMD processors" - this is something discussed by management, passed on to the team lead, and executed by actual, code-producing developers.

And that's not something Intel can get away with - in the same way we would scream bloody murder if Microsoft intentionally crippled Firefox's performance on Windows.

Reply Score: 9

Scali Member since:
2010-01-03

This is different. This is Intel SPECIFICALLY writing ADDITIONAL code to REDUCE performance for AMD processors.


No it isn't. You make it sound like Intel went out of their way to add an extra "cripple AMD" codepath, and specifically select that path ONLY if non-Intel CPUs are present.
AMD CPUs just run the same non-SSE codepath as older Intel CPUs without SSE extensions would.

Now let's discuss whether people who are editors on reasonably large tech sites such as 'osnews.com' should be able to get away with posting such false accusations...
Really, no offense... but as an editor of this site, I think you have a responsibility to check your facts a bit better, and not be so quick to throw accusations around. Before you know it, it is copied by thousands of other websites.
You study journalism... you are familiar with the concept of 'hoor en wederhoor' (Dutch for hearing both sides of a story)?

Edited 2010-01-03 23:56 UTC

Reply Score: 0

Thom_Holwerda Member since:
2005-06-29

Uhm...

"The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."


Got it?

In other words, it checks who the CPU is from, and then deliberately chooses the slowest code path, even though more optimal ones could be used. You HAVE to code this INTO the program, and as such, this was A DECISION.

Look, read the original article before accusing me of lying.

Edited 2010-01-03 23:57 UTC

Reply Score: 4

Scali Member since:
2010-01-03

Uhm...

""The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."


Got it?

In other words, it checks from whom the CPU is, and then deliberately chooses the slowest code path, even though more optimal ones can be used. You HAVE to code this INTO the program, and as such, this has been A DECISION.

Look, read the original article before accusing me of lying.
"

I got it, but I don't think you did.
The CPU dispatcher can only dispatch code paths that are compiled into the binary. (Obviously there is always a CPU dispatcher, as it needs to prevent CPUs from running code they don't support... e.g., not all Intel CPUs support SSE4 yet, so they need a fallback to SSE3 etc., and a Pentium 4 needs different optimizations than a Core 2 Duo because of a massively different microarchitecture.)
You claim that there is an additional 'cripple AMD' codepath compiled into the binary.
This is not true.
So my previous post stands. It will just pick one of the different Intel-optimized paths.

Thing is, 'slowest possible' and 'better version'... those claims are up for grabs. The compiler doesn't know anything about non-Intel microarchitectures in the first place. How slow the chosen path is, and how much better other paths would be, is strictly up to chance.
Sure, in most cases it will probably pan out that way... but it's not a deliberate action.
You need to step lightly in this sort of subject.

Edited 2010-01-04 00:05 UTC

Reply Score: 3

mrhasbean Member since:
2006-04-03

Uhm...

""The system includes a function that detects which type of CPU ... If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."


Got it?

In other words, it checks from whom the CPU is, and then deliberately chooses the slowest code path, even though more optimal ones can be used. You HAVE to code this INTO the program, and as such, this has been A DECISION.

Look, read the original article before accusing me of lying.
"

This is again a matter of interpretation (not compilation ;) ). Thom, what you see as a deliberate act of sabotage on Intel's behalf, I see as nothing more than the lowest-common-denominator method.

You will notice it says "most cases" (my bold in the quote). To me this means it checks whether it's dealing with an Intel processor and uses a more optimised path, because the Intel developers who wrote the compiler know for certain it will work; otherwise it resorts to the lowest-common-denominator code, i.e. the code they know will work on all compatible devices. Nothing more, nothing less. There may be certain situations where they knew particular optimisations work for other processors and therefore used them, hence the "most cases". If it were to use an optimised path that another manufacturer claims works with their processor, and it didn't work, who would people blame? I guarantee it would be "this stupid {expletive deleted} compiler!!!".

If this is truly what the case was about, I think Intel has been unfairly treated. If there are routines in the compiler that add superfluous code to purposely slow execution on other processors, that's a different matter, but I see no evidence of that here. And there's nothing stopping other manufacturers from writing their own compilers optimised only for their processors, just as there's nothing stopping media player and phone manufacturers from writing their own apps to fully support their devices...

Reply Score: 2

cycoj Member since:
2007-11-04

"Well, the code does work on non-Intel CPU. But Intel makes no claims about how optimized the code is for these CPUs.


Very true. Are people expecting Intel to know or work out the optimised code paths for processors they don't manufacture?

I don't know the details of this because I don't use the product but reading the methodology employed it seem this is nothing more than the lowest common denominator method, which is an accepted method for making sure something works if you don't know the optimal method. Have you ever followed a "take apart" guide to remove a specific component that in fact had you pulling the whole thing apart when you really only wanted to get that one component? You got to the end and thought "danm, I didn't need to take half that crap out!" But it still delivered you the component you wanted didn't it?
"

You guys all don't get it! SSE et al. are standards; there are specific ways compilers ask the CPU whether it supports them. For each of these, Intel checks whether the CPU supports it (they need to do that because their own CPUs don't all support the same instructions either). They then perform an additional check on the CPUID vendor string and disable the SSE etc. instructions. So they do not choose the lowest common denominator; they deliberately choose a suboptimal path.
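Those "specific ways" are the CPUID feature flags, which are architectural and vendor-neutral. The bit positions below are the real ones reported by CPUID leaf 1 (EAX=1); the helper functions and the sample register values in the test are made up for illustration:

```c
#include <stdint.h>

/* CPUID leaf 1 feature bits. Any x86 CPU, from any vendor, advertises
 * these instruction-set extensions in exactly the same bit positions. */
enum {
    EDX_SSE   = 1u << 25,  /* SSE:   EDX bit 25 */
    EDX_SSE2  = 1u << 26,  /* SSE2:  EDX bit 26 */
    ECX_SSE3  = 1u << 0,   /* SSE3:  ECX bit 0  */
    ECX_SSSE3 = 1u << 9,   /* SSSE3: ECX bit 9  */
};

/* Hypothetical helpers: decide support from the registers alone. */
int supports_sse2(uint32_t edx) { return (edx & EDX_SSE2) != 0; }
int supports_sse3(uint32_t ecx) { return (ecx & ECX_SSE3) != 0; }
```

A dispatcher that keyed off these bits alone would treat an AMD or VIA chip advertising SSE3 exactly like an Intel chip advertising SSE3, which is the whole complaint here.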


Unless Intel used the words "Optimised for..." or something similar when referring to processors from other manufacturers to describe their compiler they really shouldn't have a case to answer. On the other hand if they did make such claims then the outcome has been appropriate.


They don't say "optimised for ..." but they do say that they support SSE etc., and they do not say "supporting SSE only on Intel"

Reply Score: 1

Traumflug Member since:
2008-05-22

Are people expecting Intel to know or work out the optimised code paths for processors they don't manufacture?

Yes, they do and Yes, Intel knows how to do it.

Even Intel processors do not all share the same feature set, so the code has to check for CPU capabilities anyway. From a technical point of view, acting on the vendor string is just a waste.

Reply Score: 1

Scali Member since:
2010-01-03

From the technical point of view, acting on the vendor string is just a waste.


If it was JUST the vendor string...
But it's not.
They check for the CPU extensions (MMX, SSE1/2/3/4/etc), for the family, model, stepping, AND for the vendor.
If you don't check all three, you don't know exactly what CPU you're dealing with.
AMD uses the same family, model and stepping ranges as Intel, but obviously the CPUs are quite different.
So yes, in light of the other information, the vendor string is quite important.
If you look at the source code of my CPUInfo library (http://cpuinfo.sf.net), you'll see that I even have to check the vendor string for certain CPUID functions: the same index gives completely different results on Intel and AMD CPUs (and possibly VIA).
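For reference, the vendor string itself comes from CPUID leaf 0, which returns the 12 ASCII bytes split across EBX, EDX, ECX (in that order). A sketch of the reassembly, using the well-known register constants rather than a live CPUID call, and assuming a little-endian host:

```c
#include <stdint.h>
#include <string.h>

/* Reassemble the CPUID leaf 0 vendor string from its three registers.
 * On a little-endian host the bytes fall out in reading order. */
void vendor_string(uint32_t ebx, uint32_t edx, uint32_t ecx, char out[13]) {
    memcpy(out + 0, &ebx, 4);
    memcpy(out + 4, &edx, 4);
    memcpy(out + 8, &ecx, 4);
    out[12] = '\0';
}
```

Feeding in the constants an Intel part returns (EBX=0x756E6547, EDX=0x49656E69, ECX=0x6C65746E) yields "GenuineIntel"; AMD and VIA parts return their own constants for "AuthenticAMD" and "CentaurHauls".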

Reply Score: 1

MamiyaOtaru Member since:
2005-11-11

They check for the CPU extensions (MMX, SSE1/2/3/4/etc), for the family, model, stepping, AND for the vendor.

No, they don't. They check for vendor, and if vendor != Intel they don't check the other stuff at all. You are in a small minority in thinking this is acceptable. Astroturf is a crappy playing surface.

Reply Score: 2

Scali Member since:
2010-01-03

Yes they do.
Notice my use of the word "AND".
A certain codepath is only chosen when ALL the conditions for that codepath are met.
One of the conditions just happens to be "GenuineIntel", but it is not the ONLY condition. If this condition is met, other conditions are checked aswell, and if they are not met, you still get to the same default codepath that non-Intel CPUs also run. It's not a "cripple AMD" path. It's the "I don't know this particular microarchitecture"-path. Since it doesn't know all Intel microarchitectures either, certain CPUs that DO report "GenuineIntel" will still run that path.

And I resent the Astroturf implications.

Reply Score: 1

tylerdurden Member since:
2009-03-17

So basically, you are blaming Intel for your poor reading and comprehension skills? You have to play a very creative game of Twister to fit your narrative in this case, methinks.

BTW, if you had ever used icc you'd realize that none of the optimization flags even remotely claim to be targeted for amd microarchitectures.

There is a big difference between "supporting" an architecture and "optimizing" for that architecture. Oh, and if we're going to label Intel as the evil ones: guess what, I am sure AMD themselves would not provide many low-level details of their microarchitecture to Intel (as that would be a lot of privileged information which I am sure AMD does not feel like giving Intel for free).

Reply Score: 1

tyrione Member since:
2005-11-21

Clang now builds LLVM.

In a year, Clang will be complete and you won't see me using either Intel or GCC for compiling.

Reply Score: 5

mutantsushi Member since:
2006-08-18

The debate here is getting side-tracked from the real context of this topic.

The context is not that Intel is fraudulently selling a compiler which doesn't do what they claim. They could well advertise ICC as "The best x86 compiler that executes crap on AMD" - that isn't the issue.

The issue is that their ICC compiler product is being connected to their broader anti-competitive actions to protect their CPU monopoly. Opening up ICC is akin to the EU opening Windows to multiple browsers, or opening Windows' server protocols to all, or the pre-Bush US antitrust court proposing to simply break Microsoft up into separate companies as a remedy for an anti-competitive monopoly.

If they didn't have a monopoly, all these actions might not be illegal in and of themselves. But they DO have a pretty effective monopoly, and they have done many underhanded things to perpetuate it. Making ICC CPU-vendor-neutral is just ONE potential remedy for Intel's predatory monopolist behavior.

Reply Score: 2

jbettcher Member since:
2008-06-15

This is correct: they are marketing something and making false claims about it.

But inside the Microsoft/Linux world of PCs, how many people are actually making production-released products built with ICC?

That's all I want to know. While this is a shady thing for Intel to do, how big has the side effect of this deliberate bug actually been? I just think many people are blowing this way out of proportion. I thought this compiler was mainly used by institutions and research-type outfits.

Reply Score: 1

Scali Member since:
2010-01-03

If they didn't have a monopoly, all these actions may not be illegal in and of themselves.


But Intel does NOT have a monopoly in the compiler market. In fact, icc is pretty obscure. The problem is not with icc either, as it is perfectly valid to have an optimizing compiler for Intel-only systems. I mean, if you have a server park with Intel CPUs, and you compile your (open source) software to get the best possible performance, what's wrong with that?
Before x86 became the de facto standard, it was very common to have an optimizing compiler for your CPU brand, especially since most brands had their unique ISA in the first place.

The problem is that benchmark suites have been using a compiler that is not vendor-neutral, and doesn't have any intention to be vendor-neutral.
But how is that Intel's fault?

Reply Score: 1

Thom_Holwerda Member since:
2005-06-29

But how is that Intel's fault?


This discussion is moot anyway.

Intel already admitted its wrongdoing in the compiler case in the AMD settlement. In other words, if even Intel agrees its actions with the compiler are questionable - who are you to argue with them?

Reply Score: 2

Scali Member since:
2010-01-03

This discussion is moot anyway. Intel already admitted its wrongdoing in the compiler case in the AMD settlement. In other words, if even Intel agrees its actions with the compiler are questionable - who are you to argue with them?


Read my posts more clearly. I said that I don't approve of Intel's actions. In other words, I already said that I found them questionable. So I'm not arguing that. In fact, I'd want for them to change it.
However, that does not mean that all the accusations people throw around are valid, nor that Intel's actions are illegal (again, semantics: questionable and illegal are NOT the same thing).

Edited 2010-01-04 10:12 UTC

Reply Score: 1

silix Member since:
2006-03-01

developers want a compiler which produces an executable which works equally well on all processors so they don't have to distribute several copies, one for each CPU manufacturer

Just because developers want something doesn't mean they are "right" in wanting it - and I speak as a software developer.
Intel has brought this all on themselves; if they'd clearly say that ICC has performance issues on anything other than Intel processors

ICC is a proprietary compiler developed by a CPU maker for their own CPUs...
One should call it luck if it retains compatibility with competitors' CPUs at all (as a side effect of supporting Intel's own legacy CPUs - but more on this later); demanding that it optimize for Athlons and Phenoms as well as for Core i7s is a bit too much...
developers would've known about that and would have either chosen another compiler which produces acceptable performance across all CPUs, or could have opted to use two compilers and distribute two binaries. Now those developers who have bought ICC and used it to compile their software will have to recompile it all and somehow distribute those new, fixed binaries to their customers. That's a lot of unneeded hassle, and the bigger the product, the costlier it is to recompile.

You make it sound like something absurd, but it was actually how things used to go, at least among smart people where I studied and then worked.
ICC for the Intel target, MSCC for the "anything else" target was the norm - people who took for granted that a compiler developed by a chip maker should work equally well (if not better) on competitors' CPUs were rightfully mocked or scolded for not adhering to the norm,
or at least reminded of some factual and historical details they had overlooked, like CPU-specific optimizations (every single CPU family - e.g. Core rather than NetBurst or P6 - and sometimes revisions within a family - see the P4 Prescott vs. Northwood - has peculiar features and idiosyncrasies and requires a specific optimization strategy) and errata (individual revisions of individual families may or may not need workarounds/fixes - then again, separate code paths). Intel is already busy implementing their own optimizations and workarounds; expecting them to go beyond the minimum required for compatibility and fully implement workarounds for competitors' CPUs is unrealistic.
Historically, every major platform (i.e. CPU architecture from a single vendor) came with its own custom C compiler (or, more recently, a custom port of GCC), to which it was practically tied.
Besides, Intel has always finely tuned its compiler for their own processors to make up for the sometimes pesky architectural features of the latter (like the first Pentium's asymmetrical pipelines - two instructions could be executed at once, but with several restrictions, such as one simple and one complex, or one integer and one FP - or the P6).
The above was actually one of the very reasons for AMD's (and others') competitiveness in the performance field starting in the mid-90s: not being able to leverage proprietary compilers and having to cope with a vast installed base of existing code, they were forced to design their chips in more elegant ways in order to achieve better performance with generic i386 code.

Edited 2010-01-04 16:35 UTC

Reply Score: 1

Since ICC 8
by Carewolf on Sun 3rd Jan 2010 22:05 UTC
Carewolf
Member since:
2005-09-08

It should be mentioned that the crippling is not even an inherent or very old feature. On Linux, at least, it was introduced with icc 8. I know because that was the version where I stopped wasting my time on icc support in KDE. Until then I had improved support for the Intel compiler in several OSS projects, but that work was to support a superior compiler, not the crippled one that version 8 and later became.

Reply Score: 2

Sabotage that spread like a virus.
by dulac on Sun 3rd Jan 2010 22:54 UTC
dulac
Member since:
2006-12-27

Small thoughts...

On the argument that the compiler is Intel-specific:
- Just an excuse... We are actually talking about a compiler whose code will end up in OTHER programs. It is not a CPU, and the compiler will NOT work as expected nor as advertised.
- Stating that it is JUST for Intel CPUs is to accept the accusation of MONOPOLY and CARTEL-like practices, where the compiler is used together with the CPU to achieve an unfair advantage. Any dictionary calls that sabotage.


On the argument that "if you don't like it, don't use it":
- That is just looking the other way while the code spreads like a virus, without anyone except the first users knowing what was used in the first place. Not even them... as the majority doesn't have a clue about what is going on.
- Again, the right word is sabotage... and it is not excused, nor justified, by a little note posted "just in case" the "trick" is found. No crime is justified by "just following orders" nor by "I warned you" - especially given the virus-like spread of such code in libraries, tools and objects.
- The total ignorance of what is hidden under the statement "optimized just for Intel" now has a different meaning. And that different meaning is a revelation of what is under the sheets. Not an excuse, but evidence of motive, intention and action.


Concluding...
1 - Let nobody state that a warning is fair when no REAL information is given for the statement to be understood. It is just a legal precaution to commit a crime with impunity.
2 - Let nobody state that you have a choice when no choice is given, as reality is intentionally hidden, away from a compiler user's expectations, and misinformation is the game. The results spread virally.
3 - We have victims, motivation and actions. These cannot be erased, whatever the statements used to disguise the facts. And facts are what courts should run on, not sweet talk (or at least they should).
4 - Legal action is far removed from the concept of justice, as delaying is a battle tool... as is misinformation... pseudo-statements about the future...

This is my honest interpretation. Is there another?
---
"it was in front of them but they could not see it"

Reply Score: 4

Scali Member since:
2010-01-03

The problem with monopoly-related accusations is that the compiler is a piece of software (and sold separately from CPUs). Intel may have a huge market share in the hardware market, but the Intel compiler doesn't have a big market share in the compiler market. Microsoft is the biggest player there, and gcc is second.

The Intel Compiler is just a commercial product. Since anti-trust laws won't apply, it is indeed down to the old 'voting with your wallet'. If you don't like it, don't use it.

While I don't approve of Intel's approach here, I think it would be a huge loss for the free market in general if Intel were forced by any organization to modify their product. It would create case-law with repercussions that I don't even dare to think about.

Reply Score: 1

dulac Member since:
2006-12-27

Hi,
Sorry for some glitches in the text, but it was very late and the edit window is very small. Being unable to edit after posting, the glitches became permanent.

Naturally your point is clear, from the legal point of view.
However (and this was not clear), when using the word "crime" - a strong word with a broad range - I meant its original sense, not the interpretation that is becoming current, which confuses crime with whatever happens to be illegal.

Everyone knows that crimes existed BEFORE laws. Maybe people do not notice that crime is an ethical evaluation; legalities only come very late. That is why the connection is very loose: some "laws" allow crimes, and some forbid (or "criminalize") what is no crime at all.

The perspectives stated also depend on another concept: that of passive crime, as a complement to the usually active kind that anyone understands. I believe it's called "crime by omission"... or something like that. An example: seeing somebody dying and just walking away when one could have saved a life.

That was an extreme example, but it shows the obligations any citizen has towards society. The same applies to economics, as the only justification for a corporation to exist (and profit from society) is to be useful to society. Nothing less, from the ethical point of view that should be the basis for laws and their enforcement.

A corporation is a virtual citizen, not a king.
And it is only a citizen while it is useful to accept it as such. Not by right... but by allowance. Even if that is forgotten, it is still true... and worth reminding.

Anyway, law enforcement has a double problem, and the bigger one is that law is quite linear in a world that is not linear but complex, where perspective is more important than justifications.

Take ecology, for example - and this applies to every situation in society. Laws cannot, EVER, cover every possible situation. And that is the genesis of trouble when the wording becomes more important than the reasons for laws to exist.

You were very clear and precise in your comment, unlike me. I thank you for that.

Also missing was a mention of the common practice of ACTIVELY forbidding developers to do what they are doing, or could do very naturally. And even worse: we have seen many "internal memos" proving that this is common practice.

That gives a long range of opportunities when a compiler chooses which routines to use depending on which CPU is present. Whether it is a PASSIVE way to get a "result" or an ACTIVE one... the common practice gives us a clue as to which is more likely.

Anyway, laws become a maze where justice is lost.
So fairness is more important than ever.
And I guess that you feel the same.

What really shocks me is the trust a compiler earns through its knowledge of hidden internals... and then that trust and confidence being (actively or passively) used to get unfair results.

To me, it does not matter if it passes below the legal sieve. The reasons for laws to exist are more important, and that is where fairness lies - especially when a tool is respected, used... and present in libraries used by third parties... with confidence... and with unexpected implications for others.

It's all a question of perspective, not words, justifications or laws. Laws should follow their goals, not the reverse. That is what they are for... or should be.

And thanks, again, for pinpointing the more practical side. Regards.

P.S. - Sorry to have been a bit long, and I hope not to have any glitches this time.
As, again, it is late, and I need to rest.

Reply Score: 1

tylerdurden Member since:
2009-03-17

More than an "interpretation", I'd consider your post an ode to logical dissonance.

Do you even know what a compiler does? The difference between ISA and microarchitecture?

Furthermore, have any of you even used ICC... and if so, a real purchased and supported copy of ICC (not just one of them copies that fell off the internet)? Did you realize that nowhere in the contract does Intel claim to produce optimized code for non-Intel microarchitectures?

Basically, in exchange for buying ICC, Intel just guarantees that ICC will produce highly optimized code for their processors. If you want to build an application and are using ICC, you are aware of that. Maybe it is a douche move by Intel, I don't know... but that is not the point.

Next thing, we'll hear how evil Apple is because they did not optimize OSX for your AMD hackintosh.

ICC is a tool, and it does exactly what it claims to do: produce very optimized scheduling for Intel microarchitectures. Trying to blame Intel for not supporting competitors' products simply because you didn't bother to learn about the tool, or out of a sense of entitlement... is a tad disingenuous.

Edited 2010-01-03 23:23 UTC

Reply Score: 0

Reality Check !
by dulac on Sun 3rd Jan 2010 23:22 UTC
dulac
Member since:
2006-12-27

Really, when it comes to habits, or to having less information than needed, a little information is worse than none. And we can miss what is right in front of us. So...

To better understand the compiler's status, let's examine it from the viewpoint of different scenarios. This gives us a clearer picture, even in the absence of other indications, as it allows us to compare and thus to gauge the importance of each factor...

Imagine that the compiler:
A - Refuses to work on some CPUs.
B - Makes lousy code, in contrast to being the better one.
C - Uses the best routines available, unless the programmers accept being compromised or incompetent.

Now check:
A - Fair... but too disturbing, and everybody complains
B - Unfair... no disturbance, and no complaints
C - Fair... no disturbance, better product

I suppose that clears the picture, even for someone unable to evaluate the significance of some facts. Just a try (and a hope)...

Reply Score: 0

jbettcher
Member since:
2008-06-15

The main people this bug bothers and has hindered:

People with AMD systems using the ICC compiler (which would make me ask why).

Intel hasn't hindered AMD in any way when it comes to the normal XP/Vista/7 user running an AMD system; different C compilers were used to generate their programs, and optimizations have been put in place by other developers.

Sure, this makes Intel look like bastards for the tactics with their own compiler. But if this were some huge sabotage move like people are making it out to be, Intel would've been sued into oblivion by now.

Reply Score: 1

Thom_Holwerda Member since:
2005-06-29

Sure, this makes Intel look like bastards for the tactics with their own compiler. But if this were some huge sabotage move like people are making it out to be, Intel would've been sued into oblivion by now.


Uhm, they HAVE been sued into oblivion, and the compiler was part of said lawsuit between AMD and Intel (and now it's part of the settlement). It's also part of the FTC probe.

It's all in the articles. Didn't you read them?

Reply Score: 1

cerbie Member since:
2006-01-02

"People with AMD systems using the ICC compiler. (which would make me ask why)"

...because nobody uses both Intel and AMD processors.

...because nobody uses binaries they did not personally compile.

Reply Score: 3

setec_astronomy Member since:
2007-11-17

People with AMD systems using the ICC compiler. (which would make me ask why)


Just a little perspective, even though I'm aware it's a bit of an extreme data point:

About three years ago, small-scale clusters (esp. capacity machines) employing AMD Opteron processors and Gigabit Ethernet were in full swing in the HPC community, because they offered a very nice bang for the 10k€-100k€ buck. Nothing earth-shattering and certainly not Top500 material, but - especially due to the integrated memory controller - very nice machines for test and development runs before blocking the production machines with unoptimised code. As a consequence, AMD was able to capture a lot of ground on what was traditionally Intel's home turf.

Despite AMD's inroads in the hardware department, a lot of scientists (i.e. the "end users" of the HPC facilities) still defaulted to the Intel compiler suite, for various reasons: for Linux and non-commercial use, the compiler is gratis (this does not cover academic use, but as far as I'm aware virtually nobody gives a damn) and is not limited compared to the commercial option, which makes it a lot more attractive than, for example, the PGI suite or the NAG compilers. Additionally, or even because of that, even if you have to deploy your code on a different x86-based cluster system (for example for a "real" production run), odds are good that the Intel compilers are available, widely used and therefore well maintained.

Combine this with the fact that the bulk of scientific code in many disciplines (like, for example, lattice QCD) is still written in a wild mixture of Fortran 77/90 plus proprietary compiler extensions, and that neither gcc nor g95 had competitive Fortran compilers until recently, and you have quite a big hurdle to moving away from the Intel compilers (some might even go as far as calling this a potential vendor lock-in, those obnoxious brats, tz tz tz).

The majority of scientists in "my" field (high-energy physics) are more interested in actually "getting the job done" (oh, how I hate this phrase, but once in a while it's really appropriate) than in fiddling with optimisation options and processor-specific behaviour.
Consequently, they and the folks who actually have to maintain these cluster systems tend to be a conservative bunch, so that the un-sexy work of fine-tuning and optimising the code for a given compiler ideally has to be done only once.

If turning on optimisation options for SSEx/vectorisation does not yield the performance gains people expect or are even used to, the "new" component (aka the non-Intel hardware) tends to receive the blame, not the compiler ("scales as expected on my Xeon desktop workstation, sucks on your shitty Sun/Opteron cluster. Fix that."). And although this behaviour was discovered some time ago, you would be surprised how many "end users" of HPC facilities are not aware of the implications of this performance regression.

As a side note: if you are a programmer interested in the ins and outs of optimising code, make sure to check out the software optimisation resources on Agner Fog's site; they are imho a classic read:

http://www.agner.org/optimize/

Reply Score: 5

Scali Member since:
2010-01-03

Despite AMD's inroads in the hardware department, a lot of scientists (i.e. the "end users" of the HPC facilities) still defaulted to the Intel compiler suite, for various reasons: for Linux and non-commercial use, the compiler is gratis (this does not cover academic use, but as far as I'm aware virtually nobody gives a damn) and is not limited compared to the commercial option, which makes it a lot more attractive than, for example, the PGI suite or the NAG compilers. Additionally, or even because of that, even if you have to deploy your code on a different x86-based cluster system (for example for a "real" production run), odds are good that the Intel compilers are available, widely used and therefore well maintained. Combine this with the fact that the bulk of scientific code in many disciplines (like, for example, lattice QCD) is still written in a wild mixture of Fortran 77/90 plus proprietary compiler extensions, and that neither gcc nor g95 had competitive Fortran compilers until recently, and you have quite a big hurdle to moving away from the Intel compilers (some might even go as far as calling this a potential vendor lock-in, those obnoxious brats, tz tz tz).


It's not Intel's fault that AMD doesn't offer an alternative product.

By the way, the runtime selection of codepaths is just a compiler option.
It's perfectly possible to compile only a single codepath and effectively 'force' your CPU to run SSE/whatever code.
I've been using it on my Athlon back in the day, and got pretty good results.

Reply Score: 1

setec_astronomy Member since:
2007-11-17

It's not Intel's fault that AMD doesn't offer an alternative product.


Of course it isn't Intel's fault. But in my opinion it's not AMD's fault either if people who react allergically to compiler flags blame AMD for poor performance when a compiler silently drops back to a fallback path with - again, imho - unreasonable performance regressions.

EDIT: I accidentally deleted the second sentence; I suck at typing on a computer with a touchpad.


By the way, the runtime selection of codepaths is just a compiler option.
It's perfectly possible to compile only a single codepath and effectively 'force' your CPU to run SSE/whatever code.
I've been using it on my Athlon back in the day, and got pretty good results


Which is precisely the reason why I'm not really buying the "you have to know the internals of the CPU to provide stuff like SSEx" argument. It may not be optimal, but it sure as hell is faster than vanilla, non-superscalar, non-vectorised code.

My comment was just intended to highlight that there are people who use AMD-based systems extensively and to the limit of their capabilities, yet have difficulty switching to a different compiler platform for their production runs.

Edited 2010-01-04 10:11 UTC

Reply Score: 2

Scali Member since:
2010-01-03

Which is precisely the reason why I'm not really buying the "you have to know the internals of the CPU to provide stuff like SSEx" argument.


That's not the same thing though.
I didn't say you have to know the internals to USE SSE. Obviously you can run SSE code on any CPU that reports that it supports it (in the case of forcing a certain architecture without runtime CPU-dispatching, you just get a crashing executable on CPUs that don't support it).

However, you DO have to know the internals in order to select the most OPTIMAL path, be it SSE or something else.

So that is the difference...
Intel never claimed that they would run SSE-code or whatever on any CPU that reports support for it, nor do they make any claims about the level of optimization for CPUs that aren't directly supported (which is a recent subset of Intel CPUs only).

It's frustrating to see that so many people don't seem to understand the difference between instruction sets and microarchitecture.
Here's a simple question for those people to contemplate:
Considering that the Core 2, Core i7 and Phenom all support the same basic x86-64 and SSE instruction sets (up to SSSE3 at least), how is it possible that they do not perform the same, even if clock speed, cache and other factors are kept equal? And that the performance difference is not constant, but varies from application to application?
The answer is: microarchitecture.

Edited 2010-01-04 10:39 UTC

Reply Score: 1

setec_astronomy Member since:
2007-11-17

At the risk of repeating myself:

It goes without saying that better knowledge of the microarchitecture usually translates into better optimisation strategies/options (duh!); i.e., I'm not arguing against the fact that it is reasonable to expect the Intel compiler to perform best on Intel's own processors.

But there is imho a vast difference between gracefully degrading the aggressiveness and sophistication of the optimisation strategies/code paths (e.g. providing generic SSEx implementations that may not be optimal performance-wise but allow for better utilisation of the hardware features/registers/etc. at hand, especially given that the generic code paths in question are already part of the compiler) and falling back to the behaviour of a glorified 386.

Intel is the king of the hill in the x86 processor business, so every move they make (or, in this case, don't make) that treats their competitors/licensees significantly worse compared to their own platform is bound to raise questions of abusing their dominant market position.

Reply Score: 2

Scali Member since:
2010-01-03

Intel is the king of the hill in the x86 processor business, so every move they make (or, in this case, don't make) that treats their competitors/licensees significantly worse compared to their own platform is bound to raise questions of abusing their dominant market position.


As I said before: Intel is hardly a large player in the compiler market, so any "dominant market position" rhetoric is just nonsense.
Developers need to specifically BUY the Intel compiler, while gcc comes free with most OSes, and on Windows people generally use Microsoft Visual Studio, which comes with its own compiler as well.
Both gcc and Microsoft are quite capable of generating well-optimized code, so most developers don't see any reason to spend money on the Intel compiler. It's a nice product, mostly interesting for scientific computing (Fortran and/or getting the most out of your high-end hardware).

Reply Score: 2

cjcoats Member since:
2006-04-16

It's not Intel's fault that AMD doesn't offer an alternative product...


Actually, they do, in cooperation with Sun: see

http://developers.sun.com/sunstudio/index.jsp

Sun Studio Express compilers for Linux are available free of charge for commercial and non-commercial use, and are probably the best AMD-targeting compilers out there since the demise of PathScale. They do a very good job for Intel processors too, for that matter (typically within 1% of Intel, according to benchmarks I've run).

It's sad that they haven't advertised this better...

Edited 2010-01-04 15:36 UTC

Reply Score: 2

RE: Alternative product...
by Scali on Mon 4th Jan 2010 15:50 UTC in reply to "Alternative product..."
Scali Member since:
2010-01-03

And they don't seem to support the biggest platform: Windows...
Because that's what it's all about... PCMark05, a Windows benchmark, written in C++.

Reply Score: 1

No mention of LLVM yet?
by Flatland_Spider on Mon 4th Jan 2010 00:00 UTC
Flatland_Spider
Member since:
2006-09-01

AMD, VIA, and anyone else interested in making an x86 compatible chip should start throwing code at the LLVM project.

Reply Score: 2

RE: No mention of LLVM yet?
by tyrione on Tue 5th Jan 2010 09:27 UTC in reply to "No mention of LLVM yet?"
tyrione Member since:
2005-11-21

AMD, VIA, and anyone else interested in making an x86 compatible chip should start throwing code at the LLVM project.


See above. In the weeds are at least two mentions of LLVM.

Reply Score: 2

Wait...
by deathshadow on Mon 4th Jan 2010 02:37 UTC
deathshadow
Member since:
2005-07-12

People actually use the Intel reference compiler for ... production code?

Since when?!?

That said, oh noes, a company's software works best on their hardware but still FUNCTIONS on others (though not as well)...

Who do they think they are, Apple?

Edited 2010-01-04 02:38 UTC

Reply Score: 1

RE: Wait...
by Vanders on Mon 4th Jan 2010 09:00 UTC in reply to "Wait..."
Vanders Member since:
2005-07-06

People actually use the Intel reference compiler for ... production code?

Since when?!?


HPC users, because ICC will usually generate the tightest and fastest code on their Intel based clusters. Of course those sorts of users tend to benchmark things like compilers before they use them, so the woeful performance on non-Intel CPUs has been known for some time (although perhaps not the mechanism).

Reply Score: 2

benchmarks
by Calipso on Mon 4th Jan 2010 14:11 UTC
Calipso
Member since:
2007-03-13

So I'm no expert on compilers or code in general, but here's a thought I just had. When new CPUs come out, various websites benchmark them using certain benchmarking software, and people often buy CPUs based on the results they read. If the benchmarking software was compiled using Intel's compiler, would it be optimized specifically for Intel CPUs, and could the results therefore be skewed in favour of Intel's CPUs, to AMD's disadvantage?

Just curious what kind of effect this could have.

Reply Score: 1

RE: benchmarks
by big_gie on Mon 4th Jan 2010 16:25 UTC in reply to "benchmarks"
big_gie Member since:
2006-01-04

Yes, this is exactly what was reported in the article. If the CPU was spoofed so that it reported itself as Intel instead of AMD/VIA, the same benchmarking program would show up to a 50% increase in performance! This is a serious problem, as people base their buying decisions on these results when shopping.

Reply Score: 2

RE[2]: benchmarks
by Scali on Mon 4th Jan 2010 16:28 UTC in reply to "RE: benchmarks"
Scali Member since:
2010-01-03

Yes, it's a problem, but the blame should be with the benchmark developers, not with the Intel compiler.

Reply Score: 2

RE[3]: benchmarks
by big_gie on Mon 4th Jan 2010 16:47 UTC in reply to "RE[2]: benchmarks"
big_gie Member since:
2006-01-04

I agree the benchmark developers should bear part of the blame. I still think Intel is taking advantage of it, though.

The conclusion to this is probably: do your own benchmarks, with your own code. You cannot trust a program when you don't know what it is doing or how it was compiled. Benchmarking is an extremely hard art.

Reply Score: 1

RE[2]: benchmarks
by Calipso on Mon 4th Jan 2010 16:59 UTC in reply to "RE: benchmarks"
Calipso Member since:
2007-03-13

heh, oops. Next time I should read the article before commenting. ;)
thanks for the answer though.

Reply Score: 1

Scali
Member since:
2010-01-03

Anyone recall ScienceMark?
Historically it was one of the few benchmarks that Athlons performed well in...
Look at these results for example:
http://www.extremetech.com/article2/0,2845,2014652,00.asp
An Athlon64 FX-62 about as fast as a Core2 Duo X6800?

Amazing, no other benchmark shows results even remotely similar...

The plot thickens when you realize that some of ScienceMark's developers are/were AMD employees.
(eg 'redpriest', as he himself says here:
http://www.hardforum.com/showpost.php?p=1034771780&postcount=142
"Full disclosure: I am an engineer that works for AMD (in CPUs and not in graphics)")

Funny that in all those years there never was any news site that picked up on this. I guess Intel just generates far more hits than AMD.

Edited 2010-01-05 13:18 UTC

Reply Score: 1

Summary of it all
by Scali on Tue 5th Jan 2010 21:06 UTC
Scali
Member since:
2010-01-03

I've summarized most of what is said on my blog:
http://scalibq.spaces.live.com/blog/cns!663AD9A4F9CB0661!238.entry

I've also linked to an optimization challenge that I participated in on an assembly forum a while ago.
It clearly shows that the fastest code on one CPU is not necessarily the fastest on another, and could actually severely cripple performance.
The best example is an MMX routine that was the fastest on an Athlon XP, but among the slowest on both the Core 2 Duo and the Pentium 4. Those were better off with solutions not using MMX, because of microarchitectural differences from the Athlon XP in the MMX implementation (to be exact: the penalty of the EMMS instruction).

Reply Score: 1