Mark Mitchell announced the availability of GCC 4.0.2. He explains, “this release is a minor release, containing primarily fixes for regressions in GCC 4.0.1 relative to previous releases.”
Give me 4.1.0 .. auto vectorization.. lots of new optimizations.. awesomeness!
Oh yeah! Let the ricing begin
GCC is the most overrated compiler available. It flat out sucks compared to most other C or C++ compilers. However, it is used to compile more lines of code per unit time than just about any other compiler, if not all of them. Why? Because it’s free and open source. Even if your source only compiles with a certain version of GCC, people can install that version free of charge and compile the code.
However, by all other metrics, GCC is crap. I’m glad that 4.x introduces a workable optimization framework, because I hope that maybe 5 years down the road, GCC will be as good as the next most popular compiler.
Autovectorization would be a start, but that would only begin to overcome the tremendous gap in optimization capabilities compared to other compilers. This isn’t about Gentoo or “ricing” (whatever that means). We need a decent compiler, not one that is merely free and open source.
Ricing in the context of compilers refers to this page:
http://funroll-loops.org/
Wrong. GCC is an excellent compiler and the only compiler that is on par is Intel’s compiler on Intel architectures. What we need is not a different compiler but more focused development behind GCC.
“Wrong. GCC is an excellent compiler and the only compiler that is on par is Intel’s compiler on Intel architectures.”
GCC is not in any way shape or form on par with ICC on x86 or IA-64. In fact, this is probably the biggest performance gap of all.
“What we need is not a different compiler but more focused development behind GCC.”
Totally agree with that. No reason to throw out the best FOSS compiler suite and start over.
I believe it’s farther behind the PPC compilers than Intel’s.
But frankly, you’re making a big deal out of small potatoes. You could either spend 10 thousand hours improving GCC, or work for 100 and buy a machine that’s 3 times faster…
That doesn’t mean we shouldn’t improve gcc, it just means that it’s plenty fast enough for 99% of the jobs it does. The other 1% will just have to pay for now. It’s not a perfect world .
GCC does have a huge advantage in that it’s the compiler which compiles for Intel, PPC, and half of every other processor available. Get Intel to do that one!
“But frankly, you’re making a big deal out of small potatoes. You could either spend 10 thousand hours improving GCC, or work for 100 and buy a machine that’s 3 times faster…”
Or you could study your problem for 1 hour and use a better algorithm.
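To make that quip concrete, here is a purely illustrative C sketch (not from the thread): replacing an O(n) loop with a closed form beats anything a compiler could ever do to the loop.

```c
#include <stdint.h>

/* O(n): a compiler can unroll or vectorize this, but it stays linear. */
uint64_t sum_loop(uint64_t n)
{
    uint64_t s = 0;
    for (uint64_t i = 1; i <= n; i++)
        s += i;
    return s;
}

/* O(1): Gauss's closed form -- no optimizer needed at all. */
uint64_t sum_formula(uint64_t n)
{
    return n * (n + 1) / 2;
}
```

No amount of `-O3` turns the first function into the second; an hour with the problem does.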
I have a friend working at Intel; they do software development for network processors.
They use GCC!
<em> the only compiler that is on par is Intel’s compiler on Intel architectures</em>
You’ve got to be kidding. Maybe you’ve got a case for it being the closest, but you’ve got to convince us that it’s faster than MS Visual C.
Other compilers don’t quite have the same goals as gcc. Sure, Intel’s compiler creates more optimized code for x86. However, it has a damned hard time creating code for a 68xxx or a PPC 970. Microsoft’s C/C++ compiler might be able to create pretty programs for Windows on x86, but that doesn’t help somebody running BeOS on a PowerPC, does it? Or what about other languages? Sure, lots of things compile C and C++. But how many compilers also do Ada? Fortran? Java?
Sure, it might not be the best at compiling highly optimized C code, it might not have the deepest depth. But it sure has breadth, eh?
“Sure, it might not be the best at compiling highly optimized C code, it might not have the deepest depth. But it sure has breadth, eh?”
Yes, and it’s free software. So in summary it is able to compile a lot of source languages for a lot of target architectures, but it doesn’t do any of them very well. All it has is breadth and availability.
It does them well enough (the vectorisation stuff going into 4.1 isn’t trivial), and it also excels in standards compliance. You could write STL-based C++ programs in GCC long before you could with commercial compilers like VC++.
And, to be fair, where a lack of optimisation occurs, often it’s due to them being unable to incorporate optimisation algorithms due to patents, rather than a lack of ability.
“but it doesn’t do any of them very well.”
That’s going too far. You’re not going to see your code quadruple in speed on vc++, but that’s what you’re conveying via implication. And you’re not going to see your code suddenly malfunction when it should work (but you’re conveying that via implication).
It does the job well. Gcc objects seem to run very stably and accurately for me (assuming I don’t -funroll-loops); it just misses that extra hump of speed.
If you all were accurate in saying gcc is significantly slower than vc++, then Microsoft has some explaining to do, because its monolithic architecture built on a better compiler isn’t running much snappier than modular architectures built on a weaker compiler!
GCC will never be the best compiler, because if GCC were the best compiler then it would be the only compiler (because it’s free).
But for a portable, free compiler, it’s all we have.
– Jesse McNelis
If you’re a serious developer, there is no reason your software should suffer because you refuse to pay for your compiler. That’s unfair to your end users.
Though MS’s compiler toolkit is free if you’re a windows developer. No reason to use gcc on Windows.
gcc is pretty good for linux though.
@ sappyvcv
There you go again, little MS-lover
GCC sometimes produces faster binaries on Windows than Microsoft’s Visual C++ compiler. The reverse is also true. However, GCC is cross-platform, and MS VC isn’t.
And now you probably will be screaming about evidence and numbers.
So here you go:
Search Google with these words:
gcc microsoft compiler comparison
dylansmrjones
kristian AT herkild DOT dk
BTW: I’m not a GNU/Linux troll. I just happen to like having clean facts on the table and access to anything I want.
There you go again, calling me names.
Yeah, gcc is so great. That’s why Firefox devs use it to compile Firefox on Windows.
I’d love to hear your thoughts on that. Why do you think is GCC crap and why are other compilers better? Would you care to share detailed reasons? After all, crap is subjective. I’d like to form an opinion based on any supporting facts you have.
I’m using the renesas microcontrollers (h8300h).
gcc let me compile my code under linux, netbsd, and windows.
gcc let me develop c++ application for h8300h.
gcc let me compile the same code for arm7 too.
(Many compilers do not use the same way to place for example a const array into a special zone of memory).
There aren’t many compilers for those micros around here, and C++ ones are even rarer.
My boss would tell me, hey for our next generation of boards we will use the arm7.
No problem, I can compile the same code, base libraries at least, for arm7 easily.
So: cross-micro (not only microprocessors but even microcontrollers), cross-platform, C and C++, continuous development, and it’s free.
Just one example that I can personally vouch for.
Is the code not as optimized as IAR’s (http://www.iar.com)?
I didn’t notice that.
But what’s the point of a compiler that uses a GUI, runs on Windows only, comes with only one year of support, and costs 2000 euro for one license (yes, they have a hardware key)?
I totally agree. Maybe gcc is not the best of the best, but it’s the best for me, because it works. Not only is it free; I also like that it compiles for arm7, avr and other microprocessors. Those “most optimizing compilers” are IMO not better than gcc. Are the optimizations they perform actually worth the price, or are they optimized for speed comparison tests?
I’m not talking about compilers for microcontrollers, because they are optimized for chips with limited register set, while gcc expects unlimited register space.
“Are the optimizations they perform actually worth the price, or are they optimized for speed comparison tests?
I’m not talking about compilers for microcontrollers, because they are optimized for chips with limited register set, while gcc expects unlimited register space.”
There is a very significant real-world advantage to auto-vectorization in throughput workloads. For example, if I compiled GCC with ICC, then GCC would compile faster. This advantage only gets larger as the size of the register file increases. Now that more Linux/GCC users have x86-64 processors with 16 GPRs, the gains from vectorization are increased. In addition, Linux/GCC is becoming more popular on high-end architectures such as POWER5, where even higher gains can be realized through aggressive instruction reordering and vectorization.
The good news is that we might see some useful optimization code coming out of IBM very shortly. They’ve written a subset of autovectorization in order for GCC to support the Cell processor. Basically, it efficiently packs integer types into 128-bit registers for vector processing. The patches are to be released under GPL, but that’s all I know.
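The packing idea can be sketched in plain C (purely conceptual, no real SPE intrinsics assumed): four 32-bit integers live in one 128-bit value, and a vector unit operates on all four lanes in a single instruction.

```c
#include <stdint.h>

/* Conceptual model of a 128-bit vector register holding four
 * 32-bit integer lanes, the layout a SIMD unit such as the Cell's
 * SPEs works with. A struct stands in for the hardware register. */
typedef struct { int32_t lane[4]; } v128;

/* One vector add = four scalar adds performed as a single operation. */
v128 v128_add(v128 a, v128 b)
{
    v128 r;
    for (int i = 0; i < 4; i++)
        r.lane[i] = a.lane[i] + b.lane[i];
    return r;
}
```

An autovectorizer's job is exactly this transformation: spotting four independent scalar adds in a loop and emitting one packed instruction instead.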
Now that more Linux/GCC users have x86-64 processors with 16 GPRs, the gains from vectorization are increased.
How so? I fail to see how GPRs have anything to do with vectorization.
In addition, Linux/GCC is becoming more popular on high-end architectures such as POWER5, where even higher gains can be realized through aggressive instruction reordering and vectorization.
Well it seems you really are pulling stuff out of your arse, because the POWER5 does not have any vector/SIMD units. Idiot.
LOL. Cold but truthful.
“Well it seems you really are pulling stuff out of your arse, because the POWER5 does not have any vector/SIMD units. Idiot.”
You’re right. Vectorization wouldn’t have any impact on POWER5. However, who’s to say that vector processing won’t be a feature on POWER6 (Eclipse)? Instruction reordering still remains important.
Thanks for keeping me honest.
Care to back up your claims about poor performance from GCC Butters?
Show me the code! (or numbers in this case).
GCC is very decent. Not the best of the best, but very decent. And here are the numbers! http://www.osnews.com/story.php?news_id=5602&page=3
So C code compiled with GCC performs worse than C++ or C# compiled with Visual Studio, despite the overheads incurred by the latter languages. But at least it’s faster than Java or Python!!
Look, without GCC, there would be no free software system as we know it today. My comment didn’t deserve the “Hey, take that back!!” kind of response. GCC doesn’t support basic optimization techniques aggressively implemented in just about every other C/C++ compiler, and it isn’t a very good Java compiler either. It produces working code (in most cases), and beyond that it’s absolutely mediocre.
I’m just saying there’s a lot of work yet to be done.
Yes. There wouldn’t be BeOS, Linux, and other non-Microsoft OSes, probably Apache, PHP, MySQL and most of free software.
“GCC doesn’t support basic optimization techniques”
Which ones ?
“So in summary it is able to compile a lot of source languages for a lot of target architectures, but it doesn’t do any of them very well.”
I guess Apple using GCC as the default compiler means a better compiler for Apple exists? XCode and tiger ship with GCC 4. So at least it is good enough for one vendor.
“I guess Apple using GCC as the default compiler means a better compiler for Apple exists? XCode and tiger ship with GCC 4. So at least it is good enough for one vendor.”
Xcode 2 ships with GCC 4, but previous versions didn’t use GCC (at least not by default). Apple seems to have packaged its own version of GCC 4, whatever that means. It lists autovectorization as a feature. From GCC’s website it seems that there is very primitive support for autovectorization in GCC 4.0.x: it only supports inner loops with sequential memory accesses to the same data type, with no reduction or induction. It does look like real vectorization support will arrive in GCC 4.1.x.
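Those constraints translate into C roughly as follows (a sketch based on the GCC manual’s description of `-ftree-vectorize`; exact capabilities vary by 4.0.x release):

```c
#include <stddef.h>

/* The loop shape GCC 4.0.x's tree vectorizer is documented to handle:
 * innermost loop, sequential (unit-stride) accesses, one data type,
 * no reduction or induction beyond the loop counter.
 * Try: gcc -O2 -ftree-vectorize */
void scale(float *dst, const float *src, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a * src[i];      /* vectorizable in 4.0.x */
}

/* By contrast, a reduction like this was left scalar in 4.0.x: */
float total(const float *src, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += src[i];              /* reduction -> not vectorized yet */
    return s;
}
```

Reduction support is among the things slated for the 4.1.x vectorizer.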
Nonsense. Xcode 1 used GCC 3.3, and Project Builder before that used GCC 3.1. Ever since OS X was released, Apple have been heavily using GCC.
To extend on that, in OS X 10.0, PB used gcc 2.95; before that, WebObjects used gcc 2.7, and so forth down to NeXT starting their company based on gcc. Where do you believe the Objective-C support in gcc comes from…
Actually, Tiger is still compiled with GCC 3.3; however, they’ve offered GCC 4.0 as the default compiler since 2.1 was released (2.0 included both 3.3 and 4.0, but 3.3 was still the default).
I have a feeling that they want to get coders used to GCC 4.x requirements before making the big move of compiling Mac OS X over to GCC 4.x.
That being said, the optimisation framework IIRC is either finished or still being worked on. It’s a work in progress, and I have a feeling you’ll find a lot of the work will start to go into optimising the code for x86 as much as possible, to make their x86 machines appear better than the PowerPC machines once released.
Mind you, I’d love to see an open workstation vendor release a PPC970-based motherboard and processor, with more of an effort by IBM to get more OSes onto the PPC970, such as Solaris, Linux, AIX (a free, unsupported version), FreeBSD, etc.
I have a number of friends who are core developers at Apple, and ALL of them HATE GCC. And most of them come from a FOSS background.
GCC is about 5x (if not more) slower than Metrowerks (what they previously used) at compiling, and the generated code is noticeably slower.
Many teams inside Apple resisted the change to GCC.
GCC just recently got dead code removal, a feature that’s been around since, what, the 1960s? Without dead code removal I found it unsuitable for many embedded projects (yes, I know that a staggering number of them use gcc nonetheless).
GCC has a lot of advantages, but quality of compiled code, or speed with which it compiles or size of resultant binaries is certainly not among them.
My understanding is that GCC4 involved a massive internal redesign to a really, REALLY good architecture. In transition many little features (adding up to a lot of performance) got put to one side.
The new architecture will take time to leverage, however it will then be head and shoulders ahead of anywhere it has ever been. Am talking about modularity that enables optimisations to be simpler to integrate and apply across languages, processors and so forth.
Now is IMHO a silly moment to whinge about GCC performance; so go with an earlier version if you are looking for all the optimisations.
Of course, I may have got it all wrong.
IBM most definitely has a better PowerPC compiler, but it isn’t free.
I believe IBM ships a better PowerPC compiler; not sure if it’s optimized for a G5 or G4, but if it were worth their time/energy they’d do it, and alas it’s not.
Get off your ass and donate your time to make it the “best of breed.” Or if you are really talented, get hired by IBM or RedHat and work on GCC for C/C++.
Otherwise, proclaiming a compiler suite that targets a minimum of 7 languages versus a compiler targeting 2 is pointless.
Wanna take a guess at how butt slow Intel’s compiler is for Objective-C?
Oh wait! It doesn’t and will not support Objective-C. Guess Apple will have to do all the work on that one and give it back to GCC.
The Autovectorization will be nice along with several advancements to C and Objective-C at 4.1 and beyond. Get hired by Apple if you want to know more.
“Get off your ass and donate your time to make it the “best of breed.” Or if you are really talented, get hired by IBM or RedHat and work on GCC for C/C++.”
At IBM we only use GCC in the Linux Technology Center. Everyone else uses Visual Age XLC for C/C++.
I work on a static analysis tool, which inherently needs to parse the code like a compiler. We worked really hard to tweak GCC to emulate XLC, but in the end we only got about 90% parse coverage. We dumped GCC for the EDG parser, which we were able to configure to get about 98% coverage. Of course, IBM owns Visual Age, but for some reason we can’t integrate XLC into our static analysis tool.
The funny thing about IP lawyers is that they never get fired for saying no.
Correct me if I’m wrong, but the benchmarks were made with gcc 3.3.1. Didn’t a lot of optimization go into the 4.x.x codebase?
Maybe those benchmarks should be repeated with the latest gcc to get a more current result.
Please give me feedback on this; I’m not a programmer, so I may be wrong.
Now maybe Arch Linux can start getting its updates put into the current and extra branches again!
I don’t care much about most FOSS but there are some packages who are just great: GCC, Valgrind and Firefox to name the three I use regularly.
Linux is *not* user friendly, and until it is, Linux will stay with <1% marketshare.
Take installation. Linux zealots are now saying “oh installing is so easy, just do apt-get install package or emerge package”: Yes, because typing in “apt-get” or “emerge” makes so much more sense to new users than double-clicking an icon that says “setup”.
Linux zealots are far too forgiving when judging the difficulty of Linux configuration issues and far too harsh when judging the difficulty of Windows configuration issues. Example comments:
User: “How do I get Quake 3 to run in Linux?”
Zealot: “Oh that’s easy! If you have Redhat, you have to download quake_3_rh_8_i686_010203_glibc.bin, then do chmod +x on the file. Then you have to su to root, make sure you type export LD_ASSUME_KERNEL=2.2.5 but ONLY if you have that latest libc6 installed. If you don’t, don’t set that environment variable or the installer will dump core. Before you run the installer, make sure you have the GL drivers for X installed. Get them at [some obscure web address], chmod +x the binary, then run it, but make sure you have at least 10MB free in /tmp or the installer will dump core. After the installer is done, edit /etc/X11/XF86Config and add a section called “GL” and put “driver nv” in it. Make sure you have the latest version of X and Linux kernel 2.6 or else X will segfault when you start. OK, run the Quake 3 installer and make sure you set the proper group and setuid permissions on quake3.bin. If you want sound, look here [link to another obscure web site], which is a short HOWTO on how to get sound in Quake 3. That’s all there is to it!”
User: “How do I get Quake 3 to run in Windows?”
Zealot: “Oh God, I had to install Quake 3 in Windoze for some lamer friend of mine! God, what a f–king mess! I put in the CD and it took about 3 minutes to copy everything, and then I had to reboot the f–king computer! Jesus Christ! What a retarded operating system!”
So, I guess the point I’m trying to make is that what seems easy and natural to Linux geeks is definitely not what regular people consider easy and natural. Hence, the preference towards Windows.
And now back to topic…
I’m really excited about good f95 support! But I’m also waiting for f77 and f2c stuff to be working again.
You need to clarify the “waiting for f77 and f2c stuff to be working again.” Mainline gfortran, which will be in 4.1.0, compiles the NIST F77 testsuite without a problem, along with numerous other standard Fortran 77 codes. The code freeze to stabilize 4.0.2 prevented some of the needed patches from being included in 4.0.2. The 4.0.x branch is now open for commits, and the gfortran developers have started to backport the patches.
I have no idea what you mean about “and f2c stuff”.
—
steve
I’m really excited about good f95 support!
Are you kidding me?? Try to compile this with gfortran:
program bug
     implicit none
     integer, parameter :: nx=32,ny=32
     real, dimension (nx,ny) :: f
     integer :: i
     f=spread(cshift((/(i,i=1,nx)/),nx/2)*2,2,ny)
end program bug
It’s such a pity that there’s no collaboration between gcc and g95 — a free Fortran 95 Compiler that actually works…
Obviously, no compiler in the world would compile the above code… In my defense, the  ’s did not show up in the preview.
Yes, there are bugs in gfortran. I can’t find your bug report in the GCC bugzilla database. What is the bug number?
You should also recognize that gfortran needs to compile legacy code, which g95 can’t. A simple example:
do 10 x=0.0, 1.0, 0.01
print *, x
10 continue
As to gfortran and g95 getting along: the ball is in Andy’s court. For a brief history, see
http://gcc.gnu.org/wiki/TheOtherGCCBasedFortranCompiler
As the gcc devs themselves have stated, the reasons they are behind in terms of optimization are, first, that it is a CROSS compiler, and second, that the old optimization framework was severely limited. Rewriting the framework was the primary task for 4.0.x. Now that the new framework is in place, the old optimizations that are not yet implemented will likely be added first, and THEN we’ll probably see what the new framework is capable of.
gcc is also getting a lot of help from IBM, AMD and Intel these days, since they all have a great interest in gcc generating good code for their respective CPUs (yes, even Intel, since even though it’s free, icc for Linux just isn’t very compatible). That said, CPU-specifically targeted compilers such as icc, vc, vectorc etc. will likely always outperform gcc (hell, they should be ashamed otherwise); however, gcc is definitely on track to minimize the gap, and it will be interesting to follow the 4.1.x series.
You’re right. Some people, however, still like to insist that gcc produces better machine code from C/C++.
It’s not really a criticism to say it doesn’t; it’s the truth. It’s more of a side-effect of the scope of the compiler: cross-platform and cross-language.
On Windows, until it catches up to the other compilers, there is next to no reason to use it.
The IBM compiler can only be used if the system is a gcc 3.3 system; it’s not compatible with the 3.4 or 4.0 backend.
(Maybe a newer version will be released.)
gcc 4.0 is as fast as the IBM compiler.