“Changes from GCC 4.4, which was released almost one year ago, include the use of the MPC library to evaluate complex arithmetic at compile time, C++0x improvements, automatic parallelization as part of Graphite, support for new ARM processors, Intel Atom optimizations and tuning support, and AMD Orochi optimizations too.” See also the GCC 4.5 changelog.
Awesome stuff.
I guess by now everybody has heard about the new plugin system – it's huge! Even GCC skeptics will be happy about it, since it makes Dragonegg possible (http://dragonegg.llvm.org/), which is a much better reimplementation of llvm-gcc. In other words, it lets you use the GCC front end together with the LLVM middle and back ends.
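If I understand the dragonegg instructions correctly, using it basically comes down to loading the plugin through GCC 4.5's new -fplugin option (the paths and file names here are just my own illustration):

    gcc -fplugin=./dragonegg.so -O2 hello.c -o hello
    (GCC parses the source as usual, but optimization and code generation are handed over to LLVM)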
Other huge improvements are in the optimizer: the addition of the new link-time optimizer (LTO) and the much improved automatic parallelizer.
But personally, what I've been looking forward to are the big improvements in the C++ front end. It's no mystery by now that GCC has had, ever since 4.2, the best C++ front end of any compiler. Each subsequent release has brought great improvements to the C++ front end, and 4.5 is no exception. In this release, compilation time grows only linearly with the number of templates instantiated, and specialization lookup should now take constant time. Together with the other template-related improvements (e.g. GCC's debug output will now omit template parameters that are equal to their default value) and the improved C++0x support, this means yet another round of great improvements for all of us template-metaprogramming lovers.
Yes, great news. LTO is the biggie this time, but Graphite has seen much improvement as well (although there's still work to be done). And as you stated, the plugin architecture has great potential (especially for LLVM, with Clang still needing a lot of work and the llvm-gcc releases using the old GCC 4.2 as a front end). Given that LLVM has LTO already but it's only available on OS X, will the DragonEgg plugin be able to use GCC's LTO? I'd like to do some benchmarking between the two.
Could someone maybe provide a link to what has changed in the LTO? I'm a little clueless as to what it does – am I correct in assuming it refers to compiling code that links to dynamic libraries? Does it improve speed and snappiness?
There is a list of changes:
http://gcc.gnu.org/gcc-4.5/changes.html
But nothing is mentioned in depth about the x86-64 optimisations; I'm sure heaps of work has been done, but it would be great if someone could point me to an article about it.
It's about optimizations that happen when the various object files (.o) produced by the compiler are linked together to produce the final executable or library. For example, if your project consists of just two .cpp files and the result is one standalone executable, you already benefit from it. The point is that certain optimizations can only be done at that stage (link time). It can result in faster and smaller code. Links? Google gives:
http://gcc.gnu.org/wiki/LinkTimeOptimization
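To make it concrete, here's a minimal sketch (file names and exact commands are just my own illustration; -flto is the new GCC 4.5 flag described on that page):

    // square.cpp – helper in its own translation unit
    int square(int x) { return x * x; }

    // main.cpp
    int square(int x);
    int main() { return square(7); }

    // Compile each file separately, then link with LTO enabled:
    //   g++ -O2 -flto -c square.cpp
    //   g++ -O2 -flto -c main.cpp
    //   g++ -O2 -flto square.o main.o -o demo
    // Only at that final link step does the compiler see both translation
    // units at once, so it can, for instance, inline square() into main()
    // even though they live in different object files.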
Edited 2010-04-16 01:58 UTC
Well, nothing has actually 'changed' in LTO, since it's new in GCC 4.5. Link-time optimization is a way of looking at all the different object files as one single entity, which allows for more aggressive optimization (particularly if there's no need to export symbols), as well as dead-code elimination, removal of duplicate functionality, etc., which usually results in faster and smaller code. An example would be SQLite's amalgamation, which is basically a 'manual' version of LTO: they merge all the source files before compiling, which resulted in a 5-10% speed increase.
PGO (profile-guided optimization) is another type of optimization, one that relies on runtime data to make the best optimization choices. You first compile the program in a 'test' stage where a lot of measuring code is inserted; you then run the program, which gathers a lot of data about the way the code executes. Then you compile the program again in a second stage where all the previously gathered data is used to perform better optimizations; things like cache usage, loop unrolling, and branch prediction all benefit massively from that data, and the code is typically 10-15% faster in the programs I've benchmarked. This optimization is not yet available for LLVM, though.
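Roughly, a two-stage PGO build with GCC looks like this (program and file names are just examples; the relevant flags are -fprofile-generate and -fprofile-use):

    g++ -O2 -fprofile-generate myprog.cpp -o myprog    (stage 1: instrumented build)
    ./myprog typical-workload.dat                      (run it; profile data is written out as .gcda files)
    g++ -O2 -fprofile-use myprog.cpp -o myprog         (stage 2: rebuild using the gathered profile)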
They are updating the download page of llvm. I guess we will finally get llvm-2.7 very soon!
Hmm, false alarm. Anyway, I'm also looking forward to having LLVM working with GCC through DragonEgg.
Edited 2010-04-15 20:01 UTC
LLVM 2.8 is when we’ll really see it shine.
Clang trunk is currently at version 1.5.
When Clang meets its aim and passes all its tests, I won't touch GCC.
How does Clang compare to GCC in terms of optimising the code? I understand that it compiles faster than GCC, but is the code that comes out the other end of better quality? More optimised? I've been following LLVM for quite some time, and I wonder whether we'll see Apple move to LLVM for the next release of Mac OS X, with the whole thing being compiled with it. Xcode 3.2.2 now provides Clang as a front end, so I am wondering whether the growing confidence will result in Clang becoming the default compiler for Mac OS X in the next release.
Edit: The *BSD programmers are also very excited about moving over to LLVM/Clang as their default compiler, so it will be interesting to see how that translates into more mindshare being contributed to LLVM development.
Edited 2010-04-16 01:52 UTC
Most benchmark comparisons between GCC and LLVM I've seen are very old. It did not fare well in the Performance Smackdown over at 'breaking eggs', IIRC, but that was also an older version, and ffmpeg is but one benchmark. My own benchmarks are varied. On some of the programs from the language shootout it did better than GCC (this was LLVM 2.6 vs GCC 4.4) and on some it was slower, but the differences were very slight overall. However, when I compiled larger programs there was often a large difference in favor of GCC, which puzzled me. It will be interesting to compare GCC 4.5 vs LLVM 2.7, since both are progressing nicely. I also hope LLVM will put work towards adding PGO (profile-guided optimization), since that is a very powerful optimization, as I've noticed in programs such as Blender (3D rendering). I also hope LLVM's LTO will become available for targets other than OSX/Darwin.
Having these two excellent free and open source compilers makes me feel spoiled.
Clang and GCC are very close in performance of the compiled code in the tests that I have done. Clang won sometimes, and GCC won sometimes. Neither ever won by a very large margin.
However, the backend matters much more than the front end in optimization. Clang and GCC with Dragonegg should produce very similarly performing code, because they both use LLVM as the backend. When comparing normal GCC to Clang or Dragonegg, GCC’s backend has the advantage of being more mature. LLVM has the advantage of being more modern. In the long run, I think LLVM will beat GCC in performance.
This is true with C and, to some extent, with "easy C++", for example Qt. It's not true with "heavy C++", i.e. templates, metaprogramming, deep chains of inlined function calls, etc., as found in the Boost libraries, in many scientific C++ libraries, and so on.
Edited 2010-04-16 13:20 UTC
In the long run, LLVM will probably get pulled into GCC.
Apple is driving LLVM. GCC will never control the compiler platform again for OS X and others.
If the end product of that is that things like LTO in LLVM are only available on OSX/Darwin, I don't really see that as a plus. Personally, I think Apple is all about control, and since they can't control GCC they will eventually switch entirely to LLVM. I really hope LLVM on platforms other than Apple's will not become a second-rate citizen.
Three cheers for that! Every once in a while, my heart bleeds a little when sinister Gnu forces hinder Apple from delivering consumer value and genuine understanding of where style meets substance.
Did I miss the sarcasm somehow?
Had it not been for GCC, there would not have been OS X. I fail to see how a compiler has anything to do with the "style and feel" of a GUI.
Oh, well…
I don’t know. Did you?
It was obviously sarcasm. I seem to be pretty bad at it however.
Edit: as a clarification – what really troubles me is how easily people stop considering the "big picture" when they are given shiny things. LLVM is/was a highly intriguing project, but now it reminds me of Thom's analogy of Hitler inventing the cure for cancer. Or it might just be the beer talking.
Edited 2010-04-17 21:46 UTC
I seriously doubt that. I also seriously don't want that: having two 'competing' open source compilers is great. Also, since LLVM adopted the same flags as GCC, they can easily be used interchangeably (once LLVM/Clang becomes more mature). Given the extremely complex nature of compilers these days, each will likely always be better than the other at some things, so being able to use the one that works best for _your_ project will be a great benefit.
ClangBSD is now self-hosting:
http://lists.freebsd.org/pipermail/freebsd-current/2010-April/01664…
Awesome, thank you for the heads-up. From what I read on the LLVM mailing lists, there is someone doing regular builds of the ports using LLVM/Clang and addressing hiccups as they appear; maybe in a year's time we'll see LLVM/Clang become the default compiler not just for the base system but for the ports as well.
Regarding binutils: are the LLVM developers working on tools to replace binutils?
Edited 2010-04-17 02:05 UTC
I don't know if it's being done as part of the LLVM project or as separate projects, but there is work underway to replace the entire GCC toolchain (compiler, linker, assembler, etc.).
I hope for an up-to-date Objective-C/C99 implementation with correct debugging. For me, g++ fits my needs. I would also like some of the work in LLVM/GCC to be reused by Open64 (but I don't believe it will happen). A healthy compiler ecosystem could benefit all.
What I expected from GCC is a sane cross-OS atomics library, developed in cooperation with the Boehm GC. In the current situation there are two implementations; I believe both should converge into one that is reused by both projects (and by general kernel development). This is what I expect from the next release of both projects. It is crucial to have this functionality on multi-core systems. I do not aim at a flamewar, but such a library could complement pthreads and even Threading Building Blocks. I personally consider it vital: it could provide a standard interface that could be reused by other compilers or exposed through GCC's syntax. This is my opinion, of course.
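Just to illustrate the kind of interface I mean, here's a tiny sketch layered on the __sync builtins GCC already provides (the wrapper names are made up; a real library would of course cover far more operations):

    /* hypothetical portable wrappers over GCC's __sync builtins */
    static inline long atomic_increment(long *counter)
    {
        /* atomically add 1 and return the new value */
        return __sync_add_and_fetch(counter, 1);
    }

    static inline int atomic_compare_exchange(long *p, long expected, long desired)
    {
        /* returns non-zero if *p was equal to expected and has been set to desired */
        return __sync_bool_compare_and_swap(p, expected, desired);
    }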
Edited 2010-04-17 09:02 UTC
Here’s one that caught my attention.
Speaking as a LinuxFromScratch user and sometime developer, "about time" is my reaction to that one… it's crazy that the previous behaviour was to simply ignore missing includes and let the compile fail downstream on an undefined function or type…
And another "about time": there's now an option to link libstdc++ statically! Before this it was nearly impossible:
http://www.google.co.uk/search?q=static+link+libstdc%2B%2B
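If I read the changes page right, it now comes down to a single flag at link time (the program name here is just an example):

    g++ -O2 main.cpp -static-libstdc++ -o myapp
    (libstdc++ is pulled in statically; everything else still links dynamically as before)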
AMD Orochi ?
http://en.wikipedia.org/wiki/List_of_future_AMD_microprocessors#.22…