Linked by Michael on Tue 29th Mar 2011 23:53 UTC
Benchmarks "Version 4.6 of GCC was released over the weekend with a multitude of improvements and version 2.9 of the Low-Level Virtual Machine is due out in early April with its share of improvements. How though do these two leading open-source compilers compare? In this article we are providing benchmarks of GCC 4.5.2, GCC 4.6.0, DragonEgg with LLVM 2.9, and Clang with LLVM 2.9 across five distinct AMD/Intel systems to see how the compiler performance compares."

Indeed, data sets and their characteristics do not matter, it is not like a computer program's main function is to process data or anything.

For example, different schedulings can exhibit vastly diverging performance, especially given the cache and load/store queue architectures of modern out-of-order multicore processors, and the characteristics of the data set are fundamental to properly understanding the behavior being observed and benchmarked. Similarly, it is essential to understand the type of data distribution, for example to tell whether the compiler is scheduling efficiently enough to keep the multiple functional units busy. Etc, etc.
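The point about data-set characteristics can be sketched with a toy microbenchmark (a hypothetical illustration, not from the article): the exact same summation code, run over the same values, can show very different timings depending only on the access pattern of the input indices, because sequential traversal is cache- and prefetcher-friendly while shuffled traversal is not.

```python
import random
import time

N = 1_000_000
data = list(range(N))

# Two index orders over the same data: one sequential, one shuffled.
seq_idx = list(range(N))
rnd_idx = seq_idx[:]
random.shuffle(rnd_idx)

def sum_by_index(values, indices):
    # Identical instruction stream regardless of index order;
    # only the memory access pattern differs.
    total = 0
    for i in indices:
        total += values[i]
    return total

t0 = time.perf_counter()
s1 = sum_by_index(data, seq_idx)
t1 = time.perf_counter()
s2 = sum_by_index(data, rnd_idx)
t2 = time.perf_counter()

# Same answer either way; the timings, however, may diverge,
# which is exactly why a benchmark that omits its data-set
# characteristics is hard to interpret.
assert s1 == s2
print(f"sequential: {t1 - t0:.3f}s  shuffled: {t2 - t1:.3f}s")
```

The magnitude of the gap depends on the machine's memory hierarchy, which is the commenter's point: without knowing the data set, the numbers alone say little about the compiler.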

So yes, understanding the data sets, as well as the instruction mixes, is fundamental to properly benchmarking different compilers and their performance.

I don't consider studies done with such huge omissions to be useful at all; a waste of time, if anything. Although I understand it is a relatively easy way of filling up 8+ pages of content with graphs.
