Linked by Christopher W. Cowell-Shah on Thu 8th Jan 2004 19:33 UTC
This article discusses a small-scale benchmark test run on nine modern computer languages or variants: Java 1.3.1, Java 1.4.2, C compiled with gcc 3.3.1, Python 2.3.2, Python compiled with Psyco 1.1.1, and the four languages supported by Microsoft's Visual Studio .NET 2003 development environment: Visual Basic, Visual C#, Visual C++, and Visual J#. The benchmark tests arithmetic and trigonometric functions using a variety of data types, and also tests simple file I/O. All tests took place on a Pentium 4-based computer running Windows XP. Update: Delphi version of the benchmark here.
For those who care...
by Bascule on Fri 9th Jan 2004 01:14 UTC

I've compiled my results into an easier-to-interpret format, and drawn some conclusions different from those I posted here:

http://fails.org/benchmarks.html

In reply to MikeDreamingofabetterDay...

"My point is, Java code profiling on the fly is the better solution."

The primary drawback of Java's run-time profiling is that all optimizations are discarded when the application exits. Profiling mainly helps code that spends most of its execution time in a small number of hot spots within the executable. Consequently, large applications that do a lot of one-time startup processing take an additional performance hit from run-time optimization: the startup code is only touched once, yet the runtime still spends cycles trying to work out how best to optimize it. Eclipse and NetBeans certainly come to mind; their start-up times are an order of magnitude worse than those of any other IDE I've used.

Profile-guided optimization, on the other hand, is a one-time process whose optimizations are baked permanently into the binary, so no run-time performance cost is incurred.
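
For concreteness, here is roughly what that one-time step looks like with gcc (the compiler benchmarked in the article). This is only a minimal sketch: the toy hot loop below is invented for illustration, and the flags are gcc 3.x's documented instrument-then-recompile pair (newer gcc releases spell them -fprofile-generate and -fprofile-use).

/* pgo_demo.c -- a toy hot loop to give the profiler something to observe.
 *
 * Sketch of the gcc PGO workflow:
 *
 *   gcc -O2 -fprofile-arcs pgo_demo.c -o pgo_demo            (instrumented build)
 *   ./pgo_demo                                               (running it writes .da profile data)
 *   gcc -O2 -fbranch-probabilities pgo_demo.c -o pgo_demo    (final optimized rebuild)
 *
 * The final build bakes the observed branch frequencies into the binary,
 * so nothing has to be re-profiled at run time.
 */
#include <stdio.h>

int main(void)
{
    double acc = 0.0;
    long i;

    /* A heavily biased branch: exactly the kind of pattern that
       recorded profile data lets the compiler lay out favorably. */
    for (i = 0; i < 50000000L; i++) {
        if (i % 1000 != 0)
            acc += 1.0;
        else
            acc -= 1.0;
    }
    printf("%f\n", acc);
    return 0;
}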

"Mostly because 99% of the programmers out there will never get the chance to profile their code, if they even know how to do it."

Profiling should be (and often is) an automated part of the unit-testing process. Intel's icc can take profiles from a series of different test runs (a separate .dyn file is generated for each run of the instrumented executable) and combine the collective results to determine the best way to optimize a given module when the release build is performed.
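
As a sketch of what that multi-run flow looks like (the flag spellings and the profmerge step are assumptions drawn from Intel's documentation of that era, so check your compiler's manual; the harness program and input names are made-up stand-ins):

/* harness.c -- trivial stand-in for a test harness. The branch taken
 * depends on the input, so different test runs contribute different
 * profile data.
 *
 * Sketch of the multi-run icc workflow:
 *
 *   icc -prof_gen harness.c -o harness      (instrumented build)
 *   ./harness small.dat                     (each run drops its own .dyn file)
 *   ./harness large.dat
 *   ./harness worst_case.dat
 *   profmerge                               (folds the .dyn files together)
 *   icc -prof_use harness.c -o harness      (release build guided by the merged profile)
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("processing %s\n", argv[1]);
    else
        printf("no input given\n");
    return 0;
}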

I've never used Microsoft Visual C++ on a large project, but your woes there are not really pertinent to the use of profile-guided optimization.