posted by Andy Roberts on Wed 1st Jun 2005 16:22 UTC
Following on from my previous article, the Java platform has an even greater image problem, one that is more than skin deep. It comes under a two-pronged attack, with complaints typically falling into two camps:

  1. Java is slow
  2. Java is a memory hog

The corollary being that Java on the desktop is infeasible for those without the patience of a saint. Funnily enough, in my experience, many people who have commented to me, "Andy, I don't like Java because it's slow" are the very same people who use Perl/Python/PHP. It suggests to me that not all the criticism is entirely objective! This article aims to discuss some of the criticisms aimed at Java and see whether they are justified.


To be honest, even the most faithful Java evangelist would have trouble believing that Java is light on memory usage (relatively speaking, of course). I don't believe that it is, so I'm not going to pretend. Object-oriented languages typically have a slightly larger memory footprint, as you have to carry around information about all the objects currently initialised. A large factor is simply the JVM itself: a simple "Hello world!" class of, say, a kilobyte will still require the entire JVM, and the several megabytes that entails. Yet you must think about what you get for your money, so to speak. The JVM and the Java runtime classes are feature-packed.

However, one man's feature is another man's bloat, and I suppose this is where a lot of the debate stems from. When designing a language, you either go minimalist and rely on users to implement all the functionality they need, or you go the opposite way and provide an extremely rich language in which developers can rapidly produce software. As a Java developer, I don't need to manage my memory programmatically: I can leave it all to the garbage collector. This is great for me, but not necessarily the most efficient way to manage memory. If you want to partake in old-school manual memory management, you can do a certain amount (such as nulling your references and then explicitly requesting a garbage collection), but nothing as low-level as what C programmers would be accustomed to. You can also reduce the overhead by using third-party libraries designed for efficiency, such as fastutil or Javolution, which replace commonly used classes like the collections framework.
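That "certain amount" of housekeeping can be sketched as follows. This is a minimal illustration (the class name and buffer sizes are my own), and it's worth stressing that System.gc() is only a hint, which the JVM is free to ignore:

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryHints {
    public static void main(String[] args) {
        // Build a large temporary structure.
        List<byte[]> buffers = new ArrayList<byte[]>();
        for (int i = 0; i < 10; i++) {
            buffers.add(new byte[1024 * 1024]); // 1 MB each
        }

        // "Old-school" housekeeping: null the only reference so the
        // buffers become unreachable...
        buffers = null;

        // ...then *suggest* a collection. This is merely a hint; the
        // garbage collector decides for itself when to actually run.
        System.gc();

        long usedKb = (Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory()) / 1024;
        System.out.println("Approx. heap in use: " + usedKb + " kB");
    }
}
```

In practice, simply letting references fall out of scope achieves the same thing; explicitly nulling them only matters for long-lived references such as fields.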

It's worth being aware that the JVM by default doesn't use all the available memory on a given system. This means that by default a Java application won't overwhelm your system and cause heavy swapping and other such nastiness. Programs that are memory-intensive by nature, like scientific applications, can in fact be limited by the JVM's conservative usage, and so many developers launch their Java apps with special JVM flags that permit Java to utilise additional memory.
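You can see the ceiling the JVM has picked via the Runtime class. A small sketch (the class name is mine, and the -Xms/-Xmx figures in the comment are just example values):

```java
public class HeapLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() is the ceiling the JVM will grow the heap to;
        // by default this is a fraction of physical RAM, not all of it.
        System.out.println("Max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        // totalMemory() is what the JVM has claimed from the OS so far.
        System.out.println("Total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        // To raise the ceiling for memory-hungry applications, launch with,
        // for example:  java -Xms256m -Xmx1024m HeapLimits
        // (-Xms sets the initial heap size, -Xmx the maximum.)
    }
}
```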


To be fair, before I began looking into this, I had never thought that Java was the fastest language out there. I suppose, however, that because I'm still using Java, I believe it's fast enough. Assuming that C++ is the holy grail in terms of performance because it's so fast, then even if Java could achieve only half its speed, it's still fast! People get so focussed on the milliseconds that they forget that if a task takes 0.01s in C++ and 0.02s in Java, then, "oh no", it runs at half the speed!

Yet, since mulling over this topic, I have found, interestingly, that Java has made great gains in overall performance that can in fact put it on a par with C++, if not a little quicker. There are various benchmarks that have reported Java algorithms running quicker than C++ equivalents (Java Pulling Ahead, Java Faster than C++, the FreeTTS case study). You will of course find many benchmarks showing the converse. What this demonstrates is that the two are at least comparable, which is enough for me to conclude that Java is fast, and I'll leave it to the benchmark zealots to fight over their nanoseconds.

I think Java is somehow still seen as an interpreted language; in fact, it gets compiled to native code using Just-In-Time (JIT) compilation. It is also a myth that JIT code is slower than pre-compiled code. The only difference is that bytecode gets JITed once it's required (i.e., the first time a method is called, and the time this takes is negligible); it is then cached for subsequent calls. JIT code can benefit from all the same optimisations that pre-compiled code gets, plus some more (from Lewis and Neumann, 2004):

  • The compiler knows what processor it is running on, and can generate code specifically for that processor. It knows whether (for example) the processor is a PIII or P4, if SSE2 is present, and how big the caches are. A pre-compiler on the other hand has to target the least-common-denominator processor, at least in the case of commercial software.
  • Because the compiler knows which classes are actually loaded and being called, it knows which methods can be de-virtualized and inlined. (Remarkably, modern Java compilers also know how to "uncompile" inlined calls in the case where an overriding method is loaded after the JIT compilation happens.)
  • A dynamic compiler may also get the branch prediction hints right more often than a static compiler.
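The de-virtualization point can be illustrated with a sketch (the class names are my own; the comments describe what a JIT *may* do, since you cannot observe its decisions directly from plain Java source):

```java
// Shape.area() is a virtual call. If Circle is the only subclass the
// JIT has ever seen loaded, it can treat the call site as monomorphic:
// call Circle.area() directly, or inline it outright. Should a Square
// class be loaded later, the JIT can "uncompile" and re-optimise. A
// static C++ compiler must decide once, ahead of time.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

public class Devirt {
    static double total(Shape[] shapes) {
        double sum = 0;
        for (int i = 0; i < shapes.length; i++) {
            sum += shapes[i].area(); // candidate for de-virtualization
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Circle(2) };
        System.out.println(total(shapes)); // PI * (1 + 4)
    }
}
```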

Even if it were slower, once again, you have to think about the value for money you get per clock cycle with Java. Many of the core classes are thread-safe as standard, for example; and let's not forget free garbage collection. Also, that bit of code will run fine on all supported platforms without any additional effort: all the OS abstraction is done for you. That's pretty incredible when you think about what it takes to pull off an abstraction layer that large.
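As a small example of that built-in thread safety (the class name is mine): Vector is synchronized out of the box, and Collections.synchronizedList wraps any List with the same guarantee, so two threads can append concurrently without losing updates:

```java
import java.util.Collections;
import java.util.List;
import java.util.Vector;

public class ThreadSafety {
    public static void main(String[] args) throws InterruptedException {
        // Vector is already synchronized; the wrapper makes the
        // guarantee explicit and works for any List implementation.
        final List<Integer> shared =
                Collections.synchronizedList(new Vector<Integer>());

        Runnable adder = new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    shared.add(i); // safe to call from multiple threads
                }
            }
        };

        Thread a = new Thread(adder);
        Thread b = new Thread(adder);
        a.start(); b.start();
        a.join(); b.join();

        // Both threads have finished, so the count is deterministic.
        System.out.println("Size: " + shared.size()); // prints "Size: 2000"
    }
}
```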

