If you’re part of the current blogging craze, then you’ve likely heard of Blog-City, a blogging site owned and operated by Blog-City Ltd. When some unexpected performance issues cropped up, Java performance experts Jack Shirazi and Kirk Pepperdine (from JavaPerformanceTuning.com) were asked to assist in a technical tuning of Blog-City. Here’s what they found, and what they did to fix the problems.
Tuning Java garbage collection
Submitted by David 2004-07-08 Java 8 Comments
It’s funny that their internal cache was fingered as the source of the memory leak. Remember, flush your cache occasionally! Or better yet, turn it off, then turn it back on, to see whether it really does help performance.
I’ll bet they either (1) forgot to remove the old entries from the cache when new, updated entries were inserted into the DB, or (2) had multiple containers holding cached items and only nulled out entries in one of them. The latter is more likely, since there’s usually a hashtable (quick access) and a list (iteration) for holding cache entries.
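A minimal sketch of that second failure mode (the class and names here are hypothetical, not from the article): a cache backed by both a map and a list, where removal only clears the map, so the list keeps every “removed” entry strongly reachable.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical cache with two internal containers: a map for fast
// lookup and a list for iteration. The bug: remove() only touches
// the map, so the list still holds a strong reference and the entry
// can never be garbage collected.
class LeakyCache {
    private final Map<String, Object> byKey = new HashMap<>();
    private final List<Object> inOrder = new ArrayList<>();

    void put(String key, Object value) {
        byKey.put(key, value);
        inOrder.add(value);
    }

    void remove(String key) {
        byKey.remove(key);
        // BUG: forgot inOrder.remove(...) -- the "removed" entry
        // leaks for the lifetime of the cache.
    }

    int liveReferences() {
        return inOrder.size(); // exposes the leak
    }
}
```

After a put() and a remove() of the same key, liveReferences() still reports 1 — exactly the kind of slow leak a heap profiler would flag.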
Anyone from the project want to pipe up?
Great article! Very informative, though I am more of a scripting-language programmer than a Java programmer. Your conclusion is no less relevant, however: that HD swapping and garbage collecting don’t get along without tuning the garbage collector or VM. That’s fine if you have a specific application to tune up and if your VM is tunable like that.
I suspect that a simpler solution would be to move the swap partition to a RAM disk. Of course this will effectively reduce the RAM available to your programs. But if your program or any other program fits in 512 MB total RAM and swap, this should eliminate the 70 second full garbage collection times that result from swapping. No more swap storms to the HD that put your machine into a coma.
I suppose it is possible to eliminate the swap partition completely, but I think it is still necessary for reclaiming unused pages. I would guess a ratio of 384 MB system RAM to a 128 MB swap partition on a RAM disk would be a good place to start. Your applications will need to swap sooner, but the swapping won’t kill your performance. Plus, it’s an easier and more general solution than tweaking the VM.
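On Linux, the suggestion above might look roughly like this — a config sketch, assuming a kernel with RAM-disk support at /dev/ram0 and an existing on-disk swap partition at /dev/hda2 (both device names are illustrative):

```shell
# Format a RAM disk as swap and prefer it over the disk partition.
mkswap /dev/ram0
swapon -p 10 /dev/ram0   # higher priority: used before disk swap
swapoff /dev/hda2        # optionally retire the on-disk swap entirely
```

As the commenter notes, this trades available RAM for predictable paging cost: pages parked in a RAM-disk swap still occupy physical memory.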
See? GC is bad. To those who blame C’s mallocs and don’t want to spend time optimising C code: look at the effort spent on Java tuning in this article.
A good article which helps explain why Java apps do need some tuning after all. There is, however, nothing to beat tight and efficient coding in apps. I try to ensure that I do not have to rely on default GC scenarios in my apps to make them work properly, by managing memory myself whenever possible. One side effect is that the performance of these apps is more consistent (if sometimes slightly slower) than that of apps which rely on the Java VM to do it automatically.
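One common way to “do it yourself” in Java is object pooling: recycle instances rather than letting the collector churn through short-lived garbage. A minimal sketch — the Pool class and its names are hypothetical, not from the article:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical fixed-purpose object pool: acquire() reuses a
// recycled instance when one is available, so steady-state
// allocation -- and therefore GC pressure -- drops to near zero.
class Pool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    Pool(Supplier<T> factory) { this.factory = factory; }

    T acquire() {
        T t = free.poll();
        return (t != null) ? t : factory.get();
    }

    void release(T t) { free.push(t); } // caller must reset state first
}
```

The trade-off matches the comment: performance becomes more predictable (no collection spikes from pooled objects), at the cost of doing the bookkeeping — and the state-resetting — by hand.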
The comments about the lack of physical memory remind me of something an old salesman colleague of mine said, sometime around 1987, when doing a sales pitch to a large Telco in the UK:
“There is nothing better to improve the performance of a Virtual Machine Operating System than Physical Memory”
(Dick Clements where are you now?)
The customer was stalling over the purchase of an extra 16 MB of RAM for a big clustered VAX system. How times have changed…
When will we see an article about how to write a decent desktop Java application that doesn’t take 10 minutes to load on an average machine and doesn’t crawl while you click on a button or menu???
If you want to write fast applications then you need to be a good programmer, no matter what language you are writing in. As to ucedac’s comment, I find that it is possible to get some good performance out of systems running Java if you program with speed in mind — e.g., why use the Vector class in non-concurrent applications when ArrayList is faster? This reminds me of when someone made one of the Tcl programs in the great programming language shootout 10x faster with a few minor alterations. Stop worrying about whether C is 10x or 30x faster; look at your algorithms! If you can reduce a problem from O(n²) to O(n) then do it! These things will make your programs much more impressive to the user than arguing over a few % here or there.
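A small illustration of the O(n²)-to-O(n) point (my own example, not from the article): finding a duplicate in a list with nested loops versus a single pass over a HashSet.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class Dupes {
    // O(n^2): compares every pair of elements.
    static boolean hasDupeSlow(List<String> xs) {
        for (int i = 0; i < xs.size(); i++)
            for (int j = i + 1; j < xs.size(); j++)
                if (xs.get(i).equals(xs.get(j))) return true;
        return false;
    }

    // O(n): one pass; Set.add() returns false on a repeated element.
    static boolean hasDupeFast(List<String> xs) {
        Set<String> seen = new HashSet<>();
        for (String x : xs)
            if (!seen.add(x)) return true;
        return false;
    }
}
```

Both give the same answer; only the second stays usable as n grows — which matters far more to the user than a Vector-versus-ArrayList micro-tweak.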
“But in the real world, related objects are rarely (if ever) clustered.”
Unless, of course, the application is built on a framework _other_ than Java in which you can make sure that they _are_ clustered, where necessary. Alternatively, you could allocate memory in larger chunks to start with and free the whole chunk regardless of how many dead objects it contains…
Actually, that sort of thing could make an interesting article, in the way that ‘we changed the survivor ratio and hey, in this case the effect was positive’ doesn’t.
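The chunk-allocation idea can even be approximated inside Java: carve records out of one large ByteBuffer and drop the whole slab at once, so the collector only ever sees a single object no matter how many records were allocated. A rough sketch, assuming a hypothetical Slab class:

```java
import java.nio.ByteBuffer;

// Hypothetical slab allocator: all records live inside one big
// buffer. "Freeing" them is a single reset() -- no per-record
// garbage is ever created for the collector to trace.
class Slab {
    private final ByteBuffer buf;

    Slab(int capacity) { buf = ByteBuffer.allocate(capacity); }

    // Hand out a size-byte view starting at the current position.
    ByteBuffer carve(int size) {
        ByteBuffer slice = buf.slice();
        slice.limit(size);
        buf.position(buf.position() + size);
        return slice;
    }

    int used() { return buf.position(); }

    void reset() { buf.clear(); } // frees every record at once
}
```

This mirrors the comment’s point: the records are clustered by construction, and the whole chunk is reclaimed regardless of how many dead records it contains.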
It’s a good thing the JVM’s garbage collection removes the need to worry about memory leaks and tweak the… oh, wait.