Linked by Thom Holwerda on Sat 8th Oct 2005 18:40 UTC, submitted by anonymous
Java Programmers agonize over whether to allocate on the stack or on the heap. Some people think garbage collection will never be as efficient as direct memory management, and others feel it is easier to clean up a mess in one big batch than to pick up individual pieces of dust throughout the day. This article pokes some holes in the oft-repeated performance myth of slow allocation in JVMs.
RE: Question on Java memory usage
by zlynx on Sat 8th Oct 2005 23:13 UTC in reply to "Question on Java memory usage"
zlynx Member since:
2005-07-20

I find that Java apps use a lot of memory simply because they can. There's a JVM parameter, -Xmx, that caps the maximum heap size.

The application will run slower, because it hits the memory limit and runs garbage collection more often, but the slowdown isn't really that bad.
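
For example (the class name here is just a stand-in), capping the heap at 256 MB looks like:

    java -Xmx256m MyApp

The -Xms flag sets the initial heap size the same way.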

The big problem is an application that needs a lot of memory for one operation but not all the time. If the JVM's limit is too low, the program dies with an OutOfMemoryError; but with a high limit, the JVM tends to hold on to all that memory even when nothing needs it anymore.

Which leads to another Java problem: an application with lots of allocated but untouched RAM gets swapped out to disk, and when the GC does run, it has to page all of that back in just to scan it, even though nothing is actually using it.
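
If you want to watch how much heap the JVM has reserved versus what's actually in use, the Runtime API will tell you; a minimal sketch (the class name is made up):

    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // totalMemory() is what the JVM has reserved from the OS;
            // freeMemory() is the unused part of that reservation.
            System.out.println("max:   " + rt.maxMemory() / mb + " MB");
            System.out.println("total: " + rt.totalMemory() / mb + " MB");
            System.out.println("used:  " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
        }
    }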

Reply Parent Score: 1

Simba Member since:
2005-10-08

"Then, the program will crash if the JVM memory is too limited, but it'll also leave all that memory used when it isn't actually needed."

It is possible to force the garbage collector to run programmatically with System.gc() if you want to free up resources at some point.

Reply Parent Score: 1

System.gc() only suggests that the VM run a collection; nothing more is guaranteed.
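
Right; per the Javadoc it's only a "best effort". A minimal sketch of the usual pattern (the class name and buffer size are just for illustration):

    // Make some garbage, then hint at a collection.
    public class GcHint {
        public static void main(String[] args) {
            byte[] scratch = new byte[64 * 1024 * 1024]; // temporary 64 MB buffer
            scratch = null;   // drop the only reference so the buffer becomes garbage
            System.gc();      // only a suggestion; the JVM is free to ignore it
        }
    }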

Reply Parent Score: 0

offtangent Member since:
2005-07-06

We've seen this happen with our simulations of cellular networks, where we create a lot of objects. We found that heap size was one of the limiting factors: increasing it resulted in performance improvements.

For us, setting this to about 1 GB on a machine with 2 GB of RAM led to less aggressive garbage collection, leaving more resources for the simulation itself. It did not use all that memory in every run, though; things went smoothly until we pushed the simulation to a very high node count, at which point it ran out of heap space and crashed.
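
Concretely, the invocation would have looked something like this (the class name is hypothetical; -Xms and -Xmx are the standard initial/maximum heap flags, and the exact values are just illustrative):

    java -Xms512m -Xmx1024m CellularNetworkSim

Setting -Xms close to -Xmx also avoids the cost of growing the heap repeatedly mid-run.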

Reply Parent Score: 1