Have you had your Java Heap looked at lately? Well, maybe you should. It could be oozing something nasty. Why walk around wondering if your Heap has a leak? Use the free HeapAnalyzer graphical tool and check yourself before you wreck yourself.
must be sleepy.. 😉
Java , Maybe
two fixes.. sorry to be anal, but eh.. I’m bored 😀
In C++ I can make memory leaks by hand without a fancy garbage collector =)
The link, which I didn’t follow, was to request a license..?
From the link you didn’t follow:
Your input from filling out this quick survey will contribute towards determining if a commercial license can be offered for this technology.
So free for now at least it seems…
It failed to analyze the heap of my Sun 1.4.2 JVM.
Will try with 1.5…
And of course, due to the geniuses at Sun, if you hit the maximum heap the application just crashes. Which makes Java inherently unstable: you have to tune your heap size until you find a setting where your app doesn’t crash. Of course, then load goes up beyond expectations and CRASH.
Interestingly, .net (and every other platform on the planet) doesn’t have this problem.
Read the book, don’t wait for the movie.
http://www.amazon.com/exec/obidos/tg/detail/-/0130952494/102-150832…
Sounds like chair-to-computer interface error to me. If you don’t set a maximum heap and design your apps properly, it’s not a problem. If your app is very aggressive about claiming more memory from the heap, there are JVM parameters to fine-tune how it does so. The only time I see setting a maximum heap as necessary is if you are searching for optimal performance and don’t want the Java heap to exceed the amount of physical memory available to the host system; exceeding it results in the Java app being swapped and performance degradation. And by your “.net… and every other platform” comment, it’s clear you have limited experience with all of the above.
>it’s clear you have limited experience with all of the above.
well it’s clear you have little experience in java as well.
I work with application servers all day, and anyone in my position will tell you that if you have a heavy-load server running on the default 64MB heap, then the whole thing will soon come crashing down with OutOfMemoryErrors. The space you give the heap really has nothing to do with the memory management that the OS does for you.
I don’t see why you have to tune that parameter when you don’t care. By default the JVM should allow the applications to allocate as much memory as they want, but it doesn’t.
For example, let’s say you are programming a Photoshop-like application: if you open a picture, there is no way you can say how much memory your application is going to use before running it; the picture file may be as small as a couple of bytes or it could be gigabytes of data. Either way, there is no way you can “fine tune” the max heap usage.
I do agree with Phil about Java being unstable because of that problem.
Dude. The JVM defaults to 64 megs. You are writing tiny apps or something… the enterprise app I work on uses up to 2 gigs of heap depending on the complexity of the customer installation. The JVM *SHOULD* work the way you describe – allocate as needed – that’s my point. *BUT IT DOESN’T*!!
If you don’t tweak max heap size and the app goes beyond 64 megs then you will get an OutOfMemoryError and the application will become unstable and act very strangely; various threads that try to go beyond the max heap size will die, and the entire app often crashes.
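If you want to watch it happen, here’s a throwaway sketch (HeapCrashDemo is a made-up name). Run it with no -Xmx flag and it falls over right around the default cap; run it with, say, “java -Xms64m -Xmx256m HeapCrashDemo” and it gets further:

import java.util.ArrayList;
import java.util.List;

public class HeapCrashDemo {
    public static void main(String[] args) {
        List hog = new ArrayList(); // hold references so nothing can be GC'd
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // grab 1MB at a time
            }
        } catch (OutOfMemoryError oom) {
            int mb = hog.size();
            hog = null; // release the hoard so the println below can allocate
            System.err.println("Heap capped out after roughly " + mb + " MB");
        }
    }
}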
I really had no idea Java had a maximum heap size that, if exceeded, would crash. That just seems weird to me. I guess I’ve been lucky enough to not have had to write an application needing more than 64MB.
Anyone know what they’re doing ‘under the hood’ on the average VM that makes this limit necessary? Why can’t they just allocate more memory if needed? I mean, they’re willing to stop the world to do GC; why could they not stop the world to do what’s needed to allocate more memory (maybe copy the current heap to a larger one)?
Is there a reason the JVM doesn’t allocate as much as needed? I mean, does this have a plus side? Or is it just a mistake by Sun? And what about other JVMs, e.g. IBM’s?
Things are different in Java 5: the default is no longer 64MB, AFAIK. Plus, with Java 5 there are tools for examining the virtual machine and heap.
“Interestingly, .net (and every other platform on the planet) doesn’t have this problem.”
Java will and should allocate more memory; out-of-memory errors should only get thrown when Java has either hit its max heap size, or the OS has told Java there is no more memory.
The first of these problems is due to poor user configuration; the second would happen in any environment. I have not programmed extensively in .net, so I am unaware of any bugs it has.
Thus, if your above statement is valid, it would lead me to conclude that the JVM you are using has a bug in it. As a student of Computer Science who has run applications requiring several gigs of RAM (I know this isn’t a pissing contest, just stating some experience), I have not had this problem, though I am aware of several bugs in the memory management of previous JVMs, all of which I thought had been fixed.
It should be noted that Java has a formal memory model which, provided it has been correctly implemented, should give it a massive advantage over other environments, especially those requiring programmer memory management, e.g. C.
Can you please answer the following questions:
Is this a bug in an older jvm?
Can you replicate it on version 1.5 ? (I refuse the 5.0 naming scheme)
Under what circumstances does it occur?
mullet
“…If you don’t tweak max heap size and the app goes beyond 64 megs then you will get an OutOfMemoryError…”
I’d hook up a profiler, take a look at what is going on, and then refactor your design to eliminate the problem(s). My current Java app (JVM 1.4) will reach beyond 1GB of heap (Windows platform, FWIW) with no problems and no max heap size set. You may want to look at setting your initial heap size higher and, when profiling, take a close look at the heap allocation graph(s). Also, try to keep your objects in the eden memory space; the GC there is more efficient and you’ll be less likely to run out of memory. Having seen multiple enterprise apps run well over a 1GB heap with no max heap set, I suspect there is a problem somewhere in the program’s design/implementation.
That’s about as much help as I can offer for free based on the information you provided.
“well it’s clear you have little experience in java as well.
I work with application servers all day, and anyone in my position will tell you that if you have a heavy-load server running on the default 64MB heap, then the whole thing will soon come crashing down with OutOfMemoryErrors”
He didn’t mention using application servers, and being WebSphere certified and having worked with JBoss the past few years, I’m pretty sure I have more experience in this area than you claim to have (if you’re talking WebLogic, then fine, you have more experience with WL than I do; here’s a cookie). I’ll concede that the default/min heap size may not be enough for app servers, but that’s part of the enterprise architect’s job: determine the load requirements and the memory/horsepower to meet those requirements. If you know your load will easily push higher than 64MB, then you should set the default heap higher. If you are running out of memory because you didn’t set a max heap, then you most likely have a problem in your design with excessive object creation and phantom references that are keeping objects from being GC’d. The only time I’ve ever capped the max heap size is to avoid the JVM claiming more memory than there is physical memory (prevents swapping of the JVM). Again, you need to profile to see what your application is really doing. Just setting the heap higher doesn’t solve the problem; it merely masks it. What happens when your heavily loaded app server gets more load? Do you just scale it vertically and horizontally and hope for the best? I’ve seen some poorly written J2EE apps (and “regular” Java apps – hell, I’ve written some of them) that can be refactored to perform twice as fast and be much more efficient with their memory allocations.
Yes, the reason is that some apps will try to claim memory (malloc, if you will) before the heap size has a chance to increase. In that case… out of memory. This can happen for a number of reasons (search IBM developerWorks for “jvm profiling gc” for more info). So the problem the original poster was alluding to was that if you exceed the maximum heap you run out of memory; well, of course you will… if you set a maximum heap and try to get more memory beyond the heap cap, the JVM is telling you that you’re SOL. Hence the reason I’ve said don’t cap your max heap (-Xmxblah). If you are running out of memory on startup/execution, increase your initial heap, BUT I believe it’s also important to find out why you may be getting OOME with no max heap set. It can be (but doesn’t have to be) a sign of a design flaw/reference leaks. The JVMs aren’t perfect either, and how they behave will vary by platform.
HTHs some people out (obviously not for those ppl that claim to have so much more experience)
Umm, if your application needs more than 2 GB of memory, is your OS able to handle such a request? It’s possible that you’ve simply reached your OS’s per-application memory limit.
Yep. Java has a max heap size of ~1.8GB on 32-bit architectures. That is one reason why I’m excited for Solaris 10 + a 64-bit JVM on Opteron.
As for the stability discussion, my experience has been that JVMs are very unstable when the heap is nearly full. As the GC load increases, it appears to “hit” many JVM bugs, particularly with the server VM. This is my experience with 1.4.X and 1.3.1, at any rate. No memory allocation errors, just big ol’ core dumps.
It has taken us a great deal of time to find a stable VM configuration. It took a great deal of trial & error fiddling.
Frottage, I’m a bit confused. Suppose:
- no maximum heap size is set
- your application’s startup memory usage is < the initial heap size
Is it possible for an out-of-memory error to occur even though the OS has memory available?
From what I read in your last post, this is possible if the heap has not had a chance to resize. As an example:
at time X: free heap = 12 MB
at time X+1: request an object of size 16 MB – out-of-memory error thrown
heap expansion planned at time X+5
Is this correct? I guess the question I would ask is: why would the JVM throw an out-of-memory error without first checking whether the heap can be expanded? Could it not block the thread requesting the memory until the heap expansion time is reached? Blocking is not uncommon in Java; the GC does it, why not on the allocation side?
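If I get a chance I’ll test it with something like this (ExpansionTest is a made-up name). My understanding is that Sun’s collector does try to grow the heap before giving up, so run with e.g. “java -Xms16m -Xmx128m ExpansionTest” and I’d expect the big request to succeed rather than throw:

public class ExpansionTest {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("before: total=" + rt.totalMemory() + " free=" + rt.freeMemory());
        // bigger than the 16MB initial heap, well under the 128MB max
        byte[] big = new byte[32 * 1024 * 1024];
        System.out.println("after: total=" + rt.totalMemory() + " (held " + big.length + " bytes)");
    }
}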
What JVM are you using? And what OS? We’ve observed the behavior I describe on Sun’s JVM on Solaris, Linux, and Windows.
And some apps just use lots of memory, so the suggestion that it’s bad design is BS. We are sampling tens of thousands (for some customers, hundreds of thousands) of metrics constantly, doing statistical analysis on them in real time, persisting the data to a database, presenting the data to users in a UI, etc. That’s going to take a lot of memory no matter what approach we take. And changes in load change the memory footprint… customers might add metrics to be monitored at any time. So it’s not that easy to get the JVM max heap set correctly. The default 64 meg *NEVER* does it. If you don’t set the max heap size and you go past 64, CRASH.
This caught me by surprise too. I assumed (the brother of all f#$kups) that it would continue to allocate more heap until the machine ran out. I was wrong.
This is with Sun’s JVM 1.4.2_4 (so not dark ages technology). I’m yet to look at 1.5.
Currently it’s using about 150meg of heap. It used to thrash around with GC before it died at the default 64 Meg.
See:
Runtime.getRuntime().maxMemory(); // max heap size
Runtime.getRuntime().totalMemory(); // current heap size
Runtime.getRuntime().freeMemory(); // free space on heap.
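For instance, a throwaway class (name made up) that dumps all three in megabytes:

public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max:   " + (rt.maxMemory() / mb) + " MB");
        System.out.println("total: " + (rt.totalMemory() / mb) + " MB");
        System.out.println("free:  " + (rt.freeMemory() / mb) + " MB");
    }
}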
I don’t understand why the maximum heap size isn’t based on the amount of real physical memory available. It seems dumb to have it limited by default (I could understand having the option).
When you run out of heap it seems to just thrash for a fair while (the application just starts throwing a hail of exceptions), but it can recover; it just takes forever.
It sounds like you folks aren’t programming defensively and doing some graceful degradation when you get an OutOfMemoryError. It is catchable (an Error rather than an Exception, but still a Throwable), and you ought to be doing something with it.
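For what it’s worth, a rough sketch of that defensive pattern (all names invented; whether the fallback helps depends on whether anything can actually be released):

public class DefensiveAllocation {
    public static void main(String[] args) {
        byte[] buffer;
        try {
            buffer = new byte[512 * 1024 * 1024]; // attempt the big allocation
        } catch (OutOfMemoryError oom) {
            // OutOfMemoryError is an Error, not an Exception, but it is catchable
            System.err.println("Big buffer failed, falling back: " + oom);
            buffer = new byte[16 * 1024 * 1024]; // degrade to something smaller
        }
        System.out.println("Proceeding with a " + buffer.length + "-byte buffer");
    }
}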
Yep, when your program catches that error, it should call System.exit() and run the .net version which does not have such a silly built-in limitation.
Ummm. OK, say you have 300 threads running, each of which could be the one that runs into the OutOfMemoryError. So it happens; what are you supposed to do? The only thing you can do is launch an email to yourself saying to close the app, adjust the heap, and restart. Not exactly a solution. Running in a crippled mode isn’t useful.
“The default 64 meg *NEVER* does it. If you don’t set the max heap size and you go past 64, CRASH.”
What is your minimum/initial heap set to? If your app(s) are very memory-intensive from the start, then I would increase the initial heap to at least that amount (I use initial memory required * 1.5 for my minimum heap). As for the “bad design” remark, I didn’t mean it to sound so harsh as I re-read it. I should have said something more along the lines of “not as good as it could be for your current situation”… give the same problem to 10 designers and you’re bound to get at least 11 solutions… that’s what I meant… there is always another way to do something.
If setting the maximum heap is the only way to get your app(s) to run with some sanity, then that is the solution for your problem for your given design. I’m just “suggesting” that I too have seen/written apps with similar problems, and refactoring the design in areas that showed to be “weak” with regard to object creation/dead references, etc., usually solved the problem. Now, the other thing to consider is the app server itself. I don’t believe you mentioned it in your initial post, but I got the feeling based on other posts that you are using an app server. I have seen very many issues with app servers (most of them are just Java code as well); WebSphere 3.5-4 were riddled with problems. If you suspect a problem with the app server code, open a problem ticket with them… with hundreds of thousands of customers relying on your app, you should have no problem getting the app server vendor’s attention.
I now see the complaints about the Sun JVM, which even at Java 5 defaults the max heap to 64MB, and the need to use -Xmx<num> in order to avoid OOME. I am using a different vendor’s JVM (has a big “I” in it) which computes the max heap based on the amount of memory you have on your machine. So that does present a problem in the case you identified, and I now see why everyone is pulling their hair out… sorry for the confusion, and I now *agree* that the Sun JVM does suck in that regard. Even running the JVM with the -server option caps the heap at 64MB. Well, one more reason to open source the entire Java baseline.
BTW, you can get the Java JVM source code; if you are really interested in memory tweaks, that code has some useful nuggets.