Linked by Thom Holwerda on Sun 20th Apr 2008 15:43 UTC
General Development is running an opinion piece on the extensive reliance of programmers today on languages like Java and .NET. The author lambastes the performance penalties that are associated with running code inside virtualised environments, like Java's and .NET's. "It increases the compute burden on the CPU because in order to do something that should only require 1 million instructions (note that on modern CPUs 1 million instructions executes in about one two-thousandths (1/2000) of a second) now takes 200 million instructions. Literally. And while 200 million instructions can execute in about 1/10th of a second, it is still that much slower." The author poses an interesting challenge at the end of his piece - a challenge most OSNews readers will have already taken on. Note: many OSNews items now have a "read more" section where the article in question is discussed in more detail.
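The quoted arithmetic can be sanity-checked with a back-of-envelope sketch, assuming a hypothetical 2 GHz CPU retiring one instruction per cycle (the article does not name a clock speed, so that figure is an assumption):

```java
// Back-of-envelope check of the quoted timing claims.
// Assumes a hypothetical 2 GHz CPU retiring 1 instruction per cycle.
public class InstructionTiming {
    public static void main(String[] args) {
        double hz = 2_000_000_000.0;            // 2 GHz (assumption)
        double nativeSecs  = 1_000_000 / hz;    // 1 million instructions
        double managedSecs = 200_000_000 / hz;  // 200 million instructions
        System.out.printf("native:  %.4f s (about 1/2000 s)%n", nativeSecs);
        System.out.printf("managed: %.4f s (about 1/10 s)%n", managedSecs);
    }
}
```

At 2 GHz the numbers line up exactly with the article's claims: 0.0005 s versus 0.1 s, a 200x slowdown in wall-clock terms.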
Thread beginning with comment 310624
RE: He's making the wrong point
by siimo on Sun 20th Apr 2008 23:07 UTC in reply to "He's making the wrong point"

No, you are wrong about the memory usage. Granted, some managed apps may have this problem, but it is not necessarily true in general.

Memory allocation in managed code is handled by the runtime (.NET, Java, etc.). The runtime sometimes allocates memory to an application and does not reclaim it until some other application on the system needs it; if the memory is otherwise unused, the runtime keeps it so that things remain cached.

Yes, some programs have memory leaks, but *don't* go by what your task manager says when checking the memory usage of managed code.
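The distinction siimo is drawing can be observed from inside the JVM itself; a minimal sketch (the exact figures depend on the JVM and its heap settings):

```java
// Sketch: the heap the runtime has *reserved* (totalMemory) is not the
// heap the program is actually *using* (totalMemory - freeMemory).
// Task managers report something closer to the reserved figure, which is
// why they tend to overstate a managed program's memory use.
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long reserved = rt.totalMemory();            // committed to the JVM heap
        long used     = reserved - rt.freeMemory();  // actually in use by objects
        System.out.printf("reserved: %d MB, used: %d MB%n",
                reserved >> 20, used >> 20);
    }
}
```

The gap between the two numbers is the cached, reusable memory the runtime is holding onto on the application's behalf.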

Reply Parent Score: 2

SomeGuy

Even entirely disregarding memory leaks, a managed program will have a much higher memory footprint than a natively compiled program. There are three main reasons for this:

1) A virtual machine with JIT means you *cannot* demand-page the code. In other words, suppose you have 20 processes that each load the same 10 shared libraries, which for the sake of argument are 10 megs in size.

In a VM you'll have 20*10 = 200 megs of JIT'd code, one copy per process, plus the interpreter's memory overhead.

In a natively compiled language, the OS is smart enough to share all the compiled code between the processes, so you only use 10 megs for *all* the processes. (OK, this isn't an entirely accurate view, but it's a good first approximation.)

2) Garbage collectors only run every now and then. This means that memory usage piles up as you create and forget about objects. Sure, you no longer leak memory, but you need room for the garbage waiting to be collected. Research papers [sorry, no links at the moment, but they're around] show that to be effective, a garbage-collected system needs between 1.5 and 5 times the amount of RAM, depending on the exact collection algorithm and the usage patterns of the program. The garbage has to sit somewhere in the time between you finishing with it and the GC kicking in to clean it up.

3) Finally, this isn't an intrinsic property of a VM environment, but the languages that run inside a VM tend to emphasize making lots of little objects. This leads to everything from lots of garbage being created (see point 2) to memory fragmentation causing increased heap size (or increased CPU time, if you have a compacting/copying GC).

So, while VM-based languages certainly have their place, pretending that they're as memory efficient as their natively compiled counterparts with manual memory allocation is rather a crackpot notion at this time.
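Point 2 above can be sketched directly in Java; this is a minimal illustration of "floating garbage", and the printed figures will vary with the JVM, the heap size, and whether a collection happens to run during the loop:

```java
// Sketch: objects that are dead but not yet reclaimed still occupy heap
// space until the collector runs, so a GC'd program needs headroom
// beyond its live data.
public class GarbagePileup {
    static long used(Runtime rt) {
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = used(rt);
        byte[] sink = null;
        for (int i = 0; i < 1_000_000; i++) {
            // Dead on the next iteration, but reclaimed only at the next GC.
            sink = new byte[64];
        }
        System.out.printf("used before: %d KB, used after: %d KB%n",
                before >> 10, used(rt) >> 10);
    }
}
```

Roughly 64 MB of garbage is created here while only 64 bytes are ever live at once; the difference is the headroom the GC literature above is measuring.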

Reply Parent Score: 4

draethus

"In a VM you'll have 20*10 megs for each chunk of JIT'd code, plus the interpreter's memory overhead"

What a load of rubbish. Java has been using class data sharing for a good while now, which shares memory between JVM instances, avoiding that very problem.

Reply Parent Score: 2