Linked by kragil on Wed 23rd Jan 2013 20:26 UTC
Google "Native Client enables Chrome to run high-performance apps compiled from your C and C++ code. One of the main goals of Native Client is to be architecture-independent, so that all machines can run NaCl content. Today we're taking another step toward that goal: our Native Client SDK now supports ARM devices, from version 25 and onwards."
RE[10]: Comment by Laurence
by Neolander on Fri 25th Jan 2013 07:06 UTC in reply to "RE[9]: Comment by Laurence"

"Difference is that you can determine when to impose this possible 'random latency bubble', unlike with a GC, where the GC logic decides when to do a memory reclamation sweep.

This means that you can release memory back at a pace that is dictated by yourself rather than by the GC, a pace which would minimize 'latency bubbles'."

As I said, you can decide to run the GC yourself in a non-critical section, after a lot of memory handling has occurred, so that it has no reason to run on its own later. It is no more complicated than calling free() yourself.
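
To make that concrete, here is a rough sketch of the "run the collector yourself at a quiet moment" idea. I'm using the Boehm-Demers-Weiser conservative GC for C purely as an example, since this thread isn't tied to any particular runtime; in Java or .NET the equivalent hint would be System.gc() or GC.Collect().

/* Sketch only: forcing a collection at a moment *you* pick, using the
 * Boehm-Demers-Weiser conservative GC for C (compile with -lgc). */
#include <gc.h>
#include <stdio.h>

static void latency_sensitive_phase(void)
{
    /* Works only on memory allocated earlier: creates no garbage,
     * so the collector has nothing left to do here. */
}

int main(void)
{
    GC_INIT();

    /* Non-critical phase: plenty of allocation churn. */
    for (int i = 0; i < 100000; i++) {
        void *scratch = GC_MALLOC(256);
        (void)scratch;              /* becomes garbage right away */
    }

    GC_gcollect();                  /* sweep now, on our schedule... */

    latency_sensitive_phase();      /* ...not in the middle of this */

    printf("heap size: %lu bytes\n", (unsigned long)GC_get_heap_size());
    return 0;
}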

After that, let us remember that GCs are lazy beasts, which is the very reason why GC'd programs tend to be memory hogs. If you don't give a GC a good reason to run, it won't run at all. So code that does no dynamic memory management, only using preallocated blocks of memory, gives the GC no work to do, and thus shouldn't trigger it.
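
For illustration, this is roughly what "only using preallocated blocks" looks like in practice: a fixed pool carved out once at startup and recycled, so the steady-state code never allocates and never frees. The names and sizes below are made up for the example.

/* Sketch: a fixed pool of buffers set aside at startup and recycled,
 * so the steady-state loop performs no dynamic allocation at all
 * (and therefore gives a GC, or malloc/free, nothing to do). */
#include <stddef.h>
#include <string.h>

#define POOL_SIZE 64
#define BUF_SIZE  1024

static char pool[POOL_SIZE][BUF_SIZE];   /* preallocated storage    */
static int  free_list[POOL_SIZE];        /* indices of free buffers */
static int  free_top = POOL_SIZE;

static void pool_init(void)
{
    for (int i = 0; i < POOL_SIZE; i++)
        free_list[i] = i;
}

static char *buf_acquire(void)
{
    return free_top > 0 ? pool[free_list[--free_top]] : NULL;
}

static void buf_release(char *buf)
{
    free_list[free_top++] = (int)((buf - pool[0]) / BUF_SIZE);
}

int main(void)
{
    pool_init();
    char *msg = buf_acquire();           /* no malloc, no GC pressure */
    if (msg) {
        strcpy(msg, "hello");
        buf_release(msg);                /* back into the pool        */
    }
    return 0;
}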

But even if the GC did trigger on its own, typically because it has a policy that dictates that it run periodically or something similar, it would just quickly scan its data structures, notice that no major change has occurred, and stop there. Code like that, when well optimized, should have less overhead than a system call.

Now, if you are dealing with a GC that runs constantly and spends an awful lot of time doing so when nothing has been going on, well... maybe you should get a better runtime before blaming GC technology itself for the situation ;)

"Also, given that you control exactly which memory is to be released back at any specific time, you can limit the non-deterministic impact of the 'free' call."

The beauty of nondeterministic impacts like that of free() is that you have no way of knowing which one will cost you a lot. It depends on the activity of other processes, on the state of the system's memory management structures, on incoming hardware interrupts that must be processed first...

If you try to use many tiny memory blocks and call free() many times so as to reduce the granularity of memory management, all you will achieve is to increase your chances of getting a bad lottery ticket, since memory management overhead does not depend on the size of the memory blocks being manipulated.

Which is why "sane" GCs, and library-based implementations of malloc() too for that matter, tend to allocate large blocks of RAM at once and then hand out chunks of them, so as to reduce the number of system calls involved.
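
A toy version of that idea, just to show the mechanics (this is not how any particular malloc or GC is actually implemented, and all the names are illustrative): grab one large block up front, hand out pieces of it by bumping a pointer, and give the whole thing back in a single call at the end. A real allocator would typically get its big blocks via mmap() or sbrk() rather than malloc(), but the principle is the same.

/* Toy "arena" allocator: one large block obtained up front, then handed
 * out in chunks by bumping a pointer.  Only the initial grab touches the
 * system allocator; everything after that is a few instructions. */
#include <stdlib.h>

typedef struct {
    char  *base;
    size_t used;
    size_t size;
} arena_t;

static int arena_init(arena_t *a, size_t size)
{
    a->base = malloc(size);          /* one big request up front     */
    a->used = 0;
    a->size = size;
    return a->base != NULL;
}

static void *arena_alloc(arena_t *a, size_t n)
{
    n = (n + 15) & ~(size_t)15;      /* keep returned chunks aligned */
    if (a->used + n > a->size)
        return NULL;                 /* arena exhausted              */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_free(arena_t *a)
{
    free(a->base);                   /* one release for everything   */
}

int main(void)
{
    arena_t a;
    if (!arena_init(&a, 1 << 20))    /* 1 MiB arena */
        return 1;

    for (int i = 0; i < 1000; i++) {
        int *x = arena_alloc(&a, sizeof *x);   /* no system call here */
        if (x) *x = i;
    }

    arena_free(&a);
    return 0;
}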
