Linked by kragil on Wed 23rd Jan 2013 20:26 UTC
Google "Native Client enables Chrome to run high-performance apps compiled from your C and C++ code. One of the main goals of Native Client is to be architecture-independent, so that all machines can run NaCl content. Today we're taking another step toward that goal: our Native Client SDK now supports ARM devices, from version 25 and onwards."
Thread beginning with comment 550347
RE[8]: Comment by Laurence
by Neolander on Thu 24th Jan 2013 13:47 UTC in reply to "RE[7]: Comment by Laurence"
Neolander Member since:
2010-03-08

"That is also a problem with manual dynamic memory management, though, or any other form of system resource allocation for that matter. Whatever programming language you use, when writing high-performance code, you probably want to allocate as much as possible in advance, so as to avoid having to perform system calls of unpredictable latency within the "fast" snippets later."

Unless of course you're operating in the real world where resources are finite. Writing high-performance code is a balancing act between various opposing requirements and automatic GC takes that control away from you. It feeds you some of it back in the form of manual GC invocation, but often that is insufficient.

I agree that GC does take some control away, but can you provide some use cases for when this is a problem?

Also please note that you are totally strawmanning my position - I never said anything about manual dynamic memory control.

You said something about random latency bubbles caused by GC operation. I said that if you care about such things, you should not use dynamic memory management at all, GC or not, since it is a source of extra latency on its own. How is that strawmanning?

"Now, your problem seems to be that GC runtimes can seemingly decide to run an expensive GC cycle at the worst possible moment, long after objects have been apparently disposed of. But that's an avoidable outcome, since any serious garbage-collected programming language comes with a standard library function that manually triggers a GC cycle (runtime.GC() in Go, Runtime.gc() in Java, GC.Collect() in C#/.Net). The very reason which such functions exist is so that one can trigger GC overhead in a controlled fashion, in situations where it is not acceptable to endure it at an unpredictable point in the future."

Good luck with that in the odd interrupt handler routine. If your multi-threaded runtime suddenly decides, for whatever reason, that it is time to collect garbage, I'd enjoy watching you debug that odd system panic, locking loop, packet drop or latency bubble.

If you have GC'd away the initialization garbage before enabling the interrupt handler, and are not allocating tremendous amounts of RAM in the interrupt handler, why should the GC take a lot of time to execute, or even execute at all?
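
To make that concrete, here is a minimal Go sketch (Go only because runtime.GC() was mentioned above; the buffer size and function names are made up for illustration): collect the setup garbage at a point of your own choosing, optionally stop the automatic collector, then run the latency-sensitive code against preallocated memory only.

    package main

    import (
        "runtime"
        "runtime/debug"
    )

    // stand-in for whatever the handler actually needs, preallocated up front
    var buf = make([]byte, 1<<20)

    func criticalLoop() {
        // latency-sensitive work that only touches preallocated memory,
        // so it hands the collector nothing new to chase
        for i := range buf {
            buf[i]++
        }
    }

    func main() {
        // setup is over: collect its garbage at a moment of our choosing...
        runtime.GC()
        // ...and optionally keep the collector from running on its own afterwards
        debug.SetGCPercent(-1)

        criticalLoop()
    }

With the collection forced up front and nothing allocated in the hot path, there is simply nothing left for the collector to interrupt.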

Reply Parent Score: 1

RE[9]: Comment by Laurence
by moondevil on Thu 24th Jan 2013 16:06 in reply to "RE[8]: Comment by Laurence"
moondevil Member since:
2005-07-08

Sun played a bit with writing device drivers in Java for Solaris:

http://labs.oracle.com/techrep/2006/smli_tr-2006-156.pdf

The VM used is part of the SPOT project; it runs on bare-bones hardware with just a very thin layer of C code, with everything else done in Java itself.

http://labs.oracle.com/projects/squawk/squawk-rjvm.html

The SPOT project was an Arduino-like project from Sun, now Oracle, targeted mainly at schools and enthusiasts:

http://www.sunspotworld.com/

Reply Parent Score: 2

RE[9]: Comment by Laurence
by Valhalla on Fri 25th Jan 2013 00:49 in reply to "RE[8]: Comment by Laurence"
Valhalla Member since:
2006-01-24

You said something about random latency bubbles caused by GC operation. I said that if you care about such things, you should not use dynamic memory management at all, GC or not, since it is a source of extra latency on its own.

Difference is that you can determine when to impose this possible 'random latency bubble' unlike with a GC where the GC logic decides when to do a memory reclamation sweep.

This means that you can release memory back at a pace that is dictated by yourself rather than the GC, a pace which would minimize 'latency bubbles'.

Also, given that you control exactly which memory is to be released back at any specific time, you can limit the non-deterministic impact of the 'free' call.

Reply Parent Score: 3

RE[10]: Comment by Laurence
by satsujinka on Fri 25th Jan 2013 03:11 in reply to "RE[9]: Comment by Laurence"
satsujinka Member since:
2010-03-11

You can also control when GC happens in most languages, by one or more of the following methods:

1. Disable the GC in performance critical areas
1.a. If you can't disable it, arrange to run the GC before you get to a critical area
2. Run the GC in non-critical areas
3. Manage the life cycle of your objects to minimize GC during critical areas

90% of the time GC makes life easier. The other 10% of the time you just need to know how to take advantage of your GC, which is not much different from manually managing your memory. A rough sketch of point 3 follows below.
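
For what it's worth, here is that sketch in Go (the packet type and sizes are invented, and sync.Pool is just one way to do it): reuse objects through a pool so the hot path creates no new garbage for the collector to deal with.

    package main

    import (
        "fmt"
        "sync"
    )

    // invented example type: a fixed-size buffer reused across iterations
    type packet struct {
        payload [1500]byte
    }

    // the pool keeps objects alive across the critical section, so the
    // collector sees no new garbage while the hot path is running
    var packetPool = sync.Pool{
        New: func() interface{} { return new(packet) },
    }

    func handle() {
        p := packetPool.Get().(*packet)
        p.payload[0] = 0x42 // the latency-sensitive work goes here
        packetPool.Put(p)   // hand the object back instead of dropping it
    }

    func main() {
        for i := 0; i < 1000; i++ {
            handle()
        }
        fmt.Println("done")
    }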

Reply Parent Score: 2

RE[10]: Comment by Laurence
by Neolander on Fri 25th Jan 2013 07:06 in reply to "RE[9]: Comment by Laurence"
Neolander Member since:
2010-03-08

Difference is that you can determine when to impose this possible 'random latency bubble' unlike with a GC where the GC logic decides when to do a memory reclamation sweep.

This means that you can release memory back at a pace that is dictated by yourself rather than the GC, a pace which would minimize 'latency bubbles'.

As I said, you can decide to run the GC yourself in a non-critical area, after a lot of memory handling has occurred, so that it has no reason to run on its own later. It is no more complicated than calling free() yourself.

After that, let us remember that GCs are lazy beasts, which is the very reason why GC'd programs tend to be memory hogs. If you don't give a GC a good reason to run, then it won't run at all. So code that does no dynamic memory management, only using preallocated blocks of memory, gives the GC no work to do, and thus shouldn't trigger it.

But even if the GC did trigger on its own, typically because it has a policy that makes it run periodically or something similar, it would just quickly scan its data structures, notice that no major change has occurred, and stop there. Code like that, when well-optimized, should have less overhead than a system call.

Now, if you are dealing with a GC that runs constantly and spends an awful lot of time doing it when nothing has been going on, well... maybe at this time you should get a better runtime before blaming GC technology in itself for this situation ;)

Also, given that you control exactly which memory is to be released back at any specific time, you can limit the non-deterministic impact of the 'free' call.

The beauty of nondeterministic impacts like that of free() is that you have no way of knowing which one will cost you a lot. It depends on the activity of other processes, on the state of the system's memory management structures, on incoming hardware interrupts that must be processed first...

If you try to use many tiny memory blocks and call free() a lot of times so as to reduce the granularity of memory management, all you will achieve is to increase your chances of getting a bad lottery ticket, since memory management overhead does not depend on the size of the memory blocks that are being manipulated.

Which is why "sane" GCs, and library-based implementations of malloc() too for that matter, tend to allocate large amounts of RAM at once and then just give out chunks of them, so as to reduce the amount of system calls that go on.
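
As a toy illustration of that last point (names and sizes made up), here is a bump allocator in Go that grabs one big block up front and then hands out chunks with plain pointer arithmetic, never touching the operating system again:

    package main

    import "fmt"

    // toy bump allocator: grab one large block up front and hand out chunks,
    // so individual allocations never go anywhere near the operating system
    type arena struct {
        buf []byte
        off int
    }

    func newArena(size int) *arena {
        return &arena{buf: make([]byte, size)}
    }

    func (a *arena) alloc(n int) []byte {
        if a.off+n > len(a.buf) {
            return nil // out of space; a real allocator would grow or fail loudly
        }
        chunk := a.buf[a.off : a.off+n]
        a.off += n
        return chunk
    }

    func main() {
        a := newArena(1 << 20) // one up-front allocation of 1 MiB
        c := a.alloc(256)      // handing out a chunk is a bounds check and an add
        fmt.Println(len(c))
    }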

Reply Parent Score: 1

RE[10]: Comment by Laurence
by moondevil on Fri 25th Jan 2013 08:02 in reply to "RE[9]: Comment by Laurence"
moondevil Member since:
2005-07-08

This means that you can release memory back at a pace that is dictated by yourself rather than the GC, a pace which would minimize 'latency bubbles'.

Also, given that you control exactly which memory is to be released back at any specific time, you can limit the non-deterministic impact of the 'free' call.


You think that you control when memory is returned.

Actually, what happens in most languages with manual memory management is that you release the memory to the language runtime, but the runtime does not release it back to the operating system. There are heuristics in place to only return memory in blocks of a certain size.

Additionally, depending on how you allocate/release memory, you can have issues with memory fragmentation.

So in the end you have as much control as with a GC-based language.

This is why, in memory-critical applications, developers end up writing their own memory manager.

But these are very special cases, and in most system languages with GC, there are also mechanisms available to perform that level of control if really needed.

So you get the memory safety of using a GC language, with the ability to control the memory if you really need to.
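
To give one concrete example of such a mechanism (Go here; the allocation size is arbitrary): the runtime can be told explicitly to run a collection and hand freed memory back to the operating system.

    package main

    import (
        "fmt"
        "runtime"
        "runtime/debug"
    )

    func main() {
        // create some work for the collector by allocating and dropping a large block
        b := make([]byte, 64<<20)
        b[0] = 1
        b = nil
        _ = b

        // force a collection and ask the runtime to return as much memory
        // as possible to the operating system right away
        debug.FreeOSMemory()

        var s runtime.MemStats
        runtime.ReadMemStats(&s)
        fmt.Println("heap released to the OS (bytes):", s.HeapReleased)
    }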

As a side note, reference counting is also GC in computer science terms, and with it you also get deterministic memory usage.

Reply Parent Score: 2