Linked by kragil on Wed 23rd Jan 2013 20:26 UTC
Google "Native Client enables Chrome to run high-performance apps compiled from your C and C++ code. One of the main goals of Native Client is to be architecture-independent, so that all machines can run NaCl content. Today we're taking another step toward that goal: our Native Client SDK now supports ARM devices, from version 25 and onwards."
Thread beginning with comment 550250
RE[6]: Comment by Laurence
by Neolander on Thu 24th Jan 2013 05:23 UTC in reply to "RE[5]: Comment by Laurence"
Neolander Member since:
2010-03-08

"And that it will make memory management of your code largely unpredictable. And introduce random latency bubbles as the incremental mark&sweep collector decides to run. And that you might start hitting into various OS-enforced resource allocation limits (number of open file descriptors, for example).
GC is good for some things (like large non-performance-critical CRM systems, ERP systems, web apps, etc.), but shit for cases where you need to make careful decisions about available resources and runtime (OS kernels, databases, HPC, etc.)."

That is also a problem with manual dynamic memory management, though, or any other form of system resource allocation for that matter. Whatever programming language you use, when writing high-performance code, you probably want to allocate as much as possible in advance, so as to avoid having to perform system calls of unpredictable latency within the "fast" snippets later.
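
(A minimal C# sketch of that pre-allocation idea; all names and sizes here are made up for illustration:)

```csharp
using System;

class Preallocation
{
    const int PoolSize = 1024;
    static readonly byte[][] Buffers = new byte[PoolSize][];

    static void Main()
    {
        // Pay the allocation cost up front, outside the fast path.
        for (int i = 0; i < PoolSize; i++)
            Buffers[i] = new byte[4096];

        // Fast path: no 'new', so no heap growth and no allocator-driven
        // system calls while we are in the latency-sensitive section.
        for (int i = 0; i < PoolSize; i++)
            ProcessInPlace(Buffers[i]);
    }

    static void ProcessInPlace(byte[] buffer)
    {
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] ^= 0xFF; // placeholder work
    }
}
```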

Now, your problem seems to be that GC runtimes can seemingly decide to run an expensive GC cycle at the worst possible moment, long after objects have been apparently disposed of. But that's an avoidable outcome, since any serious garbage-collected programming language comes with a standard library function that manually triggers a GC cycle (runtime.GC() in Go, Runtime.gc() in Java, GC.Collect() in C#/.Net). The very reason such functions exist is so that one can trigger GC overhead in a controlled fashion, in situations where it is not acceptable to endure it at an unpredictable point in the future.
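
As a rough illustration of that pattern in C# (a sketch, not production code; the second Collect() after WaitForPendingFinalizers() is the usual belt-and-braces idiom):

```csharp
using System;

class ControlledCollection
{
    static void Main()
    {
        var setup = BuildLargeTemporaryState(); // produces lots of garbage
        setup = null;                           // drop the last references

        // Trigger the expensive cycle now, at a moment we choose,
        // rather than at a random point inside the fast section below.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        RunLatencySensitiveSection();
    }

    static object BuildLargeTemporaryState()
    {
        var junk = new object[100000];
        for (int i = 0; i < junk.Length; i++) junk[i] = new byte[64];
        return junk;
    }

    static void RunLatencySensitiveSection() { /* allocation-free work */ }
}
```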

Edited 2013-01-24 05:40 UTC

Reply Parent Score: 1

RE[7]: Comment by Laurence
by moondevil on Thu 24th Jan 2013 06:46 in reply to "RE[6]: Comment by Laurence"
moondevil Member since:
2005-07-08

Another solution, used in languages like C#, Modula-3, the Oberon family or D, is to allow requesting and releasing memory from the runtime in a very controlled way.

But this is only allowed in system/unsafe code blocks.
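
(A hedged sketch of what such an escape hatch looks like on the C# side; compile with /unsafe, and note the buffer size and names are arbitrary:)

```csharp
using System;
using System.Runtime.InteropServices;

class ManualMemory
{
    static unsafe void Main()
    {
        // Request raw memory from outside the GC heap; the collector
        // never scans or moves it, so its lifetime is entirely ours.
        IntPtr block = Marshal.AllocHGlobal(4096);
        try
        {
            byte* p = (byte*)block;
            for (int i = 0; i < 4096; i++)
                p[i] = 0; // direct pointer access, only legal in unsafe code
        }
        finally
        {
            // ...and we are responsible for releasing it, C-style.
            Marshal.FreeHGlobal(block);
        }
    }
}
```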

Reply Parent Score: 2

RE[8]: Comment by Laurence
by Neolander on Thu 24th Jan 2013 07:39 in reply to "RE[7]: Comment by Laurence"
Neolander Member since:
2010-03-08

"Another solution, used in languages like C#, Modula-3, the Oberon family or D, is to allow requesting and releasing memory from the runtime in a very controlled way.

But this is only allowed in system/unsafe code blocks."

Does it amount to disabling automatic GC and thus forcing garbage collection to run only when you want it to, like gc.disable() in Python?

Or is it a more in-depth alteration of the language's mechanics that requires extensive changes to programming practice, such as disabling garbage collection altogether and thus making all standard library code that relies on it fail?
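
(For what it's worth, C# sits somewhere between those two options: GCSettings.LatencyMode lets you ask the collector to stay out of the way for a region of code without disabling it outright. A rough sketch, assuming the workstation GC:)

```csharp
using System;
using System.Runtime;

class LowLatencyWindow
{
    static void Main()
    {
        GCLatencyMode old = GCSettings.LatencyMode;
        try
        {
            // Ask the runtime to avoid blocking collections while we are
            // inside the critical region (a full GC can still happen if
            // memory pressure forces the issue).
            GCSettings.LatencyMode = GCLatencyMode.LowLatency;
            DoCriticalWork();
        }
        finally
        {
            GCSettings.LatencyMode = old;
        }
    }

    static void DoCriticalWork() { /* allocation-light work */ }
}
```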

Edited 2013-01-24 07:50 UTC

Reply Parent Score: 1

RE[8]: Comment by Laurence
by henderson101 on Fri 25th Jan 2013 13:19 in reply to "RE[7]: Comment by Laurence"
henderson101 Member since:
2006-05-30

"Another solution, used in languages like C#, Modula-3, the Oberon family or D, is to allow requesting and releasing memory from the runtime in a very controlled way."

I remember a co-worker driving himself insane with CF 1.0 and SqlClient. Somewhere it wasn't releasing memory correctly and was basically using up all the free memory on the device. In the end, he put a number of explicit GC.Collect() calls in the code. It was a documented bug; I think it was fixed in CF 2.0, but we had to use 1.0 because the devices being used didn't support 2.0 (IIRC they were mixed, some were running PocketPC 2000 or something like that... plus most had bugger all RAM).
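
(Something along these lines; the LeakyResource type below is a made-up stand-in for the buggy SqlClient objects, since I don't have the original code:)

```csharp
using System;

class LeakWorkaround
{
    // Illustrative stand-in for the leaky CF 1.0 SqlClient objects.
    sealed class LeakyResource : IDisposable
    {
        byte[] native = new byte[1 << 20];
        public void Dispose() { native = null; }
    }

    static void RunQueryBatch()
    {
        using (var resource = new LeakyResource())
        {
            // ... query work ...
        }

        // The workaround from the anecdote above: force a collection after
        // each batch so the leaked buffers are reclaimed before the device
        // runs out of memory.
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }

    static void Main()
    {
        for (int i = 0; i < 100; i++) RunQueryBatch();
    }
}
```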

Reply Parent Score: 2

RE[7]: Comment by Laurence
by saso on Thu 24th Jan 2013 09:52 in reply to "RE[6]: Comment by Laurence"
saso Member since:
2007-04-18

"That is also a problem with manual dynamic memory management, though, or any other form of system resource allocation for that matter. Whatever programming language you use, when writing high-performance code, you probably want to allocate as much as possible in advance, so as to avoid having to perform system calls of unpredictable latency within the "fast" snippets later."

Unless of course you're operating in the real world where resources are finite. Writing high-performance code is a balancing act between various opposing requirements and automatic GC takes that control away from you. It feeds you some of it back in the form of manual GC invocation, but often that is insufficient.

Also please note that you are totally strawmanning my position - I never said anything about manual dynamic memory control.

"Now, your problem seems to be that GC runtimes can seemingly decide to run an expensive GC cycle at the worst possible moment, long after objects have been apparently disposed of. But that's an avoidable outcome, since any serious garbage-collected programming language comes with a standard library function that manually triggers a GC cycle (runtime.GC() in Go, Runtime.gc() in Java, GC.Collect() in C#/.Net). The very reason such functions exist is so that one can trigger GC overhead in a controlled fashion, in situations where it is not acceptable to endure it at an unpredictable point in the future."

Good luck with that in the odd interrupt handler routine. If your multi-threaded runtime suddenly decides, for whatever reason, that it is time to collect garbage, I'd enjoy watching you debug that odd system panic, locking loop, packet drop or latency bubble.

Reply Parent Score: 2

RE[8]: Comment by Laurence
by moondevil on Thu 24th Jan 2013 12:42 in reply to "RE[7]: Comment by Laurence"
moondevil Member since:
2005-07-08

"Good luck with that in the odd interrupt handler routine. If your multi-threaded runtime suddenly decides, for whatever reason, that it is time to collect garbage, I'd enjoy watching you debug that odd system panic, locking loop, packet drop or latency bubble."

You mean like A2 (BlueBottle), which has a kernel-level GC?

Reply Parent Score: 2

RE[8]: Comment by Laurence
by Neolander on Thu 24th Jan 2013 13:47 in reply to "RE[7]: Comment by Laurence"
Neolander Member since:
2010-03-08

"That is also a problem with manual dynamic memory management, though, or any other form of system resource allocation for that matter. Whatever programming language you use, when writing high-performance code, you probably want to allocate as much as possible in advance, so as to avoid having to perform system calls of unpredictable latency within the "fast" snippets later."

"Unless of course you're operating in the real world where resources are finite. Writing high-performance code is a balancing act between various opposing requirements and automatic GC takes that control away from you. It feeds you some of it back in the form of manual GC invocation, but often that is insufficient."

I agree that GC does take some control away, but can you provide some use cases for when this is a problem?

"Also please note that you are totally strawmanning my position - I never said anything about manual dynamic memory control."

You said something about random latency bubbles caused by GC operation. I said that if you care about such things, you should not use dynamic memory management at all, GC or not, since it is a source of extra latency on its own. How is that strawmanning?

"Now, your problem seems to be that GC runtimes can seemingly decide to run an expensive GC cycle at the worst possible moment, long after objects have been apparently disposed of. But that's an avoidable outcome, since any serious garbage-collected programming language comes with a standard library function that manually triggers a GC cycle (runtime.GC() in Go, Runtime.gc() in Java, GC.Collect() in C#/.Net). The very reason which such functions exist is so that one can trigger GC overhead in a controlled fashion, in situations where it is not acceptable to endure it at an unpredictable point in the future."

"Good luck with that in the odd interrupt handler routine. If your multi-threaded runtime suddenly decides, for whatever reason, that it is time to collect garbage, I'd enjoy watching you debug that odd system panic, locking loop, packet drop or latency bubble."

If you have GC'd away the initialization garbage before enabling the interrupt handler, and are not allocating tremendous amounts of RAM in the interrupt handler, why should the GC take a lot of time to execute, or even execute at all?
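
(A quick C# sketch of that argument: collect the setup garbage once, then run an allocation-free loop and observe that the collector has no reason to fire. The loop body is a made-up stand-in for a handler that only touches pre-allocated state:)

```csharp
using System;

class NoGarbageNoCollection
{
    static readonly int[] Buffer = new int[1024]; // all state pre-allocated up front

    static void Main()
    {
        // Collect the initialization garbage once, before the "handler" runs.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        int before = GC.CollectionCount(0);

        // Stand-in for the handler: touches only pre-allocated state and
        // performs no allocations, so it gives the collector no work to do.
        for (int iter = 0; iter < 100000; iter++)
            for (int i = 0; i < Buffer.Length; i++)
                Buffer[i] += i;

        Console.WriteLine("Gen-0 collections during the loop: "
                          + (GC.CollectionCount(0) - before));
    }
}
```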

Reply Parent Score: 1