“Native Client enables Chrome to run high-performance apps compiled from your C and C++ code. One of the main goals of Native Client is to be architecture-independent, so that all machines can run NaCl content. Today we’re taking another step toward that goal: our Native Client SDK now supports ARM devices, from version 25 and onwards.”
How long until black hats start having fun with native client?
The security behind it has already been tested a few times and so far seems to be fairly solid. The native code has to conform to very strict rules, but sure, there might be holes (though probably fewer than Java 😉 )
BUT Chrome x86 has shipped with it for some time now, and so far the security nightmare that the “ActiveX!” screaming masses predicted hasn’t happened yet.
There, I fixed it for you.
Yep… Happy user of IBM’s JDK here.
They already have. Recall it was a couple of Native Client bugs that were used to exploit Chrome at Pwnium last year. No doubt it will be part of the attack surface again at this year’s Pwn2Own.
http://nakedsecurity.sophos.com/2012/05/24/anatomy-of-an-exploit-si…
Sure, but imagine all the eyes that now play with Oracle’s JVM, starting to play with it as well.
This is an abomination.
Not sure why anyone would want to run C++ in a web browser.
Off topic, but I wish Google would hurry up and release an Android SDK for Go though.
but I wish Google would hurry up and release an Android SDK for Go though.
But why?
Go is a C-like language, a little prettier but still with a syntax that can become hard to read IMHO.
Sorry, but I’m not sure I get your point as both the officially supported languages are C-like (Java and C++).
However Go is much more than just another C-like language. Syntactically it’s concise yet still verbose enough to be readable. It’s a managed language but it doesn’t make assumptions (unlike some managed languages).
In all honesty, I’ve only been using it a week yet it’s so easy to pick up that I already feel like I’ve been programming in the language for months. It really is a joy to use.
Plus its cross-compiling support is child’s play. I can write an application on x86, get it working exactly how I want, then just change one compiler flag to create an ARM binary for my Raspberry Pi. I will concede that it’s been the best part of 10 years since I’ve done any cross compiling in C++, but I’m sure it was never that easy.
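For reference, the mechanism is actually a pair of environment variables rather than a compiler flag. A rough sketch, assuming a Go toolchain built with linux/arm support (file and output names are made up):
# cross-compile for the Raspberry Pi (ARMv6) from an x86 machine
GOOS=linux GOARCH=arm GOARM=6 go build -o hello_arm hello.go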
Personally, I find the syntax cryptic and I would only use it as a last resort.
For example:
func fib() func() int {
    a, b := 0, 1
    return func() int {
        a, b = b, a+b
        return a
    }
}
Some people may feel comfortable with that kind of syntax; I don’t. It is as bad as the worst C++ IMHO, well, not quite that bad.
To be honest, that doesn’t even look like valid Go (though, as I said, I’m only a week in on the language).
However, some of the weirder syntax I’ve read in the tutorials was purely demonstration, showing how the more advanced C++ routines would be ported (so not really your typical day-to-day functions).
But let’s be honest, you can find extreme cases of bad code in any language, so if you’re genuinely curious about the language then I’d recommend having a quick browse through the tour: http://tour.golang.org/
It is valid Go code, I took it from one of golang.org’s examples. My point is, just like in C++, it can become unreadable without much effort.
But that’s the case for all languages. Keeping any sufficiently complicated language tidy takes effort.
Quite honestly, I think it’s silly making comments like these when you’ve not even written one line of code. If nothing else, it smacks of prejudice. So how about you actually give the language a try before passing judgement? Or STFU and let people who have used the language comment.
But that’s the case for all languages.
Nope, some are ironed out for this kind of stuff, some fail to do so.
Quite honestly, I think it’s silly making comments like these when you’ve not even written one line of code.
You have the premise that I don’t write code, cute.
Or STFU and let people who have used the language comment.
Why only the people who have used the language? How about those who have reasons to avoid it at all costs?
Like what?
I was talking about Go, not stating that you’ve never programmed in your life.
Why is it people always assume the worst when reading comments online?
The answer to that should be quite obvious: if you’ve not used the language then you’re obviously not in a position to form an educated opinion. It would be like asking a biology student to write a paper on quantum entanglement because it’s all just science 😛
However, the language you’ve used there demonstrates quite a hatred for Go, which is odd if you’ve never used it. So you’ve obviously got your own biases clouding your judgement here.
No no no, I don’t hate Go, I mean, it has some good features like concurrency and fast compilation.
I said I don’t like the syntax, but it looks more like Go is your pet project or you work at Google, because telling me to STFU in such an aggressive way suggests that.
Oh yeah, and for your information, I’ve tried Go already, and that’s why I concluded that the syntax sucks monkey balls IMHO.
I don’t work for Google and was only aggressive because you gave the impression you hadn’t used it. If you had then I genuinely apologise.
Personally I love the C syntax and used to love Pascal before I got into C-derived languages. Go, for me, is an elegant marriage of the two. However, we’re talking about personal preferences, so I can sympathise with those who don’t prefer it.
You still didn’t answer my earlier question though. You gave an example of a function with complicated types (akin to C++) and said other languages solved that problem. Which languages and how?
You said that every language has unreadable code; I said some are ironed out (not solved) for that, like Python, and some fail, like Ruby.
I said that it is as easy in Go as in C++ to make unreadable code, and that’s why I said it is a C-like language.
And the example I gave is the least complicated one; I can post another one with pointers, arrays and callbacks if you want.
Go, IMHO, is designed to be less verbose but ends up cryptic at the same time.
Well, Python has an advantage in that it doesn’t compile to machine code, so it’s not really a fair comparison. I think you’d be hard pressed to find a language as powerful as C, or even Go, that didn’t get messy with complexity. Except maybe if you switched programming paradigms entirely. Lisp is next on my to-do list.
Well, Python has an advantage in that it doesn’t compile to machine code, so it’s not really a fair comparison.
Fair enough, let’s compare it to D and Vala; those are as powerful as C, are native and have better syntax. Or let’s compare it to Pascal: it is native, with coherent syntax (verbose, but hey, it’s readable). So that’s my point: Google with Go sacrificed readability in order to make it more compact.
How would that function look in D and Vala (I know of those languages, but that’s where my experience with them ends)?
Pascal was an awesome language. It’s a real pity it died out as much as it did.
I can show you the Vala version:
fib = (i) => (i <= 1) ? i : (fib (i - 2) + fib (i - 1));
Nice code, even though it does look really alien to me.
It’s a pity you didn’t open your comments with that example as it demonstrates your point well.
For now, I’m really enjoying Go, so I’m going to keep at it. Particularly as its the first time in ages that I’ve felt inspired to code rather than just coding to get paid. But it’s always good to see how other languages handle things.
The only thing that’s really alien about it is the function definition syntax.
The rest is just a recursive algorithm implemented using a ternary operator which uses the familiar C-style syntax.
That definition doesn’t do what your Go example does. That example also requires a parameter which the Go example does not.
A similar Go function would be:
func fib(i int) int {
    if i <= 1 {
        return i
    } else {
        return fib(i-2) + fib(i-1)
    }
}
That said, IMHO, the most readable Fibonacci function is Haskell’s:
fib 0 = 0
fib 1 = 1
fib n = fib (n-2) + fib (n-1)
Though an encoding similar to what you provided is also possible:
fib n = if n <= 1 then n else fib (n-2) + fib (n-1)
Not as terse, but that’s simply the lack of ?:; a simple replacement would make Haskell more terse than your Vala example:
fib n = (n <= 1) ? n : fib (n-2) + fib (n-1)
I don’t have a dog in this fight, but dude…I think you are the one being hateful here. I’ve actually never read such an angry comment from you as the one where you told him to “STFU”. I know I’ve been guilty of spouting off like that in the past but you’re one of the “cool dudes” here and it just floored me.
I’m not trying to condescend or patronize or anything like that. Believe me, I know how it feels when you think someone is walking all over your favorite piece of tech. But I honestly don’t think Hiev meant any offense.
And I do agree with your position: Ugly but functional syntax can be found in any programming language. Programming is both a skill and an art; a skill that can always be improved and an art that can never be mastered.
Yeah, you’re right that I overreacted. I was reading way too much into his comments (which is ironic because I accused him of doing the same. Eep)
It takes time to adjust, but Go’s declaration syntax is novel and helpful, especially when dealing with complex types.
Go:
f func(func(int,int) int, int) int
C#:
Func<Func<int, int, int>, int, int> f;
There is just no sane way to keep that readable in the C# syntax. I’m scared to even think what this would look like in C++.
int f(int (*fnc)(int,int),int,int)
Would probably want to be
var f = Func<Func<myObjectWith3ints>, int, int>
If I am understanding Func correctly.
Func can support up to 16 parameters and a return type.
But passing so many parameters is usually bad design code-wise. I haven’t needed to use it, so I am not sure of its limitations on the types that can be passed to it.
Generally I’m not a fan of objects containing other objects and passing them as a single parameter. It makes the code less glance-able.
I haven’t run into a situation where I need that many parameters, and I would seriously question my design if I ever did. Methods, in my opinion, should have a clear purpose, and so many parameters would seem to indicate it’s trying to do too much at once.
I think the classic example is passing in a “Point” type, which contains x, y and z values. I agree that if there is some massive class hierarchy it is probably being done wrong.
I think my overall point (as was yours) is that code like that shouldn’t exist in the first place, and it is not necessarily the fault of the language.
Sounds like you have an issue with C++.
C++ would require just one extra * char for the function pointer. Hardly anything to be scared of.
Assuming you ignore C++isms and use function pointers, then sure.
You’ve been space butchered, but it’s not that weird.
The anonymous function is simply a closure that captures the variables a and b, then modifies them both to produce the next Fibonacci number when called.
It’s really quite standard. I mean, compare it to a similar Scheme function (mind you, it’s been a while):
(define fib (let ((a 0) (b 1))
  (lambda () (set! b (+ a b)) (set! a (- b a)) a)))  ; a, b = b, a+b; return a
But of course, until you share what language you think does this “better,” it’s impossible to address your complaint.
I can read the code:
func fib() func() int {
declares a function that returns a function.
a, b := 0, 1
declares a and b as integers and initializes them to 0 and 1 respectively.
return func() int {
a, b = b, a+b
return a
}
}
That’s the closure, but the difference from its counterparts like JavaScript and C# is that you can mix that example with pointers and the weird array initialization syntax that Go allows.
I said Go is a C-like language, why do you keep comparing it with JavaScript and C#?
I’m not comparing it with JavaScript or C#. I was just showing that it’s a 1-to-1 match with how you’d normally write a function that returns a function that computes successive Fibonacci numbers.
Though since you mention it, the JavaScript version looks identical as well.
function fib() {
    var a = 0, b = 1;
    return function() { var next = a + b; a = b; b = next; return a; };
}
How’s pointer or array syntax weird?
func f() []int {…}
func f() *int {…}
There’s nothing weird about those signatures.
Array initialization is almost exactly like C:
[]int{1,2,3,4}
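If it helps, here is a small, self-contained sketch that ties those pieces together: the fib closure from earlier in the thread, a slice literal and a pointer-returning function (everything besides fib is a made-up name):
package main

import "fmt"

// fib returns a closure that yields successive Fibonacci numbers,
// exactly as in the example further up the thread.
func fib() func() int {
    a, b := 0, 1
    return func() int {
        a, b = b, a+b
        return a
    }
}

// newCounter is a made-up example of the *int signature mentioned above.
func newCounter() *int {
    n := 0
    return &n
}

func main() {
    next := fib()
    for i := 0; i < 5; i++ {
        fmt.Println(next()) // prints 1 1 2 3 5
    }

    xs := []int{1, 2, 3, 4} // slice literal, almost exactly like C
    fmt.Println(xs, *newCounter())
}
Nothing exotic in there, which is rather the point.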
Congrats, you’ve written shit code with no sense of style and taste. Hope you feel good that even automatic code formatting tools like “indent” are smarter than you. Writing unreadable code to make a statement about style is about as valid as criticizing car safety by driving down a mountain road at 200+mph, IOW nobody will take you seriously.
Mmm, never mind, replied to the wrong comment, I’m sorry for that.
Go is not a managed language, even if it’s garbage collected. It compiles directly to native code that is executed on the CPU (i.e. no interim “bytecode” representation is used).
True, and the consequence of being garbage collected is that it makes the size of the executable bigger, a minor issue compared with the benefits of a GC.
And that it will make memory management of your code largely unpredictable. And introduce random latency bubbles as the incremental mark & sweep collector decides to run. And that you might start hitting various OS-enforced resource allocation limits (the number of open file descriptors, for example).
GC is good for some things (like large non-performance-critical CRM systems, ERP systems, web apps, etc.), but shit for cases where you need to make careful decisions about available resources and runtime (OS kernels, databases, HPC, etc.).
That is also a problem with manual dynamic memory management, though, or any other form of system resource allocation for that matter. Whatever programming language you use, when writing high-performance code, you probably want to allocate as much as possible in advance, so as to avoid having to perform system calls of unpredictable latency within the “fast” snippets later.
Now, your problem seems to be that GC runtimes can seemingly decide to run an expensive GC cycle at the worst possible moment, long after objects have apparently been disposed of. But that’s an avoidable outcome, since any serious garbage-collected programming language comes with a standard library function that manually triggers a GC cycle (runtime.GC() in Go, Runtime.gc() in Java, GC.Collect() in C#/.Net). The very reason such functions exist is so that one can trigger GC overhead in a controlled fashion, in situations where it is not acceptable to endure it at an unpredictable point in the future.
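To make that concrete, here is a minimal Go sketch (assuming the standard runtime and runtime/debug packages; the SetGCPercent call is an extra detail, not something mentioned above):
package main

import (
    "runtime"
    "runtime/debug"
)

func main() {
    old := debug.SetGCPercent(-1) // switch off automatic collection for the critical section

    // ... latency-sensitive work that must not be interrupted by a GC cycle ...

    debug.SetGCPercent(old) // restore the normal collection policy
    runtime.GC()            // and force a collection now, at a moment of our choosing
}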
Another solution, used in languages like C#, Modula-3, the Oberon family or D, is to allow requesting and releasing memory from the runtime in a very controlled way.
But this is only allowed in system/unsafe code blocks.
Does it amount to disabling automatic GC and thus forcing garbage collection to run only when you want it to, like gc.disable() in Python?
Or is it a more in-depth alteration of the language mechanics that requires extensive programming practice changes, such as disabling garbage collection altogether and thus making all standard library code which relies on it fail?
Well, first of all this type of code is relegated to such blocks, because manual memory management is usually used together with other tricks, so in safer languages you want to minimize its use unless it is really required.
Usually this is memory that is outside the GC’s knowledge, so it should be handled with care and not allowed to escape unsafe code.
In most languages with such features you can use compiler switches to prevent compilation of unsafe code.
Here are some links about how to do this
Modula-3
http://modula3.elegosoft.com/cm3/doc/reference/complete/html/2_7Uns…
C#
http://msdn.microsoft.com/en-us/library/system.intptr.topointer.asp…
D
http://dlang.org/memory.html
Go
A bit cumbersome, but you can do it via cgo
http://golang.org/cmd/cgo/
Oberon
Oberon’s case is special, since the GC is implemented at the kernel level. So only a small piece of code, written in Assembly, does manual memory management.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.1857
But for the cases where the language is implemented on top of other operating systems, there are language extensions to mark pointers as not being tracked by GC.
I remember a co-worker driving himself insane with CF1.0 and SqlClient. Somewhere it wasn’t releasing memory correctly and was basically using up all the free memory on the device. In the end, he put a number of explicit GC.Collect()’s in the code. It was a documented bug, I think it was fixed in CF2.0, but we had to use 1.0 because the devices being used didn’t support 2.0 (IIRC, they were mixed, some were running PocketPC 2000 or something like that… plus most had bugger all RAM.)
Unless of course you’re operating in the real world where resources are finite. Writing high-performance code is a balancing act between various opposing requirements and automatic GC takes that control away from you. It feeds you some of it back in the form of manual GC invocation, but often that is insufficient.
Also please note that you are totally strawmanning my position – I never said anything about manual dynamic memory control.
Good luck with that in the odd interrupt handler routine. If your multi-threaded runtime suddenly decides, for whatever reason, that it is time to collect garbage, I’d enjoy watching you debug that odd system panic, locking loop, packet drop or latency bubble.
You mean like the A2 (BlueBottle) that has a kernel level GC?
I agree that GC does take some control away, but can you provide some use cases for when this is a problem?
You said something about random latency bubbles caused by GC operation. I said that if you care about such things, you should not use dynamic memory management at all, GC or not, since it is a source of extra latency on its own. How is that strawmanning?
If you have GC’d away the initialization garbage before enabling the interrupt handler, and are not allocating tremendous amounts of RAM in the interrupt handler, why should the GC take a lot of time to execute, or even execute at all?
Sun played a bit with writing device drivers in Java for Solaris:
http://labs.oracle.com/techrep/2006/smli_tr-2006-156.pdf
The VM used is part of the SPOT project, which is a VM running on bare bones hardware with just a very thin layer of C code and everything else done in Java itself.
http://labs.oracle.com/projects/squawk/squawk-rjvm.html
The SPOT project was an Arduino like project from Sun, now Oracle, targeted mainly to schools and enthusiasts
http://www.sunspotworld.com/
The difference is that you can determine when to impose this possible ‘random latency bubble’, unlike with a GC, where the GC logic decides when to do a memory reclamation sweep.
This means that you can release memory back at a pace that is dictated by yourself rather than by the GC, a pace which would minimize ‘latency bubbles’.
Also, given that you control exactly which memory is to be released back at any specific time, you can limit the non-deterministic impact of the ‘free’ call.
You can also control when GC happens in most languages, by one or more of several methods:
1. Disable the GC in performance critical areas
1.a. If you can’t disable it, arrange time to run the GC before you get to a critical area
2. Run the GC in non-critical areas
3. Manage the life cycle of your objects to minimize GC during critical areas
90% of the time GC makes life easier. The other 10% of the time you just need to know how to take advantage of your GC, which is not much different from manually managing your memory (a rough sketch follows below).
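A rough Go illustration of that idea, keeping allocations out of the hot path so the critical loop itself creates no garbage (the buffer size and loop counts are arbitrary):
package main

import "fmt"

func main() {
    // Preallocate once, outside the critical area, then reuse the same
    // backing array so the hot loop itself creates no new garbage.
    buf := make([]byte, 0, 64*1024)

    for i := 0; i < 1000; i++ {
        buf = buf[:0] // reset length, keep capacity: no reallocation
        for j := 0; j < 512; j++ {
            buf = append(buf, byte(i+j))
        }
    }
    fmt.Println(len(buf), cap(buf))
}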
As I said, you can decide to run the GC yourself in a non-critical area, after a lot of memory handling has occurred, so that it has no reason to run on its own later. It is no more complicated than running free() in itself.
After that, let us remember that GCs are lazy beasts, which is the very reason why GC’d programs tend to be memory hogs. If you don’t give a GC a good reason to run, then it won’t run at all. So code that does no dynamic memory management, only using preallocated blocks of memory, gives no work to do to the GC, and thus shouldn’t trigger it.
But even if the GC did trigger on its own, typically because it has a policy that dictates it to run periodically or something similar, it would just quickly parse its data structures, notice that no major change has occurred, and stop there. Code like that, when well-optimized, should have less overhead than a system call.
Now, if you are dealing with a GC that runs constantly and spends an awful lot of time doing it when nothing has been going on, well… maybe at this point you should get a better runtime before blaming GC technology in itself for this situation.
The beauty of nondeterministic impacts like that of free() is that you have no way of knowing which one will cost you a lot. It depends on the activity of other processes, on the state of the system’s memory management structures, on incoming hardware interrupts that must be processed first…
If you try to use many tiny memory blocks and run free() a lot of times so as to reduce the granularity of memory management, all you will achieve is to increase your chances of getting a bad lottery ticket, since memory management overhead does not depend on the size of the memory blocks that are being manipulated.
Which is why “sane” GCs, and library-based implementations of malloc() too for that matter, tend to allocate large amounts of RAM at once and then just give out chunks of them, so as to reduce the amount of system calls that go on.
This code is no problem to handle without garbage collection either; it’s hardly something to sell the notion of a GC with. The only difference is that even when not doing an active memory reclamation sweep, the GC still uses CPU/RAM for its logic, causing overhead; there’s no magic that suddenly informs it of the current GC heap state, this is done by monitoring.
Than what system call? You wouldn’t call free to begin with unless you were explicitly freeing memory.
These activities of ‘other processes etc.’ affect the GC the same way; the GC heap is not some magic area with zero-latency allocation/deallocation. It also fragments a lot more easily than system memory does, as it uses but a subset of the available system memory, which is most likely a lot smaller than the memory available to the system allocator (more on this later).
Again, by dictating the pace of memory allocation/deallocation you can limit its non-deterministic impact in order to minimize latency problems. We’re not talking about adhering to real-time constraints here, which is another subject, but about preventing latency spikes.
And latency spikes have always been the problem with GCs: as you give away control of memory reclamation to the GC, you also lose control of when and what memory is to be reclaimed at a certain time. So while in a manually managed memory setting you’d choose to release only N allocated objects at a given time to minimize latency, the GC might want to reclaim all memory in a single go, thus causing a latency spike.
This is exactly what the system memory allocator does, except it has all the non-allocated memory in the system at its disposal.
Just to make this point clear, the system allocators of today do not employ some ‘dumb’ list that is traversed from top to bottom while looking for a free memory chunk of a large enough size.
System memory is partitioned/cached to be as effective as possible in allocating/de-allocating memory chunks of varying sizes. I would suggest this article for a simple introduction: http://www.ibm.com/developerworks/linux/library/l-linux-slab-alloca…
Now, managing your own pool in order to minimize allocation/deallocation overhead is nothing new, it’s been done for ages. However, for this to be efficient you want to have pools of same-sized objects, else you will introduce memory fragmentation in your pools (a rough sketch of such a pool follows at the end of this comment).
This is what happens in the GC heap: the GC must be able to manage memory of all sizes, so it will introduce memory fragmentation. And unlike with system memory allocation, the GC can’t pick and choose from the entirety of available RAM in the system; it can only pick and choose from the chunk of memory it allocated and now manages.
This means that it will fragment more easily, and when fragmentation occurs that prevents allocation of a size-N block, there are two choices. One is to resize the GC heap by asking for a larger block from the system, which is likely very expensive and also an inefficient use of memory, as we likely have the memory needed, just fragmented. The other, more commonly used option is to defragment, also known as compaction.
Here the GC moves blocks of memory around so as to free as much space as possible for further allocations. While this is not as expensive as resizing the heap, it’s still very expensive.
Now, system memory also fragments; however, given that system memory is nowhere near as constrained as the GC heap (which typically is a small subset of available system RAM), it takes much more fragmentation to cause problems.
Now, I want to point out that I am not averse to garbage collectors; they simplify a lot of coding and their use makes sense in tons of situations, but unlike what some people think, they are not some panacea. Garbage collecting comes at a cost, this is undeniable and has long been established; modern GCs go a long way in minimizing that cost, but it will always be there.
For certain code this cost has no significance, for other code the benefits outweigh the costs, and for some code the costs are simply unacceptable.
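To make the same-sized-object pool mentioned above concrete, here is a rough hand-rolled free list in Go (the type and method names are mine, not from any library):
package main

import "fmt"

type block [256]byte // every object in the pool has the same size

// pool is a trivial free list: Get reuses a recycled block when one is
// available, Put hands a block back instead of letting it become garbage.
type pool struct {
    free []*block
}

func (p *pool) Get() *block {
    if n := len(p.free); n > 0 {
        b := p.free[n-1]
        p.free = p.free[:n-1]
        return b
    }
    return new(block)
}

func (p *pool) Put(b *block) {
    p.free = append(p.free, b)
}

func main() {
    p := &pool{}
    b := p.Get()
    b[0] = 42
    p.Put(b) // recycled, not collected
    fmt.Println(p.Get()[0])
}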
Any of the tricks a system allocator is capable of is also possible in a GCed language. It’s not like the algorithms are only available to the system (and if they were, the runtime could just use the system allocator).
Further, in practice it turns out that you’re completely incorrect about fragmentation: GCed languages have fewer issues with fragmentation.
GC may have a base cost that’s higher than manual memory management, but it can be pushed into the same ranges as manual memory management, even if it takes more work to do so. That’s the trade off, GC makes life easy when you don’t care, but makes life more difficult if you need the performance. But sacrificing GC for some dubious performance gain is not good engineering (though choosing a language with optional GC would be a good decision if you know you’ll have performance issues, if only because it gives you more options.) So: write the code with GC, profile it, tune your code, repeat; if after a certain number of iterations you still can’t get the performance you need disable the GC in that section or drop down to C or assembly and rewrite your bottlenecks there.
You think that you control when memory is returned.
Actually, what happens in most languages with manual memory management is that you release the memory in the language runtime, but the runtime does not release it back to the operating system. There are heuristics in place to only return memory in blocks of a certain size.
Additionally depending how you allocate/release memory you can have issues with memory fragmentation.
So in the end you end up having as much control as with a GC based language.
This is why in memory critical applications, developers end up writing their own memory manager.
But these are very special cases, and in most system languages with GC, there are also mechanisms available to perform that level of control if really needed.
So you get the memory safety of using a GC language, with the benefit to control the memory if you really need to.
As a side note, reference counting is also GC in computer science speak, and with it you also get deterministic memory usage.
And yet there are critical systems making decisions at the millisecond level coded in GC-enabled languages, go figure.
When people develop in GC-enabled languages, or operating systems (AOS for example), they need to write GC-friendly algorithms.
This is nothing new, even malloc/free are unpredictable when writing high performance code and most developers resort to custom allocators.
Precious few very carefully coded algorithms. Can you point to a piece of such real-time code on the net? So far, all the video codecs, OS kernels, device drivers and HPC libraries I’ve seen were all C/C++.
I didn’t say anything about malloc/free. You’re reading something I didn’t write.
Missile radar control systems:
http://www.pr.com/press-release/136232
Battleship’s war systems:
http://www.businesswire.com/news/home/20030715005133/en/NewMonics-W…
STM32 microcontrollers for automation:
http://www.emcu.it/STM32/STM32-STM8_embedded_software_solutions_mod… (last slide)
With A2, you get a desktop operating system 99% coded in Active Oberon, the remaining part in Assembly (boot loader and GC).
http://www.ocp.inf.ethz.ch/wiki/Documentation/Kernel
http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager
Sadly, Oberon-based systems never got much uptake outside ETH Zurich.
Sorry yes. You’re absolutely right.
Either because you have a C++ desktop app or game that you want to turn into a web app without rewriting all the core logic in JavaScript, or because you have a web app that you want to speed up by writing a couple of core bottlenecks in C++.
Both an abomination.
The browser is for documents.
Ugh, ever hear of computation requiring performance??? Say something like a game; imagine if Crysis were written in some script kiddie language.
NaCl holds huge promise for games, HPC/distributed computing, or any other computationally intensive app.
I thought we already had that, it’s called the desktop!
Here’s a thought: writing native C++ for the desktop. I bet it would never catch on though.
HPC in a web browser? How, why? Frontend of HPC yes, but for that even HTML4 is enough.
Think of dynamic compute nodes.
Just imagine something like Folding@home; with NaCl, it would not require any install, it would just run in a browser.
Half of me thinks this is really cool. The other half thinks this is a disaster waiting to happen. Either way, it’s an interesting project.
I’m showing my age, but I grew up in the age of big iron and dumb terminals etc. and I’m perfectly comfortable at the green screen. So the idea of running an OS and a desktop to get to a browser in which you run C++ code seems like an extraordinary waste of resources, no matter how cool it is. Look at what we used to be able to do over FTP and telnet. I hope the first native-client app is a fart app. Because then the world will be complete.
Using Native Client C++ would facilitate the deployment of very high-velocity farts. In addition, the OO features of C++ would simplify the creation and maintenance of additional fart types. Certainly, this would be a fertile area of web development.
Unfortunately I’ve already commented so cannot +1, as that’s easily the funniest thing I’ve read all week.
Is it me or does this look like a cross-platform version of ActiveX?
The big problem with in-browser ActiveX is that it allowed websites to request specific, non-sandboxed code and request that it be installed with nothing more than a simple confirmation dialog.
NaCl uses some very clever static analysis to ensure the code can’t break out of the sandbox, and puts up decent prizes for anyone who properly reports confirmed vulnerabilities in the runtime environment’s API.
Heck, you don’t even need sandboxing to do a proper ActiveX. Just look at how Konqueror uses KParts as browser plugins to allow embed/object for anything with a KPart in the system while still exposing only the same attack surface as normal NPAPI plugins like Media Player and PDF Viewer. (The key there being that the user, not the website, chooses whether a KPart will be used and, if so, which one)
It’s all about making sure you have security measures proportionate to the API you expose.
Yes!
No, it is a cross-platform version of NPAPI. For example, the way you run Flash right now, or QuickTime and RealMedia videos before HTML5.