

The security behind it has already been tested a few times and so far seems fairly solid. The native code has to conform to very strict rules. Sure, there might be holes (but probably fewer than Java ;-) )
BUT Chrome on x86 has shipped with it for some time now, and so far the security nightmare that the "ActiveX!"-screaming masses predicted hasn't happened yet.
They already have. Recall that it was a couple of Native Client bugs that were used to exploit Chrome at Pwnium last year. No doubt it will be part of the attack surface again at this year's Pwn2Own.
http://nakedsecurity.sophos.com/2012/05/24/anatomy-of-an-exploit-si...
Go is a C-like language, a little prettier, but still with a syntax that can become hard to read IMHO.
Sorry, but I'm not sure I get your point as both the officially supported languages are C-like (Java and C++).
However Go is much more than just another C-like language. Syntactically it's concise yet still verbose enough to be readable. It's a managed language but it doesn't make assumptions (unlike some managed languages).
In all honesty, I've only been using it a week, yet it's so easy to pick up that I already feel like I've been programming in the language for months. It really is a joy to use.
Plus its cross-compiling support is child's play. I can write an application on x86, get it working exactly how I want, then just change one compiler flag to create an ARM binary for my Raspberry Pi. I will concede that it's been the best part of 10 years since I've done any cross-compiling in C++, but I'm sure it was never that easy.
Edited 2013-01-23 20:50 UTC
Personally, I find the syntax cryptic, and I would only use it as a last resort.
For example:
func fib() func() int {
    a, b := 0, 1
    return func() int {
        a, b = b, a+b
        return a
    }
}
Some people may feel comfortable with that kind of syntax; I don't. It's as bad as the worst C++, IMHO. Well, not that bad.
Edited 2013-01-23 20:58 UTC
To be honest, that doesn't even look like valid Go (though, as I said, I'm only a week into the language).
However, some of the weirder syntax I've seen in the tutorials was purely a demonstration of how the more advanced C++ routines would be ported (so not really your typical day-to-day functions).
But let's be honest, you can find extreme cases of bad code in any language, so if you're genuinely curious about the language then I'd recommend having a quick browse through the tour: http://tour.golang.org/
But that's the case for all languages. Keeping any sufficiently complicated language tidy takes effort.
Quite honestly, I think it's silly making comments like these when you've not even written one line of code. If nothing else, it smacks of prejudice. So how about you actually give the language a try before passing judgement? Or STFU and let people who have used the language comment.
Edited 2013-01-23 21:25 UTC
But that's the case for all languages.
Nope, some languages iron out this kind of stuff; some fail to do so.
Quite honestly, I think it's silly making comments like these when you've not even written one line of code.
You assume I don't write code. Cute.
Or STFU and let people who have used the language comment.
Why only the people who have used the language? How about those who have reasons to avoid it at all costs?
Edited 2013-01-23 21:31 UTC
It takes time to adjust, but Go's declaration syntax is novel and helpful, especially when dealing with complex types.
Go:
var f func(func(int, int) int, int) int
C#:
Func<Func<int, int, int>, int, int> f;
There's just no sane way to write that with C-style declaration syntax in general. I'm scared to even think what this would look like in C++.
You've been space butchered, but it's not that weird.
The anonymous function is simply a closure that captures the variables a and b, then modifies them both to produce the next Fibonacci number when called.
It's really quite standard. I mean, compare it to a similar Scheme function (mind you, it's been a while):
(define fib
  (let ((a 0) (b 1))
    (lambda ()
      (let ((next b))
        (set! b (+ a b))
        (set! a next)
        a))))
But of course, until you share what language you think does this "better," it's impossible to address your complaint.
Edited 2013-01-23 22:19 UTC
I can read the code:
func fib() func() int {
declares a function that returns a function.
    a, b := 0, 1
declares a and b as integers and initialises them to 0 and 1.
    return func() int {
        a, b = b, a+b
        return a
    }
}
That's the closure, but the difference from its counterparts in JavaScript and C# is that Go lets you mix that example with pointers and a weird array-initialisation syntax.
I said Go is a C like language, why do you keep comparing it with Javascript and C#?
I'm not comparing it with JavaScript or C#. I was just showing that it's a 1-to-1 match with how you'd normally write a function that returns a function that computes successive Fibonacci numbers.
Though since you mention it, the JavaScript version looks identical as well.
function fib() {
    var a = 0, b = 1;
    return function () {
        var next = b;
        b = a + b;
        a = next;
        return a;
    };
}
How's pointer or array syntax weird?
func f() []int {...}
func f() *int {...}
There's nothing weird about those signatures.
Array initialization is almost exactly like C:
[]int{1,2,3,4}
Congrats, you've written shit code with no sense of style and taste. Hope you feel good that even automatic code formatting tools like "indent" are smarter than you. Writing unreadable code to make a statement about style is about as valid as criticizing car safety by driving down a mountain road at 200+mph, IOW nobody will take you seriously.
And that it will make memory management of your code largely unpredictable. And introduce random latency bubbles as the incremental mark&sweep collector decides to run. And that you might start hitting into various OS-enforced resource allocation limits (number of open file descriptors, for example).
GC is good for some things (like large non-performance-critical CRM systems, ERP systems, web apps, etc.), but shit for cases where you need to make careful decisions about available resources and runtime (OS kernels, databases, HPC, etc.).
GC is good for some things (like large non-performance-critical CRM systems, ERP systems, web apps, etc.), but shit for cases where you need to make careful decisions about available resources and runtime (OS kernels, databases, HPC, etc.).
That is also a problem with manual dynamic memory management, though, or any other form of system resource allocation for that matter. Whatever programming language you use, when writing high-performance code, you probably want to allocate as much as possible in advance, so as to avoid having to perform system calls of unpredictable latency within the "fast" snippets later.
Now, your problem seems to be that GC runtimes can seemingly decide to run an expensive GC cycle at the worst possible moment, long after objects have been apparently disposed of. But that's an avoidable outcome, since any serious garbage-collected programming language comes with a standard library function that manually triggers a GC cycle (runtime.GC() in Go, Runtime.gc() in Java, GC.Collect() in C#/.Net). The very reason such functions exist is so that one can trigger GC overhead in a controlled fashion, in situations where it is not acceptable to endure it at an unpredictable point in the future.
Edited 2013-01-24 05:40 UTC
Unless of course you're operating in the real world where resources are finite. Writing high-performance code is a balancing act between various opposing requirements and automatic GC takes that control away from you. It feeds you some of it back in the form of manual GC invocation, but often that is insufficient.
Also please note that you are totally strawmanning my position - I never said anything about manual dynamic memory control.
Good luck with that in the odd interrupt handler routine. If your multi-threaded runtime suddenly decides, for whatever reason, that it is time to collect garbage, I'd enjoy watching you debug that odd system panic, locking loop, packet drop or latency bubble.
And yet there are critical systems making decisions at the millisecond level coded in GC-enabled languages. Go figure.
When people develop in GC enabled languages, or operating systems (AOS for example), they need to write GC friendly algorithms.
This is nothing new, even malloc/free are unpredictable when writing high performance code and most developers resort to custom allocators.
When people develop in GC enabled languages, or operating systems (AOS for example), they need to write GC friendly algorithms.
Precious few, very carefully coded algorithms. Can you point to a piece of such real-time code on the net? So far, all the video codecs, OS kernels, device drivers and HPC libraries I've seen were C/C++.
I didn't say anything about malloc/free. You're reading something I didn't write.
Either because you have a C++ desktop app or game that you want to turn into a web app without rewriting all the core logic in JavaScript, or because you have a web app that you want to speed up by writing a couple of core bottlenecks in C++.
Ugh, ever hear of computation requiring performance??? Say, something like a game; imagine if Crysis were written in some script-kiddie language.
NaCl holds huge promise for games, HPC/distributed computing, or any other computationally intensive app.
Ugh, ever hear of computation requiring performance??? Say, something like a game; imagine if Crysis were written in some script-kiddie language.
NaCl holds huge promise for games, HPC/distributed computing, or any other computationally intensive app.
Here's a thought: writing native C++ for the desktop. I bet it would never catch on, though.
Half of me thinks this is really cool. The other half thinks this is a disaster waiting to happen. Either way, it's an interesting project.
I'm showing my age, but I grew up in the age of big iron and dumb terminals etc., and I'm perfectly comfortable at the green screen. So the idea of running an OS and a desktop to get to a browser in which you run C++ code seems like an extraordinary waste of resources, no matter how cool it is. Look at what we used to be able to do over FTP and telnet. I hope the first Native Client app is a fart app. Because then the world will be complete.
Using Native Client C++ would facilitate the deployment of very high-velocity farts. In addition, the OO features of C++ would simplify the creation and maintenance of additional fart types. Certainly, this would be a fertile area of web development.
Edited 2013-01-23 20:52 UTC
The big problem with in-browser ActiveX is that it allowed websites to specify non-sandboxed code and request that it be installed with nothing more than a simple confirmation dialog.
NaCl uses some very clever static analysis to ensure the code can't break out of the sandbox, and puts up decent prizes for anyone who properly reports confirmed vulnerabilities in the runtime environment's API.
Heck, you don't even need sandboxing to do a proper ActiveX. Just look at how Konqueror uses KParts as browser plugins to allow embed/object for anything with a KPart in the system while still exposing only the same attack surface as normal NPAPI plugins like Media Player and PDF Viewer. (The key there being that the user, not the website, chooses whether a KPart will be used and, if so, which one)
It's all about making sure you have security measures proportionate to the API you expose.