“Over the past year and a half I’ve been spending more and more of my time working with Mozilla’s latest project, Firefox OS. During that time I’ve fallen in love with the project and what it stands for, in ways that I’ve never experienced with a technology platform before.” I’m not convinced just yet. I hope it succeeds, but I just doubt it actually will.
As is currently being discussed to death on Reddit and Hacker News: given the way mobile operators and handset makers behave, and the number of failed attempts at alternative mobile operating systems, Firefox OS is most likely going to be the next corpse.
Since it relies on Android-compatible hardware and the lower layers of the Android stack (kernel/drivers), its potential installation base is quite broad. So it won’t be a corpse by any means.
So far what’s really lacking are devices compatible with a conventional (non-Android) Linux stack. Samsung’s Tizen devices haven’t come out yet to address this, and Jolla’s is supposed to appear next year.
Tizen is much broader than Samsung: http://www.tizenassociation.org/en/
Maybe, but it doesn’t seem that anyone besides Samsung is going to release any devices. I’m not even sure that Samsung really will.
Just curious: where did you get that info?
FYI Huawei is hiring engineers for Tizen
I don’t know much about them. Except that they tried to troll the Opus codec (which is an important part of the open Web) with some weird IPR claims: https://datatracker.ietf.org/ipr/1712/
Not a good reputation, IMO. I have no trust in this kind of participant.
Huawei is just an example: every participant of the TA will have Tizen devices.
About the codec: open source is a learning process for most companies, but you’re right: it’s plain stupid.
I see a contradiction between using HTML5 and JS and wanting it to run on cheap hardware as well as Android does.
I highly doubt the claim in the article about JavaScript games running faster than native Android games. I _really_ want to see some proof of that, because that’s really not what I have seen. Even on the desktop, HTML5/JS is slow compared to other technologies.
Also, it’s really time for web browser developers to start working on a real VM for the browser instead of hacking on top of JS. .NET seems like a good example of how you can have a common API shared across multiple languages.
The slowest part isn’t JavaScript, and hasn’t been for a while.
Communication with the HTML document has always been the slowest part. This is because you are crossing boundaries and the page might need to re-render.
Most of the things people should avoid doing are well known.
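The classic example looks something like this (a quick sketch – ‘list’ and ‘items’ are hypothetical names):

var list = document.getElementById('list'); // hypothetical element on the page
var items = ['a', 'b', 'c'];

// Slow: every assignment crosses the JS/document boundary and can trigger a re-render
for (var i = 0; i < items.length; i++) {
    list.innerHTML += '<li>' + items[i] + '</li>';
}

// Better: build the string on the JS side, cross the boundary once
var html = '';
for (var i = 0; i < items.length; i++) {
    html += '<li>' + items[i] + '</li>';
}
list.innerHTML = html;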
If your Firefox installation supports WebGL properly (see about:support), you can run WebGL games fairly close to what the hardware can give you. Just see the demo at: https://developer.mozilla.org/demos/detail/bananabread
For example, I use https://addons.mozilla.org/en-US/firefox/addon/pdfjs/ which is a lot faster than loading some plugin or reader. The rendering itself isn’t actually much slower.
The slowest part of the language itself is that it is dynamically typed. But that has largely been solved by detecting types at runtime and by introducing typed storage for large sets of data (Typed Arrays).
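For example, a rough sketch of what a Typed Array buys you:

// A Float32Array has one fixed element type, so the engine doesn't have to
// box values or re-check types on every access the way it can with a plain Array.
var coords = new Float32Array(3 * 10000); // x,y,z packed per point
for (var i = 0; i < coords.length; i++) {
    coords[i] = i * 0.5; // stored as a raw 32-bit float
}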
Java can be very close to C/C++ performance for a number of things, but definitely not everything. And the startup performance of Java usually sucks.
With proper type support, JavaScript is now about twice as slow as Java or C for most things.
If you remember that languages like PHP or Perl are 100 times slower than C, you can see that’s not bad at all. For example, Lua without JIT is something like 50 times slower than C.
Lennie,
I have to admit that’s a cool demo. It only partially worked in Firefox before an update. Hopefully Mozilla doesn’t make a habit of imposing version incompatibilities (to be fair, at least this one was documented).
I think they may have gotten the high-res and low-res modes reversed, because high res worked OK (though it did pause briefly at intervals for me) and “low res” was unusable.
The graphics look great; obviously OpenGL should render the same under the control of any language. Also, with the resources being preloaded, even a modestly performing language should be able to run fluidly – at least until the number of in-game objects imposes a game-engine bottleneck.
All in all it’s certainly impressive. Nevertheless, as a developer I would prefer to build on top of a well-tuned native game engine rather than one implemented in JavaScript. For my own curiosity, I compared GCC and Firefox on these two tests. I’ll share those results here:
#include <stdio.h>

int main(int argc, char *argv[]) {
    int x=0;
    for(int a=0; a<50000; a++) {
        for(int b=0; b<a; b++) {
            x=(x+(a^b))&0xffff;
        }
    }
    printf("%d", x);
    return 0;
}
<script>
var t0=new Date().getTime();
var x=0;
for(var a=0; a<50000; a++) {
    for(var b=0; b<a; b++) {
        x=(x+(a^b))&0xffff;
    }
}
var t1=new Date().getTime();
document.getElementById('out').innerHTML=x + ' ' + (t1-t0) + ' ';
</script>
GCC=1.26s
JS=4.1s
(I had to actually change my initial test because javascript was timing out.)
I’d say that’s not bad at all, all things considered. Here’s a more difficult test to measure object overhead.
#include <stdio.h>
#include <stdlib.h>

/* COORD was not defined in the original post; a struct of three floats is implied by the usage */
typedef struct { float x, y, z; } COORD;

int main(int argc, char *argv[]) {
    float x=0;
    unsigned int count=10000;
    COORD *coords[count];
    for(unsigned int i=0; i<count; i++) {
        coords[i] = malloc(sizeof(COORD));
        coords[i]->x=i;
        coords[i]->y=i;
        coords[i]->z=i;
    }
    for(unsigned int j=0; j<count; j++) {
        for(unsigned int i=0; i<count; i++) {
            coords[i]->x=(coords[i]->x+coords[j]->y)/2;
            coords[i]->y=(coords[i]->y+coords[j]->z)/2;
            coords[i]->z=(coords[i]->z+coords[j]->x)/2;
        }
    }
    for(unsigned int i=0; i<count; i++) {
        x+=coords[i]->x+coords[i]->y+coords[i]->z;
        free(coords[i]);
    }
    printf("%f", x);
    return 0;
}
<script>
var t0=new Date().getTime();
var count=10000;
var x=0;
var coords=new Array(count);
for(var i=0; i<count; i++) {
    coords[i] = new Object();
    coords[i].x=i;
    coords[i].y=i;
    coords[i].z=i;
}
for(var j=0; j<count; j++) {
    for(var i=0; i<count; i++) {
        coords[i].x=(coords[i].x+coords[j].y)/2;
        coords[i].y=(coords[i].y+coords[j].z)/2;
        coords[i].z=(coords[i].z+coords[j].x)/2;
    }
}
for(var i=0; i<count; i++) {
    x+=coords[i].x+coords[i].y+coords[i].z;
    coords[i]=null;
}
var t1=new Date().getTime();
document.getElementById('out').innerHTML=x + ' ' + (t1-t0) + ' ';
</script>
GCC=0.34s
JS=3.07s
That’s pretty bad. Anything that makes heavy use of objects is going to suffer.
I ran some more tests to eliminate allocation & initialisation overhead with similar results. On the one hand, this could be great for a “scripting language”, but on the other it would be very disappointing if this scripting language was aiming to replace native ones in practice since 90% of the CPU utilisation might be lost to language overhead.
This would cut your runtime roughly in half:
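Something along these lines – a sketch of the conversion I mean, not the exact code I ran:

// Same computation as above, but with the objects replaced by one flat
// array (x at 3i, y at 3i+1, z at 3i+2) to avoid property-lookup overhead.
var t0 = new Date().getTime();
var count = 10000;
var c = new Array(count * 3);
for (var i = 0; i < count; i++) {
    c[3*i] = i; c[3*i+1] = i; c[3*i+2] = i;
}
for (var j = 0; j < count; j++) {
    for (var i = 0; i < count; i++) {
        c[3*i]   = (c[3*i]   + c[3*j+1]) / 2;
        c[3*i+1] = (c[3*i+1] + c[3*j+2]) / 2;
        c[3*i+2] = (c[3*i+2] + c[3*j])   / 2;
    }
}
var x = 0;
for (var i = 0; i < count; i++) {
    x += c[3*i] + c[3*i+1] + c[3*i+2];
}
var t1 = new Date().getTime();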
Not that it matters much… I’m not suggesting this as an optimization; I just wanted to point out that only about half of the performance difference is due to object access overhead in your example – the rest is just pure overhead from it being interpreted. Converting from objects to arrays (with a few minor optimizations here and there) only cuts the runtime in half.
When I profile this in Chrome, 90% of the execution time ends up being engine overhead (parsing, compilation, etc.). The actual time spent running the meat of the code itself is only about 10-15ms.
Just saying: while scripting overhead is not exactly a fixed cost, it tends to be inversely proportional to the amount of code you write. In other words, the bigger and more complex the application is, the less it matters…
In a contrived example like this, all you are really demonstrating is that a scripting language has high overhead costs relative to a compiled language when your code is essentially doing nothing useful in a tight loop.
I wouldn’t make the argument that JavaScript is an ideal language for doing computationally intensive stuff – but then again, that isn’t what it is for. Still, it isn’t half bad compared with other interpreted languages…
galvanash,
“When I profile this in Chrome 90% of the execution time ends up being engine overhead (parsing, compilation, etc). The actual time spent running the meat of the code itself is only about 10-15ms.”
Just to be clear, this example did NOT measure script parsing overhead, which was mere milliseconds on my machine and an acceptable one-time cost. What was slow was the run time inside the loop – several seconds in each case. I’m baffled why you’re claiming 90% of the execution time is due to “parsing, compilation, etc”? Obviously we can’t be talking about the same thing.
“Converting from objects to arrays (with a few minor optimizations here and there) only cuts the runtime in half. ”
There are several ways we could have eliminated the objects, but given that they were what I was measuring the overhead of this would have defeated the point.
“In a contrived example like this – all you are really demonstrating is that a scripting language has high overhead costs relative to a compiled language when your code is essentially doing nothing useful in a tight loop.”
Maybe I could have/should have implemented a vector multiplication benchmark to be less contrived, but I doubt it would have improved the performance of JavaScript objects, which is still a problem.
“I wouldn’t make the arguement that Javascript is an ideal language for doing computationally intensive stuff – but then again that isn’t what it is for.”
Fair enough, but if HTML apps continue to displace native ones for things like game engines, that’s kind of where we’ll end up.
“Still, it isn’t half bad compared with other interpreted languages… ”
Agreed – I think it’s a pretty good language, and the implementations of it aren’t bad either.
Sorry, I wasn’t clear. You’re right – we are not talking about the same thing. I wasn’t trying to imply that the code above only takes 10-15ms to run in wall time – it takes (on my machine) 1.45s.
What I was saying is that, minus the 10-15ms, the rest of the time is purely parsing, compiling, memory management, type inference, etc. – i.e. non-user code or system code. The actual amount of CPU time spent executing _only_ the actual loops and doing the calculations is 10-15ms. If you run it in Chrome and look at the CPU profiler you will see what I am talking about.
In other words, in a perfect world where JS was a compiled to machine code language and memory allocation was perfect and could be done in advance – the code would take around 15ms to run, which is probably very comparable to GCC (if you rewrote the GCC code to allocate the whole chunk of memory in one shot, for example).
But it is obviously not fair to discount the rest of that time – memory allocation counts too and is real overhead. It’s just that since JS does this automatically it isn’t done in user code – so it ends up being counted as “program” time by the profiler. It still matters, of course.
I was simply pointing out that it isn’t purely because of object access. Object access is slower of course, but the bulk of the time is actually spent on engine level stuff (like memory allocation).
That is, by and large, where you will end up seeing all the time go relative to something like GCC – memory allocation, compilation, and garbage collection overhead. Other things (like differences in object access performance) tend to be eclipsed by it. Object access patterns are slower, and that is a real problem, but it isn’t the bulk of the performance delta you’re seeing.
That is my point though – only about 40% of that overhead you measured is due to object access. I removed it and the code runs 2x as fast, but it is still more than 50x slower than GCC. The big difference is memory allocation and general engine overhead.
Like I said, that isn’t a fixed cost – it is highly dependent on the code. But it does tend to become less and less significant the larger and more complex the code base becomes.
galvanash,
“I was simply pointing out that it isn’t purely because of object access. Object access is slower of course, but the bulk of the time is actually spent on engine level stuff (like memory allocation).”
If I understood you correctly, you may be suggesting that JS only spent ~20ms (we’re handwaving here) doing actual work and the rest was language support overhead. That’s a peculiar thing to say, but I guess it’s plausible. However, I don’t see why it changes anything, because the overhead is still there regardless of what you attribute it to. Hopefully there is room for improvement.
“That is my point though – only about 40% of that overhead you measured is due to object access. I removed it and the code runs 2x as fast, but it is still more than 50x slower than GCC. The big difference is memory allocation and general engine overhead.”
Well, just for kicks I’ve rerun my original code but with a new timer around the inner loop.
entire script = 3.032s
inner loop = 3.027s
So I think we can rule out memory allocation overhead as the culprit (unless JavaScript is continuously reallocating memory unnecessarily in the inner loop?). Still impressive compared to JS engines from a few years ago, but I think further optimisation is going to be increasingly difficult.
“Like I said, that isn’t a fixed cost – it is highly dependent on the code. But it does tend to become less and less significant the larger and more complex the code base becomes.”
You know I can’t let you get away with a statement like that without some kind of evidence
Well… I withdraw most of what I posted…
It turns out Chrome’s profiler does some very weird shit when you are profiling naked code in the global scope. I rewrote the code so that it is contained in a function, so that all the variables are local.
The timing didn’t really change, but what changed was where Chrome was reporting % of time spent…
Long story short – Array access is faster than Object access, but either way that is where the majority of time is being spent in your test code.
Hell – I don’t know exactly what Chrome was reporting before when it was showing 10-15ms… It ends up reporting things completely differently once you get out of the global scope. I just assumed it was reporting things correctly… sorry about that.
If you instead did something focusing on the calculations involved and less on the object access (obviously this won’t give the same result)…
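Roughly like this – a sketch of the kind of thing I tried, not the exact snippet:

// Same arithmetic, same number of operations, but scalars instead of
// objects, and a single control variable driving one flat loop.
var t0 = new Date().getTime();
var count = 10000;
var cx = 1, cy = 2, cz = 3;
for (var i = 0; i < count * count; i++) {
    cx = (cx + cy) / 2;
    cy = (cy + cz) / 2;
    cz = (cz + cx) / 2;
}
var t1 = new Date().getTime();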
You end up with this taking only about 6ms on my machine… Same operations, same number of times – just no object access and a simpler loop with just one control variable.
So all in all I would say your original observation was dead on – most of the delta between GCC and JS in your example is purely object access.
galvanash,
“So all in all I would say your original observation was dead on – most of the delta between GCC and JS in your example is purely object access.”
The question becomes how to optimise it. I think you may have been on the way to this idea before getting sidetracked by my benchmark: in theory the JS compiler might convert the dynamic object into a static structure under the hood. But I see that as a difficult challenge, because in JS we don’t know which members to allocate prior to execution. A simple analysis may work for simple scripts, but I can conceive of other cases where the compiler would have to run the entire script before being able to determine what static structure it should be using.
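A hedged sketch of why that inference is hard (hypothetical code):

function make(flag) {
    var o = { x: 1 };  // an engine could guess a fixed layout here...
    if (flag) {
        o.y = 2;       // ...but this member only exists on some paths, so
    }                  // objects from the same site end up with different
    return o;          // shapes, and no single static structure fits them all
}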
We probably ought to revisit this topic on a new article with more time to discuss it.
What usually happens with game engines is that they port an existing engine (like the demo) to JavaScript with the use of https://github.com/kripken/emscripten
I think this is the talk/demo which explains it best:
http://blip.tv/jsconfeu/alon-zakai-emscripten-5666035
I agree that most of the problem is the document/JS boundary.
Still, I’ve played a bit with WebGL, and it can be quite fast when using static scene data, where the JS code only uploads the data to the graphics card and that’s it. If you start doing things in JS between each frame (like trying to add a physics engine or animate lots of stuff), then performance quickly drops.
That being said, I would be happy to be proven wrong. It’s just that claiming that JS games on mobile* run just fine without any proof triggered some alarms.
* I’ve written a small Android game and I had to write most of it in C++ to get decent performance on low-end devices.
Game engines are ported from C/C++ to JavaScript with emscripten:
https://github.com/kripken/emscripten
I believe this is the talk/demo you might want to watch:
http://blip.tv/jsconfeu/alon-zakai-emscripten-5666035
I had to go back and re-read that claim – and the claim is that Firefox OS runs web apps faster than the Android *browser*, not faster than native apps. And that claim is perfectly believable to me.
Worst case, it will popularize some phone APIs, like SMS. I hope it succeeds, but I just don’t believe in it that much.
Did you know that browsers will soon support a whole lot more than just SMS?
How about an API for doing real-time communication, like video chat or VoIP calls? It’s called WebRTC. RTC == Real Time Communication.
WebRTC is a set of APIs:
– MediaStream: granting web apps/sites access to the camera and microphone on your computer, via the getUserMedia API.
– DataChannel: communicating data peer to peer.
– PeerConnection: enabling direct peer-to-peer connections between two web browsers for audio and video.
Basically, it allows for:
reliable (TCP) and unreliable (UDP) peer-to-peer encrypted communication between two or more browsers, or between a browser and a server, with NAT traversal. Suitable for video, audio and any other data.
Basically a built-in Skype-like API.
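For example, asking for the camera and microphone looks roughly like this (a sketch using the vendor-prefixed names browsers ship today; ‘preview’ is a hypothetical video element):

navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia;
navigator.getUserMedia({ video: true, audio: true },
    function (stream) {
        var video = document.getElementById('preview');
        // Chrome-style wiring; Firefox's early builds use mozSrcObject instead
        video.src = window.URL.createObjectURL(stream);
        video.play();
    },
    function (err) { console.log('getUserMedia failed: ' + err); });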
If you have Firefox installed you probably also now have support for this really cool new free audio codec:
https://hacks.mozilla.org/2012/07/firefox-beta-15-supports-the-new-o…
The Opus codec is mandatory-to-implement for browsers that support WebRTC. Which will probably include browsers from:
– Mozilla
– Opera
– Google
– Microsoft
Apple is still keeping quiet.
It can also be combined with traditional VoIP and thus old style phone calls.
There even seems to be interest from telcos.
It gets even more interesting when you start combining it with other things:
http://www.youtube.com/watch?v=aK1DC2zp6ZE (playing Chess while you can see your opponent) I’m sure people will come up with even better ideas.
Yes, I know, and that is exactly my point. Mozilla is pioneering here, so even if the OS platform as a whole fails miserably, their work must be credited.
This sounds every bit like WebOS. As a former Pre owner, I love the idea, but am completely skeptical about the success of this.
Firefox OS is the most forward-looking effort I have ever seen from the open-source/standards-worshipping crowd. The overall strategy has a lot going for it, when it comes to user rights and freedoms (an open platform), market forces (getting apps will get easier and easier), and from a technology perspective (it’s easy to profit from the work of a wider community). Looking forward to this. Interestingly, Microsoft has also caught on to the wisdom of this move, having made HTML5 apps first-class both in the runtime and the dev tools in Windows 8 – surprisingly more forward-looking than Google when it comes to the web. Apple is also falling behind in this direction.
On the surface it may look like users look down on webview apps, but they are getting better and better, and in many cases the users aren’t complaining simply because they don’t realize that it is a webview app they are dealing with.
Microsoft has moved HTML5 tooling forward a few years with Expression Blend.
However, Windows 8 JavaScript apps are decidedly Windows 8 only. The knowledge carries over, but it’s not a write-once-run-anywhere deal.
“Works best in IE6” websites didn’t work well in Firefox 1.0, either. A few years later, Microsoft was desperately trying to kill IE6 in favor of IE8 – which worked well with Firefox-compatible (standards-based) sites.
Not saying that David will beat Goliath again, but he’s 1-0. I’m not betting against him.
(I’m anxious to try a FirefoxOS device now. I love the scrappy underdog…)
Web-based tools and languages don’t even work properly for the web nowadays. JS, for example, is bloated and inadequate, and people try to make it do things it was never designed to do… resulting in hacks upon hacks upon hacks.
HTML is a MARKUP LANGUAGE; it is meant to do the layout, framing and formatting of a DOCUMENT.
So why would it be a good idea to build an entire mobile operating system around this?
I find the situation to be better than it ever has been to be honest. It is certainly better than it was 10 years ago.
How? Why? It has its warts, no doubt about that, but honestly I find it to be a wonderfully useful language. It is also extremely powerful and expressive – it is just tied up in an unfortunate straitjacket of C-like syntax. Sure it can be ugly, but it works – and it works well.
Anyway, that is becoming less and less of an issue, because it is (for better or worse) becoming quite common as an intermediary language for other languages to target (GWT, CoffeeScript, etc.).
That is more of a people problem than a language problem… It happens with all languages, just more so with popular ones.
The same reason many modern GUI layout tools use markup-like systems (XAML, Glade, XUL, etc.) – you need to do layout, framing, formatting, etc in any GUI…
Besides, you’re twisting the premise a bit. Firefox OS is not “built around” HTML; it is built around Gecko…
Gecko is a powerful, extremely feature-rich layout engine – light years beyond most purpose-built layout systems used in most OS stacks, if feature set is your measuring stick. The same goes for WebKit and other browser engines. Sure, they may not perform as well in certain scenarios, and they all sprawl quite a bit, but what they lack in speed and refinement they make up for in sheer flexibility.
It is a crime to waste the amount of optimization and research that went into these engines – why wouldn’t you want to use them for GUI layout?
Just saying… What is so different between something like this and something like Glade or XAML?
Relative to other languages, it is still in the stone age. Better than 10 years ago isn’t really an excuse. Web tooling is pathetic.
Besides the warts, it is inherently difficult to make fast. Making something which is almost axiomatically slow the bedrock of web technology is foolish.
It’s barely palatable on the web; do not push it into the app space, where there are much higher expectations. People have come to expect the web to be a suboptimal experience.
Use a real intermediary language. Don’t shoehorn JavaScript into that position.
You know things are bad when a selling point of JS is “it’s good because it’s so ugly that others hide it as much as possible”.
Correct. Mozilla has a people problem. Probably a common sense deficiency too.
XAML is for marking up applications. It has a 1:1 mapping to the .NET object model. Glade is closer to XAML than it is to HTML.
Just because they’re all markup doesn’t mean they’re all the same. HTML is almost comically bad at marking up applications.
The HTML layout model is a mishmash of 100 bad ideas.
This is exactly what happens when you design by committee. I’m sure things will get better in another 10 years. Not.
I was talking about web development in general – not comparing it to other languages… Besides, it makes no sense at all to mix metaphors like that; a language itself has next to nothing to do with development as a whole – it is just a small cog in the machine.
And web development tooling is great imo – some people just don’t like having 500+ options to choose from I guess… There doesn’t have to be a “one true way” answer to every problem.
That is categorically false – you have no idea what you are talking about. Modern JS engines are extremely performant – all the more impressively so when you consider they have to work directly from human-readable source code. Sure, compared to C, Java, and C# they have weak spots – but those are compiled languages (or at the least byte-code compiled) – and even then it is usually only a factor of 3 slower…
Compared to other straight-up runtime interpreted languages? Extremely competitive, often greatly superior. I do not know where you get the idea that it is slow…
Regardless, it is a stupid argument anyway. Languages are not fast or slow – interpreters and compilers and binary executables are – and they can be quite easily improved without changing a language. Even then, it makes little difference in the grand scheme of things. There are things JS is extremely good at (async programming being one of them) and some things it isn’t (low level byte manipulation, etc.) The same arguments apply to any language. It isn’t all about how fast it goes at the end of the day.
Yes, the web is a suboptimal experience, for lots of reasons – design-by-committee feature sets, incompatibilities, etc. No argument. It’s also constantly changing and a challenge to keep up with. But people overlook what it buys you…
Sure, it is much easier to write a C# app targeting .NET, or a java app targeting a JRE… The fact is neither of those will ever be universal.
There are 3 major mobile platforms – Windows Phone, Android, and iOS. They have 3 completely incompatible development frameworks, but they can all run HTML5 apps quite well… So can desktops (any OS), so can just about anything with a chip in it. HTML5 makes up for its shortcomings by being deployable almost anywhere – and not in the “run anywhere” fairytale land of Java where it is a figurative statement – I mean really deployable anywhere.
Makes perfect sense to me to cut out the middleman and build a mobile OS designed with HTML5 apps as first-class citizens… You can turn up your nose at it all you want – I’ll still have a job in 10 years.
If what you want is a human-readable intermediary language, it is as good as any other. If you want byte code, then we are talking about two different things… Byte code will never fly on the internet (it’s been tried – and failed terribly – at least twice)…
If you write .NET apps you are worshipping at Microsofts altar. If you write Android apps you are worshipping at Google’s altar. If you write iOS apps you are worshipping at Apple’s altar.
I don’t have to worship at anyone’s altar – I don’t care who wins. I make a comfortable living and have for 15+ years. I like iOS development too, as well as C# and a few other platforms – but none of them hold a candle to web development when it comes to reach.
I didn’t say that was a selling point – just pointing out reality. Syntax isn’t everything – CoffeeScript and JavaScript are in fact the same language semantically; the only difference is syntax – and CoffeeScript is quite lovely to look at. Syntax can be fixed quite easily, and it eventually will be.
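For example, this CoffeeScript and the JavaScript it compiles down to mean exactly the same thing – only the surface differs:

// CoffeeScript source:   square = (x) -> x * x
// compiles to roughly:
var square = function(x) {
    return x * x;
};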
The point is that semantically javascript is a great language for its current use case – which is wiring up logic to GUIs.
You’re arguing about the semantics of the markup language… What does any markup language need? A fast parser, a good interpreter, a layout and rendering engine…
Gecko???
You don’t like the semantics of HTML – that’s fine. It is not ideal and probably never will be. But it is flexible as hell and gets better as time goes by.
And yet it is still around after 20 years, and becomes more and more ubiquitous as time goes by. It changes when it needs to. Things like XAML will be a faded memory in 5 years or so…
Comparing is the only way to go about such things, since the argument is that JS+HTML is a god-awful choice for app development.
The state of the tooling is laughable compared to C#; there is no comparison when it comes to things like debugging. Hats off to the JS support in Blend and Visual Studio – those developers are wizards – but it’s still comparatively terrible.
You say “fast despite inherent limitations” (non-static typing leading to stupid design and precluding compile-time assumptions, no byte-code compilation, etc.) and I say “slow because of inherent limitations”. It’s two sides of the same coin.
The fact that JS is as fast as it is is a testament to the immense skill of the engineers behind the JS engines. It doesn’t mean the language is conducive to speed at all; in fact, a more reasonable language in the browser would almost certainly be much faster.
Actually, no. JS has some uniquely JS features which make it slower than it should be. Namely, its type system makes it difficult to do type-based optimizations at the JIT level. Sure, you can do cool type inferencing, but you quickly run up against an even more limited time budget than traditional JIT compilers have.
Here we come to a fundamental disagreement. I categorically reject the notion that write once, run anywhere is desirable. I didn’t buy it when Java said it, and I don’t buy it now. It leads to a poor and confusing user experience.
I believe in code sharing between mobile apps by using common back ends with native front ends. On the web, I’m cool with JS and HTML. Let the web be the web. But for Christ’s sake, let apps be apps.
I would’ve offed myself years ago if I had to deal with web technology.
I’m interested in the instances where it’s failed, but ultimately, this is a discussion about HTML and JS for use as a fundamental app platform.
Syntax and performance are separate things. Only in JS are things slow because of the syntax. You can arguably fix one part of it using CoffeeScript, but you can never fix the second, since you still compile down to JS along with its limits.
They are used differently. HTML defines structure. XAML is a declarative way to instantiate .NET objects. It can easily be extended to do whatever you want, it can map to arbitrary .NET objects.
You can’t really do that in HTML, which is a shame, because that alone would be a tremendous improvement. You’re stuck with awkward solutions, which is funny because it makes your code less declarative.
Besides, XAML has the concept of controls, events, properties, data binding, static methods, etc.
Just because XAML is markup and HTML is markup doesn’t make HTML good because XAML is good.
So your original “but you’re okay with XAML, what’s so wrong with HTML” statement is wrong.
You’re aware XAML is a key part of the Windows 8 app platform, right? You’re aware that the XAML team is part of the Windows Division, right?
XAML going anywhere is a pipe dream. It’ll be on 800 million devices in a year’s time.
I said web development… as in JS as it applies to web development is better than it was 10 years ago. There is no comparison there, because there is nothing to compare it to.
Whatever though – I get your point.
What’s missing? I have a debug console. I can step through code. I can set up watches on variables. I can analyze objects and browse their properties. I can even fiddle with values at runtime and change code as I’m stepping through it…
Again, what is missing exactly? I think you are using the wrong tools…
Now you’re getting into religious arguments. Personally, I would argue that static typing is just a compiler optimization – it isn’t a language feature. It’s a stupid design decision because its primary purpose is to make compilers faster – it isn’t about developer productivity at all. It’s about forcing developers to manually disclose things that a computer can easily figure out at runtime.
To each their own on that one – I use 7 or 8 languages routinely, about half statically typed and the other half dynamic. I prefer dynamic any day of the week; I don’t like having to put training wheels on my code.
I’ll give MS credit for allowing the CLR to do it either way, though – if they didn’t allow dynamic typing, the number of languages running on it would have stopped at 1…
The exact same thing can be said about C# or Java. Time, money, and good engineering can turn a weakness into a strength…
Again – “features” that exist to make compilers happy and developers sad don’t interest me much…
Yep. Pretty fundamental.
I would argue that the user experience depends more on the developer than on the tools used. I’ll even concede it is much easier to create a consistent user experience when using a full-stack application development framework – as long as you are only targeting that single stack. I often care more about how many screens I can reach – and it is far easier to spend the time and write one good UI in HTML5 that you can deploy everywhere. It depends, of course – it isn’t good for everything – but that doesn’t make it bad.
I do too for the most part. I’m pragmatic – sometimes it is easier that way to get a good result. But I find I choose that route less and less often because the JS/HTML approach is improving so much. I’m not religious about it – I just think saying it is crap and nonviable is extremely short-sighted…
Come on now… It’s fun! Having to relearn everything every 3 years keeps you on your toes.
Um… Java. Flash. Silverlight for the most part, too. I’m not saying they are dead for app development, but they are all dead for general-purpose web development…
You’re talking about dynamic typing again, I assume… I’ve said enough on that, I think.
Look at Meteor. Or AngularJS. Or Knockout. Or about 20 other up-and-coming declarative frameworks. Same thing… That is my point – HTML eventually adopts anything worth doing. It might not be quite as elegant and refined as XAML, but in the long run it won’t matter, because it doesn’t need yet another runtime in order to work.
Markup is markup. XAML is not good because of the markup language – it is good for the reasons you stated above, which are features of the runtime. Most of the same features can be (and have been) applied to HTML/JS…
800 million devices with a 2-3 year lifespan… We’ll see how it goes. I’m actually a fan of Windows 8 – I’m not knocking it at all. My point was that what they call XAML now will likely be replaced with the next new paradigm in the next 10 years or so. There is always something better around the corner – things like XAML fade away (remember OWL, MFC, etc.), while web technologies evolve and adapt…
I think you two are getting overly hyped up over what is, at the end of the day, a matter of preference about which technologies you like better.
galvanash,
Yes, JavaScript promises to lower the barrier to entry for app development. But you’ve said a lot of things I disagree with. Like static typing… it does have a use (though whether you benefit from it is a different matter): it explicitly limits the number of states a variable may hold and helps catch errors at compile time.
In some languages, string concatenation and arithmetic addition are separate operators to help resolve the ambiguity, but javascript depends on inferred typing.
var a = x.foo() + 'x'; // what is a?
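Concretely, operand order alone flips the meaning of ‘+’:

console.log(1 + 1 + '1'); // "21"  – numeric addition first, then concatenation
console.log('1' + 1 + 1); // "111" – concatenation all the way through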
For this reason, I’d argue javascript is a poor choice for mission critical applications like those at NASA. There are more problems, like the inability to define proper “classes”, which make javascript both slower and less error proof. Again, maybe you prefer not to use classes and find the prototype substitute good enough for you. But whether it’s an enhancement or restriction of the language depends on your point of view – it’s a matter of opinion.
Also, we’ve built other languages on top of JavaScript because browsers don’t give us much choice, not because it particularly makes sense to do so otherwise.
Nelson,
You are right that no single language is good for everything. But you must accept that portability is very important and useful to many developers and users, they should be able to write once, run anywhere without needing to rewrite it again… You can dismiss portability for yourself, but it really doesn’t make sense to dismiss the utility for others.
Exactly. I don’t like languages that limit the number of states a variable may hold. And the types of errors that are actually caught by type checking at compile time are generally trivial typos and such.
That said, if you do proper unit testing (something you should be doing in either type of language) all the pros of static typing disappear – except for it being faster to compile. Hence why I said that is its only real strength.
a is a variable
Most of the space stuff tends to use plain old C, which might be statically typed but it isn’t strongly typed – which is arguably just as bad from an “avoiding errors” point of view…
Then again, I have read they use quite a bit of Python at JPL – Python isn’t all that different from javascript fundamentally – both are dynamically typed. It sure is prettier though…
How does using prototypes instead of classes make things slower and less error-proof? It’s just a different way to skin the cat – they both get the job done equally well if you understand the paradigm.
I don’t consider prototyping a “substitute” for classes – it is the sane way to do it.
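For instance, a minimal sketch of the prototypal way:

function Coord(x, y, z) {
    this.x = x; this.y = y; this.z = z;
}
// one shared function object on the prototype, not a copy per instance
Coord.prototype.lengthSquared = function () {
    return this.x * this.x + this.y * this.y + this.z * this.z;
};
var c = new Coord(1, 2, 3);
console.log(c.lengthSquared()); // 14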
Agreed.
Whether it’s JavaScript or some other language, it makes no sense to deliver an opaque intermediary-language representation to a browser that is designed to render human-readable markup. The human-readable part is important for a multitude of reasons, and that includes the logic too…
galvanash,
“Exactly. I don’t like languages that limit the number of states a variable may hold.”
Other than an ADT, can you give me an example where storing multiple types in one variable is good practice?
VB supported both typed and untyped code, plus it did not overload ‘+’ as a concatenation operator; both of these traits help resolve ambiguities we have in JavaScript.
“That said, if you do proper unit testing (something you should be doing in either type of language) all the pros of static typing disappear – except for it being faster to compile.”
This is not it at all. Throw out compilation speed altogether and you’ll still find that some programmers prefer rigid constraints to inferences. If I know that I want a boolean, there’s no reason the language should force me to use a variable that can store a float, an integer, a string, an object, an array, etc. For me, this is a shortcoming of JS REGARDLESS of performance. Even if variants are better in your opinion, they’re not better for everyone.
“How does using prototypes instead of class make things slower and less error proof? Its just a different way to skin the cat”
The problem with JS objects is the same as the problem with its variables: it doesn’t permit the programmer to be explicit about his intentions at compile time.
As for performance, JavaScript’s objects are actually hash tables – very powerful, but nowhere near as efficient (in space or computation) as a static language’s memory structures. While it is true that JS objects are more flexible at run time, that benefit is completely lost on programmers who (in their own opinion) would benefit more from compile-time structure/type checking.
“Whether its javascript or some other language – it makes no sense to deliver an opaque intermediary language representation to a browser that is designed to render human readable markup.”
I don’t think HTML or JS is intrinsically human-readable unless the code was written by a human. Ever tried to use a JS optimizer to remove variable names/spacing/comments? It’s impossible to read again without reverse engineering it. So to the extent that a human wrote the code, having the source is nice (even if irrelevant to most users). But I doubt many devs will have reason to work with the intermediate representation even if it is JavaScript.
So for running many languages in the browser, we’d probably be better off with a well defined bytecode that can easily be decompiled back into source form (tools exist to do this for java).
It makes perfect sense. I’m not in the business of caring about what lazy developers prefer. Anyone using HTML/JS to write a run-anywhere app is pretty much the worst kind of developer I know.
Wake up, developers. At one point, people were more principled than this.
Nelson,
“It makes perfect sense. I’m not in the business of caring about what lazy developers prefer. Anyone using HTML/JS to write run anywhere app is pretty much the worst kind of developer I know.”
Wow Nelson, I think you might have initially started out defending a sound principle, but your argument is becoming absurd. Writing portable code is ultimately about not having to reimplement the same thing everywhere. You can call it lazy if you want, but I’ll call it being efficient.
The WWW, for all its problems, could not be as ubiquitous and transparent as it is today if everyone used their own protocols and different standards. Unless you believe in segregating the online population, it’s a good thing that everyone can visit each other’s websites with the expectation that they will work everywhere else.
You do this really naive thing where you assume you’re the moral arbiter of programming and I’m supposed to care about your opinion.
Also, my apps feel more native, perform better, and I achieve comparable productivity, compared with just slapping together an alien-feeling HTML5 website, stuffing it into an app, and calling it a day.
Nelson,
“You do this really naive thing where you assume you’re the moral arbiter of programming and I’m supposed to care about your opinion.”
You don’t have to care about my opinion, but can you read your own post in context and tell me it’s not pure arrogance?
http://www.osnews.com/thread?535086
You allude to principles, however you’ve failed to lay them out in the course of this discussion. We’ve highlighted a shortcoming shared by many native platforms, but instead of acknowledging the truth in that, you resorted to ad hominem attacks against all web developers. Regardless of what you think of my opinion, you should at least retract that post, since it can’t possibly be true; some of the best developers in existence will use HTML/JS despite its shortcomings BECAUSE they want to reach the largest audience.
“Also, my apps feel more native, perform better, and I achive comparable productivity with just slapping together an alien feeling HTML5 website and calling it a day by stuffing it into an app.”
Well Nelson, here you are using an HTML news & discussion board – what happened to your principles? You’re obviously not a hypocrite, so where’s the link to the native version of OSNews? I’d like to give it a try!
My point being that the web does have its uses, and even if some of us prefer native apps (including myself, btw), we have to admit that they fail to reach as wide an audience as exists on the web.
Instead of denying the benefits of portability, we’d be better off making native frameworks MORE portable, so that they could run anywhere regardless of OS/platform. Go ahead and disagree with my opinion, but for God’s sake drop the more-principled-than-thou talk.
My principle is that user experience should never suffer because of a need to reach a wide audience. However, as I’ve said in earlier comments, this applies exclusively to APP DEVELOPMENT. I’m fine with using HTML and JS and the WWW as a least-common-denominator website; just don’t stuff it in an app package and pretend it’s an app.
A good example is Facebook. They have apps for the major platforms, and a least common denominator website for everyone else.
I’ve stated this all before, don’t confuse your unwillingness or inability to read with me not having stated my position before.
Again, you mischaracterize my position because you’re too busy grandstanding to read for comprehension. I think the web is a good idea – on the web. Where I am vehemently against such an atrocity is when it is packaged and advertised as an app. It looks like a duck, and quacks like a dog.
You are really the most annoying type of person to talk to. A crucial prerequisite for being a smartass is to actually know what you’re speaking about.
In fact, I’ve already spoken well of portability, and I employ various measures to ensure a high degree of code sharing every day. Across multiple platforms. Without compromising the user experience or performance.
Not by trying to arm-wrestle HTML into something it was never meant to do, not by inducing suicidal thoughts by maintaining large gobs of JavaScript, but by writing a portable back end in C# with a native front end for each of the platforms I support.
My principled jab is at the people who, when strictly talking about app development, would throw away user experience for the sake of having a write-once-run-anywhere scenario.
I don’t care if it’s you, someone else on this board, or really anyone else; it is beneath me as a programmer to pretend to even respect people who cut corners in such an egregious manner.
We in the app development tech circles spit on people who pervert programming like this.
“My principle is that user experience should never suffer because of a need to reach a wide audience. However, as I’ve said in earlier comments, this applies exclusively to APP DEVELOPMENT.”
And that’s a completely fair opinion, don’t assume that I disagree with it. I just think there’s more than one right answer and I don’t “spit on people” who prefer something different.
“You are really the most annoying type of person to talk to. A crucial prerequisite for being a smartass is to actually know what you’re speaking about.”
Haha, I may be annoying to you, but that’s probably because I do know what I’m talking about. As I’ve stated already, I don’t have a problem with your opinion, but I do have a problem with the flaming way in which you stated it.
For example, upon joining the discussion I said, “You can dismiss portability for yourself, but it really doesn’t make sense to dismiss the utility for others.” Do you really think it’s appropriate to respond with “It makes perfect sense. I’m not in the business of caring about what lazy developers prefer. Anyone using HTML/JS to write a run-anywhere app is pretty much the worst kind of developer I know.”? I was hoping you’d have the dignity to take that back – it was hurtful, untrue, and in poor taste.
“My principled jab is at the people who, when strictly talking about app development, would throw away user experience for the sake of having a write-once-run-anywhere scenario.”
There are certainly some cases where I’d agree that native apps are better. But why do you have to “jab” people at all? Just state your opinion without the insult. I may not always be agreeable, but I do try to stay friendly. It may have been naive, but I only got involved in this thread to try to defuse the tension that had built up. If you cannot agree, then simply agree to disagree. Don’t take this condescendingly, but I beg you, for the sake of OSNews, next time try to respond in a pleasant tone. I’m serious about that; fighting over HTML vs. native apps is really silly. I don’t care who’s “right”, I just want to have fun discussing it.
Edit: This applies to everybody! (said the self-proclaimed blog nanny) It seems like OSNews has too many regular flamewars going on, which frequently crowd out the purposeful discussions. It’s more fun when people are friendly.
I’m very principled… One of my first principles is that I don’t believe development is the process of becoming attached at the hip to a particular vendor’s idea of the right way to do things. HTML/JS might not be perfect, but it has one very significant feature that no other development platform has: no one company is steering the boat.
And since you want to resort to name-calling… what is being content to slop up Microsoft’s or Apple’s latest dog food for the sake of making things easier for you? That sounds pretty lazy to me.
My development platform is the world – yours is just the latest gadget fad…
XAML and C# are open specifications. There is nothing proprietary about them. They’re also a great deal better than HTML/JS, so let’s drop the straw man.
Anyone is free to implement a XAML parser. I’m not tied to any particular platform, because my back-end code has no ties to any platform. It’s all generic .NET code. It works across the myriad platforms I’m interested in, including a great majority of the smartphone market.
I don’t? The only parts that are Cocoa or XAML are the native front ends. A great majority of my code is portable across platforms.
I custom-tailor my apps for the platform, while you try to make HTML fit everywhere, user experience be damned.
That’s true, and fine. I just don’t see how it’s a counterpoint that excuses poor tooling.
I don’t particularly find that to be a win. Dynamic typing makes code less readable at a glance, and especially in the case of JS it leads to ridiculous decisions like the lack of an integer type.
I think this is a good idea and would rock for JS. Default static typing with optional dynamic typing.
Well, I think that there are certain design issues in JavaScript that make it relatively harder to make fast. That’s the crux of this argument.
I don’t know man, as a developer, poor performance makes me sad.
Code reuse greatly alleviates this. It’s a middle ground between two extremes: not quite write once, run anywhere, but not quite single-stack either.
I don’t think it’s improving fast at all. The next improvements to the fundamental technologies are still years off. In the meantime, the deficiencies are unacceptable to me.
Silverlight was never meant to replace the web, only augment it. Remember, HTML5 didn’t exist at that time. The only way to write RIAs was using a plugin. Silverlight was the best at this.
It doesn’t mean byte code is categorically bad for the web. Hell, the generated, minified gunk that is JS on most sites is a lot less human-readable.
While good, you said it best: it lacks cohesiveness. For Knockout you need to select elements out of the tree and apply bindings and view models manually. There is no declarative databinding, because the markup can’t be extended.
It’s not really declarative then, it’s just a hack. The point is, it’s an app platform, not the web; you’ll need a runtime anyway, so Mozilla used suboptimal tech for a really terrible reason. Developers should reject this crap.
No, they have not, because the markup hasn’t been leveraged in doing so. JS has. It’s not the same thing. It’s perfectly reasonable to say HTML is terrible and inadequate, and that XAML is a lot better.
XAML has been around since 2004. It’s been used in WPF, Silverlight, Workflow Foundation, XPS, WinRT, Windows Embedded, the Xbox 360, etc.
The key difference now is that it’s part of the Windows Division, not the Developer Division. It’s gone from a dev platform to an inherent part of Windows, with all the legacy implications that brings.
+1 As I cannot vote
Even though I get paid to develop mostly web applications, for me HTML is for documents.
It is insane what people try to bend browsers to do, sometimes spending days fighting with HTML+CSS+JavaScript for what would be a few function calls in a native application.
And in the end they complain that it still does not look integrated. Of course it does not, HTML is for documents!
But at the end of the day it is all about the money, so the customer gets what s/he asks for.
HTML5 is also about freedom:
http://mashable.com/2012/09/05/grooveshark-html5-player/
This only works until the handset manufacturers start blacklisting web sites.
Trying to force HTML5 on developers. For fuck’s sake. It is not up to par. Just fucking stop.
Kill this OS with fire. Hell, even WebOS with EnyoJS is better
I think the initial idea of this effort is not to force anything on developers, but to use it as an experiment to broaden the usage and design perspective of Web APIs.
https://wiki.mozilla.org/Booting_to_the_Web
I.e. the effort will benefit browsers running on conventional OSes, and no one forces you to use these web-flavored-only systems.
I welcome alternative and open OSes for mobile, but I’m not a fan of the “do everything with web technologies” approach. Despite the claims of this article, such an approach is always suboptimal. And the programming environment sucks, in my opinion; I hate having to program anything for web browsers. I don’t see how JavaScript applications are supposed to run faster than Android’s Java apps. I would much prefer a system that runs natively compiled applications… like Moblin/Tizen/whatever. If and when Wayland is viable, it would be a pretty nice base for a phone UI (since all phones have accelerated OpenGL ES now).
I would really like to see smartphones that are actually fully fledged PCs that can be docked to a base station (like the Motorola Atrix) and run a full desktop – not like that tacked-on system that runs inside Android, but the same exact system that runs on the phone (just a different UI on a different screen). Now add multi-monitor support and you have something very special.
ARM is missing a unified system-level platform like the one PC/x86 enjoys, due to the various non-standard SoC implementations. ARM needs to get its act together and create a common platform that SoC vendors can implement. Hopefully that’s what HSA will lead to. Otherwise I fear Intel will catch up to them in power efficiency and beat them on flexibility, platform standardisation and openness.
Suboptimal tech usually wins, unfortunately. Firefox was an anomaly, based on historical precedent we should all still be stuck in IE.
I still prefer a native Qt environment like MeeGo, but I’ll sacrifice that for true freedom in Firefox OS. If lightning strikes twice, I’m a potential customer.
I wouldn’t want the stagnation that is happening in the x86 platform to extend to ARM SoCs. If it weren’t for AMD, we would all be stuck with a 32-bit ISA on x86, with an overly costly and less than optimal upgrade path to Itanium. Unfortunately we’ve lost the 3rd party chipset market on x86 due to having too few CPU players.
I hope the ARM SoC market stays the way it is with even more competition and SoC options coming into it.
adkilla,
“I wouldn’t want the stagnation that is happening in the x86 platform to extend to ARM SoCs. If it weren’t for AMD, we would all be stuck with a 32-bit ISA on x86, with an overly costly and less than optimal upgrade path to Itanium. Unfortunately we’ve lost the 3rd party chipset market on x86 due to having too few CPU players.”
Haha, I was actually very disappointed with AMD when they told the world they were going to extend the life of x86 with a 64-bit variant of it. I sincerely think we would have migrated to better architectures by now if AMD hadn’t anchored us right back to the x86 platform (albeit with some improvements). The AMD64 ISA still suffers from a lack of GP registers compared to the alternatives, which necessitates complex hacks like register renaming. The opcodes are still highly inconsistent, increasing the amount of logic needed to parse them. The whole architecture is shrouded in subtle legacy designs.
I guess we have to wait for a newcomer to replace x86-64, but now that x86 is 64-bit, that could take a while (x86-64 could conceivably last a few decades, like x86 did).
Given the choice over Itanium and x86-64, I’d take x86-64 anyday.
I wasn’t suggesting the choice for 64-bit desktop supremacy was only between Intel processors; it’s completely ironic that AMD64 is mostly responsible for keeping Intel in the lead for 64-bit processors.
I don’t think so – if you took AMD out of the equation, the world would mostly just have continued buying x86-32 until really hitting that 4 GiB limit… at which point MS would just save the day by forcing PAE in their then-new (in that alternative reality) Vista/7, and that would be it (assuming Intel management wouldn’t come to their senses prior to that).
Generally, everything collects and grows legacies over time… And considering the most likely “better” alternatives, can you really say with a straight face that we would be better off without x86-64? (Consumers in general, that is – sorry, nobody cares about asm, OS, or compiler devs.)
Still, in a few years you might more or less get what you want(?) – the Loongson ~MIPS chips have hardware-assisted x86 emulation. Considering all the x86 licensing issues, it’s of course unlikely to show up in “current” (in the future) ~Western products… My guess: it’s there (and being worked on) to be ready when the P5, MMX, P6 and SSE patents lapse in the coming decade (I think) – that subset of x86 should allow running virtually all really important(tm) legacy software, a prospect probably very appealing to the Chinese in their supposed quest for technological independence.
So you’ll just have to move to the PRC to experience it, or at least to the areas likely within their sphere of influence in the future ;p (that should be SE Asia and large parts of Latin America and Africa… though who knows, maybe more ;p)
zima,
“Generally, everything collects and grows legacies over time… And considering the most likely ‘better’ alternatives, can you really say with a straight face that we would be better off without x86-64? (Consumers in general, that is – sorry, nobody cares about asm, OS, or compiler devs.)”
Actually, yes, I do think so. x86-64 is just the latest in a number of extensions to an antiquated architecture. Sure it’s “good enough”, but I think x86-64 won at the expense of competitors which will continue to be marginalised for at least another decade.
“Still, in a few years you might more or less get what you want(?) – the Loongson ~MIPS chips have hardware-assisted x86 emulation. Considering all the x86 licensing issues, it’s of course unlikely to show up in ‘current’ (in the future) ~Western products”
There’s so much legal BS going down right now that I suspect you might be right. These are sad times.
I’m not convinced. And perhaps I didn’t state the opening thought explicitly enough: even if x86-64 had never happened, the world would mostly just have continued buying x86 (-32…) CPUs even now, because they are largely more optimal than the alternatives – never mind some quirks of their architecture (and some solitary grumbling ~asm coders). Without x86-64, they’d still be better at what they do, with great bang-per-buck, just with PAE forced a bit sooner by the dominant operating systems…
I don’t see any serious competitor to x86-64 that was stifled by it. MIPS, Alpha, SPARC and PA-RISC were killed or pushed out of the performance game by Itanium (…and its empty promises) – from which AMD64 saved us – and anyway they would have gone up against the Wintel ecosystem and been unlikely to give us anything as nice as the inexpensive power of x86 over the last decade, driven by its established economies of scale. PowerPC… I doubt IBM would have been better for us if given, well, power over the ~PC market (and then there’s that “The memory management on the PowerPC can be used to frighten small children” Linus quote).
ARM shows up everywhere now anyway – but it’s unlikely this would have happened much sooner (ARM doesn’t even have its own 64-bit version out yet… and, before the establishment of the “new mobile ecosystem”, lack of binary compatibility with x86 would have been a major showstopper).
But also, don’t be so pessimistic… it will show up, just not in “current” consumer toys (no x64 and no SSE2 until 2022 or so, no SSE3 for much longer).
I guess MS would just save the day by forcing PAE in Vista/7 in that alternative reality (if Intel didn’t wise up sooner)… Overall, probably not much of a difference to us.
And 3rd-party chipsets were likely going away anyway, due to the increasing integration of x86 platforms (so not exactly stagnation, and even similar in spirit to ARM SoCs). Anyway, if there were more x86 chipsets thanks to there being more x86 CPU players… those chipsets wouldn’t really be 3rd party, would they? :p (Plus, while we might despair the loss of ULi or SiS, I won’t miss VIA chipsets; but BTW, SiS, x86, and SoCs… http://en.wikipedia.org/wiki/Vortex86)
River Trail is an Intel Labs project that enables data-parallelism in web applications by leveraging multiple CPU cores and vector instructions in the boundaries of the JavaScript programming paradigm.
http://software.intel.com/en-us/articles/opencl-river-trail/
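From what Intel has published, the prototype centers on a ParallelArray type whose bulk operations the engine is free to run data-parallel – a hedged sketch from the project’s published examples; exact API details may differ:

// River Trail's ParallelArray (per the published examples)
var pa = new ParallelArray([1, 2, 3, 4]);
var squares = pa.map(function (v) { return v * v; }); // may execute across cores/vector units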
May I ask a very silly, naive question? Is there any real, organized marketing push behind this, or is it just a bunch of geeks hobbying around on the basis of “If we build it, they will come”?
If you don’t have a handset maker committed to building a phone around this OS and bringing it to market at least a year before you expect to hit 1.0, then just admit that you are hobbyists. Nothing wrong with that, everyone needs a hobby.
Look at Linux’s stagnant market share on desktops/laptops (as opposed to its leading position in other sectors). If regular, non-geek people are not prepared to strip Windows off their PCs and put Linux on, they are certainly not going to strip off iOS/Android/Win8 and install Firefox OS. Not.Gonna.Happen.
So don’t tell us how you’re doing this technically. Technical problems are there to be solved. Tell us how you are going to get millions of Chinese-made FFOS phones into people’s hands.
Why would manufacturers choose this over Android? Because it’s free? Android is also free.
Because it gives better performance on cheaper hardware? If manufacturers cared about better performance, someone would have purchased WebOS by now, making HP an offer they couldn’t refuse.
Better performance on cheaper hardware is NOT in the manufacturers’ interest. It makes the customers hang on to their existing phone a little bit longer instead of performing their civic duty of constant upgrading.
I’m afraid this won’t be the first technically elegant solution that withered and died because its creators remained aloof from the commercial realities of the world they lived in [sigh… BeOS]. Nor the last.
Most companies don’t want a Google monopoly on mobile:
Firefox OS (Gecko): https://blog.mozilla.org/blog/2012/07/02/firefox-mobile-os/
Tizen (Webkit): http://www.tizenassociation.org/en/
HTML5 without a browser is probably the best option to avoid a new monopoly.
“We don’t like the fact that one part of the value chain of our business is tightly controlled,” Carlos Domingo, director of product development at Telefonica’s digital unit, said in an interview. “In the case of the emerging countries it’s worse, because it becomes a monopoly by Google.”
http://www.bloomberg.com/news/2012-07-04/telefonica-bids-to-own-the…
I am always in favor of more choices, so I would like to see Firefox OS succeed. But one potential problem jumped out at me from the article:
That sounds like a security (and tech support) nightmare waiting to happen. I hope they are thinking about things like this.