Linked by mufasa on Mon 10th Aug 2009 12:25 UTC
Web 2.0: The web browser has been the dominant thin client, now rich client, for almost two decades, but can it compete with a new thin client that makes better technical choices and avoids the glacial standards process? I don't think so, as the current web technology stack of HTML/JavaScript/Flash has accumulated so many bad decisions over the years that it's ripe for a clean-sheet redesign to wipe it out.
Comment by iq-0 on Mon 10th Aug 2009 16:06 UTC

Funny, but I never thought of browsers as being either fat or thin clients. I think of them more as interpreters of a markup language that try to visualize the information for you. They are clients in the "you are my supplier and I'm your client" sense, but not in a "server <-> client" sense. They do fulfill part of that job when downloading content, but that is actually a very small part of their function.

The whole "browser as a {thin,fat}-client" starts to be more true in an ajax setting. But given some new developments it can also act as a standalone application environment. But it's original purpose is still used pretty much by 90% of the people on the internet: viewing content.

Even in a "web 2.0"-like setting, content is still one of the main motivator (blogs, micro blogs, movies, internet radio, foto-sites, encyclopedias, ..). If you strip away all those additional features and functions you can still get a reasonable view of the content.

The biggest problem with the current "stack" is that some things can't be expressed as content because the content specifications have lagged behind. The current HTML 5 work is a great step toward closing that gap, and with the renewed browser "battles" we can expect faster progress in this area.
Perhaps in the near future everybody will have the joy of viewing vector drawings without any proprietary plugins, rendered as readable content instead.
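
As a rough illustration of that idea (just a sketch, not tied to any particular browser), the little Python snippet below builds a minimal HTML5 page with an inline SVG circle using only the standard library. The point is that the "drawing" ends up as ordinary, inspectable markup rather than an opaque plugin object; the file name and structure are of course made up for the example.

```python
# Sketch: a vector drawing as plain, readable markup (inline SVG in HTML5).
# Uses only the Python standard library; writes the page to disk so any
# text editor (or HTML5 browser) can inspect the result.
from xml.etree import ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

# The drawing itself is just nested text elements with attributes.
svg = ET.Element("svg", {"xmlns": SVG_NS, "width": "100", "height": "100"})
ET.SubElement(svg, "circle", {"cx": "50", "cy": "50", "r": "40",
                              "fill": "none", "stroke": "black"})

page = "<!DOCTYPE html>\n<html><body>\n{}\n</body></html>\n".format(
    ET.tostring(svg, encoding="unicode"))

with open("drawing.html", "w", encoding="utf-8") as f:
    f.write(page)

print(page)  # the vector drawing is readable content, not a binary blob
```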

And I do think that text markup is the way to go for any open protocol, because it forces the different parties to show others how they specify things, so others can (if they want to) easily create compatible works, something which is hard to second-guess in a binary-only world. And you have a very rich and expressive environment. Sure, you can make highly extensible binary protocols (e.g. ASN.1-based DER), but then you have to be very careful not to have id clashes, so we use OIDs which (to be globally unique) are pretty large, and centralizing the unique identifiers just stagnates development. So the gain is not that big. And seeing how many parties have made insane bugs in independent implementations of the same binary protocol, the claim that binary is more reliable is not very encouraging.
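
To make the contrast concrete (a toy comparison with made-up field names and tags, not any real protocol spec), the sketch below encodes the same small record once as self-describing text markup and once as a hand-rolled DER-style tag-length-value blob. A third party can read the first without documentation; the second only means something if you already have the out-of-band spec for the tags and their order.

```python
# Toy contrast: self-describing text markup vs. a DER-style binary encoding
# of the same record (field names and tag choices are made up).
from xml.etree import ElementTree as ET

# 1) Text markup: every field announces what it is.
person = ET.Element("person")
ET.SubElement(person, "name").text = "Alice"
ET.SubElement(person, "age").text = "30"
print(ET.tostring(person, encoding="unicode"))
# -> <person><name>Alice</name><age>30</age></person>

# 2) DER-style TLV encoding: tag byte, length byte, raw value bytes.
def tlv(tag: int, value: bytes) -> bytes:
    assert len(value) < 128          # short-form lengths only, for simplicity
    return bytes([tag, len(value)]) + value

# The tags (0x0C ~ UTF8String, 0x02 ~ INTEGER, 0x30 ~ SEQUENCE) and the
# field order only mean something to readers who already know the spec.
blob = tlv(0x30, tlv(0x0C, b"Alice") + tlv(0x02, bytes([30])))
print(blob.hex())
# -> 300a0c05416c69636502011e  (an opaque byte string to everyone else)
```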

I think we can do better, but a real solution lies in the direction of better separation between "chrome", actual content and metadata. That is where independent implementations can best distinguish themselves: by delivering the same basic thing, only better. For real standards are determined by and through the majority, using multiple implementations of those standards.

Reply Score: 3