Linked by mufasa on Mon 10th Aug 2009 12:25 UTC
Web 2.0: The web browser has been the dominant thin client, now rich client, for almost two decades, but can it compete with a new thin client that makes better technical choices and avoids the glacial standards process? I don't think so: the current web technology stack of HTML/JavaScript/Flash has accumulated so many bad decisions over the years that it is ripe to be wiped out by a clean-sheet redesign.
Eddyspeeder
Member since:
2006-05-10

Okay, let's bring this back to the topic of the originating article: the author suggests many changes at a higher level. My point is simply that if you want "change", it would be best to be radical about it.

Now why is it a hack? (Although, I must admit that "hack" was too strong a word. My apologies.) Let me illustrate this with yet another protocol example:

SMTP is the most common protocol for sending mail. It was never intended to be *the ultimate solution*; its very name says so: "SIMPLE Mail Transfer Protocol". All it does is move messages around. The problem is that spam was never anticipated at the time, so we went and created workarounds.
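To make the point concrete, here is a sketch of the client's side of an SMTP session as RFC 5321 defines it. The addresses and hostnames are hypothetical, and no connection is made; the commands are just assembled and printed. Note that nothing in the dialogue verifies who the sender actually is, which is exactly the gap spam exploits:

```python
# The client's half of a minimal SMTP dialogue (RFC 5321).
# Hostnames and addresses are made up for illustration; nothing
# here opens a real connection.
commands = [
    "HELO client.example.org",
    "MAIL FROM:<alice@example.org>",   # the server cannot verify this claim
    "RCPT TO:<bob@example.com>",
    "DATA",
    "From: anyone-at-all@example.org", # message headers are free text, too
    "Subject: hello",
    "",
    "Message body goes here.",
    ".",                               # a lone dot terminates the message
    "QUIT",
]
for line in commands:
    print(line)
```

The envelope sender (`MAIL FROM`) and the visible `From:` header are both unverified strings; spam defenses like SPF and DKIM are later add-ons layered over this original, trusting design.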

Now I'm not saying the current (E)SMTP implementation is necessarily bad - it works, so we can live with it. But my point is, again, that if you want to turn things around, why not start at the root, which has been left unattended for way too long?

There is no doubt that the pioneers of the Internet would have done things differently. They were, however, limited by the technological advances and insights of their time. So why is it so strange to claim that what I see as more of a "firmware update" would vastly improve the Internet as we now know it?

HTTP *works*. Otherwise the Internet would not have become what it is today. But I'm not for the "if it ain't broken, don't fix it" mentality. HTTP was not meant to do what it does today.
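For comparison, this is roughly the entire request an early HTTP client had to send: one stateless, one-shot fetch of a document (the hostname is hypothetical). Everything the modern web needs on top of it, such as sessions, applications, and streaming, has to be bolted onto this simple exchange:

```python
# A minimal HTTP/1.0-style GET request, built as a string to show
# how little the original protocol carried. Hostname is illustrative;
# no request is actually sent.
request = (
    "GET /index.html HTTP/1.0\r\n"   # one document, one round trip
    "Host: www.example.org\r\n"
    "\r\n"                           # blank line ends the request
)
print(request)
```

Cookies, keep-alive connections, and the rest were retrofitted later, which is the pattern of workarounds the argument above is about.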

Besides, let's not only look at today; let's consider what lies 10 years ahead of us. ZFS and other file systems are anticipating the increase in data load. Soon the Internet will be fast enough that we will not even need compressed video any longer. If OSes already have to overhaul their entire file systems, why should the same not be necessary for the Internet? It will hit the limitations of its current form.

I think the only reason it has not been updated so far is that it would require everyone and everything to move on at once, since backward compatibility would add more problems than the solutions it would bring.
