Linked by mufasa on Mon 10th Aug 2009 12:25 UTC
Web 2.0 The web browser has been the dominant thin client, now rich client, for almost two decades, but can it compete with a new thin client that makes better technical choices and avoids the glacial standards process? I don't think so, as the current web technology stack of HTML/Javascript/Flash has accumulated so many bad decisions over the years that it's ripe for a clean sheet redesign to wipe it out.
Thread beginning with comment 377941
You're forgetting one crucial thing
by Eddyspeeder on Mon 10th Aug 2009 19:05 UTC
Eddyspeeder Member since:
2006-05-10

This is all well and good, but I do not find it sensible unless the HTTP protocol is drastically changed as well.

First replace the wooden foundation with reinforced concrete. Then fix the plasterwork. Not the other way around.

Recent milestones include HTML5 and IPv6, but the Internet's foundation is still stuck on HTTP/1.1 from 1997.

Just to put things in perspective: 1997...
- the heyday of Yahoo, Homestead, XOOM and websites using frames
- the days of Flash 2.0, HTML 3.2 (4.0 was introduced in December that year) and the introduction of CSS 1.0
- the time before Napster (est. 1999), Google (est. 1998) and YouTube (est. 2005)

We have been stalling an HTTP protocol overhaul for way too long. We are sending huge files using the hyper TEXT(!) protocol (which is really a hack if you look at it objectively), for which the *FILE* transfer protocol (FTP) had originally been created.

Reply Score: 2

sbergman27 Member since:
2005-07-24

"We have been stalling an HTTP protocol overhaul for way too long. We are sending huge files using the hyper TEXT(!) protocol (which is really a hack if you look at it objectively), for which the *FILE* transfer protocol (FTP) had originally been created."

Errr... what exactly is wrong with that? We're using ssh instead of telnet, too. Hack?

Reply Parent Score: 2

PlatformAgnostic Member since:
2006-01-02

What's the point of that when you can invent new problems to solve on shaky foundations and put lots of effort into making concrete out of plaster?

For instance, we've had some pretty nice high-performance binary RPC mechanisms, but all the rage these days is XML-RPC over HTTP (via AJAX). Or even better, JSON. At least the acronyms sound cool.

Reply Parent Score: 3

google_ninja Member since:
2006-02-05

XML-RPC makes sense for web services because a) it goes over port 80, which means fewer firewall headaches, and b) it is XML, which is designed to be parsable by anyone (including humans).

JSON makes sense because XML is incredibly verbose, and if you are communicating back and forth with JavaScript anyway, you may as well use its native object notation. It still uses text, again, because you want it to work with web servers.

Binary RPC calls make sense when you control both the client and the server, and performance outweighs interoperability.
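
To make the verbosity point concrete, here is a rough Python sketch (the method name and parameters are made up purely for illustration) serializing the same call as XML-RPC and as JSON:

    import json
    import xmlrpc.client

    # Hypothetical "addComment" call; the method name and fields are
    # invented only to compare the two encodings.
    params = ({"story": 1234, "author": "someone", "score": 4},)

    xml_body = xmlrpc.client.dumps(params, methodname="addComment")
    json_body = json.dumps({"method": "addComment", "params": list(params)})

    print(len(xml_body.encode()), "bytes as XML-RPC")
    print(len(json_body.encode()), "bytes as JSON")

Both bodies are plain text and both travel over port 80; the JSON one is simply much smaller and maps directly onto the objects the JavaScript on the other end already understands.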

Reply Parent Score: 4

Delgarde Member since:
2008-08-19

"We are sending huge files using the hyper TEXT(!) protocol (which is really a hack if you look at it objectively), for which the *FILE* transfer protocol (FTP) had originally been created."


Um? So? Naming aside, what's the problem? HTTP really has nothing to do with hypertext - it's just a simple protocol for accessing files, much like FTP. In what way is it a hack?
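
To illustrate (a rough sketch; the host and file name are placeholders), fetching the same file over HTTP and over FTP with Python's standard library looks almost identical from the client's side:

    import urllib.request
    from ftplib import FTP

    # Hypothetical host and path, used only to compare the two protocols.
    with urllib.request.urlopen("http://example.com/pub/file.bin") as resp:
        http_data = resp.read()          # HTTP: one request, the bytes come back

    ftp = FTP("example.com")
    ftp.login()                          # anonymous FTP login
    chunks = []
    ftp.retrbinary("RETR pub/file.bin", chunks.append)
    ftp.quit()
    ftp_data = b"".join(chunks)          # FTP: same bytes, different plumbing

Either way you end up with the same bag of bytes; the "text" in HTTP's name refers to the hypertext it was built to carry, not to a limit on what it can transfer.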

Reply Parent Score: 2

Eddyspeeder Member since:
2006-05-10

Okay, let's bring this back to the topic of the original article: the author suggests many changes at a higher level. My point is simply that if you want to "change" things, it would be best to be radical about it.

Now why is it a hack? (Although I must admit that "hack" was too strong a word. My apologies.) Let me illustrate this with yet another protocol example:

SMTP is the most common protocol for sending mail. It was never intended to be *the ultimate solution* - it is, after all, called the "SIMPLE mail transfer protocol". All it does is move messages around. The problem is spam, which was not anticipated at the time and which SMTP has no way of handling on its own. So we went and created workarounds.
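
To show just how little SMTP itself does, here is a minimal sketch (the addresses and the local relay are hypothetical):

    import smtplib
    from email.message import EmailMessage

    # SMTP happily accepts any sender address; nothing in the protocol
    # verifies that "From" is genuine, which is the gap spammers exploit.
    msg = EmailMessage()
    msg["From"] = "anyone@example.org"
    msg["To"] = "someone@example.net"
    msg["Subject"] = "Hello"
    msg.set_content("SMTP just moves this text from one server to the next.")

    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

Everything we rely on to cope with spam (SPF, DKIM, filtering) was bolted on around the protocol afterwards, not built into it.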

Now I'm not saying the current (E)SMTP implementation is necessarily bad - it works, so we can live with it. But my point is, again, that if you want to turn things around, why not start at the root, which has been left unattended for way too long?

There is no doubt that the pioneers of the Internet would have done things differently. They were, however, limited by the technology and insights of their time. But why is it so strange to claim that what I see as more of a "firmware update" would vastly improve the Internet as we know it?

HTTP *works*. Otherwise the Internet would not have become what it is today. But I'm not for the "if it ain't broken, don't fix it" mentality. HTTP was not meant to do what it does today.

Besides, let's not only look at today. Let's consider what lies 10 years ahead of us. ZFS and other file systems are already anticipating ever-growing data loads. Soon our connections will be fast enough that we will not even need compressed video any longer. If operating systems already have to turn their entire file systems around, why should the same not be necessary for the Internet? In its current form it will run into its limits.

I think the only reason it has not been updated so far is that it would require everyone and everything to move on at once, since backward compatibility would add more problems than it would solve.

Reply Parent Score: 2