As a web developer, I can tell you that HTTP requests are painfully slow, and any decent front-end developer optimises their content to use as few requests as possible, combining as many resources as possible into each one. Why is it so slow? Firstly, the request headers (the ping to ask the server for a resource) and the response headers (sent back to state the file type, size and caching information) are uncompressed. This is the first thing SPDY rectifies.
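To get a feel for how much those plain-text headers cost, here's a rough sketch using Python's zlib (SPDY compresses headers with zlib; the header block below is made up but typical of what a browser sends on every single request):

```python
import zlib

# A made-up but typical set of uncompressed request headers.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:2.0) Gecko/20100101 Firefox/4.0\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Language: en-gb,en;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=abc123; prefs=dark\r\n"
    "\r\n"
).encode("ascii")

compressed = zlib.compress(headers)
print(len(headers), "bytes raw ->", len(compressed), "bytes compressed")
```

And remember that, unlike the page body, these headers are sent again and again for every image, script and stylesheet on the page, so the savings multiply.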
Google describe the basic principle of how SPDY works in one line: a session layer on top of SSL that allows multiple concurrent, interleaved streams over a single TCP connection.
The use of SSL gives them safe passage through proxies and legacy network hardware, as well as increasing security for all users of the web—this is most welcome given what some backwards countries are planning to do.
SPDY multiplexes the resource requests, increasing overall throughput: fewer costly TCP connections need to be made. Whilst HTTP pipelining can allow more than one request per TCP connection, it's limited to FIFO (and thus can be held up by one slow request), and has proven difficult to deploy—no browser ships with support enabled (the FasterFox add-on for Firefox does enable HTTP pipelining, but at the cost of compatibility with some servers).
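The FIFO problem is easy to see with a toy model. Suppose the server can work on responses in parallel, but pipelining forces it to send them back in the order they were requested; with SPDY's interleaved streams, each response lands as soon as it's ready (the resource names and timings below are invented for illustration):

```python
# Toy model: three resources and the time (ms) the server needs for each.
resources = {"style.css": 10, "big-report.php": 300, "logo.png": 15}

# HTTP pipelining: responses must arrive in request order (FIFO),
# so each response finishes no earlier than the one in front of it.
finish, fifo_times = 0, {}
for name, cost in resources.items():
    finish = max(finish, cost)  # a slow response holds up everything behind it
    fifo_times[name] = finish

# SPDY-style multiplexing: streams are interleaved over one connection,
# so each response arrives as soon as it is ready.
mux_times = dict(resources)

print("pipelined:  ", fifo_times)
print("multiplexed:", mux_times)
```

In this sketch the tiny logo is stuck behind the slow PHP report under pipelining, waiting the full 300ms, but arrives in 15ms when multiplexed.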
Results have been good. Google's goal is a 50% increase in speed, and under lab conditions "The results show a speedup over HTTP of 27% - 60% in page load time over plain TCP (without SSL), and 39% - 55% over SSL."
An interesting feature of SPDY is the ability for the server to push data to the client. At the moment the server cannot communicate with the browser unless a request is made. Push is useful because it would allow web apps to receive a notification the instant something happens, such as mail arriving, rather than having to poll the server at an interval, which is very costly. AJAX apps like GMail and Wave currently rely on a faux-push hack: the server leaves an HTTP request open (it never hangs up on the browser), keeping the AJAX call in a suspended state, so that the server can append information to the end of this hanging request and the browser receives it immediately. SPDY will allow much greater flexibility with server push, and bring web apps that bit closer to the desktop.
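The hanging-request trick can be sketched in a few lines. Here the "server" is just a background thread and the open connection is modelled as a blocking queue—entirely a toy, but it shows the shape of the hack: the browser's request sits suspended until the server has something to say, then fires the moment it does:

```python
import queue
import threading
import time

# The hanging HTTP request is modelled as a queue: the "browser" blocks
# on it, and the "server" drops a notification in when something happens.
hanging_request = queue.Queue()

def server():
    time.sleep(0.1)                  # nothing happens for a while...
    hanging_request.put("new mail")  # ...then an event is pushed down the open request

threading.Thread(target=server).start()

# The browser's AJAX call sits suspended until the server responds; in a
# real app it would immediately re-issue the request, forming the long-poll loop.
event = hanging_request.get()  # blocks until the server has something to say
print("received:", event)
```

Real SPDY push would make this unnecessary: the server could open a stream to the client itself, instead of squatting on a request the client had to make first.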
Google are quick to stress the experimental nature of SPDY: all existing work has been done under lab conditions, and they are uncertain how it will perform “in the real world”. There are also a number of questions still to be answered about packet loss and deployment (compatibility with existing network equipment is key to adoption, especially when you have such awful network operators as AOL in the game).
All in all, Google are not looking to outright replace good ol’ HTTP, rather to augment it with new capabilities that complement its purpose of serving you content. I’m glad that Google are willing to question even such a well-established cornerstone of the Internet as HTTP—if we don’t ask questions then we will never discover better ways of doing things. Google are really pushing every field of Internet engineering to get this big mess we call home moving in a positive direction—they can hardly expect Internet Explorer to be waving the banner of progress, after all.