Linked by Kroc Camen on Thu 12th Nov 2009 19:30 UTC
Google has created a new HTTP-based protocol, "SPDY" (pronounced "Speedy"), to address the problem of client-server latency in HTTP. "We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support."
Thread beginning with comment 394335
Downside
by geleto on Thu 12th Nov 2009 20:32 UTC

I see one downside to this. With only one TCP connection, losing a packet will pause the transmission of ALL resources until the lost packet is retransmitted. Because of the way TCP congestion avoidance works (increase the send rate until packets start getting lost), this will not be a rare occurrence. There are two ways around this: use multiple TCP streams, or better, use UDP.
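To illustrate the head-of-line blocking being described, here is a toy model (not SPDY's actual framing) of TCP's in-order delivery: when several resources are multiplexed over one connection and a packet goes missing, nothing after the gap reaches the application until the retransmission arrives.

```python
# Toy model of TCP head-of-line blocking. Packets 0-4 carry data for
# several multiplexed resources; packet 1 is dropped and arrives last.
def deliverable_in_order(arrived):
    """TCP hands data to the application only up to the first gap
    in the sequence-number space; everything after the gap waits."""
    delivered = []
    expected = 0
    for seq in sorted(arrived):
        if seq != expected:
            break  # gap found -- later packets are buffered, not delivered
        delivered.append(seq)
        expected += 1
    return delivered

arrivals = [0, 2, 3, 4]              # packet 1 was lost in transit
print(deliverable_in_order(arrivals))  # [0] -- packets 2-4 stall behind the gap

arrivals.append(1)                   # the retransmission finally arrives
print(deliverable_in_order(arrivals))  # [0, 1, 2, 3, 4] -- everything unblocks
```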

Edited 2009-11-12 20:33 UTC

Reply Score: 2

RE: Downside
by cerbie on Thu 12th Nov 2009 23:42 in reply to "Downside"

...but the resources to be retransmitted are also now smaller and more efficient, helping to negate it. So, if it becomes a problem, do a little reconfiguration, and change default recommendations on new pieces of network infrastructure. The networks will adapt, if it's a problem.

If it ends up working out, it can be worked into browsers and web servers all over, and many of us can benefit. Those who don't benefit can safely ignore it, if it's implemented well. We all win. Yay.

The Real Problem we have is that old protocols have proven themselves extensible and robust. But, those protocols weren't designed to do what we're doing with them. So, if you can extend them again, wrap them in something, etc., you can gain 90% of the benefits of a superior protocol, but with easy drop-down for "legacy" systems, and easy routing through "legacy" systems. This is generally a win, when starting from proven-good tech, even if it adds layers of complexity.

Reply Parent Score: 4

RE: Downside
by modmans2ndcoming on Fri 13th Nov 2009 04:10 in reply to "Downside"

Oh... yes... let's use UDP so we can get half a webpage, a corrupted SSL session, and sites that don't work or work incorrectly.

Yes... UDP is the solution to everyone's problems.

Oh wait, no... it's not, because it is a mindless protocol that does not care if something important is lost, or if it is wasting its time sending the data to the other end.

Reply Parent Score: 6

RE[2]: Downside
by geleto on Fri 13th Nov 2009 17:29 in reply to "RE: Downside"

You can implement detection and retransmission of lost packets on top of UDP. The problem is TCP's in-order delivery: when you lose a packet, all packets received after it sit in a buffer until the lost one is retransmitted. With UDP you can use the data in new packets right away, whether or not an older packet is missing and has to be retransmitted.
Imagine loading many images on a page simultaneously: a packet is lost, and because only one TCP connection is used, all of the images stall until the lost packet is retransmitted.
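As a sketch of the idea, suppose each UDP datagram carried its own framing, here a hypothetical `(resource_id, chunk_index, total_chunks)` tag invented for illustration. A receiver could then finish and use any resource whose chunks have all arrived, even while another resource waits on a retransmission:

```python
from collections import defaultdict

# Hypothetical framing: each datagram is tagged (resource_id, chunk_index,
# total_chunks), so chunks are meaningful on their own, in any order.
def complete_resources(datagrams):
    """Return the resources whose chunks have all arrived, regardless of
    arrival order and regardless of gaps in *other* resources."""
    chunks = defaultdict(set)
    totals = {}
    for resource, index, total in datagrams:
        chunks[resource].add(index)
        totals[resource] = total
    return sorted(r for r in chunks if len(chunks[r]) == totals[r])

received = [
    ("b.png", 1, 2),   # arrives out of order -- harmless over UDP
    ("a.png", 0, 2),   # a.png's chunk 1 was dropped, awaiting retransmission
    ("b.png", 0, 2),
]
print(complete_resources(received))  # ['b.png'] -- usable while a.png waits
```

With TCP's single in-order stream, by contrast, the dropped chunk of a.png would also hold up both chunks of b.png.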

Edited 2009-11-13 17:31 UTC

Reply Parent Score: 1