Linked by Thom Holwerda on Wed 25th Jan 2012 22:05 UTC, submitted by twitterfire
Internet & Networking "Google's efforts to improve Internet efficiency through the development of the SPDY (pronounced 'speedy') protocol got a major boost today when the chairman of the HTTP Working Group (HTTPbis), Mark Nottingham, called for it to be included in the HTTP 2.0 standard. SPDY is a protocol that's already used to a certain degree online; formal incorporation into the next-generation standard would improve its chances of being generally adopted."
Thread beginning with comment 504718
RE: no binary protocols please
by Alfman on Thu 26th Jan 2012 15:03 UTC in reply to "no binary protocols please"

FunkyElf,

"The guys who initially developed this stuff had much much less powerful technology than we to today and they did just fine with text protocols."

I don't completely agree. Obviously ordinary users wouldn't care whether a protocol is text, so it's really a techie concern. Text was very nice when we lacked tools to decode and analyze packets, but these days anyone who needs to view raw traffic should have no trouble getting hold of the tools to decode it. The most likely scenario is that the tool capturing the traffic also transparently converts it to a readable format (e.g. Wireshark and tcpdump). So even techies shouldn't care.


The big question is how much extra bandwidth and CPU overhead text protocols consume. The way HTTP is used today, I think the overhead is negligible compared to the large payloads. However, I can conceive of future scenarios where the HTTP overhead discourages using HTTP to manage traffic context.

Today's HTTP has issues with asynchronous/bidirectional communications, but assuming those get fixed, there are a lot of potential new applications for it. Consider future applications where the client opens simultaneous bidirectional data channels to the server in a single multiplexed HTTP pipe. Say it's a video conference/whiteboard/application sharing utility all running over one HTTP browser connection. The overhead of using a text protocol to manage the multiplexing should start to raise eyebrows.


Besides, I don't know of anyone who's complained that SSH's protocol isn't human readable (or even its telnet precursor, whose handshake was also binary).


Edit: Did anyone say anything about converting to a binary HTTP standard?

Edited 2012-01-26 15:10 UTC


Neolander Member since:
2010-03-08

Text also has advantages beyond human readability though, like the availability of many quality parsers and the fact that text-based protocols abstract away endianness issues in the underlying text transmission protocol.


Alfman Member since:
2011-01-28

Neolander,

"Text also has advantages beyond human readability though, like the availability of many quality parsers"

My opinion is that binary protocols don't need to be difficult to implement; sometimes they're even easier to implement than text parsers. It can go either way: SNMP is very difficult due to the complexity of its ASN.1 encoding, which is designed to serialize arbitrary hierarchical objects, while others like Modbus border on trivial.
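To show how close to trivial a fixed-layout binary protocol can be, here's a sketch in Python of decoding a Modbus/TCP read-holding-registers response (function code 3). This is a simplified illustration with no error handling, not a complete implementation:

```python
import struct

def parse_modbus_response(frame: bytes):
    """Parse a Modbus/TCP read-holding-registers response.

    Layout: 7-byte MBAP header (transaction id, protocol id, length,
    unit id), then function code, byte count, and big-endian 16-bit
    register values. The whole parse is a couple of struct.unpack calls.
    """
    txn, proto, length, unit = struct.unpack(">HHHB", frame[:7])
    func, count = frame[7], frame[8]
    registers = struct.unpack(f">{count // 2}H", frame[9:9 + count])
    return txn, func, registers

# Example frame: transaction 1, unit 0x11, 2 registers: 0x0102, 0x0304
frame = bytes.fromhex("00010000000711030401020304")
print(parse_modbus_response(frame))  # (1, 3, (258, 772))
```

No tokenizing, no delimiter scanning, no whitespace handling: the layout tells the parser exactly where everything is.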

One of the problems with pure text is that we have to scan strings of unknown length, which is far less efficient than knowing string lengths up front. It's the difference between Pascal strings and C strings. With terminated strings, every single byte has to be checked; when the length is specified, data can be copied/compared at least 8 bytes at a time on many architectures.
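The two approaches side by side, as a rough Python sketch (function names are my own):

```python
def read_c_string(buf: bytes, pos: int):
    # C-style: scan forward byte by byte until the NUL terminator.
    end = buf.index(0, pos)
    return buf[pos:end], end + 1

def read_pascal_string(buf: bytes, pos: int):
    # Pascal-style: the length comes first, so extraction is one slice
    # (and in C, one memcpy) with no per-byte inspection.
    n = buf[pos]
    return buf[pos + 1:pos + 1 + n], pos + 1 + n

data_c = b"hello\x00world\x00"
data_p = b"\x05hello\x05world"
print(read_c_string(data_c, 0))       # (b'hello', 6)
print(read_pascal_string(data_p, 0))  # (b'hello', 6)
```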

Also, text-based protocols like HTTP have to accept superfluous whitespace and three different newline conventions, all of which makes parsing even less efficient.
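For instance, a robust header parser ends up looking something like this (an illustrative Python sketch, nowhere near a complete HTTP parser):

```python
import re

def parse_headers(raw: bytes):
    """Parse header lines, tolerating CRLF, LF, and bare-CR line endings
    plus optional whitespace around names and values."""
    headers = {}
    for line in re.split(rb"\r\n|\n|\r", raw):
        if not line:
            continue
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return headers

# Three line-ending styles and inconsistent spacing in one message:
raw = b"Host: example.com\r\nContent-Length:  42\nX-Test:ok\r"
print(parse_headers(raw))
```

Every one of those tolerances is a branch the parser must take on every line; a length-prefixed binary framing has none of them.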

All this says nothing of Unicode support which, if required, makes delimiter scans even more difficult in encodings whose multi-byte sequences can contain bytes matching ASCII delimiters (UTF-8 avoids this by design, but legacy multi-byte encodings and UTF-16 do not).


"and the fact that text-based protocols abstract away endianness issues in the underlying text transmission protocol."

Yes, endianness is a problem, but let me point out that it would be equally problematic for text-represented numbers if humans hadn't standardized on a big-endian digit order for themselves. There's nothing stopping binary protocols from standardizing a byte order (network byte order, i.e. big-endian) in the same way.

Reading/writing textual numbers requires multiplying/dividing by the radix one digit at a time. Binary numbers can be processed in their entirety, with at most a byte-swap opcode. Also, since every binary byte represents a valid value, no ASCII digit tests are required.
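A quick comparison in Python (illustrative only; `struct`'s `!` prefix means network byte order, i.e. big-endian):

```python
import struct

n = 3_000_000_000

# Text: variable length, one digit at a time, needs a delimiter to
# know where the number ends, and each byte must be validated.
text = str(n).encode("ascii")              # b'3000000000', 10 bytes
parsed_text = int(text)                    # repeated multiply-by-10

# Binary: fixed width, network byte order, one read (plus at most a
# byte swap on little-endian hardware).
wire = struct.pack("!I", n)                # 4 bytes: b'\xb2\xd0^\x00'
parsed_bin = struct.unpack("!I", wire)[0]  # single fixed-width read
```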


All in all, binary protocols are more efficient, but the question remains: how large is the text overhead compared to the payload itself? Probably not much. Only once HTTP is used for highly asynchronous/multiplexed communication will its overhead start to overtake the payload.


I think there is a (legitimate) fear that binary protocols would be extended in proprietary and undocumented ways that break compatibility, and that would be a damn shame. That's why it would be crucial for the standard to mandate that clients/servers drop the connection on non-standard requests; it would force developers to get their act together on total standards compliance.


Zifre Member since:
2009-10-04

Have you ever written parsers? It is vastly easier to parse binary protocols than text. Even JSON, with its relatively simple grammar, is much harder to parse than BSON, for example.
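A concrete example of what makes BSON-style framing easy: every BSON document begins with its total size as a little-endian int32 (the size includes the prefix itself), so a stream of documents can be split without inspecting their contents at all. A minimal sketch, not a BSON parser:

```python
import struct

def split_length_prefixed(buf: bytes):
    """Split a stream of BSON-style documents, each prefixed with its
    total length as a little-endian int32 (length includes itself)."""
    docs, pos = [], 0
    while pos < len(buf):
        (size,) = struct.unpack_from("<i", buf, pos)
        docs.append(buf[pos:pos + size])
        pos += size
    return docs

# Two dummy "documents": 8 and 6 bytes of payload plus 4-byte prefixes.
stream = (struct.pack("<i", 12) + b"\x01" * 8 +
          struct.pack("<i", 10) + b"\x02" * 6)
print([len(d) for d in split_length_prefixed(stream)])  # [12, 10]
```

Doing the equivalent with JSON means tracking brace depth, string escapes, and quoting just to find where one document ends.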

Endianness is not really an issue if the protocol simply defines it to be one way or the other. Also, some binary formats are byte-oriented and thus endianness-free, like UTF-8.
