Linked by Thom Holwerda on Wed 8th Feb 2012 23:15 UTC
Internet & Networking "While the file-sharing ecosystem is currently filled with uncertainty and doubt, researchers at Delft University of Technology continue to work on their decentralized BitTorrent network. Their Tribler client doesn't require torrent sites to find or download content, as it is based on pure peer-to-peer communication. 'The only way to take it down is to take the Internet down,' the lead researcher says." In a way, the efforts by Hollywood and the corrupt US Congress is actually increasing the resiliency of peer-to-peer technology. Karma.
Thread beginning with comment 506574
RE[2]: "pure" P2P
by Valhalla on Fri 10th Feb 2012 03:13 UTC in reply to "RE: "pure" P2P"

Yes, I find this interesting as well, particularly the decentralized bit. I read up on p2p methodologies a while back, with torrents, ed2k and Direct Connect being examples of centralized networks, and Kademlia, DHT (partly), Winny and Share being examples of decentralized networks.

Centralized networks rely on a server which provides the vital information necessary for file sharing. This is quite efficient but also a huge vulnerability, as the network is totally dependent on these servers operating: if they go down, so does the network functionality.

Decentralized networks are those where each peer takes on part of the burden that a server handles entirely in a centralized setting. There is thus no central point of functionality: the network can lose any peer and still continue to function as before.

From a network robustness standpoint it's obvious that decentralized networks are better, but there are, as always, other factors, such as efficiency. A single-purpose server in a centralized setting is more efficient at spreading necessary information to peers than a decentralized network, where more bandwidth and CPU use is required for relaying the same information.
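To put a rough number on that overhead (a toy sketch, not any real protocol's figures): a central index answers a query in one round trip, while a Kademlia-style DHT halves the keyspace distance per hop, so locating something takes on the order of log2(n) hops, each one consuming bandwidth on peers with no interest in the file.

```python
import math

def lookup_cost(n_peers, decentralized):
    """Round trips to locate a file's source list (toy model).

    A central tracker answers in one query; a Kademlia-style DHT
    halves the remaining distance each hop, so ~log2(n) hops.
    """
    return math.ceil(math.log2(n_peers)) if decentralized else 1

# With a million peers:
print(lookup_cost(1_000_000, decentralized=False))  # central index: 1
print(lookup_cost(1_000_000, decentralized=True))   # DHT: 20 hops
```

The logarithmic cost is still very cheap in absolute terms, which is part of why DHTs caught on despite the overhead.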

Another area where centralized networks can be more attractive is that they can be community/interest targeted, often with their own set of rules governing how much peers must upload in relation to how much they are allowed to download.

Then we have anonymity. Popular networks such as BitTorrent and ed2k/Kademlia have no anonymity to speak of; the closest thing is protocol obfuscation, but that is aimed entirely at preventing ISP throttling.

The reason demand for anonymity has been low is that there's a very slim chance of legal repercussions when using networks like BitTorrent today, and also that anonymity measures 'waste' bandwidth.

I found it very interesting that in Japan, where online copyright breaches are much more likely to cause legal problems, the two major p2p applications (Winny, Share) are built from the ground up to be anonymous.

Obviously there's no real anonymity, given that you need to expose your IP address in order to join any p2p network; rather, the difficulty of proving what an IP downloaded is what these 'anonymous' networks are based upon. Both Winny and Share require the user to allocate a large chunk of hard drive space as an encrypted buffer, not only for the data they are interested in but also for data they relay from one peer to another. It's this relaying of data which obfuscates the source->destination IP addresses and makes it very hard to prove who downloaded what from whom. Naturally this makes for a less efficient network, as each user needs to spend bandwidth not only on what they want, but also on relaying lots of data they have no interest in.
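The relay idea can be sketched in a few lines (a toy model with hypothetical names, not Winny's or Share's actual protocol): blocks travel through randomly chosen intermediaries, so an observer watching any single link never sees the requester and the source together.

```python
import random

def relay_path(source, destination, peers, hops=2):
    """Pick random intermediate peers to relay a block through.

    Each node in the chain knows only its previous and next hop,
    so no single observed link reveals both endpoints.
    """
    candidates = [p for p in peers if p not in (source, destination)]
    relays = random.sample(candidates, hops)
    return [source, *relays, destination]

def links_seen(path):
    # What a wiretap on each individual link can observe.
    return list(zip(path, path[1:]))

peers = [f"peer{i}" for i in range(10)]
path = relay_path("peer0", "peer9", peers, hops=2)
# No single link contains both endpoints:
assert not any({a, b} == {"peer0", "peer9"} for a, b in links_seen(path))
```

Every relayed block costs the intermediaries upload bandwidth they gain nothing from, which is exactly the inefficiency described above.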

It will be interesting to see whether these types of pseudo-anonymous networks start to rise in use here as well, if harder anti-piracy measures result in more resources being spent on identifying and prosecuting online piracy.

Oops, became a bit longwinded ;)

Reply Parent Score: 3

RE[3]: "pure" P2P
by Alfman on Fri 10th Feb 2012 06:03 in reply to "RE[2]: "pure" P2P"


I didn't know about the Japanese P2P differences. Different motivations will yield different solutions.

Yes, any network that intends to obfuscate the IP addresses is going to be inherently inefficient. That seems somewhat unsolvable, at least without the cooperation of the ISPs. If they routinely reassigned IP addresses and didn't keep records, that would achieve the desired anonymity, but of course they don't do that.

"A single-purpose server in a centralized setting is more efficient at spreading necessary information to peers than a decentralized network, where more bandwidth and CPU use is required for relaying the same information."

I actually disagree: a P2P medium (one designed for efficiency, mind you, and not subjected to other compromises) should be more efficient than a centralized system. However, as I haven't actually analyzed the problem before, I'd like to do so now.

Let's say there are 2M viewers of an hour-long television program, and the entire program is 500MB. 1M watch it live; the other 1M watch it later on demand.

The required bandwidth for one stream is about 1.2Mbps; the aggregate across 1M live viewers becomes 1.2Tbps during the broadcast. Of course, you'd distribute this across many data centers, but that's still a boatload of trucks traveling through our tubes.
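Checking that arithmetic (the exact figure is just under the ~1.2Mbps rounded to above):

```python
# Back-of-the-envelope check of the per-stream rate and aggregate load.
size_mb = 500            # program size in megabytes
duration_s = 60 * 60     # one hour
viewers = 1_000_000      # live audience

stream_mbps = size_mb * 8 / duration_s        # megabits per second per viewer
aggregate_tbps = stream_mbps * viewers / 1e6  # total origin load in Tbps

print(f"{stream_mbps:.2f} Mbps per stream")   # 1.11 Mbps
print(f"{aggregate_tbps:.2f} Tbps aggregate") # 1.11 Tbps
```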

Now consider an optimized P2P network taking the form of a tree, where each peer can also upload to one or two downstream peers.

Obviously there's a whole spectrum of peer bandwidth possibilities, but to keep the example simple, assume 500k have enough upstream bandwidth to feed one other peer, 250k have enough for two other peers, and 250k have none to spare.

Let's say the entire broadcast originates from a single 10Mbps node which supports 8 direct downstream peers; let's call this level 0. For optimal topology, the peers with the best connections connect first.

Level 1 has 8 peers, 8 total
Level 2 has 16 peers, 24 total
Level 3 has 32 peers, 56 total
Level 4 has 64 peers, 120 total
Level 5 has 128 peers, 248 total
Level 6 has 256 peers, 504 total
Level 7 has 512 peers, 1016 total
Level 8 has 1024 peers, 2040 total
Level 9 has 2048 peers, 4088 total
Level 10 has 4096 peers, 8184 total
Level 11 has 8192 peers, 16376 total
Level 12 has 16384 peers, 32760 total
Level 13 has 32768 peers, 65528 total
Level 14 has 65536 peers, 131064 total
Level 15 has 131072 peers, 262136 total
On level 15, 118936 can support two downstream peers, 12136 support only one
Level 16 has 250008 peers, 512144 total
Level 17 has 250008 peers, 762152 total
On level 17, 237856 can support one downstream peer, 12152 support none
Level 18 has 237848 peers, 1M total
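The level table above can be reproduced with a short simulation (a sketch under the stated assumptions: best-connected peers join first, the root fans out to 8, and capacity pools are 250k two-slot, 500k one-slot, 250k zero-slot peers):

```python
def build_levels(total=1_000_000, root_fanout=8,
                 two_slot=250_000, one_slot=500_000):
    """Return the width of each tree level, filling best peers first."""
    levels = []
    width, placed = root_fanout, 0
    rem2, rem1 = two_slot, one_slot
    while placed < total:
        width = min(width, total - placed)   # last level may be partial
        levels.append(width)
        placed += width
        # Assign this level's peers upload capacity, best pool first.
        n2 = min(width, rem2); rem2 -= n2
        n1 = min(width - n2, rem1); rem1 -= n1
        width = 2 * n2 + n1                  # slots feeding the next level
    return levels

levels = build_levels()
for i, w in enumerate(levels, start=1):
    print(f"Level {i} has {w} peers, {sum(levels[:i])} total")
print(f"Depth {len(levels)}; worst-case lag ~{len(levels) * 0.5:.0f}s at 500ms/level")
```

Running it reproduces the table exactly (131072 on level 15, 250008 on levels 16-17, 237848 on level 18) and confirms the 9s worst-case latency used below.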

Assuming a pessimistic latency of 500ms per level, those on level 18 lag the broadcast by 9s, which could be considered good or bad; considering the initial 10Mbps uplink, though, I think it's great. And assuming these streams are being recorded, they would also be sufficient to serve the files to the folks who watch the program later on demand.

I see my estimates for video bandwidth are way too high: looking at the bandwidth used for YouTube streams, even the HD videos are only 450kbps. Factoring in the smaller load on peers, the tree fanout above would be much greater than the binary division I've illustrated, making the tree roughly half as deep, but I'll leave it as is.

A real streaming network would have to account for dynamic conditions, less ideal topologies, and fault tolerance; however, I think my numbers were pessimistic enough to leave some wiggle room. Large broadcasters would have more initial distribution points anyway.

To tie this all back into the discussion of efficiency: not only does P2P significantly lower the burden on the distributor, but having peers transfer amongst each other inside ISP networks, off the internet backbone, helps too. This should probably be factored into the protocol.

I envision a P2P set-top box one day that works just like this: if my neighbor's box has a show I want, my box will just download it from his. On the other hand, services running on the edge of the network are probably the opposite of what companies like MS and Google want, since they make more money as centralized gatekeepers. I believe that if the media conglomerates weren't holding us back, we'd already have far better content distribution systems than the industry has been willing to build.

"Oops, became a bit longwinded ;) "

Also guilty, but I love CS topics!

Reply Parent Score: 2

RE[4]: "pure" P2P
by Valhalla on Sun 12th Feb 2012 06:53 in reply to "RE[3]: "pure" P2P"

"I actually disagree: a P2P medium (one designed for efficiency, mind you, and not subjected to other compromises) should be more efficient than a centralized system."

I'm sceptical. A centralized system knows of all peers in its network as well as all files (assuming we are talking about a file-sharing network), and can perform the necessary information transaction as a simple server->node operation. Meanwhile, in a network where every participant takes on the role of 'server', the operation of (for example) finding other nodes sharing a certain file requires sending out a message that traverses the network, using bandwidth of nodes which have no interest in that file but still need to participate in the search.

Glancing over your example, it seems to cover only the efficiency of spreading a file amongst several peers once all peers have been identified.

The increased bandwidth use I described in decentralized networks comes from locating other nodes holding the information you are interested in (i.e. traversing the network). Once those nodes have been located, it will be no less efficient than a centralized network, as transfers go directly to the identified nodes without involving the rest of the network.
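That search cost can be put into rough numbers with a toy model (a sketch, not any real protocol's figures): a naive flood of degree d with a time-to-live of k generates on the order of d^k messages, every one consuming bandwidth on peers that don't care about the file, while a central index answers the same question with a single query.

```python
def flood_messages(degree, ttl):
    """Messages generated by naive flooding search (toy model).

    Each peer forwards the query to `degree` neighbours until the
    TTL expires, so the total is degree + degree^2 + ... + degree^ttl.
    """
    return sum(degree ** hop for hop in range(1, ttl + 1))

print(flood_messages(degree=5, ttl=5))  # 3905 messages to search one neighbourhood
# versus a single server->node query on a centralized network.
```

Structured overlays like Kademlia reduce this to logarithmic cost, but the asymmetry against a central index remains.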

I certainly won't argue against the efficiency of distributing a file through a p2p network where peers can spread parts of the file between themselves as soon as they have them, rather than waiting for full file completion, but that's not what I was discussing; I was comparing the bandwidth efficiency of centralized vs decentralized networks.

I believe we are discussing different things.

Reply Parent Score: 2