Linked by Thom Holwerda on Wed 8th Feb 2012 23:15 UTC
Internet & Networking "While the file-sharing ecosystem is currently filled with uncertainty and doubt, researchers at Delft University of Technology continue to work on their decentralized BitTorrent network. Their Tribler client doesn't require torrent sites to find or download content, as it is based on pure peer-to-peer communication. 'The only way to take it down is to take the Internet down,' the lead researcher says." In a way, the efforts by Hollywood and the corrupt US Congress are actually increasing the resiliency of peer-to-peer technology. Karma.
RE[7]: "pure" P2P
by Alfman on Mon 13th Feb 2012 03:38 UTC in reply to "RE[6]: "pure" P2P"

Valhalla,

I wish we had more time to discuss this, but I'd like to leave one last thought with regard to your conclusion here:

"When you search for a file in a decentralized network you will send the file search request to the peers you know and they in turn will have to propagate that search to peers they know and onwards troughout the network. Peers which have no interest whatsoever will have use bandwidth to further your requests and return results which leads to a less efficient use of bandwidth than in the aforementioned centralized server setting and is also slower."

Hypothetically, we could design the network such that each peer only has to handle a subset of the searches. When a peer joins the network, it picks a random number (let's say 0-255) called the "query subset" and advertises this number to its peers. It's then only responsible for answering queries corresponding to that subset - it won't receive queries for any other subset.

Just as a simple example of how one might distribute the queries, we could take an 8-bit CRC of each keyword, and then only forward the query to peers advertising the corresponding query subset. This way, the number of peers to search for any given keyword would be divided by a factor of 256.
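To make the routing idea concrete, here's a rough Python sketch of what I mean. The 8-bit CRC is stood in for by CRC-32 truncated to its low byte (any stable 8-bit digest would do), and the peer addresses and the known_peers structure are purely made up for illustration:

import zlib

NUM_SUBSETS = 256  # the granularity proposed above: one byte of keyspace

def query_subset(keyword):
    # The idea is an 8-bit CRC of each keyword; here CRC-32 truncated to
    # its low byte stands in for it, since any stable 8-bit digest works.
    return zlib.crc32(keyword.lower().encode("utf-8")) % NUM_SUBSETS

def peers_for_query(keyword, known_peers):
    # known_peers maps peer address -> the subset number that peer advertised
    # when it joined (both the addresses and the structure are hypothetical).
    subset = query_subset(keyword)
    return [addr for addr, s in known_peers.items() if s == subset]

# Only peers that advertised the matching subset ever see this query.
known_peers = {"10.0.0.5:6881": 17, "10.0.0.9:6881": 204, "10.0.0.12:6881": 17}
print(query_subset("ubuntu"), peers_for_query("ubuntu", known_peers))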

Any resources we want to publish that don't correspond to our own query subset would need to be published through a delegate peer having the correct query subset.

Clearly there's more initial overhead for peers publishing through a delegate; however, this initialization can be done in bulk transfers and far less frequently than the routine searches taking place.
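Here's a quick sketch of that bulk publish step, building on the query_subset() function above. The keywords list and known_peers mapping are again just illustrative placeholders, not a real protocol:

from collections import defaultdict

def plan_publications(my_address, my_subset, keywords, known_peers):
    # keywords: the search terms this peer wants its content to be found under.
    # known_peers: peer address -> advertised subset, as in the sketch above.
    # Returns {delegate address: [(keyword, my_address), ...]} so that all the
    # announcements bound for one delegate can go out in a single bulk transfer.
    outgoing = defaultdict(list)
    for keyword in keywords:
        subset = query_subset(keyword)
        if subset == my_subset:
            continue  # we answer these queries ourselves, no delegate needed
        delegate = next((a for a, s in known_peers.items() if s == subset), None)
        if delegate is not None:
            # The delegate only indexes the keyword; downloads still come to us.
            outgoing[delegate].append((keyword, my_address))
    return dict(outgoing)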

On a sufficiently large network, the distribution of peers across query subsets is hopefully decent. But instead of just assuming that will be the case, an improvement could be for peers to dynamically take on under-represented subsets and give up over-represented ones. This way a peer's query subset would be defined by a 256-bit mask rather than a single 0-255 number. On a very small network, each peer would take on several subsets, while on a very large network each would only need to service one query subset.
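Here's roughly how I picture the rebalancing with a 256-bit mask. The mask format and the thresholds are arbitrary choices I made up for the example:

def subset_coverage(peer_masks):
    # peer_masks: peer address -> 256-bit integer mask, bit i set meaning that
    # peer answers queries for subset i (a hypothetical advertisement format).
    counts = [0] * 256
    for mask in peer_masks.values():
        for i in range(256):
            if mask & (1 << i):
                counts[i] += 1
    return counts

def rebalance(my_mask, peer_masks, low=2, high=8):
    # Volunteer for the least-covered subset when it drops below `low` peers,
    # and shed one of ours once it is covered by more than `high` peers.
    counts = subset_coverage(peer_masks)
    starved = min(range(256), key=lambda i: counts[i])
    if counts[starved] < low:
        my_mask |= 1 << starved
    for i in range(256):
        if (my_mask >> i) & 1 and counts[i] > high:
            my_mask &= ~(1 << i)
            break
    return my_mask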

Whatcha think?

Edit: I don't think anyone would misinterpret me this way, but just in case... the search delegate would only be responsible for answering queries on behalf of the original peer. But the file contents would still be served directly from the original peer and not the delegate.

Also, it's not obvious that 256 is an optimal granularity. I can think of other algorithms that might automatically subdivide the search space into arbitrarily small pieces, but that complexity might have its own overhead, and if the pieces are too small then we become very dependent upon connectivity to individual peers. So care must be taken to balance out competing factors.
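Just to illustrate one way that finer subdivision could work (not something I've thought through in detail): peers could claim variable-length bit prefixes of a keyword hash, so the pieces can get arbitrarily small. A quick sketch, with SHA-1 as a stand-in hash:

import hashlib

def keyword_bits(keyword):
    # Full hash of a keyword as a bit string; SHA-1 is just a stand-in here.
    digest = hashlib.sha1(keyword.lower().encode("utf-8")).digest()
    return "".join(format(byte, "08b") for byte in digest)

def responsible(peer_prefix, keyword):
    # A peer claims a variable-length bit prefix of the hash space and is
    # responsible for a keyword iff the keyword's hash starts with that prefix.
    # Longer prefixes mean smaller pieces of the search space.
    return keyword_bits(keyword).startswith(peer_prefix)

# A 1-bit prefix covers half the keyspace; a 10-bit prefix roughly 1/1024 of
# it, which is where the reliance on individual peers starts to bite.
print(responsible("0", "ubuntu"), responsible("0101010101", "ubuntu"))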

Edited 2012-02-13 03:53 UTC
