“While the file-sharing ecosystem is currently filled with uncertainty and doubt, researchers at Delft University of Technology continue to work on their decentralized BitTorrent network. Their Tribler client doesn’t require torrent sites to find or download content, as it is based on pure peer-to-peer communication. ‘The only way to take it down is to take the Internet down,’ the lead researcher says.” In a way, the efforts by Hollywood and the corrupt US Congress are actually increasing the resiliency of peer-to-peer technology. Karma.
I keep hearing about these “pure” P2P systems. What I don’t understand is, how does the client know what peers it can connect to? Wouldn’t it need to get some sort of list from a central location at least initially?
Can anyone explain this?
From the article:
One thing that could theoretically cause issues, is the capability for starting users to find new peers. To be on the safe side the Tribler team is still looking for people who want to act as so called bootstraptribler peers. These users will act as superpeers, who distribute lists of active downloaders.
Who do you trust?
If a single superpeer is compromised every downloader is exposed.
Every uploader as well, for all practical purposes.
That is a fair point – but the design of Tribler does not address anonymity at all. It doesn’t even try to hide itself or its peers – it simply removes the reliance on centralized servers to index the torrents.
Ok, but it still relies on super peers. If the super peers are all down, no one can connect to the other peers. I’m not really sure how you find super peers, but if you can do it, so can those who don’t want file sharing. It wouldn’t be difficult to just dynamically block access to all/any super peers.
If you block access to a superpeer, new superpeers emerge. It’s kind of easy to get the IPs of superpeers published (use some random website, email, Twitter, Facebook, Jabber, whatever you want).
Now, it’s going to be really hard to shut down that kind of network. Of course it’s possible, but it would require a freaking huge amount of monitoring and analysis. Plus, most likely, the shutdowners will be one step behind the filesharers, so it’s kind of an endless game. So yeah, one of the only ways to efficiently shut down such a network would be to just shut down the internet.
There was a paper the other day about a protocol that morphs dynamically so it cannot be recognized by deep packet inspection tools. Add this protocol to Tribler and you get something really, really strong.
I did not get the impression that it “relies” on superpeers. The way I read it was that, in order to solve the bootstrap problem, a small group of “always up” peers would be used to help the network get going.
This simply addresses the bootstrap issue – once you have a working mesh network, all it takes to join it is knowing the IP address of a single peer (any peer), and as long as that peer has seen a few other peers you’re golden. I am assuming clients will “remember” peers they have connected with – so essentially once a client has joined the network it no longer really needs the superpeers at all.
Think of it like the phone network. Superpeers are just 411 (directory services). If you know the number for directory services, you can look up phone numbers – but once you know a person’s number you don’t need 411 anymore.
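To make the “411 only once” idea concrete, here is a minimal sketch (the cache file name and superpeer addresses are placeholders, not anything from Tribler) of how a client could remember peers it has seen and only fall back on the bootstrap superpeers when its own cache turns up nothing:

```python
# Minimal sketch of the bootstrap idea described above (all names hypothetical):
# try peers remembered from previous sessions first, and only fall back to the
# well-known "superpeer" addresses when the local cache yields nothing.
import json
import os

CACHE_FILE = "known_peers.json"               # hypothetical on-disk peer cache
BOOTSTRAP_PEERS = ["203.0.113.10:6881",       # placeholder "superpeer" addresses
                   "198.51.100.7:6881"]

def load_cached_peers():
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return []

def save_cached_peers(peers):
    with open(CACHE_FILE, "w") as f:
        json.dump(sorted(set(peers)), f)

def bootstrap(connect):
    """connect(addr) -> list of that peer's known peers, or raises on failure."""
    known = load_cached_peers()
    for addr in known + BOOTSTRAP_PEERS:      # cached peers first, "411" as fallback
        try:
            more = connect(addr)
        except OSError:
            continue
        known = list(set(known) | set(more) | {addr})
        save_cached_peers(known)
        return known                          # one live peer is enough to join
    return []                                 # nothing reachable; can't join yet
```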
Bill Shooter of Bul,
“Ok, but it still relies on super peers. If the super peers are all down, no one can connect to the other peers. I’m not really sure how you find super peers, but if you can do it, so can those who don’t want file sharing. It wouldn’t be difficult to just dynamically block access to all/any super peers.”
That’s only if the network requires “super peers” in the first place. If there are no super peer designations, then censorship of the network would require blocking all peers. This may not be impossible, but it is much more difficult than blocking a smaller list of super peers.
Requiring a super peer makes the P2P model more of a hybrid than “pure P2P” in my opinion.
werterr,
“To be on the safe side the Tribler team is still looking for people who want to act as so called bootstraptribler peers. These users will act as superpeers, who distribute lists of active downloaders.”
Granted I don’t know the details of Tribler at all, but I can’t see a technical reason the bootstrap peers must be any different from ordinary peers. Any peer with a static IP should do just fine for getting peers onto the network.
If there is some special task that these super peers will need to do, then that could lead to security trouble. Ideally peers don’t exchange any information about other peers except what’s required for connectivity purposes. This opacity would be good for privacy, but not so good for statistical analysis of the network.
“Pure P2P” systems need to be boot strapped somehow.
Once bootstrapped with some initial peers, the network can expand by itself to learn new peers. As long as enough peers are online, the network has a very good chance to recover itself.
The old Freenet network was an example of this type of design. However, the fact that the network exchanged peer information to repair and optimize itself implies that an attacker can join the network and build a list of peers over time. This made it possible to collect the IPs of peers in the network, which was considered a security problem in regimes like China, where the simple fact of running anti-censorship technology can land someone in trouble.
They developed a new Freenet protocol and called it the “darknet”. The principal difference is that this protocol does not exchange peers; the user must enter “trusted” peers manually. This has/had so many obvious scalability problems that it was a terrible idea from the get-go in my opinion, but it was supposed to let peers operate with much better confidence that no one outside the trusted peers would know they’re part of the network. So the state wouldn’t have a way to identify peers by simply joining the network.
In practice, the Freenet darknet between anonymous users is practically useless, because users go to a clear IRC channel to exchange peer lists, which is far less secure than the previous Freenet since it leaks even more information than before. It also tends to create very long, if not completely broken, routes between members who exchange peer information in the IRC channels at different times.
I’d be interested in hearing anyone else’s take on this subject.
Darknets for file sharing are simply trust networks – they are only as trustworthy as the people you let into them. As such, it is all rather pointless to me, since they eventually succumb to their own popularity – once you reach the point that you no longer know everyone you can no longer trust it.
It’s fine to a point for a small group of peers who actually do know each other – but then you never really gain the advantages you have with large P2P networks (namely diverse content and multiple seeders to speed up downloads).
Tribler does not seem to even try to behave like a darknet. There is no address anonymity as far as I can see – it is simply decentralized. You would of course need a few “superpeers” to bootstrap things, but once it got going it would be self-maintaining. That is the point I think – not address anonymity. It’s not really anything like Freenet, where anonymity is actually the primary goal.
galvanash,
“That is the point I think – not address anonymity. It’s not really anything like Freenet, where anonymity is actually the primary goal.”
Yes, I was trying to highlight the two different levels of anonymity using Freenet as an example. A decentralized P2P network shouldn’t be a “darknet” if it’s to be scalable. However it’s still possible to protect the privacy of the traffic so third parties don’t know what’s being transferred between known peers.
Yes, I find this interesting as well, particularly the decentralized bit. I read up on the P2P methodologies a while back, with torrents, ed2k and Direct Connect being examples of centralized networks and Kademlia, DHT (partly), Winny and Share being examples of decentralized networks.
Centralized networks rely on a server which provides vital information necessary for file sharing. This is quite efficient but also a huge vulnerability, as the network is totally dependent on these servers operating: if they go down, so does the network functionality.
Decentralized networks are those where each peer takes on part of the burden handled entirely by a server in a centralized setting. They therefore have no central point of functionality, so the network can lose any peer and still continue to function as before.
From a network robustness standpoint it’s obvious that decentralized networks are better, but there are, as always, other factors, such as efficiency. A single-purpose server in a centralized setting is more efficient at spreading the necessary information to peers than a decentralized network, where more bandwidth/CPU is required to relay the same information.
Another area where centralized networks can be more attractive is that they can be community/interest targeted, often with their own set of rules governing how much peers must upload relative to how much they are allowed to download.
Then we have anonymity. Popular networks such as BitTorrent and ed2k/Kademlia have no anonymity to speak of; the closest thing is protocol obfuscation, but that is targeted entirely at preventing ISP throttling.
The reason the demand for anonymity has been low is that there’s a very slim chance of legal repercussions while using networks like BitTorrent today, and also that anonymity measures ‘waste’ bandwidth.
I found it very interesting that in Japan, where online copyright breaches are much more likely to cause legal problems, the two major P2P applications (Winny, Share) are built from the ground up to be anonymous.
Obviously there’s no real anonymity, given that you need to expose your IP address in order to join any P2P network; however, the difficulty of proving what an IP downloaded is what these ‘anonymous’ networks are based upon. Both Winny and Share require the user to allocate a large chunk of hard-drive space as an encrypted buffer, not only for the data they are interested in but also for data they will be relaying from one peer to another. It’s this relaying of data which obfuscates the source->destination IP addresses and makes it very hard to prove who downloaded what from whom. Naturally this makes for a less efficient network, as each user needs to spend bandwidth not only on what they want but also on relaying lots of data they have no interest in.
It will be interesting to see whether these types of pseudo-anonymous networks will start to rise in use here as well if we end up with harder anti-piracy measures resulting in more resources being spent on identifying and prosecuting online piracy.
Oops, became a bit longwinded
Valhalla,
I didn’t know about the Japanese P2P differences. Different motivations will yield different solutions.
Yes, any network that intends to obfuscate the IP addresses is going to be inherently inefficient. That seems somewhat unsolvable, at least without the cooperation of the ISPs. If they routinely reassigned IP addresses and didn’t keep records, that would achieve the desired anonymity, but of course they don’t do that.
“A single-purpose server in a centralized setting is more efficient at spreading the necessary information to peers than a decentralized network, where more bandwidth/CPU is required to relay the same information.”
I actually disagree, a P2P medium (one which is designed for efficiency mind you, and hasn’t been subjected to other compromises) should be more efficient than a centralized system. However, as I haven’t actually analyzed the problem before, I’d like to do so now.
Let’s say there are 2M viewers watching an hour long television program. The entire program is 500MB. 1M watching it live, the other 1M watch it later on demand.
The required bandwidth for one instance is about 1.2Mbps. The aggregate across 1M active viewers becomes 1.2Tbps during the live broadcast. Of course, you’ll have to distribute this across many different data centers, but that’s a boatload of trucks traveling through our tubes.
Now consider an optimized P2P network taking the form of a tree where peers can also download from one or two upstream peers.
Obviously there’s a whole spectrum of peer bandwidth possibilities, but to keep the example simple: assume 500K have enough upstream bandwidth for one other peer, 250k have enough upstream bandwidth for two other peers, 250k have no upstream bandwidth for any peers.
Let’s say the entire broadcast originates from a single 10Mbps node which supports 8 direct downstream peers; let’s call this level 0. For optimal topology, peers with the best connections connect first.
Level 1 has 8 peers, 8 total
Level 2 has 16 peers, 24 total
Level 3 has 32 peers, 56 total
Level 4 has 64 peers, 120 total
Level 5 has 128 peers, 248 total
Level 6 has 256 peers, 504 total
Level 7 has 512 peers, 1016 total
Level 8 has 1024 peers, 2040 total
Level 9 has 2048 peers, 4088 total
Level 10 has 4096 peers, 8184 total
Level 11 has 8192 peers, 16376 total
Level 12 has 16384 peers, 32760 total
Level 13 has 32768 peers, 65528 total
Level 14 has 65536 peers, 131064 total
Level 15 has 131072 peers, 262136 total
On level 15, 118936 can support two downstream peers, 12136 support only one
Level 16 has 250008 peers, 512144 total
Level 17 has 250008 peers, 762152 total
On level 17, 237856 can support one downstream peer, 12152 support none
Level 18 has 237848 peers, 1M total
Assuming a pessimistic latency of 500ms per level, those on level 18 have a latency of 9s, which could be considered good or bad; however, considering the initial 10Mbps uplink, I think it’s great. And assuming these streams are being recorded, they would also be sufficient to send the files to the folks who watch the program later on demand.
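For anyone who wants to check the arithmetic, here is a short sketch that reproduces the level table above from the stated assumptions (an 8-slot source, 250k peers with two upload slots, 500k with one, 250k with none); the numbers and the 500ms-per-level latency come straight from this example, nothing else is implied about a real protocol:

```python
# Reproduces the level table above from the stated assumptions:
# a source with 8 slots, then peers attach best-connected first
# (250k peers with 2 upload slots, 500k with 1, 250k with 0).
TOTAL = 1_000_000
capacities = [2] * 250_000 + [1] * 500_000 + [0] * 250_000  # best peers join first

placed = 0
slots = 8                    # level 0: the originating 10 Mbps node
level = 0
while placed < TOTAL:
    level += 1
    width = min(slots, TOTAL - placed)            # peers that fit on this level
    joining = capacities[placed:placed + width]
    placed += width
    slots = sum(joining)                          # upload slots they contribute
    print(f"Level {level:2d}: {width:7d} peers, {placed:7d} total")

print(f"Worst-case latency at 500 ms/level: {level * 0.5:.1f} s")
```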
I see my estimates for video bandwidth are way too high; looking at the bandwidth used for YouTube streams, even the HD videos are only 450kbps. Factoring in the smaller load on peers, the tree fanout above would be much greater than the binary division I’ve illustrated, making the tree half as deep, but I leave it as is.
A real streaming network would have to account for dynamic conditions, less ideal topologies, and fault tolerance, however I think my numbers were pessimistic enough to leave some wiggle room too. Large broadcasters would have more initial distribution points anyways.
To tie this all back into the discussion of efficiency, not only does P2P significantly lower the burden for the distributor, but having peers transfer amongst each other inside ISP networks, off the internet backbone, helps too. This should probably be factored into the protocol.
I envision a P2P set-top box one day that works just like this. If my neighbor’s box has a show I want, my box will just download it from his. On the other hand, services running on the edge of the network are probably the opposite of what companies like MS & Google want, since they make more money being centralized gatekeepers. I believe that if the media conglomerates weren’t holding us back, we’d already have the technology to build far better content distribution systems than the industry has been willing to do.
“Oops, became a bit longwinded ”
Also guilty, but I love CS topics!
I’m sceptical. A centralized system knows of all peers in its network as well as all files (assuming we are talking about a file-sharing network) and can compose the necessary information transaction as a simple server->node operation. Meanwhile, in a network where every participant takes on the role of ‘server’, the operation of (for example) finding other nodes sharing a certain file requires sending out a message that traverses the network, using bandwidth of other nodes which have no interest in that file but which still need to participate in the search.
Glancing over your example it seems it just covers the actual efficiency of spreading a file amongst several peers once all peers have been identified.
The increased use of bandwidth I described in decentralized networks comes from locating other nodes with information you are interested in (as in traversing the network). Once those nodes have been located it will be no more inefficient than a centralized network, as it will be direct transfers with the identified nodes without involving the rest of the network.
I certainly won’t argue against the efficiency of distributing a file through a P2P network where peers can spread parts of the file between themselves once they have them, rather than waiting for full file completion, but that’s not what I was discussing; I was comparing the bandwidth efficiency of centralized vs decentralized networks.
I believe we are discussing different things.
“Glancing over your example it seems it just covers the actual efficiency of spreading a file amongst several peers once all peers have been identified.”
Fair enough, however in my example, what consumes the majority of our resources? Traffic identifying peers, or transferring the content? Consider that identifying 1M peers would only take approximately 1GB of bandwidth (allowing for a few packets of a couple hundred bytes each), whereas the media itself might take 100x that every second when distributed from centralized sources.
Clearly a centralized server is fundamental in solving the bootstrapping problem, but once the peers are bootstrapped I think P2P can take it from there. For protocols like BitTorrent, the server is polled every 5-25 minutes to update the peers for a given resource, but other networks like KAD manage this through peers alone. I think either approach can work.
“Once those nodes have been located it will be no more inefficient than a centralized network, as it will be direct transfers with the identified nodes without involving the rest of the network.”
I’m trying to understand whether you mean all clients downloading directly from the central servers? Or using the servers to coordinate the P2P downloads among the clients?
If I were downloading a 25K file, the P2P setup traffic could be significant. But do you agree that when those setup costs are amortized over a 25MB payload or more, that overhead shouldn’t be significant?
Out of curiosity, I just opened up aMule and it itemizes the KAD overhead if you’re interested. It seems to be only a few KB per GB of transferred data, which I think you’ll agree is marginal.
“…I was comparing the bandwidth efficiency of centralized vs decentralized networks.”
“I believe we are discussing different things.”
Indeed. I’m afraid I’m not understanding the distinction. I suppose a P2P network might use more aggregate bandwidth throughout the system; however, ideally that traffic stays within ISPs, off the backbone of the internet, where efficiency is most crucial. Once there’s a peer at every ISP, in theory the number of peers could increase by several orders of magnitude without dramatically increasing backbone bandwidth, provided the P2P network is not dependent upon centralized servers.
Of course, there’s too much hand waving here to make concrete assertions, but I think we can agree it would make for an interesting research topic.
Certainly content transfers. I was just saying that a centralized network will be more efficient (in bandwidth and time spent) at relaying the information needed to locate peers with content you want than a decentralized network.
No argument here; Kademlia, which you mentioned, seems to be a very effective decentralized network, and from what I gather the eMule/aMule network would be dead without Kademlia since the servers have been taken down.
Again no argument.
No, I was talking about once you have located other peers on a decentralized network which have data you are interested in. Once that is done, it’s just as effective as a centralized network, since it’s all peer-to-peer transfer of the relevant data. It’s locating the peers with the data you want that is less efficient compared to a centralized network.
Perhaps a better explanation: when you connect to a centralized network you connect to some sort of server, to which you identify yourself and, in a file-sharing network, also the files you share. When you want a file you contact the server, which knows all peers/files in the network (since all peers have to identify themselves to it in order to gain access to the network), and it will return a list of peers with that file.
You can then connect to these peers with or without further involvement of the server. This is fast and efficient (assuming of course that the server can handle the load of file/peer requests).
When you search for a file in a decentralized network you will send the search request to the peers you know, and they in turn will have to propagate that search to the peers they know, and onwards throughout the network. Peers which have no interest whatsoever will have to use bandwidth to forward your requests and return results, which leads to a less efficient use of bandwidth than in the aforementioned centralized server setting and is also slower.
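A toy contrast of the two lookup styles being described, purely as a sketch (the data structures are made up and don’t correspond to any real protocol): a central index answers in one round trip, while a decentralized search floods the request through neighbours that have no interest in the file themselves:

```python
# Toy contrast of the two lookup styles described above (structures hypothetical).

# Centralized: the server already knows every peer's file list.
central_index = {"song.ogg": ["peerA", "peerC"]}

def central_lookup(filename):
    return central_index.get(filename, [])            # one server->node exchange

# Decentralized: ask neighbours, who ask their neighbours, up to a hop limit.
neighbours = {"me": ["peerA", "peerB"], "peerA": ["peerC"],
              "peerB": ["peerD"], "peerC": [], "peerD": []}
shared_files = {"peerA": {"song.ogg"}, "peerC": {"song.ogg"},
                "peerB": set(), "peerD": set(), "me": set()}

def flood_search(filename, node="me", ttl=3, seen=None):
    seen = seen if seen is not None else set()
    if node in seen or ttl < 0:
        return []
    seen.add(node)                          # every visited peer spends bandwidth
    hits = [node] if filename in shared_files[node] else []
    for n in neighbours[node]:
        hits += flood_search(filename, n, ttl - 1, seen)
    return hits

print(central_lookup("song.ogg"))   # ['peerA', 'peerC'] in a single round trip
print(flood_search("song.ogg"))     # same answer, but every peer was bothered
```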
That said, I still think we are in full agreement, as I am in no way against decentralized networks; in fact, I think their advantages outweigh the disadvantages, and if I were to design a P2P network it would be decentralized, likely based upon Kademlia.
And yes it’s certainly an interesting research topic!
Valhalla,
I wish we had more time to discuss this, but I’d like to leave with one last thought with regards to your conclusion here:
“When you search for a file in a decentralized network you will send the search request to the peers you know, and they in turn will have to propagate that search to the peers they know, and onwards throughout the network. Peers which have no interest whatsoever will have to use bandwidth to forward your requests and return results, which leads to a less efficient use of bandwidth than in the aforementioned centralized server setting and is also slower.”
Hypothetically, we could design the network such that each peer only has to handle a subset of the searches. When a peer joins the network, it picks a random number (let’s say 0-255) called the “query subset” and advertises this number to peers; then it’s only responsible for answering queries corresponding to this subset – it won’t receive queries for any other subsets.
Just as a simple example of how one might distribute the queries, we could do an 8bit CRC of each keyword, and then only forward the queries to peers matching the corresponding query subset. This way, the number of peers to search for any given keyword would be divided by a factor of 256.
Any resources we want to publish that don’t correspond to our own query subset would need to be published through a delegate peer having the correct query subset.
Clearly there’s more initial overhead for peers publishing through a delegate, however this initialization can be done in bulk transfers and with far less frequency than the routine searches taking place.
On a sufficiently large network, the distribution of peers handling each query subset is hopefully decent. But instead of just assuming that will be the case, an improvement could be for peers to dynamically take on under-represented subsets and give up over-represented ones. This way a peer’s query subset would be defined by a 256-bit mask rather than a 0-255 number. On a very small network, each peer would take on several subsets, while on a very large network they’d only need to service one query subset.
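Here is a rough sketch of that idea – purely hypothetical, not anything Tribler actually does: the low 8 bits of a CRC-32 stand in for the 8-bit keyword CRC, a peer’s subset is a set of bucket numbers (so a peer on a small network can volunteer for several), and publishing goes through a delegate when the peer doesn’t own the keyword’s bucket:

```python
# Sketch of the "query subset" idea above (all structures hypothetical):
# each peer advertises which of the 256 keyword buckets it answers for, and a
# query is only forwarded to peers whose bucket set covers the keyword's bucket.
import random
import zlib

NUM_SUBSETS = 256

def keyword_bucket(keyword: str) -> int:
    # Low 8 bits of CRC-32 stand in for the 8-bit CRC mentioned above.
    return zlib.crc32(keyword.lower().encode()) % NUM_SUBSETS

class Peer:
    def __init__(self, name):
        self.name = name
        # A membership set rather than a single number, so a peer on a small
        # network can volunteer for several under-represented buckets.
        self.buckets = {random.randrange(NUM_SUBSETS)}
        self.index = {}                      # bucket -> {keyword: [resources]}

    def publish(self, keyword, resource, peers):
        b = keyword_bucket(keyword)
        # Publish through a delegate if we don't own this bucket ourselves.
        target = self if b in self.buckets else next(
            (p for p in peers if b in p.buckets), None)
        if target is not None:
            target.index.setdefault(b, {}).setdefault(keyword, []).append(resource)

    def query(self, keyword, peers):
        b = keyword_bucket(keyword)
        results = []
        for p in [self] + peers:
            if b in p.buckets:               # only ~1/256 of peers are asked
                results += p.index.get(b, {}).get(keyword, [])
        return results
```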
Whatcha think?
Edit: I don’t think anyone would misinterpret me this way, but just in case… the search delegate would only be responsible for answering queries on behalf of the original peer. But the file contents would still be served directly from the original peer and not the delegate.
Also, it’s not obvious that 256 is an optimal granularity. I can think of other algorithms that might automatically subdivide the search space into arbitrarily small pieces, but that complexity might have its own overhead, and if the pieces are too small then we become very dependent upon connectivity to individual peers. So care must be taken to balance out competing factors.
It is possible to have a pure peer-to-peer protocol where initial communication from a new node would use broadcast/pseudo-random IP “pinging” on specified communication port(s). Depending on the total number of connected nodes, it may take some time to stumble onto another node, but once another node is found, the initial node would be inserted into a graph. At this point nodes can exchange their sets of known-good addresses of other nodes they have found. In order to keep the spam down, once the node graph reaches a certain size, the nodes can slow down the discovery rate. So, as more nodes join the graph, each node would send discovery messages at a slower rate x^n. Thus discovery will never stop, but it will keep spam down.
The more popular this protocol, the more nodes/users exist in the IP space, making it faster to stumble upon another node, and the faster all of the partial graphs would join into a single connected network.
This would be a true P2P network.
RMSe17,
“It is possible to have a pure peer-to-peer protocol where initial communication from a new node would use broadcast/pseudo-random IP ‘pinging’ on specified communication port(s).”
An internet-wide broadcast seems like a bad idea to me. First of all, it doesn’t make sense to use a fixed port for this kind of P2P protocol (NAT users often don’t have a choice about their port anyways). So the search space is really a cross product between the public IPs which could be running the P2P client and the ports it could be running on. This is essentially tantamount to each P2P user doing an internet-wide port scan.
If it’s not clear why that’s a bad idea, let’s just guesstimate some numbers. There are around 46 bits of search space. Assuming there are 10M publicly reachable nodes distributed randomly in the search space, what are the odds of reaching another peer?
10M/70,368,744,177,664 = 1 chance in 7,036,874
So let’s just say on average it takes 3.5M packets to reach one’s first peer. Let’s say these raw packets are 200 bytes apiece; that’s 700MB of bandwidth just to find one peer, and this is assuming no retries are needed due to heavy loads, dropped packets, or peers which aren’t running 24/7.
All this just for one user of one P2P client. What if all decentralized systems located peers this way? Everyone on the internet would receive useless background packets connecting to random ports.
Let’s not even contemplate this scheme on IPv6.
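For reference, the back-of-the-envelope numbers above in a few lines (the 46-bit search space, 10M reachable nodes, and 200-byte probes are simply the assumptions from this post, nothing more):

```python
# Back-of-the-envelope check of the numbers above (IPv4 only; the 46-bit figure,
# 10M reachable nodes, and 200-byte probes are the assumptions from the post).
search_space = 2 ** 46            # ~public IPv4 addresses x plausible ports
reachable_nodes = 10_000_000
packet_bytes = 200

odds = search_space / reachable_nodes
expected_probes = odds / 2        # on average, hit a peer halfway through
wasted_bytes = expected_probes * packet_bytes

print(f"1 chance in {odds:,.0f} per probe")
print(f"~{expected_probes / 1e6:.1f}M probes, ~{wasted_bytes / 1e6:.0f} MB "
      f"just to find the first peer")
```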
No, it’s better for most users to bootstrap it by getting a list from the source where they found the software. Also, anyone could publish an independent & random bootstrap list for new users to add as well.
Skynet ?
I could be misreading this, but isn’t this fairly similar to the concept behind the gnutella protocol? I’m curious if they might run into the same spam/false file problems.
Sounds to me like it is.
What they need to do is re-implement torrent sites (and the associated mechanisms for trusted torrent publishers and minimally-broken community moderation) on top of a decentralized network. (Which will be more difficult, since it always is when there are no users with authoritative admin/mod/god powers to police all other moderation.)
Re: P2P spam.
Ideally, the P2P community members could vote/rank the resources by quality and those rankings could be viewed by everyone. With the right kinds of digital signatures, I believe it would be possible to aggregate the data in a verifiable way. However, I see no way to prevent nefarious users from generating unlimited signatures and “stuffing the ballot box”. At best, our digital signatures could be made deliberately expensive to compute in order to slow them down. An alternative might be to limit the voting to once per IP, but that has its own problems.
There’s another possibility, again looking to Freenet… people can publish their own digitally signed channels and post them to the P2P network such that the distribution network is totally decentralized, but the individual channels are still completely controlled by their owners, who possess the signing keys. Therefore, some channel owners would publish lists of media which they’ve vetted (i.e. scanned for viruses, spam, etc.) and other users would look to those channels to find what they want without the spam. In this regard it wouldn’t be too different from centralized torrent sites today.
Another idea about dealing with P2P spam…
It might be possible to weight the votes by how closely each key’s past votes aligned with your own. This would be sort of like the “other users who share your interests also enjoyed…” feature on Netflix.
This way, the spammers who rapidly generate fraudulent signing keys to sign spam would have zero weight/credibility in your aggregate because they never voted in agreement with your votes in the past. So, this way a spammer could generate 1M duplicate spam votes and still not have any significant weight in your aggregate.
In theory the spammer might deliberately vote up a couple non-spam resources in order to gain credibility for a future spam resource, but consider what would happen when many spammers use this strategy. In order for a spam vote to be given much weight by the user’s client, the upvotes for non-spam would have to significantly outnumber the upvotes for spam, and so it might actually improve the quality of resources overall.
Obviously sharing votes like this has privacy implications, but it seems to be a clever approach to the problem nevertheless.
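A rough sketch of how a client might compute those agreement-based weights (all of the identities and votes below are made up; this is only to illustrate why freshly generated spam keys carry no weight):

```python
# Rough sketch of the agreement-weighted voting idea above (all data made up):
# a voter's weight is the fraction of past items where their vote matched ours,
# so freshly generated spam identities start with zero credibility.
my_votes = {"fileA": +1, "fileB": -1, "fileC": +1}        # our own past votes

# votes seen from other identities: identity -> {item: vote}
peer_votes = {
    "longtime_peer": {"fileA": +1, "fileB": -1, "fileD": +1},
    "spam_key_0001": {"fileD": +1},                        # no shared history
    "spam_key_0002": {"fileD": +1},
}

def credibility(their_votes):
    shared = [item for item in their_votes if item in my_votes]
    if not shared:
        return 0.0                                         # unknown identity
    agree = sum(1 for item in shared if their_votes[item] == my_votes[item])
    return agree / len(shared)

def weighted_score(item):
    return sum(credibility(v) * v.get(item, 0) for v in peer_votes.values())

print(weighted_score("fileD"))   # only longtime_peer counts; spam keys add 0
```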
It looks like that idea may have occurred to them already:
http://www.tribler.org/trac/raw-attachment/wiki/SimilarityFunction/…
(Until they catch up with demand, the paper’s name is “A scalable P2P recommender system based on distributed collaborative filtering” and Google Cache has a copy at http://tinyurl.com/85fzdxh )
This paper (titled “P2P collaborative filtering with privacy”) also looks relevant:
http://journals.tubitak.gov.tr/elektrik/issues/elk-10-18-1/elk-18-1…
Watch this video…
http://www.youtube.com/watch?v=DX46Qv_b7F4&feature=related
I’m sure the cat and mouse game is similar to what is going on with Tor
That same search will lead whoever is looking to each person, so nothing has changed here. Instead of some big company getting shut down, they will go after the little guy who thinks he can’t be sued or jailed.
Yeah baby, I love modern technology. Freedom forever.
Given the recent goose-stepping actions by the US DOJ, Congress, and the Senators pushing the US Internet-censoring legislation bills SOPA and PIPA, it is incumbent on Internet users to be fully aware of hundreds of US laws, of jurisdiction, and of how far those proposed laws would reach.
Recently, popular MSNBC commentator Rachel Maddow allegedly pointed out that online anti-piracy political sponsors Dennis Ross and Roy Blunt allegedly STOLE Internet photos and images THEMSELVES! Of course no one expects federal agents to swoop into the homes and offices of Ross or Blunt, seizing their servers and computers. However, such a pass could hardly be expected to occur with YOU. Remember, not realizing the photo, image or file was copyrighted is no excuse nor defense in the eyes of the US Dept. of Justice.
Israel’s best US Senatorial friend (allegedly referred to as the original Dr. Evil), Joe Lieberman, is now pushing for a Son of SOPA/PIPA and a US Internet kill switch (being sold for use under US martial law), so these issues, along with Lieberman’s US National Security-cloaked ‘ACTA Treaty’, are not simply going away.
I am certainly not advocating sharing copyrighted materials in a manner which could be construed as breaking some (likely unknown) or known US law.
I am, however, advocating that forcing US and global Internet users to learn and conform to hundreds of arcane and at times entirely hidden (undisclosed under National Security) USA laws is absurd and nearly impossible to comply with.
Why push for Tribler, or the new P4P protocol, when Anomos is already widely available on various platforms (Windows, Linux, OS X), light on resources, FREE and open source, and perhaps goes beyond what Tribler is trying to accomplish even if it succeeds?
Anomos is a small, free and open source, anonymous, encrypted stand-alone BitTorrent client which is a step towards providing a needed barrier: masking file content passing through USA DNS servers, masking the user’s ID, and helping relieve the burden of being forced into the role of Internet policeman that US DOJ agents place on USA DNS servers and ISP providers.
Reference:
Anomos http://anomos.info/wp/
Anomos is:
Anonymous, Encrypted BitTorrent
Free and Open Source Software
A Standalone Client – No Background Services
Easy To Use
Available for Windows, GNU/Linux and OSX
“Anomos is a pseudonymous, encrypted multi-peer-to-peer file distribution protocol. It is based on the peer/tracker concept of BitTorrent in combination with an onion routing anonymization layer, with the added benefit of end-to-end encryption. By combining these technologies, we have created a platform where by no party outside of the trusted tracker will have any information about who a peer is or what they are downloading.”
“Anomos is designed to be easy to use – you won’t even be aware of the security that it provides. Anybody who is already familiar with BitTorrent won’t have to do anything differently, other than use ‘atorrent’ files rather than ‘torrent’ files.”
Since the files are encrypted from end to end, that also provides a layer of security, IMO. A file server lacking the means to decrypt them internally, or personally via its owners, would be an additional layer of security as well. The Anomos client decrypts the .atorrent files on the fly with minimal speed difference or extra resource needs.
Why re-invent the old .torrent wheel with Tribler? Throw more support behind .atorrent files and move into the future of safer file sharing inside a global police state.
ASmith,
Thanks for the link to Anomos, it’s good to learn about other projects. But wow, that was more of a bad sales pitch than an informative post. Am I right in classifying it as a bittorrent client with tor-like onion routing?
That certainly would have merit to some but it doesn’t seem to overlap with Tribler because it’s still dependent upon the same centralized servers from what I gather. So perhaps Anomos could also benefit from the work of Tribler?
If you’re on the Anomos team, you should have them fix the broken links for the faq.
No Alfman, I am not on the Anomos team. I’m pointing out the obvious choice of what I feel is already a superior and safer torrent handler/client protocol.
Why Bittorrent Over Tor Isn’t A Good Idea
https://blog.torproject.org/blog/bittorrent-over-tor-isnt-good-idea
The near-360-degree total distribution scheme set up by Anomos appears to be not only unique but, IMO, superior as well. Not sure if you are up on spirographs, Alfman (nifty swirly art), but the Anomos website has some nice artsy images you can look at regarding how wide the non-centralized distribution scheme is. Torrents require a tracker; Tribler and Anomos are no exception. HOWEVER, the Anomos tracker doesn’t require or need anyone’s IPs, and a rule for trackers would reinforce that with no logs either. Encrypted files, no IPs.
As to my taking the time and effort to actually address WHY someone should consider safer forms of torrent file sharing, which you, Alfman, spew is ‘a bad sales pitch’ – well, any worthwhile reply to that flaming comment would simply be deleted, so I won’t waste my time and effort on that.
ASmith,
“Why Bittorrent Over Tor Isn’t A Good Idea”
That’s a good warning, thanks for looking it up.
However I didn’t want to imply Anomos was literally bittorrent over tor, just asking whether the overall architecture works that way, because from a cursory overview that seems to be the case.
“Torrents require a tracker; Tribler and Anomos are no exception. HOWEVER, the Anomos tracker doesn’t require or need anyone’s IPs, and a rule for trackers would reinforce that with no logs either. Encrypted files, no IPs.”
I think you missed what they’re trying to achieve with Tribler’s decentralization, they are creating a P2P network *without* a centralized tracker because they view that as a vulnerability to the network. If nothing else, read the first paragraph of the Tribler article which explains that. Anomos and Tribler may be able to benefit from each other’s work in the future.
“As to my taking the time and effort to actually address WHY someone should consider safer forms of torrent file sharing, which you, Alfman, spew is ‘a bad sales pitch’ – well, any worthwhile reply to that flaming comment would simply be deleted, so I won’t waste my time and effort on that.”
I didn’t mean to offend. I’m just interested in the CS aspects. For the record though, I don’t have the ability to delete your posts, and I wouldn’t do so if I did.
Fair enough Alfman. Yes at first glance one could easily think the Anomos atorrent client handling is quite similar to that of Tor.
Like what Tribler is trying to accomplish, Anomos operates in a decentralized tracker/peer fashion which addresses the current police-state locate, smash and grab on centralized torrent tracking servers.
Like Vuze’s Swarmstream, Anomos also haystacks torrent streams, which makes it a headache to discern content and origins.
Unlike the Tribler, uTorrent, Deluge, Transmission and Vuze torrent handlers, Anomos also encrypts the actual files, not just the connection, and they are decrypted on the receiving end on the fly. ANYONE obtaining a small encrypted snippet of a much larger file would not be able to claim anything in any logical or legal fashion; it would in fact be identical to a randomized data stream. Agencies demanding proxy server logs wouldn’t find any torrent peers’ IPs listed on them if users had used Anomos atorrent clients.
IF proxy (push-pull mode) servers cached small parts of the Anomos-encrypted atorrent files, again those encrypted snippets would not yield anything that could be found to be more than a randomized data stream. IF the proxy server had no means to locally decrypt that atorrent data stream itself, you have, in my opinion, legal and entirely plausible deniability, which ultimately and effectively shifts the burden of proof off of proxy server owners.
What I would like to see is an Anomos plug-in available for the Transmission, Vuze and Deluge torrent clients. If Anomos is lacking anything, it is widespread use and hence BitTorrent popularity. If anything, the recent loss of multiple centralized torrent trackers around the world could end up bringing that to pass, Alfman, just as it is now driving the Tribler project.
It’s naive to assume usage patterns or encryption can’t and won’t be targeted. The UK, at least, already has laws against non-disclosure of passwords to encrypted data (“identical to a randomized data stream”) – all it takes is for that penalty to be larger than the one for P2P copyright infringement (it’s easy to push such laws, considering the encryption can hypothetically hide, say, terrorist plans or child porn). Plus there will always be enough evidence on users’ machines (heck, the names of files could become enough).
BTW there really doesn’t seem to be a difference between the two, as described…
And I’m fairly certain that a randomized data stream is not legally identical to encrypted copyrighted content, not even close (also, I’m guessing you’re not a lawyer either).