Speaking with Wired editor David Rowan at an event launching the magazine’s March issue, Tim Berners-Lee said that although part of this is about keeping an eye on for-profit internet monopolies such as search engines and social networks, the greatest danger is the emergence of a balkanised web.
“I want a web that’s open, works internationally, works as well as possible and is not nation-based,” Berners-Lee told the audience, which included Martha Lane Fox, Jake Davis (AKA Topiary) and Lily Cole. He suggested one example to the contrary: “What I don’t want is a web where the Brazilian government has every social network’s data stored on servers on Brazilian soil. That would make it so difficult to set one up.”
A government never gives up a power it already has. The control it currently has over the web will not be relinquished.
If so, why did he support DRM in HTML, which goes in the completely opposite direction (i.e. giving control to gatekeepers)?
It only grants them control over the content they themselves serve, not over other sites' content. That does not make them gatekeepers. ISPs, for example, are gatekeepers, since they can limit your access to others' content too.
The scope of gatekeeping can vary. It still goes in the opposite direction, since it takes away control from the user; that was the main point.
You already have the fruits of that DRM-everything approach in your computer, like secure boot, android.drm, etc. Now closed DRM black boxes are becoming an official W3C 'open standard'. How is that even 'only about their content'?
Gatekeepers for pushing DRM into everything. Since by definition DRM can only work if your whole computer is 'secure' (against yourself), it's by no means 'only about their content' but 'about you and your computer'.
Because DRM is a necessary evil to get industry support. He probably fully knows it goes against a decentralized web, and that by promoting a decentralized web, it breaks DRM even more. DRM is a dud, and even content producers, especially independent ones, will want and find ways to get around it. They mostly already have, by finding an audience outside of traditional distribution channels, and so this will continue even with DRM in HTML5. Possibly even more so.
We have to think (and act) more like corporations if we want to defend free standards. Get their finance-powered support but screw them over at the same time. It’s for their own good.
I doubt it. Publishers need the Web more than the Web needs DRM obsessed publishers. DRM is never really needed, so there was no reason to bend to crazy demands.
No, you need complete support for HTML5. The goal is HTML5 adoption and without DRM “support”, corporations will sabotage adoption of it.
Publishers do need the Web more than vice versa, but they don’t know that. They’ll sabotage standards compliance because they can. They don’t understand it hurts them, but that’s the real world for you.
This is not bending to crazy demands. This is handing them a dud in exchange for a gem. It’s a game of chess, and you’re basically complaining about giving up a pawn in exchange for a queen.
As I said, it doesn’t sound convincing. Let them try – they’ll be left out, not the Web. They’ll crawl back sober and without crazy DRM demands. Unfortunately W3C bent to those lunatics and put DRM in the standard.
It's not bait that they are getting here. It's basically prolonging the sickness (DRM). While the trend is for it to die out, the standard will keep it around longer.
They did try. One such try was Netflix with Silverlight, and it failed like Flash DRM, like the attempts to kill MP3, like Sony's MD DRM. By its nature DRM excludes anything open, so the whole chain needs to be closed: from 'secure' boot, to open-source stacks and alternative formats/implementations, to non-mainstream groups not lucrative enough to target.
Believing DRM is about the content is, sorry, stupid. It's about control: controlling what the user can and cannot do with the products, including hardware and software, he or she purchased. No alternative OS, no other media player, no second-hand market, no unlicensed equipment, no way to innovate around or on top without agreements, licenses and a cut of your business's profits. No way to pause that song when you go to the toilet, no way to prevent your media player from tracking what you are watching, eating and doing while watching.
Don't be silly, this is about controlling you, not about those one or two pirates copying that stupid Hollywood movie. They will always be able to do that, and it's enough if one does. But the mass of data and control we all lose in that power-shifting process cannot be so easily regained.
Exactly, DRM is about control; I never argued with that. So it all goes back to my original question: since DRM takes away control from users, why did he support it, while now he talks about giving control back to users? It's hypocritical.
It is. See his statement in that interview that "What I don't want is a web where the Brazilian government has every social network's data stored on servers on Brazilian soil".
Background is at http://mobile.reuters.com/article/idUSBRE9B30UD20131204?irpc=932
Let's look at it from this angle:
Brazil forces Google, Facebook, etc. to keep Brazilians' data on servers located in the country, so Brazil's data laws apply all of the time.
That means Google, Facebook, the NSA, etc. can no longer bypass basic data protection laws. Users have control over their data again.
TimBL is against that.
Thing is, what Brazil does seems to be the only solution now that it's clear Obama's NSA can continue like it did. If that solution isn't acceptable, then the NSA needs to be stopped and basic laws to protect users need to be followed again. Laws should apply, also to Obama's NSA.
Doctorow likened it to Lysenkoism a while back.
This is in reference to how said -ism became official Soviet doctrine under Stalin, even though the biologists working within the Soviet Union knew it was bunk.
Meaning that the lower echelons of Big Media know functional DRM is a pipe dream, but the bigwigs up top are so enamored of the idea that stating otherwise is a good way to end your career on the spot.
In the end Lysenko’s ideas were rejected completely. But it took a really long time.
Disney posted the full animated version of “Let It Go” in HD to YouTube on December 6th.
81 million views.
Frozen is a Billboard chart topper in CD and download sales. The movie is fast approaching a billion dollar theatrical gross — and scarfing up awards as fast as they can be minted —
and that is only the beginning.
It is possible to find an audience outside of traditional channels in any media. But when the system is on full throttle and the tracks are clear, all bets are off.
Because it’s better to have *one* standard for DRM than a multitude of proprietary crappy plugin-requiring implementations.
Another advantage of one implementation: you’d need to crack it only once
They’re essentially defining a DRM plugin interface standard, so we’re not better off than before, but worse: a plugin interface that isn’t useful for much else than DRM.
And the interest groups already found out that they could use that interface for other media than video, too. Like text.
So at some point, nytimes.com might ship its articles in a format that requires a DRM plugin (* not available on all platforms) to read. We were really lucky that the W3C standardized that, otherwise no-one could read it!
No, it's better to have a multitude of proprietary, crappy, plugin-requiring implementations, so it will be completely obvious that it is all junk and normal people will avoid it like the plague. Making it a standard only gives them an excuse to prolong its usage.
No, that will just prolong the situation of having to deal with shit. They would have just stuck with the Flash player for the next two decades.
Unlike you I'm a web dev, and all I want to do is be able to interact with a DOM object in a sane way. I couldn't give a f–k whether it has DRM.
So as far as I am concerned unless you actually work in my industry your opinion is moot and I honestly don’t care.
This affects Web users no less than Web developers. You don't care about DRM? Then don't use it, Flash or no Flash. As simple as that.
Well, it's either DRM as a standard, or I have to figure out someone's crap implementation (where they will likely want me to shell out huge amounts of money for meager support) and build something around that. The result is likely to be sub-optimal, and that will affect our users.
Like it or not compromises have to be made. This is the reality of the situation and it won’t change. If you don’t want to accept that, that is up to you … but it won’t change the reality itself.
This is the best possible argument against a DRM standard: if using DRM is a pain for the developer, increases costs for the publisher and causes problems for end users, there's less motivation for using it. The death of DRM (one way or another) is more important than the ability to consume multimedia content via the internet.
Quite simply what I am saying is:
* It is going to exist.
* It is better that there is a proper specification that all browsers can implement properly.
Then there aren't hundreds of different players (which is the situation we have at the moment), each requiring me to embed obfuscated and minified JS code on a page.
Right now it is stifled by the fact that it is an absolute pain to set up and to use on both sides. As a consumer I am satisfied with this situation, because in practice it means that I can simply avoid DRM-protected content. But if…
…it becomes easy to use on both sides, there'll be nothing stopping content providers from using it and spreading it everywhere, among other diseases. This will spell the end of digital content consumption for me, which is nowhere near a pleasant prospect. For me.
Well, then there will be the situation where you have to use Flash, or they will take it off the web entirely and lock it in as much as possible, which is what they will eventually do, and you will have to use device X if you want to see something, because it will never come out on anything else.
It will be worse than cable TV.
But hey we never had DRM in our HTML5 video … yay!
You put it as though DRM acceptance has to grow. I just can't see that coming: DRM is nowhere near new, and digital content distribution doesn't exactly tend to go the HTML+DRM way either, so both the current standing (some DRM scattered throughout the network) and a potential reduction in DRM use seem more likely to me than actual growth of DRM acceptance, whether as a standard DRM technology or as those ugly Flash players.
Also note that having no standard DRM, and only Flash-based DRM solutions, allows the use of "youtube-dl" and friends (until recently I didn't even know YouTube had so much advertising), and (arguably more importantly) makes it easier to press content providers on their hard dependency on proprietary technologies.
FWIW, standardized DRM technology is already a futile effort from content providers' POV – being able to implement compliant playback software effectively equals being able to circumvent the protection. How much time do you expect to pass between hypothetical standard DRM support landing in a Firefox release and the release of software that circumvents it? I'd bet on negative numbers.
So apart from a happy web developer, I really see no winners in the case of a DRM standard.
Oh, so I have to wait years to see a movie or TV series because I have to wait for it to be released here on DVD or cable TV. Whereas if there were DRM, I could choose to see it straight away on any device.
While the ideological warriors of entertainment freedom fight it out, I can't easily watch stuff I really want to watch online because of your bullshit, unless I have some shitty plugin.
f–k you and your non-existent cause.
That is exactly as it always is: people have different, incompatible values. FWIW I find watching something when it is released (be it an offset measured in minutes or months) a complete non-issue, most likely the same way as you regard my disapproval of DRM. Different people have different values.
Sorry I was overly harsh.
That would be great, however, at least as it currently stands, this is not what we will be getting.
If there is no change we will get a situation similar to the “works on IE only” landscape a couple of years back.
I know that this sounds ridiculous, because nobody would ever want this to happen again, but due to lack of any form of interoperability in the currently discussed options, this is more fact than speculation.
One of the key points of the EME specification is that it neither specifies a single DRM solution nor does it say anything about reuse of any such solution between different browsers.
A website owner who decides to license, for example, Google’s Widevine DRM system, will reach all users of Chrome and Android.
However, unless Google licenses Widevine to Microsoft or Apple, their browsers or platforms will not be able to access the protected content.
The website owner is therefore faced with two choices:
1) Also license Microsoft's PlayReady and Apple's FairPlay.
2) Let users of either company's browser know that they have to switch to Chrome to see the content.
As lucas_maximus insightfully writes, he and other website developers really don't want to deal with multiple players, and that part is true.
The website will be able to use the same JavaScript code to facilitate key exchange between the browser's DRM module and the server.
Which understandably gets a lot of buy-in from said developers.
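To make that concrete, here is a rough sketch of what such a key-exchange flow looks like from the page's point of view under the EME drafts. The "com.widevine.alpha" key system string and the license-server URL are just placeholder assumptions, not anything the spec mandates:

```js
// Rough EME sketch: the page JS stays the same no matter which CDM the
// browser ships. Key system name and license URL below are assumptions.
const video = document.querySelector('video');

async function setupDrm() {
  const access = await navigator.requestMediaKeySystemAccess('com.widevine.alpha', [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  video.addEventListener('encrypted', (event) => {
    const session = mediaKeys.createSession();
    session.addEventListener('message', async (msg) => {
      // Relay the CDM's opaque license request to the provider's server...
      const res = await fetch('https://license.example.invalid/getlicense', {
        method: 'POST',
        body: msg.message
      });
      // ...and hand the returned license back to the CDM.
      await session.update(await res.arrayBuffer());
    });
    session.generateRequest(event.initDataType, event.initData);
  });
}

setupDrm();
```

Note that everything interesting happens inside the CDM black box; the JavaScript only shuttles opaque blobs back and forth, which is exactly why the interoperability problem moves to whoever has to license the CDMs.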
What they usually don't know, due to the lack of open communication, is that the need to deal with multiple incompatible options has not gone away; it has merely been shifted.
The problem now belongs either to the service provider, who has the choice of licensing multiple solutions or forgoing certain customer segments, or to the service consumer, who has the choice of having and using multiple browsers and devices or not being able to use certain providers.
The irony of the whole situation is that “the ideal solution” for all involved parties existed prior to the arrival of the iOS devices: the universally hated Flash.
Providers only had to license one DRM, users had to install only one plugin that would work across all their browsers, web developers only (more or less) had to embed a single player.
This was exactly my point. A standard makes DRM easier and less costly to deploy. It shouldn't be any easier. The harder it is, the sooner it will die out.
Anyway, this whole argument about making it easier is kind of absurd. Arguing that DRM should be easier to use is like saying that an abusive power should have an easier time creating a police state.
How is it an abuse of power exactly?
Seriously your arguments are ridiculous. A police state is nothing like not being able to see a video online that someone has asked you to pay for.
Get some perspective FFS.
The principle behind DRM is exactly the same as the principle behind dragnet mass surveillance, which is the key component of the police state. That's the basics of DRM. A police state uses the logic of preventing crime by rendering each and every person a potential criminal (i.e. there is no such thing as presumption of innocence in a police state). DRM uses the same logic – rendering all users potential criminals by default. If you don't see the parallel, I'm not sure how else to explain it.
And naturally, since the approach is nowhere near new, DRM employs the same methods as well – i.e. surveillance, obscurity and so on. Therefore, from a security and privacy perspective, DRM can never be trusted. It's always a security risk – i.e. DRM is essentially always akin to malware.
However I see nobody getting beaten and imprisoned without trial for watching a video. I do see people being taken to court for copyright infringement … they don’t have courts in Police states.
You are being ridiculous.
If it uses an OAuth 2 factor authentication policy it totally 100% can be.
That is a very shortsighted attitude. Don't you care about your clients? Your customers? Consumers? Oh, if it makes your life better, screw everyone else?
That’s 90% what’s wrong with the world, right there.
Let's get things into perspective: it is content on a website. You know, there are some pretty horrendous things going on that actually affect people, and then there is DRM on a YouTube video.
There are two choices: either we have a standard for DRM, or there will be some bullshit plugin system we will have to deal with. If content providers don't have either of those choices, they won't make their content available on the web.
I'd rather have the option of dealing with a standard, documented way that is built into the browser than the alternative. The alternative is having to deal with a multitude of crappy, very similar but slightly different implementations of essentially the same thing, in something like Flash.
I do care about clients, customers, consumers and other developers. That is why I support a sensible compromise, rather than some ridiculous moral crusade over something that I don't think is that important.
I am more worried about outright censorship, such as China's Great Firewall and the "think of the children" crap in my home country, than about the implementation details of audio and video in the web browser.
I actually meant not caring about other people, just yourself. Thanks for the lecture about perspective though, it was well written.
I also want a good solid compromise: either no DRM, or DRM that works properly and does not keep people from accessing the product that they want. I'm fine with either one. Developers have the right to protect their product, if they so desire, and consumers should have the right to enjoy the products that they buy.
My point was about the attitude of your original post, in which you said:
“Unlike you I'm a web dev, and all I want to do is be able to interact with a DOM object in a sane way. I couldn't give a f–k whether it has DRM.
So as far as I am concerned unless you actually work in my industry your opinion is moot and I honestly don’t care. ”
It really has nothing to do with DRM, and everything to do with people.
These are mutually exclusive – DRM is a way to protect the product from the consumer's will to consume it freely. I am a software developer, but I really fail to understand why the hell I should be entitled to some special protection. It is just as ridiculous as if an electrician asked you to move out of your town while he fixes your power cords, because he is entitled to protect his skills from being acquired by someone watching him work.
Despite the outrageous amount of "piracy" going on, the multimedia content industry continues to grow, both in the absolute number of entities and in revenues. It definitely doesn't look like something in need of protection.
People regularly go into a bike shop and ask why a seemingly simple job that only takes minutes gets charged at £5 or £10. Because you are paying for all the expertise required to make that job take only 5 or 10 minutes. I spend a lot of time mucking about with bicycles in my spare time, and I regularly tell people "I can fix it, but it would take me maybe an hour, whereas he can have it done for you in 10 minutes".
The same with selling software: you are paying for the expertise and knowledge that allowed the developer to get to that point. As you and I both know, it is no easy journey, so I think a software developer should get to dictate the terms of how it is licensed and how that is applied.
It's not like there aren't other options.
I agree about piracy not being a massive problem for large studios.
Precisely. The problem is that there was an opinion from shmerl saying it was junk. That is wholly driven by the fact that he opposes DRM in all forms.
If the opposition of the DRM in the spec was more similar to this:
http://www.osnews.com/thread?582551
I would have a different attitude towards this opinion, because it is based on a critical look at the spec rather than being ideological.
DRM is simply unethical in all forms, since it's overreaching preemptive policing. Making something unethical a standard is unacceptable.
However, above I wasn't talking about that specifically. I was talking about the pragmatic negative outcome of making it a standard – i.e. prolonging its usage.
Why is that a problem? It isn’t for me.
Seriously if the DRM is shit, I will just find a way to rip it into the format I want to deal with. If it isn’t I will happily comply.
No, it is not “simply unethical in all forms.” Ethics are a matter of opinion, there is no universal standard of ethics.
It is unethical in all forms. I explained above why.
If you claim that ethics are all relative, and that some deem mass surveillance and the police state ethical (and thus treat DRM the same way), there is no point in arguing further, since such a debate won't produce anything useful.
[q]It is unethical in all forms. I explained above why.[/q]
No, you didn't; you made a lot of assertions. Assertions aren't facts.
Until you provide one single fact that supports your opinion, you are being ideological.
I’ve noticed you haven’t replied to these challenges because you have nothing except for your dogma.
[q]There are two choices: either we have a standard for DRM or there will be some bullshit plugin system we will have to deal with.[/q]
The problem is that they are currently going for a third option.
The Encrypted Media Extensions (EME) specification neither specifies a standard DRM system nor any way to register or query for installed DRM systems.
A lot of opposition would falter if there were indeed a move toward DRM standardisation.
In fact, quite a few of the discussion participants currently opposed to EME have explicitly asked what requirements a standard DRM system would have to fulfil, only to be told that either there are no such requirements or that they are secret.
Unfortunately this is not what you will get. Nobody is working even in the general direction of that goal.
Unfortunately this is what you will get, just less platform coverage than Flash.
A very sensible point of view. Wouldn't it be great if at least one of the EME proponents had the same one?
I am always puzzled how the group managed to make even informed people believe that there was going to be a standardized or unified solution.
Maybe so many wish this to be true that they are believing it in spite of all the evidence countering it?
Thanks for explaining why there is the opposition against it.
I am going to have to read more about it, since most of the blogs I have read were complaining about the DRM angle (which I don’t care about).
For once I'm gonna side with lucas_maximus. You see, that same argument works for BOTH sides: if you were to deny the DRM mechanism inside the browser, you'd effectively deny a lot of people access to the content unless they installed a proprietary plugin like Flash or Silverlight. And since all such plugins do much, MUCH more than a simple DRM blob that handles nothing more than decoding audio/video, they also provide a much larger target for malware authors. So, basically, by denying the DRM mechanism you're promoting malware. You know the content providers will not switch to non-DRM delivery methods; that's just a fantasy and it won't happen.
My problem wasn't his stance on DRM, but his stance in regard to other people's opinions.
There is no reason to trust a DRM blob used with EME any more than a Flash plugin. If anything, it's more obvious that Flash shouldn't be trusted. A less obvious blob is easier to ignore, but that doesn't make it any more secure.
It's like the arguments of those who say that unobtrusive DRM is better than obtrusive DRM, since it doesn't stand in the way. In reality, it's the opposite. A hidden surveillance camera is far more nefarious precisely because it's hidden and people don't pay attention to it. An obvious one at least makes you aware of it.
But Flash is unobtrusive in Chrome and soon will be in Firefox.
You can do DRM by just using a simple OAuth-like system on the web, and it won't cause any negative side effects.
DRM isn't the problem; shitty implementations thereof are. Almost nobody (except probably people like you) has a problem with how Steam does DRM, because it is a good implementation.
How DRM is a surveillance system is beyond me.
I explained the parallel behind the principles of DRM and police state approach in another comment: http://www.osnews.com/thread?582578
Sorry, you can draw parallels between anything if you want to. Either there is an axiom or there isn't.
Police states are like the Soviet Union. At no point does looking at entertainment that has DRM impose draconian restrictions on me.
You aren’t convincing me.
If you don’t see a parallel it’s your problem.
I see your parallel as ridiculous. If you can't see that it is ridiculous, that is your delusion, not mine, to paraphrase you.
If you have an argument that is based on facts and well-tested ideas, rather than your own personal hatred of DRM and silly comparisons between horrifying regimes and what at the end of the day is a specification … I am all ears.
No, it just means he doesn’t agree. Stop being an elitist dick.
Are you so sure about that?
Is there still DRM on mp3s these days? Is it really hard to buy music online without DRM these days?
In any case, I can tell you first hand that of all the technical books I bought in the last couple of years, none of them have DRM on them. I bought ebooks from PacktPub, PragProg, Manning and O’Reilly. I buy them straight from the publisher and can download them as a normal pdf or an epub. I am also subscribed to Linux Journal and that magazine is also available as a non-drm pdf or epub.
Some publishers are proud to brand their online offerings as "DRM-free". Some publishers are starting to realise that the only people who actually suffer from DRM are the ones who are willing to spend money on them.
It's really very easy to find those technical ebooks online, and a lot of people just pirate them. I buy them because I think both the author and the publisher deserve to be rewarded. Doubly so because they offer their books DRM-free. On top of that, if you pay attention, a lot of those publishers run a lot of special deals… so most of my ebooks I actually bought at 50% of the ebook price on their site.
I definitely would not have bought those ebooks if they contained DRM… it would just make using them a lot more inconvenient for me.
Interesting argument and it isn’t without merit. Good luck convincing large studios of it.
That’s how it is today, but sadly, it really should be the other way around. It’s the ones that are offering the content that should convince the ones that want to consume the content.
Most people sadly don’t realize they are buying a product with locks and sometimes even personal information trackers on it.
You are really out of touch with the common people. Normal people are happy to install whatever they’re told because they’re not expected to understand these issues.
It depends. A lot of ordinary people complain about DRM when they notice it (remember the SimCity fiasco?). So the more DRM impacts usability, the less anyone sensible wants to use it. This applies to both users and developers. This is exactly why it shouldn't be made easier to use in any way.
DRM doesn’t affect usability in Steam since it is basically praised here by most.
The implementation is the problem, not DRM itself.
No, DRM is the primary problem. An abysmal implementation can help get rid of it. An implementation which masks the main problem is therefore worse. I think I made my point clear enough: if you don't see DRM as a problem in itself, then you won't evaluate implementations the same way I do.
You made your dogma clear, nothing else.
If you find the police state analogy too abstract, here is a closer one: saying that anything related to DRM should be a standard, to make it easier to implement and use, is like saying that there should be a standard for writing malware, so malware developers won't have a hard time and users won't have to deal with malware that stands in the way. Unobtrusive and easy-to-make malware is the goal… It sounds as absurd to me as the same thing said about DRM.
Police states torture people for thought crime. You wouldn’t even be able to post your ideas without being imprisoned and possibly tortured.
Recently China has said that NK was too horrific … the country that brought us the events of Tiananmen Square said NK was too horrific for them.
NK's current year is 113 because that is the number of years since the original Great Leader (I wish I was making this stuff up).
Christopher Hitchens went there:
http://www.youtube.com/watch?v=P8-Vr_r36Fg
If you think that is like DRM. f–k you.
I find it sickening that you liken the anti-piracy measures to actual human rights abuses of a real police state. You should be ashamed of yourself.
If you are interested in demagogic arguments, don't waste my time, please. I made my point clear, and I'm sure you understand it perfectly. Your "misunderstanding" is simply trolling in this case.
At this point, Juche is probably virtually like a religion…
I disagree. Doesn’t matter much because those making the decisions are not listening to common sense, but all the same DRM is the problem. The problem is it doesn’t work – it doesn’t stop what it is intended to stop…
What good is DRM on Steam when virtually every game that goes up for sale ends up in a torrent about two days later? Steam's DRM was cracked just a few months after it came into existence, and has stayed broken pretty much continually ever since. What's the point?
I've seen people hold up Netflix as an example of DRM "that works"… It doesn't work – at all. Their network security works – you can't access Netflix without a valid account. But DRM? There are hundreds of videos sitting on thepiratebay right now that were ripped straight off Netflix… Last year the entire season of House of Cards was up as a torrent literally hours after it went online on Netflix. Netflix has good network security. Their DRM, however, is shit (like everyone else's). It doesn't work and never has.
If it is audio/video content and you can see it and hear it, then you have it – it's that simple. No DRM will ever change that. Pirating is always going to happen; content producers would be better served trying to make paying customers happy rather than continually making them regret that they didn't just download the damn movie or whatever from BitTorrent…
All that said, TBL is right: let them have DRM in HTML5. It won't work either. Sure, it might technically work, but it doesn't actually stop pirating, and maybe they will realize it is all a big waste of time and get over it once and for all.
It won’t help them to realize anything. They had enough opportunities to realize it in the past. Adding this garbage to HTML5 won’t make things any better – only worse.
It is not only governments; private companies, social networks, search engines… are also doing centralised collection of user data.
Worse than Brazil hosting Brazilian records is a US company hosting, somewhere (not necessarily on US soil), data about everyone.
Sometimes they exploit that information, sometimes they give a copy to the NSA, and sometimes a random employee uses that information for his or her own benefit.
Sometimes they give foreign governments information about individuals whose actions do not even violate US laws (particularly those concerning freedom of expression), because they want to keep doing business in the country.
The Brazilian, German and other governments' tactics are not great, but they can be a positive factor in undermining the centralised collection of data by some companies.
Tim's concerns seem to be about the political trends; however, I'm concerned about the technical trends as well.
Everywhere we look, the full connectivity of the internet is compromised. Network Address Translation on home routers and at some ISPs is forever causing problems that require numerous hacks like UPnP, STUN/ICE, client ALGs, manual port forwarding, server-based proxies and VPNs, all of which are confusing, difficult to develop/deploy, and anything but reliable.
It's just going to get worse as ISPs are forced to deal with IPv4 depletion. As for IPv6, as much as we desperately need it to work – good luck reaching anyone on there while everyone else is still on the IPv4 network.
Many businesses have firewalls that intentionally block most protocols outside of HTTP. Often ICMP is blocked, which breaks connection-related feedback and the autodetection of maximum segment size, creating further issues. Even ISPs are blocking ports to differentiate their packages.
My parents' house has a business account for cable internet, which up until recently blocked virtually all inbound connections. I ended up using an outbound VPN on a non-standard port to provide remote access to the machines. This worked, but how disappointing that I needed to fight port blocking using an outside server. They changed their policy quite recently: now the inbound ports are unblocked, but they added a 150G bandwidth cap, which I guess is progress.
Corporations don't really mind that we can't reach each other, since they like keeping our data for data mining. However, I'd rather see greater adoption of secure peer-to-peer technologies; I guess they have no business incentive to develop that.
Out of curiosity, how many people here would be interested in a P2P social network? There's a chance I might be motivated to work on one.
There are dozens upon dozens of such attempts out there already; you really need more than just "P2P" for it to catch on. Just saying it's a P2P social network doesn't actually say anything useful or meaningful about it, and therefore it's really hard to say whether it's interesting or not.
Come back when you can outline your plan in more details and I’ll tell you whether I hate it or just dislike it.
The words "social network" are the ones I do not like in that sentence.
nagerst,
Haha, yeah, I think those have gotten a bad reputation among many of us, including myself. However, that's the reason to push for a different kind of network, one that's not centralized. Scalability and availability are achieved via P2P networking. Security is achieved through encryption.
I have no interest in Facebook (and friends) because I don't approve of the kind of future internet they are building. The problem (for me, maybe not for you) is that I have to opt out of collaborating with friends and family due to these objections; I don't get any pictures or news of them, and some of them don't even do email any more. This disconnect is a real shame, since the internet is supposed to connect us.
We shouldn’t have to give these up because corporations are greedy and want to control everything. The trouble is that nearly all collaborative tools being used today seem to be tied to corporate agendas, which is unnecessary, harmful, and limiting. Even centralized services like webex, billed by the hour, are taking over. We should have decentralized alternatives where we -the users- are in charge. Anyways that’s my spiel.
I think the biggest obstacles are not technological, but rather getting enough people to use the network to make it viable. That's really why I'm trying to get a feel for whether people would even want something like it. It's been in the back of my head for a while; I'm bringing it up now because it's sort of related to this article.
I did search, but once I excluded everything that's bad quality, in alpha stages, or abandoned, there didn't seem to be much out there. I would be happy to look at some specific examples, especially if you've tried them. Are you using any of them? Why/why not?
I wouldn't mind writing up a paper on the details, but for now I'm merely asking if there's even any interest in it at all. For all I know, maybe the reason P2P social networking is not popular is that nobody actually wants it in the first place.
You know what I always thought would be neat? Build a social network on top of git. I don’t mean like github (which is a social network built around git) – I mean actually implement the network using git.
Each user’s data (all of it) would exist as a local repo of sorts. Everything they post or share would be stored in it and their friends would just pull deltas from them…
Users who want to “friend” each other exchange SSH tokens and clone each others repos.
You just have to write software to manage the whole process and make it simple (easy, right?). Some scheduled or on-demand task goes out and syncs with all of your friends' repos. If one is offline, too busy, etc., it could even clone from a mutual friend who has a more recent delta than you.
Stitch all the data together into some kind of timeline and you have a rudimentary social network. If you want to do something "realtime" (like send a message or a file), you just store it in that friend's clone and do a push back to them immediately.
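Just to illustrate the plumbing, here's a rough sketch of that publish/sync loop – the repo layout, directory names and the Node.js wrapper are purely my own assumptions, not a design:

```js
// Sketch of the "publish locally, pull deltas from friends" loop.
// Assumes each friend's repo was already cloned over SSH into ~/.friendswarm.
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const HOME = process.env.HOME || process.env.USERPROFILE;
const MY_REPO = path.join(HOME, '.friendswarm', 'me');
const FRIENDS_DIR = path.join(HOME, '.friendswarm', 'friends');

// Everything the user shares becomes a commit in their own repo...
function publish(entry) {
  const file = path.join(MY_REPO, `${Date.now()}.json`);
  fs.writeFileSync(file, JSON.stringify({ ts: Date.now(), entry }));
  execFileSync('git', ['-C', MY_REPO, 'add', '-A']);
  execFileSync('git', ['-C', MY_REPO, 'commit', '-m', 'post']);
}

// ...and the scheduled task just fast-forwards every friend clone.
function syncAll() {
  for (const name of fs.readdirSync(FRIENDS_DIR)) {
    const repo = path.join(FRIENDS_DIR, name);
    execFileSync('git', ['-C', repo, 'pull', '--ff-only'], { stdio: 'inherit' });
  }
}

publish('hello, friend swarm');
syncAll();
```

The timeline is then just the union of commits across all of those clones, sorted by timestamp.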
I know… crazy. Probably. You would still have all the same problems with ports, and it would not scale well either if you had say 500+ friends, although I can think of a few ways that might be mitigated.
The upside is that if you could build it and make it work, you would automatically get tons of street cred from programmers.
Imagine a tiny little bit of questionable content like child porn sneaked in there, and you’ll see the problem with the system in the current legal climate.
galvanash,
That's an interesting proposal. Although, for the record, this would be targeted at people who haven't got a clue what git is and don't want a git account. The network should support live collaboration with text/photos/video etc., and I honestly have a hard time envisioning live traffic going through git, which feels abusive. If we went that route, it seems likely that we'd end up needing to fork git itself.
I can see how it might be useful to have a version control system to track documents, that’s a great idea. But I would think it’d be more useful as an addon than as an underlying protocol.
Probably true, and I really appreciate ideas along these lines. I’m quite open about the technology. I’ve been considering being compatible with the ConnectionPeer API built into HTML5 browsers for users to login and join the P2P network using nothing but a web browser to supplement a locally installable version.
Well, the notion is kind of crazy, and I'm sure there are fundamental issues that would need to be identified and resolved, but I did want to clarify a few things anyway…
My thought was to abstract git into the plumbing, i.e. you wouldn't need to know what git was or that it was even being used. Also, git doesn't have "accounts" – it's just software on your machine; maybe you're thinking of GitHub? I certainly don't mean using that.
Maybe. I just envision that a person's "social identity" is the sum of all their information as it is built up. It has history, i.e. everything they share – files, posts, whatever – just goes into this same bucket (temporally indexed) as change sets. It all sort of matches the semantics of a revision control system.
Since git is a distributed revision control system, its semantics make sense if you want to make all those change sets available to lots of different people. And it already has well-established methods for securing traffic from snooping, as well as for managing permissions for read/write access by outside parties.
If you had to do that, I would say don't do it… Like I said, it is kinda crazy, and under close scrutiny roadblocks would probably pop up. But it would have some interesting behavioral properties… All your friends would act as a backup of your identity – you could restore it from any of them (if you know your token). You could graph changes over time to key elements, etc.
You would have to buy into the notion (admittedly weird) that you are not in fact building a communications medium in the sense that things are "talking" to each other over a data channel. If you built something like this using git, what you would be doing is simply instrumenting git to do what it already does (distribute change sets) – the actual software running on top of everything would see ALL data as local at ALL times; you just kick off a process to synchronize everything as needed.
You would probably need to structure data in certain ways to take advantage of git's behavior, but at the end of the day you would be building an application that works on purely local data, and then an OOB process that distributes and consumes deltas when needed – two entirely separate things (at least that is how I see it working).
Yeah, that is a big problem with the git idea – it really has nothing to do with the web, web browsers, etc. I doubt there would be any way to make it work in a browser short of having a local web server on each machine working off of local data.
OTOH having a dedicated fancy client could be attractive and differentiate the social network from others?…
You're still asking the wrong questions here. Ask yourself: what would a P2P-based social network offer Average Jane and Joe that they would appreciate? And would that be enough to offset the things they lose in the transition? Just saying "P2P" or "you are in control of your own data" isn't going to persuade anyone except perhaps the most paranoid geeks, and the hassle of moving people over to a new network that no one has even heard of is going to be a major stumbling block. Inertia is a dastardly thing, you know.
Also, how were you planning to actually do all the plumbing? Would all the files be automatically shared throughout the P2P cloud, or would people have to keep a machine online at all times for their own stuff to be accessible to others? If you didn't automatically share everything, then all their stuff would be inaccessible whenever they lose power/internet and/or turn off their computer, immediately rendering the whole thing rather useless. Not to mention that the outbound bandwidth most people have isn't enough for multiple people to stream videos or other high-quality content from them in the first place. On the other hand, if you did have all the data automatically shared throughout the cloud, how would you then protect the privacy of said data? And what would happen if these people forgot their password, for example?
The point here is that there are a lot of obstacles, and for your idea to ever fly you need to be able to offer something so god damn compelling that it outweighs them all. Can you do that?
WereCatf,
Believe it or not, this *was* what I was asking. You perceive that I wouldn't like the answer; however, it is nevertheless useful to hear from the naysayers, because frankly they'll initially make up the majority. If we could address your reservations, then perhaps it has a shot in hell at working.
Software engineering is the part I love working on! It's actually similar to the redundant data backup service I sell to my clients. You need enough redundancy to mitigate outages and even complete loss, but not so much as to impede scalability. With the data backups I strive for one copy to be near, in order to maximize performance, and another to be geographically farther away, to minimize common failure modes (i.e. lightning/hurricane). In the backup case the servers are on 24/7; however, additional redundancy would be appropriate to compensate for the on/off nature of peers. This would be an excellent topic for an in-depth study.
We can’t perform miracles, but if they can use skype today, then I don’t see a problem.
Let me turn the question around, what would be compelling enough for you to use it?
I'm not trying to persuade you to stop working on your idea; I'm just hoping to bring your head down from the clouds so you have some realistic expectations. By all means, if you can do it, then go for it. Alas, you're not answering any of the questions I pose for you, so I've pretty much said all I can. If you do start working on your P2P social network, I would only wish that you write here on OSNews once it's somewhat usable, so the rest of us can take a look at it, OK?
I don’t quite think Skype is comparable. Skype’s video-quality is quite lacking, it streams webcam-video which consists of mostly static background and a moving head, and the audio-stream is specifically optimized only for human voice — try streaming e.g. music over it and you’ll see what I mean. I’m not saying it’s impossible or anything, but many people don’t have the bandwidth to sufficiently stream Youtube HD-quality video to even one recipient, let alone to multiple.
I'm not big on the whole social networking thing, so I make a poor target to base any ideas or plans on. Really, I can't think of a single compelling reason. If it could match most of the things Google+/Facebook offer, and their simplicity, but with more effective privacy settings, I suppose that would be enough for me. Alas, as I said, I'm very much the opposite of heavily social people.
WereCatf,
I’m sorry I don’t have the answers you want right now since this is just an idea in the back of my mind. However I really wanted to learn whether there is even a market for this at all. I’m a little confounded at why you think my head is in the clouds. I’m confident of my technical ability to build it, but I’m anything but confident that people would use it. I have not convinced myself that there’s a market for it, hence the nature of my questions.
Sure thing.
I kind of hinted at this in another post: neither am I. Part of my motivation, apart from it being a fun project to work on with others, would be to keep the internet viable as a decentralized medium. We've lost so much ground in the last decade.
A lot of good points.
I don't think we need a P2P social network. I don't think that will work. What we do need is a decentralized social network. This needs to be similar to the way email works. Everyone can set up his own email server (or ask for an account on someone else's email server), and every email server can talk to every other email server. So that means we would need an open, standard protocol for "social network" servers to talk to each other.
You would also need to provide a solid reference implementation that would be usable on production servers.
Ideally you would be able to find thousands of people or companies willing to host a "social network" server. On top of that, ideally, you would also be able to hype up the product and warm up millions of people who are eager to create accounts and use this new social network when it launches.
Social networks only work when everyone you know is using them. The power of Facebook (I don't like it) is that nearly everyone in your circle of friends is using it. It would be extremely hard to dethrone Facebook.
I always say that if the internet hadn't been created and grown in the context of universities, something like email would not exist. Companies would much prefer a messaging system that is only usable with an account on their own servers. Companies much more favor closed, locked-in services.
And still… “decentralized” is the core of the internet. The internet would simply not work without the decentralized services that it’s based on.
Btw, ever noticed that practically *all* services that one can register for on the internet ask for an email address during registration? You know why? Because it allows anyone with an email address to communicate with anyone else who has an email address, no matter on which server the email is hosted, no matter to which person or company that server belongs. As a user, I can choose with which company I host my email, or I can even choose to host it on my own server… it does not matter: anyone else with an email address will be able to communicate with me.
Btw, I know there have been some efforts at decentralized social networks, and the open protocols that go with them. None of them actually seems to be used by any non-negligible number of users, however. Diaspora probably made the most public noise about it, but from what I read it had a disappointing implementation.
Either way, I can only hope "decentralized" protocols will make a comeback. I do doubt it, however. The only ones that have enough power to push that kind of thing are big companies… and it's not in their interest to do so. Imagine if Microsoft, Google and Facebook used a decentralized chat protocol? The sad thing is that, as far as I know, both the Google and Facebook chat protocols are actually built on top of Jabber, but they disabled the decentralized behaviour (federation) on purpose. (Jabber is a decentralized protocol…)
There are already several ones out there. I have not tried any of them nor do I remember any names, but there’s always at least one new contender announced every year.
It’s not much of a social network if you can only find people there who you have manually added. You see, if it worked like that all the parties that wanted to see each other would always have to manually establish the connection first, and that kind of removes a lot of the “social” from it. Also, with large friend networks it would quickly become a major drag on the networks, what with hundreds or even thousands of connections.
And now your data would again be at the mercy of others.
The protocol is actually XMPP, not Jabber. Jabber is a misnomer.
When I try to find an old connection again, I typically type his or her name into Google and see if I can find him or her. I don't log in on Facebook and check there, then log in on Google+ and check there, and then once again log in on LinkedIn and check there.
I assume that on a social network, one typically wants some “public” information available for everyone and other information only available for people that are trusted. The public information would be discovered by search engines.
Isn’t it like that nowadays too…? If you want to discover information about people that are more privacy-minded, you’ll have to first manually establish a connection with those people?
I think you get me wrong. I basically want federation to work. That means that every individual can host his own server with his own data, or can create an account on someone else's (individual or company) server. But every individual is free to choose a server that he or she trusts. The same way that everyone is free to choose an email account on a server that they trust. You dislike Google? Then you are free to choose an email account with Microsoft, for example, or run your own email server. None of those options is gonna limit in any way who you can email. (Or actually, nowadays it can limit you, because of the rules that companies like Google and Microsoft use to limit spam….)
Aside from that: right from the moment you share any piece of data with anyone, you are at the mercy of others. You can take all the precautions you want, but as soon as you send data out, you’re at the mercy of others.
When you send email, you are at the mercy of the people who maintain the server you send the email to. The same goes for XMPP. Those servers can also contain software to collect information on you, even though you don't have an account there – for example, if you email a lot of people who do have a Google or Hotmail address….
Well… I think Jabber is the original name of the protocol, but you are right.
Well, normally people just go to Facebook and search there, ie. within the service they want to add someone to. Googling for people generally results in a whole lot of irrelevant stuff, blog posts, comments, whatnot, and takes more time than just searching within the social-network that you’re using.
Soulbender,
I wouldn't rule it out. It's the age-old debate between having intelligence at the client level or at the server level. What would Tim Lee think?
Thinking down the line, though, P2P has a few advantages in terms of scalability, since everyone contributes to the network. Otherwise we'd need to buy a lot more servers. It's easy to *say* the servers are the users' responsibility; however, I suspect that would immediately turn off millions of would-be users who don't have a server and don't want to pay even a modest fee to try it. As much as I'd like to be selfless and let everybody use my servers freely, a flash flood of new users would immediately cripple me. Who's going to open their wallet to pay for the dynamic scalability that we'd need for the server-based solution? I don't want the project to be at risk of collapsing each year due to $20k in recurring bills that I cannot afford to cover. Also, as you noted, Google/Facebook might take the federated solution for themselves and break it for others on purpose.
Putting these issues aside, I don’t know that P2P and federated are mutually exclusive. It should be possible to build a P2P and federated hybrid network. The federated servers could use the same protocol and be totally transparent to other P2P nodes, only with a different configuration and policy. You could configure your client to delegate communications via your server instead of the P2P network. This way, we’d have the option to run a server to store/distribute data (say a corporation), but people without a server could still join as a P2P node.
Would you have concerns over a hybrid model?
Edit: OSNews is not really meant for brainstorming, and it's going to cut us off soon. However, I really like the discussion we have going; would anyone else participate if I set up another channel to exchange ideas?
You are definitely right in the sense that P2P effectively brings advantages.
But… I have to say, you really like challenges, don't you? I think that going P2P brings a lot of extra complexity with it. That is more a gut feeling than anything else. There are a lot of things that seem harder to me to implement in a P2P-based model. How would you define a "person" (or "persona") in that model? How would you be able to identify a "persona"?
I personally would like to base everything on public/private key cryptography. This would allow people to identify themselves by signing messages (or "stuff"). It would also allow encrypting messages using the public key of the recipient.
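For what it's worth, a minimal sketch of what that could look like – Node's built-in crypto module and Ed25519 keys are just my assumptions here, not a design decision:

```js
// A persona is identified by its public key; posts are signed so any peer
// holding that key can verify authorship. Sketch only.
const crypto = require('crypto');

// Done once, when the persona is created.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

// Sign a post before pushing it out to peers.
const post = Buffer.from(JSON.stringify({ author: 'alice', body: 'hello', ts: Date.now() }));
const signature = crypto.sign(null, post, privateKey);

// Any peer that knows the persona's public key can verify the post.
console.log('signature valid:', crypto.verify(null, post, publicKey, signature));
```

Encrypting to someone works the other way around: you encrypt with the recipient's public key (in practice via a key exchange plus a symmetric cipher), so only their private key can read it.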
Another thing on a P2P network: what about data persistence? When I switch off my laptop, are people still gonna be able to read my messages, my comments? Or how would that work?
To be honest, the thing that I personally find most important is that the whole system is based on open protocols and does not promote lock-in. That means I am free to choose the software I like, run it on any system I like and still be able to communicate with anyone else in the network, no matter what system or software that other person uses. So, if you pull it off on a P2P-based system, I would be happy.
A federated system with real servers would of course be nice. But when designing a P2P system, I would take "federation" into account during the design, but probably first focus on having a working pure P2P system, and then focus on federation in a second step.
Just to be realistic, though: like WereCatf already mentioned, there are already several projects that tried to build an "open" social network, and none of them succeeded in really becoming "visible".
I would definitely like to share ideas about this a bit more!
Actually, you replied to a post by snowbender.
There was one already…
It was called NNTP.
JPollard,
This?
http://en.wikipedia.org/wiki/Nntp
I don’t understand the relation to a P2P social network?
No kidding. That’s kind of ironic because the entire point of NNTP back in the day was to centralize storage (usually on University/ISP servers) to conserve diskspace and bandwidth.
There is nothing P2P about it unless every person is running their own NNTP server (which totally defeats the intended purpose).
Things like this have to create their own interest rather than rely on existing interest, so if you really want to do it, you’ll have to make it and then make it usable.
One thing I searched for and found recently is that the BitTorrent guys are looking to create something similar. It’s more like what the original vision of the cloud was going to be, rather than going back to the IBM hypervisor/client model.
Could you please extend on this, kwan_e?
Do you mean this: http://getsync.com/ ?
I’d be interested, IF you could create something that a non-geek would want to use (and most of them will only come on board once it is actually built). That is not an easy task.
First rule: it needs to be easy, easy, easy, easy, easy, easy. If it isn’t as simple as clicking a few buttons, it’ll be a geek plaything for a few months and then it will be abandoned as “no one uses it”.
No matter what fancy technology is behind the curtains, the installation, the setting up of an online identity, and the managing and backing up of that identity and the connected data (status updates, photos, etc.) needs to be something a kid of five can understand.
It probably should be a cross between an online profile, an IM, and a Twitter/Facebook timeline. You can find people through their profile. Once you’ve reached their profile, you can see their online status and browse their status updates and pictures. If they are online, you can message them directly; otherwise they get an offline message. Having the option of push updates from people you want to follow would be a bonus. Maybe add the possibility of sending voice messages like WhatsApp now has?
Maybe it could have a tabbed interface. Have the first tab be your own profile/timeline, the second the search and contact list tab and the rest of the tabs you can open to have the profiles of your friends easily accessible.
Also, minimal network hassle. Apply every trick in the book to pierce network obstacles any way you can. Getting on “Friend Swarm” should be a click on the connect button. The program does its thing and a few minutes later you are connected.
It needs to run well on multiple platforms. If it doesn’t run and install easily on Windows, OS X, Linux, BSD, Haiku, iOS, Android, Ubuntu Touch, Firefox OS, Windows Phone/RT, etc. it’s not going to reach critical mass. So it would need packages that have minimal dependencies on the underlying OS and it needs to be a “one click” install, if possible.
Alphas can be “compile from source”, but the beta program should get me, my mother, my neighbor and the kid two cities over excited.
I know what you mean, this has been a concern of mine too. But I’m more optimistic. Which I’ll explain below.
My biggest concern is unencrypted protocols and storing data unencrypted at central locations (“in the cloud”, Google Chrome sync and all these things). And 80% of all mail goes through a few webmail providers: Google, Yahoo, Outlook. That is just ridiculous; email is a decentralized protocol!
I do see some good trends:
– HTTP/2.0 will be encrypted
– WebRTC (which most people associate with just video) is actually an encrypted peer-to-peer protocol in the browser, and not just for video but for any data. It’s like the old Skype and does NAT hole punching (current Skype, FaceTime and so on all go through servers now!). Research has shown that about 90% of the time data flows directly between peers; the other 10% of the time a relay is needed, but the relay only sees encrypted data. (See the sketch after this list.)
– I’m more optimistic about IPv6 than you are.
I’m seeing better IPv4 NAT implementations now. Have a look at MAP: http://en.wikipedia.org/wiki/IPv6_transition_mechanisms#MAP
These don’t have to go through a ‘carrier-grade NAT’; as a customer you just get fewer IPv4 ports.
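To make the WebRTC point above a bit more concrete, here is a hedged sketch using the Python aiortc library. The signalling step (getting the offer/answer to the other peer) is deliberately left out, since WebRTC leaves that up to the application:

    # Sketch of an encrypted peer-to-peer data channel using aiortc
    # (pip install aiortc). The data channel carries arbitrary data over DTLS,
    # not just audio/video.
    import asyncio
    from aiortc import RTCPeerConnection

    async def main():
        pc = RTCPeerConnection()
        channel = pc.createDataChannel("chat")

        @channel.on("open")
        def on_open():
            channel.send("hello, peer")

        @channel.on("message")
        def on_message(msg):
            print("received:", msg)

        offer = await pc.createOffer()
        await pc.setLocalDescription(offer)
        # Send pc.localDescription to the remote peer over your own signalling
        # channel, then apply its answer with pc.setRemoteDescription(...).

    asyncio.run(main())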
The biggest problem is that people are still going to Google and the others to store their data. There are decentralized alternatives, and if people used them, more developers would build for them.
Edited 2014-02-07 09:55 UTC
And this is why people need to get after their ISPs to get off their asses and do their job. Some, such as Comcast and TWC, are doing a reasonable job, but a whole lot are being super lazy, and it is an epic fail.
I’d be curious about the results. Sometimes I wonder how such a thing would work, and how one would go about doing it (a network fully P2P, like Freenet or Tor? A forum of sorts / imageboard hosted inside a Tor-like network?)
You must keep in mind the “cute cats and porn theory of online services”, though. The first question is: how would such a social network make it easier for people to share photos of cute cats? The second issue: in a working online service, there are always people who want to share porn… which might complicate things.
PS. I’m not a programmer though, so I can’t be of much help…
Edited 2014-02-11 01:26 UTC
zima,
That doesn’t matter; you can still share ideas! Even designs would help. If this is really going to happen, then we’ll need lots of input from everybody, and eventually testers.
I started setting up a wiki; however, I don’t see much point in publishing it before I’ve made time to populate it a bit. That will happen; it’s just not ready. In the meantime, if anyone wants to discuss more or get notified when it’s ready, you can email me at [my username] @ evergrove.com. We could do an email distribution list.
Edit: I do hope we can get people actively involved, but contact me anyways even if you are passively interested.
Edited 2014-02-11 11:48 UTC
The problem is, I don’t have a feel for what’s doable and what’s not; my ideas would probably be pretty worthless. But I’ll get in touch…
Actually, the control they have now exists for a number of reasons:
– the web isn’t decentralized enough (lots of things are stored at Google, Apple, Microsoft, Yahoo, etc.)
– lots of web traffic is still unencrypted (http instead of https)
If both are solved, it will be really hard, if not impossible, for governments to do the bulk collection they are doing now.
And people need to stop trusting companies to do the right thing (for example storing their information unencrypted at Google).
Edited 2014-02-07 09:22 UTC
Not true at all. Google’s servers (and the like) are not the only source of data collection. The actual lines making up the internet’s framework are physically compromised, for example. Additionally, encryption is somewhat of a false security blanket. The methods typically employed might make it difficult for your neighbor to jump on your wifi or look at your packets, but governments have the tools to blow through them. These are just a couple of things that have been reported on recently, and there’s plenty more.
The idea that you are going to somehow get a secure and/or private experience at this point is day-dreaming at best. Believing otherwise only puts you at a further disadvantage.
I would put that a little bit differently:
Encryption can solve the bulk collecting of _data_ they are also doing now (why does an instant messaging client send its friends list over HTTP instead of HTTPS?).
Examples:
– replace HTTP/1.x with HTTPS and HTTP/2.0
– replace VoIP with WebRTC
I’ll admit just implementing the examples above will not solve the privacy/meta-data-collection problem.
Solving that one is a lot harder: you’ll need to encrypt first, as above, and then put everything on top of something like Tor.
Even then, you’ll still be doing transactions, like financial transactions, which can identify you.
One way to solve that would be to have a form of digital cash, something like Bitcoin, a cryptocurrency.
That would allow you to download a piece of information, data or software and pay for it anonymously, without the website knowing who you are or where you are from: the Tor exit node only sees where you are going, the Tor relays only see encrypted traffic, and the first Tor node only sees where you are from.
Have a look at this handy graph/tool from the EFF:
https://www.eff.org/pages/tor-and-https
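As a small, hedged illustration of the “HTTPS on top of Tor” layering (assuming a local Tor daemon on its default SOCKS port 9050, and requests installed with SOCKS support):

    # Route an HTTPS request through a local Tor SOCKS proxy: the site only
    # sees the exit node, and the relays only see encrypted traffic.
    # Assumes Tor is running on 127.0.0.1:9050 and 'requests[socks]' is installed.
    import requests

    tor_proxy = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved via Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    resp = requests.get("https://check.torproject.org/", proxies=tor_proxy, timeout=60)
    print(resp.status_code)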
But all this will still not be enough; there is a lot more work needed, for example to make your browser not identify return visitors, and to encrypt your traffic in such a way that visiting the same website twice doesn’t produce a similar-sized pattern that can be fingerprinted.
That is the really, really hard problem, and it can’t be easily solved. We’ll need to change how the files that form a website are retrieved, possibly in a distributed fashion. They would then need to be signed so they can be collected from anywhere, kind of like how BitTorrent does it.
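A toy sketch of that “fetch from anywhere” idea: identify a resource by its hash (as BitTorrent does), so any node can serve it and the client can verify it locally; signing the hash with the publisher’s key (as sketched earlier) would then prove who published it. The names below are made up for illustration.

    # Toy content addressing: a resource is named by its SHA-256 digest, so the
    # bytes can come from any peer or mirror and still be verified locally.
    import hashlib

    def content_id(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def verify_content(data: bytes, expected_id: str) -> bool:
        # True no matter which node handed us the bytes, as long as they match.
        return content_id(data) == expected_id

    page = b"<html>...some website resource...</html>"
    cid = content_id(page)
    print("content id:", cid, "valid:", verify_content(page, cid))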
Getting everything deployed is a really, really long process and really, really hard to get right, but I wouldn’t say it is an impossible problem.
The IETF, the people that write standards for the Internet, have officially committed themselves to solving the bulk-collection problem.
Whether that means encrypting all the things but not preventing metadata collection, I don’t know. They want to solve it, I think. But it isn’t an easy problem; it will need a change in mindset.
Edited 2014-02-08 17:47 UTC
Here is the IETF meeting where the decision was made:
http://www.youtube.com/watch?v=oV71hhEpQ20#t=23m19s
ilovebeer,
Good encryption defeats passive attacks. This means that even when an adversary is monitoring the channel, they cannot decrypt it. All they can do is learn traffic end-points, which might be used to infer relationships.
It would require an *active* attack to actually decrypt the data (like substituting HTTPS certificates with fraudulent ones). This should not be difficult for an agency to pull off, because it’s likely they have the signing keys of at least one of the CAs. However, this would be detected fairly quickly if they attempted it on a massive scale.
My recommendation would be to communicate using encryption tools that don’t delegate security to third parties (CAs).
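One low-tech way to act on that advice is certificate pinning: remember the certificate you saw before and refuse a silently substituted one, regardless of what any CA says. A rough standard-library sketch (the pin store is just an in-memory dict here; a real tool would persist it and handle legitimate certificate rotation):

    # Pin a server's certificate fingerprint so a later, substituted certificate
    # is noticed even if a CA would happily have signed it.
    import hashlib, ssl

    PINS = {}  # hostname -> SHA-256 hex digest of the PEM certificate seen earlier

    def check_pin(host: str, port: int = 443) -> None:
        pem = ssl.get_server_certificate((host, port))
        fingerprint = hashlib.sha256(pem.encode()).hexdigest()
        known = PINS.get(host)
        if known is None:
            PINS[host] = fingerprint               # first contact: remember it
            print(host, "pinned:", fingerprint[:16], "...")
        elif known != fingerprint:
            raise RuntimeError(host + ": certificate changed -- possible MITM")
        else:
            print(host, "matches the pinned certificate")

    check_pin("www.example.org")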
We need a new system, an improvement on the CA-system.
But if you don’t want to have to do a manual key exchange with people you don’t even know, you’ll need some kind of third “party” (an organisation or a technical solution).
The best candidate we have right now is DNS, as we already depend on it. Or do you know the IP address(es) of your bank?
DNSSEC/DANE is currently a standard, but DNSSEC hasn’t made it into any operating system or browser yet, at least not enabled by default for the Internet.
DNSSEC/DANE can be used to improve on the CA system where you want to, or be used without the CA system entirely.
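To make the DANE part slightly more concrete, here is a hedged sketch of fetching a TLSA record with the dnspython package. It only retrieves the record; the DNSSEC validation that makes the answer trustworthy is assumed to happen on a validating resolver, not in this snippet.

    # Fetch the DANE/TLSA record for HTTPS on a domain (pip install dnspython).
    # The domain below is a placeholder; substitute one that actually publishes TLSA.
    import dns.resolver

    name = "_443._tcp.example.com"
    try:
        for record in dns.resolver.resolve(name, "TLSA"):
            print("usage:", record.usage, "selector:", record.selector,
                  "matching type:", record.mtype, "data:", record.cert.hex())
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(name, "publishes no TLSA record")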
Edited 2014-02-10 22:57 UTC
Lennie,
“We need a new system, an improvement on the CA-system.”
If by “improvement on the CA-system” you mean eliminating CAs entirely, then yes.
The way PKI works, though, doesn’t give us much choice: it isn’t possible to validate a certificate without either having pre-exchanged keys somehow or using a third party to vouch for it. Maybe in the medium/long term, mobile technology will make it trivial and automatic to exchange keys with everyone we know while they’re in the same room, without users having to go out of their way to do things the manual way, like GPG. So my bank’s certificate could be verified periodically while I am physically at the bank, but when I’m back home I would not be dependent on a third-party CA to verify it. As a side benefit, the keys could be associated with a picture of the individual/company at the time they’re acquired.
Even though I consider this solution nearly ideal, I have to concede that common mobile devices may not be sufficiently secure to take on this role, at least not today. It would be terribly unfortunate to trust security keys to a compromised device: software vulnerabilities, human error, adversary access to an unguarded phone…
http://techbeat.com/2014/01/nsa-able-hack-iphone/
http://www.cbsnews.com/news/winter-olympics-2014-privacy-hacking-so…
“The best candidate we have right now is DNS, as we already depend on it.”
I completely agree that it’s the best way to go, even though the accumulation of changes accrued over time is making things more fragile as we move forward. In this case:
https://www.dnssec-deployment.org/index.php/2012/03/some-fragments-a…
In hindsight, there are lots of things we should have done differently.
Edited 2014-02-11 05:50 UTC
Yes.
I’m sorry, but my bank is working hard to get rid of as many bank offices as possible. It’s a good cost reduction for them: buildings, personnel.
Your solution also doesn’t work for that webshop on the other side of the world.
If you think it has anything to do with what was done in the past with old protocols and possibly wrong choices, then you would be wrong.
There is something the crypto (and also DNS) people figured out a long time ago: Zooko’s Triangle.
Zooko’s Triangle is a description of desirable properties of naming systems, of which only two can be implemented at any one time. They are:
– Secure: only the actual owner of the name is found.
– Decentralized: there is no “single point of failure” that everyone trusts to provide naming services.
– Human-readable: the name is such that humans can read it.
It is similar to the project management triangle for Fast, Good and Cheap.
And crypto constraints are much harsher than project management ones. There is no outsourcing parts of it to a cheap-labor country.
And don’t get me started on fragments (sometimes pictures tell people more than words):
http://dnsreactions.tumblr.com/post/61101236858/when-signing-a-doma…
At least Linux now has options to control it these days:
http://dnsreactions.tumblr.com/post/74051600184/linux-3-13-stops-fr…
Even just changing DNS-providers is hard with DNSSEC:
http://dnsreactions.tumblr.com/post/49349616260/when-you-transfer-a…
https://www.sidn.nl/en/news/news/article/secure-verhuizen-van-dnssec…
Edited 2014-02-11 08:45 UTC
Lennie,
It wouldn’t need to be mutually exclusive with anything else.
Keep in mind that oftentimes guaranteeing continuity (knowing it’s the same key on the other end) is more important than the actual identity of a person. For example, if I meet other programmers on IRC and we talk for a few hours or even weeks before deciding to start up a project together, their identity is probably less important to me than assurance that it’s the same people every time we chat. Sometimes our trust is built on past interactions instead of identities. If I go back years later, a CA couldn’t tell me it’s the same guy, but a key exchange system could tell me that.
On sites like craigslist, it’s usually not the actual identity of the contractors that matters so much as the fact that we’re talking to the same person every time. It wouldn’t matter where they are in the world. CAs aren’t really useful in these cases, especially since our names are not unique. Even business names aren’t unique, which can pose an issue when looking up information about them (say, an employer). The advantage of domain names is that they truly are unique (as long as they’re not misspelled).
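A hedged sketch of that “same person every time” idea, trust-on-first-use style: store the public-key fingerprint from the first conversation and compare it on every later one, with no CA and no legal identity involved. The file name and helper names below are made up for illustration.

    # Trust-on-first-use: remember a contact's public-key fingerprint the first
    # time we talk, and warn if it ever changes, without knowing who they "really" are.
    import hashlib, json, os

    STORE = "known_peers.json"  # illustrative location for the pin store

    def fingerprint(public_key_bytes: bytes) -> str:
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check_peer(nickname: str, public_key_bytes: bytes) -> str:
        peers = json.load(open(STORE)) if os.path.exists(STORE) else {}
        fp = fingerprint(public_key_bytes)
        if nickname not in peers:
            peers[nickname] = fp
            json.dump(peers, open(STORE, "w"))
            return "first contact, key remembered"
        if peers[nickname] != fp:
            return "WARNING: key changed -- this may not be the same person"
        return "same key as before"

    print(check_peer("contractor_from_craigslist", b"...their public key bytes..."))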
In my opinion there were mistakes. We might build a system that makes sense, and then we might justify every incremental evolution of that system over time. Yet when we step away and look at what we have as a whole, some things are regrettable due to their complex/inconsistent/fragile nature today.
Edited 2014-02-11 11:07 UTC
Sure, but what I meant is, it isn’t the reason why we don’t have a good system.
The real reason is that there is no system yet that has solved Zooko’s Triangle.
There are systems like Namecoin that are Bitcoin-inspired, but they have their own problems.
Identity we build over the years.
Pure wisdom.
Genuine is, I Say, not others.
So memory is the key.
Do you remember exactly
how I walk, how I smile,
which things make me smile,
make me infuriate,
make me dream?
So Your memory is My identity.
So My memory is yours.
🙂
Robinson Crusoe
does not need an identity