Mozilla plans to establish an automated process which would verify that binaries contain only the code found in the official source repositories, and not spyware secretly added during the build process at the behest of government intelligence agencies. In a blog post entitled “Trust but Verify”, CTO Brendan Eich and R&D VP Andreas Gal note that governments “may force service operators [such as Mozilla] to enable surveillance (something that seems to have happened in the Lavabit case)” and pledge to develop systems which will make Firefox resistant to this form of tampering.
Even if this whole idea sounds paranoid (the threshold for paranoia seems to have moved since June last year!), it looks to me like a very commendable effort. Unless I’ve misunderstood, it won’t offer any more assurance than compiling the code yourself, but that’s more than most of us get now.
However, they’ll need to take real care to avoid it being false reassurance. It’s great that the blog post cites Ken Thompson’s seminal Turing Award Lecture, but what’s proposed won’t tackle this. The whole point of the lecture was that checking the full source (of both the program *and* the compiler being used) isn’t enough to guarantee there are no back doors.
It’s a really interesting idea anyway, especially if they actually implement it. If they can get it right, so that end-users can have justified trust, it’ll be impressive.
flypig,
“Unless I’ve misunderstood, it won’t offer any more assurance than compiling the code yourself, but that’s more than most of us get now.”
Well, for most it’s probably even more secure than compiling the code ourselves because most of us don’t even conduct a cursory inspection of the code first. There’s a very good chance that a source code based back door can be installed undetected even if it’s not concealed. If no one looks at it, it might as well be commented in all caps “HERE BE A BACKDOOR”.
The benefit of using the code verification network is that the entire network would need to be compromised simultaneously to avoid triggering a suspicious hash mismatch. If ANYONE on the network is doing a good job monitoring the code for backdoors, then a hash-verified binary copy is probably more secure than a copy compiled from source by an end user.
The only problem with this is that the hash signature *should* be verified out of band. If I download X and hash(X) from the same compromised channel, then I really cannot know for sure that they aren’t both compromised. In theory, certificate authorities are the answer to man-in-the-middle attacks such as this; however, fraudulent certificates happen, and it’s almost a sure thing that some of the CAs are a front for covert government operations anyway.
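To make it concrete, here’s a rough Python sketch of the out-of-band idea; the URLs are made up, and the point is only that the binary and its hash arrive over independent channels:

    import hashlib
    import urllib.request

    # Hypothetical URLs for illustration: the binary and its published hash
    # come over two independent channels, so compromising one channel is not enough.
    BINARY_URL = "https://download.example.org/firefox-installer.exe"
    HASH_URL = "https://mirror.example.net/firefox-installer.exe.sha1"

    binary = urllib.request.urlopen(BINARY_URL).read()
    published = urllib.request.urlopen(HASH_URL).read().decode().strip().split()[0]

    if hashlib.sha1(binary).hexdigest() != published:
        raise SystemExit("Hash mismatch: one of the channels may be compromised")
    print("Out-of-band hash check passed")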
I see your point: the equivalence would be to installing from source, where I have some certainty that the source I’m using is the same as the source other people are using (and presumably auditing).
Yes, but it does still rely on this happening. Whether I compile from integrity-checked source, or use a binary that has been verifiably generated from a given source tree, I still have to rely on the assumption that someone else audited it, the libraries it relies on, and all previous versions of the compiler (since there’s no way I’m doing that myself!).
It’s a good initiative and reducing the requirement to trust a single organisation makes a lot of sense. If only I could apply the same technique to all of the other technologies I use regularly.
flypig,
“I still have to rely on the assumption that someone else audited it, the libraries it relies on, and all previous versions of the compiler (since there’s no way I’m doing that myself!).”
Yes of course it might even go down to firmware & hidden hardware (like the System Management Mode that’s inaccessible even to the OS itself). I think this is where diversity helps overcome the security implications of monoculture.
If I can cross-compile a binary on ARM hardware running Linux and you can compile it on amd64 running Windows, and we end up with the same binaries, then it rules out large swaths of the system that might be compromised. The compiler itself is a weak link here, however, and I don’t know how to solve this monoculture problem. If we used a different compiler, we know right off the bat that we’d end up with different binaries even if nothing was wrong. How can we prove that nothing is hidden in the compiler?
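The cross-checking part, at least, is easy; here’s a toy sketch (the builders and hashes are made up) of how diverse builds would expose a tampered one:

    from collections import Counter

    # Hypothetical reports: each volunteer compiles the same tagged source
    # tree on different hardware/OS and publishes the hash of the result.
    reports = {
        ("alice", "arm-linux"): "9b1f0c",
        ("bob", "amd64-windows"): "9b1f0c",
        ("carol", "amd64-linux"): "9b1f0c",
        ("mallory", "amd64-linux"): "d00d00",  # a tampered build stands out
    }

    consensus, votes = Counter(reports.values()).most_common(1)[0]
    for (builder, platform), digest in reports.items():
        if digest != consensus:
            print(f"ALERT: {builder} on {platform} disagrees with the "
                  f"{votes}/{len(reports)} consensus")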
“It’s a good initiative and reducing the requirement to trust a single organisation makes a lot of sense. If only I could apply the same technique to all of the other technologies I use regularly.”
I agree, this might be a good model for all open source projects in the future. Mozilla is just one small part of a large collection of software we use. This model depends on open source software; closed-source software cannot be independently compiled outside the influence of those who want the backdoors added.
Perhaps we could have a very strict meta-compiler to sign code. Something like a pass 1 with no code rearrangement or optimization, totally independent of the target architecture. Once the base system was audited, we could just scale it up to the higher levels. It may work for the software stack but, of course, not at the hardware/firmware level. For that, diversity and network auditing will continue to be required.
But I think it’s probably the best solution we have.
I believe it is even possible these days to compare the output of multiple open source compilers.
And obviously we can check the code history in git, which checksums everything.
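That checksumming is easy to demonstrate: a blob’s ID in git is just the SHA-1 of a short header plus the file contents, and commits hash the tree and parent IDs in turn, so the whole history chains together:

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # Git stores a file as "blob <size>\0<content>" and names it by
        # the SHA-1 of those bytes, so any edit changes the ID.
        header = f"blob {len(content)}\0".encode()
        return hashlib.sha1(header + content).hexdigest()

    # Matches `git hash-object` for the same bytes:
    print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a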
There can be no privacy and no protection in a country where government can force companies to reveal customers’ personal data and encryption keys.
I won’t be surprised when they start to arrest people for attempting to hide secrets from the government.
I’m not sure where you’re based, but certainly in the UK this is already the case: http://www.theregister.co.uk/2009/11/24/ripa_jfl
Edit: from re-reading your comment I now realise you’re already aware of what I mentioned. To elaborate a little, I’m not sure forcing disclosure of keys is any different from arresting people for hiding secrets.
But how do you know nothing has happened to mozilla’s tool?
Coxy,
I’d also like more details, but from the sounds of it the intention is to preemptively reveal the existence of bugs planted at the source (ie mozilla) and kept secret via court-mandated gag orders. It’s not intended (as far as I can tell) to protect users whose systems have already been compromised.
So for example, they might have 1000 volunteers who independently compile binaries from the public sources and publish the SHA1 hashes. A simple utility like sha1sum can then be used to verify the authenticity of the binaries. I suspect it will become incorporated directly into the downloader, perhaps even with automatic reporting when a “covert” modification is detected.
Note that this would not necessarily protect individual users from being targeted downstream. However, it does protect Mozilla from being forced to incorporate backdoors at the source and then being legally gagged from talking about it. The idea is that a backdoor in mozilla’s official binaries would quickly get caught by a hash mismatch between the corrupted official binaries and the ones compiled from the public source by the volunteers.
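For anyone without sha1sum handy, the check is trivial to reproduce; a minimal Python equivalent:

    import hashlib
    import sys

    def sha1sum(path: str, chunk_size: int = 1 << 16) -> str:
        # Stream the file so large binaries don't have to fit in memory,
        # mirroring what the sha1sum utility does.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        print(sha1sum(sys.argv[1]))  # compare against the volunteers' published hashes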
The problem with software is that you must trust/certify the whole stack to be reasonably sure there is no backdoor on it.
If you take Firefox from any Linux distribution, for example, it uses many system libraries to do parts of its job, and if a critical lower layer is compromised, your security goes down with it. The same can be said about other platforms.
For people really worried about security, take a look at the links below.
http://istruecryptauditedyet.com/
https://madiba.encs.concordia.ca/~x_decarn/truecrypt-binaries-analys…
The last link shows that binary auditing can be done; it is hard work, and we may need a better/more specific tool set to be able to do it right even on open systems. I don’t think it is possible on closed ones. For them, network packet auditing seems to be the only way. Of course, not from the computer you are using.
Once out of the box there is no way to put the trust back. Paranoid we became.
acobar,
“The problem with software is that you must trust/certify the whole stack to be reasonably sure there is no backdoor on it.”
Are you talking about installing new software on a system that already has backdoors on it? If so, I don’t think that’s what they’re trying to solve here. It doesn’t stop the government from hacking our boxes; rather it tries to tackle the problem where a government coerces mozilla to compromise THEIR end and gags them from talking about it.
So in other words, I think mozilla is trying to solve the gag order problem, which is relevant given how the companies involved with Prism weren’t allowed to talk about it. So mozilla asked “How can we give our users assurances that we are not including backdoors that we are not allowed to talk about?” This network is a clever solution because if they were ever coerced to add a backdoor to the binary, *they* wouldn’t have to tell anyone about it; the network would reveal it automatically.
Not specifically. I am talking about the difficulties related to binary files that rely on libraries to do part of the job, which is precisely what we have on our systems now. For example, the packets we send and receive pass through the network stack, and so, if that part of the system is “compromised” (for example, if it must have a backdoor because the NSA demanded it be on all systems distributed by a company or individual), the claim that Firefox in particular has no backdoor means nothing.
acobar,
Either 1) the backdoor was already present on the system (as libraries or whatever) or 2) the bug/vulnerability is installed with firefox.
Maybe a user won’t care about the distinction, but it is very significant in determining which party is responsible. We can’t reasonably expect mozilla to protect us from backdoors that are already lurking in our system.
I think there’s a bit of misunderstanding about mozilla’s goal here, which is protecting mozilla itself from a scenario in which they are under a gag order and cannot talk about secret modifications in the binaries.
This may fall short of the unconditional full-software-stack protection users would like; however, given that mozilla does not provide the whole software stack and is not an antivirus developer, it is an unreasonable expectation of mozilla alone. Maybe those familiar with firefox-os could chime in on its security?
acobar,
“It is a good catch but, as I said, we don’t rely only on the Firefox binary. This really should be expanded to go deeper into systems to be truly useful but, granted, it may not be what Mozilla has in mind, as you rightfully pointed out.”
We were cross-posting, and I hadn’t seen your response… so yeah, I pretty much agree that it needs to be expanded to the whole software stack.
We’d need something else for it to truly be useful as an end-user system verification mechanism. Since we cannot trust the system we’re running on, it might help to have a read-only live CD that scans all executables on the OS/hard drive and compares them to the legit hashes from other users on the network.
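Something along these lines, say (the manifest format and mount point are made up for illustration):

    import hashlib
    import json
    import os

    # Hypothetical manifest: maps installed file paths to the hashes the
    # verification network agreed on for this OS release.
    with open("trusted-manifest.json") as f:
        trusted = json.load(f)

    def sha1_of_file(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # The suspect system is mounted read-only from the live CD.
    for root, _dirs, files in os.walk("/mnt/suspect/usr/bin"):
        for name in files:
            path = os.path.join(root, name)
            if path in trusted and sha1_of_file(path) != trusted[path]:
                print(f"MODIFIED: {path}")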
It is a good catch but, as I said, we don’t rely only on the Firefox binary. This really should be expanded to go deeper into systems to be truly useful but, granted, it may not be what Mozilla has in mind, as you rightfully pointed out.
Looks like SourceForge will be in trouble.
They insert browser toolbars and all sorts of other junk into the installers.
Mozilla seems to be one of the few publicly trustworthy places still available, if any trust remains anywhere at all. My fullest support of Firefox OS is based purely on this apparent fact.
The FOSS community could have long ago collectively/cooperatively built viable alternatives to all those services that now bend the entire world’s population over and have their way with them. MVNOs, mail services and search engines could have been (and still can be) created by the community, with legally binding primary goals that would protect their users’ privacy and autonomy to the fullest extent of the law as part of their core charters, instead of the current commercial offerings which exploit their clients to the fullest extent of the law. Albeit with “Trust Us” smiley faces on them.
Advertising-based services need not all be intrusive and based on tracking their users. Generic ads have been the total mainstay of broadcasting since its beginnings. It’s true that it’s a less profitable form of advertising, because without user-tracking logs being kept, the NSA and all the other government snoops would have no reason to hire such services. So bedding down with that pure evil isn’t then an option.
It’s too bad however, that most Americans have traded their birthright for the mess of pottage that is convenient access to pointless information regarding everyone they know’s every bacon-burp and form of their flatulence. And let’s not forget all the nifty coupons fed to them as they go about their daily lives. Or the supreme idiocy that is those Borg Wannabe’s sporting Google Glass and the like.
Knowledge IS power.
Knowledge specific to you, is power OVER you.
“Freedom” cannot, and will not, survive the loss of privacy.
Surely all those tinfoil-hatters of recent decades are now looking much more prescient than pin-headed.
Democracy does not exist without privacy.
I think democracy isn’t great, but probably has the most potential to let us remain free.
And the basis of a good democracy is voting with a secret ballot.
So privacy is freedom.
Mozilla is also working on other projects. Think of how they implemented Firefox (bookmarks) Sync. It encrypts the data on your computer and then “stores it in the cloud”. You can choose to store it on their servers, or you can download the software and run it on your own server.
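Roughly like this; not Sync’s actual protocol, just a sketch of the encrypt-before-upload idea (using the third-party cryptography package):

    from cryptography.fernet import Fernet  # pip install cryptography

    # The key is generated and kept on the local machine; the server
    # only ever sees ciphertext it cannot read.
    key = Fernet.generate_key()
    f = Fernet(key)

    bookmarks = b'[{"title": "OSNews", "url": "http://www.osnews.com/"}]'
    ciphertext = f.encrypt(bookmarks)  # this is all that gets uploaded

    assert f.decrypt(ciphertext) == bookmarks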
I’m seeing some other lights.
I’m seeing more open source people looking at secure boot. Trying to make it work in a way that keeps the user in control. And maybe even be able to verify that the software running on the machine is actually the software they installed.
Obviously secure boot is a dangerous solution, especially how it is currently organized for PCs, as Microsoft is the one signing the default keys.
I’m seeing the encrypted protocols that were already being worked on by the W3C and IETF (both of which get support from ISOC, I believe) getting more attention, and even some older protocols being fixed to make them viable for everyday use.
Like freedom, privacy is an illusion in today’s world. The software running on chips inside your computer/devices can’t even be trusted so should people really feel more secure? We crossed the line into the surveillance age a while ago and there isn’t any going back. The idea that Mozilla, or any other company for that matter, is capable of `protecting me` in any capacity from a government that has already proven it will do what it likes, when it likes, anywhere in the world it likes, …is either destined to fail or intended to create false trust in the first place.
Your house is watched from space. Your movements are recorded whether it’s “traffic” cameras watching you and/or gps running on your cellphone in your pocket. All of your banking and finance is tracked. Everything you do on the internet in the “privacy” of your own home is tracked. Your phone calls are recorded. I could spend all day continuing this list… Instead I will say welcome to the world you’ve been living in, whether you knew/know it or not.
The technical goal is to have reproducible builds and verifiable commits, IMO.
Basically, get closer to at least what Linux provides, but for Firefox. If possible, better.
This is called a canary. You put a marker in public that proves that something hasn’t been tampered with. And even if you are not allowed to talk about the tampering people will notice it when the marker disappears.
This technique is now also used in several (financial) reports. Tech companies write “we didn’t receive any data requests” in those reports every quarter. If such a request were to come with a “you cannot talk about it” gag order, those tech companies can simply remove the “we didn’t receive any data requests” line from their reports. This way they never talked about it, but people know anyway.
This technique is not meant to directly protect against any tampering. It is only meant to let people know about the tampering even when you are not allowed to talk about it.
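Checking for a canary can even be automated; a trivial sketch with a made-up report URL:

    import urllib.request

    # Hypothetical URL for illustration. The check is deliberately dumb:
    # all that matters is whether the canary sentence is still present.
    REPORT_URL = "https://example.com/transparency-report.txt"
    CANARY = "we didn't receive any data requests"

    report = urllib.request.urlopen(REPORT_URL).read().decode().lower()
    if CANARY not in report:
        print("Canary gone: the statement was removed; draw your own conclusions")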
They allow hundreds of CAs and any of those can be compromised (and the xpi extensions are “signed” by those same certs).
Of course everyone trusts the Turkish government! Don’t Verisign and Google and Apple send their private keys to the NSA as soon as they are generated?
What happened to Diginotar?
They need to fix the SSL/CA system – that is the screen door in back – instead of replacing a steel door with a vault door in front.
And the CA store is probably not part of the binary.
Also note that any “CA approved” javascript in the background can run, so a MITMed images.amazon.com can completely rewrite (Javascript has the power to delete the current page and replace it with anything else) and redirect the amazon.com page. Or if I had a cert for “google-analytics.com” I would own most “SSL-only” sites.
They also need to build in NoScript, or at least have “don’t run 3rd-party javascript” as the default. Some places have over a dozen other sites that supply javascript.
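For what it’s worth, sites can already refuse third-party scripts on their own with a Content-Security-Policy header; a minimal sketch of a server sending one:

    from wsgiref.simple_server import make_server

    # "script-src 'self'" tells the browser to execute scripts only from
    # the page's own origin, so an injected google-analytics.com tag
    # simply won't run.
    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/html"),
            ("Content-Security-Policy", "script-src 'self'"),
        ])
        return [b"<html><body>No third-party scripts run here.</body></html>"]

    make_server("localhost", 8080, app).serve_forever()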
tomz,
I don’t think it’s stupid or irrelevant. Javascript, CA’s and compromised binaries are all different types of security problems that warrant various solutions. There’s no magic bullet to solve them all in one shot.
This actually bugs me a lot since it’s so prevalent (including here). Many clients demand web developers link in 3rd party analytic scripts without considering security or privacy. Yet this gives entities like google, along with all the CAs/agencies who might impersonate google, complete access to the websites whether they’re SSL protected or not. I am disappointed that this terrible security practice has become the status quo and that everyone is so willing to voluntarily throw away the security of their site.
Yeah, but then you break some legitimate things too. I think javascript ought to incorporate separate sandboxes, such that third-party scripts would never have the opportunity to directly hijack the first-party code or the browser.
I have to ask myself how often this known security flaw gets exploited?
Well, one thing I did notice from a number of security conferences is that TLS encryption/authentication seems to give the NSA a lot of trouble.
So this year I am installing TLS all over the place.
-Hack
hackus,
“Well, one thing I did notice from a number of security conferences is that TLS encryption/authentication seems to give the NSA a lot of trouble.”
TLS foils *passive* monitoring; however, it is very likely in my opinion that the NSA has access to several CA root signing keys. If so, that would enable them to construct an *active* monitoring proxy similar to what is described here:
http://www.zdnet.com/how-the-nsa-and-your-boss-can-intercept-and-br…
Since the proxy’s certificates are signed by “legitimate” CA keys, ordinary HTTPS users are none the wiser. However my educated guess is that the NSA would only use this technique against targets rather than for blanket surveillance to reduce the risk of getting caught.
I’ve used a very similar technique with stunnel to be able to use wireshark against encrypted traffic (using self-signed certificates rather than official CA certificates, obviously).
https://www.stunnel.org/index.html
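For what it’s worth, pinning the server’s certificate fingerprint catches that kind of proxy even when the fake certificate chains to a “legitimate” CA; a rough sketch (the pinned value is a placeholder, recorded out of band beforehand):

    import hashlib
    import socket
    import ssl

    HOST = "www.example.com"
    # Placeholder: in practice this is the SHA-256 of the real server
    # certificate, obtained over a channel you trust.
    PINNED_SHA256 = "00" * 32

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
        print("WARNING: certificate changed; possible CA-signed MITM proxy")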