Okay, this is potentially very big news that really needs all the exposure it can get. OpenBSD’s Theo de Raadt has received an email in which it was revealed to him that ten years ago, the FBI paid several open source developers to implement hidden backdoors in OpenBSD’s IPSEC stack. De Raadt decided to publish the email for all to see, so that the code in question can be reviewed. Insane stuff.
De Raadt received the email from Gregory Perry, currently the CEO of GoVirtual Education. Ten years ago, while he was CTO at NETSEC, Perry did some consulting work for the FBI’s GSA Technical Support Center. Perry’s NDA expired recently, and as such, he decided to contact De Raadt about what he had learned ten years ago.
“My NDA with the FBI has recently expired, and I wanted to make you aware of the fact that the FBI implemented a number of backdoors and side channel key leaking mechanisms into the OCF, for the express purpose of monitoring the site to site VPN encryption system implemented by EOUSA, the parent organization to the FBI,” Perry details in the email, “Jason Wright and several other developers were responsible for those backdoors, and you would be well advised to review any and all code commits by Wright as well as the other developers he worked with originating from NETSEC.”
“This is also why several inside FBI folks have been recently advocating the use of OpenBSD for VPN and firewalling implementations in virtualized environments,” he adds, “For example Scott Lowe is a well respected author in virtualization circles who also happens to be on the FBI payroll, and who has also recently published several tutorials for the use of OpenBSD VMs in enterprise VMware vSphere deployments.”
De Raadt takes these allegations very seriously, and wants the code to be audited as soon as possible. Since the allegations date back ten years, the IPSEC stack has gone through several revisions since then, but De Raadt still wants it analysed.
“The mail came in privately from a person I have not talked to for nearly 10 years,” he details, “I refuse to become part of such a conspiracy, and will not be talking to Gregory Perry about this. Therefore I am making it public so that (a) those who use the code can audit it for these problems, (b) those that are angry at the story can take other actions, (c) if it is not true, those who are being accused can defend themselves.”
If this turns out to be true, you could wonder what else has been put in open source projects. Scary thought.
I wonder if openssh would also come into question.
Only if developers can be paid in money.
It’s real bad when a United States government agency can secretly bribe developers of an OS whose team took pride in the fact that their code didn’t have certain restrictions (i.e., regarding cryptography) that it would be forced to contain if it were released in the US. So much for the “it came from Canada, the US can’t touch it” claim; apparently it’s just a completely false sense of security.
It’s not just sad, but disturbing that this happened–to OpenBSD, of all the OSes. And even more so that this was planted in the OS ten f***ing years ago. Come on, really, the *other* developers never noticed this until an e-mail was sent to Theo just now? Now, I’m not slamming open source, so don’t take it that way–but isn’t open code supposed to prevent this kind of stuff? And such a security- and code-correctness-focused OS like OpenBSD didn’t catch it?
This is extremely disturbing. I’m a US citizen, and let me be the first to say f*** you, Government. And the same to all of the OpenBSD developers who decided to take the bribe money and secretly give the government extra power in a security-focused (or hell, ANY) OS.
And who knows what other OSes are affected, as the link says–considering it’s open source and possibly shared with other operating systems. Or *if* they are really affected–hopefully it’s just a bunch of bullshit.
EASY there, nothing is known. Let the code auditing begin.
True, that is scary.
But the other side of the coin is the NSA developing SELinux.
And who’s to say there aren’t any backdoors in that?
Exactly… if this turns out to be true, then SELinux will become just as questionable. I don’t give a shit which “branch” or “agency” of the US government it is: FBI, FDA, DEA, NSA, blah blah blah… whatever it is, it just destroyed the credibility of the ENTIRE United States government in my view. Not a good thing, and certainly not the first time: why the f*** is Stevia, the sweet-leaved and as far as we know quite harmless plant, only allowed to be sold as a “dietary supplement” unless one of its relatively untested extracts is used? You know, that one stevioside that Coke and those other companies are getting behind and making trademarks for, while forgetting about the rest of the plant…
And to be fair, not a hard thing to do, considering I’ve always questioned the government to begin with. I mean, seriously, what the f*** is up with these drug laws?
Yet tobacco–a prime killer in the US–is 100% legal as soon as you’re 18 (most people I know started far sooner), alcohol is legal when you’re 21 (yet another dangerous one, and most people I know–including myself–started at a younger age), yet marijuana, one of the safer ones, has been illegal for decades for no good reason, and Salvia divinorum is becoming illegal in many of the states without proper scientific testing (including, already, my own). Meanwhile, I can get a nice opioid high from the endorphins by simply running a mile or two or eating hot peppers, and my cats are allowed to get stoned off their ass on catnip. Seriously, what the f***? Good thing the government can’t prevent my brain from producing opioids when my body is under stress!
I hope this does turn out to be false, but if not, I at least hope for the most minimal effect possible (i.e., as few systems affected as possible). The thought of this, as a Linux user and not an OpenBSD user, is still disturbing. If true… what else could they have done? I thought I could get rid of all backdoors by the US government by not running a closed monopolistic OS (AKA Windows), and yet, it turns out that they may have snuck some backdoors into open source software as well, making anything connected to the Internet vulnerable to the US government.
I wanted to add this link about high-fructose corn syrup to my last post, but the time has expired:
http://www.youtube.com/watch?v=dBnniua6-oM
Alcohol without the buzz… every bit as physically dangerous, but 100% legal… even for kids. [Pepsi… Coke… Mountain Dew… RC… virtually every pop/soda out there, as long as it contains High Fructose Corn Syrup (and usually as the second ingredient on the list, beside carbonated water.)] The “made with natural sugar” types are relatively rare, and while still bad, their main side-effect is rotting teeth (bacteria love sugar).
And about the natural high from hot peppers (and running, and other physical stress):
http://en.wikipedia.org/wiki/Endorphins
Really, test these damn drugs before immediately making them illegal because OMG! THEY GET YOU HIGH! I got a nice buzz after eating tacos with 6 habanero-type peppers earlier; it made me feel good, and it didn’t kill me. I also felt really damn good after running the mile years ago in school for the EXACT SAME REASON. And I wouldn’t naturally have opioid (opiate) and THC (cannabis) receptors in my brain if nature forbade me from doing drugs.
You’re just lucky I wasn’t elected president of the Trilateral Commission. I was going to start adding corn syrup aerosols on top of wind turbines, and in the exhaust of cars, to ensure a more even distribution amongst the populace. Oh well, there’s always next year. Until then, enjoy the Kinect radiation and the broadband radon gas delivery system.
In all fairness, HFCS hasn’t had even a tenuous link to liver damage, while the negative effects of alcohol on the liver are well understood.
While there is some evidence that rat metabolisms may respond to high fructose corn syrup differently than regular, ordinary sucrose, a causal relationship between HFCS and obesity and other forms of poor health has yet to be demonstrated.
All I know is it makes me feel “bleeeeh” and break out.
So I avoid it like the plague.
oh no not the HFCS! That fructose/glucose mixture is so totally different from and worse than regular sugar (sucrose, which is broken down during digestion to, wait for it, fructose and glucose)
Yeah, they’re BOTH bad. Thanks for pointing that out, as if I didn’t already know. 😐
But seriously, the only thing HFCS has over regular sugar is the fact that it will not rot your teeth as much (bacteria can’t use it to form plaque like they do with natural sugar). I’ve been taking steps to sharply lower the amount of both sugar and fructose I take in (mostly by replacing pop with spring water), while adding hot peppers (which I always liked, but have become hooked on…). My primary carbonated beverage these days is beer (if you guessed that I was drinking when I posted my last post or two, you would be right).
I am happy with stevia in my tea though. I have a plant from this year I’m attempting to keep going into next year. Plus there is a place nearby that sells NuStevia products (as well as highly-concentrated versions and other brands) as a last resort.
And I know that there are probably gazillions of carcinogens out there, but it’s nice to know that I’ve mostly quit consuming one of the chemicals that fuels cancer cells.
But I’ll take rotten/pulled teeth over cancer of any type, any day.
This could very well be true, but remember that it was 10 years ago. Since then most projects have undergone humongous numbers of iterations, and source-code management systems have advanced a lot compared to how they were back then. For any serious security-oriented application you nowadays need to either hack your way through the source-code management system or bribe the one or few people who have write access to the code in the first place.
Basically, if there was something there before, it most likely doesn’t function anymore or has been stripped out during these years of advancement, and nowadays injecting something into the source-code repositories of any decently popular F/OSS software is nigh impossible without it being noticed.
Not all parts of the code undergo equally active development through the years, especially if they implement some complex specification that few people have a grasp of. If something works, don’t fix it.
“And who’s to say there aren’t any backdoors in that?”
The Red Hat developers who do almost all the maintenance on it now. They know the code base pretty well.
(Of course, you’d have to trust *them*. And I don’t know if they do say that. You’d have to ask them. Just they’re probably the people in a position to tell you.)
The good thing: even if it is vulnerable now, everybody who is competent has the freedom to review and patch the code, or to pay someone competent whom they trust enough.
With closed source, you never know whether there are backdoors (except through reverse engineering, which is a criminal act in some jurisdictions; thankfully not in Germany), nor are you able to patch them (except by cracking, which is also not legal everywhere).
The problem is that if this story is true, it has pretty much voided these assumptions.
Especially since OpenBSD is not just some random hack but a project run by a dedicated, respectable team built around a culture of security.
The ramifications for the image of OSS could potentially be disastrous.
What I mean by this: yes, it is highly disappointing. But at least there is the (remote?) possibility of fixing this (legally) now, whereas with closed systems, there is no such possibility, except maybe when you are in a position to pressure vendors.
Uz… did your FOIA request come back with documented evidence of the backdoor already? Perhaps you have completed a code audit and can point to where in the OpenBSD source tree the vuln was introduced?
Maybe we should treat this as rumor until something more substantial than an email from some random person supports it?
It’s even more sad that there are people who believe this nonsense.
If you are scared or concerned about this article and what may be in Open Source software, you must read this, and think about its ramifications:
http://cm.bell-labs.com/who/ken/trust.html
This technique, which was actually implemented in early Unix systems (which were distributed with all the source code), allows for back doors to be included in source-based projects, while being invisible in the source code itself!
That means, for example, that your gcc compiler may have this embedded in it; you can thoroughly review every line of that source code and find nothing, but if you recompile it from source (to make sure that nothing is hidden in the binary), the back door is re-inserted into the new executable.
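Roughly, the pattern looks like this. This is a toy Python sketch with invented names, purely to illustrate the shape of the trick; it is not from gcc, Unix, or any real toolchain:

    # Toy sketch of the Thompson "trusting trust" pattern. All names are
    # invented; in a real attack the payload below would be a quine, a
    # self-reproducing copy of the injection logic itself.

    LOGIN_BACKDOOR = '    if user == "backdoor": return True\n'
    COMPILER_PAYLOAD = "# ...self-replicating injection logic goes here...\n"

    def evil_compile(source: str) -> str:
        """Stand-in for cc; the 'object code' here is just transformed source."""
        if "def check_password(" in source:   # compiling the login program?
            # Splice a master credential into the authentication routine.
            source = source.replace(
                "def check_password(user, pw):\n",
                "def check_password(user, pw):\n" + LOGIN_BACKDOOR)
        if "def evil_compile(" in source:     # compiling the compiler itself?
            # Re-insert the injection logic, so a compiler rebuilt from
            # perfectly clean source still carries the hack.
            source = COMPILER_PAYLOAD + source
        return source

    # The login source is clean; reviewing it finds nothing suspicious.
    login_src = "def check_password(user, pw):\n    return pw == lookup(user)\n"
    print(evil_compile(login_src))  # the built result now contains the backdoor

The point is that both the login source and, eventually, the compiler source can be pristine, yet every binary that comes out is compromised.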
BTW, the original Unix hack was finally discovered by a student who was using dbx (which wasn’t part of the original Unix system — it was “brand new code”) to debug the login command, and discovered an odd “jump” in the program counter.
This technique may be made more complicated (and harder to find) by incorporating the same form of “hiding hack” in the available debuggers and assemblers as well.
Given today’s revelation, I would not be surprised to find this has already been done in common Open Source software.
Agreed. I first heard of the compiler-as-a-spyware trick about 6 years ago. Ultimately, even if your code is clean, the compiler you’re using to compile that code might not be. This is the TRUE threat.
And while I do love me some conspiracy theories, if I were the FBI/NSA/CIA, I’d do just that too. It just makes sense from an insecure agency’s point of view…
It is easy to prove that gcc does not have the Thompson hack. (Technically, the proof shows either that gcc doesn’t have the hack or else all C compilers have the identical hack).
gcc is built using a bootstrapping process. First, gcc is built from its source code (written in C) using whatever compiler you have. Then the compiler is built again, using itself. As a check, the compiler is built a third time with itself and the object code is compared between the stage 2 build and the stage 3 build. It must be byte-for-byte identical or the test fails.
Furthermore, you can show (and people have shown) that you get the identical results if you start from Sun’s compiler or various older versions of gcc, and likewise for a number of other compilers. If the Thompson hack were present, you would get different results if you build from source code with a compiler containing the hack, than if you don’t.
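The comparison step itself is trivial; conceptually it is nothing more than this sketch (the build paths are hypothetical; gcc’s bootstrap automates the equivalent check across all the object files):

    # Minimal sketch of the stage-2 vs stage-3 comparison described above.
    # The paths are hypothetical, for illustration only.
    import hashlib
    from pathlib import Path

    def digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    stage2 = Path("build/stage2/cc1")  # built by the stage-1 compiler
    stage3 = Path("build/stage3/cc1")  # built by the stage-2 compiler

    if digest(stage2) != digest(stage3):
        raise SystemExit("bootstrap comparison failed: stages differ")
    print("stage2 and stage3 are byte-for-byte identical")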
I’m sure the hack has evolved since 1984, the year the Thompson trick was written.
Oh no, all compilers are malicious!
What about hardware or firmware backdoors?
The US has already been afraid of Asian hardware for this reason, which could be a sign. I mean, if the US believes hardware from China could have backdoors, the reason is that the US has at least been thinking about it themselves, right?
Actually, the proof is not nearly that strong. Rather than requiring all C compilers to have it, only the set of C compilers on which this test was tried and passed must have it. Now, if a new C compiler, with a clean-room design, were written and the test passed, this would dramatically increase the confidence (it would still be imperfect, since there may be some structural indication that this is a C compiler, which an infected bootstrapping compiler could detect and use to propagate the hack). Also, libraries, assemblers, parser generators, etc. must be checked as well.
Given sufficient resources it could be made increasingly difficult to detect; however, the US Federal Government (FBI, CIA, NSA) would be one of the very few — if not the only — entities with the resources to do it; further, the cost of doing so would be far higher than the cost of detecting it.
Don’t forget the kernel. Compiler binaries could be clean on disk, but compromised when loaded into memory. The kernel binary is clean too, but it was compromised by the boot loader, which was in turn compromised by the BIOS. And that came about because the software controlling the manufacturing plant was compromised to embed the hack into every chip that came out.
Just how paranoid do you want to be? Because taking precautions is good, but it’s the first step on the road to madness.
I take my paranoia as far as considering moving as much as I can to UltraSparc machines.
The giant V880 (LOADED V880! BOW TO MY NERDNESS) sucks too much power to use regularly, sadly. Crazy fast if you’re doing parallel and/or memory-intensive stuff (shocking if you look at the bus and individual proc speeds).
Why? OpenBoot is allegedly fully open source and can, at least theoretically, be replaced with OpenFirmware, though much would need to be ported back or would be irrelevant.
It’s all interpreted code, and the bytecode can be decompiled for auditing after compilation, on a different machine than the one that created it.
You basically get the same code with no comments. Little changes when compiling to bytecode, since you’re working with the VM pretty directly with a Forth implementation.
You need to know your ASM and Forth but that’s why I’ve got all these books laying about/in the mail.
I’m not stopping there, but it’s not for security reasons.
I’ll probably have enough done in a year and a half to write an article, but yeah… not doing this alone, btw. A smart leader finds smarter people to whom they can delegate.
Ah, but do you trust the hardware OpenBoot / OpenFirmware is running on? Or could it be subverting things right down at the hardware level?
That’s open, too. Well, the next generation is, and you can verify the behaviour of chips.
The architecture might be open, but that doesn’t mean the physical chips actually match that architecture. And you can verify the behaviour of chips *if* you trust the tools you’re using to verify it.
Sooner or later, you have to either trust people, or go insane.
Interesting, do you have links on this? I want to learn more. Who showed this? Where can I read?
I don’t see the proof.
The example hack shows how the compiler injects malicious code into the “login” program. If gcc is not the “login” program, nothing would be detected.
A well known story, but not as easy to pull off as you might think. The original relies on special code in the compiler binary which a) recognises when it’s compiling itself (to re-inject the special code), and b) recognises when it’s compiling the login program (to implant the back-door).
Thing is, this *does* rely on that code being terribly clever. To work reliably, the compiler not only must recognise itself, but must also recognise future versions of itself. It needs to handle cross-compilation, e.g. an x86 compiler producing an x86_64 target. And it needs to recognise when it’s compiling other compilers, e.g. gcc compiling Clang/LLVM or vice versa. It also needs to contain no bugs, lest it attract attention when it goes wrong.
Now, how many people do you think there are who could write code that clever to start with, *and* do so in such a way that it would never be noticed by any of the other smart people?
More than zero.
David A. Wheeler shows how, if you have a second independent system available, this can be countered. The full details are in “Fully Countering Trusting Trust through Diverse Double-Compiling”
http://www.dwheeler.com/trusting-trust/dissertation/html/wheeler-tr…
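The core of Wheeler’s check can be sketched as follows. The compiler paths here are invented and it assumes deterministic compilation; see the dissertation for the precise protocol:

    # Sketch of diverse double-compiling (DDC). All paths are hypothetical.
    import hashlib
    import subprocess

    def build(compiler, source, out):
        # Compile `source` with `compiler` and return a digest of the result.
        subprocess.run([compiler, source, "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    SUSPECT = "./cc-under-test"    # binary whose provenance we doubt
    TRUSTED = "./cc-independent"   # independently produced, trusted compiler
    SRC     = "cc-under-test.c"    # the suspect compiler's published source

    # Stage 1: rebuild the suspect compiler's source with the trusted compiler.
    build(TRUSTED, SRC, "stage1-cc")
    # Stage 2: use that stage-1 result to rebuild the same source again.
    ddc_digest = build("./stage1-cc", SRC, "stage2-cc")
    # Let the suspect binary regenerate itself from its own source.
    self_digest = build(SUSPECT, SRC, "self-cc")

    # If the suspect binary truly corresponds to its published source, both
    # paths must produce identical output; a mismatch means the binary
    # contains something that is not in the source.
    print("match" if ddc_digest == self_digest else "MISMATCH: investigate")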
The level of real code review within OpenBSD can never match that of bigger and better supported projects, which is why this went undetected for as long as it did.
The more people and nations that have a lot riding on the security of an operating system, the less likely it is that it can be tampered with without detection.
One last thing: it is time for everyone to move to git and to signed commits, as done in the Linux kernel, so that there is complete traceability of any and all changes.
If you think this might be an issue with open source code, just stop and really think what it probably is like in most closed source software and operating systems.
I would not state that; I think the OpenBSD hackers are a lot more likely to find these kinds of things than any other open source project.
They really do take these things seriously. They look at correctness first (in the broadest sense of the word), most others are focused first on features.
Have you seen their code ?
Could be. Or it could be it went undetected because there’s nothing there to detect – remember, it’s still an unsubstantiated claim at this stage.
Or maybe it’s just not true but just some lame attempt at attention.
I’ve long suspected something similar has been implemented in Microsoft Windows. A back door that lets the government monitor users’ activities or communications. I always thought the entire anti-trust matter and planned split-up of Microsoft went away just a little too conveniently. As did Bill Gates’ quiet retirement. The U.S. government has not made aggressive moves to curtail Microsoft’s domination and questionable business practices, leading me to believe there’s an “arrangement” between it and the government intelligence departments. Just a theory.
I always thought it would be about money and power, with Microsoft having contributed campaign funding.
In this OSnews article from today, it was portrayed differently:
“At this point, the terrorist attack of 9/11 occurred. From her comments at the time, it seems clear that this event greatly influenced Judge Kollar-Kotelly’s view of this court case. Within weeks she ruled that the two parties must now settle, saying ‘The court cannot emphasize too strongly the importance of making these efforts to settle the case and resolve the parties’ differences in this time of rapid national change.'”
http://www.osnews.com/story/24131/How_Microsoft_Missed_The_Next_Big…
I’ve pretty much always taken that for granted, and it’s not exactly harder to believe nowadays with the US showing exactly how much pressure it is willing to exert on domestic companies when they feel it’s in their interest.
http://www.computerworld.com/s/article/9141105/NSA_helped_with_Wind…
Now if this is true about OpenBSD (let’s hope not), then this Gregory Perry must be one of them whistleblowers who according to the US is an ‘ENEMY OF THE ENTIRE WORLD’ or some such nonsense.
Someone implemented backdoors or key leaking mechanisms in other areas of OpenBSD. Maybe not only in OpenBSD.
If I had something to hide, I would be very careful these days. I’d pay attention to the algorithms and software used for encryption and the software used to communicate.
Code your OS from scratch (bootloader included), for custom hardware, and using a long-forgotten Commodore64 hex editor which never was commercially successful in the first place. There’s no other way.
You can have my tinfoil hat, too.
http://en.wikipedia.org/wiki/Amiga_Unix
Probably filled with all sorts of crap, but at least you’d be pretty unique! ;D
I wonder if nmap detects the OS ..
…for whom this comes as absolutely no surprise. I’m actually more surprised that this surprises people…
Is there also a reason for you not being surprised?
Well, it’s only to be expected now, isn’t it? There was the story about how printer manufacturers have been embedding secret tracking codes in printouts for years at the behest of US secret agencies. There was also the warning issued by a German government IT security department against using Google Chrome, because it wasn’t a good idea to trust all your data to one company. Also, we know how easy it is to get American telecom companies to cooperate with the government.
Theo De Raadt probably should have removed that guy’s phone numbers from the emails before publishing them online…
If I were the person involved, I wouldn’t have been terribly happy with my name mentioned in this context. The phone number, etc. is just fluff.
“I did not really intend for Theo to cross post that message to the rest of the Internet, but I stand by my original email message to him in those regards.”
The author of the mail
http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd
Someone ought to call Mulder and Scully to investigate that. Looks like those damn aliens are at it again!
Think KAME, think IPv6, think this issue is in the protocol and OpenBSD was just a single vector.
Oh man we are SCREWED.
This is where proactive security comes in; Theo has forwarded the message as a form of full disclosure of potential risks.
Damien Miller (OpenSSH/OpenBSD) sent some comments to the list.
http://marc.info/?l=openbsd-tech&m=129237675106730&w=2
So much for “many eyes make all bugs shallow”, if a few paid-off OSS devs can put backdoors into an OSS OS and they’re there for years with nobody noticing.
Well, it actually seems to hold true.
This isn’t a bug, it’s a _well-designed door_, not just your average hole that results from a bug.
It was probably carefully correct code, but what said code was doing was… evil.
Even then, generally when a closed project is opened, a _huge_ number of bugs are discovered and fixed.
Even though it will never be zero, less is still less.
There are more security bugs reported for Linux than Windows/OS X, but they’re generally minor, and are generally fixed more quickly.
A bug is when someone makes a mistake in code. A “well-designed door” is a full feature. If you can’t catch the get_pwnd_by_fbi() function, then you sure as hell aren’t going to catch a bug.
A bug is not purposely hidden, a backdoor is. But you are obviously just trolling what with the ‘get_pwnd_by_fbi()’.
And you obviously don’t know what you are talking about if you think that code with a logic issue is easier to detect than code that does something completely different than it should.
“On two concert I’m should collective photo, but such small, fat, bald-headed technologist be insane”.
I’m not going to reply to something vague and nonsensical with anything but the same.
Checked your website to see what your first language is.
It seems to be English, and you normally write it well.
As such, I see this as obfuscation of any possible point.
That, combined with an anti-scientology post (I’m no fan of scientology, just to be clear) and it’s fairly clear you know the Rules of the Internet, and probably don’t talk about /b/ or talk about /b/. (not a typo, to anyone who hasn’t read that list of insane rules)
It was a glib response to a glib answer, that’s it. My point (which I still stand by) is that it is much easier to find something that does something completely different from its intent than something that does what’s intended, but where the author didn’t think of one of the cases that ends up happening to real-world people. If the person who originally wrote it didn’t manage to think of it, it usually means you could read it and not catch it either.
On the other hand, when you read a method that says it is (for example) reading a file, and it instead uploads your passwd file to a remote website, it is hard to miss what it is doing. There are ways you can obfuscate it, but mostly that just makes things unreadable. Blobs of unreadable code should be targets for refactoring, which would expose the issue.
The only way they would even be comparable is if the back door wasn’t written as a back door, but more as a vulnerability (i.e. an intentional bug). In that case, I wouldn’t say it was easier or harder to find; it would be the same.
I think that is common sense, and anyone who knows enough to understand the difference between a bug and a feature would be able to see how that is obvious. I don’t think it disproves the whole “many eyes make for shallow bugs” thing, but I do think the assertion that bugs are easier to find than code that says it does one thing but actually does another is, to be frank, downright moronic.
So, I thought I was banging out a quick, sort of funny response to a dumb comment. If you want to talk about this I’m totally fine with that since you don’t seem to be an idiot (which is why I am taking the time to write this out) I wasn’t really trying to troll, but at the same time I wasn’t seriously trying to debate a point either.
Wow.
Even I, as a beginning programmer with a lot of theory, know that’s a load.
PS
You sprung my /b/tard trap. I never directly called you a troll, but you discerned the implication, though it’s multiple degrees separated from my statement.
Only one who’s well versed in the activities and codes of Anonymous could figure that out. I had (HAD. PAST TENSE) a friend who would go on about shit like chris-chan afk. “DUDE. I wish you hadn’t mentioned this horrible story the first time!! STOP GOING TO THAT SITE.” They didn’t…
Backpedaling is the oldest troll trick in the book, too.
You have ‘newfag’ written all over you.
EDIT. Oh, this one was a reply to something else.
But still, backpedaling.
Troll.
Seriously, how long have you been programming, and at what level? I programmed professionally for 8+ years (assembly, C, C++, Perl, Python). You can hide malicious code in logic issues as well as by using other techniques. For some examples (that I think you should be able to follow):
http://underhanded.xcott.com/?page_id=17
And this was in the crypto framework, which is quite advanced stuff and the mail mentioned key-leaking mechanisms. And no, it’s not going to be any function call in the middle of the code called ‘leak_keys()’, I thought you were just trolling but it seems you are most likely very incompetent.
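To show the kind of thing I mean, here’s an invented example in the spirit of that contest (Python rather than C, and nothing to do with any real OpenBSD code), where one sloppy-looking comparison doubles as a backdoor:

    # Invented example: a "truncation" bug that reads like sloppiness but
    # quietly gates authentication. Not from any real code base.
    import hashlib
    import hmac

    def verify_tag(key: bytes, msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, msg, hashlib.sha256).digest()
        # Comparing only the first len(tag) bytes looks like support for
        # truncated tags; but with an empty tag, expected[:0] == b"" always.
        return expected[:len(tag)] == tag

    print(verify_tag(b"secret", b"hello", b""))  # True -- no tag needed at all

In a review, that line reads as a feature for truncated MACs, not as a function called leak_keys().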
7 years, but only high level application languages (ruby/perl/lisp/a bit of smalltalk/c#/java), never done systems stuff.
I can sort of muddle through C++ (never really had the interest or job opportunities), but something like “if(x > rx * 3 && x <= (rx + rwidth) * 3 && y > ry && y < ry + rheight)” I wouldn’t consider to be that great in any language, and a prime candidate for refactoring. You may not catch it in a security audit, but you will if you are trying to maintain quality in your code base.
In any case, I will concede that a deliberate obfuscation like what you linked to is of equal difficulty to find as a bug in similarly gnarly code. What I don’t buy is that it is significantly harder to find, which was the implication of the person I was responding to.
Regarding the whole incompetence remark: we’re talking about skimming an article and banging something out while drinking my coffee, getting ready to start the day. I probably would have said the same thing as the previous paragraph, in a far less condescending way, if I had fully read the article and thought through what it was probably referring to. I would call that “introducing a vulnerability”; a back door to me sounds more like “I am expecting something in a specific format, but if I get it in another format, just return true”. That sort of misunderstanding would definitely be incompetence if I were in the security industry, but that is very, very far from what I do.
Time does not equal competence.
Also, I don’t believe you.
I disagree. A bug is not intentionally hard to find, although it certainly can still be extremely hard to find. A back door, on the other hand, is intentionally and specifically crafted so that it is hard to find; it can come in many guises, but obviously it will create a vulnerability which can be exploited. Unlike ‘innocent’ bugs, it has been hidden to the best of the programmer’s ability.
And if you know the code in question very well, and it is sufficiently advanced (crypto certainly fits that bill), then it’s kind of obvious that, except in extreme cases, a deliberately hidden vulnerability will be harder to find than a non-deliberate bug causing a vulnerability, since the programmer would know how to make it as hard as possible to discover.
Heh, well that explains a bit of your comment. Sadly backdoors are a far wider concept than a login/pass and a ‘Greetings professor Falken, how about a game of chess?’.
I’d wager that historically the most common ‘backdoors’ would be buffer overflows resulting in execution of malicious code.
Yes.
Obvious.
Obvious even to someone with _nothing but theory and application scripting_ experience.
Yes, I have a 433-line init.el (and a few other custom *.el files), a 182-line .stumpwmrc, and a 208-line .conkerorrc, but I’m no hacker.
This guy’s just a… ‘hack’, to use stage vernacular.
If you were to put a backdoor in some program you wouldn’t insert “backdoor code”, which could be easily spotted, but place a concealed bug that you can exploit later. A bug inside a piece of code that otherwise does exactly what it’s supposed to do.
As such, in the event it was found, it would be indistinguishable from any other bug, and because it’s deliberately concealed it’d be harder to find.
Which leads me to believe that maybe, just maybe, someone could actually have fixed that backdoor already (if it’s true that it was placed to begin with). If some dev had found it, he wouldn’t have gone “OMG A BACKDOOR!!!” but just fixed it the same as any other bug.
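For instance (a purely hypothetical Python sketch, not anything like actual OCF/IPSEC code), a key-leaking mechanism can hide inside something as mundane as IV generation:

    # Hypothetical illustration of a side-channel key leak. The IV should
    # be fully random; here one byte quietly encodes a key byte, masked by
    # XOR so the output still looks random to a casual reader.
    import os

    counter = 0

    def make_iv(key: bytes) -> bytes:
        global counter
        iv = bytearray(os.urandom(16))
        # Reads like harmless extra "mixing"; in fact anyone watching the
        # wire can compute iv[0] ^ iv[15] and recover one key byte per packet.
        iv[0] = key[counter % len(key)] ^ iv[15]
        counter += 1
        return bytes(iv)

A reviewer who wasn’t specifically hunting for this would likely file it as odd but harmless, and removing it would look like any other cleanup commit.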
Nope. phail.
Any logic I throw at you will be pointless if you can’t understand this.
A software defect (bug) is surely any behaviour of the software that is not in the specification. So a backdoor is a bug.
Inventing definitions doesn’t help your case.
What?? That is the definition of a software defect:
“A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways.”
http://en.wikipedia.org/wiki/Software_bug
If the software wasn’t specced to allow the FBI to have a back door, then it is a software defect, otherwise known as a bug. It is irrelevant whether the defect was intentional or not.
… do you actually know what you are talking about or do you just talk crap/troll on OSNews?
If this is true, it was indeed expected to have a back-door put in for the FBI, by those who put it there.
I can’t imagine anywhere in the spec it said,
“Must include FBI backdoor”
You’re just being contrary at this point.
You are the one that accused me of making up definitions, while talking quite a lot of rubbish. You started it, not I; I will just finish it.
Key word: ‘if’. None of this is actually proven. It’s an allegation.
Well, with open source you CAN audit, if you don’t then obviously it’s no safer than closed source. With closed source you don’t have that option at all.
To argue the other side; With closed source, a company has financial incentive to audit their code, since they can be sued if something goes wrong. In open source, nobody has that incentive.
Bullshit, and nice trolling btw.
Companies have little financial incentive to audit their code, not even when explicitly paid for it. They will audit the code exactly as little as they can get away with – and no more. There’s a reason the most insecure software packages are proprietary packages. Because they cannot be effectively audited.
FLOSS projects have an incentive that no proprietary project will ever have: street cred.
How about being a little less hostile?
Many companies do as little as possible but there are also those that do an average job and those that do an excellent job. Blanket statement FAIL.
Security Companies DO have financial incentives to audit their code as it would be highly embarrassing and financially damaging if things like this were to be found.
The “real” difference with closed source is the number and variety of people that can look at the code, increasing coverage against poor coding or just plain human error (in the code and in checking the code).
However, there is no real statistical way to accurately quantify security verification. Are 3 less intelligent/fastidious code checkers in an OSS project better than 1 very fastidious/intelligent code checker?
The claim that OSS is more secure is still only a (probable) hypothesis, NOT a 100% proven theory.
I would say that it is likely that the low-hanging security bugs are more likely to be caught in OSS than in closed source, but the really tricky stuff in critical software is probably a much more level playing field.
There is zero hostility on my part; I’m merely stating a fact. google_ninja is trolling – with support from MollyC. No surprise though. They are as mad as a certain French windmill.
EDIT: Corporate Capitalism results in companies only having one incentive: Maximum profit. This means they will do as little as necessary and cannot be relied on for anything remotely connected to infrastructure and security. They are good at making refrigerators, but that’s it really. Anything more than that requires laws that diminishes profits unless the companies take on certain tasks. Or put differently: Companies have zero incentive to audit their code.
I am literally in the middle of exactly that kind of audit right now.
Our customers care about that kind of thing, they care about our test coverage, and they care about our engineering practices. They are serious companies that are literally putting their future in the hands of our software, and our answers to those kinds of questions can be the difference between making a sale, and losing it.
The reason that I said “to argue the other side” is because I don’t really agree with the original post, exactly because of the street cred thing. It is rare to have security experts reviewing open source code to prove they are badasses publicly, but at the same time it’s rare for a company to have the engineering practices we do, and I don’t think one really trumps the other.
o_O?
So open source hackers don’t get paid to write code? That would explain why Linus Torvalds is so poor, and Red Hat went out of business.
Oh, wait.
Did you actually read what he wrote instead of imagining what he didn’t write?
He did not say that open source developers don’t get paid. Just that Closed source companies have incentives to improve their code.
Red Hat has incentives to make sure that the code they ship is good. The difference is that the burden of maintaining and fixing the code isn’t solely Red Hat’s responsibility.
A closed source company has sole responsibility for its code; theoretically it should be more paranoid, and therefore pay people to ship and check good software.
Where Red Hat has to build trust and in turn trust the community for the software it supports, the closed source company has to put developers/money on the code to fix and maintain it.
Both can be better or worse. In OSS, less popular software has fewer eyeballs checking the source; in closed source, a company has to supply competent people because it can’t match the diversity and volume of eyeballs that an OSS project has.
From the original post: “In open source, nobody has that incentive.”
So, back atcha.
I suspect that they meant there is no external incentive for anyone to ensure code quality. But in OSS there are people self-incentivised to fix code (people whom OSS companies often hire to ensure code quality).
I agree that it does read differently from what I put above. But the truth is pretty much self-evident, as there is some high-quality, secure OSS software around.
Security audits are boring things. Many aspects of writing code are pure fun; that is not one of them. I have added features I thought would be cool to open source projects many times before, and I have fixed bugs I have run into many times before, but I have never done an audit of a codebase.
On the flip side, that’s what I have been doing at work for the last few weeks. Boring as hell, and I wouldn’t do it if I wasn’t getting paid.
Have you ever read Phrack? They’ve been publishing articles on security of open source software since the late 80’s. There are entire crowds of people looking for vulnerabilities in the Linux kernel, for different reasons.
I worked at a software company for some time. Its product had many more features than the open source alternatives. When it came to security, everyone said it was important and security bugs were the most important, but (!) nobody ever did a security audit. As soon as a new feature worked, it would never be looked at again until obvious behavioral bugs were detected, or new related features were added in that area.
The code was quite a bit of a mess: there were a few people who knew, more or less vaguely, what everything was supposed to do; parts of the program were written by people who had left; and there were many parts that many people wanted to rewrite but never, ever had the time to. Only new features would go in.
I also took a look at the source code of an open source alternative. It was like drinking water from a mountain spring after running a marathon. Admittedly, the code was written (mainly) by one person. But it was so clean and so consistent.
One more thing: when you use C++ in a corporation, it’s a disaster in my opinion. Some people will write plain C. Some people will use templates (yuck). There are so many ways to do things in that language, and just as many ways people use it. I’ve never seen a program written in C++ that had pretty, consistent source code. I guess this is because C really forces you to be organized. But I digress…
In open source, it’s not only the author(s) of the software who can find security holes (and they have the time to do it properly, because they’re not constrained, especially when they rewrite the software; they have time to rewrite it because they don’t have a deadline to push a new feature that is needed for more revenue). Also, the people who incorporate your software into some other software can find security holes. Also, the distributors can find security holes. Also, the people who use the software can look for security holes (many times large companies who have at least a few qualified people).
Even I, while working at that company, found bugs in the alternative open source software and filed them (admittedly not security related).
In my opinion, it’s not the “many eyes” part that’s important. It’s being able to write just what’s fun for you, look back at it and be proud of how beautiful it is, how consistent, rewrite any parts (probably several times) until it’s consistent and(!) until you know the whole program really well. You’re not constrained to write so many features that the software becomes difficult to maintain.
When someone wants more from your software they can add to it and, in turn, be proud of their part.
It’s only people who program like this who are really good programmers, and, yes, there are a few of those in big companies. I regret having met really smart programmers where I worked who never had the time to do things their way, people who could have become great programmers but instead got headaches trying to maintain something that should have been refactored a long time ago.
The amount of people paid to work on open source code is absolutely minuscule compared to the amount of open source code that exists.
I don’t know what to say about that… Do you have any figures?
I seem to remember statistics about the Linux kernel and Open Office saying the opposite.
Yeah, massive number of commits, work moves much faster…
If nothing else, the development model is the main factor in FOSS’s rep for security and bug fixes, not the license.
Really? When was the last time that happened?
Has any software company ever been sued because of a bug that compromised their customers’ security?
Heh.
Precisely.
http://www.junauza.com/2010/12/top-50-programming-quotes-of-all-tim…
“If McDonalds were run like a software company, one out of every hundred Big Macs would give you food poisoning, and the response would be, ‘We’re sorry, here’s a coupon for two more.’ “
– Mark Minasi
Posted here on the 14th.
I’ve been working on an audit for the purpose of certification for the last two weeks now. It is proprietary software, and we will need to re-certify regularly.
On the flip side, I have never audited any open source project for security issues.
so yeah, it happens.
What does auditing for a certification have to do with being sued for a bug?
My guess is this is all copypasta.
Nothing mythical about that. It’s been proven repeatedly – in software projects as in many other projects. The strength of openness has proved itself repeatedly. Of course, you of all people could not be expected to react less emotionally than you do right now. Given your professional background you had to jump on this one.
Anyway, all bugs are shallow, but when the eyes looking are working for companies creating proprietary solutions, and these companies turns out to be FBI-controlled, the “bugs” will not be found but rather introduced. It only goes to prove that one should embrace the openness even further and be less happy about capitalist meddling with open source.
EDIT: Actually this case proves the point of “many eyeballs making bugs shallow”. We will never hear anything detailed about the backdoors we know are in Windows. OTOH, they were there 10 years ago when we (in Europe) talked a lot about the backdoors introduced by the USA, and they are here today. Guaranteed.
Something like this would be considered classified. Classified material carries a lifelong commitment to protection, not an NDA that expires after 10 years.
If this were a real FBI activity, then I doubt this guy would really have sent the OpenBSD guys an email from himself.
Occam’s razor.
He read his contract, not espionage law.
Time for BSDleaks.org?
…an update that included a back-door in a FOSS project, without anyone noticing for quite a long time either? A BitTorrent client or something of the sort.
Kochise
If you are talking about ProFTPD, then no, it did not take that long:
“The modification was carried out on the 28th November and discovered and reverted on 1st December.”
http://www.h-online.com/open/news/item/Back-door-in-ProFTPD-FTP-ser…
I know nothing about Gregory Perry other than that he worked on OpenBSD in the past but, before we assume the allegations are true, would he have any motive for spreading FUD about OSS?
Just asking, not disparaging…
While I hate conspiracy theories, all of this may be true. I also hate ANY government-approved stuff, like FIPS, etc. But the moral is, as he said: you can’t trust anyone.
Why would anyone wait 10 years? No NDA can be used to suppress a crime, and failing to report a crime is in itself a crime.
It is not necessarily a crime. A scandal yes, but not necessarily a crime. I’m surprised that he can talk about it at all after as little as 10 years.
Some people do not want to end in jail.
It’s not a crime.
It’s just immoral.
It seems to me that the guy who contacted Theo about this is trying to make a joke out of OpenBSD. There have been no cases with evidence that would have come from decrypted VPN traffic. This whole thing is FUD.
It’s a fact that the US military pays hackers to code 0-days and uses them in whatever black ops, but exploiting a globally used encryption protocol is just not law enforcement’s style. He would have made the story more realistic by saying ‘NSA’ instead of ‘FBI’…
The email hasn’t even been confirmed yet. It is really easy to spoof an email.
http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd
The email is real.
The ‘covert backdoors in firmware-based systems’ he mentioned working on in his reply is a nasty proposition. All these firmware blobs can’t be audited in any way.
OK, now the next question is: is the guy nuts, or is it for real? 🙂
The OCF was a target for side channel key leaking mechanisms, as well as pf (the stateful inspection packet filter), in addition to the gigabit Ethernet driver stack for the OpenBSD operating system;
How is that possible? According to his statement, this took place circa 1999, before he left in 2000.
pf didn’t exist before June 2001; the IPFilter license dispute was in May 2001.
…what?
I’m not saying you’re wrong, or trying to be insulting (god, I hate that I have to qualify everything I say, but everyone seems so reactionary and prone to inferring..), but I honestly cannot make heads nor tails of anything in this post.
Please write this in your native tongue and have someone with a bit more experience in English translate it for you.
As it stands, your concerted effort at using this frustrating language (English is so complicated…) reminds me of a letter Henry Rollins once received from a student in the Czech Republic, which contained the sentence “On two concert I’m should collective photo, but such small, fat, bald-headed technologist be insane”.
A FreeBSD developer has started a bounty, and other people have decided to match it.
http://maycontaintracesofbolts.blogspot.com/2010/12/openbsd-ipsec-b…
Nobody noticed all the extra open ports or traffic leaking out of your VPN and diverted to fbi.gov because your router, your modem, your IDS, your firewall, your friends at the ISP and the laws of physics have all been hacked to not report it.
Or it’s not real. Judging by the amount of crap that gets past the FBI, I’m guessing there is no 3l33t back door. Either that or the FBI is excellent at faking failure.
Edit: (Ok nevermind, just realized what leaking a key does, Still think it’s BS.)
So far it’s just a rumour. Toss out some FOIA requests and have the devs review the source code in question.
If proven by either, this could be huge news… until then it’s just a rumour.
This is essentially the problem any company (or end user!) should have with open source – unless someone is charged with auditing the code coming in, who knows what’s been installed / integrated with applications. Of course, the joke then is that who knows what someone could put in internally (plenty of Chinese nationals working in corporate IT departments, eh? what if you had a disgruntled developer at Intuit?), or what’s in commercial software either. Subvert the code. Subvert the compiler. Subvert the OS. The hardware…. the hardware (chip) design software…
It’s all very funny when reading about security anyway – ‘we won’t let you plug in a USB stick’. OK, that’s 1 issue, but damn if it doesn’t go way way deeper than that.
In terms of OpenBSD specifically, this is currently nothing more than a rumour. If something is found in the source code then we can draw conclusions, along with seeing how fast the issue is fixed.
In general terms, sure… anyone can slip code into the distribution repositories of a poorly managed distribution. Wake me when you manage to get arbitrary code included in Debian Stable.
In terms of insider threat, sure… one of your developers may download and compile a package. Maybe your admin installed a backdoor of some sort. Those are both company-specific HR problems though. Why was someone so disgruntled without being recognized? Why were they able to install arbitrary software across the network? Why did security monitoring systems not recognize the issue? How did the malicious insider get past the system’s package validation mechanism, past tripwire, past regular debsum checks?
Hiding the source code doesn’t make anything more secure, nor does revealing the source code make it less secure. If your security system relies on what is hidden, then you’ve failed. If it’s not secure when all details of the implemented security mechanisms are known except the user-specific key/cert/password, then you’ve failed. Obscurity has no place in security design… include it as icing on the top, sure… rely on it as the core security mechanism… Fk no. Consider cryptographic research, where a new algorithm remains untrusted until proven through open peer review and analysis. AES is not currently considered secure and trusted because the math behind it is unknown; it is considered so because the math behind it is known and has not yet shown weakness under the scrutiny of the brightest cryptographers and criminals in the world.
If you’re nervous about reputable major open source projects having intentional vulnerabilities written in… you should be terrified of closed source’s reliance on obscurity and on whatever minimal code review doesn’t conflict with time and monetary budgets. Heck, with a reputable open source project, you can track a bug from first report, through confirmation and patch creation, to updated version release; you can clearly watch and measure the time between discovery and the “fix”. You’ll actually get bug reports of “in house” discovered bugs. Show me the closed source software producers who voluntarily report internally discovered bugs rather than fixing them quietly under some illusion that it maintains PR cred.
You’re absolutely right about USB though… security goes much deeper than allowing or disallowing removable media, and “security through visibility” is a very big part of that, as are layering, monitoring, least privilege and so on.
One thing I haven’t seen discussed that’s kind of scary is the possible fallout. If it becomes reasonable to believe that US governmental agencies are actively trying to get coders to surreptitiously insert back doors into products, then that’ll basically destroy the market for US-made software, as well as ruin the careers of anyone who might be suspected of having relationships with the US government.
Fun times.