Alan Cox, one of the leading Linux kernel developers, has told a House of Lords hearing that neither open- nor closed-source developers should be liable for the security of the code they write. Cox, who is permanently employed at Red Hat, told the Lords Science and Technology Committee inquiry into personal internet security that both open- and closed-source software developers, including Microsoft, have an ethical duty to make their code as secure as possible. “Microsoft people have a moral duty in making sure their operating system is fit-for-purpose,” Cox said on Wednesday.
I’ll have to agree that general liability for the security of code is wrong for a couple of reasons:
1. There is no absolute security; developers are just people, and people make mistakes.
2. It would pretty much kill off every little useful hobbyist application, every small open or closed source project, almost every small to medium software company, because those are exactly the ones who cannot afford extensive security audits, leaving only a handful of big players like Microsoft, Sun and IBM.
3. It would generally stifle (or at least severely slow down) innovation and progress, as developers would hesitate to introduce new features and explore novel methods of computing for fear of introducing new holes.
I do support, though, the idea of holding for-profit companies liable for negligence. There’s a security problem unpatched for six months? Punish them. Bad software was knowingly released to the public? Punish them.
For the most part I'd agree that they should be liable, to a point. Of course, my 'to a point' would be determined by how large the software is and how many holes it has.
For example, an operating system developer should be held more liable for security holes/bugs than, say, a person who is creating a notepad clone.
At least with smaller software, you can remove it if there is a security hole that isn't patched. The operating system, on the other hand, is much more difficult to remove, because of the applications that depend on it, and so on.
I don't think that's legally feasible. It'd be twisted either to be too strict or to be so lenient that it was meaningless.
I don't think regulators could keep up with software. I'd rather see consumers holding their vendors responsible and leaving them when they have major troubles and the vendor isn't there to respond quickly enough.
Software is just too complex.
/*1. There is no absolute security; developers are just people, and people make mistakes.*/
they can use that as an excuse to write sloppy code and just release the program if it 'just works', without really auditing the code for the bugs that lead to security holes just waiting to be exploited.
3. It would generally stifle (or at least severely slow down) innovation and progress, as developers would hesitate to introduce new features and explore novel methods of computing for fear of introducing new holes.
I don't think that would be a bad thing. Considering that most software has more features than people can even begin to use (ignoring the smallest programs), adding more and more features with every release does not make much sense in the long run. Of course it's sexier to announce "we integrated foobar and quuxwiz in this release" than "there are no new features, but we plugged seven security holes that no one had exploited yet".
With every new feature, you are making a more complex program. With every new feature, you will have more so-called feature interactions, most of which you are not aware of. With every new feature, you will have a higher chance of errors (bugs) in the program.
If it were possible to trade 50% of the features for even 20% more correctness (and correctness usually implies more security, as there will be fewer bugs), I would gladly make the trade. Unfortunately this is unlikely to happen until the general populace stops demanding more features with every release.
Most of what AC and the other person interviewed say makes sense; however, no doubt it will be taken out of context. For example, one point made was that when a burglar breaks into your house, your efforts are and should be concentrated on catching and suing the burglar; however, I believe it should still be possible to sue the lockmaker if it were shown that they knowingly and deliberately created locks which did not lock. Software developers (or at least the unethical ones, who shall for now remain nameless) would do well not to take these comments out of context and assume that public opinion now gives them carte blanche to turn out software ripe to be exposed as riddled with holes, in which no attempt whatsoever has been made to render it secure.
But it’s possible to smash a perfectly working lock with a sledgehammer. Would you sue the lockmaker in that case?
If I use a set of lockpicks to open your front door, would you sue Yale? Lock makers, like software developers, produce products which under normal circumstances are reasonably secure when used for their intended purpose. Additional protection and security costs more money; you wouldn't expect to get Fort Knox levels of security from a £10 five pin lock, and you shouldn't expect to get high levels of security from consumer (or even "enterprise") software.
Additional protection and security costs more money; you wouldn’t expect to get Fort Knox levels of security from a £10 five pin lock, and you shouldn’t expect to get high levels of security from consumer (or even “enterprise”) software.
Well, as to enterprise software, of course you should. As for consumer software, if the vendor keeps bleating on about how secure their consumer software is, you should get decent security with that too.
I think closed source software should be liable. They are telling me that even though I bought the software and legally own it, that I can't see what's inside so that I (or anyone) can fix it if there's a problem? OK, but then the company now has the responsibility to fix all the problems. If you take away my rights, I should at least get something back.
I kind of have to agree with closed source software being held accountable, especially for known vulnerabilities. I also think we should hold accountable the system admins who don't patch their systems. Code Red was a known bug with a patch out for quite a while before it hit. There was no reason for it to do the damage it did.
just as a side note: you don’t own it, you own a licence to use it.
I think this is something that should be decided by the market. If you make crappy software with security holes then people won’t buy your products.
Proper computer security is based on assuming that all your software is insecure. How are you going to divide blame between the developers and those who deploy it?
I think this is something that should be decided by the market. If you make crappy software with security holes then people won’t buy your products.
And I think you're too optimistic and simplistic. The market has proven to behave otherwise several times, for reasons beyond the quality of the software.
If you make crappy software with security holes then people won’t buy your products.
Naah, loads of people buy XP ;o)
I think closed source software should be liable. They are telling me that even though I bought the software and legally own it, that I can’t see what’s inside so that I (or anyone) can fix it if there’s a problem?
Following the same logic, Coca-Cola should give you the recipe for Coke, ’cause you paid for it?
No, you didn't pay to get the recipe for Coke, nor did you pay to get the "blueprint" (source code) for that application.
If you do understand x86 machine code, then you can look into the code, can't you?
And — you don’t have to buy that software. It’s that simple.
> Following the same logic, Coca-Cola should give you the
> recipe for Coke, ’cause you paid for it?
No. But since they sell it as a beverage, and don’t give me the recipe, they should be held liable if, for example, they fill the can with floor cleaner.
Without the recipe (-> source code), I cannot check what’s inside, at least not without complex chemical analysis of the mixture (-> reverse engineering of the code). Nevertheless, they sell it as a beverage (-> working program). So they should be liable if it’s not a beverage (-> the program doesn’t work). Imagine a market where Coke and Pepsi and others can fill whatever poisonous stuff they want into their cans (-> write hopelessly buggy software) just to make it cheaper, and then write “don’t drink this” onto the cans (-> “don’t use this program” in the EULA) to be on the safe side, but you use it anyway because it’s common practice and there are no alternatives.
But to be honest, legal measures aren’t really what we need. We need a market where working software is common practice.
No highly competitive industry practices what you suggest.
For example, when you buy AMD's or Intel's CPU you will not get detailed blueprints, instructions, etc. for the part. When you buy a car, you will not get detailed instructions on how to build one, and so on.
I agree that it is strange that the whole software industry gets away with selling insufficiently tested products, but then, since I am a developer myself, I completely understand them.
If you want a product that is 100% bug free, fine, but then instead of $200 for that application, you'll have to pay $100,000. Deal?
One more thing: as someone already pointed out, in most cases you don't buy the software, you just rent it. When you do buy it, then you usually get the source code.
And, as I said, you always have a choice of not buying the software that you can’t get source code for. So, what is the problem?
I agree, with some caveats. Here’s the problem. How do you distinguish between “a security issue”, “a bug”, “a feature” and “an inconvenience that’s acceptable for a product of that price point”? You have to treat them separately, otherwise a closed source vendor could be on the hook for turning a simple drawing package into a 3D CAD studio.
Just to point out that it's not clear what's a security issue and what's a feature, consider the case of DokuWiki. DokuWiki has a flag that lets you enable embedded HTML and PHP to be executed. If you turn them on, you've got the potential for a cross-site scripting vulnerability. Should DokuWiki (if it were closed source) be forced to fix the "HTML" and "PHP" modes so they would prevent cross-site scripting (i.e. implement some sort of sandbox or filtering), or would you call this addition a "nice to have" feature, since you should only enable these options if you're in a secure environment with trusted users?
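Purely as an illustration of that escape-versus-allow trade-off (not DokuWiki's actual code; the flag name and function here are made up), a minimal Python sketch might look like this:

    import html
    import re

    ALLOW_RAW_HTML = False  # hypothetical stand-in for the wiki's embed-HTML flag

    SCRIPT_RE = re.compile(r"<\s*script\b.*?>.*?<\s*/\s*script\s*>",
                           re.IGNORECASE | re.DOTALL)

    def render_user_content(text):
        if not ALLOW_RAW_HTML:
            # Safe mode: escape everything, so <script> becomes harmless text.
            return html.escape(text)
        # "Feature" mode: strip the most obvious script tags, but this is
        # nowhere near a real sandbox -- event handlers, javascript: URLs
        # and the like would still get through.
        return SCRIPT_RE.sub("", text)

    print(render_user_content('<script>alert("xss")</script>Hello'))

In safe mode the attacker's markup is rendered as inert text; in the permissive mode you are back to relying on the environment being trusted, which is exactly the question of whether that mode is a vulnerability or a feature.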
Another issue is how long the vendor should be on the hook. Should the vendor of a 10 dollar DOS product be on the hook if a vulnerability is found 30 years after you purchased it?
There have to be some boundaries on both liability and time expiration, but unfortunately, I don't think a good "legally binding" definition can be made.
That's one of the main reasons some of us (me, for example) use open source: you don't have to worry about vendor lock-in.
I think Alan C is correct: you can't blame the software developer for security holes, you can't take them to court and get $$ from them if you get a virus etc., it's just not right. You can, however, stop using a product that consistently has holes; vote with your $$, people, if they have too many. In the case of OSS, just live with it, it's free, dude; someone will be along soon enough to plug that hole, and if OSS also has too many holes that don't get plugged quickly enough, then stop using it. Simple.
I agree with another poster, however: if a closed source company knows there are holes and won't/can't patch them in a certain (quick) time frame, then it's not too much to expect them to accept some kind of liability, especially if you lose $$ because of it.
At first I thought, "Hey, it is pretty much the same as an engineer designing a building that falls apart later," but then it came to my mind that it IS NOT the same.
Can the engineer be held responsible if, let's take a hypothetical situation here, an airplane is flown into a building he designed and it suddenly falls apart?
I think that, as with any engineering endeavor, these problems need to be investigated and a decision should be made on a per-case basis.
And it should be possible to decide such a thing in advance, before acquiring the piece of software (I dislike EULAs as much as the next guy; I'm talking about contracts made prior to the act of buying the software).
Contracts exist everywhere, except in the “boxed software” model where most big boys play these days, although I don’t know how this kind of thing works when negotiating with large enterprises.
But in custom software, at least in the projects I've worked on for the last 4 years or so, this kind of liability is decided by contract: who is responsible for what during the lifetime of the software, and for how long that arrangement holds before it expires.
I know this is all too complex to be put in a law system but complex problems need complex solutions.
I don’t think open source developers should be held responsible for the code they make available, but open source software distributions with commercial support should be liable for the damage that may come from the use of their commercialized systems as long as all this is contemplated in a proper contract.
It is not just because I put something on the internet and YOU decided to use it that I should be held responsible for your losses.
I mean, people say the US legal system is completely twisted (I don't live there, know not much more than what I see in the movies, and don't care much), so I don't know if the answer to this can possibly be yes, but: it is like holding a person responsible if he leaves a broken bottle in his own backyard and someone comes in uninvited and cuts their feet on the glass.
Post too long, sorry. Just got carried away. Bye.
I don't see why OSS devs shouldn't be held accountable while closed source devs should. OSS advocates are lobbying governments to move entirely to OSS. On Slashdot today there's an article where a second Indian state is mandating OSS for all government functions. You mean to tell me that after lobbying governments to use OSS, OSS should be exempt from laws regarding software security? And there are companies like IBM and Red Hat that are *selling* solutions that use OSS. They are no different from someone selling closed source solutions wrt whether they should be liable. IBM and Red Hat can certainly afford it.
In fact, OSS advocates do themselves a disservice if they advocate that OSS be exempt from any software security liability laws. *If* laws were passed that held closed source software liable for security but not open source, then businesses and governments would flock to closed source. Because if something goes wrong, they can sue under the hypothetical liability law. And second, they will believe (justifiably) that closed source developers will be much more thorough about security than OSS devs, because it affects their bottom line. So continue advocating that closed source be liable but not open source if you want to kill open source off when it comes to serious enterprise solutions.
Frankly, I don’t think any software should be held liable for security holes that require a “bad guy” to explicitly take steps to exploit. That would be like holding a tire manufacturer liable when a vandal slashes your tires.
Also, such software security liability laws would do nothing but give profits to the insurance business. Software companies will just buy liability insurance policies and be done with it.
I actually don’t believe that OSS advocates should lobby govts. People will simply say that the govt adopted OSS not because it was any good or cost effective but because there’s a political agenda involved. I think OSS can win on the merits and should only win on the merits and not ideology.
Ironic you mentioned tires. Remember Firestone?
Now, think of it this way: if you hired a security guard and the deal was that he's the only one who can guard the store, and he fell asleep while your store was broken into, sure, you'd be going after the crooks. But you'd also fire the security guard, and really… he owes you for not doing his job.
If closed source software is liable, it will be much more expensive for those software companies. Right now they can be asleep at the wheel and get away with it, and yet there's nothing you can do, because only they can secure the software.
//I actually don’t believe that OSS advocates should lobby govts. //
Whereas I actually don’t believe that closed-source advocates should lobby govts.
How about that! We cancel each other.
“Ironic you mentioned tires. Remember Firestone? “
——————————-
You might want to re-read my comment, which was “I don’t think any software should be held liable for security holes that require a “bad guy” to explicitly take steps to exploit. That would be like holding a tire manufacturer liable when a vandal slashes your tires.”
The Firestone case was one where tires were falling apart under certain driving conditions on Ford Explorers. It was not one where Firestone was held liable because a “bad guy” was slashing the tires.
Applying it to software, if there’s some bug that’s causing data loss/corruption on its own, then maybe one can sue for that (although nearly all software comes with disclaimers in the EULAs). But if there’s some security hole that would only cause problems if a “bad guy” decided to exploit it, then no. For example, let’s say an email client allowed a user to receive executable program attachments and allowed the user to click such an attachment to run the program. This is a security hole that can be exploited. If a “bad guy” sends a user a malware executable and the user runs it, I don’t think the developer of the email client should be liable. Because the security hole isn’t a “bug” (i.e. it’s not a tire falling apart under normal use), it’s a feature that an unscrupulous person misused (a guy decided to slash the tire).
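To make that email-client example concrete (a hypothetical sketch, not any real mail client's code; the function and the extension list are made up), the difference between the "feature" and a mitigation could be as small as this, in Python:

    import os

    # Hypothetical policy for an imaginary mail client: the "feature" is
    # letting the user run an attachment with one click; the mitigation is
    # refusing to do that for obviously executable file types.
    BLOCKED_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".pif", ".com", ".vbs", ".js"}

    def may_open_directly(filename):
        _, ext = os.path.splitext(filename.lower())
        return ext not in BLOCKED_EXTENSIONS

    for name in ("report.pdf", "funny.scr", "invoice.exe"):
        print(name, "->", "open" if may_open_directly(name) else "save only and warn")

Whether shipping without a check like that is negligence or just a convenient feature misused by a "bad guy" is precisely the liability question.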
As for your sleeping security guard, it doesn't apply, since software comes not with guarantees of security but with the opposite: disclaimers regarding any functionality at all.
“If closed source software is liable, it will be much more expensive for those software companies. Right now they can be asleep at the wheel and get away with it, and yet there's nothing you can do, because only they can secure the software.”
So you continue to suggest that closed source be held to a higher standard. OSS is supposed to be more secure by default (“a million eyes”, and all that), yet the advocates of such software are afraid to be held to the same standard as closed source? Why should IBM, a company that has billions of dollars, not be held to just as high a standard for their open source solutions as some small time closed-source dev?
It would seem that your agenda is to put shackles on closed source (to drive up the expense, according to what you wrote) to better compete with it.
//I don’t see why OSS devs shouldn’t be held accountable while closed source devs should.//
//In fact, OSS advocates do themselves a disservice if they advocate that OSS be exempt from any software security liability laws.//
Please refer to the original text:
“Alan Cox, one of the leading Linux kernel developers, has told a House of Lords hearing that neither open- nor closed-source developers should be liable for the security of the code they write.”
//Frankly, I don’t think any software should be held liable for security holes that require a “bad guy” to explicitly take steps to exploit.//
… which is precisely what Alan Cox said originally.
Please read what you are commenting about before you post.
If you want to argue a point, please make sure you argue against what was actually said, and refrain from arguing against what you THOUGHT you read.
Hey hey, I guess it's all too easy to read the post's subject and see that he's replying to the "closed source devs can't go off scot-free" line, or something along those lines.
This was a thread where some people were saying exactly what the parent poster was disagreeing with.
Relax a bit, dude
You're missing one key distinction: commercial versus non-commercial. It's perfectly reasonable to go into a legal contract with a commercial vendor like Red Hat or Canonical under which they handle security and other problems and are liable. It's not reasonable to expect that of non-commercial entities that offered you their hard work, which you used *at your own* discretion.
You're also missing another distinction. If a commercial vendor sells you the source and allows you to make any change you wish, or an open source project *gives* you co-ownership of the project (that's what open source is), all liability is immediately transferred to you, precisely because you are an owner of the code and can fix the problem yourself or hire an independent consultant to fix it for you.
Holding software companies liable over security is not going to move the industry anywhere that is worthwhile.
It will force software manufacturers to leave out features and limit development APIs, and will result in software that is rigid and not extensible.
The cost will also go up exponentially as the lawsuits come in with the tide.
If you want software companies to be liable for security then be ready to live with software that you cannot change or touch, its going to be a black box and you will NOT have control over it.
Not to mention the glacial pace at which it would be developed. Maybe we're moving too fast today, but I don't think anyone here would like it if basic GUIs were still considered high technology today.
I don't believe either that you should be able to sue if there's a security issue in some software. HOWEVER, commercial software especially cannot be allowed to get away with everything; the blatant disregard for security that Microsoft has demonstrated over the years, not giving a damn about it, should be punishable, but a bug in general should not be.
Nobody really sues over a bug that was left unpatched for a week. Real damage must be caused by the insecurity.
HOWEVER, commercial software especially cannot be allowed to get away with everything; the blatant disregard for security that Microsoft has demonstrated over the years, not giving a damn about it, should be punishable, but a bug in general should not be.
You can’t draw a line like that; either software authors are liable or they’re not. It’s a slippery slope when governments start legislating to target specific individuals or groups, and more pragmatically, it starts to add yet further layers of regulation that ultimately results in higher costs being passed to consumers.
To look at Microsoft specifically, their security/vulnerability posture is not the problem, it’s symptomatic of the bigger problem: marketshare and lock-in.
It's not a practice unique to IT, but vendors enjoying significant marketshare through lock-in are generally not incentivized to expend resources on post-release development and support, which is the primary reason they strive so hard to achieve lock-in: it ultimately reduces their costs. Vendors fighting for marketshare cannot afford to be as ambivalent towards their customers' requirements and will therefore tend to be more aggressive about ensuring a positive customer experience.
At the end of the day, we have to quit trying to find ways to use legislation and regulation to restrict or punish Microsoft’s ability to operate. These are artificial inhibitors that ignore the fact that, for better or worse, the market ultimately put Microsoft in their position. If we could somehow break Microsoft into pieces and dilute their share to create a fragmented market, it is inevitable that another company would simply rise to take their place (or maybe Microsoft would re-amalgamate as AT&T did in the US).
The only solution that will work is for the market to change its own behavior, hold vendors accountable when making purchasing decisions and consider the long term impact on infrastructure decisions. If customers continue to purchase Microsoft products because of their perceived need for Microsoft products (i.e. I "need" to play games, I "need" to use Office, I "need" multimedia, I "need" Autocad) then really, security concerns are clearly being weighed below the convenience of maintaining the status quo. That's not Microsoft's fault, that's the market's. If the market won't place a value on security or software stability that exceeds the inconvenience of switching software, finding alternatives, etc., then really, Microsoft has no incentive to change. A complacent market is its own worst enemy, and rightly so.
The free market is often a powerful regulator; we just have too much of a tendency to assume unfair manipulation when the market doesn't react or respond in a manner that matches our own priorities or expectations. The fact is, people at large are simply not as concerned with Microsoft security as we like to think they are, so instead of punishing Microsoft directly, we should be educating the public and making sure alternatives are publicized and accessible. Microsoft can do remarkable things when they're battling for growth, and if they *ever* started to see measurable marketshare loss due to customers actively selecting alternatives, you can bet that there would be a significant corporate culture overhaul. Vulnerabilities and flaws would not be left to stagnate against increasingly competitive market offerings that react and respond more quickly. They're not a stupid company, after all. They simply haven't had to be smart or aggressive for far too long, and it is easy to become complacent and watch your money build interest in the bank rather than having to spend it on R&D.
The market should work this out with the industry; it's a problem they created and it's a problem they ultimately need to solve. It can be done, but it will take time. Take the example of OpenDocument, which is gaining small but steady momentum against Microsoft's best efforts to contain it: change can be forced. Cut the document format tether to MS Office, and MS will be forced to make sure Office becomes or remains a valuable, productive and effective tool, not something that always wins by default. Competitive pressure at play. But until the market is willing to make the effort to apply that pressure, Microsoft will not budge. And people will be forced to settle for the software Microsoft provides and remain dependent upon whatever level of support Microsoft's accountants decide is justifiable. Bugs, flaws, quirks and all.
So if you truly want Microsoft to produce stabler, safer software, the only way is to not use Microsoft products and suffer the inconvenience and cost of finding alternatives. That’s the only way they’ll respond.
And being a market issue, the government should not get involved any more than the courts already have the ability to. Government regulations and imposed liability invariably hinder an industry rather than nurture it. As it is, if you can demonstrate negligence on the part of a vendor that carries a measurable cost, you have legal remedies available. EULAs and government legislation won't change that.
Anyways, just my rambling 2c…
Software developers should be as liable for the software they create as they’re willing to guarantee.
Most software specifically states that the author of the software is not liable for any damages done. In the case of Windows XP, I believe that Microsoft is liable for the lesser of the damages or $5.00.
If a developer is confident enough in their software to say that it _IS_ secure, stable, etc., and is willing to stand behind it, then they should be liable.
It should be up to the buyer of the software to decide if they want software that's trusted enough by its creator that the creator accepts liability, rather than the job of the government to issue a blanket statement on this.
It's far too easy to blame the programmer; it raises the question of proper usage, environment, and maintenance by the user. You would certify your product to return this output from that input under this load, and be right back where we started.
Does the EULA make any promises about security? If it does, then someone is liable. If it doesn't, they're not liable. Don't like it? Buy software whose vendor takes responsibility for security.
Okay, so there are no such options. That’s a market problem. I’ll never understand government intervention for some things.
The legal position of EULAs has not really been tested. Most of them disclaim all liability for anything and make no claims grander than that the software will take up disk space.
If an EULA is ever held up in court, then holding OSS developers liable for security would be grossly unfair.
The UK was the first nation to create a legal connection between the product, the producer and the user.
In fact, it happened that a sweet lady ordered a Guinness and found a snail that had rotted inside.
After her gastroenteritis she sued Guinness and won, creating the first case in Europe of direct responsibility of the producer for its products.
I see the vague idea behind it, but it is too stupid to be true.
Making coders responsible for the security of their code is like saying that the maker of a door is responsible if someone forces it.
From a strictly legal point of view this is heresy.
A crime consists of three elements: a victim, a doer, and an intention.
Now, focusing on intention, it can be direct or accidental (like when you hit someone and kill him: you did not want to kill him, but you wanted to harm him).
Direct intention brings people to try to force a door/code to get behind it and reach what they want.
So in fact you see that no matter how well I code, there will always be someone better than me who can open that door.
So, summing up: I code, I do my best, someone cracks my code, and I AM GUILTY!
I can imagine someone saying, "Hey, it's like the Guinness case, the door should have worked!" Nope mate, not in my life.
The difference is that that was an accident, carelessness on Guinness's part. It is not as if the world's first stout producer said, "Hey mates, let's put some snails in randomly and try to nail some ladies' stomachs!"
My old professor would take his law code book and give me one behind the ears if I proposed such idiocy (and believe me, an Italian law code book isn't something you want anywhere but on your desk).
There is another matter that is secondary, but it has its importance.
Let's suppose I code sloppily, and by sloppily I mean something like: "Dude, I do not want to write a generator to create passwords, so on opening the software, right under the name, the root password of your PC will be displayed as a reminder."
We see the process this way: I code, he installs, the other guy cracks (well, "cracks"; actually he just reads what is written on the screen), and I AM GUILTY.
So here I am responsible for the crime of the cracker and for the free decision of the user, who used the program of HIS OWN FREE WILL and installed it on HIS PC.
Sorry lads, but that reminds me too much of all the obese McDonald's fans who, after a lifetime spent clogging their arteries, sue the Big M because it did not push them to move more.
Seems like something only the USA would see, and what do I see? The UK???
Tony, me and you must have a talk a day…
I don’t understand those sledgehammer analogies. A working door lock can be smashed with a sledgehammer. Software can’t. Unlike a real door, to which you can apply arbitrary amounts of physical force, a program only gets 0’s and 1’s as its input, and it will react to them in a way that is completely determined by the programmer. If the programmer writes his program not to break as a reaction to the 0’s and 1’s, it won’t break.
Which is why all of the “It’s just like a thief breaking into your house” analogies fail. It isn’t anything like that. The sad part is that people *need* analogies to explain what computer crime is “like” rather than what it is. Even technical people.
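A small sketch of the point made above (hypothetical Python, not taken from any particular program): the programmer decides up front how the code reacts to every possible input, so hostile input gets rejected instead of "breaking" anything.

    def parse_port(raw):
        # The programmer determines the reaction to every input: anything
        # that is not a valid port number is rejected, not acted upon.
        try:
            value = int(raw.decode("ascii"))
        except (UnicodeDecodeError, ValueError):
            raise ValueError("not a number")
        if not 1 <= value <= 65535:
            raise ValueError("out of range")
        return value

    for attempt in (b"8080", b"999999", b"'; DROP TABLE users; --"):
        try:
            print(attempt, "->", parse_port(attempt))
        except ValueError as error:
            print(attempt, "-> rejected:", error)

There is no sledgehammer for a program that simply refuses to interpret bad input, which is why the physical break-in analogies fall apart.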
To me it seems that before you start holding developers responsible for the code they produce, it might be a better idea to first give them, or make available, the tools they need to produce secure code. As another poster pointed out, developers are only human, and to err is human.
Of course, the downside to actually holding developers liable for the quality of their code is that it would probably flood an already overburdened legal system with more frivolous lawsuits.