Once upon a time, a friend of mine accidentally took over thousands of computers. He had found a vulnerability in a piece of software and started playing with it. In the process, he figured out how to get total administrative access over a network. He put it in a script, ran it to see what would happen, then went to bed for about four hours. Next morning on the way to work he checked on it, and discovered he was now lord and master of about 50,000 computers. After nearly vomiting in fear he killed the whole thing and deleted all the files associated with it. In the end he said he threw the hard drive into a bonfire. I can’t tell you who he is because he doesn’t want to go to Federal prison, which is what could have happened if he’d told anyone who could do anything about the bug he’d found. Did that bug get fixed? Probably eventually, but not by my friend. This story isn’t extraordinary at all. Spend much time in the hacker and security scene and you’ll hear stories like this and worse.
It’s hard to explain to regular people how much technology barely works, how much the infrastructure of our lives is held together by the IT equivalent of baling wire.
It’s from 2014, but drop everything you’re doing right now and read this. Go on. Don’t put it off. Read it.
The entire article is full of things we who love technology already knew… but labelled broken.
What is missing, as is usual in these kinds of articles by security professionals, is perspective.
Computers are useful and wonderful, miraculous even if they leak information like a sieve. Privacy and security are features that matter more in some contexts than others. In the contexts where they really do matter, the world is pretty broken. No argument from me there.
But that is not the big picture, for most users of most computers.
…but also military products are:
https://www.theregister.co.uk/2017/04/03/gilmore_farewells_trump_gov…
Or:
https://arstechnica.com/gadgets/2017/04/samsungs-tizen-is-riddled-wi…
Or even:
https://arstechnica.com/information-technology/2017/04/iot-garage-do…
Quality is an expensive option now.
Edited 2017-04-05 04:38 UTC
It’s not computers that are broken. It’s developers. In my experience, the majority of modern-day developers are morons. It does not help that security is usually not even a concern in their minds when writing an application.
It’s not only developers here.
They usually are the last decision point when changes are required by the client.
There are many sketches out there that try to outline this, and some of them do it quite well.
Developers are only half of the problem. The other half are the people putting together the functional requirements for a project. They are the ones that don’t address the additional time needed to account for security in the development and testing. If security is not addressed up-front, I can guarantee it won’t be at the end.
The final player in this game is society. What happens when a big-ass breach happens? The overwhelming majority of people go “meh” and move on. If there is massive accountability placed on the organization that is breached, then someday, security will be addressed.
It’s that the pointy-haired bosses of the world don’t want to spend money on talent. They don’t understand programming, and don’t appreciate the work that good coding is, so they hire whomever is cheapest, often from another country. They’ll even complain that there aren’t enough programmers when what they mean is there aren’t enough programmers willing to work for half minimum wage while living ten to an apartment meant for two.
When I can get a job (and boy have I had to slash how much I ask for), I CRINGE at the code I get from previous workers. It’s like they had the local Jr High computer class working for them as a school project or something. It’s THAT BAD! It isn’t even that these other programmers don’t know about security, it’s that they don’t care – they aren’t paid enough to care.
JLF65,
+1 again.
I’m seeing the same thing. I’m regularly losing work to much cheaper, typically offshore, workers. It’s very hard to offer great security when the market has decided that it prefers cheaper labor.
One of my clients offshored the development of a Magento website; that’s something I could have done, but whatever. They can only work with Ubuntu, so I install the latest 16.04 LTS, but now they say they can only support PHP 5, which is no longer being maintained in the latest distributions. I raise this issue and point out that Magento has support for PHP 7, but they respond that their code only works with an older unsupported version of Magento that only works on PHP 5 and Ubuntu 14.04. I suggest that they need to keep their code up to date with the current version of Magento, but it turns out their developer is not even a real programmer; all he can do is plug in templates and patches that someone else wrote for them, which they are unable to maintain.
Now I’m extremely peeved at this because it means this unmaintained website code from who knows where is going to fall in my lap, and I’m either going to have to fix it/replace it myself or just leave it running past EOL. I may not be paid to do the former, and I hate being the sysadmin responsible for the latter. It sucks all around that this team is doing an awful job, landing a lot of work due to low rates displacing people like us, and someone else is going to suffer the repercussions of their bad work.
I’d up-vote your post, but I already posted, so can’t vote anymore. Anywho, great points, all. I feel for you. Working on someone else’s code can be a frustrating, hair-pulling task. In a few cases, I’ve simply rewritten the code rather than try to maintain/patch the old code. Getting someone to work on someone else’s code is even more of a task. Like in your case, they may not have anyone capable of working on it anymore. Good luck on them actually hiring someone capable. That’s one of the biggest hurdles involved with sub-contracting out work – you can’t count on them having/hiring capable people. :/
That whole “aren’t paid enough to care” is a character/value quality, and it’s the companies not holding them to account. When you signed up, you agreed to do the task at hand, not to do the task at hand in a crappy way. It’s not like they’re fixed-rate, “done by Thursday” developers that under-bid and over-promised. These are staff developers. The whole “I don’t get paid enough to care” is BS. You knew the job coming in, whether it’s fast food or programming. In fast food, you clean the bathrooms.
Since there is a shortage of programmers, bad actors retain employment.
Part of this, of course, is that the management is simply unqualified to judge the quality of the work that they have hired to be done. They can nod their head at the pretty screens and hope the reports balance. Beyond that, it’s sausage makings. They don’t see the long-term technical debt that may have been piled up in their system. But also, most of the time, they end up never paying off that technical debt. It’s cheaper to get a new system made, buy one off the shelf, etc. as the market matures.
So, developers get away with murder. Worse is better.
Finally, there was some study I recall where they basically determined that when it came to data breaches and other security things, it was simply cheaper to “deal with it” after the fact than to be proactive and prevent it in the future. This does not bode well for the industry as a whole.
This is an excuse, and it’s wrong. People making good audit tools don’t work at minimum wage. You are talking at least a BA in mathematics with a strong understanding of probability as a required staff member. Also, some of those who design memory models and other things key to auditing are not programmers; they have a BA to PhD in applied mathematics, or are professional documentation writers, who normally also have a BA.
At this point the alarm bells should start ringing. It’s not a lack of programmers in a lot of cases. No matter how good a programmer you have, if they don’t understand how they should be coding something, they will make errors. Documentation is key for programmers to understand how they should be doing things. So if you employ no documentation writers, your code quality goes down.
Your audit tools need someone with applied mathematics to work out if you have covered every possibility. Again, if your company doesn’t employ one of those, you pretty much have programmers batting in the dark without the information to produce good results. So again, you could have perfectly skilled programmers producing bad code.
So investing in more programmers alone will basically never fix this problem.
“Mathematically secure” is a term for a reason: it’s proved by a large amount of maths that is not in the final product.
whartung,
I think JLF65 would agree that it’s not so much that we don’t care. Honestly many of us working in the industry do care very much and we’re just as disappointed as you too. It’s not that we’re lazy or unwilling or anything like that, but it’s not a corporate priority for all of the other reasons you mention.
The biggest complaint I have with regards to security problems in the industry is simply not allocating enough resources for it. A project might very well have zero resources allocated to it until after a compromise takes place. This is one of those topics where all companies will say “yes security is extremely important to us” because they’re ashamed to say anything else, but when it comes to dollars and cents almost all of them are guilty of shortchanging it.
In other words, I hear you loud and clear, now go find a way to demand better security from the companies such that they’ll actually invest in it up front!
Another point I’d like to address is that just because we’re in tech doesn’t mean we’re all quite as well off or in demand as it seems you are implying. Believe it or not the IT industry faces a lot of layoffs where many senior people with years of experience get tossed to the curb because the companies either can’t afford to pay them or would rather replace them with someone cheaper via “restructuring”.
Microsoft laid off thousands of US workers while planning to hire more in India:
http://www.businessinsider.com/microsofts-layoffs-are-not-yet-done-…
http://tech.firstpost.com/news-analysis/microsoft-set-to-build-beng…
IBM has been doing this on a continuous basis for a while, laying off thousands of highly paid senior workers while simultaneously bringing on thousands of new employees.
http://www.bizjournals.com/triangle/news/2017/03/30/ibm-confirms-rt…
http://www.businessinsider.com/ibm-added-and-lost-70000-people-2016…
I learned a lot about enterprise databases working with this company; last year they decided to lay off the whole staff in favor of cheaper temporary workers. Employees could apply for their old jobs at lower pay or take severance.
http://www.computerworld.com/article/3059256/it-careers/rejecting-e…
Understand that I’m not trying to complain here, but rather use these examples to maybe convince you that it’s not simply a matter of companies not being able to find skilled workers. There’s also a problem with companies actively flushing their experienced workforce in exchange for less experienced staff for lower wages.
I posted this not long ago, maybe you’ll find it interesting?
http://www.cbsnews.com/videos/youre-fired/
Absolutely. It was bad enough when jobs were outsourced and offshored. It’s even worse when, instead of moving the work out, they’re bringing the workers in.
But I don’t know if there’s a glut of developers in the market. I don’t know why an imported person would take less money than someone already here. They have the same expenses; they have to live where everyone else does. Housing here in So Cal is awful, not as bad as the Bay Area, but awful. Perhaps they’re willing to earn less since they don’t need to sock away as much for their retirement, since they don’t plan to stay. Perhaps they’re willing to live in cheaper conditions (several roommates, etc.). Perhaps they view a short-term (few years) sacrifice in living standard (our living standards) as worth the net gain they receive, even at a lower nominal rate. Simply: when you have two people living in the same area, why is one willing to work for so much less than the other in the same market?
The “hue and cry” for more developers, to the point that they’re shipping them in, suggests that we simply don’t have enough. That the natural labor market forces haven’t driven the costs down to where they equalize with the “low cost contracts” that are paid to the foreign providers. Thus the calls for market protection (fewer visas, etc.) I’m trying to avoid politics, and I’m not Laissez-faire.
And I certainly don’t agree with what’s happening in this sector surrounding this.
But, that said, are folks refusing positions because they’re paying too low? And if they are, then what are they doing instead? If the labor rate would net higher if the foreign importation ceased, that suggests that there is a rise in demand compared to supply (“not enough programmers”).
I believe that there really are not enough programmers. I believe that programming is not “for everyone”, that not “everyone can code”, whatever Tim Cook says. I think there is a difference between copying and pasting PHP and website templates and “programming”. At the same time, I am a great advocate for end-user programming, for tools that lower the barrier to entry, and, heck, the fact that someone can “be successful” by cutting and pasting PHP and website templates, truly, that’s a good thing. We don’t need computers to be in the hands of the elite to be useful. How many EEs are cringing at the who-knows-what contraptions folks are wiring together with little more knowledge than where the red and green wires go, using things like Arduino et al.? But if Sally gets her electric clothing costume with streaming, synchronized lights through a bit of solder, some “shields”, and cut-and-pasted code — that’s a great thing.
On the other hand, much of programming is becoming a blue collar enterprise. High level drag and drop, wiring together stuff you didn’t write. The Arduino of programming. No real understanding, just cut and paste the internet in some new order to get your application. But that’s still “wizardry” enough to get by for many applications. And, for many, that “worse is better”, is good enough. Why should they pay premium rates? The internet is what Home Depot is to home improvement projects. Peter Principle as far as you can, and finally break down and hire an expert to deal with the last mile when you’ve found yourself over your head.
We are an unlicensed, uncertified bunch. I’m fundamentally against the idea of formalizing that, as I think it would destroy the market, or be completely useless. The industry is still not mature enough. Don’t make me liable for whatever Microsoft decided to shove down the Internet some Tuesday night. As developers, we simply don’t have a leg to stand on. We use all this stuff, but we can’t trust any of it. When there is documentation, it’s wrong or incomplete. And we’re supposed to bear the same responsibility for a web site as a civil engineer does for a building? Not hardly.
So, the consumers — they get pot luck. Developer du jour creating constructs the end user can’t judge beyond the pretty pictures they show on the screen. No wonder they’re willing to take anyone they can find.
whartung,
You bring up a lot of good points. My response: chuckle it off and nod in agreement.
Edited 2017-04-07 09:08 UTC
First of all, I call bullshit on this story. It sounds like nothing more than a typical “I hacked the pentagon when I was young, but I won’t tell you how and there is 0 proof”.
…but I read the article anyway, and especially in the beginning it has some good information and analysis. Surely worth reading.
…and then the last part of the article (a few paragraphs into “People, as well, are broken”) clearly gets less coherent and strays outside the writer’s expertise.
Seriously… “accidentally”? No, he created a script and left it running. How could that be accidental? It could only be intentional.
… because I logged on to their WiFi and discovered PCs with patient and doctor records that were sharing everything!
I was in the emergency room waiting to see a doctor and bored out of my mind. I took my laptop out and looked for free wifi to surf the web, when I discovered that the only (and public) wifi there also had PCs with these private files on them.
I told a nurse about it and she solved the problem by simply taking my laptop away 😐
Thus solving the problem once and for all.
But…
ONCE AND FOR ALL!
Yeah, that’s part of the problem – the people in charge don’t want to pay to SOLVE the problem; they expect the current workers to cover the symptoms as part of their current job. The nurse can’t solve the problem, but she can take away your laptop.
Since we are proposing things for other people to read, there is a book called The UNIX-Haters Handbook (Google it; it is offered by the publisher for free).
Among other things, it prophesied how C’s insistence on the lowest common denominator, emphasis on speed over security, and worse-is-better would result in disasters in the future, and how C++ was the COBOL of the 2000s.
The book is written in a very fun and easy-to-read manner (yet amazingly technically correct), and if you haven’t read it yet, do so.
This is what I tell people who preach Desktop Linux: your OS is a non-sandboxed, written-in-C piece of garbage that allows apps to inherit all the user’s permissions (instead of restricting permissions on a per-app basis). You are just lucky nobody wants to hack you, because if they wanted to, they’d probably Shellshock their way towards you. Most Linux people keep their servers running unrestarted for years (the uptimes!), so their main memory is full of loaded vulnerable code (even if the binary has been replaced on disk, the old version remains loaded in memory until a restart).
Android is good enough (assuming you have the latest version, like my Nexus has), and Fuchsia will be better. Google has an interest in security. They don’t want hackers to steal user private data that axiomatically belongs to the Google Adbot.
Edited 2017-04-05 11:28 UTC
Sorry, but the COBOL of the 2000s is not C++, it is Java.
And I doubt the problem is “C”, or any language for that matter. The problem is “deep stacks” of libraries. In fact, the article points to that very same problem.
The problem is a developer working on something (s)he doesn’t fully understand often relying on libraries that are built on top of libraries on top of libraries. Many of them developed as well by people that didn’t fully know what they were doing, linking them using tools they don’t fully understand.
I am pretty sure there are many people (I’ve seen plenty of them) releasing software written in C++, linking against a plethora of libraries, not even knowing how the linker works. When I think about this the first book of Foundation comes to mind. The empire collapsing because people forgot how technology actually worked and all machinery being operated following some kind of ritual.
I am pretty sure there will be work for “software archeologists” in the near future, which will just “dig” into layers of “old code” to retrieve useful information. Well… maybe it already exists in the form of security experts.
osvil,
I think the most severe classes of faults are due to unsafe languages like C. Even someone proficient in security will make mistakes. IMHO choosing a safe language is step #1 for safe software.
I find your perspective on libraries interesting. Building layers of high level abstractions is often touted as one of the great enablers for modern software.
Clearly libraries enable some programmers to write programs that they wouldn’t otherwise be able to write, which is ostensibly good, but the flip side of that is that programmers are writing programs they may not fully understand, which is ostensibly bad.
Of course there are other criticisms too, like the inefficiencies that are often the result of too many layers.
Do you think there’s a solution?
The most severe faults are not due to the language, but due to the programmer either not understanding where security comes from, or not caring. There are very few languages written with security in mind, and there are also compilers for other languages that vastly improve the security of the language being compiled. It all comes down to the programmer understanding AND caring enough to make a secure program/library/driver. I can take the least secure language you can find and make a 100% secure program. I can also take the most secure language you can find and make a 0% secure program. Piss off your programmers and just see what they’re capable of!
JLF65,
I agree, there’s no doubt that skill and experience comes into play. However my comment was aiming more at the fact that even skilled engineers make bad assumptions from time to time and it’s not necessarily due to inexperience, just that over time our ability to track all assumptions in our head can be impaired by the sheer volume of code.
Obviously we can still produce bugs in a managed language, but IMHO memory corruption is the most insidious and common kind of bug with unsafe languages. This is where safe languages can help us all produce more secure code. I concede that whether we like safe languages or not is a different matter.
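To make the memory-corruption point concrete, here’s a minimal sketch, with Python standing in for any memory-safe language (the buffer and value are invented for illustration). In C, writing one element past the end of a buffer silently overwrites whatever sits next to it in memory; a safe language turns the exact same human mistake into a defined, catchable error:

```python
# An 8-element buffer, like `char buf[8]` in C.
buf = [0] * 8

try:
    buf[8] = 0x41            # the classic off-by-one: one element past the end
    silently_corrupted = True
except IndexError:
    # The bad write is refused with a defined runtime error instead of
    # silently corrupting adjacent memory, which is what C would do.
    silently_corrupted = False

print(silently_corrupted)  # False
```

Same mistake, same programmer, completely different blast radius.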
Edited 2017-04-05 18:37 UTC
Another example would be parameterized SQL interfaces versus generating full SQL statements by concatenating strings programmatically. While there’s nothing wrong with concatenating properly escaped values, as you can imagine I see a lot of horrible PHP code that concatenates values without escaping, thereby making them vulnerable to SQL injection.
This goes to what you said: despite PHP having safe memory management, there are a lot of terrible programming practices anyway. Encouraging the consistent use of parameterized functions should curtail this somewhat, but then there’s always something else, like cross-site scripting. Or using predictable tokens to identify a session, or…
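Here’s a minimal sketch of that contrast in Python (sqlite3 standing in for whatever database the PHP code would hit; the table and payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Attacker-controlled "name" that closes the quoted string and
# injects an always-true clause.
payload = "x' OR '1'='1"

# Vulnerable: the value is spliced into the SQL text itself, so the
# injected clause is parsed as SQL and leaks every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: a parameterized placeholder sends the value out of band; it can
# never be parsed as SQL, so the weird name simply matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The parameterized form isn’t more work to write, which is exactly why consistent use of it curtails the problem.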
We need a do-what-I-mean-not-what-I-say programming language
Realistically though if such a language existed it would only serve to lower the bar again and businesses would hire even less competent programmers, haha.
Edited 2017-04-05 21:33 UTC
So you’re saying Rust will breed incompetency. *walks away, whistling nonchalantly*
kwan_e,
To suggest that Rust is this language is a backhanded compliment to Rust, you know.
I guess if you want to analyze the cost/benefits from this perspective you have to ask yourself if the improvements caused by safer and easier languages are sufficient to overcome the lowered competency of programmers who may begin to use them.
I think PHP replacing the more difficult perl is a good historical example of this. PHP made web programming more accessible to less competent programmers, but they would go on to produce the world’s most notoriously insecure code. So are we better off for having PHP? That’s a very intriguing question I don’t have an answer for.
Really, this is not that simple.
C is always talked about as being unsafe. The question is: what form of C?
https://kernelnewbies.org/Sparse
Just because Linux is using C does not mean everything you can do in C is allowed by the project, because the project mandates static code analysis.
https://www.viva64.com/en/b/0324/
Now there are some serious things to consider; take note of the performance bit on that page, where C is and where Rust is. C with static code analysis is normally as fast as C without static code analysis, minus a huge number of bugs, of course.
I am not saying rust cannot mature.
https://hackernoon.com/why-im-dropping-rust-fd1c32986c88
We still have developers running into unsolvable problems in Rust designs. Rust is still too immature.
Language choice becomes a very hard problem. How do we code securely without killing system performance is a very hard problem.
Now also remember that a mathematically provably secure OS, seL4, is of all things coded in pure C. This means the source code by design cannot contain a single case that can cause a buffer overflow or incorrect pointer or anything else like that. But seL4 goes through an insanely strict validation process.
So anyone who thinks changing programming language will make something more secure needs to think carefully. Changing programming language normally swaps one set of bugs for another. Of course, you will not know this other set of bugs as well.
Secure design takes time and resources; there is no magic shortcut. Over and over again we see “here is a new programming language, it will solve all the problems”, and it turns out to solve one list of problems and create new ones. After 40+ years of this, we should have started to get really sceptical of these claims.
If you have bad internal design processes with poor auditing, it really does not matter what programming language you have; you will code up a security fault at some point.
oiaohm,
Yea there’s no denying all languages can be improved, including the safer ones.
IMHO it’s become evident over several decades of software engineering that humans aren’t going to become better programmers, we’ve been making the same mistakes for decades so if we want safer code, we don’t have much choice than to use safer tools.
As I said earlier, “I concede that whether we like safe languages or not is a different matter”. For better or worse, unsafe languages have a lot of momentum, so a mass transition to safer tools might not be in the cards.
Please go read the papers on seL4. They did not make the language the thing is programmed in any safer. Instead, they applied a complete mathematical audit process.
The better tools are not new programming languages whose defects we have only a limited understanding of.
https://sel4.systems/Info/Docs/GD-NICTA-whitepaper.pdf
There is a price tag to properly audit code: at least $362 per line of code for the correctness proof, plus $78 per line for the security proofs.
So everything is not broken. The issue is that no one wants to pay the bill to make programs that are not broken, end of story. Changing programming language might alter the cost figures, but it fixes nothing if still no one pays for proper code audits.
Alfman, not understanding the concept of “mathematically secure” also explains why you were not getting why I was saying a stable in-kernel ABI is impossible with current technology. If I wanted to write a thesis paper on it I could, with all the mathematical reasons why it cannot be done.
So “a stable kernel ABI is impossible” was never just my opinion; it is something you can prove by mathematics on a computer memory model. The Linux kernel does have a memory model that covers all its memory operations.
There is a little fault from hell: Ring 0. Once you are in what is referred to as Ring 0 in the systems we use, that is an open license to modify almost everything. You cannot have a mathematically stable kernel ABI while giving drivers the ability to do everything. The result is that a user-space ABI works, because there you can use the protection of rings. The CPUs we use would have to change before we could have a mathematically secure kernel ABI.
Mathematically secure/stable means you have considered absolutely every possibility. Anything that is not mathematically provable as secure/stable should be considered insecure or unstable. Maybe then people would start investing in the processes required to make functional products.
oiaohm,
So what? They could have programmed it in machine language byte by byte and proven correctness as well. The point isn’t that correct code can’t be produced with arbitrary languages, but rather that the likelihood of humans making mistakes is greater.
The likelihood of humans making mistakes does not change if you change the programming language. All a programming language change does is change the mistakes humans make.
The only way to deal with human error is to have the tools to audit the programming code for it. This does not come easy: for example, you need a full formal memory model to detect a huge number of bugs that cannot be detected any other way.
The seL4 audit process is designed to find, locate, and remove human coding errors from its code base.
I remember the DARPA self-driving car where they chose to use C# so they would not have any pointer leaks. Instead they got an object leak that crashed the C# OS and ran the self-driving car off track. A car running off track due to a pointer leak or an object leak really makes no difference; the end result is failing in ways we don’t want.
Proving correctness requires understanding the programming language.
Do note carefully what you said: the programming language used does not matter if we have the auditing tools and use them. The auditing tools will find the human mistakes.
Over and over again it’s “let’s write a new language, that will fix it”, instead of “our auditing tools for this language are bad and need to be improved”.
The reality is that to prove security, seL4 had to go down to the bytecode level and validate that, then down to the silicon level and validate that. So yes, a full formal validation requires going down to the machine-language level.
I told you to go read the white paper for a reason. We need tools that audit at the machine-language level, byte by byte, to detect compiler infection no matter the programming language in use.
oiaohm,
Sure we can try to do that, but these kinds of efforts don’t scale up when done manually. At least not without great expense, which is why improving our automatic language tools is an essential part of the solution. So is it possible we actually agree? Yay
Edited 2017-04-06 14:00 UTC
https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project
The Linux Kernel Self Protection Project is starting to take on the ideas of seL4.
Two ways to prevent particular classes of bugs.
1) Write a language that does not allow them.
2) Tool the development so an audit tool detects whenever code has that bug, or has an above-acceptable probability of having that bug.
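As a toy illustration of option 2, here’s a deliberately crude sketch of an “audit tool” in Python. It is nothing like real tools such as sparse or PVS-Studio linked above; the banned-function list and the sample C code are invented for illustration, but the principle is the same: detect the dangerous pattern mechanically and fail the build instead of trusting the programmer.

```python
import re

# Hypothetical minimal "audit tool": flag C string functions that do no
# bounds checking. Real static analyzers go far deeper than pattern
# matching, but the workflow is identical: scan, report, block merge.
BANNED = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def audit(source: str) -> list:
    """Return (line_number, line) pairs that use a banned call."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), 1)
            if BANNED.search(line)]

c_code = '''
char buf[8];
strcpy(buf, user_input);   /* flagged: no bounds check */
snprintf(buf, sizeof buf, "%s", user_input);  /* fine: bounded */
'''

findings = audit(c_code)
print(findings)  # one finding: the strcpy on line 3
```

Note that the bounded snprintf passes while the unbounded strcpy is caught; that is the whole idea, scaled down to a dozen lines.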
In C and C++ you can mathematically prove, for the common faults, that a given bit of code cannot contain them. You can also prove that a section of C or C++ has some probability of containing most of the common faults. The halting problem (the “Turing machine problem”) means you cannot prove absolutely, most of the time, whether the code is defective, only that it has a possibility of being defective. The question is whether this matters for making quality code, and the answer is mostly no.
Audit-tooling a language does not add new bug surprises, like the garbage-collection faults you get taking C++ to C#, for example. There are many documented cases of people converting C++ to C# for the pointer protection, but then ending up with other human-error bugs.
There are other examples, like LibreOffice, which is mostly C++ but, for a C++ project, has a very low count of common C/C++ faults. Why? They run every audit tool they can. This is done without changing language or massive investment. They have not had the resources to invest in their own tooling to reduce this even further.
Investment is critical. The reality is that a language sells you the idea that it can prevent X, Y, Z faults; then your boss decides they can save on code auditing because you no longer have to find those faults. So changing to a so-called safer language can make a project less safe, due to not having audit tools for the A, B, C bugs that language has, and not having the investment to make those tools.
If a higher-level language can prevent a fault, tooling on a lower-level language can prevent the same fault. Either way the developer cannot make that error.
Young programming languages are lacking audit tools. So they come trading one devil for another.
To understand need doing you need to understand what mathematically secure is. Mathematically secure is a absolute. If there is a probability of a defect it classed as defect be it a defect or not under mathematically secure rules. When you apply mathematical secure logic every one of C defect problems are solvable. This is a big thing with doing mathematically secure is solvable to mathematically secure standards.
Most people take the point of view that unless something is a known fault, you don't fix it. If you demand that the faults you fix in C must be absolutely proven to be faults, there is no way to code in C and be absolutely safe. So making a language safe is a game of probability, and the probability of human error is almost a constant.
The higher the probability of defect you accept, normally the lower the cost of development when it comes to programming.
oiaohm,
Yes, human error is human error, so let’s take steps to minimize the risks by anticipating our mistakes! Like putting guardrails on a staircase or seatbelts in a car.
The thing to remember is that guardrails and seatbelts are mandated by regulation.
Audit tools are ways of anticipating our mistakes, the same way a spell checker and grammar checker work with the written word.
I don’t think it is even a question of poor auditing. It’s just that security is a highly specialized, highly expert field. Most programmers can’t even conceive of a way of testing something to cause a buffer overrun.
kwan_e,
I agree, even if it’s theoretically possible to manually audit code, sometimes there’s just too much of it for human experts to manually audit all of it. Even the human auditors can make mistakes so really they should be audited too.
To the extent that what the auditors are doing is repetitive and automatable, I think it makes a lot of sense to include that process alongside our language tools so that bad code isn’t generated in the first place, at least to the best extent possible.
Edited 2017-04-06 12:18 UTC
95%+ of what a security auditor does can be automated; seL4 proves this. The reality is that it is not done today in most projects.
By 2014, seL4 development had moved the cost of verified code from about 1000 dollars per line to about 450, and later work moved that to about 200 dollars per line. Please note this is for a program written in C. As seL4 development has continued, they have kept finding ways to reduce the audit cost further.
So instead of ripping everything up and reinventing the wheel, seL4 made its goal to take existing C and make auditing it as effective as possible. The question is how cheap seL4 can make C.
Again, this also means investing resources.
This is the problem: you can have an old language with quite a few known bug classes that are not a problem, because you have audit tools that catch them all. Or you can have a new language that fixes the X/Y/Z bugs but has not been used long enough for us to know about its A/B/C bugs, for which no audit tools exist. So a new language may be worse than an old, known-buggy one.
And most languages are either implemented in C, or compile down to some IR that is processed by something written in C/C++, like LLVM.
Exactly, and if the C or C++ in the compiler is not fully audited, how do you know it is producing safe results from your so-called safe language?
The reality is you need to audit everything between the source code and the machine code as well. Failure to do so opens you up to a world of hurt.
Of course, most of the time no one is seriously footing these bills.
oiaohm,
I know you like to disagree for the sake of disagreeing with me, but it seems like we both should agree that this is an opportunity to improve our tooling and encourage more developers to use it!
https://fuchsia.googlesource.com/magenta/+/master/kernel/app/app.c
Fuchsia is also written in C.
Google also has security issues. Magic security does not rain down on us from the Gods sitting on their perch in Mountain View.
If you want a good talk on the subject, please watch Alan Kay:
– Is it really “Complex”? Or did we just make it “Complicated”?
https://www.youtube.com/watch?v=ubaX1Smg6pY
+100!!! I can’t tell you how often I’ve seen code where I can replace an entire MODULE with two or three lines of code. I suspect they were paid by the line…
A program I was working on just a couple months ago, I replaced five files of code with three short functions. It’s easier to read, easier to maintain, and much faster. Being far shorter means it’s easier to check for security – I’d have hated to vet the old code for security… or even proper operation! Which might be the real reason it was so complicated – it’s a corollary to the old quote from “Real Programmers”: Real Programmers never comment – if it was hard to write, it should be hard to read, and even harder to modify.
That is why, in my opinion, it is more important to have more “files” than more “lines” in your software project. Functions should be logically separated from each other and called from the main program when necessary. Is this what we are doing, or are some programmers just too lazy to create a library of functions, and instead keep adding functions without realizing it makes code review and debugging more complicated?
You got that right. Programmers sometimes feel the overwhelming need to put EVERYTHING in a single file. I’ve worked on projects that were one file with more than a megabyte of code in it! That’s just nuts!
I should probably remark on why multiple smaller files are good: it’s a form of commenting coupled to the hierarchical structure of functions. The file names tell you what a function or set of functions relates to, and the hierarchy the files follow tells you how they relate to the rest of the project. The smaller the file, the more specific a comment the name becomes.
Edited 2017-04-06 14:41 UTC
Qubes OS.
https://www.qubes-os.org/
Is this a solution?