The opening up of the mobile industry is great news for application developers but not so good for IT security professionals, according to experts. For example, Symbian, the single most widely used mobile software platform, has already wrestled with the dangers of openness to third-party developers, said Khoi Nguyen, group product manager in mobile security at Symantec. Symbian 7 and 8 were fairly open and allowed almost any application to be installed and run. This led to a few hundred viruses being introduced within a couple of years, so Symbian 9 was locked down significantly, he said.
Security through obscurity is a myth. Any system with a bad security model, open or closed, is just asking for trouble. Don’t go blaming it on being Open.
For example, Symbian, the single most widely used mobile software platform, has already wrestled with the dangers of openness to third-party developers, said Khoi Nguyen, group product manager in mobile security at Symantec.
For example, Microsoft Windows has been closed source over the years, ergo the reason for Mr Khoi Nguyen's employment at said company.
Another example is OpenBSD. It is a VERY open-source OS and is more than robust enough to be used as an enterprise-grade firewall.
Third-party developers haven't got shit to do with the openness of the source code; learn to read. An open platform for third-party applications isn't the same as an open-source platform.
And part of the actual issue with Windows' insecurity is that it trusts any application, so you are wrong on both points. Try again.
They aren’t talking about security through obscurity. They’re talking about how much the phone trusts applications. From the sounds of it Symbian is “Insanely too much” and MIDP is “So little you can hardly do anything”. The article even says old versions of symbian allowed silent sending of text messages and use of the phone’s mic! What idiot allowed that? In contrast MIDP will ask your permission for each file access and there is no way to disable this behaviour.
Clearly there is a sensible middle ground that no-one is taking. Apps shouldn't require expensive signing, and APIs should be smart about what they allow. For example, for sending texts there should be options like these (a sketch follows the list):
* Always deny
* Always allow
* Ask permission
* Ask permission for numbers not in my address book
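To make that concrete, here is a minimal sketch of such a per-action policy. The policy names and the check method are hypothetical, not the API of Symbian, MIDP, or any real phone platform; a real handset would show a prompt in the "ask" cases.

```java
// Hypothetical sketch of a per-action SMS-send policy as suggested above.
import java.util.Set;

public class SmsPermissionDemo {
    // The four options proposed in the comment above.
    enum SmsPolicy { ALWAYS_DENY, ALWAYS_ALLOW, ASK, ASK_IF_NOT_IN_CONTACTS }

    // Returns true if the send may proceed without prompting;
    // the "ask" cases would pop a dialog in a real phone UI.
    static boolean maySendSilently(SmsPolicy policy, String recipient, Set<String> addressBook) {
        switch (policy) {
            case ALWAYS_ALLOW:           return true;
            case ALWAYS_DENY:            return false;
            case ASK:                    return false; // must prompt the user
            case ASK_IF_NOT_IN_CONTACTS: return addressBook.contains(recipient);
            default:                     return false;
        }
    }

    public static void main(String[] args) {
        Set<String> contacts = Set.of("+15551234567");
        System.out.println(maySendSilently(SmsPolicy.ASK_IF_NOT_IN_CONTACTS, "+15551234567", contacts)); // true
        System.out.println(maySendSilently(SmsPolicy.ASK_IF_NOT_IN_CONTACTS, "+15559999999", contacts)); // false: prompt
    }
}
```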
I seriously doubt many (any?) of those viruses were really viruses. They were probably of the ‘Please press OK to send this to everyone in your address book’ type.
Security through obscurity, an embedded operating system's openness, and open source are all completely different things.
Actually “security through obscurity is a myth” is a myth. Good security is layered. Obscurity can legitimately be one of those layers. (What are passwords if not security through obscurity?) Security *solely* through obscurity has been shown to be inadequate. But the old adage which you quote goes further, and is wrong.
Cryptographic secrets are, by definition, not security through obscurity. Trying to argue otherwise is just making up your own definitions.
But what techniques you are using to implement them are. If you know exactly how security is implemented, that knowledge is better to have than not to have when attacking it. If that is all that is protecting you, it isn't enough. But a well-implemented security scheme that nobody knows of is more secure than a well-implemented security scheme that everyone has the source code to.
Only marginally so, and once the cat’s out of the bag, that margin shrinks to zero. If you were hiding a bunch of bugs, that’s probably going to be a negative margin.
I think I let you slip one past me there.
No; the only way that knowing the implementation details of how my passwords are stored would amount to relying on obscurity is if I were using a _broken_ crypto scheme, which is, again, true by definition.
Knowing how the mechanism works does not make it less secure. It only means that peer review can figure out how to make it better.
SSL; open source yet it still works pretty damn well, why has that not been invalidated (other than Debian’s meddling where Crypto experts should have been consulted).
Safe locks; known yet still secure
Key locks; known, still secure
PAM; source is out there, security isn’t compromised by that
Cryptography research; a purely open science valuing peer review. This is not by accident but by the understanding that it results in better crypto.
You should be able to publish the blueprints of your security mechanism and still not allow anyone to walk through it without having a valid authentication key. Keeping that key safe is not obscurity either. It’s not that I have an SSL certificate hidden some place that makes it secure, it’s that breaking the encryption it provides will take you so long that the information is no longer relevant by the time you get it. Keeping your keys in your pocket is not obfuscation, it’s keeping your personal authentication with you and safe so you can use it in the security mechanism on your front door when you get home that night.
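To illustrate the point, here is a minimal sketch using the standard Java crypto API: the mechanism (AES) is completely published, yet the ciphertext is useless to anyone who lacks the key. This is just an illustrative toy, not a recommended protocol.

```java
// Sketch: the algorithm is public, only the key is secret (Kerckhoffs' principle).
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Base64;

public class PublicAlgorithmSecretKey {
    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey key = gen.generateKey();          // the only secret in the system

        Cipher cipher = Cipher.getInstance("AES");  // the mechanism: fully published
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("meet at noon".getBytes("UTF-8"));

        // An attacker who knows it is AES but not the key sees only this:
        System.out.println(Base64.getEncoder().encodeToString(ciphertext));
    }
}
```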
I am not saying that. If you are implementing a security system, it is better if potential attackers do not know what you are using than if they do. What is more important than that is that the system is inherently secure, but all things being equal, it is better if they do not know how it works than if they do.
Also, just because something is published does not mean the peer review is really worth anything. I would go out on a limb and say I would be willing to put down money that at least 90% of open source code is not peer reviewed by anyone with any level of competence. There are some shining exceptions to this (like OpenBSD, for example), but most of the source code I have read off the net has been fairly average in quality compared to what I have seen from inside companies throughout my career, and I have worked at several places that did not implement automated testing or peer reviews. Anyone who publishes security code that is not reviewed is only making it easier for the bad guys to identify attack vectors.
I didn't intend to suggest that all open source is more secure because it gets exhaustive peer review. There is some truth to that with active projects, but it's still not a universal truth. In terms of cryptography and things designed for the security field, peer review is very important. Do I trust that IPS system X is rock solid because one company told me so, or because one company plus everyone in the security and cryptography industries told me so?
Regarding your first point, and with all things being equal, someone with intent is going to learn your system. If I'm contracted to audit the security of a company, I'll learn whatever they use and negate any obscurity they are putting value in. With an amateur, not knowing how your system works may be the thing that makes them interested, so it's actually reducing your security overall by attracting attack.
Overall, I just can't accept any improved security based on obscuring part of your mechanisms. That may be my limitation, but that's how it is until I find solid reason to believe otherwise.
You are correct to direct the discussion towards the matter of underlying definitions. (Many a lengthy forum thread actually comes down to the simple matter of a lack of agreement upon definitions.)
In this case, I think that it makes sense to examine “the definition” you refer to. My point is this: there is nothing fundamentally different about “credentials” and “obscurity”. It is a matter of degree. The effectiveness of passwords depends entirely upon their obscurity. In effect, passwords often have a high enough level of obscurity to be considered good security. The point at which that line is crossed is a matter of opinion. But it's still “obscurity” on both sides of it.
Good security is indeed a layered approach, but putting obscurity anywhere in the official strategy only reduces your overall security posture. If you want to tack on obscurity after the security strategy is confirmed, then by all means, enjoy the blue icing on top, but here's the thing as I see it.
Obscurity is only of use to the attacker and has no place in the official defense policy; it's too short-lived. When you are the outsider, you want to remain hidden and sneaky, and until you're found with your eyes covered, crouched behind a tree saying “you can't see me”.. it's all good. The gig is up once you're detected, so obscurity is the attacker's entire world.
On the other hand, you still have to defend when your obscurity is blown. Anyone coming into your network or device is going to be looking for what you have hidden:
“oh look, they have a clear IDS/IPS on the wire.. good for them.. we’ll just step around that and continue on”.. now your obscure detection device is useless.
“Say, that’s an interesting obscure OS they are using but I really want to get in so I’ll have to learn it”.. and, your nifty mainframe or obscure software platform is useless.
“say, this software is only available as a binary.. where are my binary auditing tools”.. and your security through keeping the source code obscure is useless. The police search your house and you have bad stuff hidden; they'll find it and you're screwed.
Your child figures out how to open door handles; time for child proofing locks for real security.
If you're including obscurity in your security planning with the idea that it is increasing your potential security posture in any way, you're already begging for failure. Obscurity on the part of the defender in a computer network is nothing more than “security theater”; it's feeling safe instead of actually being safe.
It may not be fully applicable to this article being that the point is platforms trusting any third party program thrown at it without having a strong approach to security from the ground up. In general though, it’s just bad planning to think that hiding stuff makes you safer.
Ah yes. The “false sense of security” non-argument that I often see employed when there is no real argument to make. (I know I've scored a point when people are forced to fall back to using it.) If you have an otherwise solid plan in place, adding anything that the attacker might not happen to know about can only make it harder for him. Layering security already implies that you don't trust any layer completely. All things being equal, security Plan X + Obscurity is always going to be more secure than Plan X by itself.
“If you want to tack on obscurity after the security strategy is confirmed then by all means, enjoy the blue icing on top..”
Seems you had trouble reading the last line of the first paragraph, where I say obscurity is fine to toss on top afterwards but should not be part of the primary strategy in which you place definitive value. You also missed the bit about real security keeping unauthenticated users out of your system even when the security mechanism is known, rather than only as long as they don't see it hiding in the bushes.
No matter really. I don't gain anything by you feeling the same way or not. You're free to consider or disregard the opinion. You can even explain why you've considered the approach in detail and find fault with it. Just try to include more than a single dismissive, close-minded, and unsupported response.
I think you’re confusing “security through obscurity” with “defense in depth”; neither have anything to do with the other.
If you’re going to have multiple layers, make them all *strong*.
Agreed.
Login credentials (username + password) act as an authentication mechanism. Let’s not confuse terms here.
Except he doesn't seem to be talking about the source code, but rather about the application platform. And validated, traceable, trusted applications will always be more secure than unknown ones.
But then he may use that as an argument against another platform that is open source, and perhaps open to any developer as well, and then he's just using the argument wrongly and it makes no sense at all. But that's another thing.
I agree that phones should be secure by default, but there should be a way to allow a phone to run any application. An administrator’s password on first run, perhaps. Or alternatively, I could live with having to compile and self sign any non approved applications.
So this article is basically saying that Symbian screwed up their security model, therefore mobile phone security sucks. I’m not seeing any analysis of other security models, including those of Android or the iPhone. I don’t know what kind of security the iPhone has other than the killswitch, but in the case of Android, apps are completely isolated from each other, and each one needs to request permission to do things like access the network, camera, or contact list, or to make calls.
The panel might have discussed this, and still found it lacking, but the article sure doesn’t help me figure that out.
It's still a valid argument though. With all the criticism of Apple for not opening the iPhone as much as people wanted, it's good to remember the flip side. A device that's completely open and doesn't restrict developers in any way is a recipe for disaster, as Symbian appears to have demonstrated.
No, you’re confused again. Symbian did not show that Openness == Bad Security. Symbian showed that Bad Security Models == Bad Security.
There’s nothing stopping an Open OS from having good security, and a locked-down proprietary OS from having security like a sieve.
No it isn't. A valid argument would be that it's bad to open Symbian. Generalizing Symbian's problem to everyone is as absurd as stating that an open-source OS is bad because Windows has had many security bugs. There's no natural cause and effect between the two.
This has been an issue in the desktop world as well, and that world hasn't come up with the 'perfect' solution either.
This is not about security through obscurity. It is about how much you trust an application.
Consider simple program installation on a desktop:
If you don't require admin mode to install applications, then you allow malicious programs into your system.
So you require an ‘admin’ mode or password. Now your users get annoyed. On top of that, many users will just click okay to install the application.
So you throw in anti-virus software, spyware defenders…
So you decide to restrict what applications can be installed. You have a central repository. Now you upset people's freedom, and the applications in your repository are not up to date because you only want 'stable and tested' software in there.
It’s not an easy problem to solve. I’ve always felt there should be an easy way to ‘restrict’ a program’s abilities based on request. We should undo the link between an application permission and the permissions granted to the user running the application.
So a notepad application would request read/write permission on files, but would not request network access. Maybe it would get user-document file permissions by default, but try to open a system file and you get an admin request dialog or something.
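A minimal sketch of that idea follows. The permission names and the check are hypothetical: an action is allowed only if the application declared it and the user actually holds it, rather than the app inheriting everything the user can do.

```java
// Hypothetical sketch: check an action against the intersection of the
// application's declared permissions and the user's grants.
import java.util.EnumSet;
import java.util.Set;

public class AppPermissionDemo {
    enum Permission { READ_USER_DOCS, WRITE_USER_DOCS, READ_SYSTEM_FILES, NETWORK }

    // Allowed only if the app requested it AND the user holds it;
    // anything else would escalate to an admin-style prompt in a real system.
    static boolean allowed(Permission action, Set<Permission> appManifest, Set<Permission> userGrants) {
        return appManifest.contains(action) && userGrants.contains(action);
    }

    public static void main(String[] args) {
        Set<Permission> notepad = EnumSet.of(Permission.READ_USER_DOCS, Permission.WRITE_USER_DOCS);
        Set<Permission> user = EnumSet.allOf(Permission.class);

        System.out.println(allowed(Permission.WRITE_USER_DOCS, notepad, user)); // true
        System.out.println(allowed(Permission.NETWORK, notepad, user));         // false: notepad never asked for it
    }
}
```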
But then again, doing this also creates issues with complexity and what not.
For example, when I started coding in Java, I made my application access the internet. And it would crash with a security exception. I eventually learned I had to sign my application even to just run it locally.
Also, making the permission system simple enough that people can understand it, but granular enough to allow applications to do what they want to do, is not an easy task either.
The SELinux modules do just that. Each program is locked within its own space. Both program and user permissions must allow the action before it happens. The downside is that it's still a pig to get all the initial config refined and updated each time you have to add in a new allowable action.
How about capability-based security?
http://en.wikipedia.org/wiki/Capability-based_security
Agreed, I don’t know of any production-grade capability-based desktop OS, but I think there’s a trend. Well-known examples are the Coyotos OS (formerly EROS) and the CapDesk DE. More recent examples are the Caja programming language (capability javascript) and the Genode OS.
http://code.google.com/p/google-caja/
http://www.combex.com/tech/edesk.html
http://genode.org/
http://www.coyotos.org/
With capabilities you can have untrusted programs running in your system, and they can’t access any resource unless you let them.
That's where you lose control and let crackers and security mercenaries (err.. I mean, experts) fight over your computer and your wallet.
The nice thing about an application in a capability environment, say, a text editor, is that it doesn't need permissions to all your text files. Instead, you can give it permissions dynamically, just to the file you want to read at this moment. The program can also prompt you to select a file, for instance via the so-called “powerbox” (like a file open dialog), but it can't see what files you have.
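Here is a minimal sketch of that idea. All the names are illustrative, and a real capability OS (EROS/Coyotos, Genode, etc.) enforces this at the kernel or loader level rather than by convention inside one process, but it shows the shape of it: the editor only ever holds a handle to the one file the user picked.

```java
// Illustrative sketch of capability-style file access via a "powerbox".
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CapabilityDemo {
    // The capability: a handle to exactly one file, nothing else.
    static final class FileCapability {
        private final Path path;
        private FileCapability(Path path) { this.path = path; }
        String read() throws IOException { return Files.readString(path); }
    }

    // Stands in for the "powerbox": only this trusted component mints capabilities.
    static FileCapability userPicksFile(Path chosenByUser) {
        return new FileCapability(chosenByUser);
    }

    // The untrusted editor: it can use what it was handed, but cannot
    // list or open anything the user did not explicitly give it.
    static void editor(FileCapability doc) throws IOException {
        System.out.println(doc.read());
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("note", ".txt");
        Files.writeString(tmp, "hello from the powerbox");
        editor(userPicksFile(tmp));
    }
}
```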
From the article:
“Snoopware” is a form of spyware that can activate the microphone or camera without the user’s knowledge, listen in on calls and collect text messages and call logs.
Come on. It’s f***ing spyware. SPYWARE. We don’t need another damn term for spyware just because it uses the speaker and camera on your cell phone instead of the keyboard (and/or webcam+microphone) on your PC to… get this… spy on you. Seriously, this is getting out of hand, no need to coin new terms for the same old crap with a slightly different twist, when there’s a proven term that suits it *perfectly* to begin with.
I swear, this whole anti-whateverware is becoming almost like marketing. Oh wow, spyware is expanding from the Windows PC onto portable devices. Only because they typically don’t have a full keyboard, and instead usually contain a built-in speaker and microphone, they do their spying with those instead. What a genius idea! Let’s write an article about it and coin the catchy term “snoopware.” That’ll turn some heads, generate hits, and maybe increase ad revenue!
Can’t wait for the first version of Norton AntiSnoopware for cell phones. Guaranteed to bork up portable devices everywhere, no Windows needed!
[Disclaimer: Not that I can verify that the site contains ads, since I use AdBlock and refuse to disable it, but I’m sure it does…]
You’re right. Too many things end with “ware” these days.
Software, hardware, firmware, middleware, spyware, shareware, freeware, snoopware, OpenCourseWare, malware, adware, abandonware, beerware, donationware, crippleware, payware, ransomware, bloatware, ta-ra-ra-boomware.
Actually that last one is made up.
Oh come on, without new fancy buzzwords what’s going to power the hype machine?
koolaid?
The shift towards total lockdown in the Symbian platform had very little to do with the appearance of viruses and spyware. Rather, it had to do with the fact that the various phone makers using Symbian really wanted a way to bring out phones with MUCH more sophisticated media and graphics features than previously, and didn't want to allow FOSS (and other small-time) developers to have access to those new features. Or at least that's how Nokia's Series 60 phones have used it.
Security: something that secures or makes safe.
Obscurity: the condition of being unknown.
Security through Obscurity (or Obsecurity): Something that secures or makes safe through the condition of being unknown.
In other words, a password, private key, system spec, or any other bit of information that if known would compromise the security of the system.
The trick is where you isolate this obscurity. You DON’T want to rely on the core system remaining obscure, because the ENTIRE system will be compromised if the secret ever gets out. You DO want passwords to be obscure because only the parts of the system a single or group of users has access to become compromised, and further security violations can be prevented by simply revoking their access.
Security relies on being able to verify the identity of someone or something, and the only two ways to achieve this are by either using difficult to guess secrets, or difficult to replicate characteristics.
In computing, it is very easy to replicate identifying information, so shared secrets are the only way to go. In the physical world, we can take advantage of the inherent difficulty of replicating certain things, and this is the form of security used by physical currency such as Dollars, and why Gold is such a useful standard.
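As a minimal sketch of verifying such a shared secret (details are illustrative; a real system would use a slow KDF like PBKDF2 or bcrypt rather than a bare hash): the verifier stores only a salted hash, and the password itself is the only obscure part.

```java
// Sketch: verify a shared secret by comparing salted hashes.
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SharedSecretDemo {
    static byte[] hash(char[] password, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        md.update(new String(password).getBytes("UTF-8"));
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        byte[] stored = hash("correct horse".toCharArray(), salt);   // kept by the verifier
        byte[] attempt = hash("correct horse".toCharArray(), salt);  // presented at login

        // Constant-time comparison avoids leaking where the mismatch occurs.
        System.out.println(MessageDigest.isEqual(stored, attempt));                           // true
        System.out.println(MessageDigest.isEqual(stored, hash("guess".toCharArray(), salt))); // false
    }
}
```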
And neither form of security is absolute. There is always a chance that a secret will be guessed, or that an identity will be replicated. All security is a matter of chance. Good security just stacks the deck in your favor.