In an unexpected move for a security company, SecurEnvoy today said that cyber break-ins and advanced malware incidents, such as the recent DDoS attack by LulzSec, should actually be welcomed and their initiators applauded. The company’s CTO Andy Kemshall said: “I firmly believe that the media attention LulzSec’s DDoS attack has recently received is deserving. It’s thanks to these guys, who’re exposing the blasé attitudes of government and businesses without any personal financial gain, that will make a difference in the long term to the security being put in place to protect our own personal data!”
I think what he actually said better translates as:
1. Hax0r$, please don’t pwn us!
2. Thanks for the $$$
Of course a security company would say that. It means more money for them.
They do have a point, though. All those companies that got hacked had crappy security yet are always demanding personal information from their customers to use their products.
People should be happy that those security holes weren’t found first by people more malicious than LulzSec.
MORB,
“They do have a point, though. All those companies that got hacked had crappy security yet are always demanding personal information from their customers to use their products.”
In so far as the data breaches expose a vulnerability which the company then fixes, then yes, the company’s security could benefit in the long term. There’s nothing like an attack to raise awareness. However, in the context of the piece quoted, the vendor specifically claims that DDoS attacks encourage better data security, which is idiotic.
There’s no connection between bandwidth limitations and data security. If you can’t keep up with the attacker/botnet, then you’re dead. It doesn’t indicate anything about bad security practices.
Except these recent DDoS attacks haven’t been just about raw fragmented packets hitting the server with more bandwidth than the server can handle.
If you look at the LOIC that the Anonymous group uses, it targets a website with requests for pages that take up vast amounts of resources, be it memory, server-side scripting or database load.
An example would be searching the help section of a website for a common word, or even a single letter such as ‘a’, with the results taking several seconds per request due to high CPU time or database load on the servers. In this instance, just a few people (sometimes even 1 person) can take down a website simply because of bad code.
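To make that concrete, here’s a minimal sketch (in Python, with a hypothetical endpoint, database and table name) of the kind of bad code in question; the leading-wildcard LIKE forces a full table scan on every request, so a handful of clients can pin the database:

    # Hypothetical search endpoint: every request forces a full table scan,
    # because LIKE '%term%' with a leading wildcard cannot use an index.
    # A handful of concurrent searches for 'a' will exhaust CPU and I/O.
    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/help/search")
    def search():
        term = request.args.get("q", "")
        db = sqlite3.connect("help_articles.db")   # assumed database
        rows = db.execute(
            "SELECT title FROM articles WHERE body LIKE ?",
            ("%" + term + "%",),                   # unindexed substring match
        ).fetchall()
        db.close()
        return "\n".join(title for (title,) in rows)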
sagum,
“If you look at the LOIC that the Anonymous group uses, it targets a website with requests for pages that take up vast amounts of resources, be it memory, server-side scripting or database load.”
“In this instance, just a few people (sometimes even 1 person) can take down a website simply because of bad code.”
Believe me when I say that I’m a huge advocate of running efficient code. However you have to admit that depleting the server of resources by running useless (yet valid+legal) queries is not nearly the same thing as taking over the server through a security vulnerability.
It’s certainly not the same, but if there’s a way to take a server down with a small amount of organization and a few friends, due to the way the software running on this server works, it’s another form of security vulnerability.
Neolander,
“It’s certainly not the same, but if there’s a way to take a server down with a small amount of organization and a few friends, due to the way the software running on this server works, it’s another form of security vulnerability.”
This speaks to unscalable designs and systems. However, a company can find itself in a situation where its systems can handle the legitimate load of X customers, but not X customers + Y attackers. I’m uncomfortable with the conclusion that a company ought to design its infrastructure to handle X customers + Y attackers.
Edit: Although, what choice is there?
Availability != security.
The fact that a site wasn’t designed to withstand a DDoS does not mean it suffers from a security problem, and neither is inefficient code a security problem.
It’s usually not feasible to start out with a site and infrastructure designed to handle the volume of YouTube or Facebook or a DDoS.
Deploy now, get customers and worry about scalability when the need arises. Even a DDoS once or twice is not a cause for concern unless it has a major impact on your bottom line and/or is caused by a security problem.
Some wise guy said something about premature optimization a long time ago and it’s still true.
Soulbender,
“Some wise guy said something about premature optimization a long time ago and it’s still true.”
I agreed with you up until this point. Too many people in CS use the quote above to justify designs with very poor scalability. Never forget that the quote was from the 1970s, when the inefficiency typical of computing today was not yet conceivable. I’m afraid that if modern-day CS developers were sent back in time to work with Knuth, the quote you’d be reading would be quite different.
Yeah, it’s indeed much overused but it does apply in this situation. It’s most often not wise to spend time and money designing for immense scalability before you launch. Try to make good engineering decisions that won’t hamper you later on but don’t sweat it until your userbase and traffic start to really increase.
Or to put it another way: “working properly” is usually more important than “working fast”.
Once you have it working properly, at least it serves as a reference implementation which you can test further optimization against to make sure you didn’t break anything.
I’d suggest that DDoS vulnerability is indeed a security issue. Security is not just concerned with protecting the information in that one box. It is also concerned with protecting the system resources for legitimate use. A denial of service removes resources from legitimate users.
If your network gets flooded out by packets, you have a security mechanism failing to filter packets properly.
If your software gets crashed into a denial of service condition, you have an exploitable vulnerability in the code that needs to be addressed.
If your website takes down your webserver due to resource exhaustion through a designed website function, you have site code that needs to be addressed.
The information systems are a business resource that need to be protected in addition to the information those systems house. Denial of service demonstrates an exploitable flaw in the security of those systems.
jabbotts,
“I’d suggest that DDoS vulnerability is indeed a security issue. Security is not just concerned with protecting the information in that one box. It is also concerned with protecting the system resources for legitimate use. A denial of service removes resources from legitimate users.”
This is all true; however, you’ve overlooked a crucial element: in a well-designed, large-scale DDoS attack, the victim can’t tell the attackers from legitimate customers.
“If your network gets flooded out by packets, you have a security mechanism failing to filter packets properly.”
Two problems:
1. A filter is useless when the attacker’s botnet has more bandwidth than you. Even an OC3 (which was considered large enough for my whole university) is easily saturated by a few hundred broadband users.
2. What kind of filter do you use? If you detect excessive bandwidth from an IP you can block it, but it may or may not be legitimate. Consider a bunch of mobile users behind a proxy/NAT router: your filter could inadvertently block all of them.
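For what it’s worth, here’s a minimal sketch of the kind of per-IP throttle being discussed (the rate, burst size and names are assumed); the NAT problem is visible right in the keying, since every user behind a shared address shares one bucket:

    # Naive per-IP token bucket. Keying on the source address means a
    # campus NAT or mobile gateway shares one bucket, so everyone behind
    # it gets throttled along with any attacker using the same address.
    import time

    RATE = 10.0     # tokens refilled per second (assumed)
    BURST = 20.0    # bucket capacity (assumed)

    buckets = {}    # ip -> (tokens, last_refill_timestamp)

    def allow(ip):
        tokens, last = buckets.get(ip, (BURST, time.monotonic()))
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)
        if tokens < 1.0:
            buckets[ip] = (tokens, now)
            return False                # throttle this request
        buckets[ip] = (tokens - 1.0, now)
        return True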
“If your software gets crashed into a denial of service condition, you have an exploitable vulnerability in the code that needs to be addressed.”
Well, granted, the software should never crash. In the worst case, a busy server should start returning something like error 500 in HTTP-speak.
“If your website takes down your webserver due to resource exhaustion through a designed website function, you have site code that needs to be addressed.”
You’re totally oversimplifying the issue to imply that code is at fault. Assuming you actually have enough bandwidth in the first place (which isn’t likely for most small/medium businesses), there are other local bottlenecks which will require infrastructure upgrades to eliminate. Databases quickly become saturated. Even ordinary web servers can start thrashing if the attackers deliberately request pieces of material which are unlikely to be cached. This causes random disk seeks well in excess of normal load. A typical disk seek is 5 ms; if the attacker successfully requests an uncached file each time, then both normal users and attackers will reach a combined limit of around 200 requests/sec.
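Spelled out, with the 5 ms figure above being the assumed average seek time:

    # Back-of-envelope: one random seek per uncached request.
    seek_time = 0.005        # seconds per seek (assumed)
    print(1 / seek_time)     # -> 200.0 requests/sec, shared by everyone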
“The information systems are a business resource that need to be protected in addition to the information those systems house. Denial of service demonstrates an exploitable flaw in the security of those systems.”
Hopefully I’ve gotten my point across that being vulnerable to DDoS doesn’t imply a security vulnerability. As Soulbender stated already “Availability != security.”
I’d gladly discuss any usable ideas you have, but DDoS isn’t as easy to solve as you make it out to be.
If you get blown off the network by a flood your technology cannot possibly deal with, then fair enough. The issue is not mitigating the risk of denial of service at all and getting blown off because you decided to ignore it outright: “DDoS isn’t our responsibility, and even if it was, we’ll just get hit with volumes that our physical network medium can’t even handle.”
“Availability != Security” is what I really keep tripping over. In case I’m reading it wrong:
I would agree that availability does not mean one is secure. I would not agree that availability is not a security concern.
If your systems are getting hammered by malicious intent, maybe you need an IPS on the line to defend your systems.
If your webform is chewing up your server resources, maybe you need some throttling in place.
If the denial of service is caused as misdirection or cover for a break-in, that is most definitely a security issue.
If we refer to IBM’s ten principles of secure software design, which are equally applicable as strong principles on which to base your greater system security, denial of service seems to apply to:
Provide defense in depth – provide redundant security solutions should one layer fail; provide redundancy in systems should one system fail.
Secure failure – i.e. have your website degrade gracefully instead of allowing it to simply consume the system’s resources (a minimal sketch follows this list).
Compartmentalization – try to keep a denial of service on one system from taking out other systems.
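As a minimal sketch of that secure-failure idea (the load metric and threshold here are assumed, not IBM’s): shed load early with a 503 instead of letting requests queue until the box falls over:

    # Load shedding: refuse work with a 503 while the box is overloaded,
    # rather than letting requests pile up until nothing gets served.
    # The load-average check is an assumed stand-in for a real health metric.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MAX_LOAD = 4.0   # assumed threshold for this box

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            load1, _, _ = os.getloadavg()          # Unix-only
            if load1 > MAX_LOAD:
                self.send_response(503)            # degrade gracefully
                self.send_header("Retry-After", "30")
                self.end_headers()
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()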
If we have a horde of random addresses cooperating to keep the server busy, throttle them so the server hardware can at least keep up rather than become completely unusable. Block all but known good addresses if you’re in a situation where the public service is secondary to the specific clients/partners who use it. Drop an IPS in front of the box and let it help manage the hit.
I mean, if you’ve done what you can to mitigate denial of service attacks and your upstream provider is literally over-run, then fair enough. If you simply discount denial of service as “not a security concern”, then: security fail.
jabbotts,
“I would agree that availability does not mean one is secure. I would not agree that availability is not a security concern.”
Let me put forward the notion that if availability is a security concern, then the internet is not really a suitable medium.
Hypothetically, a country may have a grid of warhead-detecting radars. These radars are considered to be of paramount importance, with near-absolute availability. They bring in a team of security experts to eliminate all possible vulnerabilities. They factor in all possibilities, including spies leaking details of the project (no security by obscurity). Now that they’ve addressed the security issues, can they rely on the internet to provide the necessary availability?
I expect the answer is “no”.
I realize this skews the discussion a little bit, and that you’re talking about exploiting code scalability issues, but I think the point still stands: one cannot secure availability on the internet.
“If your systems are getting hammered by malicious intent, maybe you need an IPS on the line to defend your systems.”
The problem with DDoS is that no one has intruded onto the system in the typical sense. The attacker is flooding servers with otherwise innocuous traffic.
“If your webform is chewing up your server resources, maybe you need some throttling in place.”
How do you keep the throttle from affecting normal users?
“If the denial of service is caused as misdirection or cover for a break-in, that is most definitely a security issue.”
Yes but in general the DDoS *is* the damage, not a cover for some other nefarious activity.
“If we refer to IBM’s ten principles of secure software design…”
You have me at a disadvantage here, I’ve never heard of them.
“If we have a horde of random addresses cooperating to keep the server busy, throttle them so the server hardware can at least keep up rather than become completely unusable.”
Well, Apache tends to respond with error messages when it gets overloaded. Is this what you mean by degrading gracefully? If not, then what do you mean?
“Block all but known good addresses if you’re in a situation where the public service is secondary to the specific clients/partners who use it.”
This will block legitimate users too, but my bigger question is how to put this into practice. Would you envision a process which scans web server logs heuristically for IP addresses and then loads them into iptables? Something more sophisticated? This list could become overwhelmingly large. What if bad IPs make it through the whitelist, or the IPs are faked?
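To make the question concrete, a naive version of that scanner might look like this (log path, format and threshold all assumed); it also shows why the list balloons and why a spoofed source or a busy NAT gateway defeats it:

    # Heuristic: count requests per IP in an access log and print the
    # heavy hitters as candidates for a firewall block list. A spoofed
    # source or a NAT gateway full of legitimate users defeats this.
    from collections import Counter

    THRESHOLD = 1000                 # assumed requests-per-log cutoff
    counts = Counter()

    with open("/var/log/webserver/access.log") as log:   # assumed path
        for line in log:
            ip = line.split(" ", 1)[0]   # common log format: IP first
            counts[ip] += 1

    for ip, n in counts.most_common():
        if n < THRESHOLD:
            break                    # sorted descending; rest are below
        print(ip)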
“Drop an IPS in front of the box and let it help manage the hit.”
To the extent that it can determine which requests are legitimate, that’s great, but in practice it can be impossible to tell; an IPS has even less information to go by than the application server.
“I mean, if you’ve done what you can to mitigate denial of service attacks and your upstream provider is literally over-run, then fair enough. If you simply discount denial of service as ‘not a security concern’, then: security fail.”
That’s the opposite of what I’m claiming. It’s an “availability fail”, but the security is still intact.
I guess we’re just arguing semantics here, but I’d rather the media distinguish between actual security failings and denial-of-service related downtime. Otherwise we’d start to hear about “security flaws” every time a company’s servers were overloaded.
“I firmly believe that the media attention LulzSec’s DDoS attack has recently received is deserving. It’s thanks to these guys, who’re exposing the blasé attitudes of government and businesses without any personal financial gain, that will make a difference in the long term to the security being put in place to protect our own personal data!”
Wow, that’s more than a little ignorant.
A DDoS attack does not help improve data security at all. DDoSing does not expose a data security vulnerability which needs to be fixed. Once the DDoS is over – there’s nothing to do within one’s network to prevent it from happening again.
Yeah, well, have you looked at the products SecurEnvoy peddles? Their “securemail” solution uses SMS (which we all know is awesomely secure and super encrypted) to deliver “secure” email.
There’s a word for that, and the word is snake oil, so it isn’t exactly surprising he wouldn’t have a clue.
I’ll agree with you that a DDoS attack does not demonstrate a lack of security… and this “security” company should know better than to call the attacks from LulzSec DDoS attacks. They clearly infiltrated “secure” systems, extracted data illegally, and released it publicly. That’s far from being a DDoS attack.
I would agree that the LulzSec attacks did shine more sunlight on the pitiful security practices that corporations and governments put in place to make things seem secure – but this SecurEnvoy CTO is clearly making idiotic statements.
If all they had done was to show weaknesses, then fine. But they also posted private info gleaned from databases, things like passwords, emails, & such — not the worst they could do, for sure, but still a nontrivial tick upward on the creep-o-meter.
More creepiness = more money for security companies. Their whole business is based on fear and distrust, after all…
…
Kind of like nuclear weapon engineering, in fact.
Makes you wonder who might really have been behind LulzSec, doesn’t it?
Even if it wasn’t the case to start with, it is to be expected that the LulzSec members will soon be offered golden job opportunities…
And people say crime doesn’t pay.
The very existence of hit men is proof to the contrary.
Soulbender,
“And people say crime doesn’t pay.”
If that were true, we wouldn’t have crime.
True. In this case, though, crime is outright encouraged, which is different from robbing the local 7/11 or selling smack at school.
I’d say LulzSec’s crimes are being exploited more than encouraged. People are taking the opportunity presented by the last month and a half’s events and applying it to all sorts of agendas. Some honorable, some benign, some profiteering.
I don’t think honorable infosec folks making the best of the opportunity is the same as openly encouraging such behavior. Granted, those who are encouraging further criminal acts and irresponsible disclosures are equally irresponsible.
Responsible disclosure could equally motivate big business to take its customers’ data more seriously, if one is really in it to improve security.
Unexpected? What is unexpected is that this hasn’t happened sooner.
Security service and, even more so, appliance vendors rely heavily on the Fear Sell pitch: “buy our product or else bad guys will be able to harm you.”
For security vendors, LulzSec provides a fantastic bit of recent news to base marketing around: “buy our product or else *these* bad guys will be able to harm you.”