AMD’s Athlon-64 (for PCs) and Opteron (for servers) will protect against buffer overflows when used with a new version of Windows XP. Intel plans similar features on next generation Pentium chips.
This is not an excuse for being a lousy programmer.
Why people keep using gets() and the billion other buffer-overflowable functions is beyond me. Schools really should teach secure programming instead of lazy programming.
you mean having accessor methods is bad? i’ve been learning programming at college for only a few years and i was taught that this was good programming practice
I don’t think uruloki is talking about accessor methods. I would assume he is talking about the gets() function in the stdio library in C. It reads a string from the standard input stream.
It’s declared as follows:
char *gets(char *buffer);
Which you may notice does not have buffer_length as a parameter. So you may do the following when calling this function:
char name[64];
gets(name);
Here, you make a programming assumption that no normal person has a name longer than 63 characters, and 98% of the time that assumption would be valid. But suppose someone is malicious or just has a crazy long name: then you would overwrite the buffer space provided by name. What happens from that point on is simply undefined. It could just crash, it could let someone take over your code, or any number of other things… Simply put, these types of functions can and should be avoided.
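For what it’s worth, a minimal sketch of the bounded alternative is fgets(), which takes the buffer size as a parameter and so can never write past the end of name (the names below are just for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[64];

    /* fgets() is told the size of name, so at most 63 characters plus
       the terminating '\0' are ever written into it */
    if (fgets(name, sizeof(name), stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';  /* strip the trailing newline */
        printf("Hello, %s\n", name);
    }
    return 0;
}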
In C code, especially in the libraries, there are many functions which could potentially overflow a buffer.
Most of the string-related functions are not protected against buffer overflows (strcpy…). However, most of them have ‘n’ versions like strncpy, snprintf… which limit the copying done.
sprintf is one of the biggest problems, as checking for buffer limits is non-trivial, as opposed to, say, strcpy, where it’s a simple check of the src length against the dst buffer size.
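A rough sketch of the ‘n’ versions doing the limiting for you (the function and buffer names here are made up for illustration):

#include <stdio.h>
#include <string.h>

void greet(const char *user)
{
    char line[32];
    char copy[32];

    /* snprintf() never writes more than sizeof(line) bytes, including
       the terminating '\0', no matter how long user is */
    snprintf(line, sizeof(line), "Hello, %s!", user);

    /* strncpy() bounds the copy, but does not guarantee a terminator,
       so add one yourself */
    strncpy(copy, line, sizeof(copy) - 1);
    copy[sizeof(copy) - 1] = '\0';

    puts(copy);
}

int main(void)
{
    greet("someone with a name long enough to overflow a plain sprintf");
    return 0;
}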
But this new chip is definitely a welcome design. Sure, it would be good if every programmer knew what he/she was doing, but having the OS/hardware capable of preventing such problems is a great idea. I wonder if enforcing such rules will also increase cache efficiency?
Every new programmer should be forced to use Perl until they know how to properly manage memory on their own. I’m still learning this myself.
why not just change compilers, or make smarter ones that catch memory leakage problems? Seems like it would be simpler.
I remember OpenBSD was going to support this in the Athlon64 chips. Having your apps crash when a buffer overflow happens helps debugging; it’s the reason I use Electric Fence to test programs. The feature is going to be wonderful for consumer computers in general.
You can also prevent buffer overflows and other memory corruption by building your binaries, and of course your kernel, with the PaX patches applied. On Linux, for example, the grsecurity patches are readily available and are quite impressive.
There are very few good books on secure programming in C. It’s unfortunate that most introductory textbooks do not hammer in security concepts, and ways to prevent memory corruption, right from the get-go. Defensive programming is hardly taught these days.
http://www.osnews.com/story.php?news_id=2694
we’re not talking about memory leaks. We’re talking about buffer overflows.
Compilers can do certain checks for buffer overflows, so could better standard libraries.
Errors are going to occur with almost anything you use. You just have to protect against it at every level possible.
Programmers trained better
Better libraries
More intelligent compilers (this kind of error would be amazingly hard to detect/fix, but who knows what they can do)
OS
Hardware
As in all cases, it’s nice to have problems solved in the upper layers. We also can NEVER trust users/applications to play nice with computers. For assurances and complete protection, it has to appear in the bottom layers (OS, hardware).
Descriptor tables in protected mode let you set read-only flags on memory. If your program utilizes these features then there is no need for any of this extra buffer overflow protection. The protection measures have been there since the 286; nobody decided to use them. If some sort of executable format were created that mandated that each process has an LDT that explicitly defined its data area as read/write and its code area as read-only, then the problem would be solved. Any attempt to write into a code segment would generate an exception.
Programming is just damn hard. I wouldn’t be so excited about this if I hadn’t noticed how easy it is to mess up memory.
also, fun
http://community.corest.com/~gera/InsecureProgramming/
…we’ve only had these on RISC architectures for years, although they certainly aren’t a silver bullet against buffer overflows.
Uruloki (IP: —.telenet-ops.be)
This is not an excuse for being a lousy programmer.
Why people keep using gets() and the billion other buffer-overflowable functions is beyond me. Schools really should teach secure programming instead of lazy programming.
Your statement assumes that the causes of buffer overflows are trivial, when in reality they often aren’t: they are usually the result of bad pointer arithmetic that relies on preconditions which aren’t being properly checked (i.e. bad input validation).
A better approach is to enumerate all the preconditions of a given function/method in a comment above the function/method, then place comments throughout the source to show where these preconditions are being validated.
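Something along these lines (a made-up example, not taken from any real codebase):

#include <stddef.h>
#include <string.h>

/*
 * copy_field()
 * Preconditions:
 *   - dst is non-NULL and points to at least dst_len bytes
 *   - src is non-NULL and '\0'-terminated
 *   - dst_len >= 1
 */
int copy_field(char *dst, size_t dst_len, const char *src)
{
    /* precondition check: reject bad arguments instead of assuming them */
    if (dst == NULL || src == NULL || dst_len == 0)
        return -1;

    /* precondition check: refuse input that would not fit */
    if (strlen(src) >= dst_len)
        return -1;

    strcpy(dst, src);   /* safe now: the length was validated above */
    return 0;
}

int main(void)
{
    char buf[16];
    return copy_field(buf, sizeof(buf), "short enough");
}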
Right — Page protections won’t stop buffer overflows from corrupting data, but when employed properly should stop them from being executed as code.
Microsoft was kind enough to “innovate” and add some sort of stack overflow prevention mechanism to 2003. However, POC code was floating about for rpc dcom (and others) so apparently it’s easy to circumvent.
There have been techniques to get around PaX, StackGuard, etc. In any event, some protection is better than none.
OpenVMS has had this for years; however, it only eliminates a certain class of attack. Does this new hardware stuff prevent heap overflows? hehe.
This sort of thing is very much needed, and it has nothing to do with lazy programmers. Shit happens, bugs sneak past even the most talented and observant coders. The whole “many eyes make all bugs shallow” thing that the Linux kiddies always quote hasn’t come close to eliminating common programming errors from OSS now, has it?
Having many layers of security is always better than having to rely on one or two.
Buffer overflows make me want to cry.
It pisses me off that so much code is still produced in C, because no one is willing to use an alternative (or actively believes that C is the best).
-B
why does it piss you off that people use C?
Anything embedded or lower level stuff will have to be written in C or some variant that shares most of its downsides.
People will use whatever language they choose, they are used to, they are told to use.
I for one first learned C++ and I’m more than comfortable with its semantics. That said, whenever possible, I use safe classes when I do use it, which tends to remove almost all ‘normal’ cases of buffer overflows. There’s simply very little need for a non-embedded app to use standard arrays or C style string operations. Given my choice, I’ll use C++ till the end of time as long as it suits my needs.
The great thing about C++ is that it’s as complicated/detailed as you want it to be. Using proper libraries like the STL, Qt, or whatnot makes code trivial and takes care of buffers/memory for you. The main thing any C++ programmer should learn is: unless you’re working for NASA or ATI or something, don’t optimize things like performance/memory. KISS is the best philosophy; a PC has more than enough for you. I know, I know, I’m responsible for all those inefficient apps out there. You gotta love GUI code which uses a byte count instead of an int counter.
For other GUI/development apps, VB is always fun and easy to use. Don’t diss it until you try it. Java is okay too, but I’ve never really done anything with it except academically. People do use many alternatives, but C or C++ will be around for a very, very, very long time.
Well, actually, when you are developing an operating system and want it to be as fast and small as possible, knowing that it will need close control of the hardware and flexibility in managing memory, yes, C is hard to beat.
About the problems with buffers: as Bascule, nqbus and Kingston point out, they’re not easy to defeat, and a new layer is very much welcome (hey Kingston, I used a VAX 720 a long time ago; it was amazing at that time).
It seems to me as if buffer overflows are actually a CPU architecture design flaw. If the stack grew upwards instead of downwards, buffer overflows would not be able to overwrite the return address on the stack, since any overflowing data gets written toward higher addresses while the return address would sit below the stack pointer. If buffer overflows cannot overwrite the return address, no malicious code would be jumped to.
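For reference, a sketch of the classic situation being argued about, on today’s downward-growing stack (the function and names are made up):

#include <string.h>

void vulnerable(const char *input)
{
    char buf[16];        /* sits below the saved frame pointer and the
                            return address within the current frame */

    /* strcpy() writes toward HIGHER addresses with no bound, so a long
       enough input runs up over the saved return address */
    strcpy(buf, input);
}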
It would be nice to have a way to correct a post (many have already asked for that).
Intel and AMD aren’t doing the programming for Microsoft. I practically fall on the floor laughing each time I see M$’s “security”-speak promises.
If M$ _really_ was interested in security, XP wouldn’t be easier to break into than ’95, 3.1, and DOS. I’m old enough to remember those. So what happened, besides the internet? Did programmers suddenly become much better programmers than they used to be? Fat chance.
M$’s idea of security will be stopping other programs from being installed on Windows. Meaning that M$ knows it can’t stop programs from behaving badly in protected areas of memory, so it is only going to do the one thing it is capable of doing: severely restrict what can be installed, and then charge companies BIG bucks to have their software approved.
Sorry, but innovation usually comes from small companies. These companies (actually just one or two people in the beginning) will never get off the ground, because they won’t be able to afford the M$ tax called an Entrance Fee.
um, uh… isn’t that why we have protected mode on the x86? And privilege levels of execution? Looks like bad OS architecture to me.
It’s quite surprising this is touted as a “new feature” when all the required protection mechanisms have been built in since the 286, and for many years on most other architectures.
As ByteEnable suggested, it’s most often a bad OS architecture, but it can also be, as jizzles hinted, a bad programming language selection.
Any language that allows you to write past the end of an array should have been put out of commission a long time ago. Languages like C and C++ should be replaced with Ada95 or any other safe languages.
Quite frankly, such news being posted on OSNews and touted as revolutionary, as solving the problems we had with all the worms, is simply disgusting.
Buffer overflows on the stack happen in memory that would be set as writable anyway, so this doesn’t help much there. Functions that do not check the limit or range of input data are not helping either.
Please ppl, get a clue.
This turns out to be an incomplete solution for buffer overflow attacks. It is still quite feasible to corrupt data, as another poster observed, and it is also possible, with some trickery, to hijack applications and get them to misbehave. This still does not really address the integer overflow attacks that have plagued MS apps and libraries lately. While this eliminates the obvious problem of an application loading code from a data page, it’s still possible to cause other malicious mischief.
I am also curious what kind of backwards compatibility MS is providing once they introduce this feature. Will applications that rewrite their code on the fly, or execute code on the stack, magically crash and refuse to work, or will MS provide a legacy interface which will render all of their newly implemented security work useless? Either way, it sounds messy for application developers, and for those looking to support legacy applications on a Windows platform.
It is also quite true, as another poster noted, that page protections have existed for years on x86. It’s somewhat amusing that MS has finally decided to leverage this for security purposes. Whether or not this will actually turn out to be useful, or be used, remains to be seen.
Oh, please. I certainly don’t like MS, but they’re not complete idiots altogether.
On ix86, R == X: whatever is readable can be executed. Hence exploits using the stack or the heap, memory areas that have to be both writable and readable.
Now, there is a way to work around this, using some trickery around the code segment, but having a real X flag, as other archs have, is the real solution.
Anyhow, that doesn’t stop the ‘return into libc’ kind of exploits, for which you only have to overwrite the stack.
Descriptor tables in protected mode let you set read-only flags on memory. (…) Any attempt to write into a code segment would generate an exception.
That’s not what this article is about. Most OSes set their code segments read-only; self-modifying code is out of fashion anyway. The problem here is the stack, which isn’t in your code segment, and which has to be RW.
On ix86 CPUs, if you can read from a segment, you can also execute from that segment. Other CPU architectures have had separate R/W/X flags before: if you jump into a segment which isn’t executable, you’ll get a CPU exception. On ix86, the processor will happily treat the data as if it were instructions and start executing it. Usually this leads to a quick death because the instructions don’t make any sense, but the exploit is about placing real instructions at a strategic place in the data segment and making the CPU jump there.
Note that this X bit will not solve all possible exploits, because one will still be able to overwrite the return address on the stack and make the app crash, or make it go somewhere it normally wouldn’t.
Are you sure Intel x86 had working execute permission on segments (which is what we’re really after to prevent this kind of buffer overflow) and not just read/write protection?
Ideally, for non-changing code, a code segment should be marked READ_ONLY with EXECUTE permission. Other segments should have the EXECUTE permission explicitly not set.
This may not be revolutionary, but it’s a damn big deal, as it’s one of the main desktop CPU makers doing it.
Oh, please. I certainly don’t like MS, but they’re not complete idiots altogether.
I never said MS were idiots. I’m just amused it has taken them this long to deploy any kind of buffer-overflow prevention, and am skeptical of the methods they use.
Now, there is a way to work around this, using some trickery around the code segment…
This is exactly what I said. There are workarounds for the methods they employ, so it’s hardly a panacea.
On ix86 CPUs, if you can read from a segment, you can also execute from that segment.
Not so. Ever since the 80286, protected mode has had separate types of segments for code and data. Code segments are executable and optionally readable; data segments are readable and optionally writable. The problem is that people nowadays map code and data segments to the same range of linear addresses, so that everything is accessible as both code and data. I suppose there are several reasons to do so: loading a segment register is slow, far pointers are larger, and APIs inherited from Unix (such as brk()) may assume a flat memory model.
The change described in the article is that AMD allows making individual pages non-executable. Pages are used in the translation from linear addresses (to which segment descriptors refer) to physical addresses.
If I understand correctly, AMD processors support this feature only in their 64-bit mode. Which makes sense, as this mode requires a flat memory model essentially without segments, so there’s no other way to make accessible data non-executable.
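On the OS side, the same idea is exposed through page protections. A rough POSIX sketch (assuming mmap/mprotect and a Linux-style MAP_ANONYMOUS) of asking for a buffer that is readable and writable but not executable:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;   /* one page, just for illustration */

    /* request a page that is readable and writable but NOT executable */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* protections can be tightened later, e.g. down to read-only */
    if (mprotect(buf, len, PROT_READ) != 0)
        perror("mprotect");

    /* only on hardware with a real execute bit (and an OS that uses it)
       does leaving out PROT_EXEC actually make jumping into buf fault */

    munmap(buf, len);
    return 0;
}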
“This sort of thing is very much needed, and it has nothing to do with lazy programmers. Shit happens, bugs sneak past even the most talented and observant coders.”
It absolutely does have to do with the programmers. If they coded securely, it would be a non-issue. And it’s not “laziness,” just “insecure coding.” I agree with you that, since eliminating ALL buffer overflows would be tough (and cost too much money and time), maybe a chip solution is better.
“The whole “many eyes make all bugs shallow” thing that the Linux kiddies always quote hasn’t come close to eliminating common programming errors from OSS now, has it?”
Your comment is total flame bait. No one mentioned Linux, and the article only mentions XP. In any event, it’s a double-edged sword. I can take the code of an OSS project such as AOLserver and parse through it looking for bugs; it’s a bit tougher with proprietary products. You aren’t seriously going to put Windows 2003 security on the level of Immunix Linux, are you?
“It pisses me off that so much code is still produced in C, because no one is willing to use an alternative (or actively believes that C is the best).”
All the checks slow down the speed of the code. C = fast. Have you seen benchmarks comparing C to some of the “safe” languages?
“If M$ _really_ was interested in security, XP wouldn’t be easier to break into than ’95, 3.1, and DOS.”
In defense of MS, XP has loads of services that 9x does not have. In fact, I’ve seen comparisons of Win 98 vs. OpenBSD (OpenBSD from 1999 or so). OpenBSD has several remote roots, while attacking a Win 98 system would be tough. (I am aware that OpenBSD supposedly only has “one” remote root; that’s ONLY applicable to current or some such.) Is OpenBSD insecure? No, it offers functionality that a stock Win 98 does not, so the comparison is invalid and meaningless.
In fact, in all my books, I think I have one chapter on attacking Windows 9x. Meanwhile, Linux and Windows NT have whole volumes written on hacking them and securing them.
“Languages like C and C++ should be replaced with Ada95 or any other safe languages.”
The DoD had, at one time, a standard that all new apps would be written in Ada. I understand it was rarely adhered to. Most everyone I’ve talked to who has coded Ada hated the language.
I’d be interested in what the mil uses.
“This turns out to be an incomplete solution for buffer overflow attacks. It is still quite feasible to corrupt data, as another poster observed, and it is also possible, with some trickery, to hijack applications and get them to misbehave.”
Format string bugs, etc., will still be around too. This is mainly for stack overflows (and perhaps heap). However, I think the network worms have mainly used stack overflows, so it would stop those, theoretically.
“It is also quite true, as another poster noted, that page protections have existed for years on x86. It’s somewhat amusing that MS has finally decided to leverage this for security purposes.”
2003 has some form of stack execution protection. However, it’s been circumvented.
“This may not be revolutionary, but it’s a damn big deal, as it’s one of the main desktop CPU makers doing it.”
Yep. We shall see if the blackhats can find a way around it.
In my view, the talented blackhats will almost always find a way in. So the goal is to lock out the network worms and the script kiddies. I mean, if you have a serious need for security, you wouldn’t be running Windows 2003 or Red Hat Linux anyway. There’s a reason DOCKMASTER was run on Multics until 1998.
> Have you seen benchmarks comparing C to some of the “safe” languages?
Yes, I know, impressive how fast “safe” languages are.
O’Caml usually ranks higher than C or C++ in the Great Programming Language Shootout (discontinued).