A US computer scientist and a Microsoft researcher have devised a program that drastically reduces application crashes and can prevent some types of attack by altering the way applications use memory. The program, called DieHard, officially launched over the holidays, following development funded by large grants from Microsoft, Intel and the US National Science Foundation.
I’m curious, is this anything like the Address Space Randomization that’s new to Vista, only for regular applications as well as system files?
Or do I have this all wrong?
I do see some mentions of extra features like protecting against dangling pointers and uninitialized reads, but short of DieHard knowing where the pointers are supposed to point and what the reads are supposed to read… well, I guess I’m not really qualified to understand how all this works.
Anyway, it doesn’t look like it will protect against all bugs, but it still sounds rather useful. I do wonder whether the performance hit from running multiple copies of the program (in ‘replicated’ mode) really makes it worthwhile. Maybe for a few important and buggy applications… I see it works on Linux; I might have to try it sometime.
It says they eliminate double frees and invalid frees. This makes me suspicious that they’ve just bolted garbage collection onto randomized allocation addresses.
It also says they’re running multiple concurrent copies of the process (interesting, if not totally ridiculous) so that when one crashes they can just throw another one up (not sure how they’re making sure the second one doesn’t crash in the same way…).
Edit:
Never mind, their PowerPoint shows they’re not doing this. They’re just marking each allocation and then checking, on a free, whether the allocation is valid and really still allocated.
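My guess is it’s something roughly like this under the hood (pure speculation on my part – a toy sketch, not their actual code):

/* Toy sketch of "mark each allocation, check on free" -- my guess, not DieHard's code */
#include <stdio.h>
#include <stdlib.h>

#define MAX_LIVE 1024
static void *live[MAX_LIVE];            /* table of currently live pointers */

static void *checked_malloc(size_t n) {
    void *p = malloc(n);
    for (int i = 0; p && i < MAX_LIVE; i++)
        if (!live[i]) { live[i] = p; break; }   /* mark it as allocated */
    return p;
}

static void checked_free(void *p) {
    for (int i = 0; i < MAX_LIVE; i++)
        if (live[i] == p) {             /* valid and really still allocated */
            live[i] = NULL;
            free(p);
            return;
        }
    fprintf(stderr, "ignored bad free(%p)\n", p);   /* invalid or double free */
}

int main(void) {
    char *s = checked_malloc(16);
    checked_free(s);
    checked_free(s);                    /* double free: ignored, no heap corruption */
    return 0;
}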
So it does what ElectricFence and similar memory-allocation debuggers do on Linux (mark freed blocks as invalid pages, pad the beginning and end of allocated blocks with invalid pages, etc.).
After looking at their page, they also do some more:
* Allocating memory randomly
* Running simultaneous copies and comparing results.
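My guess at what “allocating memory randomly” boils down to is an over-provisioned pool where each allocation gets a random empty slot – again, speculation on my part, not their code:

/* Rough sketch of randomized placement: keep the pool at most half full and
   hand out a random empty slot, so overflows and stale pointers rarely hit
   live data. My guess at the idea, not how DieHard actually implements it. */
#include <stdlib.h>
#include <string.h>

#define SLOTS     256                   /* pool deliberately ~2x bigger than needed */
#define SLOT_SIZE 64                    /* one size class: 64-byte objects */

static char pool[SLOTS][SLOT_SIZE];
static int  used[SLOTS];
static int  live_count;

static void *random_alloc(void) {
    if (live_count >= SLOTS / 2)        /* never let the pool get more than half full */
        return NULL;
    for (;;) {
        int i = rand() % SLOTS;         /* pick a slot at random */
        if (!used[i]) {
            used[i] = 1;
            live_count++;
            memset(pool[i], 0, SLOT_SIZE);
            return pool[i];
        }
    }
}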
So it’s interesting. Microsoft is really trying to fix their image. I hope they can make this kind of technology stable and open to general use soon.
The simultaneous-results part seems to only work for command-line Unix programs, which doesn’t help Windows much (those processes would be far too expensive to run in multiple copies in most cases), or OS X, or most of the Linux programs people use (I’m not saying they don’t use the CLI, just that they don’t have much of a problem with CLI programs anyway; it’s Firefox and the like that break often).
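For a CLI program the “compare results” idea is conceptually no more than this (a toy illustration of the concept only – obviously not how DieHard actually wires the replicas up, and I run them one after the other for simplicity):

/* Toy version of "run replicas and compare their output" for a CLI program */
#include <stdio.h>
#include <string.h>

static size_t run(const char *cmd, char *buf, size_t cap) {
    FILE *p = popen(cmd, "r");          /* run the command, capture its stdout */
    if (!p) return 0;
    size_t n = fread(buf, 1, cap - 1, p);
    buf[n] = '\0';
    pclose(p);
    return n;
}

int main(void) {
    char out1[4096], out2[4096];
    const char *cmd = "echo hello";     /* stand-in for the real program */

    run(cmd, out1, sizeof out1);        /* replica 1 */
    run(cmd, out2, sizeof out2);        /* replica 2 */

    if (strcmp(out1, out2) == 0)
        puts("replicas agree -- trust the output");
    else
        puts("replicas disagree -- one of them misbehaved");
    return 0;
}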
Apparently it increases Firefox’s memory consumption by 50-75% (according to the website).
With Firefox already eating gobs of memory (hell, it’s why I switched away from it), that’s just ridiculous.
I just tried installing it for Firefox on WinXP, and it’s taking about 50,000K just to have this page open… ridiculous!
Well, maybe making the web page compliant would be a start to stop it crashing? The W3C HTML validator finds 10 errors on the page, notably “This page is not Valid (no Doctype found)!”.
Next we’ll have an app that keeps ‘DieHard’ itself from dying?
I think the point is that some bad guy will make something that crashes your browser and then exploits your computer.
I downloaded and installed Die Hard for Firefox. I then ran Firefox. It immediately crashed, twice in a row. Then it ran for a bit, with Die Hard active (I could tell because Die Hard inserted “[Die Hard]” into the title bar), until I tried to load osnews.com, and it crashed again. Currently Firefox crashes immediately on launching.
This is with Firefox 2.0.0.1 – now, granted I’ve not found this version to be particularly stable on my machine, but it was running fine until I launched Die Hard. There doesn’t appear to be any exe I can kill to remove Die Hard, so it appears I will have to reboot to fix it.
worksforme
Is it just me or does this look like a fresh implementation of technologies that OpenBSD has had for quite a while?
It sounds a good piece of development but not perhaps totally novel research.
Also appears to back up the notion that reliability and security are a function of engineered quality rather than how long your track record is at CERT.
The canard that (say) OpenBSD only appears secure because fewer people use it, and therefore it gets attacked less, is part of a nonsensical viewpoint that still believes in security by obscurity.
This is an end-of-the-pipe solution: instead of ensuring that software is written to a high enough standard, DieHard increases the chance that buggy software works. Why doesn’t it surprise me that this kind of research was sponsored by Microsoft?
Yeah, it’s the wrong way, unless it also tracks the errors and can send a report back to the developer with stack traces of their buggy code.
It reminds me of bowling with bumper lanes.
I agree to some extent. It seems like a valid thing to do with programs like Apache, which may be written to the highest possible standard but where you add these features during actual use to protect yourself.
Well, at least some of this stuff. Some of it just sounds silly to me.
Now if it gets to where it’s hard to set up a development machine with these features off, then we’ll have a problem.
The basic premise is wrong: “programmers are still writing code as if memory is in short supply”
If you use less memory to store your data, more data fits in the cache. So for performance’s sake, it’s always a good idea not to waste space.
Anyway, you still want to save memory so that memory consumption scales reasonably when your application handles a lot of data.
And then, randomizing addresses… It sounds horrible when it comes to heap fragmentation, plus if a lot of small objects are allocated it will waste a lot of physical memory because each allocated block will get its own 4 KB memory page (ten thousand 32-byte objects would then tie up roughly 40 MB of pages instead of about 320 KB).
This appears to be addressed in the FAQ on the DieHard site:
“How does DieHard differ from Vista’s, OpenBSD’s, and Linux’s ‘address space randomization’?”
http://prisms.cs.umass.edu/emery/index.php?page=frequently-asked-qu…
The guy’s FAQ is even wrong; that Ruxcon paper he links to was out of date three days after its release. He should look into how OpenBSD works now, rather than how it did in the past.
The fact of the matter is that this guy is a whack job. DieHard is a bloated tool that does nothing because it isn’t integrated into anything. Without system integration it can easily be worked around, because it only protects what starts after it does, while the Vista, OpenBSD and grsecurity stuff is actually worth a damn: it’s in the system, so it’s not possible to avoid the effects they have on the system.
I thought the name Emery Berger rang a bell. Of course! I remember… the same guy who made Hoard. http://www.hoard.org/
It still remains to be seen how useful it is (since the memory tradeoff is pretty steep), but kudos to Microsoft, Intel, and the US National Science Foundation for making the work available.
Ok, the article says this:
The problems Berger wanted to address were caused by the fact that despite the huge amount of memory installed in today’s PCs, “programmers are still writing code as if memory is in short supply”.
No, the problem is that “researchers” like THIS enable bad developers to continue writing bad software:
DieHard also secretly launches multiple copies of some applications, so that when one application is about to crash it can be shut down, while the others continue to run.
So, HIDE the problem from the user – nice… the user will continue using “bad” software and developers will continue being lazy.
and
One caveat is that DieHard noticeably increases memory usage – a rise of 50 to 75 percent, according to Berger. Application performance shouldn’t be noticeably altered, though, as long as the system has sufficient memory.
It appears the reason this works is that it basically DOESN’T free the memory, even when the developer asks it to. This is a great way to take the power of C/C++ to write tight, memory-efficient, fast code and flush it down the drain.
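If that reading is right, the mechanism would be something like a quarantine that delays reuse of freed blocks – my own sketch of the idea, not anything from their code:

/* Delay reuse of freed blocks -- my sketch of the "doesn't really free" idea */
#include <stdlib.h>

#define QUARANTINE 128
static void *pending[QUARANTINE];   /* ring buffer of frees we're sitting on */
static int   head;

static void delayed_free(void *p) {
    free(pending[head]);            /* really release only the oldest entry */
    pending[head] = p;              /* park the new one for a while */
    head = (head + 1) % QUARANTINE;
}

A dangling pointer then keeps pointing at stale-but-intact data for a while, and the cost is extra held memory, which would square with that 50-75 percent figure.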
What a crock of shit. I use Firefox, and certainly don’t need this crap to use it. After all, Firefox is just an app, it’s not my whole OS.
I was actually going to ask you about this. The reason I remembered Emery Berger is that he made this for BeOS: http://www.bebits.com/app/717
I’m not familiar with it – but I notice there’s no talkback on that bebits entry! I just can’t see any good reason to allocate more memory than is needed.
I know someone to ask what British Telecom might be using it for (since it’s listed on Emery Berger’s Hoard page).
All in all, it almost appears that this DieHard software is another way to advertise this guy’s technology.
Edit: Oh, I see it’s LGPL – well, maybe he’s not selling it after all.
Well, the real Hoard talkback is http://groups.yahoo.com/group/hoard/
because, unlike BeOS, Hoard is alive and well on other platforms.
This was interesting:
http://tech.groups.yahoo.com/group/hoard/message/709
Well, I found that Emery Berger’s Hoard allocator is part of the Haiku malloc implementation – so I guess it’s not all that bad.
Anyhow, had to post this for posterity.
…that OSS/Linux projects have lame and “unprofessional” names…
like PySBIG, MQ4CPP and SpadFS?
When I read this I immediately thought of WehnTrust:
http://www.wehnus.com/index.pl
I’ll believe it when I see it, and test, test, test it.
omg, Windoze fans are braindead
notice how I didn’t say Windoze users – that would be hypocrisy – I am a Windows user (I have to be at work, thanks to a braindead, stupid IT admin), but I prefer Linux, specifically Debian (for its stability)
my favourite icebreaker with girls: computers are like air conditioners – they’re useless when you open Windows
roflmao
Yippie kay yay!
One caveat is that DieHard noticeably increases memory usage – a rise of 50 to 75 percent
Microsoft’s standard operating procedure (SOP) for years has been to release software that uses more hardware resources than previous versions. This is hugely advantageous for them because of the following scenario.
You buy an upgrade to your Microsoft software and install it on your existing hardware. Then you notice your computer seems slow, so you conclude that you need a new computer and you buy one. But you just bought a new copy of the software, so you don’t need to buy yet another one with the new computer, right? Too bad! Microsoft demands that you buy another copy.
By forcing a hardware upgrade, they caused you to buy TWO copies of their new version instead of only the one that you needed.
Seems like plenty of folks think you shouldn’t allocate more memory than needed.
I am personally tired of seeing code like:
char name[7];
strcpy(name, "bob rob");
Especially if it’s on the stack. Why would you ever allocate exactly 7 chars? In this case, it’s a bug. But even if it weren’t, why not just allocate a reasonably large buffer: char name[64], or even 128.
Programmers micromanaging memory is a huge problem. Yes, for large arrays/collections of data, you should optimize it, but in general…allocate more. For arrays, allocate 1 or 2 more than your predetermined size.
I’d be nervous using a bridge that would only hold exactly my weight. People using arrays should feel the same way.
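To spell out what I mean with the (made-up) name example above:

#include <stdio.h>

int main(void) {
    /* "bob rob" is 7 chars plus the terminating '\0', so it needs 8 bytes
       and char name[7] overflows. Just give the buffer room to spare. */
    char name[64];
    snprintf(name, sizeof name, "%s", "bob rob");   /* bounded, always terminated */
    printf("%s\n", name);
    return 0;
}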