Operating system vendors were given two months notice before a security flaw was made public, but some have yet to resolve the issue, a security researcher has claimed.
Because most OS makers realise it isn’t a flaw worth worrying about for them. The real issue is with the library in which the flaw was found, not with the OS.
From the end of the article: “and a workaround has now been posted.”
Now, “workaround” means it’s unpatched but there is a way to avoid the problem. My recollection is that the workaround is this: turn off HT. That workaround works for everyone…
So, has anyone really fixed it? My opinion is that it’s Intel’s problem, and anyone else who fixes it is just being nice.
As one engineer said:
Well, if you read the paper you’ll see how difficult this would be to do in practice – and this attack requires that the attacker be able to run his binary on the other thread in the cpu… So – if you allow hostile folks to run binaries on your machine while you’re doing ssl – yes, disable ht or simply offline one of the threads on each processor w/ psradm. Otherwise, a “that’s interesting” would seem to suffice. -Bart Smaalders
It seems strange that this guy is jumping up and down waving his arms and seemingly no one is paying any attention to him. Perhaps the risk is far less than he claims. Or, perhaps, the various “vendors” are getting too lazy.
If the latter, a zero-day exploit will get everyone’s attention.
Well, it would hardly be a zero-day considering it’s widely known about at this point. As for the severity of the issue, I guess time will tell, but for right now this guy is on the same list as Steve Gibson IMO.
A Microsoft representative said while the company was investigating Percival’s report, it was not aware of any active attacks using this method at this time and would wait until completion of its investigation to take action.
This is what really upsets me about some companies, not only Microsoft (though Microsoft was a little more honest than the others). But how the hell can you wait just because there is no report of an attack?… Disable the feature, complete your research, and then take the required action… but wait?
The problem with waiting for a “fix” to OpenSSL is that the vulnerability is
with the information disclosure between processes and not within a particular
library. Unfortunately, making a change to OpenSSL will not be a complete
solution.
See ftp://ftp.freebsd.org/pub/FreeBSD/CERT/advisories/FreeBSD-SA-05:09 for brief
details.
The problem with waiting for a “fix” to OpenSSL is that the vulnerability is
with the information disclosure between processes and not within a particular
library. Unfortunately, making a change to OpenSSL will not be a complete
solution.
That is actually incorrect. The issue is that HT processors allow context switches without flushing the cache lines. Libraries like OpenSSL aren’t random about where they place key data so it’s possible for a carefully crafted process to read data from those known memory locations. The various OS vendors probably won’t do anything about that since it is a library problem. The only solution from the kernel side of things is to force a cache flush during context switches, but that slows down everything for the benefit of a security library with a design flaw.
This is the same kind of problem as the warning about “insecure memory” that GPG prints if you run it as anyone but root. After the program is done executing, it’s possible for its pages to be given to another process, which can then read key data. Even worse, if the GPG key data is swapped to disk, anyone can go through swap space and look for key data until it’s overwritten.
Furthermore, if you’re allowing untrusted parties to run executables locally, there are much simpler methods of attacking SSL than this.
The guy who discovered this problem has been seriously over hyping the scope.
> … that slows down everything for the benefit of a security library with a design flaw.
I’m no computer scientist (I’m a physics student), so I can’t evaluate your knowledge on the subject and the correctness of what you wrote, but I want to point out that Colin Percival, the researcher the article talks about, wouldn’t agree with that statement. It doesn’t seem to be an SSL-specific problem – otherwise presumably it could be easily fixed.
From the paper (http://www.daemonology.net/papers/htt.pdf), in the very last paragraph:
“We note, however, that due to the large number of cryptographic libraries in use, and the vastly larger number of applications which use (and distribute) them, it is impractical to attempt to fix all cryptographic libraries and applications in a narrow time frame, so the first line of defence against the particular issues discussed here must come from the operating system.”
[I meant “OpenSSL-specific”, of course]
I’d also add that the guy from the article, besides being a FreeBSD security team member (that’s part of the reason why FreeBSD was the first OS to be patched against this), is studying for a PhD at Oxford (23 years old, got his CS degree at 19). Of course this doesn’t automatically mean he’s right and/or serious, but I seriously doubt he’s the kind of person to overhype something like this in order to get a reputation – he doesn’t seem to need that.
That is actually incorrect. The issue is that HT processors allow context switches without flushing the cache lines. Libraries like OpenSSL aren’t random about where they place key data so it’s possible for a carefully crafted process to read data from those known memory locations.
No, you are completely incorrect.
The issue has to do with an HT thread being able to cause and detect cacheline invalidations in its sibling, due to cache line aliasing.
There is no direct data leakage.
ulib: Of course this doesn’t automatically mean he’s right and/or serious, but I seriously doubt he’s the kind of person to overhype something like this in order to get a reputation – he doesn’t seem to need that.
Perhaps. But when I read his paper I had to wonder about how practical such an attack would be. I mean really, one thing about security (in general) is that you can’t run after every little issue; you go after the important ones first.
And when I read his article, it sounded… possible… but highly unlikely that such an attack would work well – it seemed it would be far better to try a different approach.
For example, (maybe someone would know better than I) when I read his paper, I had to wonder how much garbage his spy program stole along with the key. That’s actually pretty important in my opinion, because if a spy has to go through too much garbage, there was no point to spying to start with. (As well as what the garbage looks like. If it’s all “zeroes” for example it wouldn’t be too hard to go through it, but if it’s a jumble, that’s going to be hard.)
Maybe there was no garbage, but the issue (if I remember correctly) was not addressed in his paper, and I wasn’t going to set up his experiment for him.
And then there was the fact that he set up the test so that it had an “ideal” environment. That’s not even remotely realistic in my opinion.
To me… The issue of how “doable” this attack is needs to be addressed before actually pursuing a solution. In fact, if I was the one pursuing this topic I would have attempted to address that already.
Or in other words… I think the OS vendors and hardware people have more important things to do.
If you check out the American Department of Defense standard 5200.28, “Trusted Computing System Evaluation Criteria” (the so-called “orange book”), you’d see that the highest security classification (A1) is the only classification that mentions covert channels, and it states that these covert channels need to be “identified, analyzed and justified” – it doesn’t say they actually need to be prevented.
This is the highest security classification in the highest security specification I know of – AFAIK no OS has ever actually achieved an A1 rating. For example, Windows NT got a rating of C2 only as long as you don’t plug it into a network(!). IMHO the main reason no one cares about covert channels is that a secure OS should prevent the required “spy” software from being installed or run in the first place.
In summary, it’s not a problem for Intel, it’s not a problem for Microsoft, it’s not a problem for other OSs – it’s simply not a problem at all (unless someone somewhere sometime decides to try to get an A1 classification).
While the original author of the research might not be the type of person to deliberately overhype things, it is possible that his perspective has lost touch with reality.
-Brendan
I mean really, one thing about security (in general) is that you can’t run after every little issue; you go after the important ones first.
If this were a generic ‘security issue’, I’d completely agree.
But when you’re talking about a specific flaw that it has *proved possible* to exploit, like in this case, I strongly disagree. I expect a serious OS vendor to promptly take action: from the article, it appears that vendors like Red Hat, Novell, Mandriva, Microsoft didn’t.
ulib: But when you’re talking about a specific flaw that it has *proved possible* to exploit, like in this case, I strongly disagree.
Key words “proved possible”. Lots of stuff is actually “possible” that hasn’t been fixed yet. That’s where being practical comes in. You hunt down the important ones first.
Honestly… over the years I’ve had a number of “security hazards” of varying complexity introduced to me. A number of them I had a bit of trouble understanding until a couple of key details were mentioned, at which point I immediately saw how the attack could be used.
With this… I see nothing. I see how it works, but I see no way (at the moment) to make use of it. Maybe it’s something which I don’t know the right details about. But if that’s the case Colin Percival could have done a better job of explaining, because I’m obviously not the only one who is having trouble with this.
And remember, his test (if my memory serves me well) was under “ideal conditions”. I could sit here and expose security issues probably every day for the next month (maybe longer) using “ideal conditions” if I wanted to. But “ideal conditions” don’t interest me. (That’s why, with my own projects, I intentionally try to rip them apart, and I never set up “ideal conditions” except for a first test to prove that something just might, on a long shot, work. To me the real test is “real world conditions”, or as close to it as I can get. I always ramp up the pressure on my projects until they break. Then I fix them.)
(In fact… If ideal conditions are of interest to you and others, maybe I should publish some of my “first test” work. For example, I have a few algorithms that operate in O(1) time given “ideal conditions” which almost never happen in real systems to my knowledge. I could probably publish about 10 times the number of papers.)
And if you can’t make use of it, what’s the point?
Also… as if that wasn’t enough, there is already a fix: turn off hyperthreading. Not an ideal fix, but it does work.
Maybe there’s something I’m missing… But I see this so far as something which is virtually a non-issue.
There are all kinds of exploits and hacks IF you have the right conditions and the time and the knowledge. Maybe a large commerce site or something like that should be worried, but even then I think the chances are slim to none that anyone would pull this off…
Key words “proved possible”. Lots of stuff is actually “possible” that hasn’t been fixed yet. That’s where being practical comes in. You hunt down the important ones first.
If you put it like that, your words are misleading: the paper (provided it’s authoritative: I really think so, but of course this is open to the reader’s judgement) states clearly that it’s not simply “possible” that there is a security flaw. There *is* a security flaw, period. It’s “possible” to exploit it, in the sense that everybody who tries hard enough *will* exploit it. This is what has been proved.
In this situation, I really think that an OS maker is supposed to promptly issue a security advisory. The BSDs did it: security advisory and patch. I understand that they had an advantage, due to the role Colin has in the FreeBSD project.
Still, I don’t think that downplaying security flaws – that is, what the other vendors did – is a sensible strategy, nor do I think that it goes in the customers’ interests.
If a new way is discovered that enables a malicious fellow to hack my system, I want to know – independently of how likely it is to happen.
It will not get fixed; that is how they will force their customers to upgrade.
Ulib, if you are worried about this exploit, then you might want to think about taking the ethernet card and/or modem out of your computer because both are definitely “a security flaw.” And, you better make sure that your computer room is fireproof and the door is welded shut, because if not, then you are not “SECURE”
“egedra (IP: —.dsl.wotnoh.ameritech.net)” wrote:
Ulib, if you are worried about this exploit, then you might want to think about taking the ethernet card and/or modem out of your computer because both are definitely “a security flaw.” And, you better make sure that your computer room is fireproof and the door is welded shut, because if not, then you are not “SECURE”
[Sadly, I could have sworn that somebody, sooner or later, would have made a comment like this..]
Deciding if I want to make my computer room fireproof falls under *my* responsibility.
On the other hand, I would expect that when a new way of exploiting my system is discovered, the OS maker would tell me – and maybe provide a solution/workaround/patch, like the BSDs have done.
These two things (and here’s where your logic is badly flawed) have absolutely nothing to do with each other.
Regarding the ethernet card and modem that would be “security flaws”: of course it’s very likely that there are ways to hack my system that haven’t yet been *discovered*. I think everybody’s taking this for granted (of course with different OS choices come different degrees of security).
But again, this is totally unrelated (badly flawed logic n.2) to my expectation that when a new way *does* get discovered, the OS vendor would act accordingly, by 1) letting me know that there is a new (more or less dangerous) exploit, and 2) providing a patch/workaround/solution.
ulib: If you put it like that, your words are misleading: the paper (provided it’s authoritative: I really think so, but of course this is open to the reader’s judgement) states clearly that it’s not simply “possible” that there is a security flaw. There *is* a security flaw, period. It’s “possible” to exploit it, in the sense that everybody who tries hard enough *will* exploit it. This is what has been proved.
No my words aren’t misleading. The paper (if my memory serves me well) in fact states that he did this under “ideal conditions”. Ideal conditions are very far and away from the real world.
And this is why no one is taking him seriously. It’s like a “PhD physics student” claiming he has created a “new weapon” that the world needs to worry about. However, this weapon only works in a perfect vacuum, with no gravitational field, no electromagnetic field, and no light. Which, I’m sorry, would not match Earth or Space. But then this “student” claims the countries of the world should develop a defense! But no “professional physics people” can see how this weapon could possibly work in real world conditions and so they ignore him.
My point is… And has been throughout… Ideal Conditions are NOT Real World conditions by any stretch of the imagination. And he doesn’t even mention anything about what would be necessary to make this attack work in the real world.
My other point is… that lots of things work with an ideal-conditions setup that would have a minimal to “no way” chance of working in the real world. As a result, he really didn’t prove much in my opinion. (I consider work like his to be preliminary to the “real” work, as I basically stated before. Like in my case: before I dedicate “months” of my time, I already know something is at least possible under ideal conditions, but that isn’t the question I’m trying to answer.)
Without mentioning it, it makes me wonder whether he actually doesn’t understand anything, or whether he already knows the answer but it undermines what he wants. Meaning it would be too hard to make it work and he knows it, but he wouldn’t be able to get any attention by mentioning it.
In fact, I’m not the only person who realizes this. I can’t remember any “real computer science proofs” I’ve read in the past that examine “ideal conditions” only. Even a number of prototype projects I’ve read about throughout the years frequently touch upon “real world conditions” even if they haven’t been fully addressed.
I distinctly recall discussing some of them with other people and the ones that did the least about real world conditions were frequently laughed at.
What this means is not that every possible issue needs to be addressed, but that there needs to be some solid evidence that it is actually a concern. And he has not provided any.
I guess the rest of us could “do his research for him”. But I’m not really interested myself and I consider it his job, since he started this.
As an example, what about the garbage? I would swear that there would have to be garbage information collected by the spy. Not so much as a whisper about such things. We are just told that it acquired the key under his ideal conditions.
(Now I’ll admit maybe there’s something I forgot or missed. But something tells me no, considering the number of people and the type of people who are “ignoring” him, like Linus for example whom I have a great deal of respect for.)
Deletomn (IP: —.225.210.210.Dial1.Denver1.Level3.net) writes:
> The paper (if my memory serves me well) in fact states that he did this under “ideal conditions”.
> Ideal conditions are very far and away from the real world.
>
> (…..)
Your memory doesn’t serve you well. The paper (http://www.daemonology.net/papers/htt.pdf , I already posted it before) doesn’t talk about “ideal conditions” at all – it doesn’t even contain the word “ideal”.
In fact, it deals squarely with real-world conditions, as you can simply deduce by reading the abstract:
“Simultaneous multithreading – put simply, the sharing of the execution resources of a superscalar processor between multiple execution threads – has recently become widespread via its introduction (under the name “Hyper-Threading”) into Intel Pentium 4 processors. In this implementation, for reasons of efficiency and economy of processor area, the sharing of processor resources between threads extends beyond the execution units; of particular concern is that the threads share access to the memory caches.
We demonstrate that this shared access to memory caches provides not only an easily used high bandwidth covert channel between threads, but also permits a malicious thread (operating, in theory, with limited privileges) to monitor the execution of another thread, allowing in many cases for theft of cryptographic keys.
Finally, we provide some suggestions to processor designers, operating system vendors, and the authors of cryptographic software, of how this attack could be mitigated or eliminated entirely.”
I was referencing something of this nature, which could make my argument valid. Not saying that this problem still exists, just that there are many hardware-related vulnerabilities out there that the OS developer is not responsible for.
http://www.pcworld.com/news/article/0,aid,108771,00.asp
ulib: Your memory doesn’t serve you well. The paper (http://www.daemonology.net/papers/htt.pdf , I already posted it before) doesn’t talk about “ideal conditions” at all – it doesn’t even contain the word “ideal”.
You don’t have to explicitly state ideal to make it ideal conditions. Everything you’ve mentioned I remembered. Here’s something else I remember…
He started the observed process and the spy process at the same time. (Unrealistic. When is that going to happen in the real world?) He wasn’t running anything else at the same time. (Unrealistic.) He also made sure they were running on the same processor. (Unrealistic. There is no way to ensure that they will run on the same processor in the real world.)
Need I go on?
Everything I remember about his proof revolved around ideal conditions; nothing deviated that I remember. He basically stacked the deck in favor of his proof. Real proofs don’t do that. (At least not completely. They usually accept the real world in some manner, not just talk about what they’d like to happen.)
Also on page 8… He admits “without these steps an attacker might need to make several attempts”. (I looked that up really quick BTW) I consider that an admission of him using ideal conditions to prove his point.
I didn’t see anything in the whole paper that gave me any idea as to how many attempts that might actually be… 7? 100? 10,000? 10^1,000?
As far as I understood from the paper, he never even gave it a shot. He just waves it off.
He even briefly mentions the “noise” that will enter the channel, but doesn’t really address the issue, he mostly just waves it off.
He basically skipped over everything that would have proved that this attack would work in the real world. And I don’t know about you, but for three months of work and/or the amount of noise he’s making, I’d say that’s pretty weak.
One thing that would have been convincing would have been if he ran numerous tests and gave us statistics on how successful each test was and how long it took to retrieve the key.
Instead he gives us one ideal test and expects everyone to go nuts over it. What he wants is not realistic considering what he gave.
You don’t seem to understand that while he may be right, his work is not up to the level necessary to be as convincing as he wants it to be. And then, as I already said… there’s already a fix… turn off hyperthreading. And a number of people no doubt have bigger issues to deal with already. Between the three, he’s going to be largely ignored (unless someone helps him out by completing his research for him and it turns out to be actually worthwhile). Sorry, but it’s true. If he doesn’t want that, then he should have done a better job; it’s as simple as that.
If I were him, I’d be refining my paper right now.
Let me add one more thing.
I don’t know all about the internals of the Pentium 4 and how hard it would be to do what he wants, and I don’t know enough about every single OS’s internals to know how hard it will be to correct this “problem”. I also don’t know all about the side effects for these systems (though based on what I do know, I do believe there will be “side effects” that are negative).
However… I am a member of a non-public (at the moment) experimental OS project. The time it will take us to correct this problem, considering how our internals work (they’re pretty exotic; that’s why it’s experimental), is projected at a minimum of one year. And in the end, it is believed it will require a complete rewrite.
In addition… Because of how our system works, we are even less convinced that it is a threat to our particular system.
Now maybe it’s just us that would be hit so hard by this. I don’t know enough about the internals of Linux, for example, to vouch for that. (But Linus at this time doesn’t consider it a problem, and I trust his judgement.) But considering how much effort it would require, our project requires more assurance that this is an actual threat that can be pulled off in more than a laboratory setting.
Also… One person in an eWeek article stated the following:
The approach would take some time. “It would probably be easier to do a social engineering experiment and just walk in there and steal the d–n box,” Fleming said.
and as another person said…
“In order for this particular exploit to be launched in a system, the system has to already have been compromised”
All of which I agree with. That article can be found at http://www.eweek.com/article2/0,1759,1815954,00.asp
Anyone who has a reasonable amount of knowledge of computer hardware, security, and operating system design and implementation should really see that the threat is quite real in certain reasonably common scenarios. Unfortunately, even 95% of CS students who have recently completed their courses on all these subjects seem to fail when it comes to understanding how all these systems interact.
A typical, feasible scenario would be stealing the private host key (let’s say, for ssh, in this case) on a multi-user system. With this covert channel, the attack is possible for regular users (in this case, one could consider a victim host that’s a server hosting services for multiple clients at a colo). As long as the system doesn’t have excessive interrupt load, the typical time between context switches is in the region of hundreds of thousands of clock cycles, in which time a spy process can collect a considerable chunk of sample data. Compared with other sample runs, these form the pieces of a jigsaw puzzle: the complete latency view of an OpenSSL calculation with the private key, and thus the key itself. Multiple pieces are not really a problem, since the attacker can initiate the calculations involving the private key himself, just by connecting to the ssh daemon (in this case, probably from outside) and aligning the sampling with the connection time.
After the private host key has been stolen, the attacker can impersonate the host with another host. Agreed, if one really wants to benefit, DNS spoofing is also necessary – but DNS spoofing *is* widely used nowadays, anyway. If the attacker manages to set up DNS spoofing and install the key on his server, he can run an ssh server that looks like an ssh “known host” to users who rely on host key fingerprints provided by the ISP, and they can’t really tell they’re logging into an entirely different machine. Depending on the sophistication and goals of the attacker, he can do all kinds of nasty things after this with the passwords, transported data, and such that he now has.
Obviously, this attack applies, in different ways, also to secure http connections, and some others.
“As long as the system doesn’t have excessive interrupt load, typical time between context switches is in the region of hundreds of thousands of clock cycles.”
Of course, the above includes an implicit assumption of userland-dominated processing (= not many system calls), which in the case of public-key cryptography is the case. Unless the crypto is hardware-accelerated, which is an entirely different thing anyway.
My point was not that it’s absolutely impossible. And my point wasn’t that I don’t understand.
My point was that Colin Percival did NOT finish his paper (how many times do I have to say stuff like that before you people will actually read it?) and is expecting a bunch of people to take him seriously. And not all of us have the time to sit down and do it for him. It should have been rather simple for him to finish it. But he didn’t.
That’s basically the long and short of it.
The purpose of the paper is to state that hyperthreading technology provides a timing-based covert channel between OpenSSL key operations and a spy process designed to capture the data from the covert channel. It obviously isn’t the last word in research in any way – the purpose is to show that such a problem exists, show it’s feasible to exploit at least under some circumstances, and provide a description which future research can use to verify or disprove the feasibility under various working conditions.
The point of the paper is not to provide theoretical background on anything that a scientifically literate reader should either know, or learn easily, such as noise in timing-based covert channels, OS scheduling, memory hierarchy behavior, or timing-based attacks against cryptographic algorithms. One should also note that defining “typical conditions” instead of “ideal” or “worst-case” conditions is impossible for most real-world systems. For me it’s enough to know that it works on an idle system, which can be seen as a “typical condition” on many hosts.
If you want to disprove that the flaw is feasible, it’s very easy. The paper provides enough details and references to do it yourself.
Anonymous (IP: —.tky.hut.fi): The purpose of the paper is to state that hyperthreading technology provides a timing-based covert channel between OpenSSL key operations and a spy process designed to capture the data from the covert channel. It obviously isn’t the last word in research in any way – the purpose is to show that such a problem exists, show it’s feasible to exploit at least under some circumstances, and provide a description which future research can use to verify or disprove the feasibility under various working conditions.
And yet, the author wants OS makers to address this right away. What you describe this paper as being is acceptable, what it is being used as is not.
Anonymous (IP: —.tky.hut.fi): The point of the paper is not to provide theoretical background on anything that a scientifically literate reader should either know, or learn easily, such as noise in timing-based covert channels, OS scheduling, memory hierarchy behavior or timing-based attacks against cryptographic algorithms.
That’s fine, except that, as I said before, OS makers are expected to address this. Everyone has their “speciality”; we cannot be familiar with every single little thing. (For example, the one thing here I am not familiar with is the noise in covert channels. Now, I know a little bit about such things, because one of the subsystems in our project uses something similar for communication rather than a direct link; surprisingly enough, it was more efficient than a traditional channel in our case. But it is not identical in every aspect. In our particular mechanism we resolved the noise issue fairly easily, so it wouldn’t completely shock me if there is no “real” problem with noise, but I am not familiar with the security aspects of this.)
To cover what I need to know in particular would require only about a paragraph (possibly more, depending on the level of detail he wanted to go into), and since he already had everything set up, it would have been easy for him to add it. For me… it would require setting up a different system from what I have, since his tests do not map one-to-one onto our system. As a result, I know I cannot easily evaluate it. (And our “standard” systems, like my Linux workstation, don’t use Pentium 4s, so they can’t run the test either.)
Anonymous (IP: —.tky.hut.fi): One should also note that defining “typical conditions” instead of “ideal” or “worst-case” conditions is impossible for most real-world systems.
Kind of… but you can reach a certain level of realism. In fact, all the papers I have read have attempted to address that in some way, besides his, which seemed to avoid the issue completely.
For example, one of my upcoming papers (I haven’t decided exactly when I’ll publish it) started with an “ideal system”, but the actual proof is generic. And I threw a number of test cases at it (of various types, hitting it from a variety of directions) and there’s statistics as to how well the concept works.
1 test… is, from my experience… ridiculously uncommon.
In fact… I don’t even know any “decent” undergraduates who keep it to just 1 test.
Anonymous (IP: —.tky.hut.fi): If you want to disprove that the flaw is feasible, it’s very easy. The paper provides enough details and references to do it yourself.
And as I stated before, we don’t have the time to address every single issue. (Not all of us have the necessary staff/equipment to do it, simple as that. For example, I already have countless issues to address in the project I’m involved in. Adding new ones practically on a minute-by-minute basis. The time does not exist to address this paper.)
Issues literally come up all the time, we cannot address them all, we must pick and choose. Even though we are a small group, I’d imagine the larger (commercial if you will) OSs have similar problems. There is always something… Always “too much” something.
Or in other words… the audience for this paper (based on what he wants) is not just “specialists” who are extremely familiar with the topic at hand. The audience is also people like Linus (and somewhat myself, though I don’t think the author is too concerned about research OSes): people who are decision makers, involved with OS development, etc. And it would have taken little time to add a few of the details needed to help us evaluate the problem. But he didn’t. If he had… we wouldn’t be having this debate. That simple.
Lesson 1 of writing a paper is… Take into account your audience/readers.
Anonymous (IP: —.tky.hut.fi): If you want to disprove that the flaw is feasible, it’s very easy. The paper provides enough details and references to do it yourself.
Yes… I suppose I could shut down one of our test systems, set up FreeBSD on it and everything else, and test it. Then spend the next few days deriving tests. Then, if it doesn’t turn out to be a real threat, I’ve wasted my time. If it does, then I can take the time to attempt to convert the attack to our systems (which might take a few weeks there, actually) and run it. And if it doesn’t work, I’ve wasted my time. All time taken from my development work. And considering we have a deadline for part of this, we might no longer be able to make it, since we are barely making it as it is.
The 2nd part I don’t mind doing if it is a real threat, but the 1st part should have been done by him.
(Granted of course, ours is an experimental system and partially a school project, but real OSes have their own problems as well)
But yes… I have no problems (none that are significant, anyway) with it as a research paper. After all, it was quite interesting. But to use it to convince OS developers? That’s a problem. It’s not up to the necessary level of quality to achieve that result. (That should be obvious to everyone by now, because if it were, this wouldn’t be an ongoing debate.)
Deletomn: In fact… I don’t even know any “decent” undergraduates who keep it to just 1 test.
Heh. Or as one of my old instructors used to say to the people who did do one test… “1 test? How do you know it wasn’t a fluke? You should do at least 3, 2 if you’re in a hurry.” (Or some such thing.)