Theo de Raadt wrote about the Intel Core 2 Duo chip: “Various developers are busy implementing workarounds for serious bugs in Intel’s Core 2 cpu. These processors are buggy as hell, and some of these bugs don’t just cause development/debugging problems, but will assuredly be exploitable from userland code.” Intel has given a response to the concerns as well.
Dear OSNews, please help Theo raise enough money to produce his own chip, fab, etc. He needs a new life. Let’s give him one.
R&B
hah hey guys long time no see.
I voted you up on that one because it’s so true. I can just picture him as the kind of guy stuck in traffic on a freeway at a dead stop, yelling out the window that the cars in front of him are not going fast enough, regardless of the fact that they can’t move either.
Although this story is about Intel’s Core 2 Duo, it’s also mentioned that AMD is having similar problems with dual core. In fact, I vaguely recall reports of such problems with dual core designs from at least two years ago.
My question to all – is dual core really worth it? Do you get such a fantastic speed increase that it’s worthwhile creating a gigantic security hole that can/will be exploited?
And I wonder – is quad-core as bad, or worse?
Since Intel and AMD seem committed to making everything dual-core, maybe it’s time to take a look at VIA again. See mini-itx.com – whole machines can be bought from damnsmalllinux.org – that’s where I’m going for my next machine.
And to think that I almost bought a Dell dual-core…
I’ve been using dual-core machines since the second shipment of the Athlon X2s. Each of the ones I’ve used (X2, PPC 970MP, Core Duo, Core 2) has been extremely stable. So IMHO, the claims about errata in these chips are overstated.
In any case, dual core doesn’t really make a chip more prone to bugginess. Since the cores are clones of each other, the only additional source of bugs really results from the design of the interconnect/shared cache, which doesn’t add a huge amount of complexity.
RE: Is dual-core worth it?
Yes
I believe the reason for dual cores was that the manufacturers were hitting a ceiling on clock speeds and heat dissipation. Somebody please correct me if I’m wrong.
However, dual cores don’t double your speed. The only way a dual core will offer you any performance boost is in running multiple applications and/or running applications that can take advantage of multiple cores/processors.
Only you can really answer your question. If you have applications that can take advantage of multiple cores, or you run with a lot of programs open, then yes, dual cores will make a difference.
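To make the point above concrete, here is a small sketch in Java (class and method names are made up for illustration): splitting a CPU-bound job across two threads only helps when the work is actually parallelizable, which is exactly the “applications that can take advantage of multiple cores” case.

```java
// Hypothetical example: summing a range of integers, split across two
// threads. On a dual-core machine the two halves run concurrently, so
// the wall-clock time is roughly halved; on a single core it is not.
public class ParallelSum {

    static long sumRange(long from, long to) {
        long s = 0;
        for (long i = from; i < to; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        final long N = 100000000L;
        final long[] partial = new long[2];

        // Each thread takes half the range.
        Thread a = new Thread(new Runnable() {
            public void run() { partial[0] = sumRange(0, N / 2); }
        });
        Thread b = new Thread(new Runnable() {
            public void run() { partial[1] = sumRange(N / 2, N); }
        });

        a.start(); b.start();
        a.join();  b.join(); // join() makes the writes visible here

        System.out.println("sum = " + (partial[0] + partial[1]));
    }
}
```

A word processor sitting idle gets nothing from the second core; a loop like this, split in two, can get close to a 2x speedup.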
I actually can’t remember the last time i only ran one program at a time… So your argument is rather VOID.
How do you figure? The only way multiple cores or multiple processors will benefit the end user is if those multiple programs are hammering away at the processors or cores. If you are running multiple programs that sit idle most of the time, then dual cores are wasted energy.
What benefit is somebody going to see who is running a word processor, a browser, an e-mail client, and solitaire on a system with multiple cores? The answer is none. Outside of maybe a few cases none of those are going to tax the processor to the point that it slows the system down.
Most games currently available can’t effectively take advantage of multiple cores either and would benefit more from a higher speed processor. I know a lot of gamers and I can’t say that I know too many of them that run multiple apps while playing games.
I suppose you can bring up all the services that run from boot to shut down, but none of them tax a system either. So I think one can make a fairly good argument about not buying a dual core system. Some people benefit from multi-cores and some people would benefit more from a higher speed single core.
My understanding is that threads are supposed to be spread across multiple cores (assuming the OS handles it). In that case, wouldn’t any application that is multi-threaded take advantage of multiple cores?
For example, I was writing a Swing app and made a stupid mistake where I had an infinite loop in my UI code. But since it was running in a single thread, it pegged only a single core, and I was none the wiser until I happened to look at my processor usage. I had never set out to write my app for dual core.
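The situation described above can be sketched like this (a hedged example, with made-up names): a busy loop confined to one thread can only ever occupy one core, however many the machine has, which is why the bug hid at ~50% CPU on a dual core.

```java
// Hypothetical sketch: a runaway loop in a single thread. On a
// dual-core box this pegs exactly one core at 100% while the other
// stays idle, so overall CPU usage reads about 50%.
public class SingleCorePeg {

    // Start a daemon thread that spins forever and return it.
    static Thread startBusyThread() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true) { /* busy loop, never yields */ }
            }
        });
        t.setDaemon(true); // daemon, so the JVM can still exit
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Cores: "
                + Runtime.getRuntime().availableProcessors());
        startBusyThread();
        Thread.sleep(5000); // watch your CPU meter during these 5 seconds
    }
}
```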
That’s probably the idea, but I’ve read a few articles about bad thread programming even from seasoned pros. Most developers have never had to deal with multiple threads and thus have trouble writing efficient multi-threaded programs.
However, in your case, having a single processor or core would have helped you find the error faster.
Multithreaded programming can be a beast to tackle. I just got a great book that deals with it in the Java world.
I can only really speak from a Java perspective since that’s what I do day in and day out. Java is concurrent whether you like it or not, because even if I write a single-threaded console app, the JVM is still multithreaded. For example, garbage collection runs in its own thread. Another example: your servlet can be called concurrently. So my perspective is rather different: everybody is exposed to threads, and people need to start taking that into account in their code.
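A quick way to see this for yourself (a sketch with a made-up class name; the exact thread names are JVM-dependent): enumerate the live threads of a “single-threaded” console app and you will find more than just `main`.

```java
// Hypothetical example: count every live thread visible from the root
// thread group. Even a trivial console app typically shows JVM-internal
// threads such as "Reference Handler" or "Signal Dispatcher" alongside
// "main" (the exact set varies by JVM vendor and version).
public class JvmThreads {

    static int countLiveThreads() {
        // Walk up to the root thread group.
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) {
            root = root.getParent();
        }
        // Enumerate recursively, with some slack in the array size.
        Thread[] threads = new Thread[root.activeCount() * 2 + 16];
        return root.enumerate(threads, true);
    }

    public static void main(String[] args) {
        System.out.println("Live threads in a 'single-threaded' app: "
                + countLiveThreads());
    }
}
```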
On the flip side, a couple of weeks ago this website reported a statement by some Intel hotshot who said something to the effect that you should all start writing parallel and concurrent applications for every task. That, of course, is just stupid. There has to be some benefit for the added complexity.
The old single-core processors had just as many bugs. There is no way to avoid errata.
What about the propeller chip?
It has eight 32-bit cores, and I think no errata at all.
The designers placed all transistors and wiring manually, without use of standard IP blocks or high level design languages. So it is a ‘full custom’ chip.
From the site:
“We designed our own logic, RAMs, ROMs, PLLs, band-gap references, oscillators, and even ESD-hardened I/O pads”
Of course this does not guarantee it is error free, but as it’s a small processor designed totally in house by fanatics, it just might be possible.
Exactly. Designing a CPU without any errata cannot be done, and if anyone says otherwise…well then, make your own damn chips and see how far you’ll go before giving up. Designing CPU’s/GPU’s etc. is not a trivial task, and anyone expecting perfection out of them has his head up his ass.
I built a Core 2 Duo system for a guy I know. I had to show him how to use DeVeDe to create a Video DVD. While it ran it took 50% of the CPU. So we ran two at a time and it took the full 100%. On my single core, it just took the 100%. So yeah, dual core processors are useful. Though of course it helps to have an Operating System that makes proper use of it, even if the programs themselves are only single-thread capable.
The problem isn’t dual core, single core, or 11-core; it’s just that when you build a beast with a few hundred million microscopic solid-state components, linked together by strands a few molecules thick, mistakes can happen. And when they are made in hardware, they are harder to fix. The original Pentium had problems with floating-point calculations. Sometimes, mistakes are inevitable.
It’s not about a speed increase in the traditional sense (i.e., GHz); it’s about being able to process multi-threaded applications in parallel, thus reducing overall processing time.
Also, I find dual-core Windows machines more stable, simply because the system locks up less than when Windows has to cram every task onto a single core.
In my opinion multi-core systems are the way forward.
So in short, Yes – they are worth it.
Intel has a well-known errata communication process to inform our customers and the public.
Yes, I believe it says “hide the truth” in their policies and procedures manuals. So far, everything past their 8-bit processors through the Pentium series has had tremendous math bugs, but thankfully not so with the Core 2 Duo. It would be bad to find out that we’d been paid less than we should have, right?
It looks as if this is a somewhat small problem and can be lessened, if not removed. I’d suspect that most people would just think it was Windows with another hiccup and they would continue as usual.
I hope they can minimise such problems in the future.
“Yes, I believe it says “hide the truth” in their policies and procedures manuals. So far, everything past their 8-bit processors through the Pentium series has had tremendous math bugs, but thankfully not so with the Core 2 Duo. It would be bad to find out that we’d been paid less than we should have, right?”
I love how a corner case in the FPU division unit all of a sudden becomes a set of “tremendous” math bugs since the 80286 on…
Did anyone actually *read* the statement by Linus – a sober look at this so-called ‘bug’ which as he said is more of a documentation issue than anything else.
Come on, less fear mongering, more reality. As for Theo, he is the last person to be listened to when it comes to security; he is paranoid beyond anything reasonable. He is the type of person who would claim that you need to wear a tin foil hat to protect your mind from being read!
Link to Linus’s statement?
As for Theo, his arguments are always based on reason. As an example, him urging every SSH distributor to enable privilege separation. Most vendors ignored him, and they got hit by a bug. Meanwhile, OpenBSD stayed secure. Nowadays, just about everyone ships SSH with privilege separation enabled.
To the author: “Intel’s response” is a response to something else I believe – it’s not related. Seems to me like it should be removed from the description.
Easy, it’s The Inquirer article; at the bottom there’s a “statement from Linus” with the following link:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=8055…
As for whether the articles are related to each other: yes, they are. The difference is that Theo’s turned into a rant about how crap CPUs are (apparently he’s an expert at that too!) and how ‘devastating’ the TLB bug is.
As for the ‘security bugs’, they’re so weird and obscure it isn’t funny. It’s like saying that if you stand on your head and eat cheesecake 2.24134132 days past the expiry date (it has to be that exact time) whilst singing the Star-Spangled Banner, you *just* might get your house robbed.
Odd – I would have thought the Core 2’s were fully capable of having microcode updates to patch around these issues and/or fix some of the errors, at least that’s what Intel usually does in situations like these.
The Geeks.com list claims Intel has committed to fixing one of them… But really, how much does microcode control how the processor behaves? If this is faulty wiring, won’t it have to wait until they update the product line (or something… new codenames seem to go with updates on features, not debugging)
Quite a bit; however, usually Intel expects compiler writers to generate the correct code to avoid/work around the issue. Intel would probably produce a microcode patch to fix any show-stopping bugs or serious userland exploits (i.e., run dodgy code, get to ring 0/kernel mode), but probably at a cost of execution time or other drawbacks.
Ultimately this means that at some point it’s probably easier to run hypervisor code that knows the bugs and virtualises the software/OS completely, scanning for any bad code and eliminating it before it can actually run on the CPU. While they’re at it, they can probably optimise/tune the code for performance on the targeted CPU instead.
I think we need to look at these the same way we do software security flaws: how severe is it, how easy is it to exploit, how many people are likely to run into it (and in this case, how easy is it to work around)?
Theo de Raadt seems to have done that; he does seem to rank them by severity and how likely he thinks they’ll bump into them.
It’s a pity the ‘easier to read’ overview he linked to numbers the errata differently; it’s hard to figure out which ones he’s complaining about and the official Intel document has way too many acronyms for me to comprehend.
Sounds like somebody has never seen an Intel errata sheet before. Those look pretty typical for Intel processors, and AFAICT, nothing to panic over.
Sorry, Theo, but the sky’s not falling this time.
“Sorry, Theo, but the sky’s not falling this time.”
He’s not claiming it is. He didn’t make an official statement. It was a post on a mailing list where he (correctly) complained about how buggy the Core 2 Duo is. It was not blown out of proportion until OSNews and Slashdot decided it was newsworthy material.
Nothing brings in the clicks like linking to something, anything, Theo has, or has not, said.
Edited 2007-06-29 04:07
There is a kind of build-up within some fringe parts of the open source communities where people just try to paint Theo de Raadt as the most vile person around. They like to take anything he says and blow it out of proportion, put words in his mouth, bald-facedly lie about what he does or thinks, and then, of course, tear those ideas down to prove how stupid he is for thinking, saying, or doing whatever it is they’ve fabricated for him.
I really don’t understand why, he’s mostly a decent guy, a little abrasive at times, but he at least doesn’t pussy-foot or bullshit people.
I remember once reading a discussion thread where Theo de Raadt had complained that Sun did not provide proper documentation for their hardware; consequently, several people decided that Theo was a dick for demanding that Sun relicense ZFS so it could be used in OpenBSD.
I just kind of ignore the idiots who don’t even take context into account, when I am able to catch myself before tearing the fools the proverbial “new one.”
Bullcrap. The PCIe issue was resolved by Linux developers by reading the Solaris code; there was no mystery or voodoo about it. As for the SCSI controller (assuming he wants to support that), Sun can’t release third-party code or specifications which they aren’t allowed to disclose.
Oh, and by the way, acting like a prick and biting the hand that feeds you isn’t exactly a smart move on Theo’s part. You know, with a little smooth talking, pocket pissing and snivelling, you can get a lot done in a small space of time; the problem is, Theo wanted it straight away, without any diplomacy.
As for OpenBSD: they *can* mix CDDL code with OpenBSD; they philosophically choose not to allow it into their tree. Yes, that is their right, but at the same time they forfeit the right to whine, whinge and complain when they themselves have blocked the ability to pull code in from Solaris.
“You know, a little smooth talking, pocket pissing and snivelling, and you can get a lot done in a small space of time”
Some people have a backbone, some don’t.
I believe the poster prior to you was exaggerating for effect. The old adage “don’t bite the hand that feeds you” is VERY true.
If you think being polite, courteous, and tactful equates to having no spine, then I am truly sorry for you. Theo would get a lot further if he didn’t say everything as if he had just swallowed a liter of vinegar.
Demanding that a company open something up to you when you’ve already been at odds with them over other issues isn’t going to work. Only a fool would go that route. I think Theo should just relegate himself to a dark room somewhere where he can code, and hire a PR guy to handle interaction with people and businesses. He could really use the help, and it would be a boon for the OpenBSD community.
Actually no: since the ISC licence has existed longer than the CDDL, it was Sun that chose not to allow its code into OpenBSD’s tree.
The CDDL is a recent development, and one made specifically so that Sun’s code stays under Sun’s control. This is not a situation OpenBSD chose; it is one Sun chose in order to maintain control over Solaris and its associated technologies.
There is no need to be accusatory of Theo in this matter; Sun’s the one that has decided not to play nice with anyone else. And as already stated, Theo wasn’t whining about the inability to take code from Sun’s CDDL sources; I was just pointing that out in my previous post. Was I somehow confusing you?
“Sorry, Theo, but the sky’s not falling this time.”
He’s not claiming it is.
Theo wrote, among other things
Note that some errata like AI65, AI79, AI43, AI39, AI90, AI99 scare the hell out of us.
Maybe that’s not “the sky is falling”, but it, and his comments that some of the errata *will* be exploited, are definitely out of proportion to the nature of the bugs described.