For the past two years, modern CPUs—particularly those made by Intel—have been under siege by an unending series of attacks that make it possible for highly skilled attackers to pluck passwords, encryption keys, and other secrets out of silicon-resident memory. On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtensions (SGX), by far the most sensitive region of the company’s processors.
[…] The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a tall bar, it’s precisely the scenario that SGX is supposed to defend against.
Is this ever going to stop?
http://www.osnews.com/story/30173/apple-addresses-meltdown-and-spectre-in-macos/
Haha, I hear you! No, I don’t think we’re there yet.
http://www.osnews.com/story/30382/google-and-microsoft-disclose-new-cpu-flaw/#comments
My prediction hasn’t come true yet; however, I still believe that all speculative execution and certain kinds of caching are inherently risky, and it’s just a matter of someone coming up with an exploit. The problem is that acceleration based on caching or speculation ends up unwittingly leaking state information. Fast execution paths are simply not secure when knowledge of execution time, which is often trivial to measure, leads to strong conclusions about the underlying “secret” data. It’s not just Intel; in principle, all modern CPUs that use speculative execution are vulnerable too. Fixing it is trivial: just disable caching and the speculative mechanisms designed to speed up CPUs…at a hefty performance cost!
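To show how trivial the measurement really is, here’s a minimal flush+reload style sketch in C (assuming an x86 CPU and GCC or Clang; the cycle numbers are machine-specific, but the gap between a cached and an evicted read is exactly the kind of signal these attacks amplify):

/* timing_probe.c -- minimal flush+reload style timing probe.
 * Assumes x86 and GCC/Clang; build with: gcc -O1 timing_probe.c */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t buf[4096];

/* Time a single read of *p in cycles. rdtscp partially serializes,
 * keeping the load inside the measured window. */
static uint64_t time_read(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                        /* the load being timed */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    buf[0] = 1;                      /* touch the line: now cached */
    uint64_t hit = time_read(buf);

    _mm_clflush(buf);                /* evict the line from the cache */
    _mm_mfence();
    uint64_t miss = time_read(buf);

    printf("cached read: %llu cycles, flushed read: %llu cycles\n",
           (unsigned long long)hit, (unsigned long long)miss);
    return 0;
}

An attacker never needs to see the data directly; whether somebody else’s activity left a given line in the cache is readable from timing alone.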
My opinion about where the industry should go from here hasn’t really changed…but the software industry is slow as molasses, so I’m not sure it ever will.
http://www.osnews.com/story/130521/intel-launches-comet-lake-u-and-comet-lake-y-up-to-6-cores-for-thin-and-light-laptops/
As long as there is money to be found, criminals will be there trying to steal it. Always has been, always will be. It applies to every field where there is money, period.
At least we have researchers warning us when security issues are found. That’s actually a little surprising, given how much “shoot the messenger” still applies even in modern society.
JLF65,
Yeah, everyone is quick to blame everyone else, but when you look at it in detail, it really comes down to a very delicate balancing act between engineering for security versus performance. Should engineers go with an implementation that has a 100% chance of increasing performance by 0%, or take one that has a 50% chance of increasing performance by ~20%? The answer is “it depends”. For decades engineers focused heavily on maximizing average performance per clock with probabilistic optimizations that consumers appreciated. However, whether those optimizations hit or miss can reveal details about the underlying data, such that an adversary is able to determine the nature of the data being processed by the CPU. Given enough samples, one can statistically leak more and more facts about that data until it is leaked entirely.
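Here’s a toy C simulation of that sampling idea (simulation only, nothing here touches real hardware; victim_time() and every constant in it are invented for illustration). A tiny secret-dependent difference, buried under noise ten times larger, falls right out of a simple average:

/* leak_sim.c -- pure simulation of statistical leakage.
 * Build with: gcc leak_sim.c */
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 10000

/* Hypothetical victim operation: ~100 "cycles", +5 when the secret
 * bit is 1, plus uniform noise in [0,49]. */
static int victim_time(int secret_bit) {
    return 100 + 5 * secret_bit + rand() % 50;
}

int main(void) {
    srand(42);
    int secret_bit = 1;            /* unknown to the "attacker" below */

    long long sum = 0;
    for (int i = 0; i < SAMPLES; i++)
        sum += victim_time(secret_bit);
    double mean = (double)sum / SAMPLES;

    /* Expected means: 124.5 (bit=0) vs 129.5 (bit=1); threshold midway. */
    int guess = mean > 127.0;
    printf("observed mean %.2f -> guessed bit %d\n", mean, guess);
    return 0;
}

One sample tells you nothing; ten thousand samples pin the bit down almost every time. That’s the whole statistical game behind these attacks.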
Is the solution to go with the 100% chance of 0% gain? Because that’s certainly possible! But I think consumers would be disappointed when all such probabilistic mechanisms get removed. A lot of the tricks used to predict branches and loads, which are crucial to the viability of long pipelines, are adaptive in nature. They build up state from the very data that’s being processed in order to make accurate predictions, which is good for performance but bad for security.
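The classic way to see that adaptivity for yourself is the sorted-versus-unsorted demo below (a sketch; build at -O0 or -O1, since at higher optimization levels the compiler may turn the branch into a conditional move and hide the effect). The work is identical both times; only the branch pattern the predictor learns from changes:

/* branch_demo.c -- data-dependent branch prediction timing.
 * Build with: gcc -O1 branch_demo.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 65536
#define REPS 200

static int data[N];

/* Sum the large elements; the if() is a data-dependent branch. */
static long long run(void) {
    long long sum = 0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)
                sum += data[i];
    return sum;
}

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    srand(1);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    clock_t t0 = clock();
    volatile long long s1 = run();   /* random order: ~50% mispredicts */
    clock_t t1 = clock();

    qsort(data, N, sizeof data[0], cmp);

    clock_t t2 = clock();
    volatile long long s2 = run();   /* sorted: nearly perfect prediction */
    clock_t t3 = clock();

    printf("unsorted: %ld ticks, sorted: %ld ticks (sums %lld == %lld)\n",
           (long)(t1 - t0), (long)(t3 - t2), (long long)s1, (long long)s2);
    return 0;
}

Same instructions, same result, very different runtimes: the predictor’s learned state is leaking the shape of the data through timing.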
I think Intel hasn’t been completely forthcoming about the nature of this performance-versus-security dichotomy, and they (as well as others, including AMD, Microsoft, Apple, etc.) have been too quick to say it’s fixed when in fact it isn’t; engineers continue to face the same performance-versus-security dilemmas.
The industry needs to take a serious look at basic, simple CPU implementations that skip all of these fancy pipeline optimizations for the sake of secure computing. But realistically it probably wouldn’t sell well. So I think you’re right: when it comes to blame, many people want to have their cake and eat it too.