When the Spectre and Meltdown attacks were disclosed earlier this year, the initial exploits required an attacker to be able to run code of their choosing on a victim system. This made browsers vulnerable, as suitably crafted JavaScript could be used to perform Spectre attacks. Cloud hosts were susceptible, too. But outside these situations, the impact seemed relatively limited.
That impact is now a little larger. Researchers from Graz University of Technology including one of the original Meltdown discoverers, Daniel Gruss, have described NetSpectre: a fully remote attack based on Spectre. With NetSpectre, an attacker can remotely read the memory of a victim system without running any code on that system.
One of these days, somebody’s going to come up with a PC/phone exploit that absolutely can’t be patched with software. And then everybody’s boned.
That’s literally this exploit, on older phones that don’t receive updates.
Poseidon,
That’s a very good point. We can add it to the list of reasons it sucks to have phones that are stuck with original manufacturer builds. As someone who’s successfully extended the life of a PC many times over without much difficulty at all, I am very disappointed in the throwaway culture of mobile phones.
This is the fault of both the manufacturers and the OS builders’ unwillingness to maintain stable ABIs. I don’t know that anyone is willing to budge on the positions that have led us down this path.
In theory, Project Treble could help third parties support a phone, even though its official goal is helping manufacturers provide updates rather than us. I don’t have a phone that supports it, so I don’t know if it’s actually helped or not. I can’t say I have much faith in the industry, because encouraging us to throw away products is more profitable than allowing us to fix/re-provision them.
Look on the bright side though – it helps more poor people have access to decent phones. Like, I upgrade mine every year or two (since I have the discretionary income), and I give my old phone to my best friend. She will eventually pass it on to other family members when I upgrade next time, so everybody wins.
Is this trickle-down phonenomics?
Ya, something like that. Friends don’t let poor friends use shitty tech.
Nah. The Bluetooth vulnerabilities are much worse.
You’d only be able to get a byte or two out of a phone every few days with NetSpectre.
BlueBorne allows someone to hack your phone while standing 30 feet away without you knowing about it.
WorknMan,
Yeah, we knew that spectre would be a game-changing vulnerability when it first showed up. Meltdown could be mitigated by removing privileged memory pages from application address space, which hurt performance but at least solved the vulnerability. The problem with spectre vulnerabilities is that they can show up in any software, which generally has to be rewritten (or recompiled) so that attacks via CPU speculation become non-viable.
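For example, one common software-level fix is to drop a serializing instruction after a bounds check so the CPU can’t speculate past it. A minimal sketch (x86-specific; read_checked is a made-up name, not any vendor’s actual patch):

    /* Bounds-checked read with a speculation barrier. The lfence
     * keeps the CPU from guessing its way past the check, at the
     * cost of a pipeline stall on every call. */
    #include <stddef.h>
    #include <stdint.h>
    #include <emmintrin.h>  /* _mm_lfence(), x86 SSE2 */

    uint8_t read_checked(const uint8_t *array, size_t len, size_t i)
    {
        if (i < len) {
            _mm_lfence();     /* serialize: no speculation beyond here */
            return array[i];  /* runs only once the check has resolved */
        }
        return 0;
    }

Multiply that by every exploitable branch in every program and you can see why “just recompile everything” is easier said than done.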
I’m afraid that until the hardware eliminates the vulnerability and removes speculation, spectre will continue to come up in new contexts. It’s going to take a long time for new hardware to displace vulnerable computers in the market, but there is a positive side: speculation requires a lot of transistors/power, which comes at the expense of other features that could use those transistors better. It might finally put pressure on software developers to improve software efficiency rather than relying on CPU speculation(*).
* The inefficiency of software has been a longstanding personal gripe of mine
While I appreciate the research and ingenuity involved in this particular variation of a “remote exploit”, please understand that it isn’t a practical remote exploit.
It’s a way of potentially leeching data remotely.
Think of this as a game of Go Fish: you can theoretically compare a card (byte) in your hand with the contents of the cache on the remote system to see if they have the same card (byte).
The problem is, you need a hand with about 800,000 cards. On a local network, that means guessing one card that’s in the cache every 30 minutes using one method, or every 8 minutes using the second.
Over a REMOTE network, it’s one card (byte) every 8 hours, or every 3, depending on which method you use.
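To put those rates in perspective, some back-of-envelope math using the faster method on each network (the 32-byte key is purely an illustrative target):

    #include <stdio.h>

    int main(void)
    {
        const double key_bytes        = 32.0; /* one 256-bit key */
        const double lan_min_per_byte = 8.0;  /* best local rate  */
        const double wan_hrs_per_byte = 3.0;  /* best remote rate */

        printf("LAN: %.1f hours per key\n", key_bytes * lan_min_per_byte / 60.0);
        printf("WAN: %.1f days per key\n",  key_bytes * wan_hrs_per_byte / 24.0);
        return 0;
    }

That’s over four hours on a LAN and four days across the internet, for one small key – assuming it even stays in the cache that long.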
My computer is pretty active – the cache gets rolled over every few seconds, and things like a GPG key are only in memory for a few moments and are usually deliberately flushed by good software.
As an academic exercise, this is beautiful. As a practical means of extracting data, you’d be better off with a virus that flashes the HDD light on and off and a camera drone to exfiltrate the data.
grat,
Of course, the biggest risk for spectre is when the attacker can run their code locally. Timing analysis on network packets is far less precise than timing analysis of locally running processes, so a remote attack requires far more time & data samples than a normal spectre attack. This mitigates the danger somewhat, but we should refrain from labeling it completely academic: some agencies have lots of time, and assuming they knew where to look, a few leaked bytes of security-critical key material could prove invaluable for a more conventional attack.
All of these exploits boil down to this:
1. I want to read something in memory.
2. Write code that says: if it’s this value, do something that takes a long time; otherwise, do something that takes a short time (or vice versa).
3. You now know more than you used to. Repeat over and over until you know everything you want to know.
Architecturally, this is very hard to mitigate.
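The “measure the time” half of step 2 is typically a cache probe. A minimal x86 sketch (probe and the other names are made up; a real attack adds fencing and averages many runs to beat noise):

    #include <stdint.h>
    #include <x86intrin.h>  /* __rdtscp(), _mm_clflush() */

    /* One cache line (4096-byte stride) per possible byte value. */
    static uint8_t probe[256 * 4096];

    /* Evict every probe slot before the victim runs. */
    static void flush_probe(void)
    {
        for (int v = 0; v < 256; v++)
            _mm_clflush(&probe[v * 4096]);
    }

    /* Time one load: fast means the line is cached, i.e. the victim
     * touched it; slow means it didn't. */
    static uint64_t time_access(volatile uint8_t *addr)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;
        return __rdtscp(&aux) - start;
    }

After the victim runs, whichever probe[v * 4096] loads fastest reveals the byte v – and that’s the whole side channel.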
fretinator,
That’s a nice simple summary, haha.
While timing attacks are obviously nothing new, the main issue with spectre is that you don’t need to write any code for #2, because it’s already implemented in silicon: the CPU’s own speculative execution engine runs it against code that would not ordinarily be considered vulnerable were it not for speculative execution.
So while it’s hard to mitigate timing side channel attacks in general, it’d be pretty easy to mitigate the specific side channel leaks caused by spectre just by removing the speculative execution features altogether.
Hi,
If you remove the speculative execution features, performance would drop significantly (e.g. a 4 GHz CPU from today would perform like a 1 GHz Pentium III from the late 1990s), and a lot of the software we use wouldn’t have the performance needed for its features (e.g. a lot of modern games would have to be nerfed back to last century to get acceptable frame rates).
The only thing that would happen is that nobody would buy these CPUs, because given the choice between guaranteed performance crippling and a tiny chance that the vulnerability will actually affect you (and not someone else, and not nobody), everyone will choose the latter.
– Brendan
Brendan,
Odds are that attacks will only become more likely over time as the vulnerabilities become more accessible to script kiddies. All the arguments for updating software to patch vulnerabilities should technically apply to hardware vulnerabilities as well. It’s clearly inconvenient, but from a security perspective, not doing anything is the equivalent of running unpatched software; it would make hypocrites of us to ignore it.
The most likely medium-term fix is a new CPU flag to switch between performance and security, letting the OS/apps set a policy.
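Linux actually grew a knob along those lines this year for one of the variants: a per-process prctl() that trades speed for safety. A minimal sketch (assumes a 4.17+ kernel on x86):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>  /* PR_SET_SPECULATION_CTRL, PR_SPEC_* */

    int main(void)
    {
        /* Disable speculative store bypass for this process only:
         * security where it matters, full speed everywhere else. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                  PR_SPEC_DISABLE, 0, 0) != 0)
            perror("speculation control unavailable");

        /* ... handle untrusted input here ... */
        return 0;
    }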
But I disagree with the conclusion that nobody would buy a CPU without speculation, since, as I mentioned earlier, eliminating complex execution units has other benefits, like better battery life/power consumption and freeing transistors for more explicit parallelism.
Not for nothing, but even though the software industry has been extremely reluctant to change its ways, our approach of relying on hardware speculation to make single-threaded software faster and faster was hitting the wall of diminishing returns 10 years ago. Even if it hadn’t been for spectre, it would have been the right time for engineers and software developers to come up with a better way to use billions of transistors than to optimize sequential code. Spectre gives us another reason to look ahead to a different kind of strategy for building computers.
IMHO single-threaded performance has a very long legacy, but has reached the end of its practical evolution. Look at the progress that is already being made with GPGPUs, which despite being slower than CPUs at sequential code, can end up leaving CPUs in the dust with applications designed to use them. For most games, the heavy lifting for graphics is already done by the GPU, and in the future I expect more and more games will rely on GPUs for AI, physics, etc. In other words, for computationally intensive applications, single-threaded performance will become less and less important (because it does not scale nearly as well as explicit parallelism).
It’s not just games; business and data applications can potentially get a boost too. I concede change is hard, but I for one welcome the paradigm shift. The dependence on single-threaded code execution has been holding us back!
Hi,
I agree. Personally, I’d like to see something like Xeon Phi (a lot of cores with 4 logical CPUs per core and not much speculative execution) being used for mainstream server/desktop/laptop; and all programming done using some sort of highly scalable “message passing between isolated small pieces” approach (e.g. the actor model); and all software redesigned, rewritten and replaced.
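To make the “message passing between isolated small pieces” idea concrete, a toy sketch in plain C with pthreads (mailbox_t and the actor are made-up names; a real actor runtime would schedule thousands of these, not one thread each):

    #include <pthread.h>
    #include <stdio.h>

    /* A one-slot mailbox: the only way in or out of an actor. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  ready;
        int             msg;
        int             has_msg;
    } mailbox_t;

    static void mailbox_send(mailbox_t *mb, int msg)
    {
        pthread_mutex_lock(&mb->lock);
        mb->msg = msg;
        mb->has_msg = 1;
        pthread_cond_signal(&mb->ready);
        pthread_mutex_unlock(&mb->lock);
    }

    static int mailbox_recv(mailbox_t *mb)
    {
        pthread_mutex_lock(&mb->lock);
        while (!mb->has_msg)
            pthread_cond_wait(&mb->ready, &mb->lock);
        mb->has_msg = 0;
        int msg = mb->msg;
        pthread_mutex_unlock(&mb->lock);
        return msg;
    }

    /* An isolated piece: owns its state, shares nothing. */
    static void *doubler_actor(void *arg)
    {
        mailbox_t *mb = arg;
        int n = mailbox_recv(mb);
        printf("actor got %d, replies %d\n", n, n * 2);
        return NULL;
    }

    int main(void)
    {
        mailbox_t mb = { PTHREAD_MUTEX_INITIALIZER,
                         PTHREAD_COND_INITIALIZER, 0, 0 };
        pthread_t actor;
        pthread_create(&actor, NULL, doubler_actor, &mb);
        mailbox_send(&mb, 21);
        pthread_join(actor, NULL);
        return 0;
    }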
Of course the chance of this happening is zero – even if everyone agreed, it’d take decades to complete the change; and most people won’t agree (they’ll vigorously defend current practices and actively try to prevent the change).
– Brendan
Brendan,
On the other hand, most of the influence in tech stems from less than a dozen companies. If influential companies like Apple/Microsoft/Google/Amazon were on board, I have little doubt the rest of the industry would follow. Of course you’ll always have the dinosaurs that are never willing to change, but they don’t necessarily define the industry anyway. Just as mainframes and modern desktop computers can coexist, so too can future computing paradigms.
Yes, I was getting at the more general issue. We are almost at a quantum level with this. Taking a measurement changes what you measure.
That describes the leaking of information, not the speculative attack. And the attacker can’t write the code that does the read – it has to already exist in the victim’s code.
Modern processors use speculation: the processor guesses what it should do when the information it needs isn’t available yet.
This improves performance as some things can be guessed with high accuracy (>95% right) so that the processor can avoid waiting for those things to be known.
If the processor guesses wrong, everything is restored as if the guess never happened – in theory.
Speculative attacks make the victim guess wrong, causing it to access data the attacker wants. Once the attacker has done this, traces of that guess remain in the system and can be leaked.
So the attack is like:
1. Determine a piece of code in the victim that can do the right read.
2. Fill the hardware predictor with bad information.
3. Call the victim.
4. The victim checks whether it should read the secret data, and the CPU guesses that it should.
5. The victim reads and processes the secret data, leaving traces in the data cache.
6. The victim detects that the guess was wrong and restores the state of things.
7. The attacker gets the return data provided by the victim – not the secret data it wants.
8. The attacker times the cache response to determine what data the victim read.
It’s very complicated. As is fixing it.
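For the curious, steps 4 and 5 are essentially the classic Spectre variant-1 gadget from the original paper. Roughly (names illustrative):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];  /* one cache line per byte value */

    uint8_t victim(size_t x)
    {
        /* Step 4: the attacker trained the predictor, so this check
         * is speculatively guessed "true" even when x is way out of
         * bounds. */
        if (x < array1_size) {
            /* Step 5: the out-of-bounds byte array1[x] selects which
             * cache line of array2 gets loaded - the trace that step
             * 8 later reads back by timing. */
            return array2[array1[x] * 4096];
        }
        return 0;  /* steps 6-7: state rolled back, cache remembers */
    }

With x chosen out of bounds, array1[x] is whatever secret happens to sit past the array, and the only “output” is which line of array2 is now warm.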