Remember when I wrote about how your mobile phone runs two operating systems, one of which is a black box we know and understand little about, ripe for vulnerabilities? As many rightfully pointed out in the comments – it’s not just mobile phones that have tiny processors for specific tasks embedded in them. As it turns out, memory cards contain microprocessors too – and yes, they can be cracked for arbitrary code execution as well.
Today at the Chaos Computer Congress (30C3), xobs and I disclosed a finding that some SD cards contain vulnerabilities that allow arbitrary code execution – on the memory card itself. On the dark side, code execution on the memory card enables a class of MITM (man-in-the-middle) attacks, where the card seems to be behaving one way, but in fact it does something else. On the light side, it also enables the possibility for hardware enthusiasts to gain access to a very cheap and ubiquitous source of microcontrollers.
There’s so much computing power hidden in the dark.
The positive side is that these microcontrollers (based on this talk, and some personal research) are mostly 8051 cores (with some ARM Cortex-M0 in the mix) and run at quite a high clock frequency compared to something like the Arduino (100 MHz vs. 16 MHz). Considering the low cost of flash memory, they may prove to be a new source of hackable electronics.
The AX211 and AX215 that Bunnie and Xobs dug into are 8051 cores with 32-bit extensions, which is a very interesting mix. It’s going to take a lot of hacking and research, but I am hoping it won’t be long before we can use SD cards as a cheap source of hackable electronics.
I already assumed this to be the case, but I lack the skills to actually verify this kind of stuff myself, so I love reading about it when others do. This opens up a very interesting avenue for those who like tinkering and getting more bang for their buck: replace the firmware with something that provides a pass-through mode, let the software on the computer side handle the low-level details, and POOF, you’ve got cheap, portable, ubiquitous MTD devices. UBIFS and other flash-optimized filesystems should then perform wondrously!
I just wish there were more details available and for more cards, and I hope someone does make a real project out of this idea.
What kind of benefits do you expect to get from having direct access to flash chips? I mean, it’s a device that does exactly one thing: store blocks you shove on it and retrieve ’em back when you ask for them. Given this purpose, what sort of advantage does direct chip access give you over, say, a traditional filesystem sitting on top of the controller-provided interface?
One huge downside I think right off the top of my head is that you’ll have to deal with the nitty-gritty of flash chips in a kernel driver, which is *[f]ugly* (not that it can’t be done, it’s just that it’s the kind of code that makes you want to pull your hair out).
I think the idea is that running something with POSIX-compliant semantics (and an in-kernel driver, unlike NTFS) on top of a translation layer designed for FAT or NTFS may provide performance significantly inferior to running a filesystem designed to do its own wear levelling right on top of the chips.
POSIX compliance in a filesystem has no bearing on the filesystem’s internal structure and the required block storage characteristics. It’s a feature of a filesystem’s front-end interface, not of its back-end implementation (as evidenced by ZFS, which has, among others, a POSIX-compliant front-end, but is about as non-traditional at the back-end as you can get).
As for wear leveling logic, the primary purpose of implementing it in a flash-aware filesystem is simply to get rid of the need for a controller chip in highly integrated systems, such as set-top boxes, phones, etc. If you can “control” your flash from the SoC, then you don’t need a dedicated tiny ARM chip to run the (often proprietary and thus costly) controller software. You can simply produce a single firmware image and have total control. On the other hand, you become sensitive to the brand of flash you put in. Different chips can have wildly different control behavior, so that then needs to be maintained in-house instead of being “outsourced” to the flash manufacturer.
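To make the wear leveling part concrete, here is a tiny sketch in C (entirely mine, with a made-up in-memory block table – real flash filesystems such as UBIFS or YAFFS2 keep this state in on-flash metadata) of the basic decision: write to the free erase block that has been erased the fewest times.

#include <stdio.h>
#include <stdint.h>

#define NUM_BLOCKS 8

struct erase_block {
    uint32_t erase_count;   /* how many times this block has been erased */
    int      in_use;        /* non-zero if it currently holds live data  */
};

/* Dynamic wear leveling in one function: pick the least-worn free block. */
static int pick_block(const struct erase_block *blk, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (blk[i].in_use)
            continue;
        if (best < 0 || blk[i].erase_count < blk[best].erase_count)
            best = i;
    }
    return best;   /* -1 means no free block left: time to garbage-collect */
}

int main(void)
{
    struct erase_block blocks[NUM_BLOCKS] = {
        { 12, 1 }, { 3, 0 }, { 7, 0 }, { 15, 1 },
        {  2, 0 }, { 9, 1 }, { 4, 0 }, { 11, 0 },
    };
    printf("next write goes to erase block %d\n", pick_block(blocks, NUM_BLOCKS));
    return 0;
}

The real complexity is not in this choice but in keeping the erase counts and the logical-to-physical mapping consistent across power loss, which is exactly the part the controller (or the flash filesystem) has to get right.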
Let me say it more simply. “People think running a non-FAT filesystem on a translation layer designed for FAT might be causing a slowdown they could avoid with a filesystem that needs no translation layer.”
That’s all. The only reason I mention POSIX is because UNIX filename semantics and UNIX mode bits are examples of things FAT doesn’t do.
I just spent more time than I wanted reading datasheets so I could be sure about this.
The flash controller chips are not “translation layer[s] designed for FAT or NTFS”. They are block-level devices that support multiple page sizes based on how the data is requested.
YAFFS version 1 was based on a 512-byte block because of assumptions about flash memory controllers at the time, but that was 2002. YAFFS2 got rid of that assumption around 2003 or so.
Indeed, if you look at the datasheet for the AX215 (http://www.appotech.com/dp/product/ax215) you will see it accommodates blocks of up to 8K. This is true of the other flash memory controllers as well.
There is no reason to believe a POSIX filesystem is any more or less efficient on a flash memory controller. But I will add this: the reason error detection and correction is done at the controller level and not the driver level is so that drivers do not have to know about every kind of flash memory, its structure, the number of banks, etc. Requiring drivers to be updated every time an SD card manufacturer wanted to tweak the internal design would only end in tears.
There are plenty of documents and discussions about this all over the Internet, but here is how these memory cards generally work: they simulate a 512-byte block device. Since they have to work with any filesystem whatsoever, they have to target the lowest common denominator, and they cannot optimize their operation based on high-level details, like e.g. the type of data, expected usage frequency and so on. Just having to simulate a 512-bytes-per-sector device is a major drag all by itself.
Moving the details of filesystem-implementation and low-level hardware details into the kernel allows for optimizing the codepaths and for taking advantage of details these systems normally cannot.
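As a rough illustration of why the 512-byte emulation hurts, consider the mismatch between a 512-byte logical sector and the much larger program/erase units of modern NAND. The numbers below are typical example sizes I picked for illustration, not figures from any specific card:

#include <stdio.h>

int main(void)
{
    const int sector     = 512;              /* what the host writes          */
    const int flash_page = 8192;             /* smallest programmable unit    */
    const int erase_blk  = 4 * 1024 * 1024;  /* smallest erasable unit        */

    /* Worst case: the target page already holds data, so the controller must
     * read the surrounding erase block, merge the 512-byte update, erase the
     * block and program it all back. */
    printf("one %d-byte host write can touch a %d-byte page\n", sector, flash_page);
    printf("worst-case write amplification: %dx\n", erase_blk / sector);
    return 0;
}

A flash-aware filesystem avoids most of this by batching writes into whole pages and erase blocks in the first place, which is the kind of optimization the kernel cannot make when all it sees is a plain 512-byte block device.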
Here are a few links you can read if you care to:
https://en.wikipedia.org/wiki/Flash_file_system
https://en.wikipedia.org/wiki/YAFFS
https://en.wikipedia.org/wiki/F2FS
http://lwn.net/Articles/276025/
http://lwn.net/Articles/528617/
http://www.linux-mtd.infradead.org/doc/ubifs_whitepaper.pdf
Hardly. And it’s already done in multiple different production-ready flash-filesystems, it’s not like you’d have to invent it all from scratch.
But on the other side you have to face the problem of crappy storage capabilities – remember, those CPUs are there to perform data recovery because the underlying storage medium is unreliable. I’m not sure that marking bad blocks would be sufficient.
The chips don’t do any sort of black magic; they store some sort of ECC + parity and use that to reconstruct the data when needed. There is no reason why the same couldn’t be done in an in-kernel driver just as well. ECC, parity and so on are, again, well-known, ages-old techniques.
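Just to show the principle (and only the principle – this toy C example is mine, and real controllers use much stronger codes like BCH or LDPC): a single XOR parity byte is enough to rebuild one byte that is known to be bad, which is the same “store redundancy, recompute on read” idea writ small.

#include <stdio.h>
#include <stdint.h>

#define N 8

/* XOR parity over a group of bytes, stored alongside the data. */
static uint8_t parity_of(const uint8_t *d, int n)
{
    uint8_t p = 0;
    for (int i = 0; i < n; i++)
        p ^= d[i];
    return p;
}

int main(void)
{
    uint8_t data[N] = { 'f', 'l', 'a', 's', 'h', '1', '2', '3' };
    uint8_t parity  = parity_of(data, N);   /* recorded at write time */

    data[3] = 0x00;                         /* simulate a bad cell at index 3 */

    /* Rebuild the erased byte: XOR the parity with all surviving bytes. */
    uint8_t rebuilt = parity;
    for (int i = 0; i < N; i++)
        if (i != 3)
            rebuilt ^= data[i];

    printf("recovered byte: '%c'\n", rebuilt);  /* prints 's' */
    return 0;
}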
I’m not so sure about that. It would definitely be the case if the storage were merely as unreliable as HDDs are. But in this case I would not be surprised if there is something smarter than plain ECC in place – e.g. some level of redundancy. In the end, I would expect there to be a reason behind these custom, incompatible firmwares – a simple “ECC + parity” scheme would hardly justify them compared to a standardized interface and driver-based operation.
Anyway, investing effort in such an outright crappy machine looks like a waste of effort to me. These microSD cards may serve as a good source of components (as in: disassemble and take what you need), but I doubt that a microSD card running an OS would be a good investment of time.
On the other hand, a cluster of a dozen identical SD cards may be cost-effective, particularly if configured in some sort of RAID, storage-wise…
Redundancy in flash memory devices comes from remapping sectors. E.g. you have an SD card that says it’s 16GB in size, but in reality it’s 18GB or something, and the controller silently remaps sectors into the spare capacity without letting you in on it. This is how SSDs do it, too. It’s still bad-block management; you’re just never informed that it’s happening, and not all devices can even do it. It’s usually only higher-end SD cards that have some spare area.
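Here is a little C sketch of that idea, with made-up table sizes (the real thing lives in the controller firmware and persists its mapping in flash): the host keeps addressing the same logical sectors while a worn-out one is quietly redirected into the hidden spare pool.

#include <stdio.h>

#define VISIBLE_SECTORS 16   /* what the host is told the card has */
#define SPARE_SECTORS    4   /* hidden over-provisioning           */

static int remap[VISIBLE_SECTORS];         /* logical -> physical sector */
static int next_spare = VISIBLE_SECTORS;   /* first unused spare sector  */

/* A sector wore out: silently point its logical number at a spare one. */
static void mark_bad(int logical)
{
    if (next_spare < VISIBLE_SECTORS + SPARE_SECTORS)
        remap[logical] = next_spare++;
}

int main(void)
{
    for (int i = 0; i < VISIBLE_SECTORS; i++)
        remap[i] = i;                      /* identity mapping to start  */

    mark_bad(5);                           /* physical sector 5 wore out */
    printf("logical sector 5 now maps to physical sector %d\n", remap[5]);
    return 0;
}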
Who was even talking about running an OS inside an SD-card?
I doubt it’s all that simple. Bad block management and ECC are generally implemented in hardware, so if they were enough, vendors could use less complex dedicated hardware and save a couple of cents per shipped unit. Given the number of units shipped, anything that saves even a fraction of a cent per unit would get implemented in the long run. The fact that SD cards are operated by more general-purpose processing units suggests that the techniques you mention are not sufficient.
And again, all OSes have had some sort of bad block management, ECC and parity checks for ages. Both Linux and Windows (Embedded) have drivers for various flash memory devices. Were the problem with USB sticks and SD cards that simple, it would have been solved in software without raising the price of each shipped unit.
Actually, they do quite a lot of black magic, since modern flash memory is too crappy to be handled by simple ECC parity/correction. In fact, this black magic is a big part of current SD card firmware.
Because that adds complexity and thus increases the chance of failure? There is a reason why these little chips exist: it’s to do one simple little job and do it well, and even then there are plenty of failures out there. What you propose is sticking it into the kernel… which is also handling a bazillion other jobs… and you expect that not to be seriously crash-prone and buggy?
Sure, you’d probably get a speed-up, but at what cost? The only thing we agree on is that 512 bytes is stupid; if we could switch to 4K defaults and still have the controller chip do the heavy lifting? Sounds good. But “let the kernel devs handle it” certainly hasn’t been good for wireless, and I seriously doubt it’d be good when we are talking about your data.
Yes, I can see you can reference uncited material and describe the general workings of a 512-byte block storage interface, but can you describe the actual benefits that are there?
Such as? Be specific.
These are all generic descriptions of the filesystems involved and say nothing on the exact benefits of direct flash access. For example, the first lwn.net article simply mentions that direct flash access is beneficial, but doesn’t say anything specific about it either.
In my experience, the primary benefit of a flash filesystem is that you can avoid having a separate controller chip and software in a highly embedded box (e.g. a set-top box). Fewer chips equals lower cost.
If you’re only talking about memory cards and things such as eMMC, where performance isn’t key, then yeah, you can get by on existing algorithms. But take a modern high-performance SSD and you’ll begin to understand the complexities involved. Take, for instance, something like an Intel DC S3700: the 800 GB model consists of several dozen chips with a raw capacity of (IIRC) 960 GB, 1 GB of RAM, a few supercaps and a high-performance ARM CPU. If you wanted to control the flash directly and efficiently here, you’d have to duplicate e.g. the power-loss protection by having your entire machine on a UPS. At the same time you’d have to contend with very large devices, which would make most flash filesystems barf (exactly because they’re designed for small capacities on highly embedded systems) – e.g. UBIFS’ linear block header scan at startup would take a long time to complete.
On the other hand, regular filesystems can be adapted to the specifics of flash relatively easily. Simply pin the smallest block size at something meaningful (like 4k) and relax the allocator algorithm to something like first-fit. Synchronous writes can already be communicated to the SSD easily by tagging them as such or by using cache flushes to implement write barriers. And block erase management can be implemented by issuing appropriate TRIM/UNMAP commands (the benefits of which extend beyond flash to things like emulated logical volumes on SANs).
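For example, on Linux a userspace tool can hand a discard straight down to the device with the BLKDISCARD ioctl, roughly as in the sketch below. The device path is a placeholder and the call destroys data in the given range, so take it purely as an illustration of how “this block is free” gets communicated to the hardware:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKDISCARD */

int main(void)
{
    const char *dev = "/dev/sdX";          /* placeholder block device    */
    uint64_t range[2] = { 0, 1 << 20 };    /* { offset, length } in bytes */

    int fd = open(dev, O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, BLKDISCARD, &range) < 0) /* tell the device the range is unused */
        perror("BLKDISCARD");
    else
        printf("discarded %llu bytes at offset %llu on %s\n",
               (unsigned long long)range[1], (unsigned long long)range[0], dev);
    close(fd);
    return 0;
}

Filesystems do the same thing internally when discard support is enabled; the device is then free to erase the underlying blocks whenever it suits its own garbage collection.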
When even Intel uses ARM for their embedded stuff…
Well the fact that microSD cards have microcontrollers is well known. The fact that they can execute arbitrary code isn’t. Most microcontrollers like this are a bit more locked down, programmed with OTP (one time programmable) ROM and unable to execute code in RAM.
The CPU (AM08EB) in the HEXBUG Spider is more locked down than the ones Bunnie and Xobs found.
Pretty much everything and anything has an MCU in it, so everything is sort of hackable in a sense, as long as the MCUs aren’t of the ultra cheap write-once type.
Even my home beer tap thing has a PIC in it.
The cool thing about this SD card thing is that SD cards are inherently easy to interface with. So, instead of having to cut traces and solder joints onto things that really aren’t meant to be soldered on, you get a nice, compact package with a well documented electrical interface. SD card sockets are cheap, and so are the cards themselves. Cool things will come from this. That’s pretty much guaranteed.
The article is nicely written, but it’s big whoop about a nonissue. What they’ve done, assuming you’ve got direct hardware access, is discovered a method to flash the controller and run custom code on it. Custom code can do whatever you want with the card.
So? What’s the story here? That you can run code on a flash controller? It appears to me that the guys simply wanted to ride the wave of recent headline buzz about security and the revelations about the NSA hacking the machines of the general population. This particular discovery, however, of being able to flash a controller does nothing to change the security landscape. Any storage encryption system worth its salt will protect the machine from forged data, regardless of the amount of control an attacker has over the stored data itself.
The buzz is that the microcontrollers are not locked down. Also the hacking (for good or bad) possibilities.
I don’t see where you get that. Bunnie Huang and Xobs are well known hackers who like to learn what they can about the hardware we use. They mentioned the MITM risk but they also talked about upside possibilities.
You sure about that? This is just off the top of my head, but a compromised card could silently preserve data that has been deleted. It could silently infect executables as they are being read. And while you may encrypt all your filesystems, that is far from the norm.
Seriously, how is this news? I would expect microcontrollers in your hardware that allow you to load any firmware to be the norm. Seriously, have we become so accustomed to being held hostage by hardware manufacturers that we need to celebrate the one instance when we aren’t?
So can I have a talk at CCC about how flashing the firmware in your BIOS can be used for nefarious purposes? How about the firmware in your SATA or SAS controller? Your DVB card? Your NIC? Imagine, the driver for most Broadcom NICs includes a firmware blob that the driver loads and the card simply runs! Scary stuff!
They already do that (ever heard of block indirection?). That’s why truly security-conscious environments dispose of storage devices like this: http://www.youtube.com/watch?v=sQYPCPB1g3o
Or simply encrypt everything and don’t keep the cleartext key on the device. Anything else is security theater.
That’s prevented by code signing – already standard on most mobile platforms and being quickly adopted everywhere else (e.g. OS X by default prevents non-signed apps from running).
Unsecured storage = I don’t care about the data. That is hardly news.
I am working in the embedded industry. And really, none of our customers does anything to protect their machines from being hacked. Some use CRCs, but only to be sure the flash code is still intact.
What embedded industry would that be? I’ve worked with set-top-boxes w/ content-protection and in that space, signed bootloaders and signed kernels are standard. And by “mobile platforms” I meant the likes of Android, iOS, WP, etc.
Also, how would having the controller be more locked down and unflashable protect you from these kinds of attacks? If you hand your hardware to your adversary for manipulation, can’t they just hand you a piece of hardware of their own with entirely custom firmware? Given direct hardware access, there’s almost no limit to the attacks you can mount.
Defense (*grin*, these euphemisms), medicine, automation, railway.
Yes, it seems engineers of consumer products do add these things. But maybe rather to protect their product, not the customer 🙂
How much do you know about the embedded world? Have you ever had a firmware update for your microwave, your toaster oven, your cheap RC toy, etc.?
The vast majority of microcontroller applications are never intended to be updated, ever. Almost all microcontrollers come in one-time-programmable versions, or can be locked down after programming.
I can’t speak to every industry that uses microcontrollers, but for the ones I have worked in if the product isn’t intended to be updatable it is locked down.
As I said in another comment, the microcontroller in the HEXBUG Spider is more locked down than the memory cards in question.
You understand that is not how most people operate, right? Yes, the most secure environments do the things you talk about, but the vast majority of people who do care about their data do not do these things, nor are they even aware of the implications.
The fact is these cards are potentially attackable in ways that will be very very hard to detect, and seriously should have been locked down to prevent it.
But we’re not talking about microwaves, toaster ovens, cheap RC toys or “the vast majority of the uses of microcontrollers” are we? We’re talking about computers and storage devices.
This may be the case for very simple single-purpose consumer electronics, but it is definitely not true for general-purpose computers. Any manufacturer that produces a firmware-controlled piece of computer hardware without some sort of in-field upgrade possibility is setting themselves up for serious grief down the road when people inevitably hit firmware bugs or compatibility issues.
Irrelevant. You’re talking about a different class of device.
But that’s the trick, “people” don’t need to do anything, hence why I wrote: “code signing – already standard on most mobile platforms and being quickly adopted everywhere else”.
As is the firmware in your hard drive, your NIC, your storage controller, your BIOS and literally anything else you can think of in your computer besides the CPU+RAM combo. Seriously, all this does is emphasize that you need to authenticate all the code you run on your machine.
At first: it is a nice hack. Of course, I wonder how much time those guys spent on it.
And then: most computers have an SD card reader, so with a “hacked” (but empty!) SD card I can get a trojan horse onto a computer: plug it in, format it (“see, it is empty, no harm” :-). The next time some .exe is stored, it will be modified to include the trojan horse.
This is nothing but a variation of the evil maid attack. Properly authenticating the data you read is always key, regardless if the corruption is intentional or accidental. Most checksummed filesystems will already defeat such an attempt, though it always helps to use some crypto-secure hash algorithm.
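Roughly what a checksummed filesystem does at read time, sketched here in C with OpenSSL’s SHA-256 standing in for whatever hash the filesystem actually uses (link with -lcrypto; the “stored” digest is computed in place purely for the demo):

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void)
{
    unsigned char block[512] = "pretend this is a filesystem block";
    unsigned char stored[SHA256_DIGEST_LENGTH];
    unsigned char fresh[SHA256_DIGEST_LENGTH];

    SHA256(block, sizeof(block), stored);   /* digest recorded at write time       */

    block[10] ^= 0x01;                      /* a misbehaving card flips a bit      */

    SHA256(block, sizeof(block), fresh);    /* digest recomputed when reading back */
    if (memcmp(stored, fresh, sizeof(stored)) != 0)
        puts("checksum mismatch: block rejected");
    else
        puts("block accepted");
    return 0;
}

In a real filesystem the stored digest would of course live in metadata rather than next to the data it protects – ZFS, for instance, keeps checksums in the parent block pointers.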
Right, I didn’t think of this. So the malware would have to know the filesystem.
Is NTFS checksummed?
No, it isn’t. However, speaking about SD cards, the most common case of executable code being loaded onto these is when they serve as storage expansion in mobile devices, which employ code signing.
“The most common case”? Well, I would not sign off on that statement. Anyway, a quick search revealed that e.g. Siemens PLCs use MMC cards. So, I wonder how Stuxnet got onto those PLCs? Maybe via a hacked MMC card.
Does Siemens use code-signing? I doubt it.
Even if this were the case and they used exactly this method, one high-profile espionage case doesn’t mean that this is usable as a wide-area exploit. You’re attempting to create a general case out of a corner case.
Btw: we know how it got there; read the Wikipedia page on Stuxnet. Hint: it wasn’t MMC-controller-embedded malware that modifies data on the fly.
Given that I’ve pulled ATtiny4s out of greeting cards – which are more powerful than my first computer (a COSMAC Elf) – that something which actually needs tech has its own low-end processor on it isn’t really a shock.
No, the fact that there are embedded processors in flash memory isn’t news. That has been known for a very long time.
The news is that they aren’t locked down and are hackable.
That ATtiny4 runs a few orders of magnitude faster than your old Elf, but then the Elf did come with 8x the RAM.
This is quite scary even if one was not overly paranoid about the reach of the NSA hacking tool box.
And I’m pretty sure that the corporate IT world will take notice – as most recent systems have an SD memory card slot. This is another potential point of entry, even more difficult to monitor than USB drives.