Theo de Raadt unveiled and described an interesting new kernel security feature: Kernel Address Randomized Link.
Over the last three weeks I’ve been working on a new randomization feature which will protect the kernel.
The situation today is that many people install a kernel binary from OpenBSD, and then run that same kernel binary for 6 months or more. We have substantial randomization for the memory allocations made by the kernel, and for userland also of course.
However that kernel is always in the same physical memory, at the same virtual address space (we call it KVA).
Improving this situation takes a few steps.
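The effect is easy to demonstrate in userland with a hypothetical C sketch: ASLR randomizes where a position-independent binary lands on each run, and KARL extends the same idea to the kernel, which otherwise sits at the same KVA from one boot to the next.

```c
/*
 * Hypothetical demo of address randomization. Build as a
 * position-independent executable:
 *
 *     cc -fPIE -pie demo.c -o demo
 *
 * Run it twice: the loader picks a different base each time,
 * so the printed address changes between runs.
 */
#include <stdio.h>

static void some_function(void)
{
}

int main(void)
{
    printf("some_function() is at %p\n", (void *)some_function);
    return 0;
}
```

A kernel that never moves gives attackers none of that uncertainty, which is the gap KARL is meant to close.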
I’ve never seemed to own hardware that BSD wants to play with, which is a real shame. My hardly bleeding-edge Broadwell laptop with an Intel wireless card either crashes with one distro, or with another doesn’t seem to recognise the wifi card (no ethernet port), leaving me stuck…
The last time I asked for help on a BSD board, years ago, I got three responses rudely demanding I call it BSD and not bsd, and no actual help…
I guess if few people are using it, then it’s not a malware target…
I’ve run FreeBSD on desktop hardware without a hitch. It is trivial to install using a GhostBSD ISO. GhostBSD also runs in VirtualBox.
OpenBSD works fine with almost all Intel wireless cards.
http://man.openbsd.org/iwn.4
However you do need to use the fw_update tool to load the firmware.
http://man.openbsd.org/fw_update.1
Or copy it over via a USB stick.
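For example (assuming a working wired connection, or the firmware files already copied over), running fw_update as root with no arguments should detect the machine’s devices and install whatever firmware packages they need; the fw_update(1) page above has the details.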
They have a clear list of devices that work and those that don’t.
Give it a try. Broadwell has been working on OpenBSD for a year and a half now. Broadwell is the most recent hardware I’ll buy for that exact reason (i3, i7 NUC, X250 fwiw).
There’s no hardware configuration or tweaking needed at all. Install, reboot, and you have Xorg running with xenodm to login. It’s the simplest operating system to use.
It’s the newer Intel releases that are still a work in progress.
Yet, I’ve never found a great use for it in the tech stack. Suggestions? Maybe as a router and/or firewall?
Software I’d run on an application server isn’t *BSD friendly.
Firewall, router, IPSEC VPN, or, now, load balancer are the situations I could see using it.
They have quite a few technologies that make it really nice for network devices: CARP for failover, PF for firewalling, relayd for load balancing and proxying, and OpenIKED for IPSEC VPNs.
Then there is OpenSMTPD for email. This isn’t limited to OpenBSD, so it can be run on most Unix-like platforms.
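To give a flavour of the PF side of that list, here’s a minimal pf.conf sketch (a hypothetical default-deny ruleset; the ports are just placeholders):

```
# Minimal default-deny sketch (hypothetical; ports are placeholders)
set skip on lo                                # don't filter loopback
block in all                                  # drop unsolicited inbound traffic
pass out all                                  # allow outbound; state is kept by default
pass in on egress proto tcp to port { 22 80 } # but accept ssh and http
```

Default deny plus a handful of pass rules is the idiom PF encourages, which is a big part of why it’s pleasant for network boxes.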
Indeed. It makes an excellent firewall if you know how to set everything up. Also makes a nice web proxy.
I use it as a Desktop instead of Linux.
http://sohcahtoa.org.uk/openbsd.html (this isn’t mine but I basically do more or less the same thing).
Most open source applications are either in packages or ports.
The only things I can’t get working are Dropbox and Sublime Text, and I can live with Atom.
Not going to work for me; I prefer a number of Linux-only technologies in my workstation.
Such as, out of interest?
The Smart, Diligent People we’re used to.
When you’re so far ahead of the curve and you don’t know what else to do.
“Fuck it, let’s randomize the kernel!”
LOL!
Address randomization and ASLR serve no purpose in languages that don’t have exploitable memory errors in the first place. Clearly the purpose is to mitigate the consequences in the aftermath of these vulnerabilities, but the vulnerabilities are still there, and this fails to address the underlying causes.
Despite all the real security problems with C-based operating systems, they still have orders of magnitude more development resources than anything else. Although we have a few OS projects experimenting with robust programming languages that enforce safe memory usage at compile time, unless the big players start seriously pushing them over the incumbent C kernels, I’m not sure any competitor can achieve the critical mass needed to become mainstream.
You’re right, but a smaller sitting duck is always welcome.
Isn’t this common practice in secure embedded devices?
Enforcing safety at compile time is pretty hard. Linear types and things derived from them are AFAIK the most useful technique, but they create extra work for programmers. Checking partially at compile time and partially at run time is easier, and is what most safe languages do.
A light-weight safe programming language with manual memory management (and mechanisms to make management automatic in most cases) for systems programming would be nice. The hard thing is detecting dangling pointers with as little overhead as possible.
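To make the hard part concrete, here’s a deliberately minimal C sketch of the bug class in question (the use-after-free line is commented out; C compiles it without complaint either way, and that silence is exactly what a safe language has to take away at compile time):

```c
/* Hypothetical sketch of the dangling-pointer problem: the compiler
 * accepts this silently, and the commented-out use-after-free would
 * often appear to "work", which is what makes it so hard to detect
 * cheaply at run time. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof *p);
    if (p == NULL)
        return 1;
    *p = 42;
    printf("%d\n", *p);
    free(p);               /* p is now dangling */
    /* printf("%d\n", *p);    <- use-after-free: undefined behavior */
    p = NULL;              /* the manual discipline a compiler could enforce */
    return 0;
}
```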
While I agree with you, in the world we live in ATM systems are generally programmed in C and just rewriting everything isn’t realistic.
Megol,
I hear you, and yet until we address it, our software will continue to be vulnerable to memory corruption.
Hi,
..and then after we address it, our software will continue to be vulnerable to compiler bugs and “row hammer” and malicious devices plugged into gaping “thunderbolt” holes and…
– Brendan
Brendan,
Yep, bugs are always possible without formal proofs anyway, but we should still make an effort to fix what we can.
With firewire/thunderbolt, that’s what you get when you expose the system’s inner bus to peripherals. USB protocols are clearly safer by design (although I suspect a USB fuzzer would expose a lot of holes in our USB stacks).
Regarding hardware corruption, that’s tough. We can mitigate it by adding ECC and redundancy to everything. A fully redundant CPU could mitigate soft CPU errors, but not systematic ones where both CPUs have the same fault. You linked a video about open source CPUs, which could help detect systematic faults. Having redundant CPUs from different manufacturers could catch both types of errors, but it would be very expensive and largely wasteful for events that (I presume) are extremely rare. It could make sense in more mission critical roles though.
Yes. But even using safe languages isn’t enough IMHO; making programmers handle exceptional cases correctly is also important – it doesn’t matter if memory is only accessed by an entity that has the rights if it writes the wrong data.
But that is another story.
Megol,
I agree it’s not going to verify the programmer’s logic, but it is enough to prevent the segfaults and pointer errors that address randomization is designed to mitigate.
Regarding “checked exceptions”, that opens up a whole can of worms I don’t want to get into, haha. But in principle, runtime exception handling is orthogonal to compile-time pointer verification.
Exceptions can be beautiful. They are however often a PITA.
Megol,
If you want to criticize what’s included in the standard library, I won’t object. There are things I don’t like about libc and the C++ STL too, however this isn’t normally relevant for kernel code, because kernel code usually replaces the standard library with something else; the same is true with Rust.
Edit: I’m not trying to sell anyone on Rust per se, so it’s fine if you don’t like it. The main point I want to make is that the continued use of C is impeding the state of software security. With the lessons of the past several decades, we can do better than this.
I don’t think segfaults are the problem. If something segfaults, that means the memory protection set up by the OS has worked. It’s when it doesn’t segfault that is worrying.
kwan_e,
Both should be worrying IMHO. A direct segfault is arguably the best outcome and is certainly the easiest to debug! However, it’s still possible the fault created a latent condition where the heap continues to be used in its corrupted state for a while before segfaulting later. That is very difficult to debug, because you can’t directly track the segfault back to the faulty code that caused it.
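A hypothetical C sketch of that latent case (the overflowing strcpy is commented out; enable it and nothing faults at the bad write, because the smashed byte belongs to the allocator or a neighboring allocation – the damage only surfaces later):

```c
/* Hypothetical sketch of latent heap corruption: a one-byte overflow
 * usually doesn't fault at the faulty write itself. The crash, if
 * any, comes much later, in free() or in unrelated code. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *a = malloc(16);
    char *b = malloc(16);
    if (a == NULL || b == NULL)
        return 1;

    /* strcpy(a, "exactly sixteen!");  <- 17 bytes with the NUL:
       overflows a's block by one byte. Nothing faults here; the
       corruption sits quietly until a later free() or until b's
       data mysteriously changes. */
    strcpy(a, "fits fine");

    free(b);
    free(a);
    return 0;
}
```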
Well, as more people move to electronic cash, we won’t need ATMs anymore. Crisis averted.
So, the solution to insecure computers in charge of money, is other insecure computers in charge of money.
Got it.
Bill Shooter of Bul,
I assumed kwan_e was being sarcastic; I thought it was funny.
I wouldn’t call it sarcastic. It’s more self-deprecating as it’s making fun of me misparsing that sentence.
😀
(For anyone confused ATM in this case = At The Moment)
C is still one of the most feasible choices for OS kernel programming. Those magical “safe” languages from utopia-land always require a crapload of runtime written in “evil” C/C++ and assembly.
agentj,
C has the distinction of being the pioneer, and its ubiquitous support makes it the de facto standard for virtually all low-level work, but it’s not actually that unique in terms of its capabilities. Ada, Modula-2, or Pascal, etc. could also be used.
Most C programmers could probably switch to Dlang without much trouble since it was meant to build on C’s familiarity while fixing C’s most notorious problems.
As you know, Java and .NET (and a plethora of scripting languages) add safety through runtime checks, however developments in modern static languages like Rust enforce the same safe memory semantics at compile time, largely eliminating the need for runtime checks and making them suitable for low-level & real-time work.
If it weren’t for C’s incumbent advantages, and given its deficiencies, it wouldn’t be getting much attention if it were invented today. However, since we already have so many mission-critical C codebases and interfaces, we’re kind of stuck with it as our fundamental building block.
Edit:
If you disagree, do you mind providing an example of something C can do that others cannot?
If your system has any program written in a “safe” language combined with a buffer-overflow-prone language, then randomization is still useful.
ROP attack code needs access to machine instructions that are already in executable memory. It is a lot easier to do ROP if you know where to find the code you need.
And by ROP I also mean to include things such as function / virtual function pointer overwrites.
Randomized kernel addresses are good for this too because bugs do happen. If it was ever possible to copy data into the kernel through a buggy write() call, for example, and then trick the kernel into using that memory as a structure of some kind with a function pointer, randomized addresses would make that a lot harder. And the attack probably only gets one try, because getting a bad address would crash the system.
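A toy C sketch of that function-pointer scenario (hypothetical names; the plain assignment stands in for the corrupting write an attacker would need):

```c
/* Toy illustration of why predictable addresses matter. Kernels are
 * full of structures holding function pointers (file operations,
 * callbacks, and so on); corrupting one redirects control flow. */
#include <stdio.h>

static void benign(void)     { puts("expected path"); }
static void privileged(void) { puts("attacker's target"); }

int main(void)
{
    void (*handler)(void) = benign;

    /* Stand-in for the memory-corruption write: to pull this off for
     * real, the attacker must supply privileged()'s numeric address
     * in advance. If the kernel is relinked randomly, that address
     * differs on every boot, and a wrong guess crashes the machine. */
    handler = privileged;
    handler();
    return 0;
}
```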
What about Ada?
allanregistos,
Interestingly, this seems to apply to other languages as well: they are often safer, but the industry support for them is so poor that it becomes a pretty big disadvantage.
Most of us know C’s safety record is atrocious, but its preexisting codebase is so enormous that changing it is unpalatable, kind of like a “too big to fail” situation with software.
The main reason they ignore “the problem” is that they are C programmers with deep knowledge of UNIX and very good ones at that. They don’t want to throw it away and start building something in Rust today only to be told next year to switch to Mildew++.
They took 4.4BSD (something that worked) and slowly turned it into a more reliable system that still does everything you need. They keep innovating and making the system better and better – they call it “polishing a turd”.
Every once in a while, Silicon Valley kids come along, create a new language, build a secure microkernel that can do nothing, argue about it for a year or two, and then their ADHD leads them to greener pastures after it becomes a maintenance nightmare that no amount of stimulants can fix.
The real value is in mature systems. Windows, Linux, macOS. Not some toy project.
Of course, then the problem you run into is that you get Byzantine security features designed by a corporate committee of pointy heads attached to a patchwork of legacy code that originally followed toy project standards.
OpenBSD makes security easy and fortunately some of it leaks to mainstream OSs. Rewriting all your legacy code in Rust is not easy. Having to negotiate with your OS so that it allows you to write a file or else just disable everything and broadcast its contents to the Internet is not easy.
sakeniwefu,
Popularity is what determines if it’s a toy or not (just think how DOS was arguably a “toy” – many years behind alternatives, yet the PC bundling made it the most popular business platform). Many of us who have worked on these kernels do see the technical problems and we acknowledge they can be done better, however it’s extremely difficult to match the resources of incumbent technologies, which impedes merit based advancement for alternatives.
The difficulty does not lie in developing better operating systems, since we have the benefit of hindsight. The difficulty lies in overcoming this catch-22, where operating systems have more resources because they are more popular, and are more popular because they have more resources.
I’ll concede, switching from C is an uphill battle. If C were starting at the same starting line as modern languages, it would undoubtedly lose in favor of those that don’t have its share of problems. However, in reality C doesn’t start at the starting line; it starts near the finish line, and consequently wins a lot of races despite all of its problems.