The just-released Genode version 14.05 comes with new tools that greatly improve the framework's interoperability with existing software. Together with a new process-local virtual file system embedded in the C runtime, these changes should clear the way to scaling Genode well beyond its current state. Besides these infrastructural changes, the release comes with a new block-level encryption facility, enables USB 3.0 on x86, introduces SMP support to the base-hw kernel, brings real-time scheduling to the NOVA hypervisor, and adds guest-addition support for VirtualBox on NOVA.
After the feature-rich release 14.02, the Genode developers took the chance to thoroughly revisit the tooling and overall structure of the framework. The goal was to improve scalability with the steadily growing amount of third-party software combined with the framework. Genode-based system scenarios combine the work of up to 70 open-source projects. However, until now, the framework lacked proper tools to manage such third-party code in a uniform way. In particular, upgrades of third-party software were poorly supported. To overcome those problems, the project took the wonderful Nix package manager as inspiration, created a set of new tools, and reworked the build system to make the porting and use of third-party software much more enjoyable and robust.
Most ported third-party software relies on a C runtime. Genode offers a fairly complete libc based on FreeBSD’s libc. However, translating the POSIX API to the Genode API is not straightforward. For example, Genode does not even have a central virtual file system service. Hence, different applications call for different ways of translating POSIX calls to the Genode world. Until now, the different use cases were accommodated by specially crafted libc plugins that tailored the behavior of the C runtime per application. However, as the number of applications grew, the number of libc plugins grew with it. In the new version, the framework consolidates the existing libc plugins into a generic virtual file system (VFS) implementation. In contrast to a traditional VFS that resides in the OS kernel, Genode’s VFS is a plain library embedded in the C runtime. To the C program, it offers the view of a regular file system. But under the hood, it assembles the virtual file system out of Genode resources such as file-system sessions, terminal sessions, or block sessions. Since each process has its own VFS configured by its parent process, access to resources can be tailored individually per process.
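To make the idea concrete, here is a minimal sketch of what such a program looks like from the application's point of view. The "/dev/log" mount point is a hypothetical example chosen for illustration; the actual paths and their backing resources depend entirely on the VFS configuration supplied by the parent process.

/* plain.c - ordinary POSIX code, linked against Genode's libc */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Under Genode, this open() is resolved by the process-local VFS
       library inside the libc, not by a kernel. Depending on how the
       parent configured the VFS, "/dev/log" (a hypothetical mount
       point used here for illustration) might be backed by a terminal
       session, a file on a file-system session, or another Genode
       resource. */
    int fd = open("/dev/log", O_WRONLY);
    if (fd < 0)
        return 1;

    char const *msg = "hello via the process-local VFS\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}

The key point is that the same binary can be connected to entirely different resources merely by changing the parent-supplied VFS configuration, without recompiling the program.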
In addition to the infrastructural changes, the new version comes with plenty of platform-related improvements. Genode’s custom kernel platform for ARM devices, named base-hw, has received multi-processor support and a new memory management that eliminates the need to maintain identity mappings in the kernel. The NOVA microhypervisor has been adapted to make static priority scheduling usable for Genode, which makes the kernel more attractive for general-purpose OS workloads on the x86 architecture. Also related to NOVA, the project has continued its line of work to run VirtualBox on this kernel by enabling support for guest additions, namely shared folders, mouse-pointer synchronization, and real-time-clock synchronization.
In line with the project’s road map, the new version features a first solution for using encrypted block devices. The developers decided to use NetBSD’s cryptographic disk driver (cgd) as a Genode component. One motivation behind the use of cgd was to intensify the collaboration with the rump kernel project, which allows the execution of NetBSD kernel subsystems at user level. After the project successfully used rump kernels as file-system providers in the previous release, extending the use of rump kernels for other purposes was simply intriguing.
These and more topics are covered in the comprehensive release documentation for Genode 14.05.
Nice work. I always like to hear about new developments with Genode.
Genode is definitely one of the most interesting alternative OSs covered here. One thing I’m trying to figure out is how heavy the driver layer is for it. How will it compare to projects like Arrakis in its treatment of high-bandwidth, low-latency access to hardware?
There is actually a large degree of freedom. For example, if a network device does not need to be shared among multiple programs, it is possible to co-locate the network device driver (accessing the physical hardware), the networking stack, and the networking application into one program, similar to the exokernel concept.
To see what is possible, performance-wise, the following paper is very insightful:
“KV-Cache: A Scalable High-Performance Web-Object Cache for Manycore”
http://cs.iupui.edu/~fgsong/publication/ucc13.pdf
POSIX is the great equalizer, the lowest common denominator which guarantees (using the term very loosely here) that software will run on different operating systems with only a recompile. This is ostensibly a good thing, and yet I hate the POSIX standard. I find it limiting, outdated, and poorly designed. I regret that, while superior interfaces exist in projects like Genode and Plan 9, they will always be overshadowed by “good enough” POSIX, which is required just to get software to run on those alternative platforms.
It was POSIX that enabled Linux to become compatible with and eventually displace Unix. Yet I find it ironic that we’re suffering from a kind of POSIX lock-in with large barriers to change, forcing alternative projects to mimic what we already have.
why not?
There’s Noux, Genode’s runtime environment for running Unix software.