So what is a microkernel, and why would you run Unix on top of another OS?
A microkernel is a highly modular collection of powerful, OS-neutral abstractions upon which operating system servers can be built. In Mach, these abstractions (tasks, threads, memory objects, messages, and ports) provide mechanisms to manage and manipulate virtual memory (VM), scheduling, and interprocess communication (IPC).
This modularity enables scalability, extensibility, and portability not typically found in monolithic or conventional operating systems. Because the Mach microkernel is OS-neutral, different OS "personalities" such as Linux can be hosted on it. Perhaps the most significant advantage is the focus on portability: the machine-dependent parts are concentrated in the microkernel and decoupled from the hosted OS.
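The flavor of these abstractions can be sketched in a few lines. The following is a toy model (it is not Mach's actual API): a "port" is a kernel-managed message queue, and "tasks" communicate only by sending and receiving messages on ports, here with a reply port carried inside the request.

```python
import queue
import threading

# Toy model of Mach-style IPC: a "port" is a kernel-managed message
# queue; "tasks" hold send or receive rights to ports and communicate
# only by enqueueing and dequeueing messages.  (Illustrative sketch,
# not Mach's actual interface.)

class Port:
    def __init__(self):
        self._messages = queue.Queue()

    def send(self, message):          # requires a send right
        self._messages.put(message)

    def receive(self):                # requires the receive right
        return self._messages.get()

# A "server task" implementing a trivial service on its port.
def echo_server(port):
    reply_port, payload = port.receive()
    reply_port.send(payload.upper())

service_port = Port()
reply_port = Port()

server = threading.Thread(target=echo_server, args=(service_port,))
server.start()

# The "client task" sends a message carrying its reply port.
service_port.send((reply_port, "hello"))
result = reply_port.receive()
print(result)                         # -> HELLO
server.join()
```

Everything an OS personality needs, from page-fault handling to file service, is then expressed in terms of messages on ports like these.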
Microkernels move into "user space" many of the OS services that other operating systems keep in the kernel. This has significant effects in the following areas:
If there is a problem with a particular service, it can normally be reconfigured and restarted without having to restart the complete OS. This should be helpful for situations requiring high availability.
Moreover, since services now run in completely independent memory spaces (which is not the case for kernel-level services), bugs and misconfigurations are less able to corrupt the kernel.
In addition, the "true kernel" winds up smaller in scope, and thus ought to be easier to understand and verify. The L4 microkernel occupies about 32 KB of memory, which severely limits how complex it can realistically be.
Services that run within the kernel effectively have "ring 0" privileges, which is to say that they can do anything, anywhere, at any time. Unix processes that run as root don't have that much control over the system.
A problem at present with Linux is that any program needing direct graphics access must run setuid root, because that is the only way of having permission to access the hardware. This has the unfortunate effect that the program gets "root" access to everything on the system, not just the screen.
By running the services as less-privileged user processes, their access to system resources (e.g., the ability to mess things up) is far more restricted.
Moreover, "security" becomes less monolithic: even system services, and indeed the security services themselves, can be forced to comply with security requirements.
Services can be changed without needing to restart the whole system.
Work has been ongoing to make Linux more dynamically reconfigurable; loadable kernel modules are a good example.
Makes Coding Easier
Kernel code usually requires the use of special memory allocation and output routines since the kernel cannot depend on a lower level to manage these things for it. User-mode code is thus simpler to write than "kernel" code, because it doesn't need to worry about kernel-specific restrictions.
Lower "fixed" memory footprint.
Kernel-allocated memory (code and data) is generally forced to stay resident in physical memory; swapping it out is forbidden. The more "kernel-like" code is moved to user space, the more of the OS's services can be swapped out when they are used infrequently.
This is why it is advantageous for Linux to run X as a separate process rather than throwing it into the kernel: portions that are not being used can get swapped out.
Moves us closer to real-time performance.
Interrupts are usually disabled while in kernel mode, to prevent interruption of critical processing. The less code in the kernel, the less time interrupts spend disabled.
Processes with real-time requirements would be given better access to the microkernel than other processes, including some that would normally be considered "base" OS services.
Simplifies Symmetric Multiprocessing code
Managing multiple processors requires a fair bit of internal bookkeeping. The more services that sit in the kernel, the more bookkeeping code is required, and the less effective use can be made of the additional processors. Changing a large, single-threaded monolithic kernel to make effective use of SMP hardware is very difficult.
If the OS runs atop a microkernel specifically written to make effective use of SMP, then SMP support does not need to be obtrusive in the "main" kernel.
Factors out complexity
Software complexity that relates to (for instance) hardware interfacing can be factored out into clearly separate modules. This allows the overall system functionality and complexity to grow further without becoming completely unmanageable.
The microkernel approach has been taken by Apple in creating a version of Linux that runs as a "personality" on top of Mach on Power Macs. This should make it considerably easier to port Linux to additional system architectures, as Mach is much smaller than Linux.
Are there any downsides to the use of microkernels? Yes, indeed.
Most microkernels are not tiny, despite the name. (QNX is a notable exception; L4 may be another.)
The overall RAM footprint of the system will likely increase.
Communications between components of the extended "OS" require formalized message-passing mechanisms.
As a result, code must be written to use the formal mechanisms, rather than processes informally sharing system memory. This may reduce performance.
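The cost is easy to see in miniature. In a monolithic kernel, one service can hand another a pointer into a shared structure; between separate address spaces, the same data must be marshalled into a message, copied, and unmarshalled on the far side. In this sketch, pickle stands in for the marshalling step; real IPC also pays for context switches and kernel entry/exit, which are not modeled here.

```python
import pickle
import timeit

# A request structure shared between two "services".
request = {"op": "read", "inode": 4711, "offset": 0, "length": 4096}

def shared_memory_call(req):
    # Monolithic style: the callee just follows the "pointer".
    return req["length"]

def message_passing_call(req):
    wire = pickle.dumps(req)          # marshal into the message
    copy = pickle.loads(wire)         # copy + unmarshal in the other "task"
    return copy["length"]

direct = timeit.timeit(lambda: shared_memory_call(request), number=100_000)
messaged = timeit.timeit(lambda: message_passing_call(request), number=100_000)
print(f"direct: {direct:.3f}s  messaged: {messaged:.3f}s")
```

Both calls return the same answer; the messaged path simply pays for serialization and copying on every request.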
Analysis of the HP PA-RISC port of Linux atop Mach indicates that the microkernelled version of Linux runs approximately 10% slower than HP-UX.
New kinds of deadlocks and other error conditions are possible between system components that would not be possible with a monolithic kernel.
Probably the fundamental consideration encouraging microkernel development is the growth of interest in SMP and other multiprocessing applications; simplifying ports to new architectures and encouraging compatibility with other OSes come close behind. And then there are the Hurd arguments...
Discussions of implementing Linux over a microkernel come up periodically in the Linux kernel/system design newsgroup; MkLinux represents some concrete steps in this direction. Many of these ideas can be applied to a non-microkerneled OS in at least limited ways; Linux "kernel modules" are a good case in point.
The Mach home page at Carnegie Mellon University.
The kernel for the "official" FSF/GNU operating system project (well, at least when RMS isn't griping about people failing to call it GNU/Linux).
A (seemingly defunct) site seeking to support continuing development of Mach4 and Lites1.
Argante consists of a low-level virtual machine that is treated like a microkernel. Operations are implemented using loadable modules that run atop that kernel.
Not unlike the Java virtual machine or P-code concepts, this provides its own low-level, hardware-independent "machine language."
Unlike in Unix, the Argante notion of "process" is not something spawned and killed, but is rather more like the VMS/MVS notion of functional services that are started and stopped from the console.
Mungi - another OS that runs atop L4
It is not a Unix-like system; it implements a single 64-bit address space, shared by all processes and processors.
From the makers of Chorus, the C++-based microkernel OS...
An IBM/University of Toronto project to produce a high performance general purpose OS kernel for cache-coherent multiprocessors.
They assume 64-bit SMP or NUMA CPUs, and implement, on top of that, a kernel that supports the Linux API and will implement the Linux ABI. The IPC implementation resembles that of L4; in order to avoid IPC overheads, much of the functionality is implemented in application-level libraries, rather like the MIT Exokernel.
The fundamental attribute distinguishing monolithic vs. microkernel vs. exokernel architectures is what each implements in kernel space (code that runs in supervisor mode on the CPU) versus what it implements in user space (code that runs in non-supervisor mode).
The monolithic architecture implements all operating system abstractions in kernel space: device drivers, virtual memory, file systems, networking, device/CPU multiplexing, and so on.
The microkernel architecture implements lower-level OS facilities in kernel space and moves higher-level facilities to processes in user space. What usually distinguishes higher-level from lower-level facilities is whether they can be implemented in a platform-independent manner; a related distinction is whether they are sufficiently general to support various operating-system "personalities." In microkernel architectures, device drivers, virtual memory, process/task/thread management and scheduling, and other such facilities are implemented in the kernel, while parallel facilities that specialize them for the operating system's personality are implemented in user-space processes. Also implemented in user space are file systems, networking, and the like, which employ the lower-level facilities (such as device drivers) provided by the microkernel.
In contrast, the exokernel architecture implements essentially nothing in kernel space. The exokernel's sole purpose is to securely multiplex hardware resources among user-space processes. Device drivers, virtual memory, even CPU multiplexing and process management are implemented in user space. Supervisor-mode hardware events, like timer ticks and page faults, activate stub handlers in the kernel that simply pass the event to the user-level process implementing the relevant facility's policy. The same system can simultaneously implement forward and inverted page tables, or compute-job-friendly and interactive-job-friendly process scheduling, and an application can pick and choose whichever policies will provide it with the best performance.
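The dispatch pattern can be sketched as follows. The names here are hypothetical (this is not the Aegis interface): the "kernel" holds no policy at all, it only remembers which user-level handler each process registered for each event, and upcalls it. Two processes install different page-fault policies on the same kernel.

```python
# Exokernel-style event dispatch: the kernel stub does no processing;
# it records registrations and forwards hardware events to user space.
# (Hypothetical names, illustrative of the idea only.)

class ExokernelStub:
    def __init__(self):
        self._handlers = {}           # (pid, event) -> user-level handler

    def register(self, pid, event, handler):
        self._handlers[(pid, event)] = handler

    def deliver(self, pid, event, data):
        # The stub's whole job: look up the handler and upcall it.
        return self._handlers[(pid, event)](data)

# Two different user-space page-replacement policies.
def fifo_fault_policy(page):
    return f"evict oldest page, map {page}"

def lru_fault_policy(page):
    return f"evict least-recently-used page, map {page}"

kernel = ExokernelStub()
kernel.register(pid=1, event="page_fault", handler=fifo_fault_policy)
kernel.register(pid=2, event="page_fault", handler=lru_fault_policy)

print(kernel.deliver(1, "page_fault", 0x1000))
print(kernel.deliver(2, "page_fault", 0x1000))
```

The same fault, delivered to two processes, produces two different policy decisions; the kernel never knew what either policy was.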
The exokernel architecture is essentially the extension of the philosophy of RISC cpu architecture to the operating system level. The only exokernel architecture that I know of (MIT Aegis) has come up with some very novel ways to implement this.
Were I to implement my dream system it would be of exokernel architecture.
On 17 Mar 1998 21:12:52 GMT:
I can understand your pet peeve about microkernels and message passing when looking at Mach or Minix (not so much when looking at QNX, which performs quite well and isn't bloated). On the other hand, a lot of "system" work on Linux gets done by efficient message passing; it's just a part of the system you are not involved in at all (hint: it's the X Window System). Why does message passing work quite well between X and the client? Because X manages to pack a lot of messages together before it actually switches tasks. That's because X's calls are asynchronous (with a few exceptions, some of them mistakes).
Unix's syscalls are all synchronous. That makes them a bad target for a microkernel, and it is the primary reason why Mach and Minix are so bad: they try to emulate Unix on top of a microkernel. Don't do that.
If you want to make a good microkernel, choose a different syscall paradigm. Syscalls of a message-based system must be asynchronous (e.g., asynchronous I/O) and event-driven (you get events as answers to your various requests, delivered in order of completion, not in order of submission). You can map Unix calls on top of this on the user side, but it won't necessarily perform well.
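The paradigm described above can be sketched as a submission/completion interface. This is a hypothetical toy (no real kernel's API): callers submit requests and keep working, completions arrive in completion order, and a Unix-style blocking call is then just "submit and wait for my ticket". To make the ordering point obvious, this toy completes requests in reverse order of submission.

```python
import itertools

class AsyncSyscallInterface:
    """Toy asynchronous, event-driven 'syscall' interface."""

    def __init__(self):
        self._ticket = itertools.count()
        self._pending = []

    def submit(self, op):
        tid = next(self._ticket)
        self._pending.append((tid, op))
        return tid                     # caller keeps working

    def poll(self):
        # Toy completion model: the most recent request finishes first,
        # so completion order deliberately differs from submission order.
        if self._pending:
            tid, op = self._pending.pop()
            return (tid, f"{op} done")
        return None

def synchronous_call(iface, op):
    """Unix-style blocking call mapped onto the async interface."""
    wanted = iface.submit(op)
    while True:
        event = iface.poll()
        if event and event[0] == wanted:
            return event[1]
        # (events for other requests would be queued for their callers)

iface = AsyncSyscallInterface()
t_read = iface.submit("read")
t_write = iface.submit("write")
print(iface.poll())                    # the later "write" completes first
print(iface.poll())
print(synchronous_call(iface, "stat"))
```

Mapping the synchronous call on top works, but it throws away the batching opportunity: each blocking caller spins until its own ticket completes, which is exactly why emulated Unix on a message-passing kernel performs poorly.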
This seems pretty convincing. The L4 designers indeed found that message-passing overhead led their microkernels to run more slowly than monolithic kernels offering the same Unix APIs. If applications were redesigned to use asynchronous message passing, they might perform very well; but when they use the synchronous Unix APIs, performance will suffer.
This is also consistent with embedded applications running well on QNX: if you design the application to use its message-passing APIs, it will work well.
Exokernels are a further extension of the microkernel approach where the "kernel" per se is almost devoid of functionality; it merely passes requests for resources to "user space" libraries.
This would mean that (for instance) requests for file access by one process would be passed by the kernel to the library directly responsible for managing file systems. Initial reports are that this in particular results in significant performance improvements, as data need not even pass through kernel data structures.
They are presently still largely in "research mode"; there are not, at this time, any readily available implementations that one could run at home.
An OS based on the MIT Exokernel.
A modern (post-2010!) entrant based around the combination of:
Applications constructed from your own code plus a series of OCaml libraries
A runtime package that allows the code to form a standalone "operating system" or a normal Unix binary
The Xen virtual machine monitor
A resultant OCaml-based "operating system" can leave out layers not needed by the application, and hence has far less code to load, configure, and run than (say) a Linux distribution. It can boot in as little as 50 milliseconds.
Memory and other resource consumption is also liable to be rather lower than for a usual Linux distribution in a VM.