Different kernel design approaches
Naturally, the tasks and features listed above can be provided in many ways that differ in design and implementation. While monolithic kernels execute all of their code in the same address space (kernel space) in order to increase the performance of the system, microkernels try to run most of their services in user space, aiming to improve the maintainability and modularity of the codebase.[1] Most kernels do not fit exactly into one of these categories, but are rather found in between the two designs; these are called hybrid kernels. More exotic designs, such as nanokernels and exokernels, are mostly investigated by researchers and are not in widespread use.
Monolithic kernels
Main article: Monolithic kernel
In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Monolithic systems are easier to design and implement than other solutions, and are extremely efficient if well written. The main disadvantages of monolithic kernels are the dependencies between system components (a bug in a device driver might crash the entire system) and the fact that large kernels become very difficult to maintain.
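A minimal sketch of this structure in C, with hypothetical function names (it is not taken from any real kernel): the system-call entry point, the file-system layer, and the device driver are all compiled into the same kernel image, so the whole request path consists of ordinary function calls within one address space.

/* Hypothetical monolithic-kernel sketch: every layer lives in kernel space,
 * so a system call reaches the device driver through direct function calls. */
#include <stddef.h>

/* Device driver, compiled into the kernel image. */
static size_t disk_driver_write(const char *buf, size_t len) {
    (void)buf;
    /* ... program the disk controller directly ... */
    return len;
}

/* File-system layer, also in kernel space. */
static size_t fs_write(int fd, const char *buf, size_t len) {
    (void)fd;
    return disk_driver_write(buf, len);   /* plain call, no context switch */
}

/* System-call entry point: the only protection-domain crossing is the
 * user-to-kernel trap that got us here. */
size_t sys_write(int fd, const char *buf, size_t len) {
    return fs_write(fd, buf, len);
}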
Microkernels
Main article: Microkernel
In the microkernel approach, the kernel itself provides only basic functionality that allows the execution of servers: separate programs that take over former kernel functions, such as device drivers, GUI servers, etc. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel such as networking, are implemented in user-space programs referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow the system down, because they typically generate more overhead than plain function calls.
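As a rough illustration, assuming a hypothetical message format and IPC primitives (real microkernels such as Mach or L4 each define their own), a write request here is not a function call into the kernel but a message to a file-server process; the kernel's involvement ends at delivering the message.

/* Hypothetical microkernel sketch: the kernel only offers IPC primitives;
 * the file system is an ordinary user-space server process. */
#include <stddef.h>
#include <string.h>

typedef struct {
    int    opcode;        /* e.g. OP_WRITE */
    int    fd;
    size_t len;
    char   payload[256];
} message_t;

enum { OP_WRITE = 1 };

/* Assumed kernel IPC primitives (essentially the whole kernel API in this model). */
int ipc_send(int server_port, const message_t *msg);
int ipc_receive(int port, message_t *msg);

/* Client-side library routine: marshal the request and message the server. */
int client_write(int file_server_port, int fd, const char *buf, size_t len) {
    message_t msg = { .opcode = OP_WRITE, .fd = fd, .len = len };
    memcpy(msg.payload, buf, len < sizeof msg.payload ? len : sizeof msg.payload);
    return ipc_send(file_server_port, &msg);   /* kernel delivers to the server */
}

/* File-server main loop, running as a normal user-space process. */
void file_server_loop(int port) {
    message_t msg;
    for (;;) {
        if (ipc_receive(port, &msg) == 0 && msg.opcode == OP_WRITE) {
            /* ... forward to the disk-driver server via another message ... */
        }
    }
}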
Microkernels generally underperform traditional designs, sometimes dramatically. This is due in large part to the overhead of context switches, moving in and out of the kernel in order to pass data between the various applications and servers. It was originally believed that careful tuning could reduce this overhead dramatically, but by the mid-1990s most researchers had abandoned this approach. More recently, newer microkernels optimized for performance have addressed these problems.[5]
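The mode-switch cost itself is easy to observe on an ordinary Linux machine. The sketch below is a crude microbenchmark, not a rigorous one: it times a trivial system call against a trivial user-space function call. The system call is typically one to two orders of magnitude slower per operation, and a microkernel request adds full context switches to user-space servers on top of that.

/* Rough microbenchmark (Linux assumed): compares the cost of a real system
 * call with the cost of a plain function call in the same address space. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

static volatile long dummy;

static void plain_call(void) { dummy++; }   /* stays entirely in user space */

static double elapsed_ns(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void) {
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        syscall(SYS_getpid);                /* forces a user->kernel->user trip */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("system call:   %.1f ns/op\n", elapsed_ns(t0, t1) / N);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        plain_call();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("function call: %.1f ns/op\n", elapsed_ns(t0, t1) / N);
    return 0;
}

Compiled with, say, gcc -O2 and run, this typically shows the trap-based call costing far more per operation than the local call; exact numbers depend on the CPU and kernel.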
Monolithic kernels vs microkernels
As the computer kernel grows, a number of problems become evident. One of the most obvious is that the memory footprint increases. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support.[6] In order to reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.
Due to the problems that monolithic kernels pose, they were considered obsolete by the early 1990s. As a result, the design of Linux using a monolithic kernel rather than a microkernel was the topic of a famous flame war between Linus Torvalds and Andrew Tanenbaum.[7] There is merit on both sides of the argument presented in the Tanenbaum/Torvalds debate.
Monolithic kernels tend to be easier to design correctly, and therefore may grow more quickly than a microkernel-based system. However, a bug in a monolithic system usually crashes the entire system, whereas in a microkernel, where servers run apart from the main kernel thread, it does not. Monolithic kernel proponents counter that incorrect code does not belong in a kernel, and that microkernels offer little advantage over correct code. There are success stories in both camps. Microkernels are often used in embedded robotic or medical computers where crash tolerance is important and most OS components reside in their own private, protected memory space. This is impossible with monolithic kernels, even modern module-loading ones. However, the monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower message passing of microkernel designs.
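The crash-tolerance argument can be illustrated with a small user-space supervisor, loosely modeled on the "reincarnation server" idea from MINIX 3 (the driver path below is hypothetical): when a driver runs as an ordinary process, its failure is just an exit status that a supervisor can react to, rather than a kernel panic.

/* Sketch of why user-space drivers aid crash tolerance: a supervisor process
 * restarts a driver server whenever it exits or crashes. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    const char *driver_path = "/srv/disk_driver";   /* hypothetical server binary */

    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execl(driver_path, driver_path, (char *)NULL);
            _exit(127);                    /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);          /* block until the driver dies */
        fprintf(stderr, "driver exited (status %d); restarting\n", status);
        sleep(1);                          /* simple back-off before restart */
        /* In a monolithic kernel the same fault would take down the whole
         * system instead of a single restartable process. */
    }
}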
Hybrid kernels
Main article: Hybrid kernel
The hybrid kernel approach tries to combine the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel. Hybrid kernels are essentially a compromise between the monolithic kernel approach and the microkernel system: some services (such as the network stack or the filesystem) run in kernel space in order to reduce the performance overhead of a traditional microkernel, while other kernel code (such as device drivers) still runs as servers in user space.
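A sketch of the idea, with hypothetical names and an assumed IPC primitive: the kernel keeps a table of services, calling the performance-critical ones directly in kernel space and forwarding the rest to user-space servers.

/* Hypothetical hybrid-kernel sketch: performance-critical services are
 * in-kernel functions, while others are reached as user-space servers. */
#include <stddef.h>

typedef struct {
    const char *name;
    int       (*in_kernel_handler)(const void *req, size_t len); /* NULL if external */
    int         server_port;           /* IPC port of a user-space server, if any */
} service_t;

int fs_handle(const void *req, size_t len);              /* filesystem, in kernel space */
int net_handle(const void *req, size_t len);             /* network stack, in kernel space */
int ipc_send_to(int port, const void *req, size_t len);  /* assumed IPC primitive */

static service_t services[] = {
    { "fs",      fs_handle,  -1 },    /* kept in kernel for speed */
    { "net",     net_handle, -1 },    /* kept in kernel for speed */
    { "printer", NULL,       42 },    /* user-space driver server (hypothetical port) */
};

int dispatch(int svc, const void *req, size_t len) {
    service_t *s = &services[svc];
    if (s->in_kernel_handler)
        return s->in_kernel_handler(req, len);     /* direct call, no switch */
    return ipc_send_to(s->server_port, req, len);  /* message to user space  */
}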
Nanokernels
Main article: Nanokernel
A nanokernel delegates virtually all services, including even the most basic ones such as interrupt controllers or the timer, to device drivers in order to make the kernel's memory requirement even smaller than that of a traditional microkernel.[8]
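A sketch of how small that kernel can be, using hypothetical names: the nanokernel's interrupt handling reduces to a table of handlers that drivers register, and the timer behaviour itself lives entirely in a driver.

/* Hypothetical nanokernel sketch: the kernel keeps almost nothing for itself;
 * even timer and interrupt handling is registered by drivers. */
#include <stddef.h>

#define MAX_IRQ 32

typedef void (*irq_handler_t)(int irq);

static irq_handler_t irq_table[MAX_IRQ];

/* The nanokernel's entire interrupt "policy": remember whom to call. */
int nk_register_irq(int irq, irq_handler_t handler) {
    if (irq < 0 || irq >= MAX_IRQ) return -1;
    irq_table[irq] = handler;
    return 0;
}

/* Called from the low-level trap stub: just forward to the driver. */
void nk_dispatch_irq(int irq) {
    if (irq >= 0 && irq < MAX_IRQ && irq_table[irq])
        irq_table[irq](irq);
}

/* A timer driver supplies the behaviour the kernel itself does not have. */
static void timer_driver_handler(int irq) {
    (void)irq;
    /* ... acknowledge the timer chip, update the driver's own tick count ... */
}

void timer_driver_init(void) {
    nk_register_irq(0, timer_driver_handler);   /* IRQ 0 is hypothetical here */
}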
Exokernels
Main article: Exokernel
An exokernel is a type of kernel that does not abstract hardware into theoretical models. Instead it allocates physical hardware resources, such as processor time, memory pages, and disk blocks, to different programs. A program running on an exokernel can link to a library operating system that uses the exokernel to simulate the abstractions of a well-known OS, or it can develop application-specific abstractions for better performance.[9]
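A sketch of what such an interface might look like, with hypothetical primitive names (the real MIT exokernel's API differs): the kernel only hands out raw pages and disk blocks, and a library OS linked into the application decides how those blocks form a "file".

/* Hypothetical exokernel-style interface: the kernel hands out raw resources
 * and a library OS, linked into the application, builds familiar abstractions
 * such as files on top of them. */
#include <stddef.h>
#include <stdint.h>

/* Assumed exokernel primitives: allocation of raw hardware resources only. */
uintptr_t exo_alloc_page(void);                       /* one physical page  */
int       exo_alloc_disk_block(uint64_t *block_no);   /* one raw disk block */
int       exo_write_block(uint64_t block_no, const void *buf, size_t len);

/* Library-OS layer: an application-level "file" built from raw blocks. */
typedef struct {
    uint64_t blocks[16];
    size_t   nblocks;
} libos_file_t;

int libos_file_append_block(libos_file_t *f, const void *buf, size_t len) {
    uint64_t b;
    if (f->nblocks >= 16 || exo_alloc_disk_block(&b) != 0)
        return -1;
    f->blocks[f->nblocks++] = b;        /* the library OS, not the kernel,
                                           decides how blocks form a file */
    return exo_write_block(b, buf, len);
}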
Other designs
There are also alternative ways to design (and implement) a kernel that do not fit into the above categories. Examples of this are Exec, the kernel of AmigaOS, which is considered to be "microkernel-like"[citation needed], and Unununium, which entirely lacks a kernel.[10]