Anyone who uses a device with an operating system is working with a kernel, though often without realizing it. The kernel organizes processes and data in every computer. It serves as the core of an operating system and the interface between software and hardware. This means that the kernel is in constant use and is a key component of an operating system.
The kernel not only serves as the core of the system but is also a program that controls all processor and memory access. It is responsible for the most important drivers and has direct access to the hardware. It’s the basis for interactions between hardware and software and manages their resources as efficiently as possible.
What is a kernel?
Structure of a kernel
A kernel is typically built the same way and consists of several layers:
- The deepest layer is the interface with hardware (processors, memory, and devices), which manages network controllers and PCI express controllers, for example.
- On top of that is memory management, which handles allocating RAM, including virtual memory.
- Then comes process management (scheduler), which is responsible for time management and makes multitasking possible.
- The next layer contains device management.
- The highest layer is the file system. That’s where processes are assigned to RAM or the hard drive.
A kernel is central to all layers, from system hardware to application software. Its work ends where user access begins: at the Graphical User Interface (GUI). The kernel thus borders on the shell (that is, the user interface). You can picture the kernel as a seed or pit and the shell as the fruit that surrounds the pit.
What is a kernel in a computer program?
Think of the kernel in this context like a colonel: They both pass along commands. A program sends “system calls” to the kernel, for example when a file is written. The kernel, well-versed in the instruction set of the CPU, then translates the system call into machine language and forwards it to the CPU. All of this usually happens in the background, without the user noticing.
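A minimal Python sketch of this chain of command (the file path is arbitrary): a call like os.write() hands the data to the kernel via a system call, and the kernel takes care of the hardware.

```python
import os
import tempfile

# A high-level write in a program ultimately becomes a write() system
# call: os.write() hands the bytes to the kernel, which deals with the
# hardware on the program's behalf.
path = os.path.join(tempfile.gettempdir(), "kernel_demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
written = os.write(fd, b"hello, kernel\n")  # issues the write(2) system call
os.close(fd)                                # issues the close(2) system call
print(written)  # number of bytes the kernel accepted: 14
```

From the program's point of view, the kernel's translation into machine instructions and the communication with the disk controller are invisible; only the return value comes back.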
What are the kernel’s tasks?
One of the kernel's main tasks is multitasking: it must juggle several processes at once, meet their timing constraints, and stay open to additional applications and extensions.
Even in a system as lean and well-functioning as an operating system, there are exceptions: the kernel only serves as a go-between when it comes to system software, libraries, and application software. In Linux, for example, the graphical interface is independent of the kernel.
In multi-user systems, the kernel also monitors access rights to files and hardware components. A tool like the Task Manager shows the running processes at any given time. If the user ends a process, the Task Manager instructs the kernel to stop the process and free the memory that was used for it.
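This can be sketched from user space in a few lines of Python (the child's sleep duration is arbitrary): terminating a process is a request to the kernel, which delivers the signal and releases the process's resources.

```python
import subprocess
import sys

# Start a child process that would sleep for a long time, then ask the
# kernel to stop it -- much like ending a task in the Task Manager.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
child.terminate()        # asks the kernel to deliver a termination signal
child.wait()             # the kernel reports the exit status; resources are freed
print(child.returncode)  # non-zero: the process did not exit normally
```

On POSIX systems the return code is negative (the number of the signal that ended the process); on Windows it is a plain non-zero exit code.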
When a computer powers up, the kernel is the first thing the bootloader loads into RAM. It is placed in a protected memory area so that it can't be changed or deleted by other programs.
Afterwards, the kernel initializes the connected devices and starts the first processes. System services are loaded, other processes are started or stopped, and user programs and memory allocation are initiated.
How does a kernel work?
This question is best answered by asking the opposite: What is a kernel not? The kernel is not the core of a processor; it's the core of the operating system. A kernel is also not an API or framework.
Multikernel operating systems can use various cores of a multicore processor like a network of independent CPUs. How does that work? It comes down to the special structure of the kernel, which is composed of a series of different components:
- Since the kernel’s lowest layer is machine-oriented, it can communicate directly with the hardware, processor, and memory. The functions of the kernel vary among its five layers, from processor management to device management. The highest layer cannot access the hardware directly and is instead responsible for interfacing with software.
- Application programs run separately from the kernel in the operating system and merely draw on its functions. Without the kernel, communication between programs and hardware wouldn’t be possible.
- Several processes can run simultaneously thanks to the multitasking kernel. But it’s generally the case that only one action can be processed by the CPU at one time – unless you’re using a multicore system. The rapid change in processes that gives the impression of multitasking is taken care of by the scheduler.
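As a rough illustration of the multitasking described above (the thread names and counts are arbitrary), two Python threads share one CPU core, and the scheduler decides when each one gets to run:

```python
import threading

# Two threads append to a shared list. On a single core only one runs
# at any instant; the scheduler switches between them so quickly that
# they appear to run simultaneously.
events = []

def worker(name):
    for i in range(3):
        events.append((name, i))

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(events))  # 6 entries, in whatever order the scheduler chose
```

The total work always gets done, but the exact interleaving of "A" and "B" entries is up to the scheduler, which is exactly the impression of multitasking the text describes.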
From these components follow the four functions of the kernel:
- Memory management: Regulates how much memory is used in different places.
- Process management: Determines which processes the CPU can use, as well as when and how long they’re used for.
- Device drivers: Mediate between hardware and processes.
- System calls and security: Receives service requests from the processes.
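A small user-space probe of this bookkeeping (Unix-only, via Python's resource module): a process can ask the kernel for its own accounting data, which the kernel maintains as part of memory and process management.

```python
import resource

# getrusage() is a thin wrapper around a system call; the numbers it
# returns are recorded by the kernel, not by the program itself.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(usage.ru_maxrss)  # peak resident set size tracked by the kernel
print(usage.ru_utime)   # CPU time spent in user mode
print(usage.ru_stime)   # CPU time spent in kernel mode (system calls)
```

The split between user time and system time mirrors the user-mode/kernel-mode distinction discussed below: time spent inside system calls is booked against ru_stime.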
When implemented properly, the functions of the kernel are invisible to users. The kernel works in its own setting, the kernel space. On the other hand, files, programs, games, browsers, and everything that the user sees are located in the user space. Interaction between these two uses the system call interface (SCI).
The kernel in the operating system
To understand the function of the kernel in the operating system, imagine the computer as divided into three levels:
- Hardware: The foundation of the system, made up of RAM, the processor, and input and output devices. The CPU performs read and write operations on memory, as well as calculations.
- Kernel: The nucleus of the operating system in contact with the CPU.
- User processes: All running processes that the kernel manages. The kernel makes communication between processes possible, known as Inter-Process Communication (IPC).
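A minimal sketch of kernel-mediated IPC in Python, using a pipe (the message content is arbitrary): the two ends of the pipe live in user space, but the bytes travel through a buffer managed by the kernel.

```python
import os

# A pipe is a simple kernel-managed IPC channel: bytes written at one
# end are copied into a kernel buffer and come out at the other end.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"ping")    # data goes into the kernel buffer
os.close(write_fd)
message = os.read(read_fd, 4)  # the kernel hands the data back
os.close(read_fd)
print(message)  # b'ping'
```

In a real program the two file descriptors would typically be held by different processes (for example, a parent and a child), which is exactly what makes this inter-process communication.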
There are two modes for the code in a system: kernel mode and user mode. The code in kernel mode has unlimited access to the hardware, whereas in user mode access is limited to the SCI. If there’s an error in user mode, not much happens. The kernel will intervene and repair any potential damage. On the other hand, a kernel crash can cause the entire system to crash. This is, however, unlikely due to the security measures in place.
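The containment of user-mode errors can be sketched in a few lines of Python (the child script and its error message are arbitrary): a process that crashes in user mode does not take the parent, let alone the system, down with it.

```python
import subprocess
import sys

# A faulty user-mode process crashes, but the kernel contains the
# damage: the parent process (and the rest of the system) keeps running.
result = subprocess.run([sys.executable, "-c", "raise RuntimeError('boom')"])
print(result.returncode)  # 1: the child failed with an uncaught error
print("parent still running")
```

The kernel simply cleans up after the failed process and reports its exit status; nothing outside that process is affected.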
What kind of kernels exist?
One type of kernel previously described is the multitasking kernel, which lets several processes run (seemingly) simultaneously on one kernel. If you add access management to it, you'll have a multiuser system, on which several users can work at the same time. The kernel is responsible for authentication and for keeping the processes of different users separate.
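A small Python sketch of the access management this relies on (the file name and permission bits are arbitrary): the per-file access rights are stored and enforced by the kernel, and stat() asks the kernel for them.

```python
import os
import stat
import tempfile

# In a multiuser system the kernel enforces per-file access rights.
# chmod() sets the permission bits; stat() reads them back.
path = os.path.join(tempfile.gettempdir(), "perm_demo.txt")
with open(path, "w") as f:
    f.write("secret")
os.chmod(path, 0o600)  # owner may read/write; other users may not
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

Any attempt by another user to open this file would be rejected by the kernel itself, not by the application that created it.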
What is an open source kernel?
An open source kernel is one whose source code is publicly available for anyone to inspect, modify, and redistribute. The best-known example is the Linux kernel, which is used by Linux systems and Android devices. Windows, by contrast, uses the proprietary NT kernel, which various subsystems draw on. Apple uses the XNU kernel, whose core is likewise published as open source.
The three types of kernels
There are various types of kernels that are used across different operating systems and end devices. They can be sorted into three groups:
- Monolithic kernels: A single large kernel handles all tasks. It's responsible for memory and process management as well as communication between processes, and offers functions for driver and hardware support. This is the classic design of the Linux kernel and of traditional Unix systems.
- Microkernel: The microkernel is deliberately small, so that errors and crashes don't bring down the entire operating system. To still fulfill the same functions as a large kernel, everything beyond the bare minimum runs in separate modules outside the kernel. QNX and MINIX are well-known microkernel operating systems, and the Mach kernel, a building block of Apple's systems, also began as a microkernel.
- Hybrid kernel: A combination of microkernel and monolith. The large kernel is made more compact and broken down into modules, and further kernel parts can be loaded dynamically. Windows NT and Apple's XNU (the basis of macOS) are hybrid kernels.