Virtualization has revolutionized the world of information technology. The method of distributing a physical computer’s resources across several virtual machines (VMs) first appeared in the form of hardware virtualization. This approach is based on emulating hardware components in order to supply multiple virtual servers, each with its own operating system (OS), on one shared host system. A structure like this is often used in software development, where multiple test environments need to run on a single computer. Virtualization also forms the basis of various cloud-based web hosting products.
One alternative to hardware virtualization is operating-system-level virtualization. This is where various server applications are realized in isolated virtual environments, or containers, which all run on the same operating system. This is also called container-based virtualization. Like virtual machines, which have their own operating systems, containers can also run different applications with varying requirements on the same physical system. Since containers don’t have their own OS, this virtualization technology is characterized by a considerably more streamlined installation process and a smaller overhead.
Server containers are nothing new, but today, the technology has come to prominence through open source projects, like Docker and CoreOS’s rkt.
What are server containers?
Hardware virtualization relies on a hypervisor, which runs on the host system’s hardware and distributes its resources among the guest operating systems. With container-based virtualization, on the other hand, no additional operating systems are started; instead, the shared OS kernel provides isolated user-space instances. Each of these containers offers a complete runtime environment for the applications inside it.
Software containers can be fundamentally regarded as server apps. To install an application, it is packaged together with all its required files into a portable format (an image), which is then loaded onto a computer and started in a virtual environment. It’s possible to implement application containers on practically any operating system. While Windows systems use Virtuozzo (the software developed by Parallels), FreeBSD uses the virtualization environment Jails, and Linux systems support OpenVZ and LXC containers. Operating system virtualization has only become attractive for the mass market through container platforms such as Docker or rkt, which add basic features that make handling server containers a simpler task.
Side note: Docker and the comeback of container technology
Users dealing with container-based virtualization will invariably encounter Docker at some point. Thanks to its outstanding marketing, the open source project has quickly become synonymous with container technology. The command line tool, Docker, is used for starting, stopping, and managing containers. It builds on Linux kernel features such as cgroups and namespaces to separate the resources of individual containers. Initially, Docker used the LXC interface of the Linux kernel; these days, however, Docker containers run on a self-developed programming interface called libcontainer.
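The cgroup-based resource separation mentioned above is visible directly in Docker’s command line interface. As a minimal sketch (assuming a running Docker daemon; the container name is arbitrary), the following caps a container’s memory and CPU allocation:

```shell
# Start an nginx container whose cgroup limits cap it at
# 256 MB of RAM and one CPU core.
docker run -d --name limited-nginx --memory=256m --cpus=1 nginx

# Inspect the memory limit Docker recorded for the container
# (printed in bytes).
docker inspect --format '{{.HostConfig.Memory}}' limited-nginx
```

Under the hood, Docker translates these flags into entries in the kernel’s cgroup hierarchy, so the limits are enforced by the host kernel rather than by the container itself.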
One central feature of the Docker platform is the Docker Hub, an online service that contains a repository for Docker images so that self-created images can be shared easily with other users. For Linux users, installing a pre-built server container is almost as simple as using an app store: applications can be downloaded via simple command line instructions from the central Docker Hub and run on your own system.
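Those command line instructions are short. As a sketch (assuming a running Docker daemon and a free port 8080 on the host), fetching and starting the official nginx image from Docker Hub looks like this:

```shell
# Download the official nginx image from Docker Hub.
docker pull nginx

# Start it in the background and map host port 8080 to the
# container's port 80; the name "web" is arbitrary.
docker run -d -p 8080:80 --name web nginx

# List running containers to confirm it started.
docker ps
```

No installation steps beyond this are needed; everything the web server requires is already packaged in the image.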
Docker’s biggest competitor on the container solution market is rkt, which supports Docker images as well as its own format, app container images (ACI).
Characteristics of container-based virtualization
With application containers, all the files that are required for operating server applications are provided in one handy package, allowing for a more streamlined installation and simpler operation of complex server programs. However, their main selling points are the management and automation of container-based applications.
- Easier installation process: software containers are started from images. An image is a container’s portable package, consisting of a single server program and all its required components, such as libraries, supporting programs, and configuration files. The differences between various operating system distributions can thus be compensated for, allowing for a simpler installation process with just one command line instruction.
- Platform independence: images can be easily transferred from one system to another and are characterized by a high level of platform independence. To start a software container from an image, you just need an operating system with a corresponding container platform.
- Minimal virtualization overhead: a minimal Linux image for Docker takes up only around 100 megabytes and can be set up in a matter of minutes. But it’s not only its compact size that makes it attractive to system administrators; the container solution also keeps virtualization overhead to a minimum. This contrasts with the significantly reduced performance of hardware virtualization, caused by the hypervisor and the additional operating systems. Furthermore, booting virtual machines can take several minutes, whereas server containers start almost instantly.
- Isolated applications: every program in a server container runs independently from other software containers on the OS. This allows even applications with contradictory requirements to operate in parallel on the same system with ease.
- Standardized administration and automation: as the management of all server containers takes place on one container platform (e.g. Docker) with the same tools, the applications in the data center can largely be automated. Container solutions are therefore especially suited to server structures in which individual components are distributed across multiple servers, so that the load is carried by several machines. For areas of application such as these, Docker provides automation tools that enable new instances to be started at peak load. Google also offers Kubernetes, a software solution for orchestrating large container clusters that was originally tailored especially to Docker.
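The kind of automation described in the last point can be sketched with Kubernetes’ own command line tool. Assuming a cluster with an existing Deployment named `web` (a hypothetical name for illustration), scaling it out manually or automatically takes one command each:

```shell
# Scale the "web" Deployment to five identical container replicas.
kubectl scale deployment web --replicas=5

# Or let Kubernetes add and remove replicas automatically,
# keeping between 2 and 10 instances based on CPU utilization.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```

Because every instance is started from the same image, the orchestrator can create or destroy replicas at will without any per-machine installation steps.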
How secure are container solutions?
Forgoing separate operating systems gives container-based virtualization its performance advantage. However, this comes at the cost of weaker isolation. With hardware virtualization, a security flaw in a guest operating system normally affects only that virtual machine; with operating-system-level virtualization, a flaw in the shared kernel affects every software container on the host. Containers are therefore not encapsulated to the same extent as virtual machines with their own OS. Admittedly, an attack on the hypervisor could cause significant damage in a hardware virtualization setup; but because a hypervisor is far less complex than, for instance, a full Linux kernel, it offers attackers fewer points of entry. Server containers are thus a credible alternative to hardware virtualization, although for the time being they can’t be considered a complete replacement.