Unlike a physical container, a Docker container exists in a virtual environment. A physical container is assembled according to a standardized specification, and virtual containers follow a similar principle: a Docker container is created from an immutable template called an “image”. A Docker image contains the dependencies and configuration settings required to create a container.
Just as many physical containers can be built from a single specification, any number of Docker containers can be created from a single image. Docker containers thus form the basis for scalable services and reproducible application environments. We can create a container from an image and also save an existing container as a new image. You can run, pause, and stop processes within a container.
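This lifecycle can be sketched with a few CLI commands. The image and container names (`my-app`, `web1`, `web2`) are illustrative, and a local Docker installation is assumed.

```shell
# Build an immutable image from a Dockerfile in the current directory
docker build -t my-app:1.0 .

# Start any number of containers from the same image
docker run -d --name web1 my-app:1.0
docker run -d --name web2 my-app:1.0

# Pause and resume the processes inside a container
docker pause web1
docker unpause web1

# Save the current state of a container as a new image
docker commit web2 my-app:1.0-patched
```

Note that `docker commit` captures the container's filesystem changes, which is convenient for experiments; for reproducible builds, changes are usually encoded in the Dockerfile instead.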
Unlike virtualization with a virtual machine (VM), a Docker container does not contain its own operating system (OS). Instead, all containers running on a Docker host share the same OS kernel. When Docker is deployed on a Linux host, the existing Linux kernel is used; if the Docker software runs on a non-Linux system, a minimal Linux system image is run via a hypervisor or virtual machine.
A certain amount of system resources is allocated to each container upon execution. This includes RAM, CPU cores, mass storage, and (virtual) network devices. Technically, “cgroups” (short for “control groups”) limit a Docker container’s access to system resources, while “kernel namespaces” partition kernel resources and isolate the container’s processes from one another.
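These mechanisms surface directly in the CLI. The sketch below (the image name `my-app:1.0` is illustrative; a local Docker daemon is assumed) caps a container's RAM and CPU share, which Docker enforces via cgroups, while kernel namespaces give the container its own isolated process view.

```shell
# Limit the container to 256 MiB of RAM and half a CPU core (enforced via cgroups)
docker run -d --name limited --memory=256m --cpus=0.5 my-app:1.0

# Inspect the configured limits
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited

# PID namespace isolation: the container's main process sees itself as PID 1
docker exec limited ps
```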
Externally, Docker containers communicate over the network, with specific services, usually web or database servers, listening on exposed ports. The containers themselves are managed on the respective Docker host via the Docker API: containers can be started, stopped, and removed. The Docker client provides a command-line interface (CLI) with the corresponding commands.
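A typical interaction, sketched here with the official `nginx` image (the host port 8080 is chosen for illustration): publish a container port, reach the service over the network, and control the container through the CLI, which talks to the Docker API on the host.

```shell
# Start a web server and publish container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx

# The service is now reachable over the network
curl http://localhost:8080/

# Control the container via the Docker API (wrapped by the CLI)
docker ps            # list running containers
docker stop web      # stop the container
docker rm web        # remove it
```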