The comprehensive Docker ecosystem offers developers numerous options for deploying applications, orchestrating containers and more. We’ll go over the most important Docker tools and give you an overview of the most popular third-party projects developing open-source Docker tools.

What are the essential Docker tools/components?

Today, Docker is far more than just a sophisticated platform for managing software containers. Developers have created a range of diverse Docker tools to make deploying applications via distributed infrastructure and cloud environments easier, faster and more flexible. In addition to tools for clustering and orchestration, there is also a central app marketplace and a tool for managing cloud resources.

Docker Engine

When developers say “Docker”, they are usually referring to the open-source client-server application that forms the basis of the container platform. This application is called Docker Engine. The central components of Docker Engine are the Docker daemon, a REST API and a CLI (command-line interface) that serves as the user interface.

With this design, you can talk to Docker Engine through command-line commands and manage Docker images, Dockerfiles and Docker containers conveniently from the terminal.
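A typical terminal session might look like the following sketch. It assumes a running Docker daemon; the image, container name and port mapping are examples, not from the original article:

```shell
docker pull nginx:latest                          # fetch an image from a registry
docker run -d --name web -p 8080:80 nginx:latest  # start a container in the background
docker ps                                         # list running containers
docker logs web                                   # show the container's output
docker stop web && docker rm web                  # stop and remove the container
```

Every one of these commands is translated by the CLI into a request against the daemon's REST API.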

Image: Schematic representation of the Docker Engine
The main components of the Docker Engine: the Docker daemon, REST API and Docker CLI

You can find a detailed description of Docker Engine in our beginner’s guide, Docker tutorial: Installation and first steps.

Docker Hub

Docker Hub provides users with a cloud-based registry that allows Docker images to be downloaded, centrally managed and shared with other Docker users. Registered users can store Docker images publicly or in private repositories. Downloading a public image (known as pulling in Docker terminology) does not require a user account. An integrated tag mechanism enables the versioning of images.
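Pulling, tagging and sharing images from the command line could look like this sketch; `myuser` is a hypothetical Docker Hub account, and the commands assume a running daemon:

```shell
docker pull redis                      # pulls redis:latest from Docker Hub; no account needed
docker pull redis:7.2                  # a tag pins a specific version of the image
docker tag redis:7.2 myuser/redis:7.2  # re-tag the image under your own namespace
docker login                           # authenticate before pushing
docker push myuser/redis:7.2           # publish the image to your repository
```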

In addition to the public repositories of other Docker users, the official repositories on Docker Hub contain many resources from the Docker developer team and well-known open-source projects. The most popular Docker images include the NGINX web server, the Redis in-memory database, the BusyBox Unix toolkit and the Ubuntu Linux distribution.

Image: Official repositories in Docker Hub
You can find more than 100,000 free images in the official Docker repositories.

Organizations are another important Docker Hub feature, allowing Docker users to create private repositories that are exclusively available to a select group of people. Access rights are managed within an organization using teams and group memberships.

Docker Swarm

Docker Engine contains a native function that enables users to manage Docker hosts in clusters called swarms. The cluster management and orchestration capabilities built into Docker Engine are based on the SwarmKit toolkit. For older versions of the container platform, the tool is available as a standalone application.

Tip

Clusters can be made up of any number of Docker hosts and can run on the infrastructure of an external IaaS provider or in your own data center.

As a native Docker clustering tool, Swarm gathers a pool of Docker hosts into a single virtual host and serves the Docker REST API. Any Docker tool associated with the Docker daemon can access Swarm and scale across any number of Docker hosts. With the Docker Engine CLI, users can create swarms, distribute applications in the cluster, and manage the behavior of the swarm without needing additional orchestration software.

Docker Engines that have been combined into clusters run in swarm mode. Select this mode if you want to create a new cluster or add a Docker host to an existing swarm. Individual Docker hosts in a cluster are referred to as “nodes”. The nodes of a cluster can run as virtual hosts on the same local system, but more often a cloud-based design is used, where the individual nodes of the Docker swarm are distributed across different systems and infrastructures.
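Creating a swarm from the CLI might look like this sketch. The IP address is an example, and `<worker-token>` stands for the join token that `docker swarm init` prints; the commands assume Docker hosts that can reach each other:

```shell
# On the first host, which becomes a manager node:
docker swarm init --advertise-addr 192.0.2.10

# On each additional host, using the token printed by the command above:
docker swarm join --token <worker-token> 192.0.2.10:2377

# Back on the manager: list all nodes and their roles
docker node ls
```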

The software is based on a master-worker architecture. When tasks are to be distributed in the swarm, users pass a service to the manager node. The manager is then responsible for scheduling containers in the cluster and serves as the primary user interface for accessing swarm resources.

The manager node sends individual units of work, known as tasks, to worker nodes.

  • Services: Services are central structures in Docker clusters. A service defines a task to be executed in a Docker cluster and pertains to a group of containers based on the same image. When creating a service, the user specifies which image and commands are used. Services also offer the possibility to scale applications: users of the Docker platform simply define how many containers are to be started for a service.
  • Tasks: To distribute services in the cluster, they are divided into individual work units (tasks) by the manager node. Each task includes a Docker container as well as the commands that are executed in it.
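Defining and scaling a service can be sketched as follows; the service name, image, port and replica counts are illustrative and assume an existing swarm:

```shell
# Create a service with three tasks (one container each), spread across the swarm:
docker service create --name web --replicas 3 -p 8080:80 nginx

docker service ls           # overview of all services in the swarm
docker service ps web       # shows each task and the node it runs on

# Scaling simply changes the desired number of tasks:
docker service scale web=5
```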

In addition to managing cluster control and orchestrating containers, manager nodes can by default also carry out worker functions, unless you restrict these nodes strictly to management tasks.

An agent program runs on every worker node. It accepts tasks and sends the responsible manager node status reports on the progress of the transferred task. The following graphic shows a schematic representation of a Docker swarm:

Image: Schematic representation of a Docker Swarm
The manager-worker architecture of a Docker swarm

When implementing a Docker swarm, users generally rely on Docker Machine.

Docker Compose

Docker Compose makes it possible to define multiple containers and run them with a single command. The basic element of Compose is a central control file written in the data serialization language YAML. The syntax of this Compose file is similar to that of the open-source software Vagrant, which is used for creating and provisioning virtual machines.

In the docker-compose.yml file, you can define any number of software containers, including all dependencies, as well as their relationships to each other. Such multi-container applications are controlled according to the same pattern as individual software containers. Use the docker-compose command in combination with the desired subcommand to manage the entire life cycle of the application.
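A minimal Compose file might look like this sketch; the service names, images and port mapping are illustrative, not from the original article:

```shell
# Write a docker-compose.yml describing a two-service application
# (a web server that depends on a cache):
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - cache
  cache:
    image: redis:latest
EOF

# With a running Docker daemon, the whole application is then managed as one unit:
#   docker-compose up -d     # create and start all containers
#   docker-compose ps        # list the application's containers
#   docker-compose down      # stop and remove containers and networks
```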

This Docker tool can be easily integrated into a cluster based on Swarm. This way, you can run multi-container applications created with Compose on distributed systems just as easily as on a single Docker host.

Another feature of Docker Compose is an integrated scaling mechanism. With the orchestration tool, you can use the command-line program to define how many containers you would like to start for a particular service.

What third-party Docker tools are there?

In addition to the in-house developments from Docker Inc., there are various software tools and platforms from external providers that offer interfaces for Docker Engine or have been specially developed for the popular container platform. Within the Docker ecosystem, the most popular open-source projects include the orchestration tool Kubernetes, the cluster management tool Shipyard, the multi-container shipping solution Panamax, the continuous integration platform Drone, the cloud-based operating system OpenStack and the D2iQ DC/OS data center operating system, which is based on the cluster manager Mesos.

Kubernetes

Docker’s own orchestration tools, Swarm and Compose, are not the only options. For years, various companies have been investing development work into tailor-made tools designed to facilitate the operation of the container platform in large, distributed infrastructures. Among the most popular solutions of this type is the open-source project Kubernetes.

Kubernetes is a cluster manager for container-based applications. Its goal is to automate the deployment, scaling and operation of applications in a cluster. To do this, the orchestration tool uses a REST API, a command-line program and a graphical web interface as control interfaces. With these interfaces, automations can be initiated and status reports requested. You can use Kubernetes to:

  • run container-based applications on a cluster,
  • install and manage applications in distributed systems,
  • scale applications, and
  • make the best possible use of the underlying hardware.

To this end, Kubernetes combines containers into logical units referred to as pods. Pods are the basic scheduling units of the cluster manager and can be distributed across the cluster by the scheduler.

Like Docker’s Swarm, Kubernetes is based on a master-worker architecture. A cluster is composed of a Kubernetes master and a number of workers, also called Kubernetes nodes (or minions). The Kubernetes master functions as the central control plane in the cluster and is made up of four basic components that handle communication in the cluster and distribute tasks: an API server, the configuration store etcd, a scheduler and a controller manager.

  • API server: All automations in the Kubernetes cluster are initiated via the API server’s REST API. The API server functions as the central administration interface in the cluster.
  • etcd: You can think of the open-source configuration store etcd as the memory of a Kubernetes cluster. This key-value store, which CoreOS developed specifically for distributed systems, stores configuration data and makes it available to every node in the cluster. The current state of the cluster can be managed at any time via etcd.
  • Scheduler: The scheduler is responsible for distributing container groups (pods) across the cluster. It determines the resource requirements of a pod and matches them against the available resources of the individual nodes in the cluster.
  • Controller manager: The controller manager is a service of the Kubernetes master that controls orchestration by regulating the state of the cluster and performing routine tasks. Its main task is to ensure that the state of the cluster corresponds to the defined target state.

The components of the Kubernetes master can all be located on the same host or distributed over several master hosts within a high-availability cluster.

While the Kubernetes master is responsible for orchestration, the pods distributed in the cluster run on the Kubernetes nodes, worker hosts that are subordinate to the master. To do this, a container engine needs to run on each Kubernetes node. Docker is the de facto standard, but Kubernetes is not tied to any specific container engine.

In addition to the container engine, Kubernetes nodes include the following components:

  • kubelet: kubelet is an agent that runs on each Kubernetes node and is used to control and manage the node. As the central point of contact of each node, kubelet is connected to the Kubernetes master and ensures that information is passed on to and received from the control plane.
  • kube-proxy: The proxy service kube-proxy also runs on every Kubernetes node. It ensures that requests from outside the cluster are forwarded to the respective containers and provides services to users of container-based applications. kube-proxy also offers rudimentary load balancing.
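How kube-proxy fits in can be sketched with a minimal Service manifest; the names are illustrative and assume a pod carrying the label `app: web`:

```shell
# A Service that routes traffic to matching pods:
cat > web-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # requests are forwarded to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
EOF

# kube-proxy on each node programs the forwarding rules for this Service:
#   kubectl apply -f web-service.yaml
```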

The following graphic shows a schematic representation of the master-node architecture on which the orchestration platform Kubernetes is based:

Image: Schematic representation of the Kubernetes architecture
The master-node architecture of the orchestration platform Kubernetes

In addition to the core Kubernetes project, there are numerous tools and extensions that add further functionality to the orchestration platform. The most popular are the monitoring and error diagnosis tools Prometheus, Weave Scope and sysdig, as well as the package manager Helm. There are also plugins for Apache Maven and Gradle, and a Java API for controlling Kubernetes remotely.

Shipyard

Shipyard is a community-developed management solution based on Swarm that allows users to manage Docker resources like containers, images, hosts and private registries via a graphical user interface. It is available as a web application via the browser. In addition to the cluster management features that can be accessed via a central web interface, Shipyard also offers user authentication and role-based access control.

The software is fully compatible with the Docker Remote API and uses the open-source NoSQL database RethinkDB to store data such as user accounts, addresses and events. It is based on the cluster management toolkit Citadel and is made up of three main components: controller, API and UI.

  • Shipyard controller: The controller is the core component of the management tool. It interacts with RethinkDB to store data and makes it possible to address individual hosts in a Docker cluster and to control events.
  • Shipyard API: The Shipyard API is based on REST. All functions of the management tool are controlled via this API.
  • Shipyard user interface (UI): The Shipyard UI is an AngularJS app that presents users with a graphical interface for managing Docker clusters in the web browser. All interactions in the user interface take place via the Shipyard API.

Further information about the open-source project can be found on the official Shipyard website.

Panamax

The developers of the open-source software project Panamax aim to simplify the deployment of multi-container apps. The free tool offers a graphical user interface that allows complex applications based on Docker containers to be conveniently developed, deployed and distributed using drag and drop.

Panamax makes it possible to save complex multi-container applications as application templates and distribute them in cluster architectures with just one click. Via an integrated app marketplace hosted on GitHub, templates for self-created applications can be stored in Git repositories and made available to other users.

The basic components of the Panamax architecture can be divided into two groups: the Panamax local client and any number of remote deployment targets.

The Panamax local client is the core component of this Docker tool. It is executed on the local system and allows complex container-based applications to be created. The local client is comprised of the following components:

  • CoreOS: Installation of the Panamax local client requires the Linux distribution CoreOS as its host system, which has been specifically designed for software containers. The Panamax client then runs as a Docker container in CoreOS. In addition to the Docker features, users have access to various CoreOS functions, including Fleet and Journalctl:
    • Fleet: Instead of interacting directly with Docker, the Panamax client uses the cluster manager Fleet to orchestrate its containers. Fleet is a cluster manager that controls the Linux daemon systemd in computer clusters.
    • Journalctl: The Panamax client uses Journalctl to request log messages of the Linux system manager systemd from the journal.
  • Local client installer: The local client installer contains all components necessary for installing the Panamax client on a local system.
  • Panamax local agent: The central component of the local client is the local agent. It is linked to various other components and dependencies via the Panamax API, including the local Docker host, the Panamax UI, external registries and the remote agents of the deployment targets in the cluster. To exchange information about running applications, the local agent interacts with the following program interfaces on the local system:
    • Docker Remote API: Panamax searches for images on the local system via the Docker Remote API and obtains information about running containers.
    • etcd API: Files are transmitted to the CoreOS Fleet daemon via the etcd API.
    • systemd-journal-gatewayd.services: Panamax obtains the journal output of running services via systemd-journal-gatewayd.services.

In addition, the Panamax API enables interactions with various external APIs:

    • Docker Registry API: Panamax obtains image tags from the Docker registry via the Docker Registry API.
    • GitHub API: Panamax loads templates from the GitHub repository using the GitHub API.
    • KissMetrics API: The KissMetrics API collects data about templates that users run.
  • Panamax UI: The Panamax UI functions as a user interface on the local system and enables users to control the Docker tool via a graphical interface. User input is forwarded directly to the local agent via the Panamax API. The Panamax UI is based on CTL Base UI Kit, a library of UI components for web projects from CenturyLink.

In Panamax terminology, each node in a Docker cluster without management tasks is referred to as a remote deployment target. Deployment targets consist of a Docker host that is configured to deploy Panamax templates with the help of the following components:

  • Deployment target installer: The deployment target installer starts a Docker host, complete with a Panamax remote agent and orchestration adapter.
  • Panamax remote agent: If a Panamax remote agent is installed, applications can be distributed from the local Panamax client to any desired endpoint in the cluster. The Panamax remote agent runs as a Docker container on every deployment target in the cluster.
  • Panamax orchestration adapter: The orchestration adapter provides the program logic for each orchestration tool available for Panamax in an independent adapter layer. This gives users the option to choose exactly the orchestration technology supported by their target environment. Pre-configured adapters include Kubernetes and Fleet:
    • Panamax Kubernetes adapter: In combination with the Panamax remote agent, the Panamax Kubernetes adapter enables the distribution of Panamax templates in Kubernetes clusters.
    • Panamax Fleet adapter: In combination with the Panamax remote agent, the Panamax Fleet adapter enables the distribution of Panamax templates in clusters controlled with the help of the Fleet cluster manager.

The following graphic shows the interplay between the individual Panamax components in a Docker cluster:

Image: Schematic representation of the software architecture for the Panamax container management tool
The software architecture of the Panamax container management tool

The CoreOS-based Panamax container management tool gives users access to a variety of standard container orchestration technologies through a graphical user interface, and lets them conveniently manage complex multi-container applications in cluster architectures from any system (e.g. their own laptop).

Through Panamax’s public template repository on GitHub, users have access to a library of templates and related resources.

Drone

Drone is a lean continuous integration platform with minimal requirements. With this Docker tool, you can automatically load your newest build from a Git repository like GitHub and test it in isolated Docker containers. You can run any test suite and send reports and status messages via email. For every software test, a new container based on images from the public Docker registry is created. This means any publicly available Docker image can be used as the environment for testing the code.

Tip

Continuous integration (CI) refers to a process in software development in which newly developed software components (builds) are merged and run in test environments at regular intervals. CI is a strategy for efficiently recognizing and resolving the integration errors that can arise from collaboration between different developers.

Drone is integrated with Docker and supports various programming languages, such as PHP, Node.js, Ruby, Go and Python. The container platform is its only real dependency. You can create your own personal continuous integration platform with Drone on any system that Docker can be installed on. Drone supports various version control repositories, and you can find a guide for the standard installation with GitHub integration in the open-source project’s documentation at readme.drone.io.

The continuous integration platform is managed via a web interface. Here you can load software builds from any Git repository, merge them into applications, and run the result in a pre-defined test environment. To do this, a .drone.yml file is defined that specifies how to create and run the application for each software test.
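A .drone.yml might look like the following sketch. Note that the exact syntax differs between Drone versions; this follows the 1.x pipeline format, and the image and commands are illustrative:

```shell
# A pipeline with a single test step running inside a public Docker image:
cat > .drone.yml <<'EOF'
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: python:3.12      # any public Docker image can serve as the test environment
    commands:
      - pip install -r requirements.txt
      - pytest
EOF
```

On every push, Drone would start a fresh container from the named image and run the listed commands inside it.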

Drone users are provided with an open-source CI solution that combines the strengths of alternative products like Travis and Jenkins into a user-friendly application.

OpenStack

When it comes to building and operating open-source cloud infrastructures, the cloud operating system OpenStack is the software solution of choice.

With OpenStack, you can manage compute, storage and network resources from a central dashboard and make them available to end users via a web interface.

The cloud operating system is based on a modular architecture comprised of multiple components:

  • Zun (container service): Zun is OpenStack’s container service and enables the easy deployment and management of containerized applications in the OpenStack cloud. The purpose of Zun is to allow users to manage containers through a REST API without having to manage servers or clusters. Operating Zun requires three other OpenStack services: Keystone, Neutron and kuryr-libnetwork. Zun’s functionality can also be expanded through additional OpenStack services such as Cinder and Glance.
  • Neutron (network component): Neutron (formerly Quantum) is a portable, scalable, API-supported system component for network control. The module provides an interface for complex network topologies and supports various plugins through which extended network functions can be integrated.
  • kuryr-libnetwork (Docker driver): kuryr-libnetwork is a driver that acts as an interface between Docker and Neutron.
  • Cinder (block storage): Cinder is the code name of the component in the OpenStack architecture that provides persistent block storage for the operation of VMs. The module provides virtual storage via a self-service API. Through this, end users can make use of storage resources without needing to know which device the storage is provided by.
  • Keystone (identity service): Keystone provides OpenStack users with a central identity service. The module functions as an authentication and permissions system between the individual OpenStack components. Access to projects in the cloud is regulated via tenants; within each tenant, several users with different rights can be defined.
  • Glance (image service): With the Glance module, OpenStack provides a service that allows images of VMs to be stored and retrieved.

You can find more information about OpenStack components and services in our article on OpenStack.

In addition to the components mentioned above, the OpenStack architecture can be extended using various modules. You can read about the different optional modules on the OpenStack website.

D2iQ DC/OS

DC/OS (Distributed Cloud Operating System) is open-source software for the operation of distributed systems developed by D2iQ Inc. (formerly Mesosphere). The project is based on the open-source cluster manager Apache Mesos and functions as an operating system for data centers. The source code is available under the Apache License, Version 2.0 in the DC/OS repositories on GitHub. An enterprise version of the software is available at d2iq.com, and extensive project documentation can be found at dcos.io.

You can think of DC/OS as a Mesos distribution that provides all the features of the cluster manager via a central user interface and expands upon Mesos considerably.

DC/OS uses the distributed systems kernel of the Mesos platform. This makes it possible to bundle the resources of an entire data center and manage them as a single aggregated system, like one logical server. This way, you can control entire clusters of physical or virtual machines as easily as you would operate a single computer.

The software simplifies the installation and management of distributed applications and automates tasks such as resource management, scheduling, and inter-process communication. A cluster based on D2iQ DC/OS, as well as its included services, is managed via a central command-line interface (CLI) or a web interface (GUI).

DC/OS isolates the resources of the cluster and provides shared services such as service discovery and package management. The core components of the software run in a protected area, the kernel space. These include the master and agent programs of the Mesos platform, which are responsible for resource allocation, process isolation, and security functions.

  • Mesos master: The Mesos master is a master process that runs on a master node. It controls resource management and orchestrates tasks (abstract work units) that are carried out on agent nodes. To do this, the Mesos master distributes resources to registered DC/OS services and accepts resource reports from Mesos agents.

  • Mesos agents: Mesos agents are processes that run on agent nodes and are responsible for executing the tasks distributed by the master. Mesos agents deliver regular reports about the available resources in the cluster to the Mesos master, which forwards them to a scheduler (e.g. Marathon, Chronos or Cassandra). The scheduler decides which task to run on which node. The tasks are then carried out in isolation in a container.

All other system components, as well as applications run by the Mesos agents via executors, run in user space. The basic components of a standard DC/OS installation are the Admin Router, Mesos DNS, a distributed DNS proxy, the load balancer Minuteman, the scheduler Marathon, Apache ZooKeeper and Exhibitor.

  • Admin Router: The Admin Router is a specially configured web server based on NGINX that provides DC/OS services as well as central authentication and proxy functions.
  • Mesos DNS: The system component Mesos DNS provides service discovery functions that enable individual services and applications in the cluster to identify each other through a central domain name system (DNS).
  • Distributed DNS proxy: The distributed DNS proxy is an internal DNS dispatcher.
  • Minuteman: The system component Minuteman functions as an internal load balancer that works on the transport layer (layer 4) of the OSI reference model.
  • DC/OS Marathon: Marathon is a central component of the Mesos platform that functions in D2iQ DC/OS as an init system (similar to systemd). Marathon starts and supervises DC/OS services and applications in cluster environments. In addition, the software provides high-availability features, service discovery, load balancing, health checks and a graphical web interface.
  • Apache ZooKeeper: Apache ZooKeeper is an open-source software component that provides coordination functions for the operation and control of applications in distributed systems. ZooKeeper is used in D2iQ DC/OS for the coordination of all installed system services.
  • Exhibitor: Exhibitor is a system component that is automatically installed and configured with ZooKeeper on every master node. Exhibitor also provides a graphical user interface for ZooKeeper users.
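A Marathon app definition is a small JSON document submitted to its REST API; the following sketch uses illustrative names and values:

```shell
# An app that runs two instances of a Docker container:
cat > app.json <<'EOF'
{
  "id": "/web",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:latest" }
  },
  "instances": 2,
  "cpus": 0.5,
  "mem": 128
}
EOF

# Submitted to Marathon's REST API on a DC/OS master (the address is illustrative):
#   curl -X POST http://master.example/service/marathon/v2/apps \
#        -H "Content-Type: application/json" -d @app.json
```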

Diverse workloads can be executed simultaneously on the cluster resources aggregated via DC/OS. This enables, for example, the parallel operation of big data systems, microservices, and container platforms such as Hadoop, Spark and Docker on the cluster operating system.

Within the D2iQ Universe, a public app catalog is available for DC/OS. With it, you can install applications like Spark, Cassandra, Chronos, Jenkins or Kafka with a single click in the graphical user interface.

What Docker tools are there for security?

Even though the encapsulated processes running in containers share the same kernel, Docker uses a number of techniques to isolate them from each other. Core functions of the Linux kernel, such as cgroups and namespaces, are used to do this.

Containers, however, still don’t offer the same degree of isolation that can be accomplished with virtual machines. Despite the use of isolation techniques, important kernel subsystems such as cgroups, as well as kernel interfaces in the /sys and /proc directories, can be reached from within containers.

The Docker development team has acknowledged that these security concerns are an obstacle to the establishment of container technology on production systems. In addition to the fundamental isolation techniques of the Linux kernel, newer versions of Docker Engine therefore support the frameworks AppArmor, SELinux and Seccomp, which function as a kind of firewall for kernel resources:

  • AppArmor: With AppArmor, the access rights of containers to the file system are regulated.
  • SELinux: SELinux provides a complex rule system in which access controls for kernel resources can be implemented.
  • Seccomp: Seccomp (secure computing mode) supervises the invocation of system calls.

In addition to these Docker tools, Docker also uses Linux capabilities to restrict the root privileges with which Docker Engine starts containers.
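In practice, these mechanisms surface as options of `docker run`. The following is a hedged sketch; `profile.json` is a hypothetical seccomp profile, and the capability choice is only an example:

```shell
# Illustrative hardening flags:
#   --security-opt seccomp=...   apply a custom seccomp profile restricting system calls
#   --security-opt apparmor=...  apply an AppArmor profile regulating filesystem access
#   --cap-drop / --cap-add       shrink the set of root capabilities to the minimum needed
docker run -d --name web \
  --security-opt seccomp=profile.json \
  --security-opt apparmor=docker-default \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  nginx
```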

Other security concerns involve software vulnerabilities in application components distributed via the Docker registry. Since essentially anyone can create Docker images and make them publicly accessible to the community on Docker Hub, there’s a risk of introducing malicious code to your system when downloading an image. Before deploying an application, Docker users should therefore make sure that all of the code an image provides for running containers stems from a trustworthy source.

Docker offers a verification program that software providers can use to have their Docker images checked and verified. With this program, Docker aims to make it easier for developers to build secure software supply chains for their projects. In addition to increasing security for users, the program offers software developers a way to differentiate their projects from the multitude of other available resources. Verified images are marked with a Verified Publisher badge and, among other benefits, are given a higher ranking in Docker Hub search results.
