Ceph is a distributed storage system that easily integrates with Proxmox and provides a highly available, fault-tolerant storage solution. This guide walks you through installing a Ceph cluster on your Proxmox server step by step.

Step 1: Check the requirements

Before you begin installing Ceph on Proxmox, make sure your environment meets the basic requirements. Ceph is a storage system that replicates data across multiple servers. To ensure this redundancy works reliably, you need at least three Proxmox nodes. This allows the system to keep running even if one node fails.

Make sure that the bare-metal installation of Proxmox is complete on every server and that each system is fully up to date. Each node should have its own unused hard drive dedicated solely to Ceph OSDs; these drives provide the actual storage for your cluster. A fast, stable network connection between nodes is equally important to keep latency low. You also need root access on all hosts, since the installation makes system-level changes.

Use the following command to check which version of Proxmox is currently installed on your system:

pveversion

Compare the version numbers across all nodes. If the versions differ or if your installation is outdated, update Proxmox so all systems are on the same version:

apt update && apt full-upgrade -y
reboot

Once all nodes are updated and reachable, your environment is ready for the Ceph installation.
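Comparing versions node by node can be scripted. A small sketch, assuming three nodes reachable over SSH as root at example addresses — replace them with your own:

```shell
# Hypothetical node addresses; adjust to your cluster.
for node in 10.0.0.11 10.0.0.12 10.0.0.13; do
    echo "--- $node ---"
    ssh root@"$node" pveversion
done
```

If any node reports a different version, update and reboot it before continuing.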

Step 2: Activate the Ceph repository

To install Ceph through the package manager, you first need to enable the appropriate repository on each Proxmox node. This repository contains all required Ceph packages that Proxmox has adapted and tested. Log in as the root user on each host and run this command:

pveceph install

This command configures the Proxmox Ceph repository and installs the core Ceph components. To activate the new package sources, update your package list:

apt update

Step 3: Initialise the Ceph configuration on the first node

In this step, you’ll prepare the actual Ceph cluster on your first Proxmox node and define the network that the cluster will use for internal communication. You’ll also set up the first monitor, a core component of Ceph. It tracks the cluster’s state, manages cluster members and ensures all components stay synchronised.

Start the initialisation on the first Proxmox node with the following command:

pveceph init --network 10.0.0.0/24

The subnet 10.0.0.0/24 is only an example. Use the internal network your Proxmox nodes use to communicate directly with each other. The pveceph init command creates the basic Ceph configuration on your first node. This includes the main cluster configuration file, the Ceph keyring needed for internal authentication and the system directories for Ceph services.
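You can inspect the generated configuration right away. On Proxmox, the cluster-wide Ceph configuration lives in the pmxcfs cluster filesystem, so it is automatically shared with the other nodes:

```shell
# The cluster-wide configuration file created by pveceph init.
cat /etc/pve/ceph.conf
```

The output should already contain the network you passed to pveceph init.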

Once the initialisation is complete, you can set up the first monitor service:

pveceph createmon

This command starts the monitor process and registers it in the cluster. At this point, you have a functional but still standalone node. The monitor immediately begins collecting status information, forming the foundation for communication with additional nodes.

Note

A typical Ceph cluster uses at least three monitors. This ensures the cluster can keep operating even if one monitor fails. With multiple monitors, Ceph can maintain a quorum, meaning a majority is available to make decisions about the cluster’s current state.
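Once several monitors are running, you can inspect the quorum directly. This is purely a status query and changes nothing in the cluster:

```shell
# List the monitors currently in quorum and the elected leader.
ceph quorum_status --format json-pretty
```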

Step 4: Add more nodes to the cluster

To give Ceph the level of fault tolerance it’s designed for, you now need to add your remaining Proxmox nodes to the Ceph cluster. Each additional node increases both redundancy and storage capacity. Log in to the other nodes and run the following commands in sequence:

pveceph install
pveceph createmon

This sets up monitor services on the additional hosts. Once all monitors are active, you can check the cluster status from any node using the following command:

ceph -s

This shows you which monitors and services are currently running. If multiple monitors appear, all nodes have been added to the cluster.

Step 5: Create OSDs

OSDs (Object Storage Daemons) are the core of your Ceph cluster. Every hard drive you assign to Ceph is used to create a dedicated OSD. These daemons write data to the disks, replicate it across the cluster, and serve it back when requested by another node or a virtual machine. The more OSDs your cluster has, the higher its storage capacity and performance. Before you begin, check which drives are available on your node using this command:

lsblk

This lists all disks and partitions detected by the system. Use only unused drives for Ceph: never select a drive that contains the operating system or is currently mounted. Once you’ve identified a suitable drive, in our case /dev/sdb, you can create an OSD on it:

pveceph createosd /dev/sdb

The drive is automatically formatted, and Ceph sets up the required structure. The OSD daemon then starts and joins the cluster. All existing data on the selected drive will be deleted, so double-check that the disk is truly intended for Ceph.

Repeat this process on all nodes and for each drive you want to add. Depending on your hardware and cluster size, it may take a few minutes for all OSDs to be fully integrated into the cluster.
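If a node contributes several disks, the per-disk command can be wrapped in a small loop. A sketch, assuming /dev/sdb and /dev/sdc are the spare disks on this node — adjust the list before running, since every listed disk is wiped:

```shell
# WARNING: each listed disk is formatted for Ceph and its data destroyed.
for disk in /dev/sdb /dev/sdc; do
    pveceph createosd "$disk"
done
```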

Next, check that your newly created OSDs have been recognised and are running. Use:

ceph osd tree

The tree view makes it easy to see how your storage devices are distributed across the cluster and whether they are running without issues.

Step 6: Enable the Ceph Manager and dashboard

To easily monitor and manage your Ceph cluster, you need to install the Ceph Manager (MGR). This service collects performance data, keeps track of all active components and provides additional features through various modules. One of these features is the integrated web dashboard. Install the manager service on your Proxmox node with:

pveceph createmgr

Once the manager is running, you can enable the dashboard module. The MGR service provides it automatically, so you only need to activate it:

ceph mgr module enable dashboard

The dashboard provides a user-friendly interface where you can view the cluster status and track OSD and monitor activity. It also highlights any alerts at a glance. Open it in your browser using the default port 8443:

https://<PROXMOX_IP>:8443

Replace PROXMOX_IP with the IP address of the Proxmox node where the Ceph manager is installed.
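Depending on your Ceph version, the dashboard may refuse connections until a TLS certificate and a login user exist. A minimal sketch, assuming the user name admin and a password file you create yourself — adjust both to your needs:

```shell
# On Proxmox, the dashboard module ships in a separate package.
apt install -y ceph-mgr-dashboard

# Generate a self-signed certificate so the dashboard can serve HTTPS.
ceph dashboard create-self-signed-cert

# Create a dashboard login; the password is read from a file.
echo 'ChangeMe123!' > /root/dashboard-pass.txt
ceph dashboard ac-user-create admin -i /root/dashboard-pass.txt administrator
rm /root/dashboard-pass.txt
```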

Step 7: Create and test Ceph pools

Once your Ceph cluster is set up and all OSDs are active, you can create the actual storage area where your data will live. Ceph organises data into pools. A pool is a logical unit that stores your files, disk images or container volumes. Each pool consists of many placement groups that distribute data across the OSDs to balance the load. With pools, you control how and where Ceph stores your data. For example, you can create one pool for virtual machines and another for backups or container images.

To create a new pool, run the following command on one of your Proxmox nodes:

pveceph pool create cephpool --size 3 --min_size 2 --pg_num 128

This command creates a pool named cephpool. The parameters define how Ceph handles your data:

  • --size 3 means each object is stored three times. This provides fault tolerance: if one OSD fails, two copies are still available.
  • --min_size 2 requires at least two copies to be active for the pool to function. This prevents Ceph from operating with incomplete data.
  • --pg_num 128 sets the number of placement groups, the logical data containers Ceph uses to distribute data across the OSDs. The more OSDs you have, the higher this value can be. This allows Ceph to distribute data more evenly across the cluster.
Note

The number of placement groups should be planned from the start. You can increase it as your cluster grows, but reducing it is either unsupported (on older Ceph releases) or triggers costly data movement while placement groups are merged (on current releases). As a rule of thumb, around 100 PGs per OSD is a good starting point for small to medium environments.
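The usual sizing rule can be turned into a quick shell calculation: multiply the OSD count by 100, divide by the replica count, and round up to the next power of two. A sketch with assumed values of 6 OSDs and 3 replicas:

```shell
# Assumed cluster parameters; replace with your own.
osds=6
replicas=3

# Target PG count per the common rule of thumb.
target=$(( osds * 100 / replicas ))

# Round up to the next power of two, which Ceph prefers for pg_num.
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done

echo "$pg"
```

With the assumed values this yields 256, which would be a reasonable pg_num for such a cluster.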

After creating the pool, verify everything is working correctly:

ceph -s

This command shows you the current status of your Ceph cluster. If you see HEALTH_OK, your pool has been set up correctly and your cluster is running reliably.
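If the status shows HEALTH_WARN or HEALTH_ERR instead, you can ask Ceph for the specific issues:

```shell
# Show per-issue details when the cluster is not HEALTH_OK.
ceph health detail
```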

Step 8: Connect Ceph storage to Proxmox

Once your Ceph cluster has been set up and the first pool created, you need to connect the Ceph storage to Proxmox so that your virtual machines and containers can use it. Proxmox uses the RBD protocol for this. The easiest way to connect the two is through the Proxmox web interface: open it and go to Datacenter > Storage > Add > RBD (Ceph).

In the dialog that appears, enter the required settings for your Ceph cluster.

  • Under ID, enter a unique name for the new storage target.
  • In the Monitors field, enter the IP addresses of your Ceph MONs. These are the nodes running the monitor services. Separate multiple addresses with commas. For example, 10.0.0.11,10.0.0.12,10.0.0.13.
  • In the Pool field, enter the name of the Ceph pool you created earlier, e.g., cephpool.
  • For the user field, you can typically enter admin.
  • The keyring is filled in automatically, as Proxmox retrieves the required authentication key from your Ceph configuration.
Note

If you prefer working from the command line, you can perform the same task with a single command:

pvesm add rbd ceph-storage --monhost <mon1,mon2,mon3> --pool cephpool --content images

Replace <mon1,mon2,mon3> with the IP addresses of your monitor nodes.

Once added, the storage appears in the Proxmox interface. You can now select it as a target for virtual machines. Proxmox will then use Ceph as the underlying storage, and any VMs you create on it will automatically benefit from the cluster’s redundancy and fault tolerance.
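As a quick end-to-end check, you can create a test VM whose disk is allocated on the new storage. A sketch, assuming the storage ID ceph-storage from above and a free VM ID of 100:

```shell
# Create a minimal VM with a 32 GB disk allocated on the Ceph-backed storage.
qm create 100 --name ceph-test --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 ceph-storage:32
```

If the command succeeds, the disk image appears in the Ceph pool and the cluster’s replication applies to it immediately.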

Tip

Integrating Ceph is especially valuable if you plan to run a Kubernetes cluster on Proxmox. Ceph can serve as persistent storage for Kubernetes, giving your containers the same redundancy and high availability as your virtual machines.
