Running Kubernetes on Proxmox is a powerful way to manage container workloads in your own virtual environment. This guide shows you how to build a stable Kubernetes setup from scratch using virtual machines hosted on Proxmox. Whether you’re setting up a dev lab, CI pipeline or a small-scale production cluster, we’ll walk you through everything from prerequisites to load balancer configuration.

Step 1: What you need before you start

Before diving into the setup, make sure your environment meets a few technical requirements. Starting with a clean setup saves you a lot of time and helps avoid configuration errors later.

You’ll need a working Proxmox VE installation. For best performance, Proxmox should be set up as a bare-metal installation. Make sure both the web interface and SSH access are enabled. You’ll need them to run commands, upload images, and automate configurations.

To build a stable Kubernetes cluster, you’ll also need several virtual machines, ideally set up as dedicated Kubernetes nodes:

  • One master node (for the control plane)
  • At least two worker nodes (to run your workloads)

This setup gives you redundancy and mirrors real-world Kubernetes architecture. For testing, a smaller cluster with one master and one worker is fine.

Your Proxmox host should also have a working bridge interface that lets your virtual machines (VMs) connect to the LAN and the internet. This is essential for downloading updates and installing Kubernetes components.

Tip

For production environments, it’s a good idea to automate VM backups using Proxmox Backup Server. This lets you restore nodes quickly and keep downtime to a minimum.

Step 2: Download the cloud image and create a VM template

The easiest way to install Kubernetes is by using cloud images: preconfigured OS images (like Ubuntu or Debian) optimised for cloud-init automation. In this guide, we’ll be using Ubuntu 22.04 LTS, owing to its stability, clear documentation and easy integration with Kubernetes.

Begin by logging in to your Proxmox host via SSH. Then switch to the directory where Proxmox stores ISO and image files. You’ll download the latest Ubuntu cloud image to this location:

```bash
cd /var/lib/vz/template/iso
```

Download the Ubuntu cloud image:

```bash
wget -O ubuntu-22.04-server-cloudimg-amd64.img https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
```
Note

Alternatively, download the image locally and transfer it using scp (Secure Copy):

```bash
scp ubuntu-22.04-server-cloudimg-amd64.img root@<proxmox-ip>:/var/lib/vz/template/iso/
```

Now create a base VM to use as a reusable template. Start by creating an empty VM with a unique ID, such as 9000, and assign it basic hardware resources:

```bash
qm create 9000 --name ubuntu-template --memory 2048 --net0 virtio,bridge=vmbr0
```

Now import the downloaded image as a disk into your Proxmox storage (here, local-lvm):

```bash
qm importdisk 9000 /var/lib/vz/template/iso/ubuntu-22.04-server-cloudimg-amd64.img local-lvm
```

Next, attach the imported disk to the VM and set the correct controller for it. This step connects the image to the virtual SCSI controller used by the VM:

```bash
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
```

To automatically assign IP addresses, hostnames and SSH keys when cloning the VMs, you’ll need to add a Cloud-Init drive. This drive stores the configuration data that Proxmox applies each time the VM boots. Use the following command to add the Cloud-Init drive and define the boot order:

```bash
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot c --bootdisk scsi0
```

Then enable the QEMU guest agent so that Proxmox can read status information from the VM. It’s also a good idea to enable a serial console. This gives you low-level access to the VM in case of an emergency:

```bash
qm set 9000 --agent 1
qm set 9000 --serial0 socket --vga serial0
```

With the setup complete, it’s time to convert the virtual machine into a template. In Proxmox, templates act as reusable blueprints, meaning you can create as many clones from them as you need. This makes them ideal for setting up your Kubernetes nodes on Proxmox.

```bash
qm template 9000
```

Your Ubuntu template is now ready. You’ll use it as the foundation for both your master and worker nodes.

Step 3: Clone the master and worker VMs

In this step, you’ll clone the virtual machines from the Ubuntu template you set up earlier. These cloned VMs will act as the master and worker nodes of your Kubernetes cluster. Each VM should have its own IP address, unique hostname, and SSH key for security. You don’t need to configure anything manually inside the VMs since Proxmox takes care of the base configuration through cloud-init.

Start by cloning the base template (in this example, ID 9000) to create three virtual machines: one for the master and two for the worker nodes. You can also configure CPU and memory individually for each VM:

```bash
qm clone 9000 101 --name k8s-master-1 --full true
qm set 101 --cores 2 --memory 4096
qm clone 9000 102 --name k8s-worker-1 --full true
qm set 102 --cores 2 --memory 4096
qm clone 9000 103 --name k8s-worker-2 --full true
qm set 103 --cores 2 --memory 4096
```

Next, use cloud-init to configure the hostname, IP address and SSH key for each VM. You can either assign static IPs or use DHCP. This example uses static addressing:

```bash
# Configure master
qm set 101 --ipconfig0 ip=192.168.1.10/24,gw=192.168.1.1
qm set 101 --sshkeys ~/.ssh/id_rsa.pub
qm set 101 --ciuser ubuntu
qm set 101 --nameserver 192.168.1.1
qm set 101 --description "K8s Master 1"

# Configure workers
qm set 102 --ipconfig0 ip=192.168.1.11/24,gw=192.168.1.1
qm set 102 --sshkeys ~/.ssh/id_rsa.pub
qm set 102 --ciuser ubuntu
qm set 103 --ipconfig0 ip=192.168.1.12/24,gw=192.168.1.1
qm set 103 --sshkeys ~/.ssh/id_rsa.pub
qm set 103 --ciuser ubuntu
```
Note

Make sure the IP addresses match your local network. Use values from your router’s IP range and assign a unique address to each VM.
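For larger clusters, the clone and cloud-init commands above can be wrapped in a single loop. The sketch below is a convenience script, not part of the official tooling: the node IDs, names and IPs mirror the examples in this guide, and with `DRY_RUN=1` (the default) it only prints the `qm` commands so you can review them before running it on a real Proxmox host.

```shell
#!/usr/bin/env bash
# Sketch: clone the template and apply cloud-init settings to each node in one loop.
# DRY_RUN=1 (the default) prints the qm commands instead of executing them.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"
TEMPLATE_ID=9000
GATEWAY=192.168.1.1
SSH_KEY_FILE="$HOME/.ssh/id_rsa.pub"   # public key pushed into each VM

# VM ID, name and static IP per node -- adjust to your network
NODES=(
  "101 k8s-master-1 192.168.1.10"
  "102 k8s-worker-1 192.168.1.11"
  "103 k8s-worker-2 192.168.1.12"
)

# Echo the command in dry-run mode, otherwise execute it
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

for node in "${NODES[@]}"; do
  read -r id name ip <<< "$node"
  run qm clone "$TEMPLATE_ID" "$id" --name "$name" --full true
  run qm set "$id" --cores 2 --memory 4096
  run qm set "$id" --ipconfig0 "ip=${ip}/24,gw=${GATEWAY}"
  run qm set "$id" --sshkeys "$SSH_KEY_FILE"
  run qm set "$id" --ciuser ubuntu
done
```

Once the printed commands look right for your environment, set `DRY_RUN=0` to actually execute them.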

Finally, start all three virtual machines:

```bash
qm start 101
qm start 102
qm start 103
```

Wait a moment for the VMs to finish booting, then test the connection via SSH. Use the following command to connect to the master node:

```bash
ssh ubuntu@192.168.1.10
```

Step 4: Apply the base configuration on all virtual machines

Before installing Kubernetes, make a few system-wide changes to each VM: disable swap, adjust kernel settings for networking and IP forwarding, and sync the system clock. Doing so helps Kubernetes run reliably and ensures the containers can communicate with each other over the network.

Kubernetes requires swap to be disabled for its scheduler to work properly. You should also remove the swap entry in /etc/fstab so it’s not re-activated on reboot:

```bash
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
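The `sed` expression above comments out every line in /etc/fstab containing ` swap `. If you want to confirm what it will change before touching the real file, you can run it against a throwaway copy first (the fstab entries below are made-up examples):

```shell
# Dry-run the swap-commenting sed expression on a scratch file
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

sed -i '/ swap / s/^/#/' "$tmp"
cat "$tmp"   # the swap entry is now commented out; the root entry is untouched
```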

Next, configure the kernel so that network traffic between containers and nodes is processed correctly:

```bash
# Load the br_netfilter module (now and on boot) so the bridge sysctls take effect
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply changes
sudo sysctl --system
```

Kubernetes components and certificates rely on the system time being accurate. To keep the clock in sync, install and start chrony:

```bash
sudo apt update && sudo apt install -y chrony
sudo systemctl enable --now chrony
```

Finally, install some basic tools that you’ll need later:

```bash
sudo apt install -y curl apt-transport-https ca-certificates gnupg lsb-release
```

At this point, each node should have swap disabled, networking configured and system time synced. This means your VMs are now ready to install Kubernetes and set up the cluster.

Step 5: Choosing a Kubernetes distribution

Before installing Kubernetes itself, you’ll need to choose the distribution that best fits your setup. In this guide, we’ll focus on two popular options:

  • RKE2 (Rancher Kubernetes Engine 2): RKE2 is a full-featured, production-grade Kubernetes distribution developed by Rancher. It’s a solid choice if you plan to use the Rancher management interface or want to run a cluster with multiple control plane nodes.
  • k3s: k3s is a lightweight Kubernetes distribution designed for test environments, home labs and systems with limited resources. It’s easy to install and uses less memory and CPU than a full Kubernetes setup.

For this guide, we’ll use RKE2, as it’s well-suited for building a robust cluster that can scale beyond testing if needed. If you’re just experimenting or setting up a quick dev environment, you might prefer k3s, but the installation process will differ slightly.

Step 6: Install RKE2 on the master node

With the basic setup complete, you can now install RKE2 on your master node. Start by connecting to the master via SSH:

```bash
ssh ubuntu@192.168.1.10
```

Next, download and run the RKE2 installation script. To install a specific version, set the channel as follows:

```bash
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_CHANNEL=v1.28 bash -
```

Once installed, enable and start the RKE2 server service:

```bash
sudo systemctl enable --now rke2-server.service
```

Use the following command to check if the service is running correctly:

```bash
sudo systemctl status rke2-server
```

To manage the Kubernetes cluster from your local machine, copy the kubeconfig file:

```bash
# On the master node: make the kubeconfig readable
sudo chmod 644 /etc/rancher/rke2/rke2.yaml

# On your local machine: copy the file over
scp ubuntu@192.168.1.10:/etc/rancher/rke2/rke2.yaml ~/rke2-kubeconfig
```

Update the file to match the master node’s IP so kubectl connects to the correct server:

```bash
sed -i 's/127.0.0.1:6443/192.168.1.10:6443/' ~/rke2-kubeconfig
export KUBECONFIG=~/rke2-kubeconfig
```

Use the following command to check the connection to the master node:

```bash
kubectl get nodes
```

If the master appears, the installation was successful. You’re now ready to add the worker nodes.

Step 7: Install the RKE2 agent on the worker nodes

With the master node running, it’s time to add the workers. To do so, you’ll need to install the RKE2 agent on each worker and connect them to the master.

Start by retrieving the node token from the master node. You’ll need this token to authenticate the worker nodes when they join the cluster:

```bash
sudo cat /var/lib/rancher/rke2/server/node-token
```

Make a note of the token. You’ll need to use it on each worker node.

Next, connect to a worker node via SSH:

```bash
ssh ubuntu@192.168.1.11
```

Download the RKE2 installation script and install the agent:

```bash
# INSTALL_RKE2_TYPE="agent" installs the agent service instead of a server
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_CHANNEL=v1.28 INSTALL_RKE2_TYPE="agent" sh -
```

Then create the config file that connects the worker to the master and includes the token:

```bash
sudo mkdir -p /etc/rancher/rke2
cat <<EOF | sudo tee /etc/rancher/rke2/config.yaml
server: https://192.168.1.10:9345
token: <INSERT_TOKEN_HERE>
EOF
```
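If you have several workers, it can help to render this config from variables so the same snippet works unchanged on every node. This is a sketch: the token value is a placeholder, and 9345 is the port RKE2 servers listen on for node registration.

```shell
# Render the RKE2 agent config from variables before writing it on each worker
SERVER_IP="192.168.1.10"
NODE_TOKEN="<INSERT_TOKEN_HERE>"   # paste the value from /var/lib/rancher/rke2/server/node-token

config="$(cat <<EOF
server: https://${SERVER_IP}:9345
token: ${NODE_TOKEN}
EOF
)"
echo "$config"   # on a worker, pipe this into: sudo tee /etc/rancher/rke2/config.yaml
```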

Finally, enable and start the agent:

```bash
sudo systemctl enable --now rke2-agent.service
```

Repeat these steps for each worker node. After a few minutes, run the following command to confirm they’ve all joined the cluster:

```bash
kubectl get nodes
```

You should now see the master and all worker nodes listed in your Kubernetes cluster. Your setup is complete and ready for network plugins, load balancers and other components.

Step 8: Install the network CNI and load balancer

With the master and worker nodes set up, your cluster needs two last components: a Container Network Interface (CNI) so the pods can communicate with each other and a load balancer to make services available within your network. This guide uses Calico for networking and MetalLB for Layer 2 load balancing.

Calico handles pod-to-pod communication, assigns IP addresses and can also enforce network policies. Use this command to install it:

```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```

Once installed, check that all Calico pods have started:

```bash
kubectl get pods -n kube-system
```

All of them should show as Running or Completed. If any are still showing Pending, give it a few minutes. Calico needs time to roll out its network configuration across the cluster.

Kubernetes supports the LoadBalancer service type, which assigns external IPs to services. In a self-hosted environment like Proxmox, this requires a tool like MetalLB. Use the following command to install it:

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
```

Next, create a pool of IP addresses that MetalLB can use when assigning external IPs to your Kubernetes services. Use addresses that fit within your local network:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adv
  namespace: metallb-system
spec: {}
EOF
```

Check the status of the MetalLB pods:

```bash
kubectl get pods -n metallb-system
```

Once all pods show Running, your cluster is ready to go. Use the LoadBalancer service type to make your apps accessible within your local network. With Kubernetes now running on Proxmox, your setup is ready for deploying and managing applications.
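As a quick end-to-end check, you can deploy a small nginx Deployment and expose it through a LoadBalancer service; MetalLB should then assign it an address from the pool defined above. The manifest below is an example sketch (the names and image are placeholders you can swap out):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: LoadBalancer
  selector:
    app: nginx-demo
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f`, then `kubectl get svc nginx-demo` should show an EXTERNAL-IP from the 192.168.1.200-210 range.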
