Prerequisites for my Kubernetes RKE2/K3S cluster
Hello there! 👋 Welcome to my first post, where I’ll be guiding you through the setup of my Kubernetes K3S/RKE2 cluster. This post will cover the hardware specifications, software choices, and the initial steps to get the environment ready. Let’s dive in!
Hardware Specifications
The backbone of my Kubernetes cluster consists of three Beelink S12 Pro mini-PCs. These machines are compact yet powerful enough for a home lab cluster. Here’s what each node packs:
- RAM: 16GB
- CPU: Intel N100
- Primary Storage: 500GB NVMe SSD
- Secondary Storage: 500GB SATA SSD
This setup provides a good balance of performance and storage, ensuring my cluster runs smoothly.
Why Proxmox?
When choosing the platform for managing my virtual machines, I went with Proxmox. Proxmox is known for its robust features like easy VM backup and restoration, which are essential for maintaining a reliable environment. Additionally, it’s a best practice to separate the control plane (master node) and worker nodes into distinct VMs, and Proxmox makes this straightforward.
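As a quick illustration of that backup workflow, here is a minimal sketch using Proxmox’s built-in `vzdump` tool. The VM ID 5000, the `local` storage target, and the archive filename are assumptions for illustration; substitute your own:

```bash
# Snapshot-mode backup of VM 5000 to the "local" storage, compressed with zstd
vzdump 5000 --storage local --mode snapshot --compress zstd

# Restore the backup into a new VM ID (the archive path is illustrative;
# check /var/lib/vz/dump for the actual filename)
qmrestore /var/lib/vz/dump/vzdump-qemu-5000-2024_01_01-00_00_00.vma.zst 5001
```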
Setting Up the VMs
With Proxmox up and running, the next step is to create the virtual machines that will serve as the nodes in my Kubernetes cluster. Each Beelink S12 Pro will host the following:
- Control Plane (Master Node)
- Agent (Worker Node)
Selecting the OS: Ubuntu 24.04 Cloud Image
Initially, I considered using the Debian cloud image, but I discovered it wasn’t officially supported. Instead, I opted for the Ubuntu 24.04 cloud image—a solid, well-supported choice that’s widely used in the community.
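If you want to follow along, the image can be downloaded straight onto the Proxmox host. Here is a minimal sketch using the official Ubuntu cloud image URL; the target directory matches the one used in the import step below:

```bash
# Download the Ubuntu 24.04 (Noble) cloud image into Proxmox's ISO/template directory
cd /var/lib/vz/template/iso/
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
```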
Creating the VM Template
To streamline the creation of VMs, I first set up a template using the downloaded Ubuntu cloud image. Here’s how you can create the template:
Create the VM in shell:
```bash
qm create 5000 --memory 4096 --cores 1 --name master-ubuntu-2404-cloud --net0 virtio,bridge=vmbr0
```
Import the cloud image to Proxmox:
```bash
cd /var/lib/vz/template/iso/
qm importdisk 5000 noble-server-cloudimg-amd64.img <your storage name such as local-lvm>
```
Configure the VM:
```bash
qm set 5000 --scsihw virtio-scsi-pci --scsi0 <storage name>:vm-5000-disk-0
qm set 5000 --ide2 <storage name>:cloudinit
qm set 5000 --boot c --bootdisk scsi0
qm set 5000 --serial0 socket --vga serial0
```
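One detail worth knowing: the imported cloud image disk is only a few gigabytes, so if you want the 50GB OS disks described later, now is a good moment to grow it. A minimal sketch, where the 50G target is my choice for this cluster:

```bash
# Grow the imported disk to 50GB; qm resize only grows a disk, it never shrinks it
qm resize 5000 scsi0 50G
```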
Add SSH Key to the Cloud-Init Drive:
To enable SSH access to the VM, you need to add an SSH key to the cloud-init drive. You can do this via the Proxmox GUI or the command line. Here’s an example using the command line:
```bash
qm set 5000 --sshkeys ~/.ssh/id_rsa.pub
```
This ensures that your VMs are accessible with this SSH key, which is crucial for remote management and automation.
This template will save time when deploying multiple VMs, as you won’t need to repeat these steps for each one. I repeated the whole process twice because I wanted one template for the master node and one for the agent node.
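To actually turn the prepared VM into a reusable template and stamp out nodes from it, the standard Proxmox commands look like this. A minimal sketch; the clone ID and name are assumptions for illustration, and you would repeat the clone step from the agent template for worker nodes:

```bash
# Convert the prepared VM into a template (this is one-way)
qm template 5000

# Full-clone a master node from the template (ID and name are examples)
qm clone 5000 101 --name master-01 --full
```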
Allocating Resources to Each VM
To ensure the cluster runs efficiently, I carefully allocated resources to each VM based on its role (see the example commands after this list):
- Control Plane Node:
- RAM: 4GB
- CPU: 1 Host Core
- Storage: 50GB SSD
- Worker Node:
- RAM: 12GB
- CPU: 3 Host Cores
- Storage: 50GB SSD for the OS and a dedicated 500GB SSD for persistent storage
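These allocations map directly onto `qm set` calls. Here is a minimal sketch, under the assumption that the cloned master got ID 101 and the worker ID 201 (both IDs and the SATA device path are illustrative):

```bash
# Control plane: 4GB RAM, 1 core, host CPU type
qm set 101 --memory 4096 --cores 1 --cpu host

# Worker: 12GB RAM, 3 cores, host CPU type
qm set 201 --memory 12288 --cores 3 --cpu host

# Attach the dedicated 500GB SATA SSD to the worker for persistent storage
# (passing the whole disk through by path; use your disk's actual by-id path)
qm set 201 --scsi1 /dev/disk/by-id/ata-YOUR-SATA-SSD
```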
What’s Next?
With the VMs set up, we’re ready to move on to deploying K3S/RKE2. In my next post, I’ll set up RKE2 with the Ansible playbook from JimsGarage, so stay tuned! If you’re following along, now would be a great time to ensure you have an Ansible VM ready to go.
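If you want to sanity-check that Ansible VM ahead of time, here is a minimal sketch; the `inventory.ini` file and its host entries are assumptions, and the actual playbook comes in the next post:

```bash
# Confirm Ansible is installed on the control machine
ansible --version

# Verify SSH connectivity to the future cluster nodes listed in a simple inventory file
ansible all -i inventory.ini -m ping
```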