Let’s start with what K3s is. K3s is made by Rancher, who describe it as “the certified Kubernetes distribution built for IoT & Edge computing”. It is also a great solution for a home lab: the package is less than 40 MB and it runs on just about every Linux distribution. So let’s begin with the installation!
Requirements
To get a running, highly available K3s cluster, you need 3 things:
- External database (I use PostgreSQL)
- External load balancer (I use NGINX)
- At least 3 servers (I use virtualized Ubuntu servers)
I’ll assume you have a database running, an NGINX server (without config; I’ll show you the config later on), and at least 3 Linux servers with static IP addresses. We will use 2 Linux servers as Kubernetes master nodes and at least 1 Linux server as a Kubernetes agent node.
Database
I created a database called k3s on my PostgreSQL server. Alongside it, I created a user called k3s and gave it a password, login rights, and full access to the k3s database.
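The article doesn’t show the exact statements, but a minimal sketch of this setup in a psql session could look like the following (the password is a placeholder, and CREATE USER grants login rights by default):

```sql
-- Hypothetical setup matching the article: database k3s, user k3s.
CREATE DATABASE k3s;
CREATE USER k3s WITH ENCRYPTED PASSWORD 'SuperSecret';
GRANT ALL PRIVILEGES ON DATABASE k3s TO k3s;
```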
Load balancer
Next, I configured my load balancer. I have an NGINX Docker container running on a separate server with the config shown below.
events {}

stream {
  upstream k3s_servers {
    server 192.168.0.10:6443;
    server 192.168.0.11:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
This config does 2 things. The server block tells NGINX to listen on port 6443 and act as a proxy, forwarding incoming requests to the upstream server group called k3s_servers. The upstream block lists all my K3s master nodes. My 2 master nodes have IP addresses 192.168.0.10 and 192.168.0.11, so I tell NGINX to route traffic to those 2 servers.
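The article doesn’t show how the container is started; one way to run such an NGINX container is with a small docker-compose file. This is only a sketch (the image tag and the nginx.conf path next to the compose file are assumptions):

```yaml
# docker-compose.yml — hypothetical setup for the NGINX load balancer.
# Assumes the stream config above is saved as ./nginx.conf.
services:
  k3s-lb:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "6443:6443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```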
Install and configure master node
First, we’ll see what the datasource endpoint looks like. This is different for every database engine, so I’ll list a few. Assume we have a database called k3s and a user k3s_user with password SuperSecret, and that the database is running on server 192.168.0.5. Of course, you’ll have to adjust these values to match your environment.
| Database engine | Datasource endpoint |
| --- | --- |
| PostgreSQL | postgres://k3s_user:SuperSecret@192.168.0.5:5432/k3s |
| MariaDB/MySQL | mysql://k3s_user:SuperSecret@tcp(192.168.0.5:3306)/k3s |
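If you prefer building the endpoint string from variables (handy when scripting the install), a small sketch using the example values above could look like this:

```shell
# Example values from this article; adjust to your environment.
DB_USER=k3s_user
DB_PASS=SuperSecret
DB_HOST=192.168.0.5

# PostgreSQL endpoint
PG_ENDPOINT="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/k3s"

# MariaDB/MySQL endpoint (note the tcp(...) wrapper around host:port)
MYSQL_ENDPOINT="mysql://${DB_USER}:${DB_PASS}@tcp(${DB_HOST}:3306)/k3s"

echo "$PG_ENDPOINT"
echo "$MYSQL_ENDPOINT"
```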
Now log in to the server where you want to run the first master node. The command we are going to execute is shown next; I’ll walk through what each parameter means:
curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.0.6 --tls-san k3s.home --datastore-endpoint 'postgres://k3s_user:SuperSecret@192.168.0.5:5432/k3s'
- The curl -sfL https://get.k3s.io part fetches the installation script from the K3s website.
- sh -s - server executes the fetched script as “server”, which means this machine will be installed as a Kubernetes master node.
- --node-taint CriticalAddonsOnly=true:NoExecute taints the node so that regular pods will not be scheduled on it; only critical components that tolerate this taint are allowed to run here.
- --tls-san 192.168.0.6 and --tls-san k3s.home add the load balancer’s IP address and hostname to the server’s TLS certificate. Without this you’d get a certificate warning when connecting through the load balancer. You can repeat this argument as many times as you want, once for each URL you’ll use.
- --datastore-endpoint 'postgres... configures the datastore endpoint, using the format from the table above.
Now press enter, wait a few seconds and tadaa, it works! You can check that it’s running by executing the command sudo k3s kubectl get nodes:
rens@k3sm-01:~$ sudo k3s kubectl get nodes
[sudo] password for rens:
NAME      STATUS   ROLES                  AGE   VERSION
k3sm-01   Ready    control-plane,master   5m    v1.20.2+k3s1
The next part is to configure the second master node. The easy part: it is exactly like the first master node! So execute the same command on the second server and you’ll be done.
Install and configure agent nodes
We need to execute a slightly modified version of the script, and we’ll also have to tell the agent node where the master nodes are. To do so, we need the token from the master node. On one of the master nodes, read the token from this file:
sudo cat /var/lib/rancher/k3s/server/node-token
Save the output.
Now we’ll execute this script on every agent node.
curl -sfL https://get.k3s.io | K3S_URL=https://k3s.home:6443 K3S_TOKEN=<TokenFromPreviousOutput> sh -
What does all of this mean?
- curl -sfL https://get.k3s.io is the same as before.
- K3S_URL=https://k3s.home:6443 configures where the agent can find the master nodes; here it points at the load balancer.
- K3S_TOKEN=<TokenFromPreviousOutput> hands the token (fetched from the master node) to the agent. It needs this to be allowed to connect to the master nodes.
- sh - executes the script. Without the server argument, the node is installed as an agent.
Now press enter, wait a few seconds and tadaa, it works! You now have 2 master nodes and 1 agent node. Every pod you schedule on this cluster will run on the single agent node. If that is not enough, just get a 4th server and execute the script above on it as well. I eventually ended up with 4 agent nodes.
The final part is to get the kubeconfig so you can configure your local computer to connect to your cluster. You can find this kubeconfig file on one of the master nodes:
sudo cat /etc/rancher/k3s/k3s.yaml
This will give you an output like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Base64EncodedValue>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <Base64EncodedValue>
    client-key-data: <Base64EncodedValue>
You can copy this to your computer almost 1 to 1; the only part you need to change is the server attribute. It needs to be updated to the URL you used when configuring the master nodes, in my case https://k3s.home:6443.
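That change can also be scripted. A small sketch using sed, with the relevant kubeconfig lines written inline so the example is self-contained (normally you’d copy the real file from a master node, e.g. with scp, and place the result at ~/.kube/config):

```shell
# Stand-in for the kubeconfig copied from a master node.
cat > k3s.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Base64EncodedValue>
    server: https://127.0.0.1:6443
  name: default
EOF

# Replace the localhost address with the URL passed to --tls-san earlier.
sed -i 's|server: https://127.0.0.1:6443|server: https://k3s.home:6443|' k3s.yaml

# The file is now ready to be used as ~/.kube/config on your local machine.
grep 'server:' k3s.yaml
```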
There you go: a fully running, highly available Kubernetes cluster using K3s.