Set up a Kubernetes cluster in minutes

Wasilios Goutas
8 min read · Feb 7, 2021

If you would like to set-up your own low budget Kubernetes cluster, do it as I did, with Rancher’s K3s.

Photo by Kent Pilcher on Unsplash

Google, who invented Kubernetes, is not the only company providing an implementation of it. At the Cloud Native Computing Foundation you can find a list of Kubernetes distributions certified by the CNCF. One of these companies is Rancher, which provides two open source Kubernetes distributions: Rancher Kubernetes Engine (RKE) and K3s.

My reasons for choosing Rancher are: it is open source, Rancher’s web UI is great, and they provide free online courses on Kubernetes and other topics.

In this article I’m going to show you how you can set up a lightweight Kubernetes cluster with K3s.

First, the explanation of the name K3s, which can be found on the K3s documentation page:

We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.

Prerequisites

As K3s is a lightweight Kubernetes distribution, its requirements for CPU and memory are quite low:

  • 512 MB RAM (1 GB recommended)
  • 1 CPU

Resource test results can be found here.

Well, what I really like is that K3s does not need many network ports. I set up an RKE cluster first and tried to open the ports needed to let my home PC be part of the cluster, which ended up in a nightmare of forwarding rules.

K3s needs to be able to communicate through the ports 6443 (TCP), 8472 (UDP), 10250 (TCP), and for HA clusters the range 2379–2380 (TCP).

Compared to RKE this is really a minimal set of ports you have to take care of ;)

Rancher K3s inbound rule documentation
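If a firewall is running on your nodes, these ports have to be allowed between the cluster members. The following is just a minimal sketch assuming Ubuntu’s ufw firewall; adapt the rules to whatever firewall you use and restrict them to your node IPs where possible.

# Kubernetes API server (agents and kubectl -> server)
sudo ufw allow 6443/tcp
# flannel VXLAN traffic between all nodes
sudo ufw allow 8472/udp
# kubelet metrics between all nodes
sudo ufw allow 10250/tcp
# only needed for HA set-ups with embedded etcd
sudo ufw allow 2379:2380/tcp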

The operating system K3s runs on is Linux. The machines I’m making use of in this article are all Ubuntu Linux systems.

Installing the server

I use 2 VPS systems from a service provider, running Ubuntu 20.04, which I’m going to run Kubernetes on. It will not become a High Availability cluster, as this would need at least 3 servers, although K3s would even enable us to set up an HA cluster with only 2 servers when making use of an external database, which is a configuration option. K3s supports datastores other than etcd, which seems to be a unique feature of K3s and lets you choose PostgreSQL, MySQL, MariaDB, embedded SQLite or etcd as the Kubernetes key-value store.
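As a minimal sketch of that option (the connection string is only a placeholder; it assumes a MySQL database named k3s reachable at db-host), the server would be installed with the --datastore-endpoint parameter:

curl -sfL https://get.k3s.io | sh -s server --datastore-endpoint="mysql://user:password@tcp(db-host:3306)/k3s"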

A K3s cluster consists of at least one server node, which already makes a single node cluster. Additional agent nodes are added to the cluster by registering with the server(s) using a shared cluster token.

I’m going to show how to set up a cluster containing one server and one agent, and also how to set up a cluster with multiple servers (the agent set-up will be the same for all configurations).

On the first of my machines I run the following command to get the K3s server installed.

curl -sfL https://get.k3s.io | sh -

K3s server / one node cluster installation in < 1 min

The installation process also supports configuration options which can be set via environment variables. Details on the options can be found here. For me the defaults are good, but to demonstrate how to e.g. set the channel to download K3s from (the supported channels are: stable, latest, testing), I uninstall my previous installation with the command

k3s-uninstall.sh

and install the server from the channel ‘stable’ (which is the default) by running

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL="stable" sh -s -

The system now is configured to start the K3s server as a ‘systemd’ service.

In the Kubernetes world, the kubectl command is the main tool to control the cluster. Once you have installed K3s using the scripted installation above, kubectl will be callable directly, as it is a link to the k3s binary. Alternatively you can call it as a subcommand of k3s, e.g. “k3s kubectl <kubectl command>”.

k3s binary usage
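For example, both of the following calls list the nodes of the cluster and should produce the same result on a node installed with the script:

kubectl get nodes
k3s kubectl get nodes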

Any agent you want to add to the K3s cluster controlled by the server installed above needs to be configured to use the token provided by the server.

cluster-info & node token

Read the token (the complete line) from the file “/var/lib/rancher/k3s/server/node-token” as you will need it during the agent installation.
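On the server this is a single command (root permissions are needed to read the file):

sudo cat /var/lib/rancher/k3s/server/node-token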

Registration of agents

The installation of agents is done by calling the k3s binary with the command ‘agent’ and providing it additional parameters. The mandatory parameters are --server and --token. The server has to be addressed with the https protocol on port 6443; the token comes from the K3s server installation. These parameters can also be set via environment variables, as the command “k3s agent --help” shows us.

k3s agent --help

The installation script used for the server can also be used to set up an agent. You just need to add the command ‘agent’ and the parameters mentioned earlier. I will make use of the environment variables K3S_URL and K3S_TOKEN to provide these values to the installation script.

The script to set up an agent is the same as the one used to set up a server. Internally it checks if the environment variable K3S_URL is set. If so, it installs the agent, otherwise the server. As I set the variable beforehand, the command to run the installation is exactly the same as in the server installation chapter.

curl -sfL https://get.k3s.io | sh -s -

k3s agent set-up
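Written out as a one-liner with the environment variables set inline, the agent installation looks like this (the server host name and the token are placeholders for your own values):

curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -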

On the server you will be able to check the availability of the newly added agent by calling

k3s kubectl get node

k3s kubectl get node

The set-up of a Kubernetes cluster, consisting of one server and one agent, is done with just a handful of commands :)

The agent installation also provides your system with a script to uninstall it from the node. To uninstall the agent just call:

k3s-agent-uninstall.sh

High Availability cluster

For an HA cluster you must have an odd number of server nodes (minimum 3). As I just have 2 nodes, what I’m going to set up here will not be an HA cluster; it is just to demonstrate how the installation can be done.

Basically, setting up a cluster with multiple servers means running one server instance with the flag --cluster-init to enable clustering and providing the same secret token to all server installations.

Before I start, I uninstall the server and agent instances from my nodes by the commands k3s-uninstall.sh and k3s-agent-uninstall.sh.

Every node needs to make use of the same secret token, so on both systems I could define the environment variable K3S_TOKEN, but in this case I will provide it as a parameter to the installation script.

On the first node I need to run the server with the parameter --cluster-init so the installation command I make use of is the following:

curl -sfL https://get.k3s.io | sh -s server --cluster-init --token "topsecret"

k3s server installation with predefined token and initialization of cluster

On the second node the same token must be used, and in addition the parameter --server must point to the node running the server which initialized the cluster. This time I will provide the token as an environment variable, just to show this option as well.

curl -sfL https://get.k3s.io | K3S_TOKEN="topsecret" sh -s server --server https://vmd62521.contaboserver.net:6443

second server set-up
multi server set-up

Now both servers have been set up and form one Kubernetes cluster.

Any additional server node would be added in the same way as the second one. With an odd number of servers (minimum of 3), or at least 2 servers configured to use an external datastore, the cluster would become a High Availability cluster, which sounds complicated but as shown is quite easy to do.

My private PC runs [K]Ubuntu 20.04, and I want it to be part of the K3s Kubernetes cluster. I have a dynamic DNS entry and am able to reach my home network from outside. Unfortunately my router has a problem forwarding the port range 2379–2380 (TCP) :( so I fail to meet the prerequisites to add it as a third server, but I can add it as an agent to the cluster. Let’s try ;)

I set up the agent by calling the installer with the command ‘agent’ and use the same token as the servers.

curl -sfL https://get.k3s.io | K3S_TOKEN="topsecret" sh -s agent --server https://vmd62521.contaboserver.net:6443

10 sec. later my home PC is part of the K3s Kubernetes cluster :)

add local PC as agent to cluster

On the server nodes the command

k3s kubectl get nodes

lists my local PC as part of the cluster.
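The output looks roughly like the following (node names, ages and versions are only illustrative and will of course differ on your systems):

NAME         STATUS   ROLES                       AGE   VERSION
vmd62521     Ready    control-plane,etcd,master   25m   v1.20.2+k3s1
vmd62522     Ready    control-plane,etcd,master   20m   v1.20.2+k3s1
my-home-pc   Ready    <none>                      2m    v1.20.2+k3s1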

That’s it! The Kubernetes cluster has been set up.

Rancher’s Web UI

At the beginning of the article I mentioned that Rancher provides a great Web UI. I think its installation is worth a story of its own, so I plan to write one soon.

Summary

In this article I tried to show you how easy it is to set up a lightweight and low budget Kubernetes cluster with K3s.

The only thing needed is the k3s binary which implements everything needed to create a Kubernetes cluster.

By using the script provided at https://get.k3s.io, the installation is very convenient. The glue between servers and agents is the token, which is either created during the server installation or provided by yourself during installation.

Thank you for reading!

My name is Wasilios Goutas, I’m a Dipl. Ing. techn. Informatik (FH), which I studied in my home town Hamburg, Germany. I’m an experienced software project manager in the automotive and semiconductor industries. In my private life I’m making use of AI libraries like Keras & TensorFlow to learn more about the technology, which will play a significant role in tomorrow’s world.

Feel free to contact me on LinkedIn.

