Small Kubernetes for your local experiments: k0s, MicroK8s, kind, k3s, and Minikube

So you’ve come up with an idea to automate, unify, or transform something in a cluster, but you don’t want to risk ruining it. This scenario is familiar to most people who have worked with Kubernetes. What you need is an easy-to-set-up sandbox where you can test your idea without taking too much risk.

In cases like these, Kubernetes mini-clusters come to the rescue. You can run them on your desktop or laptop, tinker with primitives, build a new structure, and then delete them without any hesitation when the experiment is over.

Developers worldwide have met this demand by inventing various solutions that allow you to fire up a lightweight Kubernetes environment quickly and easily. All these solutions come with different designs and capabilities, and the one you choose will depend on your needs and preferences. This article reviews some of the most popular ones, helping you to better understand them and choose the right tool. Fortunately, they are all relatively well-documented (both on the official websites and in the CLIs), which significantly speeds up the learning process and makes them easy to use. At the end of the article, you will find a comparison table detailing the main features of these solutions.

Tools

1. k0s

  • Website: k0sproject.io
  • GitHub repository: k0sproject/k0s
  • GitHub stars: 4,000+
  • Contributors: 30+
  • First commit: June 2020
  • Key developer: Mirantis
  • Supported K8s versions: 1.20 and 1.21

The name of the project speaks for itself: it is hard to imagine a more lightweight system since it consists of a single, self-sufficient (statically built) file. All you need to do is download the current version from the project repository, and you can proceed to configure and use the cluster. The binary is compiled for Linux, so the cluster can only run on that OS (see the end of the article for more information on supported host systems). Note that only the root user can run it.

After the installation is complete (all you need to do is copy the file to /usr/local/bin), start k0s as a service. The host then becomes a cluster node (the master node by default):

k0s install controller ; systemctl start k0scontroller.service

k0s includes the kubectl CLI tool to connect to the Kubernetes API:

k0s kubectl get nodes

You can use k0s kubectl to create other Kubernetes objects: namespaces, Deployments, etc. To add a node to the k0s cluster, download and install the k0s binary on the server intended to serve as the worker node. Next, generate an authentication token to join the node to the cluster (a sketch follows below). The new node can run in a container or on a VM: all you have to do is make sure it can reach the API server over the network.
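
A rough sketch of that flow, based on the k0s documentation (the exact flags and the service name may differ between k0s versions; the token file path is arbitrary):

# On the controller: generate a join token for a worker
k0s token create --role=worker > worker-token

# On the future worker (with the binary and the token file copied over):
k0s install worker --token-file /path/to/worker-token
systemctl start k0sworker.service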

To uninstall the k0s cluster, you first need to stop the service (k0s stop) and then invoke the reset command to remove all k0s-related files from the host.
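
In practice, that amounts to two commands (run as root):

k0s stop     # stop the k0s service
k0s reset    # remove k0s-related files from the host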

The containerd daemon manages and runs containers in Pods. In addition, you can mount hostPath volumes to the Pods. Calico serves as the default CNI while kube-router is also available. Essentially, you can use any CNI you like, since k0s does not restrict the Kubernetes configuration in any way.

For user convenience, k0s provides auto-completion scripts for various shells: Bash, zsh, fish, and PowerShell (using WSL).

k0s is as minimalistic as possible: it is plain vanilla Kubernetes without any modules or plugins. It does not include cloud provider support by default (however, you can enable it at startup). Software is installed in the same way as in a regular Kubernetes cluster, i.e., by declaring the necessary primitives (you can use Helm and other such tools); a sketch follows below.
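
For illustration, a typical flow might look as follows (the chart and release names are hypothetical examples, not something k0s ships with):

# Point kubectl/Helm at the cluster via the admin kubeconfig generated by k0s
k0s kubeconfig admin > ~/.kube/config

# Install software declaratively, e.g., with Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install my-monitoring prometheus-community/kube-prometheus-stack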

2. MicroK8s

  • Website: microk8s.io
  • GitHub repository: ubuntu/microk8s
  • GitHub stars: ~5,700
  • Contributors: 120+
  • First commit: May 2018
  • Key developer: Canonical
  • Supported K8s versions: 1.19—1.21

This mini-cluster by Canonical is similar to the previous one: its cluster nodes require manual setup and can run on any Linux instances connected to the first (master) node over TCP/IP. Similarly, a token is required to add new nodes (an example follows below), while the built-in kubectl tool handles API interaction.

Calico is also used as the default CNI here. MicroK8s is distributed as a snap package (root privileges are required to install it) and supports 42 Linux distributions:

# snap install microk8s --classic

After the installation is complete, you can start the cluster:

# microk8s start
# microk8s kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
thinkpad   Ready    <none>   2m    v1.20.7-34+df7df22a741dbc
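
Joining another machine is token-based, as mentioned earlier. A sketch of the flow (the address and token below are placeholders):

# microk8s add-node

The output contains a ready-made join command along the lines of microk8s join 192.168.1.10:25000/<token>; running it on the new machine (with MicroK8s already installed) joins it to the cluster.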

Note that MicroK8s ships with a set of addons. You can enable and disable them at any time. For example, the following will enable the Kubernetes dashboard:

# microk8s enable dashboard
# microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    ...

MicroK8s also comes with an internal registry for storing container images (available as an addon).
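
As a sketch of using that registry (the image name is made up; localhost:32000 is where the registry addon listens according to the MicroK8s docs):

# microk8s enable registry
$ docker tag my-app:dev localhost:32000/my-app:dev
$ docker push localhost:32000/my-app:dev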

Another exciting feature is the microk8s inspect command. It analyzes the cluster and compiles a complete report (as a tar.gz file) on its components for further study:

$ ls inspection-report/
apparmor
args
juju
k8s
kubeflow
network
snap.microk8s.daemon-apiserver
snap.microk8s.daemon-apiserver-kicker
snap.microk8s.daemon-cluster-agent
snap.microk8s.daemon-containerd
snap.microk8s.daemon-controller-manager
snap.microk8s.daemon-control-plane-kicker
snap.microk8s.daemon-kubelet
snap.microk8s.daemon-proxy
snap.microk8s.daemon-scheduler
sys
$ ls inspection-report/k8s/
cluster-info
cluster-info-dump
get-all
get-pv
get-pvc
version
$ cat inspection-report/k8s/version 
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-34+df7df22a741dbc", GitCommit:"df7df22a741dbc18dc3de3000b2393a1e3c32d36", GitTreeState:"clean", BuildDate:"2021-05-12T21:08:20Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-34+df7df22a741dbc", GitCommit:"df7df22a741dbc18dc3de3000b2393a1e3c32d36", GitTreeState:"clean", BuildDate:"2021-05-12T21:09:51Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

3. kind

  • Website: kind.sigs.k8s.io
  • GitHub repository: kubernetes-sigs/kind
  • GitHub stars: ~8,500
  • Contributors: 200+
  • First commit: September 2018
  • Key developer: Kubernetes SIG
  • Supported K8s versions: 1.21

kind (Kubernetes in Docker) is another lightweight tool for running local K8s clusters. Installation is perfectly straightforward: all you have to do is download the executable.

To create a cluster, you first need permissions to create Docker containers and networks. Creating a cluster is as simple as running kind create cluster*. This will start a node, i.e., a Docker container used for running other containers:

$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                       NAMES
fee30f6d4b73        kindest/node:v1.21.1   "/usr/local/bin/entr…"   2 minutes ago       Up About a minute   127.0.0.1:45331->6443/tcp   kind-control-plane
$ kind get nodes
kind-control-plane
$ kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
kind-control-plane   Ready    control-plane,master   2m    v1.21.1
$ docker exec -it kind-control-plane bash
root@kind-control-plane:/# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
2a0dfe12a5810       296a6d5035e2d       2 minutes ago       Running             coredns                   0                   e13acbf529288
38ef0ad97090a       296a6d5035e2d       2 minutes ago       Running             coredns                   0                   3460cf0419c19
ec11cbc0e9795       e422121c9c5f9       2 minutes ago       Running             local-path-provisioner    0                   a9ffa60dcc12d
fa8057bbf0df6       6de166512aa22       3 minutes ago       Running             kindnet-cni               0                   4f8481acba5fc
e341ce4c5cdfd       ebd41ad8710f9       3 minutes ago       Running             kube-proxy                0                   1b1755819c40a
88c6185beb5c5       0369cf4303ffd       3 minutes ago       Running             etcd                      0                   da01c1b2b0cdc
5cdf1b4ce6deb       d0d10a483067a       3 minutes ago       Running             kube-controller-manager   0                   a0b2651c06136
b704a102409e1       6401e478dcc01       3 minutes ago       Running             kube-apiserver            0                   c2119c740fff2
a5da036de5d10       7813cf876a0d4       3 minutes ago       Running             kube-scheduler            0                   92a22aa99ad29

* This will also create a Docker network. If the installation fails due to the following error:

ERROR: failed to create cluster: failed to ensure docker network: command "docker network create -d=bridge -o com.docker.network.bridge.enable_ip_masquerade=true -o com.docker.network.driver.mtu=1500 --ipv6 --subnet fc00:f853:ccd:e793::/64 kind" failed with error: exit status 1
Command Output: Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

…check whether an OpenVPN process is running on the system and stop it while the cluster is being created. Once creation is complete, you can start OpenVPN again.

Additionally, while the cluster is being created, kubectl is configured to access the API. To create a more complex cluster, you need to specify a configuration file while setting up the cluster (using the --config flag). Here is an example of how to create a cluster consisting of three nodes:

kind create cluster --config=three-node-conf.yaml

…where three-node-conf.yaml features the following contents:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Deleting is just as simple: invoke kind delete cluster to delete the cluster and remove its data from the kubectl configuration. As a side note, auto-completion scripts for Bash, zsh, and fish are also supported.

Since the node is a Docker container, hostPath volumes mounted in Pods use the container’s filesystem. You can, in turn, forward directories from the node container to the filesystem of the host operating system. You can also upload locally built Docker images to the cluster nodes (see the sketch below). However, kind doesn’t come with any plugins or addons.
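
For instance, a locally built image can be loaded into the node container like this (the image name is hypothetical; "kind" is the default cluster name):

$ docker build -t my-app:dev .
$ kind load docker-image my-app:dev --name kind
$ kubectl run my-app --image=my-app:dev --image-pull-policy=Never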

kind ships with a basic kindnetd plugin as the default CNI, but you can use other plugins as well. While support for custom CNIs is described as limited, many popular CNI manifests (e.g., Calico) work just fine.
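
For example, to try Calico, the kind documentation suggests disabling the default CNI in the cluster config. A sketch (the config file name is arbitrary; the Calico manifest URL may change over time):

kind create cluster --config=calico-conf.yaml
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

…where calico-conf.yaml contains:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true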

Further configuration is done with kubectl. For example, you can install Ingress NGINX using the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml

4. k3s (and k3d)

  • Website: k3s.io (and k3d.io)
  • GitHub repository: k3s-io/k3s (rancher/k3d)
  • GitHub stars: ~17,800 (~2,800)
  • Contributors: 1,750+ (50+)
  • First commit: January 2019 (April 2019)
  • Key developer: CNCF (Rancher)
  • Supported K8s versions: 1.17—1.21

K3s is a Kubernetes distribution by Rancher with a name similar to K8s but “half as big” to emphasize its lightness and simplicity (albeit with less functionality). The general idea of it is not much different from k0s and MicroK8s. Upon launching, k3s creates a cluster node with one of the following two roles:

  • a server acting as a master node and running the API server, scheduler, and controller manager (with an SQLite database);
  • an agent acting as an ordinary Kubernetes node and running a kubelet and containerd, which manages the containers.

Most storage and cloud provider drivers were excluded from the build to make the executable smaller. In addition, several standard Kubernetes components are combined into a single process, which reduces memory usage.

In the simplest case, you can run the whole cluster as a single node, e.g., in Docker Desktop (no fully-fledged virtualization system is required).
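
For reference, the k3s docs offer a one-line installer that sets k3s up as a systemd service (as with any piped installer, it’s wise to review the script first):

$ curl -sfL https://get.k3s.io | sh -
$ sudo k3s kubectl get node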

In addition to the distribution itself, there is the k3d utility, which manages k3s nodes running in Docker containers. It runs on Linux and can be installed using a Bash script.

To start a cluster, all you need are permissions to create Docker containers and networks.

The following command can be used to create a cluster**:

$ k3d cluster create mycluster --servers 1 --agents 2
$ kubectl get nodes
NAME                     STATUS   ROLES                  AGE   VERSION
k3d-mycluster-agent-0    Ready    <none>                 30s   v1.20.6+k3s1
k3d-mycluster-agent-1    Ready    <none>                 22s   v1.20.6+k3s1
k3d-mycluster-server-0   Ready    control-plane,master   39s   v1.20.6+k3s1

** See the note above concerning creating a Docker network during installation and the error that’s caused by the running OpenVPN process. In that case, however, the error message will be different:

Failed Cluster Preparation: Failed Network Preparation: Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

Each cluster node runs in its own container along with an nginx container that acts as a load balancer. Flannel is used as the CNI plugin, while Traefik serves as the ingress proxy. You can choose other CNIs as well; e.g., you can find specific instructions for Calico and Canal in the documentation. The auto-completion scripts for Bash, zsh, fish, and PowerShell are also supported.

Additionally, you can manage image registries: create custom registries for the cluster and import images from the host system (see the sketch below). This can come in handy if you build Docker images locally, as they become available in the cluster right after the build.
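
A sketch of that workflow (the names and port are placeholders; note that k3d prefixes registry names with k3d-):

$ k3d registry create myregistry.localhost --port 5111
$ k3d cluster create regcluster --registry-use k3d-myregistry.localhost:5111
$ k3d image import my-app:dev -c regcluster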

5. Minikube

  • Website: minikube.sigs.k8s.io
  • GitHub repository: kubernetes/minikube
  • GitHub stars: ~21,800
  • Contributors: 650+
  • First commit: April 2016
  • Key developer: Kubernetes SIG
  • Supported K8s versions: 1.11—1.22

For Debian- and Red Hat-based Linux distributions, all you need to do to use Minikube is install the appropriate package. You can create a cluster using the following command (no root privileges are required; however, the user must have sufficient privileges to use the virtualization system):

$ minikube start
* minikube v1.20.0 on Ubuntu 18.04
* Automatically selected the docker driver. Other choices: kvm2, ssh
…
* Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
…
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
minikube   Ready    control-plane,master   48s    v1.20.2

Now you can use the kubectl config (it is updated with the access data for the new cluster). The auto-completion scripts for Bash, zsh, and fish are also supported.

For the local OS, Minikube implements an X-in-Y scheme, where:

  • X is the container runtime: docker, cri-o, or containerd;
  • Y is the driver: virtualbox, vmwarefusion, kvm2, vmware, none, docker, podman, or ssh.

You can also choose which CNI plugin to use:

minikube help start
Starts a local Kubernetes cluster

Options:
...    
      --cni='': CNI plug-in to use. Valid options: auto, bridge, calico, cilium, flannel, kindnet, or path to a CNI manifest (default: auto)
      --container-runtime='docker': The container runtime to be used (docker, cri-o, containerd).
...
      --driver='': Driver is one of: virtualbox, vmwarefusion, kvm2, vmware, none, docker, podman, ssh (defaults to auto-detect)
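
For example, combining these options might look like this (assuming the chosen driver and runtime are installed on the host):

minikube start --driver=kvm2 --container-runtime=containerd --cni=calico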

Use the command below to add nodes to the cluster:

$ minikube node add
* Adding node m02 to cluster minikube

To view the current state of the cluster, use the minikube status command:

minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

minikube-m02
type: Worker
host: Running
kubelet: Running

The minikube mount command mounts a host directory into the VM (note that the 9P protocol is used for mounting). Thus, you can edit host files directly while mounting them into Pods via hostPath volumes (no docker cp is needed, although you can use that command if you like).

Note that 9P suffers from performance and reliability issues when used with a large number of files. The filesystem-sharing options of the virtualization systems (VirtualBox, KVM, VMware) can help with that problem.
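
A minimal sketch of the mount workflow (the paths are arbitrary; the command stays in the foreground while the mount is active):

$ minikube mount $HOME/src:/data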

Minikube provides a set of addons that you can easily activate in the cluster:

$ minikube addons enable dashboard
…
* The 'dashboard' addon is enabled
$ minikube addons list
…
| dashboard                   | minikube | enabled    |
…
$ kubectl -n kubernetes-dashboard get pod
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-f6647bd8c-rrxq6   1/1     Running   0          29s
kubernetes-dashboard-968bcb79-tk5qt         1/1     Running   0          29s

Similarly, you can enable registry, ingress, Istio, and many other components.

Minikube can also work with several clusters with different profiles simultaneously:

$ minikube start -p minik2
* [minik2] minikube v1.20.0 on Ubuntu 18.04
* Automatically selected the docker driver. Other choices: kvm2, ssh
* Starting control plane node minik2 in cluster minik2
…
$ minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|
| Profile  | VM Driver | Runtime |      IP      | Port | Version | Status  | Nodes |
|----------|-----------|---------|--------------|------|---------|---------|-------|
| minik2   | docker    | docker  | 192.168.58.2 | 8443 | v1.20.2 | Running |     1 |
| minikube | docker    | docker  | 192.168.49.2 | 8443 | v1.20.2 | Running |     2 |
|----------|-----------|---------|--------------|------|---------|---------|-------|

6. Alternative solutions

Some projects have not been included in this review because they are less popular or for other reasons. For example:

  • The Red Hat CRC tool (CodeReady Containers; 750+ GitHub stars) replaces Minishift for running a minimal OpenShift 4.x cluster on a laptop or desktop.
  • Weaveworks’ Firekube (~300 GitHub stars), a Kubernetes cluster running in Firecracker virtual machines, is also worth mentioning. However, the project does not seem to be actively developed.

Supported operating systems

All the above distributions run on Linux. However, you can use them even if your host has a different OS (with the help of virtualization tools):

  • Multipass and VirtualBox are suitable in most cases;
  • In other cases, you may need to use special virtualization tools, such as WSL on Windows.

In the case of kind, k3d, and Minikube, a single Linux VM is enough for a basic cluster, while with k0s, MicroK8s, and k3s, you will need as many VMs as there are cluster nodes.

Comparison table

Here’s a summary of basic capabilities:

|------------------------------------------|------------|------------|--------------------------|--------------------------|--------------------------------------------------------------------|
|                                          | k0s        | MicroK8s   | kind                     | k3s + k3d                | Minikube                                                           |
|------------------------------------------|------------|------------|--------------------------|--------------------------|--------------------------------------------------------------------|
| Managing node creation/deletion          | −          | −          | + (via config file)      | +                        | +                                                                  |
| Node management system                   | manual     | manual     | Docker                   | Docker                   | virtualbox, vmwarefusion, kvm2, vmware, none, docker, podman, ssh  |
| Container runtime                        | containerd | containerd | containerd, CRI-O        | CRI-O                    | Docker, CRI-O, containerd                                          |
| Default CNI                              | Calico     | Calico     | kindnet                  | Flannel                  | bridge                                                             |
| Mounting the host OS filesystem          | hostPath   | hostPath   | hostPath + docker mount  | hostPath + docker mount  | hostPath + … (depends on the virtualization system)                |
| Addons                                   | −          | +          | −                        | −                        | +                                                                  |
| Unprivileged users can create clusters   | −          | −          | +                        | +                        | +                                                                  |
| Vanilla Kubernetes                       | +          | +          | +                        | −                        | +                                                                  |
|------------------------------------------|------------|------------|--------------------------|--------------------------|--------------------------------------------------------------------|

Conclusion

The comparison was carried out within the context of a particular task (a locally running sandbox), but some of the distributions above are also designed for niche scenarios. For example, MicroK8s by Canonical and k3s by Rancher target IoT and edge computing as well. I should therefore reiterate that the final choice will largely depend on the task at hand, resource considerations, and network infrastructure requirements. I hope the information above helps you make the right choice.

Comments (4)


  1. MEZGANI Ali

    Interesting what you do Flant. Thank you

  2. Steve Francis

    You should checkout Talos Linux. https://talos.dev. Quite popular on SBCs and the K8s @ Home community. (Also in datacenters.)

  3. Sascha Sternheim

    Nice comparison. But would be great to see the supported operating systems in the table.

  4. Pavel Anni

    Great review, very helpful!
    One suggestion: it would be nice to add the “multi-node support” row to the summary table. For some people it might be important.
    Do they all currently support multi-node? At some point in time minikube didn’t.
    Thanks!