Just-for-fun experiment: Deploying Kubernetes on two old laptops with Gentoo Linux

This article tells the story of a fascinating experiment involving the deployment and configuration of Kubernetes on two old laptops, one of which had an old processor. In conducting it, we followed the Kubernetes The Hard Way tutorial, tweaking it a bit along the way. On top of that, we went even more nuts and opted for Gentoo Linux (yes, you read that right!) as the host operating system. Let’s dive into this exciting, hardcore experience!

Introduction and background

Many of you have probably heard about Kelsey Hightower’s valuable and somewhat legendary repository for firing up a Kubernetes cluster without using any ready-made scripts or tools. This guide covers the creation of a Kubernetes cluster based on the Google Cloud Platform (GCP).

As even the author admits, the cost of creating such a cluster is greater than the 300 USD in free credits that new customers receive from Google. But what if you have one or more old laptops just sitting on the shelf collecting dust? Could you use them to host a Kubernetes cluster that will cost you nothing? Well, that’s exactly what we’re going to find out!

Note that this article is a complement to Kelsey Hightower’s original tutorial, which applies not only to GCP but to any other hardware or virtual machine (albeit with some reservations).

So, we had two Gentoo Linux-based laptops to experiment with:

  • An Intel Core i5-9300H-based Dell G3 3590 (AMD64 architecture) — we will refer to it as dell;
  • An Intel Core 2 Duo-based HP Compaq 6720s (it’s actually an i686 architecture, but we wanted to make things harder, so we treated it as i386) — we will refer to it as hpcom.

These two will become the K8s cluster nodes. We used the official Gentoo handbooks for amd64 and x86 to install the system, building the kernel with extra tweaks for IPSet and Docker. We also enabled the NETFILTER_XT_MATCH_COMMENT and NETFILTER_XT_MATCH_STATISTIC kernel options required by kube-proxy. We ran into a couple of difficulties during the installation, and the Gentoo community forum proved to be a great resource for addressing them. Here are links to the solutions to the two most common problems: a black screen right after booting GRUB and a non-PAE kernel error.
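
If you want to double-check that the kernel has everything kube-proxy needs before going any further, a quick grep against the running kernel's config is enough (a small sketch on our part; this assumes your kernel exposes /proc/config.gz, otherwise check /usr/src/linux/.config):

zgrep -E 'CONFIG_NETFILTER_XT_MATCH_(COMMENT|STATISTIC)|CONFIG_IP_SET' /proc/config.gz
# each option should show up as =y or =m, for example:
# CONFIG_IP_SET=m
# CONFIG_NETFILTER_XT_MATCH_COMMENT=m
# CONFIG_NETFILTER_XT_MATCH_STATISTIC=m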

We also had a basic router provided by the ISP, a strong desire to tinker with Kubernetes, and our main laptop for all of our tweaking activities.

How to use the tutorial

As mentioned above, minor adjustments were made to the original Kubernetes tutorial. For that reason, we will only cover those sections in which you will have to deviate from the original instructions. In doing so, we will retain the original section titles and their order.

The original tutorial contains 14 chapters. This article will provide a link to each chapter for easy navigation.

Let’s get started!

Installing Kubernetes

Prerequisites

This section is about installing the Google Cloud SDK and selecting the default region and compute zone, so you can just skip it.

Installing the Client Tools

No changes here: follow all the steps in the original tutorial.

Provisioning Compute Resources

You can safely skip this tutorial section as well, but you need to find out and save the local IP addresses of the prospective cluster nodes (in my case, they were 192.168.1.71 for the hpcom and 192.168.1.253 for the dell).

For convenience’s sake, save these addresses to the node_list.txt file:

# remember to substitute your own node names and addresses
cat <<EOF | tee node_list.txt
dell 192.168.1.253
hpcom 192.168.1.71
EOF

You can install NGINX on the main machine to imitate a load balancer. Next, add the line include passthrough.conf; to nginx.conf. You can find that file in the default NGINX config directory (/etc/nginx/ on Linux; /opt/homebrew/etc/nginx/ on macOS, provided that NGINX was installed with brew).
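
For example, on macOS with a Homebrew-installed NGINX, appending the line could look like this (a sketch; adjust the path to /etc/nginx/nginx.conf on Linux):

echo "include passthrough.conf;" | tee -a /opt/homebrew/etc/nginx/nginx.conf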

Next, create a passthrough.conf file in the same directory. You can do so using the following command (make sure to change the path to the NGINX directory and your node_list.txt file):

cat <<EOF | tee /opt/homebrew/etc/nginx/passthrough.conf
stream {
    upstream kube {
$(cat node_list.txt | cut -d" " -f2 | xargs -I{} echo "$(printf '%8s')server {}:6443;")
    }

    server {
        listen       443;
        proxy_pass kube;
        proxy_next_upstream on;
    }
}
EOF

This will allow us to create a minimalistic high-availability-like cluster, which is sufficient for this article.
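
Before moving on, it is worth validating the config and reloading NGINX so the stream block takes effect (the exact commands depend on how NGINX was installed and whether it is already running; on macOS you might use brew services start nginx instead):

nginx -t          # validate the configuration
nginx -s reload   # reload a running instance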

Provisioning a CA and Generating TLS Certificates

In this section, follow all the steps up to The Kubelet Client Certificates. At this point, the IP addresses of the nodes that we saved earlier will come in handy:

for instance in `cat node_list.txt | cut -d" " -f1`; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

INTERNAL_IP=`cat node_list.txt | grep $instance | cut -d" " -f2`

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${INTERNAL_IP},127.0.0.1 \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done

Follow the original instructions up to The Kubernetes API Server Certificate step and replace the lines as follows:

{

KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=$(cut -d" " -f2 node_list.txt| tr '\n' ',')127.0.0.1,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

}

Follow all the instructions from The Service Account Key Pair onwards and copy the certificates using scp as described in the Distribute the Client and Server Certificates section. To make the process easier, create the ~/.ssh/config file on the host machine using the command below:

cat node_list.txt | xargs -n2 bash -c 'echo -e "Host $0\n\tHostname $1\n\tUser <Your nodes user>"' | tee ~/.ssh/config
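
With the example addresses above, the generated ~/.ssh/config would look roughly like this:

Host dell
    Hostname 192.168.1.253
    User <Your nodes user>
Host hpcom
    Hostname 192.168.1.71
    User <Your nodes user>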

Copy the files to nodes:

{
for instance in `cut -d" " -f1 node_list.txt`; do
  scp node_list.txt ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
}

In my case, both laptops serve as master and worker nodes at the same time, so I copied both the control plane and worker node certificates to each of them.

Generating Kubernetes Configuration Files for Authentication

In this section, skip the Kubernetes Public IP Address step, and in The kubelet Kubernetes Configuration File subsection, run the command below instead of the one provided in the tutorial:

for instance in $(cat node_list.txt | cut -d" " -f1); do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done

Do the same thing for The kube-proxy Kubernetes Configuration File:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Then, follow the instructions up to the step where the files are copied to the nodes. Use the command below instead:

{
for instance in $(cat node_list.txt | cut -d" " -f1); do
  scp ${instance}.kubeconfig kube-proxy.kubeconfig admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
}

Generating the Data Encryption Config and Key

In this section, follow the instructions, except for the instructions for copying files to nodes. Use the loop below to copy them:

{
for instance in $(cat node_list.txt | cut -d" " -f1); do
  scp encryption-config.yaml ${instance}:~/
done
}

Bootstrapping the etcd Cluster

Before proceeding to the original instructions, do the following (for each node):

1. Connect to the node:

ssh hpcom

2. Install the necessary tools:

sudo emerge --sync && sudo emerge --ask dev-vcs/git sys-devel/make net-misc/wget net-misc/curl dev-lang/go app-shells/bash-completion

3. Pull and build etcd:

git clone https://github.com/etcd-io/etcd.git
cd etcd
git checkout v3.4.15
go mod vendor
./build
sudo mv bin/etcd* /usr/local/bin/
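
A quick sanity check that the binaries landed in /usr/local/bin (on the i386 node the unsupported-arch override may be required even for this; drop the env var on amd64):

ETCD_UNSUPPORTED_ARCH=386 etcd --version
etcdctl version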

4. Configure the etcd we built in the previous step. Install the certificates:

{
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo chmod 700 /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}

5. Set the IP address and the hostname:

INTERNAL_IP=$(grep $(hostname -s) node_list.txt | cut -d' ' -f2)
ETCD_NAME=$(hostname -s)

6. Create a systemd unit:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
Environment="ETCD_UNSUPPORTED_ARCH=386" # for the i386 architecture only
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster $(cat node_list.txt | sed 's# #=https://#g' | tr '\n' ',' | sed 's#,#:2380,#g' | sed 's#,$##g') \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Now, go back to the original tutorial and follow all the steps, starting with Start the etcd Server.
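
Once etcd is up on both nodes, listing the members is a quick way to confirm that the cluster formed (this mirrors the verification step from the original tutorial):

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem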

Bootstrapping the Kubernetes Control Plane

First, perform the following actions on each node:

1. Connect to the node:

ssh hpcom

2. Create the config directory:

sudo mkdir -p /etc/kubernetes/config

3. Pull the Kubernetes source code and compile the control plane:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes && git checkout v1.21.0
make kube-scheduler kube-apiserver kube-controller-manager kubectl
cd _output/bin
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

4. Move the certificates:

{
sudo mkdir -p /var/lib/kubernetes/
cd && sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}

5. Configure NGINX to imitate HA mode (before doing so, create a file for Gentoo USE flags for additional modules):

cat <<EOF | sudo tee /etc/portage/package.use/nginx
www-servers/nginx NGINX_MODULES_HTTP: access gzip gzip_static gunzip proxy push_stream stub_status upstream_check upstream_hash upstream_ip_hash upstream_keepalive upstream_least_conn upstream_zone
www-servers/nginx NGINX_MODULES_STREAM: access geo limit_conn map return split_clients upstream_hash upstream_least_conn upstream_zone geoip realip ssl_preread geoip2 javascript
EOF

sudo emerge --ask www-servers/nginx

6. Add the server check to the http section of the nginx.conf file:

server {
    listen 127.0.0.1:80;
    server_name localhost;

    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;

    location /nginx_status {
        stub_status on;

        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}

7. Append the include line to the file:

echo "include passthrough.conf;" | sudo tee -a /etc/nginx/nginx.conf

8. Create a passthrough.conf file in that same directory (just as you did on your main laptop):

cat <<EOF | sudo tee /etc/nginx/passthrough.conf
stream {
    upstream kube {
$(cat node_list.txt | cut -d" " -f2 | xargs -I{} echo "$(printf '%8s')server {}:6443;")
    }

    server {
        listen       443;
        proxy_pass kube;
        proxy_next_upstream on;
    }
}
EOF
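
Remember to validate the configuration and enable NGINX on the node as well (assuming the standard nginx.service unit shipped by the Gentoo package):

sudo nginx -t
sudo systemctl enable --now nginx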

9. The next step is to configure the API server. First, set the node IP addresses:

INTERNAL_IP=$(grep $(hostname -s) node_list.txt | cut -d' ' -f2)
KUBERNETES_PUBLIC_ADDRESS=127.0.0.1

10. Create a systemd unit:

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=$(cat node_list.txt | wc -l) \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=$(cut -d' ' -f2 node_list.txt | sed 's#^#https://#g' | sed 's#$#:2379#g' | xargs | tr ' ' ',') \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Next, follow all the steps from the original tutorial up to The Kubernetes Frontend Load Balancer. You don’t need to install and configure a load balancer, but you can deploy MetalLB after configuring the cluster if you want to. You can skip the Verification section since NGINX is already deployed.

Use this command to check whether the API server is working properly:

curl --cacert /var/lib/kubernetes/ca.pem  https://127.0.0.1:443/healthz
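
A healthy API server replies with a plain ok. If it does not, inspecting the control plane units created earlier is a reasonable first step (a hedged suggestion, not part of the original tutorial):

sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler --no-pager
sudo journalctl -u kube-apiserver -n 50 --no-pager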

Bootstrapping the Kubernetes Worker Nodes

1. Connect to each node and install the required packages:

ssh hpcom
sudo emerge --ask net-misc/socat net-firewall/conntrack-tools net-firewall/ipset sys-fs/btrfs-progs

2. Create the necessary directories; download and build the binaries:

sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

# crictl for the i386 architecture
wget -q --show-progress --https-only --timestamping https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-386.tar.gz
tar -xvf crictl-v1.21.0-linux-386.tar.gz && sudo mv crictl /usr/local/bin/
# crictl for the amd64 architecture
wget -q --show-progress --https-only --timestamping https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz && sudo mv crictl /usr/local/bin/

# runc
cd && git clone git@github.com:opencontainers/runc.git && cd runc && git checkout v1.0.0-rc93
make
sudo make install

# CNI plugins
cd && git clone git@github.com:containernetworking/plugins.git && cd plugins && git checkout v0.9.1
./build_linux.sh
sudo mv bin/* /opt/cni/bin/

# containerd requirements for i386
cd && wget -c https://github.com/google/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_32.zip
sudo unzip protoc-3.11.4-linux-x86_32.zip -d /usr/local
# for amd64
cd && wget -c https://github.com/google/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip
sudo unzip protoc-3.11.4-linux-x86_64.zip -d /usr/local

sudo emerge --ask sys-fs/btrfs-progs

# containerd
cd && git clone git@github.com:containerd/containerd.git && cd containerd
make
sudo make install

# K8s
cd ~/kubernetes && git checkout v1.21.0
make kubelet kube-proxy
cd _output/bin
chmod +x kubelet kube-proxy
sudo mv kubelet kube-proxy /usr/local/bin/

3. Configure the CNI plugin:

POD_CIDR=10.200.$(grep -n $(hostname -s) node_list.txt | cut -d':' -f1).0/24
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.4.0",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.4.0",
    "name": "lo",
    "type": "loopback"
}
EOF

4. Configure containerd (we use a different config version here, not the one from the original repository):

sudo mkdir -p /etc/containerd/

cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "docker.io/alexeymakhonin/pause:i386" # k8s.gcr.io/pause:3.7 for amd64

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/sbin/runc"
      runtime_root = ""
EOF

The K8s pause container (aka containerd sandbox_image in the config) does not support the i386 architecture. You can build it yourself or use the ready-made image from my Docker Hub.

If you choose to do it yourself, compile a binary on the i386 node as follows:

cd ~/kubernetes/build/pause && mkdir bin
gcc -Os -Wall -Werror -static -DVERSION=v3.6-bbc2dbb9801 -o bin/pause-linux-i386 linux/pause.c

Next, copy the resulting binary along with the entire directory to the main laptop with Docker Buildx to build the image:

scp -r hpcom:~/kubernetes/build/pause ./ && cd pause
docker buildx build --pull --output=type=docker --platform linux/i386 -t docker.io/<DockerHub username>/pause:i386 --build-arg BASE=scratch --build-arg ARCH=i386 .
docker push docker.io/<DockerHub username>/pause:i386

Finally, in the containerd configuration file, replace sandbox_image with the image you have built.
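
For instance, a hypothetical one-liner for the i386 node (substitute your own Docker Hub username):

sudo sed -i 's#docker.io/alexeymakhonin/pause:i386#docker.io/<DockerHub username>/pause:i386#' /etc/containerd/config.toml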

Now create a containerd unit for systemd:

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

Follow the tutorial to configure kubelet and kube-proxy. Keep in mind, though, that the tutorial’s mv command will fail for ca.pem, since that file has already been moved to /var/lib/kubernetes; see the sketch below.
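
A minimal sketch of the remaining file moves, assuming the node’s hostname matches its name in node_list.txt (ca.pem is skipped because it already lives in /var/lib/kubernetes):

HOSTNAME=$(hostname -s)
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig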

Verify that the node is active:

kubectl get node --kubeconfig ~/admin.kubeconfig

Configuring kubectl for Remote Access

We are only interested in The Admin Kubernetes Configuration File subsection of this chapter. Follow it:

{
  KUBERNETES_PUBLIC_ADDRESS=127.0.0.1

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
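
To make sure the remote context works, the same quick checks as in the original tutorial apply:

kubectl version
kubectl get nodes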

Provisioning Pod Network Routes

You can safely skip this section.

Deploying the DNS Cluster Add-on

You will have to split the instructions in this section into two parts. Run all of them unchanged on the amd64 laptop. As for the i386 machine, you will need to take a different approach. The thing is that there is no official coredns i386 image, so you will have to build it yourself (or get the one I have created). Run the following commands on the Docker Buildx machine to build the image yourself:

git clone git@github.com:coredns/coredns.git && cd coredns && git checkout v1.8.3
make CGO_ENABLED=0 GOOS=linux GOARCH=386
docker buildx build --pull --output=type=docker --platform linux/i386 -t docker.io/<DockerHub username>/coredns:i386 .
docker push docker.io/<DockerHub username>/coredns:i386

Next, replace the image in the config with the one you have built and deploy the CoreDNS add-on:

curl -L https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml --silent -o - | sed 's#image: coredns/coredns:1.8.3#image: docker.io/<DockerHub username>/coredns:i386#g' | kubectl apply -f -

Now, patch the resulting Deployment, setting the kubernetes.io/arch nodeSelector value to 386 or amd64, depending on which node you want to assign coredns to:

cat <<EOF | yq -ojson | tee patch.json
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: "386"
EOF

kubectl patch -n kube-system deployments.apps coredns --patch-file patch.json

Run the commands from the original tutorial to see if everything works.

At this point, you may encounter Pods stuck in the ContainerCreating state with the following error:

cgroups: cgroup mountpoint does not exist: unknown.

It’s easy to fix:

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Smoke Test

Follow all the tutorial steps; only two commands differ. The data encryption check will look like this:

ssh hpcom
sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C

The NODE_PORT service check command will look like this:

curl -I  http://$(grep hpcom node_list.txt | cut -d' ' -f2):${NODE_PORT}

Cleaning up

At this point, you can simply wipe everything you have set up from the laptops.

Conclusion

In this article, we rolled up our sleeves and gave a second life to computer equipment that for years had been sitting in the closet just collecting dust. We also learned how to get Kubernetes up and running on old laptops using slightly tweaked instructions from the Kubernetes The Hard Way tutorial.

The most interesting and valuable part was building the images for the i386 architecture. That architecture has long been considered obsolete, but you may still come across it now and then. We hope our experience will be helpful to those who have long wanted to delve deeper into Kubernetes and its configuration.
