ConfigMaps in Kubernetes: how they work and what you should remember

Please note that this is not a complete guide but rather a collection of reminders and tips for those who already use ConfigMaps in Kubernetes or are preparing their applications to use them.

A little background: from rsync to… Kubernetes

In the era of “classic system administration,” config files were usually stored next to the application itself (or in the repository, if you prefer). Delivering the code and the related configs was simple. You could even declare your rsync an early implementation of what we now call continuous delivery (CD).

As the infrastructure grew, different environments (dev/stage/production) required different config files. Applications were taught to choose the right config, which was passed to them as runtime arguments or environment variables. CD became even more complicated with the arrival of such useful tools as Chef/Puppet/Ansible. Servers got their roles, and environment descriptions were no longer stored haphazardly, as the IaC (Infrastructure as Code) approach emerged.

What happened next? If software creators managed to discern the essential advantages of Kubernetes and even accepted the need to adjust their application — and its design! — to work with the orchestrator (the Twelve Factors can be painful sometimes…), then the migration followed. Once the essential part was ready, the long-awaited application was up and running in K8s.

At this point, they could still keep configs as files stored in the repository next to the application or pass them to the container as environment variables. However, in addition to these methods, the so-called ConfigMaps became available. This K8s primitive is intended for defining the configuration of apps deployed to Kubernetes.

Briefly, your config is a dictionary of settings represented by key-value pairs. They are stored in YAML, and a K8s resource called ConfigMap is responsible for handling them.
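
For example, a minimal ConfigMap (without any templating yet) might look like this; the keys and values here are made up for illustration, while the resource name matches the chart example below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app
data:
  LOG_LEVEL: "info"      # simple key-value settings…
  config.json: |         # …or whole config files stored as multi-line values
    {
      "welcome": "Hello",
      "name": "Alice"
    }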

What is important to note is that these configurations are stored apart from the application code. Thus, ConfigMaps are not just a nice extra for your configurations (e.g., the Go templates via Helm discussed below): they are the standard way to manage configuration in Kubernetes, in line with the modern approach to running and operating applications.

Here you can find a good example of an introductory guide to ConfigMaps. In this article, I will focus on some peculiarities of using them.

Basic ConfigMaps

So, how do configs look in Kubernetes? Let’s jump right in, taking advantage of Go templates. Here is a typical example of a ConfigMap for an application deployed using a Helm chart:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app
data:
  config.json: |
    {
      "welcome": {{ pluck .Values.global.env .Values.welcome | quote }},
      "name": {{ pluck .Values.global.env .Values.name | quote }}
    }

Here, the values of .Values.welcome and .Values.name will be taken from the values.yaml file.

The pluck function helps to select the value matching the given key (here, the environment name) from the map:

$ cat .helm/values.yaml 
welcome:
  production: "Hello"
  test: "Hey"
name:
  production: "Bob"
  test: "Mike"

Actually, it allows you to select not only specific values but also entire snippets of your config. For example, you might have the following ConfigMap:

data:
  config.json: |
    {{ pluck .Values.global.env .Values.data | first | toJson | indent 4 }}

… and your values.yaml might have the following contents:

data:
  production:
    welcome: "Hello"
    name: "Bob"

The global.env mentioned here is the name of the environment. By changing this value at deploy time, you will render ConfigMaps with different contents. The first function is required because pluck returns a list whose head element contains the desired value.
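
In practice, the environment is usually passed via --set during the deploy. The release and chart names below are hypothetical; the demo chart later in this article relies on the same global.env convention:

helm upgrade --install my-app ./my-chart --set 'global.env=production'
helm upgrade --install my-app-test ./my-chart --set 'global.env=test'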

What if there are multiple configs?

A single ConfigMap may contain multiple config files:

data:
  config.json: |
    {
      "welcome": {{ pluck .Values.global.env .Values.welcome | first | quote }},
      "name": {{ pluck .Values.global.env .Values.name | first | quote }}
    }
  database.yml: |
    host: 127.0.0.1
    db: app
    user: app
    password: app

You can even mount each config file separately:

volumeMounts:
- name: app-conf
  mountPath: /app/configfiles/config.json
  subPath: config.json
- name: app-conf
  mountPath: /app/configfiles/database.yml
  subPath: database.yml

… or get all configs from a directory:

volumeMounts:
- name: app-conf
  mountPath: /app/configfiles
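
In both cases, the volumeMounts above assume that the pod spec also defines a volume backed by our ConfigMap, along these lines (app is the ConfigMap name from the example above):

volumes:
- name: app-conf
  configMap:
    name: app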

If you modify the Deployment resource during the deploy process, Kubernetes will create a new ReplicaSet, gradually scaling the existing one down to zero and the new one up to the specified number of replicas. (This is how the RollingUpdate deployment strategy behaves.)

These actions result in the pods being rescheduled with the new spec. For example, image: my-registry.example.com:v1 will be replaced by image: my-registry.example.com:v2. It does not matter what exactly has changed in the Deployment: the very fact of a change causes the ReplicaSet (and, therefore, the pods) to be recreated. In this case, the new version of the config file will be mounted automatically in the new version of the application — and that’s great!

What happens when a ConfigMap changes

There are four possible scenarios in response to ConfigMap changes. Let us take a look at them:

  1. The action: the ConfigMap that is mounted as a subPath volume has been modified.
    The effect: the config file in the container won’t be updated.
  2. The action: the ConfigMap has been modified and deployed to the cluster, and then we delete the pod manually.
    The effect: the new pod will mount the updated version of the resource by itself.
  3. The action: the ConfigMap has been modified, and its hash sum is used in one of the annotations of the Deployment’s pod template.
    The effect: even though we updated the ConfigMap only, the Deployment has also changed. Therefore, the old pod will be automatically replaced with a new one containing the updated version of the resource. Note that this works only if you use Helm (more details follow below).
  4. The action: the ConfigMap mounted as a directory has been modified.
    The effect: the config file in the pod will be updated automatically, without restarting/rescheduling the pod.

Let us examine the above scenarios more closely.

Scenario 1

Did we modify the ConfigMap only? Then the application will not be restarted. When the ConfigMap is mounted as a subPath volume, the config file will not change until the pod is restarted manually.

It’s as simple as that: Kubernetes mounts our ConfigMap of a specific version into the pod. Since it is mounted via subPath, nothing happens to this config afterwards.

Scenario 2

Well, we cannot update the file without rescheduling the pod… Since we have 6 replicas in the Deployment, we can delete all the pods manually one by one. Then all the rescheduled pods will get the updated version of the ConfigMap.

Or — starting with K8s v1.15 — we have a better option: the kubectl rollout restart deployment <your-deployment> command.
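
For example (the namespace and Deployment name here are placeholders):

kubectl -n production rollout restart deployment/<your-deployment>
kubectl -n production rollout status deployment/<your-deployment>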

Scenario 3

Are you tired of performing such operations manually? If you’re fine with Helm, the solution to this problem is given in Helm Chart Development Tips and Tricks:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]

Here, the hash of the config is included in the pod’s template (spec.template) as an annotation.

Annotations are arbitrary key-value fields where you can store your own values. If you add them to the spec.template of the pod to be created, these fields end up in the ReplicaSet and the pod itself. Kubernetes notices the change in the pod’s template (because the sha256 sum differs) and performs a RollingUpdate, even though the annotation is the only thing that has changed.

As a result, we will preserve the version of the application and the Deployment description while triggering automatic pod rescheduling. This approach is similar to the manual one (via kubectl delete) but is still better since all the actions are performed automatically through the RollingUpdate mechanism.

Scenario 4

What if your application can track changes in the config and reload itself automatically? Here comes a major detail of the ConfigMap implementation…

In Kubernetes, if the ConfigMap is mounted with a subPath, it won’t update until the pod restarts (see the first three scenarios above). However, if you mount it as a directory (without subPath), your container will get a continuously up-to-date config file (no more need to restart the pod).

Also, you should keep in mind some other nuances:

  • The updates are projected into the container with some delay. This is because Kubernetes mounts not a plain file but an object, and the kubelet syncs it periodically.
  • The file inside the container is actually a symlink when the ConfigMap is mounted as a directory. For comparison, here is what a subPath mount looks like (regular files):
$ kubectl -n production exec go-conf-example-6b4cb86569-22vqv -- ls -lha /app/configfiles 
total 20K    
drwxr-xr-x    1 root     root        4.0K Mar  3 19:34 .
drwxr-xr-x    1 app      app         4.0K Mar  3 19:34 ..
-rw-r--r--    1 root     root          42 Mar  3 19:34 config.json
-rw-r--r--    1 root     root          47 Mar  3 19:34 database.yml

But what if we mount the ConfigMap as a directory and not as a subPath?

$ kubectl -n production exec go-conf-example-67c768c6fc-ccpwl -- ls -lha /app/configfiles 
total 12K    
drwxrwxrwx    3 root     root        4.0K Mar  3 19:40 .
drwxr-xr-x    1 app      app         4.0K Mar  3 19:34 ..
drwxr-xr-x    2 root     root        4.0K Mar  3 19:40 ..2020_03_03_16_40_36.675612011
lrwxrwxrwx    1 root     root          31 Mar  3 19:40 ..data -> ..2020_03_03_16_40_36.675612011
lrwxrwxrwx    1 root     root          18 Mar  3 19:40 config.json -> ..data/config.json
lrwxrwxrwx    1 root     root          19 Mar  3 19:40 database.yml -> ..data/database.yml

If you update the config (by deploying it or running kubectl edit) and wait a couple of minutes (the kubelet propagates ConfigMap updates with a delay determined by its sync period and cache), you’ll see the following:

$ kubectl -n production exec go-conf-example-67c768c6fc-ccpwl -- ls -lha --color /app/configfiles 
total 12K    
drwxrwxrwx    3 root     root        4.0K Mar  3 19:44 .
drwxr-xr-x    1 app      app         4.0K Mar  3 19:34 ..
drwxr-xr-x    2 root     root        4.0K Mar  3 19:44 ..2020_03_03_16_44_38.763148336
lrwxrwxrwx    1 root     root          31 Mar  3 19:44 ..data -> ..2020_03_03_16_44_38.763148336
lrwxrwxrwx    1 root     root          18 Mar  3 19:40 config.json -> ..data/config.json
lrwxrwxrwx    1 root     root          19 Mar  3 19:40 database.yml -> ..data/database.yml

Note the changed timestamp in the directory created by Kubernetes.

Tracking changes

Finally, let’s use a simple Go application to monitor changes in the config.

Here is our configuration:

$ cat configfiles/config.json 
{
  "welcome": "Hello",
  "name": "Alice"
}
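
The watcher application itself is not listed in this article; below is a minimal sketch of what it might look like, assuming the fsnotify library (mentioned later) and the config structure above. The actual code may differ:

package main

import (
	"encoding/json"
	"log"
	"os"
	"time"

	"github.com/fsnotify/fsnotify"
)

// Config mirrors configfiles/config.json shown above.
type Config struct {
	Welcome string `json:"welcome"`
	Name    string `json:"name"`
}

func readConfig(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg := &Config{}
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	const path = "configfiles/config.json"

	cfg, err := readConfig(path)
	if err != nil {
		log.Fatal(err)
	}

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()
	// Watch the whole directory: when a ConfigMap is mounted as a directory,
	// Kubernetes swaps the ..data symlink rather than rewriting the file in place.
	if err := watcher.Add("configfiles"); err != nil {
		log.Fatal(err)
	}

	log.Printf("config: %v", cfg)
	ticker := time.NewTicker(30 * time.Second)
	for {
		select {
		case <-ticker.C:
			// Periodically print the currently loaded config.
			log.Printf("config: %v", cfg)
		case event := <-watcher.Events:
			// Log the filesystem event and reload the config.
			log.Printf("event: %q: %s", event.Name, event.Op)
			if updated, err := readConfig(path); err == nil {
				cfg = updated
			}
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}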

Now, if you run it, you will see the following log output:

2020/03/03 22:18:22 config: &{Hello Alice}
2020/03/03 22:18:52 config: &{Hello Alice}

Next, we will deploy this application to Kubernetes, using a ConfigMap as the configuration source instead of the file baked into the image. Installing the demo Helm chart will do the job:

helm install -n configmaps-demo --namespace configmaps-demo ./configmaps-demo --set 'name.production=Alice' --set 'global.env=production'

Okay, we are ready to change the ConfigMap only:

-  production: "Alice"
+  production: "Bob"

… and update the Helm chart in the cluster:

helm upgrade configmaps-demo ./configmaps-demo --set 'name.production=Bob' --set 'global.env=production'

What will happen to the four variants of the application deployed according to the scenarios above?

  • The v1 and v2 applications (ConfigMap mounted via subPath) won’t restart since they do not see any changes in the Deployment; they still say “Hello” to Alice.
  • The v3 application (with the checksum annotation) will restart, consume the updated config, and say “Hello” to Bob.
  • The v4 application won’t restart either. Since its ConfigMap is mounted as a directory, the change was noticed right away, and the config was picked up on the fly without restarting the pod. You can confirm this by reviewing the event messages from fsnotify:
2020/03/03 22:19:15 event: "configfiles/config.json": CHMOD
2020/03/03 22:19:15 config: &{Hello Bob}
2020/03/03 22:19:22 config: &{Hello Bob}

If you’d like to see more examples, check how the tracking of ConfigMap changes is implemented in the Prometheus operator (a real-life application, after all).

Note that all of the above is also true for Kubernetes Secrets (kind: Secret), and for a good reason: they are very similar to ConfigMaps…
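
For instance, a Secret counterpart of the config above might look like this (the resource name and key are made up for illustration; stringData is a convenience field that Kubernetes stores base64-encoded under data), and it is mounted into pods the same way, via a secret volume instead of a configMap one:

apiVersion: v1
kind: Secret
metadata:
  name: app-secret          # hypothetical name used for illustration
type: Opaque
stringData:                 # plain-text input; stored base64-encoded in .data
  database-password: "app"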

Bonus! Third-party solutions

In case you’re interested in tracking changes in configuration files, here are a few tools worthy of further investigation:

  • jimmidyson/configmap-reload sends an HTTP request when a configuration file changes. In the future, configmap-reload is expected to support sending SIGHUP signals, but the lack of commits since October 2019 leaves these plans in doubt;
  • stakater/Reloader watches for changes in ConfigMaps/Secrets and performs a “rolling upgrade” (as the author calls it) on the resources associated with them; see the annotation sketch below.
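
At the time of writing, Reloader’s README documents an opt-in annotation on the Deployment along these lines (check the project’s documentation for the exact, current syntax):

kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"   # reload pods when referenced ConfigMaps/Secrets change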

It makes sense to run the above tools in a sidecar container next to existing applications. However, if you have a deep understanding of Kubernetes and ConfigMaps and you favor the right way of modifying the configuration — i.e., as part of the deployment process rather than editing resources directly with kubectl edit — then these tools may seem unnecessary, since they duplicate basic Kubernetes functionality.

Conclusion

The emergence of ConfigMaps in Kubernetes marked the beginning of a whole new era in configuring applications. Fortunately, these improvements do not replace but complement the existing solutions. That is why administrators (or rather developers) who find the novel features unnecessary can still use the good ol’ config files.

On the other hand, the existing ConfigMaps users and those who are interested in this approach might view this article as a brief overview of ConfigMaps peculiarities and a good starting point for further study. Please, share your ConfigMap-related tips & tricks in the comments below!
