Externalizing Configurations in Kubernetes Using ConfigMap and Secret

Configurations

Configurations are attributes an application needs at runtime. They are values used across the application and may drive dynamic behaviour based on customer, geographical region, language and locale. Common examples are the database URL, username and password. Such values are not hardcoded along with the application code because they can change depending on these factors.

The Twelve-Factor App methodology recommends separating configuration from code. The question then arises: if configurations are separated from code, how does the application access them at runtime? The answer is the platform, meaning the system on which the application runs. The configurations are added to the platform as environment variables.
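
As a quick illustration of this idea outside Kubernetes (the variable names and values here are made up for the example), configuration is set on the platform and the application simply reads the environment at runtime:

# Configuration is set on the platform, not in the code
export DB_URL=jdbc:postgresql://db.example.com:5432/appdb
export DB_USER=appuser

# The application reads it from the environment at runtime
echo "Connecting to ${DB_URL} as ${DB_USER}"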

To execute the examples given in this article, you need a Linux system connected to a Kubernetes cluster via the kubectl tool. This is the only prerequisite.

Configurations in Kubernetes

Application workloads run on Kubernetes as Pods. A developer transforming an application to run on Kubernetes should use Kubernetes ConfigMaps and Secrets to store configurations, which can then be injected into Pods and consumed by the applications running inside them.

ConfigMap

ConfigMap is a Kubernetes API object which stores data as a set of key-value pairs. It stores non-confidential data to be used by application workloads and can hold up to 1 MiB of data.

ConfigMaps can be used inside a Pod in the following three ways.

  1. As a container command argument
  2. As an environment variable in a container
  3. As a volume mounted inside a container

The examples below show these three use cases.

Usage of ConfigMaps

Before we use a ConfigMap in a Pod, we first have to create it. ConfigMaps can be created using the command line (kubectl commands) or using manifest files.

Create ConfigMap

A ConfigMap can be created in a variety of ways. We can create it using a manifest file with a ConfigMap spec, or using the kubectl create configmap command with the --from-literal or --from-file options.

kubectl create configmap weekdays \
  --from-literal=FIRST_DAY=SUNDAY \
  --from-literal=SECOND_DAY=MONDAY \
  --from-literal=THIRD_DAY=TUESDAY \
  --from-literal=FOURTH_DAY=WEDNESDAY \
  --from-literal=FIFTH_DAY=THURSDAY \
  --from-literal=SIXTH_DAY=FRIDAY \
  --from-literal=SEVENTH_DAY=SATURDAY

In the example above, a ConfigMap called weekdays is created using the kubectl create configmap command. The same ConfigMap can also be created using a manifest file.

Create a manifest file called weekdays.yaml with the below content and use the command “kubectl create -f weekdays.yaml” to create the ConfigMap called weekdays.

apiVersion: v1
data:
  FIFTH_DAY: THURSDAY
  FIRST_DAY: SUNDAY
  FOURTH_DAY: WEDNESDAY
  SECOND_DAY: MONDAY
  SEVENTH_DAY: SATURDAY
  SIXTH_DAY: FRIDAY
  THIRD_DAY: TUESDAY
kind: ConfigMap
metadata:
  name: weekdays

The ConfigMap creation can be verified using “kubectl get cm”, “kubectl get cm weekdays -o yaml” and “kubectl describe cm weekdays”.

ConfigMaps can also be created from existing properties files. For example, let’s say there is a file called days.properties with the below content.

FIFTH_DAY: THURSDAY   
FIRST_DAY: SUNDAY   
FOURTH_DAY: WEDNESDAY   
SECOND_DAY: MONDAY   
SEVENTH_DAY: SATURDAY   
SIXTH_DAY: FRIDAY   
THIRD_DAY: TUESDAY

A ConfigMap can be created from it using the command “kubectl create configmap weekdays --from-file=days.properties”. Note that --from-file stores the whole file under a single key named after the file (days.properties), with the file content as its value. If you want each entry to become its own key, as in the literal example above, write the file as KEY=VALUE lines and use the --from-env-file option instead, as shown below. The ConfigMap can be verified using the commands “kubectl get configmap weekdays -o yaml” or “kubectl describe configmap weekdays”.
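
A small sketch of the env-file variant (the file name days.env is just an example):

# days.env
FIRST_DAY=SUNDAY
SECOND_DAY=MONDAY
THIRD_DAY=TUESDAY

kubectl create configmap weekdays --from-env-file=days.env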

ConfigMap as container command argument and environment variable

Assuming that a ConfigMap called weekdays has been created as in the above examples, we will consume it by injecting it into a Pod. Create a file called pod.yaml with the below content and then use the command “kubectl apply -f pod.yaml” to create a pod called days-pod.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: days-pod
  name: days-pod
spec:
  containers:
  - image: alpine
    name: days-pod
    args:
    - sh
    - -c 
    - while true;do echo "First Day of the week is ${FIRST_DAY}";sleep 5;done
    envFrom:
    - configMapRef:
        name: weekdays

You can verify the Pod and its environment variables using the commands “kubectl get pod days-pod -o yaml” and “kubectl exec -it days-pod -- env”. The last command prints all the environment variables of the Pod. You can use the command “kubectl logs days-pod -f” to tail the logs from the Pod, which will keep printing “First Day of the week is SUNDAY” every 5 seconds. If you look closely at the Pod spec in the pod.yaml file, the envFrom attribute reads the ConfigMap weekdays and injects it into the Pod, with every key of the ConfigMap becoming an environment variable of the container. In the echo command we refer to one of those environment variables, FIRST_DAY.

Figure: ConfigMap as Environment Variable in Pod
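
If only specific keys of the ConfigMap are needed, individual environment variables can be populated using valueFrom with configMapKeyRef instead of envFrom. The relevant part of the container spec would look like this:

    env:
    - name: FIRST_DAY
      valueFrom:
        configMapKeyRef:
          name: weekdays
          key: FIRST_DAY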

ConfigMap as Volume Mount

We can mount a ConfigMap as a volume inside a Pod. For every key of the ConfigMap there will be a file inside the Pod, and the value of the key will be the content of that file. One advantage of this approach is that if the ConfigMap changes, the corresponding files are also updated inside the Pods that mount it.

Assuming that the ConfigMap called weekdays exists, create a manifest file called pod-volume.yaml with the below content and use “kubectl apply -f pod-volume.yaml” to create a pod called days-pod-volume.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: days-pod-volume
  name: days-pod-volume
spec:
  volumes:
  - name: config-volume
    configMap:
      name: weekdays
  containers:
  - image: alpine
    name: days-pod-volume
    args:
    - sh
    - -c 
    - while true;do echo "First Day of the week is ";cat /etc/data/FIRST_DAY;sleep 5;done
    volumeMounts:
    - name: config-volume
      mountPath: /etc/data

You can verify the creation using “kubectl get pod days-pod-volume -o yaml” and “kubectl exec -it days-pod-volume -- ls -l /etc/data”. The last command will produce a listing similar to the one below.

Figure: ConfigMap as Volume Contents

This pod has a volume mounted at /etc/data, and all 7 keys of the ConfigMap have become files inside that directory. The cat command in the container prints the content of the FIRST_DAY file.

Figure: ConfigMap as Volume
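
By default every key of the ConfigMap becomes a file in the mount path. If only selected keys should appear, they can be listed explicitly using the items field of the volume definition. A small sketch of the volumes section:

  volumes:
  - name: config-volume
    configMap:
      name: weekdays
      items:
      - key: FIRST_DAY
        path: FIRST_DAY
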
Update the ConfigMap and Observe Pod Change

Now we will update the ConfigMap and verify that the data changes in the Pod. Use the command “kubectl edit cm weekdays”; this opens the ConfigMap in the vi editor. Change the value of FIRST_DAY from SUNDAY to FEBRUARY. Once done, wait a little while (the kubelet syncs mounted ConfigMaps periodically, so the update can take up to a minute to appear) and tail the logs of the Pod days-pod-volume using “kubectl logs days-pod-volume -f”. You will observe the log printing FEBRUARY instead of SUNDAY.
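
As an alternative to the interactive edit, the same change can be made non-interactively with kubectl patch:

kubectl patch configmap weekdays --type merge -p '{"data":{"FIRST_DAY":"FEBRUARY"}}'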

Secrets

Secrets are Kubernetes objects which store sensitive information in the form of key-value pairs. The creation, usage and consumption of Secrets are the same as for ConfigMaps. The main difference is that the values in Secrets are base64 encoded. Note that base64 is an encoding, not encryption; it only reduces the chance of accidentally revealing sensitive information to someone glancing at the output. Secrets can store up to 1 MiB of data. Secrets come in different types for different kinds of information, such as generic (Opaque) key-value data, docker config json, basic auth credentials, SSH keys, TLS certificates and service account tokens, and the correct type of Secret should be created for the information being stored.
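
For example, typed Secrets can be created directly with kubectl; the certificate paths, registry and credentials below are placeholders:

# A TLS secret from a certificate/key pair
kubectl create secret tls my-tls --cert=path/to/tls.crt --key=path/to/tls.key

# A docker-registry secret used for pulling images from a private registry
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword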

Usage of Secret

Secrets can be used inside a Pod in the following ways.

  1. Environment Variable
  2. Volume Mount as File
  3. Image Pull Secret (a sketch of this case follows the list)
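
The first two cases are demonstrated in detail in the following sections. For the third case, a Pod pulling an image from a private registry references a docker-registry type Secret through imagePullSecrets. A minimal sketch, assuming a docker-registry Secret named regcred (like the one sketched earlier) and a hypothetical private image:

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.0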

Before we use a Secret, we have to create it, either using the kubectl create secret command or using a Secret manifest file. Secrets can also be created from files, just like ConfigMaps.

Create Secret

Assume we need to create a secret called db-password with key DB_PASSWORD and value administrator.

Using Command line

kubectl create secret generic db-password --from-literal=DB_PASSWORD=administrator 

Execute the above command to create the secret db-password. Validate the secret creation using the commands “kubectl get secrets”, “kubectl get secret db-password -o yaml” and “kubectl describe secret db-password”. Note that kubectl describe does not show the values themselves, only their sizes in bytes. The output of “kubectl get secret db-password -o yaml” will look like the one below, where you can see that the value of DB_PASSWORD is base64 encoded.

Figure: Secret as YAML
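
A trimmed version of that output should look roughly like this; metadata fields such as creationTimestamp, resourceVersion and uid are omitted here:

apiVersion: v1
data:
  DB_PASSWORD: YWRtaW5pc3RyYXRvcg==
kind: Secret
metadata:
  name: db-password
type: Opaque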

You can also create a similar secret from a password.properties file with the below content (if the db-password secret from the previous step still exists, delete it first with “kubectl delete secret db-password”).

DB_PASSWORD: administrator

Use the command “kubectl create secret generic db-password --from-file=password.properties”. Validate it by using “kubectl get secret db-password -o yaml”. The output will look like the one below.

Figure: Secret from File Output

Here the key is password.properties and the value is REJfUEFTU1dPUkQ6IGFkbWluaXN0cmF0b3IK. Use the command “echo REJfUEFTU1dPUkQ6IGFkbWluaXN0cmF0b3IK | base64 -d” to check the decoded value.

Using Manifest file

We will now create the same secret using a manifest file (again, delete any existing db-password secret first). First generate the base64-encoded value for the string “administrator” using the command “echo -n administrator | base64”; the -n flag stops echo from appending a newline, which would otherwise be encoded too. The output will be “YWRtaW5pc3RyYXRvcg==”, which will be used in the manifest file. Create a manifest file called secret.yaml with the below content.

apiVersion: v1
data:
  DB_PASSWORD: YWRtaW5pc3RyYXRvcg==
kind: Secret
metadata:
  name: db-password

Use “kubectl create -f secret.yaml” to create the secret.  Validate the secret creation using commands “kubectl get secrets”, “kubectl get secret db-password -o yaml” and “kubectl describe secret db-password”.
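
If you would rather not base64-encode values by hand, the manifest can use the stringData field instead of data; plain-text values placed there are encoded by the API server when the Secret is stored. A minimal sketch:

apiVersion: v1
kind: Secret
metadata:
  name: db-password
stringData:
  DB_PASSWORD: administrator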

Once the secret is ready, we will consume it from inside a Pod.

Secret as container argument and environment variable

Now we will create a Pod which consumes the secret as an environment variable. Create a Pod manifest file called pod-secret.yaml with the below content.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod-secret
  name: pod-secret
spec:
  containers:
  - image: alpine
    name: pod-secret
    args:
    - sh
    - -c 
    - while true;do echo "The Password is ${DB_PASSWORD}";sleep 5;done
    envFrom:
    - secretRef:
        name: db-password

Use “kubectl apply -f pod-secret.yaml” to create the pod called pod-secret. You can validate the Pod creation using “kubectl get pod pod-secret -o yaml” and “kubectl describe pod pod-secret”. The logs can be tailed using “kubectl logs pod-secret -f”, where you can see the Pod printing the line “The Password is administrator” every 5 seconds; the actual value of DB_PASSWORD is printed. The environment variable can also be checked using the command “kubectl exec -it pod-secret -- env”.

Figure: Secret as Environment Variable
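
As with ConfigMaps, a single key can be injected instead of the whole Secret by using valueFrom with secretKeyRef. The relevant part of the container spec would look like this:

    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-password
          key: DB_PASSWORD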

Secret as Volume Mount

When we mount a Secret inside a Pod, a file is created at the mountPath for each key of the Secret, and the value of the key becomes the content of that file. One advantage of this approach is that when the value of a key changes, the content of the corresponding file inside the Pod also changes.

Create a Pod manifest file called pod-secret-volume.yaml with the below content.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod-secret-volume
  name: pod-secret-volume
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: db-password
  containers:
  - image: alpine
    name: pod-secret-volume
    args:
    - sh
    - -c 
    - while true;do echo "The Password is "| cat /etc/data/DB_PASSWORD";sleep 5;done
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/data

Use “kubectl apply -f pod-secret-volume.yaml” to create the pod called pod-secret-volume. You can validate the Pod creation using “kubectl get pod pod-secret-volume -o yaml” and “kubectl describe pod pod-secret-volume”. The logs can be tailed using “kubectl logs pod-secret-volume -f”, where you can see the Pod printing the DB_PASSWORD every 5 seconds. The environment variables can also be checked using the command “kubectl exec -it pod-secret-volume -- env”. There will be no environment variable called DB_PASSWORD, because the secret is mounted into the volume as a file called DB_PASSWORD. The file can be verified using the command “kubectl exec -it pod-secret-volume -- cat /etc/data/DB_PASSWORD”.

Figure: Secret as Volume Mount

Update Secret and Observe Change in Pod

We will change the value of the secret and verify that the file in the Pod also changes. Execute the command “echo -n administrators | base64” and copy the output, which we will use in the secret. Now update the secret using the command “kubectl edit secret db-password” and replace the value of DB_PASSWORD with the copied base64 value. After a short wait (mounted Secrets, like ConfigMaps, are refreshed periodically by the kubelet), execute the commands “kubectl exec -it pod-secret-volume -- cat /etc/data/DB_PASSWORD” and “kubectl logs pod-secret-volume -f” to observe the changed value.
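
The same update can also be scripted with kubectl patch, letting the shell compute the base64 value:

kubectl patch secret db-password --type merge \
  -p "{\"data\":{\"DB_PASSWORD\":\"$(echo -n administrators | base64)\"}}"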

Conclusion

Both Secrets and ConfigMaps are very powerful features of Kubernetes. Once you know how to create and consume them, you can transform your applications so that they use the Kubernetes platform for deployment without any hitch.

Multi-Container Pod Design Patterns in Kubernetes

Pods

In Kubernetes, Pods are the smallest deployable units. If an application must be deployed, it has to be deployed as a container in a Pod. Though applications run in containers, every container must be part of a Pod. The Pod specification has an attribute called containers where container specifications are declared. The attribute is plural, which means we can declare more than one container in a Pod specification.

Multi-Container Design Consideration

Yet Kubernetes administrators usually choose single-container Pods over multi-container Pods; one container per Pod is an unwritten practice across the industry. Let’s see what advantages a multi-container Pod has to offer.

A Pod has an IP address, and all containers in the Pod share that IP. If a volume is created for the Pod, all the containers that are part of the Pod can mount it, so containers can share storage. They can also communicate with each other over localhost.

Why, then, are single-container Pods still preferred? Let’s take the use case of a web application having UI, backend, database and messaging tiers, and deploy all four tiers as four containers in a single Pod. The resource, configuration and operational requirements are different for each of the four containers. The frontend and backend are customer facing. If there is a requirement to scale these two tiers, that can’t be done separately, because we scale Pods, not containers. So if we scale up the Pod, multiple instances of the database and messaging tiers will also be created, even though that is not required.

Therefore, it is better to deploy the tiers separately, as managing and scaling them as individual Pods is easier.

 Figure 1: Use Case for Multi-Tier Application Deployment in Same Pod

In what cases, then, could we use multiple containers in the same Pod?

Case 1 – If the containers have the same lifecycle.

Case 2 – If two containers are very tightly coupled.

Case 3 – If we need to make our application deployable to Kubernetes without any code change. This applies when the application code lacks something needed to take advantage of Kubernetes features; in that case we can add a secondary container alongside the application container to remove that barrier.

Multi-Container Design Patterns

Adapter Pattern

Our homes are supplied power in AC mode, whereas the laptops we use consume power in DC mode. So we use AC adapters, which draw power from the AC outlet, convert it to DC and supply it to the laptop. Without changing the power supply at home, we can charge our laptops with the help of an adapter.

How does this relate to Kubernetes? Suppose we have installed a centralized monitoring tool in the Kubernetes cluster which needs all application logs to be printed in the “APP-NAME - HOSTNAME - DATE - SEVERITY” format. The cluster could have many applications written in a variety of languages, printing logs in a variety of formats. It would not be wise for every application to change its logging format, because if the tool changes in future, the format may have to change again. To solve this, we can add a second container which reads the logs of the main application container and processes them into the format desired by the monitoring tool. Problem solved.
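
A minimal sketch of the idea, using a simplified placeholder for the format conversion: the application writes its log to a shared emptyDir volume and the adapter container re-emits every line in the expected format.

apiVersion: v1
kind: Pod
metadata:
  name: adapter-pod
spec:
  volumes:
  - name: app-logs
    emptyDir: {}
  containers:
  - name: app
    image: alpine
    args: ["sh", "-c", "while true;do echo \"something happened\" >> /var/log/app/app.log;sleep 5;done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-adapter
    image: alpine
    # reads the raw log and re-emits it as APP-NAME - HOSTNAME - DATE - SEVERITY - message
    args: ["sh", "-c", "touch /var/log/app/app.log; tail -f /var/log/app/app.log | while read line;do echo \"myapp - $(hostname) - $(date) - INFO - $line\";done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app

Here the application container stays untouched; only the adapter knows about the format the monitoring tool expects.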

Ambassador Pattern

An ambassador is an envoy who represents their country to the outside world. How can that help us in a Kubernetes Pod?

Take an example: you have a legacy application where the database URL is hardcoded as localhost. Legacy applications are difficult to change, as a change ripples through a lot of places, yet to make the application deployable to a Kubernetes cluster you would need to change its code. With the ambassador pattern you can avoid that code change. An ambassador container is co-located with the application container in the same Pod and works as a proxy: it connects to the correct database depending on the Dev, QA or Stage environment, while the main application keeps connecting to localhost. The ambassador container finds the correct URL and exposes it to the application container on localhost, so the main application container doesn’t need to worry about the correct URL; that responsibility is delegated to the ambassador container.
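
A minimal sketch of the Pod layout; the legacy application image is a hypothetical placeholder, and the publicly available alpine/socat image is assumed here as a simple TCP proxy. The application keeps connecting to localhost:5432 while the ambassador forwards that traffic to the environment’s real database host.

apiVersion: v1
kind: Pod
metadata:
  name: ambassador-pod
spec:
  containers:
  - name: legacy-app
    # hypothetical image that connects to its database at localhost:5432
    image: registry.example.com/legacy-app:1.0
  - name: db-ambassador
    # assumption: alpine/socat runs socat as its entrypoint
    image: alpine/socat
    args: ["TCP-LISTEN:5432,fork,reuseaddr", "TCP-CONNECT:dev-db.example.com:5432"]

Switching from Dev to QA then only means changing the ambassador’s target address; the legacy application is not touched.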

Sidecar Pattern

A sidecar is attached to a motorbike. It adds a seat or two to the motorbike without any changes to the bike itself; it is not an integral part of the bike, but it enhances the bike’s capability. A sidecar container behaves in the same way: it enhances the capability of the main container deployed with it, without making any changes to that container. For example, suppose you have an application which generates log files in a certain folder, and the monitoring setup in the Kubernetes cluster needs the logs of all applications to be stored in some external storage. This simply can’t be done at the application level for every application; instead we can employ a sidecar container which ships the log files to the required storage, without making any code change in the main application.
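
Assuming the main container already writes its log files to a shared emptyDir volume mounted at /var/log/app (as in the adapter sketch above), the sidecar is just one more container entry in the same Pod. The shipping step below is a placeholder, since the real command depends on the external storage you use:

  - name: log-shipper
    image: alpine
    # placeholder: a real sidecar would sync /var/log/app to external storage here
    args: ["sh", "-c", "while true;do ls -l /var/log/app;sleep 30;done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app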

Figure 2: Multi Container Pods


Conclusion

All these patterns are very useful for handling cross-cutting concerns without changing the main application. They provide support for the main container and are deployed as secondary containers. These secondary workloads should be written so that they are reusable across different Pods. To summarize: the Adapter pattern is used when we have to convert or process the output of the main container into some standard format, the Ambassador pattern is used to provide a network proxy, and the Sidecar pattern is used to provide helper or utility services to the main container.

Try using these patterns to get the best out of your Kubernetes cluster.