Introduction to Deployment Strategies

The last part of this Kubernetes 101 series focused on ReplicaSets and Deployments, and why it is better to use Deployments rather than bare Pods to manage your Kubernetes applications. In this part of the series, we will walk you through the different types of Deployment strategies, explain how they work, and show which strategy best suits a particular use case. Prior knowledge of Kubernetes objects such as Pods, Deployments, and ReplicaSets is advisable for following the hands-on practice.

Deployment Strategies

A Kubernetes Deployment strategy defines how an application is created, upgraded, or downgraded to a different version. If you are familiar with installing and updating software on PCs and laptops, you know that the application being updated remains inaccessible to the user during the update. The same applies to Kubernetes: an application will be inaccessible to users during an upgrade unless a proper Deployment strategy is chosen when the application is created. Since new versions are continuously developed and deployed, at some point you will be upgrading from an old version to a new one, and the application must remain accessible to users throughout; hence the need for a user-friendly upgrade strategy. In this guide, we will look at two strategies and outline which one best suits our applications.

Types of Deployment Strategies

  1. Recreate: This strategy first destroys all existing Pods before creating new ones. During the period when the old Pods are down and the new ones are being brought up, the application is inaccessible to users. In our previous exercises, we used the Recreate strategy as specified in the YAML file. As seen under the Events section of the Deployment description in part 6A, the three instances of the Pod were scaled down simultaneously before the new ones were created. Don’t forget that we updated the Deployment twice.

    During the first update, the Recreate strategy scaled the ReplicaSet my-deployment-97cfc859f down to 0, then brought up my-deployment-79f645dc59 and scaled it up to 3 replicas. The same happened during the second update: my-deployment-79f645dc59 was scaled down to 0, a new ReplicaSet my-deployment-5997f87f5f was brought up, and it was scaled to 3 replicas. Because the old Pods are destroyed before the new ones come up, users experience downtime during every update, and likewise during any rollback to the previous version. However, don’t fret; there is another Deployment strategy that overcomes this limitation.
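    For reference, selecting the Recreate strategy only requires setting the strategy type in the Deployment spec:

    strategy:
      type: Recreate    # all existing Pods are terminated before new ones are created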

  2. RollingUpdate: In contrast to the Recreate strategy, the RollingUpdate strategy, sometimes called a zero-downtime rollout, updates a Kubernetes application sequentially by replacing old Pod instances with new ones a few at a time. During an upgrade it takes down old Pod instances and brings up new ones in turn, cycling through the Pods according to two parameters: maxSurge and maxUnavailable.

    RollingUpdate gives users unhindered access to their applications during an update and allows rolling back to the previous version in case of an error during the upgrade, bugs in the new version, or instability in the updated version. RollingUpdate is the default strategy type.

    maxSurge: Optional, with a default value of 25%. It specifies the maximum number of Pods that can be created above the desired number during an update.
    maxUnavailable: Also optional, with a default value of 25%. It specifies the maximum number of Pods that can be unavailable during an update.

    Both values can be expressed as either an absolute number (e.g., 2) or a percentage (e.g., 25%). When a percentage is used, it is resolved against the desired replica count: maxSurge rounds up, while maxUnavailable rounds down.
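The percentage-to-Pod-count conversion can be sketched in a few lines of Python (an illustration of the documented rounding rules, not the actual Kubernetes source code):

```python
import math

def resolve(value, replicas, round_up):
    """Resolve a maxSurge/maxUnavailable value to an absolute Pod count."""
    if isinstance(value, str) and value.endswith("%"):
        pods = replicas * int(value[:-1]) / 100
        # maxSurge rounds up; maxUnavailable rounds down
        return math.ceil(pods) if round_up else math.floor(pods)
    return value  # already an absolute number

# With 3 desired replicas and the 25% defaults:
surge = resolve("25%", 3, round_up=True)         # -> 1
unavailable = resolve("25%", 3, round_up=False)  # -> 0
```

So with the defaults and 3 replicas, a rollout may briefly run 4 Pods (3 + 1 surge) while keeping all 3 desired replicas available.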

How To Create a Deployment with the RollingUpdate Strategy

Step 1) Modify your configuration file: copy and paste the configuration below, with correct indentation, as the value of the strategy property in your Deployment YAML file. Save and exit the editor:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0

The complete YAML file will look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-deployment-container
        image: nginx

Step 2) Check the status of the Pods. We can see all of our Pods are running:

$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
my-deployment-97cfc859f-8m4zk   1/1     Running   0          59s
my-deployment-97cfc859f-b9j8w   1/1     Running   0          59s
my-deployment-97cfc859f-hbmxn   1/1     Running   0          59s

Step 3) Check the status of the Deployment:

$ kubectl get deployments my-deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   3/3     3            3           51s

Step 4) Check the ReplicaSets created by the rollout:

$ kubectl get replicasets
NAME                      DESIRED   CURRENT   READY   AGE
my-deployment-97cfc859f   3         3         3       9m21s

Step 5) Next, you will update the Deployment’s container image from nginx to nginx:1.18.0:

$ kubectl set image deployment/my-deployment my-deployment-container=nginx:1.18.0 --record
deployment.apps/my-deployment image updated

Check the Pod status. Once again, all the Pods are running.

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
my-deployment-79f645dc59-2ft5f   1/1     Running   0          19s
my-deployment-79f645dc59-gcfgt   1/1     Running   0          34s
my-deployment-79f645dc59-wtd8k   1/1     Running   0          23s

Check the ReplicaSets again. We can see that there are now two: the new one and the old one, which has been scaled down to 0.

$ kubectl get replicasets
NAME                       DESIRED   CURRENT   READY   AGE
my-deployment-79f645dc59   3         3         3       7m6s
my-deployment-97cfc859f    0         0         0       8m23s

Check the Deployment description:

$ kubectl describe deployments my-deployment
Name:                   my-deployment
Namespace:              default
CreationTimestamp:      Thu, 30 Jul 2020 11:39:18 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
                        kubernetes.io/change-cause: kubectl set image deployment/my-deployment my-deployment-container=nginx:1.18.0 --record=true
Selector:               app=my-app
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:  app=my-app
  Containers:
   my-deployment-container:
    Image:        nginx:1.18.0
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   my-deployment-79f645dc59 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m23s  deployment-controller  Scaled up replica set my-deployment-97cfc859f to 3
  Normal  ScalingReplicaSet  66s    deployment-controller  Scaled up replica set my-deployment-79f645dc59 to 1
  Normal  ScalingReplicaSet  55s    deployment-controller  Scaled down replica set my-deployment-97cfc859f to 2
  Normal  ScalingReplicaSet  55s    deployment-controller  Scaled up replica set my-deployment-79f645dc59 to 2
  Normal  ScalingReplicaSet  51s    deployment-controller  Scaled down replica set my-deployment-97cfc859f to 1
  Normal  ScalingReplicaSet  51s    deployment-controller  Scaled up replica set my-deployment-79f645dc59 to 3
  Normal  ScalingReplicaSet  46s    deployment-controller  Scaled down replica set my-deployment-97cfc859f to 0

Looking at the Events section of the Deployment description, we can see that the update was performed sequentially. When the Deployment was created, a ReplicaSet my-deployment-97cfc859f was also created and scaled up to 3 replicas. After the update, a new ReplicaSet my-deployment-79f645dc59 was created and scaled up to 1 replica, while the old ReplicaSet my-deployment-97cfc859f was scaled down to 2 replicas. The process continued, scaling the new ReplicaSet up and the old one down one replica at a time, until the new ReplicaSet reached the desired 3 replicas and the old one was scaled down to 0.
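This scale-up/scale-down cadence can be reproduced with a small Python sketch (a simplified model that assumes every new Pod becomes ready immediately, not the actual controller logic):

```python
def rolling_update(replicas, max_surge, max_unavailable):
    """Model the replica counts a rolling update steps through."""
    old, new = replicas, 0
    steps = []
    while old > 0 or new < replicas:
        # Scale new Pods up, never exceeding replicas + maxSurge in total.
        up = min(replicas - new, replicas + max_surge - (old + new))
        if up > 0:
            new += up
            steps.append(f"new up to {new}")
        # Scale old Pods down, keeping at least replicas - maxUnavailable available.
        down = min(old, (old + new) - (replicas - max_unavailable))
        if down > 0:
            old -= down
            steps.append(f"old down to {old}")
    return steps

print(rolling_update(3, max_surge=1, max_unavailable=0))
# ['new up to 1', 'old down to 2', 'new up to 2',
#  'old down to 1', 'new up to 3', 'old down to 0']
```

With maxSurge: 1 and maxUnavailable: 0, this yields the same sequence of ScalingReplicaSet events recorded in the Deployment description.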

Rolling Back to a Previous Version

There are times when the new version might not be stable or may be riddled with bugs. In this case, the application can be rolled back to the previous working or stable version by following these steps.

Step 1) Check the rollout history:

$ kubectl rollout history deployment my-deployment
deployment.apps/my-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment/my-deployment my-deployment-container=nginx:1.18.0 --record=true

We updated the Deployment’s container image from nginx to nginx:1.18.0. You can now revert from the current version (nginx:1.18.0) to the previous one (nginx).

Step 2) Roll back to the previous version (to target an older revision instead, kubectl rollout undo also accepts a --to-revision flag):

$ kubectl rollout undo deployment my-deployment
deployment.apps/my-deployment rolled back

Step 3) To confirm the current version, check the Deployment description:

$ kubectl describe deployments my-deployment
Name:                   my-deployment
Namespace:              default
CreationTimestamp:      Thu, 30 Jul 2020 21:27:31 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=my-app
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:  app=my-app
  Containers:
   my-deployment-container:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   my-deployment-97cfc859f (3/3 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  10m                deployment-controller  Scaled up replica set my-deployment-97cfc859f to 3
  Normal  ScalingReplicaSet  9m51s              deployment-controller  Scaled up replica set my-deployment-79f645dc59 to 1
  Normal  ScalingReplicaSet  9m44s              deployment-controller  Scaled down replica set my-deployment-97cfc859f to 2
  Normal  ScalingReplicaSet  9m44s              deployment-controller  Scaled up replica set my-deployment-79f645dc59 to 2
  Normal  ScalingReplicaSet  9m41s              deployment-controller  Scaled down replica set my-deployment-97cfc859f to 1
  Normal  ScalingReplicaSet  9m41s              deployment-controller  Scaled up replica set my-deployment-79f645dc59 to 3
  Normal  ScalingReplicaSet  9m38s              deployment-controller  Scaled down replica set my-deployment-97cfc859f to 0
  Normal  ScalingReplicaSet  53s                deployment-controller  Scaled up replica set my-deployment-97cfc859f to 1
  Normal  ScalingReplicaSet  50s                deployment-controller  Scaled down replica set my-deployment-79f645dc59 to 2
  Normal  ScalingReplicaSet  44s (x4 over 50s)  deployment-controller  (combined from similar events): Scaled down replica set my-deployment-79f645dc59 to 0

The container image in the description is back to nginx, the image that was used to create the Deployment before the update.

Deployment Pause and Resume

You can pause a Deployment, make multiple changes and fixes, and then resume it. Pausing stops rollouts from being triggered while the changes are being made; all changes made during the pause are queued and applied together once the Deployment is resumed. We will use our previous Deployment YAML manifest for this exercise, together with a running Kubernetes cluster and the kubectl command-line tool configured to talk to the cluster. You can find out more about creating a Kubernetes cluster with our open source cluster lifecycle management tool KubeOne here. Follow the steps below to pause and resume a Deployment.

Step 1) Create a Deployment:

$ vim my-deployment.yaml

Copy the Deployment manifest YAML below, paste it into the file, save, and exit the editor.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-deployment-container
        image: nginx

Then run:

$ kubectl create -f my-deployment.yaml   # create the Deployment
deployment.apps/my-deployment created

Step 2) Check the Deployment:

$ kubectl get deployments my-deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   3/3     3            3           4m13s

Step 3) Check the description:

$ kubectl describe deployments my-deployment 
Name:                   my-deployment
Namespace:              default
CreationTimestamp:      Fri, 31 Jul 2020 10:15:47 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=my-app
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:  app=my-app
  Containers:
   my-deployment-container:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   my-deployment-97cfc859f (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  8m6s  deployment-controller  Scaled up replica set my-deployment-97cfc859f to 3

As seen above, there is a running Deployment with 3 Pods.

Step 4) Pause the Deployment:

$ kubectl rollout pause deployment.v1.apps/my-deployment
deployment.apps/my-deployment paused

Step 5) You can check whether the Deployment is paused under the Conditions section of the Deployment description:

$ kubectl describe deployments my-deployment
Conditions:
  Type           Status   Reason
  ----           ------   ------
  Available      True     MinimumReplicasAvailable
  Progressing    Unknown  DeploymentPaused
OldReplicaSets:  <none>
NewReplicaSet:   my-deployment-97cfc859f (3/3 replicas created)

Step 6) Update the Deployment by changing the container image to nginx:1.18.0 and scaling the replicas up to 5. You can make as many changes as you want before resuming the Deployment. To change the container image version to nginx:1.18.0, run:

$ kubectl set image deployment/my-deployment my-deployment-container=nginx:1.18.0 --record
deployment.apps/my-deployment image updated

To scale the replicas up from 3 to 5, run:

$ kubectl scale deployment.v1.apps/my-deployment --replicas=5
deployment.apps/my-deployment scaled

Step 7) Check the rollout history:

$ kubectl rollout history deployment.v1.apps/my-deployment
deployment.apps/my-deployment
REVISION  CHANGE-CAUSE
1         <none>
Notice that the image update has not been recorded as a new revision because the Deployment is paused.

Step 8) Check the status of the Deployment:

$ kubectl get deployments my-deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   5/5     0            5           31m

The Deployment has been scaled up to 5 replicas; however, the UP-TO-DATE column shows 0, which means that none of the 5 replicas run the new image yet: the image change is still queued because the Deployment is paused.

Step 9) To resume the Deployment:

$ kubectl rollout resume deployment.v1.apps/my-deployment
deployment.apps/my-deployment resumed

Check the status of the Pods:

$ kubectl get pods
NAME                             READY   STATUS              RESTARTS   AGE
my-deployment-79f645dc59-vkcvg   0/1     ContainerCreating   0          12s
my-deployment-97cfc859f-5sqmz    1/1     Running             0          5m41s
my-deployment-97cfc859f-6w44n    1/1     Running             0          5m41s
my-deployment-97cfc859f-95r22    1/1     Running             0          2m2s
my-deployment-97cfc859f-f5rmn    1/1     Running             0          2m2s
my-deployment-97cfc859f-sqsmk    1/1     Running             0          5m41s

The Deployment has started creating new Pods. Wait a few seconds and check the status of the Pods again.

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
my-deployment-79f645dc59-44lqk   1/1     Running   0          36s
my-deployment-79f645dc59-c55bk   1/1     Running   0          16s
my-deployment-79f645dc59-jmbbp   1/1     Running   0          30s
my-deployment-79f645dc59-kbvkr   1/1     Running   0          23s
my-deployment-79f645dc59-vkcvg   1/1     Running   0          49s

All the Pods have now been created and are running. The old ones have been replaced with the new ones.

Check the status of the Deployment:

$ kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   5/5     5            5           6m40s

The Deployment is now up to date with the desired number of replicas.

Check the rollout history:

$ kubectl rollout history deployment.v1.apps/my-deployment
deployment.apps/my-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment/my-deployment my-deployment-container=nginx:1.18.0 --record=true

To clean up:

$ kubectl delete -f my-deployment.yaml
deployment.apps "my-deployment" deleted

Check if it has been deleted:

$ kubectl get deployments my-deployment
No resources found in default namespace.

Using Deployment Strategies

Deployment strategies are an essential Kubernetes feature that gives you control over how an application update is performed. Having seen how both strategies work, you can see that the RollingUpdate strategy usually suits applications better than Recreate, because we do not want users to experience any downtime during an update.

Next in our series, we will look at how to expose your app to the outside world using Services. In that guide, you will learn about Kubernetes Services, their types, and their usage. Furthermore, we will go into detail on keeping application state using volumes and volumeMounts, walking you through the different volume types and how to use them in a Pod. We’d love to hear from you! Please contact us with any thoughts or questions you might have about Deployments.


Seyi Ewegbemi

Student Worker