Zero Downtime Deployment with Kubernetes.

Prakash Singh Rajpurohit
8 min read · Jul 26, 2020

Website downtime can be truly frustrating for both your business and your customers. It breaks the line of communication, leaves both parties in the dark, and can impact your brand and your business in a variety of ways: lost productivity, damage to brand perception, customer dissatisfaction, a drop in search engine ranking, and more. Downtime can be caused by human error (e.g. a bug in code), equipment failure (e.g. a hardware failure), a malicious attack (e.g. a DDoS attack), and so on.

Although most websites and web services strive for zero downtime, some downtime is inevitable. Even giants like Google, Amazon and Facebook experience it occasionally. Technology has improved and providers have systems in place to help eliminate downtime, but unforeseen circumstances still cause it.

In this task-4 I have built a system which solves the issue of downtime by integrating Kubernetes, Jenkins, Docker, Git and GitHub. Below is the agenda of this task.

Agenda:

Create a dynamic Jenkins cluster and perform DevOps task-3 using the dynamic Jenkins cluster. (You can go through my task-3, which I posted on 19/July/2020.)

  1. Create a container image that has Linux and the other basic configuration required to run a slave for Jenkins (for example, here we require kubectl to be configured).
  2. When we launch the job, it should automatically start on a slave based on the label provided (the dynamic approach).
  3. Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.
  4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub, and perform the following operations:
  5. Create a new image dynamically for the application and copy the application code into that Docker image.
  6. Push that image to Docker Hub (a public repository). (The GitHub repo contains the application code and the Dockerfile used to build the new image.)
  7. Job2 (should run on the dynamic Jenkins slave configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:
  8. If launching for the first time, create a deployment of the pod using the image created in the previous job. If the deployment already exists, roll out the existing pods, giving zero downtime for the user.
  9. If the application is created for the first time, expose it. Otherwise, don't expose it again.

Pre-requisites:

  1. Must have 2 Linux systems (for this task I am using two Linux VMs).
  2. Must have Jenkins installed on system A.
  3. Must have Docker installed on system B.
  4. Must have Minikube installed.
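
Before starting, it helps to confirm on system B that Minikube is running and that kubectl can reach it, since Job2 will later depend on this cluster. A minimal sanity check, assuming the default Minikube setup, could look like this:

# Start the local Kubernetes cluster (run on system B, where Docker and Minikube live)
minikube start

# Confirm kubectl points at the Minikube cluster and the node is Ready
kubectl config current-context
kubectl get nodes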

Task Workflow:

Task-4 Workflow.

Here you can go through this video to understand the complete workflow of this task.

Task-4 Part1

Step 1:

I have created two Dockerfiles:

1. In this Dockerfile I configure the httpd web server and the PHP interpreter. We will use this Dockerfile to build an image and run the developer's code on the httpd server. For task-4 I wrote the code in HTML.

FROM centos:latest
# Install the Apache httpd web server and the PHP interpreter
RUN yum install httpd -y
RUN yum install php -y
# Copy the developer's code into Apache's document root
COPY index.html /var/www/html
EXPOSE 80
# Run httpd in the foreground so the container keeps running
CMD /usr/sbin/httpd -DFOREGROUND
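
Before wiring this image into Jenkins, you can sanity-check it locally. This is only a rough sketch; the image name webtest and host port 8080 are arbitrary choices:

# Build the web image from the directory containing this Dockerfile and index.html
docker build -t webtest .

# Run it and request the page to confirm httpd serves the copied index.html
docker run -d --name webtest -p 8080:80 webtest
curl http://localhost:8080/index.html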

2. I created this Dockerfile for the dynamic slave worker of the Jenkins master node. The Jenkins master will use the image built from this Dockerfile to launch the slave node and run the Job2 commands in it. When the Job2 commands finish successfully in this dynamic slave node (container) of Jenkins, the container is terminated. In terms of resource management this is a very good plan.

For creating the Dockerfile for the Jenkins dynamic cluster node, there are some prerequisites we have to follow:

  1. It should be a Linux OS.
  2. Enable SSH connections.
  3. Set a user name and password.
  4. Configure kubectl.
FROM ubuntu:16.04
# Install the SSH server (so Jenkins can connect) and Java (required by the Jenkins agent)
RUN apt-get update && apt-get install -y openssh-server && apt-get install openjdk-8-jre -y
RUN mkdir /var/run/sshd
# Set the root password used later in the Jenkins Docker Agent Template credentials
RUN echo 'root:redhat' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# Fixing SSH login:
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Install and set up kubectl:
RUN apt-get install curl -y
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
# Put kubectl on the PATH so the slave can run it (assumed fix; the original left it in the build directory)
RUN chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl
# Keep sshd running in the foreground so the Jenkins Docker plugin can attach over SSH (assumed; not shown in the original)
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

This is the docker command I use to build the container image.

docker build -t jenkins /root/devt2/jenkinsDoc
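
Optionally, you can test the slave image before pushing it: run a container from it and confirm that SSH login works with the credentials baked into the Dockerfile and that kubectl is available. A rough sketch, with host port 2222 chosen arbitrarily:

# Run the slave image and publish its SSH port on the host
docker run -d --name jenkins-slave-test -p 2222:22 jenkins

# Log in with the credentials from the Dockerfile (root / redhat) and check kubectl
ssh -p 2222 root@localhost kubectl version --client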

After the image is created, I push it to the Docker Hub registry so that the Jenkins master can pull it and use it for the dynamic slave node.
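
A minimal sketch of that push, assuming the Docker Hub repository is prakash01/kubernetes (the repository used later in the Docker Agent Template) and that docker login has already been done:

# Tag the local image with the Docker Hub repository name and push it
docker tag jenkins prakash01/kubernetes:latest
docker push prakash01/kubernetes:latest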

Step 2:

Job1:

Pull the GitHub repo automatically when a developer pushes to GitHub, then perform the following operations:

  1. Create a new image dynamically for the application and copy the application code into that image.
  2. Push that image to Docker Hub (a public repository).

Here I wrote the application code in HTML and pushed the developer code and the Dockerfile to GitHub.

Git and Github.
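
For reference, the developer-side push is the usual Git workflow; the remote is assumed to be already configured for the repo:

# Commit the application code and Dockerfile, then push them to GitHub
git add index.html Dockerfile
git commit -m "Update web page"
git push origin master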

Here you can see the code file which I push to Github.

Developer code and Job1 part-2

Here Jenkins pulls the application code and Dockerfile from GitHub and stores them in the Jenkins workspace. I used Poll SCM with the schedule * * * * *, so Jenkins checks GitHub every minute for a new commit by the developer; if there is one, Jenkins runs the job.

Job1 part-2

Here I have given the image name (you can give any name) and my Docker Hub credentials so that Jenkins can push the image to my Docker Hub account.

Job1 part-3
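
For comparison, if you used an execute-shell build step instead of the plugin fields shown above, an equivalent sketch would look something like the following; the image name prakash01/webserver is an assumption, so replace it with your own repository:

# Build the application image from the code Jenkins pulled into the workspace
docker build -t prakash01/webserver:latest .

# Push it to Docker Hub (assumes docker login was done beforehand)
docker push prakash01/webserver:latest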

Here you can see that the image was finally pushed to my Docker Hub account successfully.

Image push to docker hub registry.

Here you can go through this video of job-1

Task-4 Part1.

Step 3:

Job2 should run on the dynamic Jenkins slave configured with the kubectl command, so let's configure the Jenkins dynamic cluster. First we need two systems: Jenkins is running on one, and Docker on the other. Here Jenkins is a client of the Docker server, but Jenkins can't contact the Docker server because network support is disabled on it. To enable network support we have to add the TCP protocol in the Docker config file.

Enable network support on the Docker server, which is running on system-2.
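
The exact file depends on the distribution; since the screenshot is not reproduced here, the snippet below is only one common way to do it, by adding a TCP listener to the dockerd options in the systemd unit file:

# /usr/lib/systemd/system/docker.service -- edit the ExecStart line
# and add "-H tcp://0.0.0.0:4243" alongside the existing options, for example:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243 --containerd=/run/containerd/containerd.sock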

Whenever we make any change in the Docker config file, we have to run these commands, otherwise the change won't take effect:

systemctl daemon-reload
systemctl restart docker

We have enabled network support on system-2, so now anyone can access the Docker service of system-2. Next, on system-1 we have to tell Jenkins where the Docker server is running; for that we have to run one command, as shown below.

export DOCKER_HOST=tcp://docker_server_system-ip:4243
System-2 jenkins

Now Jenkins can contact the Docker server and run any Docker command. Now let's set up the Jenkins dynamic cluster.
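
A quick way to verify the connection from system-1 without installing anything extra is to query the remote Docker Engine API directly (the IP placeholder below matches the export command above):

# If the TCP socket is reachable, this returns the Docker server's version details
curl http://docker_server_system-ip:4243/version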

Follow these steps to configure the Jenkins dynamic node:

Go to Manage Jenkins →Manage nodes and clouds → Configure Clouds → Select Docker Cloud details.

In Docker Cloud details we have to give the details of the Docker server, so that Jenkins can contact that server and launch the slave that will run the Job2 kubectl commands.

Jenkins Cloud Configuration Part-1

After the Docker Cloud details are filled in, we have to fill in the Docker Agent Templates. Here we tell Jenkins which Docker image to use as the slave node; it is the image on Docker Hub (prakash01/kubernetes).

Jenkins Cloud Configuration Part-2

This part is very important: here we have to mount the kubectl config file (which is on my system-2) into a directory of the slave node (container). If we don't do this, the slave node (container) can't run the kubectl commands. In the Dockerfile I haven't mentioned creating a container directory, so here I am mounting it under root.
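
As an illustration, the Volumes field of the Docker Agent Template takes host-path:container-path pairs; assuming the Minikube config and certificates live in the default locations under /root on system-2, the entries would look like:

# Host path on system-2 : path inside the slave container
/root/.kube:/root/.kube
/root/.minikube:/root/.minikube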

In the next part I have given the credentials of the slave node (container) which I set at the time of building the image (user: root, password: redhat); you can check this in Step 1.

Jenkins Cloud Configuration Part-3

The Jenkins Cloud configuration is done. Now let's go to Job2.

In Job2 I have created a deployment which can do a rolling update, and whose pods are monitored by a ReplicaSet. For the deployment I am using the same image which we pushed to Docker Hub in Job1. I kept 3 replicas, so there will be 3 pods running in the Kubernetes cluster. After the deployment I expose the pods so that clients can access the web server, with the load balancer taking care of traffic/load balancing; a rough sketch of this build step follows.
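
This is only a minimal sketch of that deploy-or-rollout logic; the deployment name webserver, the image prakash01/webserver:latest and the use of kubectl rollout restart are assumptions, and the original job may implement the same idea differently:

# Runs on the dynamic slave, which has kubectl and the mounted kubeconfig
if kubectl get deployment webserver > /dev/null 2>&1
then
    # Deployment already exists: roll the pods one by one (zero downtime);
    # with the :latest tag the default imagePullPolicy (Always) pulls the new image
    kubectl rollout restart deployment/webserver
    kubectl rollout status deployment/webserver
else
    # First launch: create the deployment, scale it to 3 replicas and expose it
    kubectl create deployment webserver --image=prakash01/webserver:latest
    kubectl scale deployment webserver --replicas=3
    kubectl expose deployment webserver --type=NodePort --port=80
fi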

On the right side you can see the dynamic slave node launched automatically.

Job-2

Web-Server Output:

You can see that exactly the same application code we pushed in Job1 is deployed on the web server.

Web-Server-1

Now I go through the whole flow again from the start and push the updated code to GitHub; this time I got the new output with zero downtime. It's really amazing.
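
If you want to see the zero downtime for yourself (this check is my suggestion, not part of the original screenshots), keep requesting the page in a loop on system-2 while Job2 runs; every request should succeed, first returning the old page and then the new one. The service name webserver follows the sketch above:

# Resolve the NodePort URL of the exposed service and poll it during the rollout
URL=$(minikube service webserver --url)
while true; do curl -s -o /dev/null -w "%{http_code}\n" $URL; sleep 1; done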

Workflow

Job1 and Job2 build pipeline.

Build Pipeline.

Here you can go through this video of the complete Task-4.

Task-4 Part3

GitHub:

Thank you for reading.

If you found this article helpful, I would appreciate it if you could give it a clap.


Prakash Singh Rajpurohit

Cloud Stack Developer. Working on Machine Learning, DevOps, Cloud and Big Data.