Integrate Prometheus and Grafana on top of Kubernetes.

Prakash Singh Rajpurohit
6 min read · Aug 2, 2020
Prometheus and Grafana integration on top of Kubernetes.

In DevOps we can monitor many things: profiling data, traces, logs, metrics, and so on. In this article we are going to monitor metrics. Metrics are a way to collect information about all of your resources and components in real time, and collecting them is called instrumentation (it tells you what your targets are). Monitoring has to happen continuously, because without monitoring no one knows what is happening, and if you don't know what is happening you can't make any decisions.

For metrics monitoring we are going to use Prometheus and Grafana. The combination of Prometheus and Grafana has become a very common monitoring stack used by DevOps teams for storing and visualizing time-series data: Prometheus acts as the storage backend and Grafana as the interface for analysis and visualization.

→ The role of Prometheus is to pull data in real time from exporters and store it in its own database in metrics format. This kind of storage is known as a time-series database (TSDB). For every sample Prometheus stores three things: a timestamp, a key/value pair (metric name and value) and tags/labels, and every target has its own labels. Exporters help Prometheus pull data from different platforms such as Docker containers, the operating system, MySQL, etc. In Prometheus we can write queries in the PromQL language to filter the metrics.
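For example, here is roughly what a stored series and a couple of PromQL filters look like (the metric name node_cpu_seconds_total comes from node_exporter; the sample value and the instance label are just illustrative):

node_cpu_seconds_total{cpu="0", mode="idle", instance="192.168.99.103:9100"}  54321.7

# filter series by label
node_cpu_seconds_total{mode="idle"}

# per-second rate over the last 5 minutes
rate(node_cpu_seconds_total{mode="idle"}[5m])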

→ The role of Grafana is to take the metrics data from Prometheus and create visualizations/graphs on a dashboard for monitoring. The real-time monitoring is actually done with Grafana, because with it we can create real-time graphs and use them to keep an eye on our systems.

In this project we launch Prometheus and Grafana on top of Kubernetes and monitor a system with them. We will use Prometheus as a data source in Grafana and create some graphs on a dashboard from the system's metrics data.

Before we start monitoring the system we first have to install an exporter on the target node; it is what allows Prometheus to pull the data and store it in its database in metrics format. Here I am using node exporter on a RHEL-8 virtual machine.

Here I downloaded the node_exporter tar file and extracted it. After extracting it, we use this command to run node exporter in the background:

nohup ./node_exporter &
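For reference, the download and extraction steps look roughly like this (the release version below is only an example; pick the current one from the node_exporter releases page):

wget https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz
tar -xzf node_exporter-1.0.1.linux-amd64.tar.gz
cd node_exporter-1.0.1.linux-amd64
nohup ./node_exporter &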

As we are performing this practical on top of Kubernetes we need a Kubernetes cluster; for this I am running a cluster with minikube. Here I am doing everything through YAML files in Kubernetes.

An important thing we need to do before deploying Prometheus and Grafana is to write, in the Prometheus configuration file, the IP of our target node (the system we are going to monitor) together with the port number of the node exporter running on it; by default node exporter listens on port 9100. To know more about the default port numbers of different exporters, you can click on the link below.
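A quick way to confirm the exporter is reachable on that port before wiring it into Prometheus (the IP below is the target VM used in this setup; replace it with yours):

curl -s http://192.168.99.103:9100/metrics | head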

The Prometheus configuration file (prometheus.yml) lives inside the Prometheus pod, so if the pod gets replaced or terminated the configuration file is replaced along with it, and our Prometheus server loses connectivity with the node exporter. We therefore have to make the configuration file persistent. For persisting data in Kubernetes we can create a PVC (Persistent Volume Claim), but a PVC is dynamically provisioned storage generally used for large amounts of data, so it is not the right tool for keeping a configuration file persistent.

In Kubernetes we use a ConfigMap to make configuration files persistent. In the ConfigMap we write the content of the configuration file; when the pod starts it always reads this configuration first, and if the pod gets replaced or terminated the new pod gets the same data from the ConfigMap. In this way the prometheus.yml file stays persistent, and the node exporter always stays connected to the Prometheus server.

Similarly, Grafana has a configuration file, datasource.yml, which stores information such as the name and URL of its data sources; in our case the data source for Grafana is Prometheus. Since we always need connectivity between Prometheus and Grafana, we write the content of this Grafana configuration file in the same ConfigMap.

YAML code for ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-graf-config
data:
  prometheus.yml: |-
    global:
      scrape_interval: 30s
    scrape_configs:
      - job_name: 'Prometheus'
        static_configs:
          - targets: ['localhost:9090']
      - job_name: 'rhel-8_Node'
        static_configs:
          - targets: ['192.168.99.103:9100']

  datasource.yml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://192.168.99.107:30001
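If you want, you can validate this manifest before deploying anything and inspect the live object once it exists (prom-config.yaml is the file name used in the kustomization below; with kubectl 1.18+ the flag is --dry-run=client, older versions accept plain --dry-run):

kubectl apply -f prom-config.yaml --dry-run=client -o yaml   # validate only, creates nothing
kubectl describe configmap prom-graf-config                  # after it has been created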

I used YAML files to deploy Prometheus and Grafana. For each of them the file creates a Deployment (which in turn manages a ReplicaSet), a PVC and a NodePort Service.

For Prometheus:

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  ports:
    - port: 9090
      nodePort: 30001
  selector:
    app: prometheus
    tier: backend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prom-deploy
  labels:
    app: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
      tier: backend
  template:
    metadata:
      labels:
        app: prometheus
        tier: backend
    spec:
      containers:
        - image: prom/prometheus
          name: prometheus
          ports:
            - containerPort: 9090
              name: prometheus
          volumeMounts:
            - name: prom-config
              mountPath: /etc/prometheus/prometheus.yml
              subPath: prometheus.yml
            - name: pvc-prometheus
              mountPath: /prometheus
      volumes:
        - name: prom-config
          configMap:
            name: prom-graf-config
        - name: pvc-prometheus
          persistentVolumeClaim:
            claimName: pvc-prometheus
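Once everything is deployed (the kustomization step is shown further below), you can double-check that the ConfigMap content really ends up inside the Prometheus pod; substitute the placeholder with the pod name reported by kubectl get pods:

kubectl get pods
kubectl exec <prometheus-pod-name> -- cat /etc/prometheus/prometheus.yml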

For Grafana:

apiVersion: v1
kind: Service
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  ports:
    - port: 3000
      nodePort: 30002
  selector:
    app: grafana
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-grafana
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graf-deploy
  labels:
    app: grafana
    tier: frontend
spec:
  selector:
    matchLabels:
      app: grafana
      tier: frontend
  template:
    metadata:
      labels:
        app: grafana
        tier: frontend
    spec:
      containers:
        - image: grafana/grafana
          name: grafana
          ports:
            - containerPort: 3000
              name: grafana
          volumeMounts:
            - name: graf-config
              mountPath: /etc/grafana/provisioning/datasources/datasource.yml
              subPath: datasource.yml
            - name: pvc-grafana
              mountPath: /var/lib/grafana
      volumes:
        - name: graf-config
          configMap:
            name: prom-graf-config
        - name: pvc-grafana
          persistentVolumeClaim:
            claimName: pvc-grafana

I used a kustomization.yaml file to apply all the files in order, so that there would be no conflict while creating the deployments.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- prom-config.yaml
- prometheus.yaml
- grafana.yaml

To run the kustomization file, use the command below. You can see that Prometheus and Grafana were deployed successfully. I used NodePort services to expose (port-forward) the Prometheus and Grafana pods so that we can reach them from outside the cluster.

kubectl create -k .
Deployed Prometheus and Grafana.
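To find where the two NodePort services are reachable on minikube, you can ask kubectl and minikube directly:

kubectl get svc prometheus grafana
minikube service prometheus --url
minikube service grafana --url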

Now we can access the Prometheus and Grafana servers. You can see that the free memory of the RHEL-8 machine is showing up, so our node exporter is running perfectly fine and helping Prometheus fetch the data in real time.

Prometheus Server
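The same data can also be pulled without the web UI, straight from the Prometheus HTTP API (the IP and NodePort below are the ones used in this setup, and node_memory_MemFree_bytes is one of the standard node_exporter metrics):

curl -s 'http://192.168.99.107:30001/api/v1/query?query=node_memory_MemFree_bytes'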

For the first login to Grafana the default username and password are both admin. As we have already declared in the Grafana configuration file that Prometheus is a data source, there is connectivity between them as soon as we log in.

Grafana Login

You can see below that Grafana is now fetching the data from Prometheus every 5 seconds and showing us a nice visual for monitoring.

Creating Grafana Dashboard

Here is the visualization dashboard in Grafana for metrics monitoring. So now you can create graphs in Grafana and use them for system monitoring.

Dashboard
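If you want to build similar panels yourself, here are a couple of generic node_exporter queries you can paste into a Grafana panel (they are common examples, not necessarily the exact queries behind these screenshots):

# CPU usage in percent, averaged over all cores
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# memory currently available, in bytes
node_memory_MemAvailable_bytes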

You can connect with me on LinkedIn: Prakash Singh

You can get the code for this task on my GitHub: Click here to get code

Thank you for reading.

If you found this article helpful, I would appreciate it if you could give it a clap.

