Kubernetes is a highly dynamic environment, and monitoring it calls for a tool like Prometheus that is built for such dynamism. In this tutorial, we will set up Prometheus to collect data from a Kubernetes cluster and visualize it in Grafana. We will use Helm charts to set up Prometheus and Grafana easily 😎

Prerequisite

Setting default storage class

  1. First, let’s check the status of the storage classes in our cluster. We deployed the “nfs-client” storage class when we set up dynamic NFS provisioning.
$ kubectl get storageclass

NAME                 PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs-client           cluster.local/nfs-client-provisioner   Delete          Immediate              true                   76m
standard (default)   rancher.io/local-path                  Delete          WaitForFirstConsumer   false                  103m

Our storage class is named “nfs-client” and is currently not the default. We need to make it the default so that Prometheus and Grafana use it without explicitly defining it in their configuration files.

  2. Disable the default annotation on “standard” and make “nfs-client” the default:
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  3. Check the default storage class:
$ kubectl get storageclass                                                                                                      

NAME                   PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-client-provisioner   Delete          Immediate              true                   83m
standard               rancher.io/local-path                  Delete          WaitForFirstConsumer   false                  110m

The default storage class is now set to nfs-client. Cool.
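This default matters because any PersistentVolumeClaim that omits storageClassName is bound using the default class, which is exactly how the Prometheus and Grafana charts will get their volumes. A minimal illustration (the claim name is hypothetical):

```yaml
# pvc-default.yaml — no storageClassName is set, so the cluster's
# default class (now nfs-client) is filled in automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

After `kubectl apply -f pvc-default.yaml`, `kubectl get pvc demo-claim` should show the claim Bound with STORAGECLASS nfs-client.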

 

Setting up Prometheus

  1. Next, we will set up Prometheus using its Helm chart. Make sure that your Helm version is 3:
$ helm version --short                                                                                                             

v3.2.4+g0ad800e
  2. Add the prometheus-community repo to Helm and update it:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
  3. We will save the Helm values for Prometheus in the current directory, since we need to change some settings before deploying the chart to the cluster.
$ helm inspect values prometheus-community/prometheus > ./prometheus.values
  4. The above command downloads the values required for deploying Prometheus and saves them to prometheus.values in the current directory. Open prometheus.values and modify the service block as below:
service:
  annotations: {}
  labels: {}
  clusterIP: ""

  ## List of IP addresses at which the Prometheus server service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []

  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  servicePort: 80
  sessionAffinity: None
  nodePort: 32322     #This line is added
  type: NodePort      #Changed ClusterIP to NodePort

In the service block above, we changed the service type to NodePort so that we can access Prometheus externally, and set nodePort to 32322. Save and exit the file.
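If you prefer keeping a small override file instead of editing the full values dump, the same two changes can be expressed on their own. This is a sketch assuming (per the prometheus-community chart layout) that the service block shown above sits under the chart’s `server:` key; the filename is illustrative:

```yaml
# custom-prometheus.values — minimal override file, equivalent to the two edits above
server:
  service:
    type: NodePort      # expose the server outside the cluster
    nodePort: 32322     # pin the NodePort instead of getting a random one
```

You would then pass `--values ./custom-prometheus.values` to helm install; the rest of this tutorial sticks with the edited prometheus.values.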

  5. Install Prometheus with the custom values:
$ helm install prometheus prometheus-community/prometheus  --values ./prometheus.values  --namespace prometheus --create-namespace

<img class="size-full wp-image-669 aligncenter" src="/uploads/2020/09/promgra1.png" alt="" width="1467" height="841" srcset="/uploads/2020/09/promgra1.png 1467w, /uploads/2020/09/promgra1-768x440.png 768w" sizes="(max-width: 1467px) 100vw, 1467px" />

  6. Run the following commands to get the IP address and port (as described in the picture) for accessing the Prometheus server. In my case it is http://172.20.0.2:32322
$ export NODE_PORT=$(kubectl get --namespace prometheus -o jsonpath="{.spec.ports[0].nodePort}" services prometheus-server)
$ export NODE_IP=$(kubectl get nodes --namespace prometheus -o jsonpath="{.items[0].status.addresses[0].address}")
$ echo http://$NODE_IP:$NODE_PORT
  7. Visit the URL with the port. You should see a page like this:

<img class="size-full wp-image-671 aligncenter" src="/uploads/2020/09/promgra2.png" alt="" width="1381" height="464" srcset="/uploads/2020/09/promgra2.png 1381w, /uploads/2020/09/promgra2-768x258.png 768w" sizes="(max-width: 1381px) 100vw, 1381px" />

Note: If you can’t access the server, you might need to add a custom route. Run the command below (ONLY if you can’t access the server), replacing the IP address according to your Prometheus URL and adjusting the network (-net) address. After that, you should see the page above.

sudo route add -net 172.20.0.0 netmask 255.255.255.0 gw 172.20.0.2

Setting up Grafana

Now we will set up Grafana to visualize the data from Prometheus, which is now collecting data from the Kubernetes cluster.

  1. Add the grafana repo to Helm and update it:
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
  2. As with Prometheus, we will save the Helm values for Grafana in the current directory.
$ helm inspect values grafana/grafana > ./grafana.values
  3. Open grafana.values. We need to modify a few fields.

i) Find the service block and modify it as below:

## Expose the grafana service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

service:
  type: NodePort       # Change from ClusterIP to NodePort
  nodePort: 32323      # This line is added. Defined nodePort to 32323
  port: 80
    # targetPort: 4181 To be used with a proxy extraContainer
  annotations: {}
  labels: {}
  portName: service

ii) Set the admin user and password:

# Administrator credentials when not using an existing secret (see below)
adminUser: admin
adminPassword: averystrongpassword

iii) Enable persistence (PVC):

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  type: pvc
  enabled: true      # Changed false to true

Save and exit.

  4. Install Grafana with the custom values:
$ helm install grafana grafana/grafana  --values ./grafana.values  --namespace grafana --create-namespace
  5. You should see output like below. Similarly, to get the IP and port for accessing the Grafana dashboard, run the following commands:

<img class="size-full wp-image-673 aligncenter" src="/uploads/2020/09/promgra3.png" alt="" width="1190" height="365" srcset="/uploads/2020/09/promgra3.png 1190w, /uploads/2020/09/promgra3-768x236.png 768w" sizes="(max-width: 1190px) 100vw, 1190px" />

$ export NODE_PORT=$(kubectl get --namespace grafana -o jsonpath="{.spec.ports[0].nodePort}" services grafana)
$ export NODE_IP=$(kubectl get nodes --namespace grafana -o jsonpath="{.items[0].status.addresses[0].address}")
$ echo http://$NODE_IP:$NODE_PORT

In my case, the URL is http://172.20.0.2:32323. Visit it and you should see the Grafana login page. Log in with your admin username and password.
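If you ever forget the password, the Grafana chart also stores it in a Kubernetes Secret named after the release. The command below is the same one the chart prints in its post-install notes (assuming release name grafana in namespace grafana, as above); it requires access to the running cluster:

```shell
# Read the admin password back from the release's Secret and base64-decode it
kubectl get secret --namespace grafana grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```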

Configure Grafana for collecting data from Prometheus

  1. Our setup is done. Now we will configure Grafana to collect data from Prometheus. Click “Add your first data source” and select “Prometheus”.

<img class="size-full wp-image-675 aligncenter" src="/uploads/2020/09/promgra4.png" alt="" width="1898" height="936" srcset="/uploads/2020/09/promgra4.png 1898w, /uploads/2020/09/promgra4-768x379.png 768w, /uploads/2020/09/promgra4-1536x757.png 1536w, /uploads/2020/09/promgra4-730x360.png 730w" sizes="(max-width: 1898px) 100vw, 1898px" /> <img class="size-full wp-image-676 aligncenter" src="/uploads/2020/09/promgra5.png" alt="" width="1373" height="647" srcset="/uploads/2020/09/promgra5.png 1373w, /uploads/2020/09/promgra5-768x362.png 768w" sizes="(max-width: 1373px) 100vw, 1373px" />

  2. Set “URL” to the Prometheus server URL and set the HTTP method to “GET”. When done, click “Save & Test”. You should see a “Data source is working” message.

<img class="aligncenter size-full wp-image-678" src="/uploads/2020/09/promgra6.png" alt="" width="960" height="934" srcset="/uploads/2020/09/promgra6.png 960w, /uploads/2020/09/promgra6-768x747.png 768w" sizes="(max-width: 960px) 100vw, 960px" />
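As an alternative to clicking through the UI, the Grafana Helm chart can also provision this data source at install time via the datasources block in grafana.values. A sketch, assuming the Prometheus server Service keeps its default name in the prometheus namespace:

```yaml
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        # in-cluster DNS name of the Prometheus server Service
        # (release "prometheus" installed in namespace "prometheus")
        url: http://prometheus-server.prometheus.svc.cluster.local
        access: proxy
        isDefault: true
```

With this in place, the data source appears in Grafana immediately after install, and the manual “Add data source” step above becomes optional.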

  3. Now we will import a dashboard to visualize the collected data. From the left menu, click “Import”.

<img class="aligncenter size-medium wp-image-680" src="/uploads/2020/09/promgra7.png" alt="" width="1148" height="880" srcset="/uploads/2020/09/promgra7.png 1148w, /uploads/2020/09/promgra7-768x589.png 768w" sizes="(max-width: 1148px) 100vw, 1148px" />

  4. Visit Grafana’s community-built dashboards, select any dashboard, and copy its ID. Paste the ID into the import page and click “Load”.

<img class="aligncenter size-full wp-image-681" src="/uploads/2020/09/promgra8.png" alt="" width="1311" height="518" srcset="/uploads/2020/09/promgra8.png 1311w, /uploads/2020/09/promgra8-768x303.png 768w" sizes="(max-width: 1311px) 100vw, 1311px" />

<img class="aligncenter size-full wp-image-679" src="/uploads/2020/09/promgra9.png" alt="" width="710" height="767" />

  5. Select “Prometheus” as the data source and click “Import”.

<img class="aligncenter size-full wp-image-682" src="/uploads/2020/09/promgra10.png" alt="" width="652" height="750" />

A beautiful dashboard will be presented to you, showing the current cluster’s info.

<img class="aligncenter size-full wp-image-683" src="/uploads/2020/09/promgra11.png" alt="" width="1920" height="937" srcset="/uploads/2020/09/promgra11.png 1920w, /uploads/2020/09/promgra11-768x375.png 768w, /uploads/2020/09/promgra11-1536x750.png 1536w" sizes="(max-width: 1920px) 100vw, 1920px" />

We have successfully set up Prometheus and Grafana to monitor and visualize our Kubernetes cluster.