Monitoring#

Logs provide valuable information, but they are incomplete without monitoring data. The Prometheus® and Grafana® bundle serves as the base for the Rackspace KaaS monitoring solution. Prometheus is a robust monitoring and alerting tool for distributed systems that helps cloud administrators watch the health of both the Kubernetes nodes and the workloads that run on the Kubernetes cluster. Data visualization powered by Grafana provides insight into cluster health through a web-based graphical user interface.

Rackspace KaaS deploys two instances of Prometheus in each Kubernetes cluster. One instance monitors Kubernetes applications and is intended for Rackspace customers. The other instance monitors the Kubernetes cluster itself and is intended for Rackspace operators. Customers can view the internal Prometheus dashboard but cannot modify existing settings or add new ones. Grafana is preconfigured with both the internal and the customer-facing Prometheus instances as data sources.

Because Prometheus primarily focuses on operational monitoring, Rackspace KaaS stores only recent Prometheus metric history. For optimal storage use, Rackspace KaaS sets the default retention period for Prometheus data to three days. However, if you need to store Prometheus data for longer, you can adjust the retention policy by contacting your Rackspace representative or submitting a support ticket.
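
For reference, the following minimal sketch shows how a retention period is expressed on a Prometheus Operator-managed instance. The resource name is illustrative; in Rackspace KaaS, Rackspace manages the Prometheus resources for you.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: customer      # illustrative name, not the actual KaaS resource
  namespace: monitoring
spec:
  retention: 3d       # the Rackspace KaaS default retention period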

Implementation details#

There are currently no specific implementation details for the Rackspace KaaS monitoring service.

Specific usage instructions#

For an example of how to use Grafana, see Use Grafana.

For an example of how to use Prometheus, see Example: Deploy a MySQL database with Prometheus monitoring.

For an example of how to use Prometheus and Grafana together, see Configure application metrics monitoring.

Use Grafana#

Rackspace KaaS provides preconfigured dashboards for cluster resource monitoring, including resource utilization analytics for pods, deployments, Kubernetes nodes, and etcd nodes. The dashboards also provide other useful information that might help you with capacity planning for your clusters.

When you create a new pod in your Kubernetes cluster, Grafana automatically adds it to the preconfigured Pod Dashboard, where you can monitor pod resource utilization. If you need to set up additional metrics to track the health of your Kubernetes applications, you can create or import custom Grafana dashboards and use them with the customer-facing Prometheus instance.

To use Grafana, complete the following steps:

  1. Log in to the Grafana UI by using the URL and credentials provided in Access the Rackspace KaaS dashboards.

  2. Click Home.

  3. Select a dashboard to display. For example, choose Pods and select a namespace and pod to display, as shown in the following image:

    ../../../_images/grafana-example.png

Use Prometheus#

Your Rackspace KaaS environment comes with two instances of Prometheus. Rackspace KaaS preconfigures the first instance to monitor the Kubernetes cluster itself and the second instance to monitor your cloud-native applications. Do not modify the default configuration of the Prometheus instance that monitors your Kubernetes cluster. Instead, use the other Prometheus instance to watch your applications and set up Grafana dashboards to visualize its data.

If your organization already has a monitoring solution that you want to use to monitor your Kubernetes applications, contact your Rackspace representative to discuss implementation details.

If you decide to use the Prometheus instance deployed by Rackspace KaaS to monitor your applications, you need to create the configuration files that specify how the Prometheus Operator discovers your services through service monitors and scrapes metrics from Prometheus exporters. Prometheus provides exporters for the most commonly used applications, and many third-party vendors develop their own Prometheus exporters. The main function of an exporter is to provide an endpoint from which Prometheus scrapes metrics for an application service.

At a high level, Prometheus sends an HTTP request to the exporter. The exporter interrogates the application and serves the resulting metrics at its endpoint in the Prometheus exposition format.
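
For illustration, a scrape of an exporter endpoint returns plain-text metrics similar to the following sketch. The metric names mirror those exposed by mysqld_exporter, which is used in the example later in this section; the exact names and help text vary by exporter.

# HELP mysql_up Whether the MySQL server is reachable.
# TYPE mysql_up gauge
mysql_up 1
# HELP mysql_global_status_threads_connected Number of currently open connections.
# TYPE mysql_global_status_threads_connected gauge
mysql_global_status_threads_connected 4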

Because each application is different, no solution fits all use cases. Therefore, you need to define individual configuration files and select appropriate exporters for each application.

Typically, deploying an application with Prometheus monitoring involves the following steps:

  1. Create a deployment and service configuration file for your application.
  2. Expose an application-specific Prometheus exporter endpoint.
  3. Create a service monitor in Kubernetes that defines how Prometheus polls the exporter for data.
  4. Configure Grafana notifications and dashboards.

Example: Deploy a MySQL database with Prometheus monitoring#

As an example, deploy a MySQL database with Prometheus monitoring. You can use the MySQL Prometheus exporter called mysqld_exporter to expose MySQL metrics to Prometheus.

For this example, you need to create the following items:

  • A MySQL deployment
  • A MySQL service
  • A PersistentVolumeClaim for the MySQL database
  • A MySQL exporter deployment
  • A MySQL exporter service
  • A MySQL exporter service monitor

The Prometheus Operator deployed by Rackspace KaaS searches for the monitor-app label in the service monitor configuration. If the label is missing or different, Rackspace KaaS can still deploy the application, but Prometheus cannot discover it. Therefore, create the service monitor in the monitoring namespace and use the monitor-app label, as shown in the MySQL exporter configuration file (mysqld.yaml) later in this section. The following MySQL configuration file (mysql.yaml) defines the MySQL deployment, PersistentVolumeClaim, and service:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    name: mysql
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: mysql
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  accessModes:
   - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: mysql

The following text is the MySQL exporter configuration file (mysqld.yaml):

apiVersion: v1
kind: Service
metadata:
  name: mysqld-exporter
  labels:
    app: mysqld-exporter
spec:
  type: ClusterIP
  ports:
    - name: http-metrics
      port: 9104
  selector:
    app: mysqld-exporter

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqld-exporter
  labels:
    app: mysqld-exporter
spec:
  selector:
    matchLabels:
      app: mysqld-exporter
  replicas: 1
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9104"
      labels:
        app: mysqld-exporter
    spec:
      containers:
      - name: mysqld-exporter
        image: prom/mysqld-exporter
        env:
        - name: DATA_SOURCE_NAME
          value: root:password@(mysql:3306)/
        ports:
        - containerPort: 9104
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mysqld-exporter
  namespace: monitoring
  labels:
    monitor-app: mysqld-exporter
spec:
  jobLabel: k8s-app
  selector:
    matchLabels:
      app: mysqld-exporter
  namespaceSelector:
    matchNames:
    - default
  endpoints:
  - port: http-metrics
    interval: 30s
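
Note that the exporter deployment above passes the database credentials to DATA_SOURCE_NAME as a plain-text value. In practice, you would typically store the credentials in a Kubernetes Secret and reference the Secret from the exporter container instead. The following is a minimal sketch of that approach; the Secret name mysql-credentials is illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
type: Opaque
stringData:
  DATA_SOURCE_NAME: root:password@(mysql:3306)/

In the mysqld-exporter container specification, replace the literal env value with a reference to the Secret:

        env:
        - name: DATA_SOURCE_NAME
          valueFrom:
            secretKeyRef:
              name: mysql-credentials
              key: DATA_SOURCE_NAME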

To deploy the MySQL database with Prometheus monitoring, complete the following steps:

  1. Deploy the MySQL database by using the mysql.yaml file from the preceding example:

    kubectl apply -f deployments/stable/mysql.yaml
    

    System response:

    deployment "mysql" created
    persistentvolumeclaim "mysql-pv-claim" created
    service "mysql" created
    
  2. Deploy the MySQL exporter by using the mysqld.yaml file from the preceding example:

    kubectl apply -f deployments/stable/mysqld.yaml
    

    System response:

    service "mysqld-exporter" created
    deployment "mysqld-exporter" created
    servicemonitor "mysqld-exporter" created
    
  3. Verify that the deployment was created by running the following command:

    kubectl get deployment,service,pvc -l app=mysql
    

    Example of system response:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    mysql     1         1         1            1           20s
    
    NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    svc/mysql   ClusterIP   10.3.33.6    <none>        3306/TCP   20s
    
    NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc/mysql-pv-claim   Bound     pvc-4d2fed9f-ce3d-11e8-a58f-fa163ea62d3e   200Gi      RWO            openstack      20s
    
  4. Verify that the MySQL exporter deployment and service were created by running the following command:

    kubectl get deployment,service -l app=mysqld-exporter
    

    Example of system response:

    NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    mysqld-exporter   1         1         1            1           2m
    
    NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    mysqld-exporter   ClusterIP   10.3.163.206   <none>        9104/TCP   2m
    
  5. Verify that the service monitor was created by running the following command:

    kubectl get servicemonitor mysqld-exporter -n monitoring
    

    Example of system response:

    NAME              AGE
    mysqld-exporter   2m
    
  6. Log in to the Prometheus UI.

  7. Go to Status > Targets.

    The mysqld-exporter target should appear in the list of endpoints:

    ../../../_images/scr_prometheus_mysql_target.png
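
    Alternatively, you can check the list of active targets from the command line through the Prometheus HTTP API. The service name prometheus-customer and port 9090 below are assumptions; run kubectl get svc -n monitoring to find the actual name and port of the customer-facing Prometheus service in your cluster:

    kubectl port-forward svc/prometheus-customer 9090 -n monitoring
    

    Then, in a second terminal, query the targets endpoint:

    curl http://localhost:9090/api/v1/targets
    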
  8. Go to the Grafana dashboard.

  9. Import or create a dashboard to monitor your MySQL database.

  10. In the prometheus field, select prometheus-customer as the data source. The following screenshot shows a sample MySQL Overview Grafana dashboard:

    ../../../_images/scr_prometheus_mysql_grafana.png
  11. Configure Grafana notifications for your communication channel as described in the Grafana documentation.

Troubleshooting Prometheus#

If you cannot see the mysqld-exporter endpoint in the Prometheus UI, follow these troubleshooting guidelines:

  1. Verify that the MySQL deployment, service, and PVC were created successfully:

    kubectl get deployment,service,pvc -l app=mysql
    

    Example of system response:

    NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    svc/mysql   ClusterIP   10.3.33.6    <none>        3306/TCP   1h
    
    NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc/mysql-pv-claim   Bound     pvc-4d2fed9f-ce3d-11e8-a58f-fa163ea62d3e   200Gi      RWO            openstack      1h
    
  2. Verify that the MySQL exporter’s service and deployment were created:

    kubectl get deployment,service -l app=mysqld-exporter
    

    Example of system response:

    NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deploy/mysqld-exporter   1         1         1            1           1h
    
    NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    svc/mysqld-exporter   ClusterIP   10.3.228.251   <none>        9104/TCP   1h
    
  3. Verify that the MySQL service monitor was created:

    kubectl get servicemonitor -l monitor-app=mysqld-exporter -n monitoring
    

    Example of system response:

    NAME              AGE
    mysqld-exporter   51m
    
  4. Check your .yaml files:

    • Verify that you used the monitor-app label for the service monitor and that the service monitor is deployed in the monitoring namespace. You can confirm both with the command shown after this list.
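
    The following command lists the service monitor together with its labels:

    kubectl get servicemonitor mysqld-exporter -n monitoring --show-labels
    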
  5. Check the mysqld-exporter log files for errors:

    kubectl logs deployment/mysqld-exporter
    

    Example of system response:

    time="2018-10-12T16:39:22Z" level=info msg="Starting mysqld_exporter (version=0.11.0, branch=HEAD, revision=5d7179615695a61ecc3b5bf90a2a7c76a9592cdd)" source="mysqld_exporter.go:206"
    time="2018-10-12T16:39:22Z" level=info msg="Build context (go=go1.10.3, user=root@3d3ff666b0e4, date=20180629-15:00:35)" source="mysqld_exporter.go:207"
    time="2018-10-12T16:39:22Z" level=info msg="Enabled scrapers:" source="mysqld_exporter.go:218"
    time="2018-10-12T16:39:22Z" level=info msg=" --collect.global_variables" source="mysqld_exporter.go:222"
    time="2018-10-12T16:39:22Z" level=info msg=" --collect.slave_status" source="mysqld_exporter.go:222"
    time="2018-10-12T16:39:22Z" level=info msg=" --collect.info_schema.tables" source="mysqld_exporter.go:222"
    time="2018-10-12T16:39:22Z" level=info msg=" --collect.global_status" source="mysqld_exporter.go:222"
    time="2018-10-12T16:39:22Z" level=info msg="Listening on :9104" source="mysqld_exporter.go:232"