There are quite a few use cases for monitoring infrastructure that lives outside of Kubernetes, especially previously built infrastructure and other legacy systems. This external monitoring adds an extra layer of complexity to your monitoring setup and configuration, but fortunately Prometheus, running inside Kubernetes, makes the extra complexity easier to manage and maintain.
In this post I will describe a nice, clean way to monitor things that are external to Kubernetes using Prometheus and the Prometheus Operator. The advantage of this approach is that it allows the Operator to manage and monitor infrastructure, and it lets Kubernetes do what it's good at: making sure the things you want are running for you in an easy-to-maintain, declarative manifest.
If you are already familiar with the concepts in Kubernetes then this post should be pretty straightforward. Otherwise, you can pretty much copy/paste most of these manifests into your cluster, and you should have a good way to monitor things in your environment that are external to Kubernetes.
Below is an example of how to monitor external network devices using the Prometheus SNMP exporter. There are many other exporters that can be used to monitor infrastructure that is external to Kubernetes, but currently it is recommended to set up these configurations outside of the Prometheus Operator in order to keep monitoring concerns separate (which I plan on writing more about in the future).
Create the deployment and service
Here is what the deployment might look like.
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: snmp-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: snmp-exporter
  template:
    metadata:
      labels:
        app: snmp-exporter
    spec:
      containers:
        - image: oakman/snmp-exporter
          command: ["/bin/snmp_exporter"]
          args: ["--config.file=/etc/snmp_exporter/snmp.yml"]
          name: snmp-exporter
          ports:
            - containerPort: 9116
              name: metrics
```
And the accompanying service.
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: snmp-exporter
  name: snmp-exporter
spec:
  ports:
    - name: http-metrics
      port: 9116
      protocol: TCP
      targetPort: metrics
  selector:
    app: snmp-exporter
```
At this point you should have a pod running in your cluster, exposed by a service with a stable cluster IP. To see if it worked, you can check that a service IP was created. The service is what the Operator uses to create targets in Prometheus.
kubectl get svc
From this point you can 1) set up your own instance of Prometheus using Helm or by deploying via yml manifests or 2) set up the Prometheus Operator.
Today we will walk through option 2, although I will probably cover option 1 at some point in the future.
Setting up the Prometheus Operator
The beauty of using the Prometheus Operator is that it gives you a way to quickly add or change Prometheus specific configuration via the Kubernetes API (custom resource definition) and some custom objects provided by the operator, including AlertManager, ServiceMonitor and Prometheus objects.
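To illustrate how those objects fit together, here is a sketch of a Prometheus custom resource. This is an assumed example, not the exact object kube-prometheus creates for you; the point is that label selectors are what tie ServiceMonitors and rule ConfigMaps to a particular Prometheus instance.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: kube-prometheus
  namespace: monitoring
spec:
  replicas: 1
  serviceAccountName: prometheus
  # Pick up any ServiceMonitor labeled prometheus: kube-prometheus
  serviceMonitorSelector:
    matchLabels:
      prometheus: kube-prometheus
  # Pick up rule ConfigMaps labeled role: alert-rules
  ruleSelector:
    matchLabels:
      role: alert-rules
      prometheus: kube-prometheus
```

The labels used in the manifests later in this post (`prometheus: kube-prometheus` and `role: alert-rules`) exist to match selectors like these.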
The first step is to install Helm, which is a bit outside the scope of this post, but there are lots of good guides on how to do it. With Helm up and running you can easily install the Operator and the accompanying kube-prometheus manifests, which give you access to lots of extra Kubernetes metrics, alerts, and dashboards.
helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
helm install --name prometheus-operator --set rbacEnable=true --namespace monitoring coreos/prometheus-operator
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring
After a few moments you can check to see that resources were created correctly as a quick test.
kubectl get pods -n monitoring
NOTE: You may need to manually add the "prometheus" service account to the monitoring namespace after creating everything. I ran into some issues because Helm didn't do this automatically. You can check for this with kubectl get events.
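If the service account turns out to be missing, a minimal manifest like the following will create it (this assumes the chart expects the account to be named prometheus, per the note above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
```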
Prometheus Operator configuration
Below are steps for creating the custom resources (defined via CRDs) that the Prometheus Operator uses to automatically generate configuration files and handle all of the other management behind the scenes.
These objects are wired up so that configs get reloaded and Prometheus is updated automatically whenever the Operator sees a change. The object definitions essentially express Prometheus configuration in a format that Kubernetes understands, which the Operator then converts back into native Prometheus configuration.
First we create a ServiceMonitor for monitoring the SNMP exporter.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: snmp-exporter
    prometheus: kube-prometheus # tie servicemonitor to correct Prometheus
  name: snmp-exporter
spec:
  jobLabel: k8s-app
  selector:
    matchLabels:
      app: snmp-exporter
  namespaceSelector:
    matchNames:
      - monitoring
  endpoints:
    - interval: 60s
      port: http-metrics
      params:
        module:
          - if_mib # Select which SNMP module to use
        target:
          - 1.2.3.4 # Modify this to point at the SNMP target to monitor
      path: "/snmp"
      targetPort: 9116
```
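For reference, the Operator turns a ServiceMonitor like this into ordinary Prometheus scrape configuration. The generated config looks roughly like the sketch below; this is an approximation, since the Operator also generates relabeling rules and the exact job name will differ:

```yaml
scrape_configs:
  - job_name: monitoring/snmp-exporter/0
    scrape_interval: 60s
    metrics_path: /snmp
    params:
      module: [if_mib]
      target: [1.2.3.4]
    # ...plus relabel_configs generated from the Service's labels and endpoints
```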
Next, we create a custom alert and tie it to our Prometheus Operator. The alert doesn't do anything useful, but it is a good demo for showing how easy it is to add and manage alerts using the Operator.
Create an alert-example.yml configuration file, add it to Kubernetes as a ConfigMap, and label it so it matches the ruleSelector label selector; the Prometheus Operator will do the rest. Below is an example of hooking a test rule into the existing kube-prometheus Alertmanager, handled by the prometheus-operator.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: josh-test
  namespace: monitoring
  labels:
    role: alert-rules # Standard convention for organizing alert rules
    prometheus: kube-prometheus # tie to correct Prometheus
data:
  test.rules.yaml: |
    groups:
    - name: test.rules # Top-level description in Prometheus
      rules:
      - alert: TestAlert
        expr: vector(1)
```
Once you have written the rule definition, just use kubectl to create the ConfigMap.
kubectl create -f alert-example.yml -n monitoring
Testing and troubleshooting
You will probably need to port-forward the pod to get access to its IP and port inside the cluster.
kubectl port-forward snmp-exporter-<name> 9116
Then you should be able to visit the pod in your browser (or with curl).
localhost:9116
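Note that the SNMP exporter doesn't serve device metrics at its root path; it probes a device on demand when you pass module and target parameters, mirroring the params set in the ServiceMonitor above (1.2.3.4 is the same placeholder target used earlier):

```shell
# The exporter's own metrics live at /metrics;
# device metrics come from /snmp with module and target params
curl "http://localhost:9116/snmp?module=if_mib&target=1.2.3.4"
```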
The exporter itself can do a lot more, so you will probably want to play around with it. I plan on covering more of the details of other external exporters and more customized configurations for the SNMP exporter in future posts.
For example, if you want to do any monitoring beyond basic interface stats, you will need to generate and build your own set of MIBs for gathering metrics from your infrastructure, and also reconfigure the ServiceMonitor object in Kubernetes to use the correct module so that the Operator updates the configuration correctly.
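Those custom modules are built with the snmp_exporter generator, which compiles MIBs into the snmp.yml file the exporter reads. Here is a hedged sketch of a generator.yml; the module name, the walked entries, and the community string are all made-up examples you would replace for your own devices:

```yaml
modules:
  my_device:             # hypothetical module name
    version: 2           # SNMP v2c
    auth:
      community: public  # example community string
    walk:                # MIB entries to walk; the MIBs must be available at build time
      - sysUpTime
      - interfaces
      - ifXTable
```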
Conclusion
The number of options for how to use Prometheus is one area of confusion, especially for newcomers. There are lots of ways to do things and not much direction on which to use, which can also be viewed as a strength, since it allows for so much flexibility.
In some situations it makes sense to use an external (non-Operator-managed) Prometheus, for example when you need to manage and tune your own configuration files. Likewise, the Prometheus Operator is a great fit when you are mostly concerned with managing and monitoring things inside Kubernetes and don't need to do much external monitoring.
That said, there is some support for external monitoring using the Prometheus Operator, which I want to write about in a different post. This support is limited to a handful of different external exporters (for the time being) so the best advice is to think about what kind of monitoring is needed and choose the best solution for your own use case. It may turn out that both types of configurations are needed, but it may also end up being just as easy to use one method or another to manage Prometheus and its configurations.