Monitoring PVE 8 via Prometheus on Kubernetes

From Jwiki
Revision as of 17:02, 29 August 2025

Monitor Proxmox with Prometheus Exporter on Kubernetes

This guide outlines a robust and secure method for deploying `prometheus-pve-exporter` to a Kubernetes cluster. Since it is a security best practice to use a unique API token for each host, the setup below runs a single exporter instance with a dedicated configuration module and API token per monitored host.

1. Create a Read-Only User and API Token on Each Proxmox Host

This one-time setup must be performed on each Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.

The script below will:

  • Create a user named `pve-exporter@pve` for monitoring.
  • Assign the built-in `PVEAuditor` role to the new user.
  • Create an API token named `exporter-token`.
  • Assign the `PVEAuditor` role directly to the API token to override any potential ACL inheritance issues.
# 1. Create the user
pveum useradd pve-exporter@pve

# 2. Assign the standard PVEAuditor role to the USER
pveum aclmod / -user pve-exporter@pve -role PVEAuditor

# 3. Create the API token for the user
pveum user token add pve-exporter@pve exporter-token

# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN
pveum aclmod / -token 'pve-exporter@pve!exporter-token' -role PVEAuditor

Important: The `pveum user token add` command will output the Token ID (e.g., `pve-exporter@pve!exporter-token`) and the Secret Value. Copy the full secret value immediately, as you will not be able to see it again.
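Before moving on to Kubernetes, it can be worth smoke-testing the token against the Proxmox API directly. The sketch below (not part of the guide's manifests) builds the `PVEAPIToken` authorization header that the Proxmox REST API expects; the host name and secret are placeholders you must substitute.

```python
# Minimal sketch: build the Authorization header Proxmox expects for API tokens,
# so the new token can be smoke-tested with any HTTP client.
import urllib.request

def pve_auth_header(user: str, token_name: str, secret: str) -> dict:
    """Proxmox API token auth: PVEAPIToken=<user>!<token_name>=<secret>."""
    return {"Authorization": f"PVEAPIToken={user}!{token_name}={secret}"}

# Assemble (but do not send) a request against /api2/json/version;
# 'ahsoka.local' and the secret are placeholders for your own host and token.
headers = pve_auth_header("pve-exporter@pve", "exporter-token", "REPLACE_WITH_SECRET")
req = urllib.request.Request("https://ahsoka.local:8006/api2/json/version", headers=headers)
print(req.get_header("Authorization"))
```

Sending that request with the real secret should return the Proxmox version JSON; an HTTP 401 means the token or ACL from step 1 is wrong.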

(Optional) Cleanup Script

If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.

pveum aclmod / -delete 1 -token 'pve-exporter@pve!exporter-token' -role PVEAuditor
pveum user token remove pve-exporter@pve exporter-token
pveum userdel pve-exporter@pve

2. Create the Kubernetes Manifest for Each Host

On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`) containing the Secret, ConfigMap, Deployment, and Service. Below is the complete template for two hosts named `ahsoka` and `thrawn`; the single exporter serves both through per-host configuration modules.

Important: Before saving, adjust the manifest for your environment:

  • Replace the placeholder token values in the Secret's `stringData` with the API token secrets you generated in step 1, one per host.
  • Rename the `ahsoka` and `thrawn` secret keys and configuration modules to match your own host names.
  • The scrape targets in the ScrapeConfig below (e.g., `ahsoka.local`) must be your Proxmox hosts' resolvable FQDNs; the first DNS label is used to select the matching exporter module.
apiVersion: v1
kind: Secret
metadata:
  name: jgy-pve-exporter-secrets
  namespace: monitoring
type: Opaque
stringData:
  ahsoka-token: "<AHSOKA_API_TOKEN_SECRET>"
  thrawn-token: "<THRAWN_API_TOKEN_SECRET>"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jgy-pve-exporter-config-template
  namespace: monitoring
data:
  pve.yml: |
    # --- Module for ahsoka ---
    ahsoka:
      user: pve-exporter@pve
      token_name: exporter-token
      token_value: "${PVE_AHSOKA_TOKEN}"
      verify_ssl: false

    # --- Module for thrawn ---
    thrawn:
      user: pve-exporter@pve
      token_name: exporter-token
      token_value: "${PVE_THRAWN_TOKEN}"
      verify_ssl: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jgy-pve-exporter
  namespace: monitoring
  labels:
    app: jgy-pve-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jgy-pve-exporter
  template:
    metadata:
      labels:
        app: jgy-pve-exporter
    spec:
      volumes:
      - name: config-template-volume
        configMap:
          name: jgy-pve-exporter-config-template
      - name: processed-config-volume
        emptyDir: {}
      - name: tmp
        emptyDir: {}
      initContainers:
      - name: init-config-secrets
        image: busybox:1.36
        command: ['/bin/sh', '-c']
        args:
        - |
          sed -e "s|\${PVE_AHSOKA_TOKEN}|${PVE_AHSOKA_TOKEN}|g" \
              -e "s|\${PVE_THRAWN_TOKEN}|${PVE_THRAWN_TOKEN}|g" \
              /etc/config-template/pve.yml > /etc/processed-config/pve.yml
        env:
        - name: PVE_AHSOKA_TOKEN
          valueFrom:
            secretKeyRef:
              name: jgy-pve-exporter-secrets
              key: ahsoka-token
        - name: PVE_THRAWN_TOKEN
          valueFrom:
            secretKeyRef:
              name: jgy-pve-exporter-secrets
              key: thrawn-token
        volumeMounts:
        - name: config-template-volume
          mountPath: /etc/config-template
          readOnly: true
        - name: processed-config-volume
          mountPath: /etc/processed-config
      containers:
      - name: pve-exporter
        image: prompve/prometheus-pve-exporter:3.5.5
        args:
        - "--config.file=/etc/prometheus/pve.yml"
        - "--web.listen-address=:9106"
        ports:
        - name: http-metrics
          containerPort: 9106
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http-metrics
          initialDelaySeconds: 10
          periodSeconds: 15
        readinessProbe:
          httpGet:
            path: /
            port: http-metrics
          initialDelaySeconds: 5
          periodSeconds: 5
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        volumeMounts:
        - name: processed-config-volume
          mountPath: /etc/prometheus
          readOnly: true
        - name: tmp
          mountPath: /tmp
        resources:
          requests:
            cpu: '0'
            memory: 128Mi
          limits:
            cpu: '0'
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: jgy-pve-exporter
  namespace: monitoring
  labels:
    app: jgy-pve-exporter
spec:
  selector:
    app: jgy-pve-exporter
  ports:
  - name: http-metrics
    port: 9106
    targetPort: http-metrics
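The init container above does nothing more than textual substitution: `sed` replaces each `${PVE_*_TOKEN}` placeholder in the ConfigMap template with the corresponding secret taken from the environment. If you want to sanity-check the template before deploying, the small sketch below reproduces that substitution locally (the token value is a dummy):

```python
import re

def render(template: str, env: dict) -> str:
    """Replace ${VAR} placeholders, mirroring the init container's sed commands."""
    return re.sub(r"\$\{([A-Z_]+)\}", lambda m: env[m.group(1)], template)

# One module from the ConfigMap template, with its placeholder.
template = """ahsoka:
  user: pve-exporter@pve
  token_name: exporter-token
  token_value: "${PVE_AHSOKA_TOKEN}"
  verify_ssl: false
"""
print(render(template, {"PVE_AHSOKA_TOKEN": "dummy-secret"}))
```

This design keeps the secrets out of the ConfigMap: only the rendered file in the `emptyDir` volume ever contains real token values.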


Scrape configuration (a Prometheus Operator `ScrapeConfig`):

apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: jgy-proxmoxes
  namespace: monitoring
  labels:
    prometheus: jgy-prometheus
spec:
  staticConfigs:
    - targets:
        - ahsoka.local
  metricsPath: /pve
  relabelings:
    - sourceLabels: [__address__]
      targetLabel: __param_target
    - sourceLabels: [__address__]
      regex: '([^.]+)\..*'
      targetLabel: __param_module
    - sourceLabels: [__param_target]
      targetLabel: instance
    - targetLabel: __address__
      replacement: jgy-pve-exporter.monitoring.svc:9106
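The relabelings above do all the multiplexing: each static target becomes both the `target` URL parameter and, via its first DNS label, the `module` parameter, while the actual scrape is redirected to the exporter Service. This sketch walks a target through the same steps so you can check the regex against your own host names (Prometheus anchors relabel regexes, which `re.match` with a trailing `.*` mirrors here):

```python
import re

def relabel(address: str, exporter: str = "jgy-pve-exporter.monitoring.svc:9106") -> dict:
    """Simulate the ScrapeConfig relabelings for one static target."""
    labels = {"__address__": address}
    labels["__param_target"] = labels["__address__"]        # becomes ?target=<host>
    m = re.match(r"([^.]+)\..*", labels["__address__"])     # first DNS label -> module
    if m:
        labels["__param_module"] = m.group(1)               # becomes ?module=<name>
    labels["instance"] = labels["__param_target"]           # readable instance label
    labels["__address__"] = exporter                        # scrape the exporter Service
    return labels

print(relabel("ahsoka.local"))
```

So a target `ahsoka.local` is scraped as `http://jgy-pve-exporter.monitoring.svc:9106/pve?module=ahsoka&target=ahsoka.local`, which is why module names in `pve.yml` must match the hosts' first DNS labels.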

3. Apply the Kubernetes Manifest

Apply the single YAML file to your cluster to deploy all resources.

kubectl apply -f pve-exporters.yaml

4. Verify the Deployment

Check that the pods are running and that Prometheus is successfully scraping the targets.

# Check that the exporter pod is running
kubectl get pods -n monitoring -l app=jgy-pve-exporter

After a minute, navigate to your Prometheus UI, go to Status -> Targets, and verify that the targets from the `jgy-proxmoxes` ScrapeConfig (e.g., `ahsoka.local`) are present and have a state of UP.

Notes:

  • The `verify_ssl: false` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set it to `true` if you use a valid, trusted certificate.
  • The `ScrapeConfig` resource requires a cluster running the Prometheus Operator with the ScrapeConfig CRD available. If you are not using it, add an equivalent scrape job directly to your `prometheus.yml` file.
  • All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.