Monitoring PVE 8 via Prometheus on Kubernetes

Revision as of 16:30, 29 August 2025

Monitor Proxmox with Prometheus Exporter on Kubernetes

This guide outlines a robust and secure method for deploying `prometheus-pve-exporter` to a Kubernetes cluster. Because using a unique API token for each host is a security best practice, the guide deploys a dedicated exporter instance for each Proxmox host you wish to monitor.

1. Create a Read-Only User and API Token on Each Proxmox Host

This one-time setup must be performed on each Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.

The script below will:

  • Create a user named `pve-exporter@pve` for monitoring.
  • Assign the built-in `PVEAuditor` role to the new user.
  • Create an API token named `exporter-token`.
  • Assign the `PVEAuditor` role directly to the API token. With privilege separation (the default for new tokens), a token's effective permissions are the intersection of the user's ACLs and the token's own, so the token needs its own grant.
# 1. Create the user
pveum useradd pve-exporter@pve

# 2. Assign the standard PVEAuditor role to the USER
pveum aclmod / -user pve-exporter@pve -role PVEAuditor

# 3. Create the API token for the user
pveum user token add pve-exporter@pve exporter-token

# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN
pveum aclmod / -token 'pve-exporter@pve!exporter-token' -role PVEAuditor

Important: The `pveum user token add` command will output the Token ID (e.g., `pve-exporter@pve!exporter-token`) and the Secret Value. Copy the full secret value immediately, as you will not be able to see it again.
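Before moving on to Kubernetes, the token can be sanity-checked against the Proxmox API. This is a sketch: the host name matches the example used later in this guide, and `<secret>` stays a placeholder for the value you just copied.

```shell
# Verify the new token works (assumes the host is reachable on port 8006).
# Replace <secret> with the secret value printed by `pveum user token add`.
PVE_HOST=ahsoka.tatooine.jgy.local
TOKEN_ID='pve-exporter@pve!exporter-token'
TOKEN_SECRET='<secret>'

# -k skips certificate verification for the default self-signed certificate.
curl -ks -H "Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}" \
  "https://${PVE_HOST}:8006/api2/json/nodes"
```

A JSON list of nodes indicates the token and its ACL work; an empty `data` array usually means the token is missing its own `PVEAuditor` grant.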

(Optional) Cleanup Script

If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.

pveum aclmod / -delete 1 -token 'pve-exporter@pve!exporter-token' -role PVEAuditor
pveum user token remove pve-exporter@pve exporter-token
pveum userdel pve-exporter@pve

2. Create the Kubernetes Manifest for Each Host

On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file will contain a separate set of Kubernetes resources for each Proxmox host. Below is the complete template for a host named `ahsoka`.

Important: Before saving, replace the placeholder values:

  • `YOUR_TOKEN_NAME`: The name of your token (e.g., `exporter-token`).
  • `YOUR_API_TOKEN_SECRET`: The secret value you just generated.
  • `ahsoka.tatooine.jgy.local`: Update with your Proxmox host's fully qualified domain name or IP address.
# ===================================================================
# ==         CONFIGURATION FOR PROXMOX HOST: ahsoka                ==
# ===================================================================
# ---
# 1. Secret for "ahsoka" - This holds the unique token components for this host.
apiVersion: v1
kind: Secret
metadata:
  name: jgy-pve-exporter-ahsoka-auth
  namespace: monitoring # Or your preferred namespace
stringData:
  # The user part of the token
  PVE_USER: "pve-exporter@pve"
  # The name (ID) of the API token
  PVE_TOKEN_NAME: "exporter-token" # e.g., exporter-token
  # The secret value of the API token
  PVE_TOKEN_VALUE: "<token>"
---
# 2. Deployment for the "jgy-pve-exporter-ahsoka" instance
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jgy-pve-exporter-ahsoka
  namespace: monitoring
  labels:
    app: jgy-pve-exporter-ahsoka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jgy-pve-exporter-ahsoka
  template:
    metadata:
      labels:
        app: jgy-pve-exporter-ahsoka
    spec:
      containers:
      - name: pve-exporter
        image: prompve/prometheus-pve-exporter:3.5.5
        args:
        - "--web.listen-address=:9106"
        ports:
        - name: http-metrics
          containerPort: 9106
        env:
        - name: PVE_USER
          valueFrom:
            secretKeyRef:
              name: jgy-pve-exporter-ahsoka-auth
              key: PVE_USER
        - name: PVE_TOKEN_NAME
          valueFrom:
            secretKeyRef:
              name: jgy-pve-exporter-ahsoka-auth
              key: PVE_TOKEN_NAME
        - name: PVE_TOKEN_VALUE
          valueFrom:
            secretKeyRef:
              name: jgy-pve-exporter-ahsoka-auth
              key: PVE_TOKEN_VALUE
        - name: PVE_VERIFY_SSL
          value: "false"
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
---
# 3. Service for the "jgy-pve-exporter-ahsoka" instance
apiVersion: v1
kind: Service
metadata:
  name: jgy-pve-exporter-ahsoka
  namespace: monitoring
  labels:
    app: jgy-pve-exporter-ahsoka
spec:
  selector:
    app: jgy-pve-exporter-ahsoka
  ports:
  - name: http-metrics
    port: 9106
    targetPort: "http-metrics"
---
# 4. ServiceMonitor for the "jgy-pve-exporter-ahsoka" instance
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: jgy-pve-exporter-ahsoka
  namespace: monitoring
  labels:
    release: prometheus # Label must match your Prometheus Operator's discovery selector
spec:
  selector:
    matchLabels:
      app: jgy-pve-exporter-ahsoka
  endpoints:
  - port: "http-metrics"
    path: /pve
    params:
      target:
      - "ahsoka.tatooine.jgy.local" # <-- Replace with your Proxmox host's FQDN or IP
    relabelings:
    - sourceLabels: [__param_target]
      targetLabel: instance

To monitor additional hosts, copy and paste this entire four-document block into the same file, then perform a find-and-replace for `ahsoka` with your new host's name (e.g., `thrawn`) and update the new host's unique credentials and target address.
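The find-and-replace can be scripted. A sketch, assuming the `ahsoka` block has been saved to its own template file (`pve-exporter-template.yaml` is a hypothetical name):

```shell
# Append a copy of the template with every occurrence of "ahsoka" replaced.
# The token secret and target FQDN in the new block still need manual review.
NEW_HOST=thrawn
sed "s/ahsoka/${NEW_HOST}/g" pve-exporter-template.yaml >> pve-exporters.yaml
```

Note that this also rewrites the target FQDN (`ahsoka.tatooine.jgy.local` becomes `thrawn.tatooine.jgy.local`), which is only correct if your hosts follow that naming pattern.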

3. Apply the Kubernetes Manifest

Apply the single YAML file to your cluster to deploy all resources.

kubectl apply -f pve-exporters.yaml

4. Verify the Deployment

Check that the pods are running and that Prometheus is successfully scraping the targets.

# Check pod status for all exporters, replacing with your host names
kubectl get pods -n monitoring -l 'app in (jgy-pve-exporter-ahsoka, jgy-pve-exporter-thrawn)'

After a minute, navigate to your Prometheus UI, go to Status -> Targets, and verify that targets for `jgy-pve-exporter-ahsoka` (and any others you deployed) are present and have a state of UP.
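An exporter can also be spot-checked directly, bypassing Prometheus. This is a sketch that assumes `kubectl` access to the cluster and the `ahsoka` names from the manifest above:

```shell
# Forward the exporter port locally, then request metrics for one target.
kubectl -n monitoring port-forward deploy/jgy-pve-exporter-ahsoka 9106:9106 &
PF_PID=$!
sleep 2

# The exporter queries the Proxmox host named in the target parameter.
curl -s "http://localhost:9106/pve?target=ahsoka.tatooine.jgy.local" | head

kill "$PF_PID"
```

Metric lines such as `pve_up` in the output indicate the exporter can reach and authenticate against the Proxmox host.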

Notes:

  • The `PVE_VERIFY_SSL: "false"` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to `"true"` if you use a valid, trusted certificate.
  • The `ServiceMonitor` resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your `prometheus.yml` file.
  • All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.
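As a sketch of the non-Operator alternative mentioned above, a `prometheus.yml` scrape job for one host could look like the following. The service DNS name assumes the `ahsoka` Service and `monitoring` namespace from this guide.

```yaml
scrape_configs:
  - job_name: 'pve'
    metrics_path: /pve
    static_configs:
      - targets:
          - ahsoka.tatooine.jgy.local   # Proxmox host to query
    relabel_configs:
      # Pass the listed target to the exporter as the ?target= parameter.
      - source_labels: [__address__]
        target_label: __param_target
      # Keep the Proxmox host name as the instance label.
      - source_labels: [__param_target]
        target_label: instance
      # Actually scrape the exporter Service, not the Proxmox host.
      - target_label: __address__
        replacement: jgy-pve-exporter-ahsoka.monitoring.svc:9106
```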