Monitoring Proxmox VE 8 with a Prometheus Exporter on Kubernetes
This guide outlines how to deploy the `prometheus-pve-exporter` to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.
1. Create a Read-Only User and API Token on Proxmox
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands.
The script below will:
- Create a role named `PVEExporter` with the necessary audit permissions.
- Create a user named `pve-exporter@pve` specifically for this purpose (it does not require a password).
- Assign the read-only role to the new user at the root level (`/`).
- Create an API token named `exporter-token` for the user.
```bash
# Create the role with read-only (audit) privileges.
# VM.Audit is required for the exporter to report per-guest (QEMU/LXC) metrics.
pveum roleadd PVEExporter -privs "Datastore.Audit Sys.Audit VM.Audit"

# Create the user (password login is not needed for token auth)
pveum useradd pve-exporter@pve

# Assign the role to the user for the entire datacenter
pveum aclmod / -user pve-exporter@pve -role PVEExporter

# Create the API token for the user
pveum user token add pve-exporter@pve exporter-token
```
Important: The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. You will not be able to see it again.
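As a sanity check, it helps to see how this token is actually presented to the Proxmox API: token-authenticated requests send a single `Authorization` header combining the full token ID and the secret, with no ticket login involved. The sketch below uses hypothetical placeholder values; the `curl` line is an optional manual test against your host's IP.

```shell
# Sketch: how an API token authenticates against the Proxmox API (placeholder values).
PVE_TOKEN_ID='pve-exporter@pve!exporter-token'   # full token ID printed by the command above
PVE_TOKEN_SECRET='YOUR_API_TOKEN_SECRET'         # the secret you just copied

# Token requests use this header instead of a username/password ticket login:
AUTH_HEADER="Authorization: PVEAPIToken=${PVE_TOKEN_ID}=${PVE_TOKEN_SECRET}"
echo "${AUTH_HEADER}"

# Optional manual check (-k skips verification of Proxmox's self-signed certificate):
# curl -sk -H "${AUTH_HEADER}" https://192.168.1.100:8006/api2/json/version
```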
2. Create the Combined Kubernetes Manifest
On your local machine, create a single YAML file named `pve-exporter-full.yaml`. This file contains all the necessary Kubernetes resources; the API token credentials are stored in a Kubernetes Secret.

Important: Before saving, replace the credential placeholders with the token values you just generated, and update the Proxmox host's IP address in the `ServiceMonitor`.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pve-exporter-credentials
  namespace: monitoring # Or your preferred namespace
stringData:
  # For token auth, the exporter reads the user, token name, and token secret
  # as separate settings (PVE_TOKEN_NAME / PVE_TOKEN_VALUE), not user/password.
  PVE_USER: "pve-exporter@pve"
  PVE_TOKEN_NAME: "exporter-token"
  PVE_TOKEN_VALUE: "YOUR_API_TOKEN_SECRET"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pve-exporter
  namespace: monitoring
  labels:
    app: pve-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pve-exporter
  template:
    metadata:
      labels:
        app: pve-exporter
    spec:
      containers:
        - name: pve-exporter
          image: prompve/prometheus-pve-exporter:latest
          ports:
            - name: http-metrics
              containerPort: 9221
          env:
            - name: PVE_USER
              valueFrom:
                secretKeyRef:
                  name: pve-exporter-credentials
                  key: PVE_USER
            - name: PVE_TOKEN_NAME
              valueFrom:
                secretKeyRef:
                  name: pve-exporter-credentials
                  key: PVE_TOKEN_NAME
            - name: PVE_TOKEN_VALUE
              valueFrom:
                secretKeyRef:
                  name: pve-exporter-credentials
                  key: PVE_TOKEN_VALUE
            - name: PVE_VERIFY_SSL
              value: "false"
---
apiVersion: v1
kind: Service
metadata:
  name: pve-exporter
  namespace: monitoring
  labels:
    app: pve-exporter
spec:
  # Services use a plain label map as the selector (no matchLabels)
  selector:
    app: pve-exporter
  ports:
    - name: http-metrics
      port: 9221
      targetPort: http-metrics
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pve-exporter
  namespace: monitoring
  labels:
    release: prometheus # Label must match your Prometheus Operator's discovery selector
spec:
  selector:
    matchLabels:
      app: pve-exporter
  endpoints:
    - port: http-metrics
      path: /pve
      params:
        target:
          - "192.168.1.100" # <-- Replace with your Proxmox host's IP address
      relabelings:
        - sourceLabels: [__param_target]
          targetLabel: instance
        - sourceLabels: [__param_target]
          targetLabel: target
```
3. Apply the Kubernetes Manifest
Apply the single YAML file to your cluster to deploy all resources at once.
```bash
kubectl apply -f pve-exporter-full.yaml
```
4. Verify the Deployment
Check that the pod is running and that Prometheus is successfully scraping the target.
```bash
# Check pod status
kubectl get pods -n monitoring -l app=pve-exporter
```
After a minute, navigate to your Prometheus UI, go to Status -> Targets, and verify that a target named pve-exporter is present and has a state of UP.
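You can also inspect the exporter's output by hand: the `ServiceMonitor`'s `path: /pve` plus its `params.target` entry combine into a single multi-target scrape URL. The sketch below constructs that URL using the placeholder IP from the manifest; the commented commands port-forward the Service and fetch it.

```shell
# The ServiceMonitor's `path: /pve` and `params.target` produce this scrape URL:
PVE_HOST="192.168.1.100"   # placeholder Proxmox IP from the manifest
SCRAPE_URL="http://localhost:9221/pve?target=${PVE_HOST}"
echo "${SCRAPE_URL}"

# Uncomment to try it against the live exporter:
# kubectl port-forward -n monitoring svc/pve-exporter 9221:9221 &
# curl -s "${SCRAPE_URL}" | grep '^pve_up'   # 1 means the exporter reached Proxmox
```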
Notes:

- The `PVE_VERIFY_SSL: "false"` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set it to `"true"` if you use a valid, trusted certificate.
- The `ServiceMonitor` resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your `prometheus.yml` file.
- All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.
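For clusters without the Prometheus Operator, a scrape job roughly equivalent to the `ServiceMonitor` above might look like the fragment below. The in-cluster DNS name assumes the Service and `monitoring` namespace defined earlier; adjust the job name and target IP to taste.

```yaml
# Sketch of an equivalent static scrape job for prometheus.yml (no Operator).
scrape_configs:
  - job_name: "pve"
    metrics_path: /pve
    params:
      target: ["192.168.1.100"]   # <-- your Proxmox host's IP address
    static_configs:
      - targets: ["pve-exporter.monitoring.svc:9221"]  # exporter Service DNS name
    relabel_configs:
      - source_labels: [__param_target]
        target_label: instance
```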