Monitoring PVE 8 via Prometheus on Kubernetes
Revision as of 13:39, 29 August 2025
Monitor Proxmox with Prometheus Exporter on Kubernetes
This guide outlines how to deploy the `prometheus-pve-exporter` to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.
1. Create a Read-Only User and API Token on Proxmox
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands. The script below will:
- Create a role named ExporterRole with the necessary audit permissions.
- Create a user named pve-exporter@pve specifically for this purpose (it does not require a password).
- Assign the read-only role to the new user at the root level (/).
- Create an API token named exporter-token for the user.
# Create the role with read-only privileges and a valid name
pveum roleadd ExporterRole -privs "Datastore.Audit Sys.Audit"
# Create the user (password login is not needed for token auth)
pveum useradd pve-exporter@pve
# Assign the role to the user for the entire datacenter
pveum aclmod / -user pve-exporter@pve -role ExporterRole
# Create the API token for the user
pveum user token add pve-exporter@pve exporter-token
Important: The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. You will not be able to see it again.
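API tokens authenticate with a single static HTTP header, which makes the token easy to smoke-test before touching Kubernetes. The values below are stand-ins: the token ID follows the user@realm!tokenname pattern from this guide, and the secret is the placeholder you will replace with the real value.

```shell
# Stand-in values — substitute the token ID and secret printed by the step above.
PVE_HOST='192.168.1.100'
TOKEN_ID='pve-exporter@pve!exporter-token'
TOKEN_SECRET='YOUR_API_TOKEN_SECRET'

# Token auth needs exactly one header; no login ticket or cookie is involved.
AUTH_HEADER="Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}"
echo "$AUTH_HEADER"

# From any machine that can reach the host, verify the token with:
#   curl -k -H "$AUTH_HEADER" "https://${PVE_HOST}:8006/api2/json/version"
# (-k skips certificate checks; Proxmox ships a self-signed certificate by default)
```

A valid token returns a JSON document containing the installed Proxmox VE version; a 401 response means the token ID or secret is wrong.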
(Optional) Cleanup Script
If you need to re-run the setup, first delete the old resources to avoid errors.
pveum user token remove pve-exporter@pve exporter-token
pveum user delete pve-exporter@pve
pveum role delete ExporterRole
2. Create the Combined Kubernetes Manifest
On your local machine, create a single YAML file named pve-exporter-full.yaml. This file contains all the necessary Kubernetes resources. We will store the token ID and the secret in the Kubernetes Secret.
Important: Before saving, replace the placeholder values for YOUR_API_TOKEN_ID (e.g., pve-exporter@pve!exporter-token) and YOUR_API_TOKEN_SECRET with the values you just generated. Also, update your Proxmox host's IP address.
apiVersion: v1
kind: Secret
metadata:
  name: pve-exporter-credentials
  namespace: monitoring  # Or your preferred namespace
stringData:
  # The PVE_USER for token auth is the full Token ID
  PVE_USER: "YOUR_API_TOKEN_ID"  # e.g., pve-exporter@pve!exporter-token
  # The PVE_PASSWORD for token auth is the Token Secret
  PVE_PASSWORD: "YOUR_API_TOKEN_SECRET"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pve-exporter
  namespace: monitoring
  labels:
    app: pve-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pve-exporter
  template:
    metadata:
      labels:
        app: pve-exporter
    spec:
      containers:
        - name: pve-exporter
          image: prompve/prometheus-pve-exporter:latest
          ports:
            - name: http-metrics
              containerPort: 9221
          env:
            - name: PVE_USER
              valueFrom:
                secretKeyRef:
                  name: pve-exporter-credentials
                  key: PVE_USER
            - name: PVE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pve-exporter-credentials
                  key: PVE_PASSWORD
            - name: PVE_VERIFY_SSL
              value: "false"
---
apiVersion: v1
kind: Service
metadata:
  name: pve-exporter
  namespace: monitoring
  labels:
    app: pve-exporter
spec:
  # A Service selector is a plain label map (matchLabels only applies to workload selectors)
  selector:
    app: pve-exporter
  ports:
    - name: http-metrics
      port: 9221
      targetPort: http-metrics
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pve-exporter
  namespace: monitoring
  labels:
    release: prometheus  # Label must match your Prometheus Operator's discovery selector
spec:
  selector:
    matchLabels:
      app: pve-exporter
  endpoints:
    - port: http-metrics
      path: /pve
      params:
        target:
          - "192.168.1.100"  # <-- Replace with your Proxmox host's IP address
      relabelings:
        - sourceLabels: [__param_target]
          targetLabel: instance
        - sourceLabels: [__param_target]
          targetLabel: target
3. Apply the Kubernetes Manifest
Apply the single YAML file to your cluster to deploy all resources at once.
kubectl apply -f pve-exporter-full.yaml
4. Verify the Deployment
Check that the pod is running and that Prometheus is successfully scraping the target.
# Check pod status
kubectl get pods -n monitoring -l app=pve-exporter
After a minute, navigate to your Prometheus UI, go to Status -> Targets, and verify that a target named pve-exporter is present and has a state of UP.
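If the target shows as DOWN, it helps to scrape the exporter directly and bypass Prometheus. The exporter takes the Proxmox host as a query parameter on its /pve endpoint; the port-forward and curl lines are commented out because they need access to your cluster, and the IP is the placeholder used throughout this guide.

```shell
# The scrape URL carries the Proxmox host as a query parameter:
PVE_TARGET='192.168.1.100'
SCRAPE_URL="http://localhost:9221/pve?target=${PVE_TARGET}"
echo "$SCRAPE_URL"

# With kubectl access, forward the Service and scrape once:
#   kubectl -n monitoring port-forward svc/pve-exporter 9221:9221 &
#   curl -s "$SCRAPE_URL"
# pve_up 1 in the output indicates the exporter authenticated successfully.
```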
Notes:
- The PVE_VERIFY_SSL: "false" setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set it to "true" if you use a valid, trusted certificate.
- The ServiceMonitor resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your prometheus.yml file.
- All Kubernetes resources are deployed to the monitoring namespace. Adjust if you use a different one.
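For the non-Operator case mentioned above, a static scrape job along these lines can go into prometheus.yml. This is a sketch using the standard multi-target exporter relabeling pattern: the Service DNS name assumes the monitoring namespace from this guide, and the target IP is the same placeholder used earlier.

```yaml
scrape_configs:
  - job_name: 'pve'
    metrics_path: /pve
    static_configs:
      - targets:
          - 192.168.1.100  # Proxmox host, passed to the exporter as ?target=
    relabel_configs:
      # Move the listed address into the target query parameter, keep it as the
      # instance label, and point the actual scrape at the exporter Service.
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: pve-exporter.monitoring.svc:9221
```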