<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.jandzsogyorgy.hu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gyurci08</id>
	<title>Jwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.jandzsogyorgy.hu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gyurci08"/>
	<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php/Special:Contributions/Gyurci08"/>
	<updated>2026-05-05T13:08:43Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.6</generator>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=344</id>
		<title>Connecting to a Kubernetes Pod for JMX Debugging</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=344"/>
		<updated>2026-01-05T10:33:26Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Connecting to a Kubernetes Pod for JMX Debugging (Rancher Environment) =&lt;br /&gt;
&lt;br /&gt;
This guide provides a comprehensive walkthrough for developers to configure access to a Rancher-managed Kubernetes cluster for the first time and connect to a Java application for JMX monitoring.&lt;br /&gt;
&lt;br /&gt;
The process involves three main stages:&lt;br /&gt;
# Installing the Kubernetes command-line tool, &#039;&#039;&#039;kubectl&#039;&#039;&#039;.&lt;br /&gt;
# Setting up cluster access using a &#039;&#039;&#039;kubeconfig&#039;&#039;&#039; file from Rancher.&lt;br /&gt;
# Forwarding a local port to the pod to establish a secure JMX connection.&lt;br /&gt;
&lt;br /&gt;
== 1. Prerequisite: Install kubectl ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; is the command-line tool for interacting with the Kubernetes API. Before proceeding, you must install it on your local machine.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
Open a PowerShell terminal &#039;&#039;&#039;as an Administrator&#039;&#039;&#039; and run one of the following commands.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
# Using Chocolatey package manager&lt;br /&gt;
choco install kubernetes-cli&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;Or, using Scoop package manager:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
scoop install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== macOS ===&lt;br /&gt;
On macOS, use the Homebrew package manager.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Linux (Debian/Ubuntu) ===&lt;br /&gt;
On Debian-based systems, use the &amp;lt;code&amp;gt;apt&amp;lt;/code&amp;gt; package manager. Note that &#039;&#039;&#039;kubectl&#039;&#039;&#039; is not in the default Debian/Ubuntu repositories, so the upstream Kubernetes apt repository must be configured first (see the official Kubernetes installation documentation).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, verify that &#039;&#039;&#039;kubectl&#039;&#039;&#039; is available in your path by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl version --client&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2. Configure Cluster Access ==&lt;br /&gt;
&lt;br /&gt;
Since this is your first time connecting, you will set up your local configuration from scratch. The kubeconfig file you get from Rancher is specifically configured for your user and its permissions.&lt;br /&gt;
&lt;br /&gt;
=== Create the .kube Directory ===&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; expects its configuration to be in a hidden directory in your user&#039;s home folder.&lt;br /&gt;
&#039;&#039;On Linux or macOS:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p ~/.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;On Windows (in PowerShell):&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
mkdir $env:USERPROFILE\.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Place the Kubeconfig File ===&lt;br /&gt;
You will receive a kubeconfig file from your DevOps team or download it directly from the Rancher UI. Rename this file to &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; and move it into the &amp;lt;code&amp;gt;.kube&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
&#039;&#039;On Linux or macOS:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mv /path/to/your/rancher-kubeconfig.yaml ~/.kube/config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;On Windows (in PowerShell):&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Move-Item C:\path\to\your\rancher-kubeconfig.yaml $env:USERPROFILE\.kube\config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will now be the default configuration file that &#039;&#039;&#039;kubectl&#039;&#039;&#039; uses for all commands.&lt;br /&gt;
&lt;br /&gt;
=== Verify Cluster Access ===&lt;br /&gt;
Test that your configuration is working correctly. Your access is restricted to a specific project, so some cluster-wide commands will fail—this is expected.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List all available cluster contexts (there should be only one)&lt;br /&gt;
kubectl config get-contexts&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# This command will likely FAIL. This is NORMAL.&lt;br /&gt;
# It fails because your role is scoped to a project, not the whole cluster.&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;get pods&amp;lt;/code&amp;gt; command fails because it tries to list pods in the &amp;lt;code&amp;gt;default&amp;lt;/code&amp;gt; namespace, which you may not have access to. Your permissions are tied to the namespaces within your assigned project.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To properly test your connection&#039;&#039;&#039;, you must specify the namespace you have access to.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;lt;your-project-namespace&amp;gt; with the actual namespace name provided to you.&lt;br /&gt;
kubectl get pods -n &amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If this command returns a list of pods (or an empty list with no errors), your access is configured correctly.&lt;br /&gt;
&lt;br /&gt;
== 3. Forward a Port for JMX Connection ==&lt;br /&gt;
&lt;br /&gt;
With cluster access established, you can now create a secure tunnel from your local machine to the JMX port of the Java application running inside a pod.&lt;br /&gt;
&lt;br /&gt;
=== Set Your Default Namespace (Optional but Recommended) ===&lt;br /&gt;
To avoid typing &amp;lt;code&amp;gt;-n &amp;lt;your-project-namespace&amp;gt;&amp;lt;/code&amp;gt; for every command, you can set it as your default for the current session.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl config set-context --current --namespace=&amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now, all subsequent &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; commands in this terminal will automatically target your project&#039;s namespace.&lt;br /&gt;
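&lt;br /&gt;
&#039;&#039;Optional:&#039;&#039; you can confirm which namespace the current context now targets:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Shows the namespace set on the current context (empty output means the default namespace)&lt;br /&gt;
kubectl config view --minify | grep namespace:&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;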
&lt;br /&gt;
=== Find the Target Pod Name ===&lt;br /&gt;
First, identify the exact name of the pod you want to connect to.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# If you set your default namespace, you can run this:&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;If you did not set a default, you must specify the namespace:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get pods -n &amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Copy the full name of the pod from the output (e.g., &amp;lt;code&amp;gt;your-java-app-pod-name-xyz&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
=== Start the Port Forwarding Session ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;kubectl port-forward&amp;lt;/code&amp;gt; command to create the tunnel. This command maps a port on your local machine to the JMX port on the pod (assuming the JMX service in the pod is configured to run on port &#039;&#039;&#039;9010&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# The &amp;quot;-n &amp;lt;namespace&amp;gt;&amp;quot; is not needed if you set your default context&lt;br /&gt;
kubectl port-forward your-java-app-pod-name-xyz 9010:9010&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; This command will block your terminal and must be left running for the entire duration of your JMX session. The output will confirm the connection is active.&lt;br /&gt;
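&lt;br /&gt;
If local port 9010 is already taken on your machine, you can map any free local port to the pod&#039;s JMX port (19010 below is an arbitrary example):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Format is local-port:pod-port&lt;br /&gt;
kubectl port-forward your-java-app-pod-name-xyz 19010:9010&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In that case, point your JMX client at &amp;lt;code&amp;gt;localhost:19010&amp;lt;/code&amp;gt; instead.&lt;br /&gt;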
&lt;br /&gt;
=== Connect with a JMX Client ===&lt;br /&gt;
While the port-forward is running, open your preferred JMX client (such as &#039;&#039;&#039;JConsole&#039;&#039;&#039; or &#039;&#039;&#039;VisualVM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
# Select the option to connect to a remote process.&lt;br /&gt;
# For the connection address or service URL, enter: &amp;lt;code&amp;gt;localhost:9010&amp;lt;/code&amp;gt;&lt;br /&gt;
# Do not specify a username or password unless the JMX service itself is configured to require them.&lt;br /&gt;
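&lt;br /&gt;
Some clients ask for a full JMX service URL rather than a host:port pair. Assuming the application uses the default RMI registry endpoint (&amp;lt;code&amp;gt;/jmxrmi&amp;lt;/code&amp;gt;), the equivalent URL is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;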
&lt;br /&gt;
You should now be connected to the application&#039;s JVM, with access to its live performance metrics.&lt;br /&gt;
&lt;br /&gt;
[[Category:Kubernetes]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Set_CPU_Power_Management_on_Linux&amp;diff=343</id>
		<title>Set CPU Power Management on Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Set_CPU_Power_Management_on_Linux&amp;diff=343"/>
		<updated>2025-11-22T21:04:40Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:General Linux]]&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Guides &amp;amp; Tutorials]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cpupower&amp;lt;/code&amp;gt; if it is missing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y linux-cpupower&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Verify CPU frequency scaling support:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
Create &amp;lt;code&amp;gt;/etc/systemd/system/cpu-thermal-control.service&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;ini&amp;quot;&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Apply powersave governor, balance_power EPP, and frequency limits&lt;br /&gt;
After=multi-user.target&lt;br /&gt;
&lt;br /&gt;
[Service]&lt;br /&gt;
Type=oneshot&lt;br /&gt;
ExecStart=/bin/sh -c &#039;cpupower frequency-set -g powersave &amp;amp;&amp;amp; echo balance_power &amp;gt; /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference &amp;amp;&amp;amp; cpupower frequency-set -u 4GHz&#039;&lt;br /&gt;
RemainAfterExit=yes&lt;br /&gt;
&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable and start the service:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl enable --now cpu-thermal-control.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Usage &amp;amp; Power Save Mode ==&lt;br /&gt;
&lt;br /&gt;
* The service sets the CPU governor to powersave, the EPP to balance_power, and caps the maximum frequency.&lt;br /&gt;
* To enable more aggressive power saving, modify &amp;lt;code&amp;gt;ExecStart&amp;lt;/code&amp;gt; to use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;ini&amp;quot;&amp;gt;&lt;br /&gt;
ExecStart=/bin/sh -c &#039;cpupower frequency-set -g powersave &amp;amp;&amp;amp; echo powersave &amp;gt; /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference &amp;amp;&amp;amp; cpupower frequency-set -d 800MHz -u 2GHz&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Reload and restart the service after changes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl restart cpu-thermal-control.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Check CPU frequency info:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cpupower frequency-info&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
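&lt;br /&gt;
* To inspect the active governor and EPP directly, read sysfs (these paths assume a driver that exposes EPP, such as intel_pstate or amd-pstate):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor&lt;br /&gt;
cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference&lt;br /&gt;
# Lists the EPP values the driver accepts&lt;br /&gt;
cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;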
&lt;br /&gt;
&lt;br /&gt;
== For Proxmox Users ==&lt;br /&gt;
&lt;br /&gt;
Proxmox defaults to the performance governor, which causes higher CPU temperatures. This service:&lt;br /&gt;
&lt;br /&gt;
* Switches to the powersave governor,&lt;br /&gt;
* Applies the balance_power or powersave EPP,&lt;br /&gt;
* Caps the CPU frequency for cooler, quieter operation.&lt;br /&gt;
&lt;br /&gt;
Ensure the kernel supports CPU frequency scaling, and adjust the frequency caps as needed for your hardware.&lt;br /&gt;
&lt;br /&gt;
Start this service on Proxmox nodes to improve thermal management and power efficiency.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
Check service logs for errors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
journalctl -u cpu-thermal-control.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Verify permissions and CPU frequency scaling support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Sources ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.kernel.org/doc/html/latest/admin-guide/pm/cpufreq.html Linux cpupower Documentation]&lt;br /&gt;
* [https://www.freedesktop.org/software/systemd/man/systemd.service.html systemd Service Files]&lt;br /&gt;
* [https://www.kernel.org/doc/html/latest/admin-guide/pm/energy_perf_bias.html EPP Documentation]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Set_CPU_Power_Management_on_Linux&amp;diff=342</id>
		<title>Set CPU Power Management on Linux</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Set_CPU_Power_Management_on_Linux&amp;diff=342"/>
		<updated>2025-11-22T20:56:58Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;Category:System Services Category:CPU Power Management Category:Proxmox VE Category:Guides &amp;amp; Tutorials  == Prerequisites ==  Install `cpupower` if missing:  &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; sudo apt-get update sudo apt-get install -y linux-cpupower &amp;lt;/syntaxhighlight&amp;gt;  Verify CPU frequency scaling support:  &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt; ls /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference &amp;lt;/syntaxhighlight&amp;gt;   == Installation ==  Create `/e...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:System Services]]&lt;br /&gt;
[[Category:CPU Power Management]]&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Guides &amp;amp; Tutorials]]&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Install &amp;lt;code&amp;gt;cpupower&amp;lt;/code&amp;gt; if it is missing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y linux-cpupower&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Verify CPU frequency scaling support:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
Create &amp;lt;code&amp;gt;/etc/systemd/system/cpu-thermal-control.service&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;ini&amp;quot;&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Apply powersave governor, balance_power EPP, and frequency limits&lt;br /&gt;
After=multi-user.target&lt;br /&gt;
&lt;br /&gt;
[Service]&lt;br /&gt;
Type=oneshot&lt;br /&gt;
ExecStart=/bin/sh -c &#039;cpupower frequency-set -g powersave &amp;amp;&amp;amp; echo balance_power &amp;gt; /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference &amp;amp;&amp;amp; cpupower frequency-set -u 4GHz&#039;&lt;br /&gt;
RemainAfterExit=yes&lt;br /&gt;
&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable and start the service:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl enable --now cpu-thermal-control.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Usage &amp;amp; Power Save Mode ==&lt;br /&gt;
&lt;br /&gt;
* The service sets the CPU governor to powersave, the EPP to balance_power, and caps the maximum frequency.&lt;br /&gt;
* To enable more aggressive power saving, modify &amp;lt;code&amp;gt;ExecStart&amp;lt;/code&amp;gt; to use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;ini&amp;quot;&amp;gt;&lt;br /&gt;
ExecStart=/bin/sh -c &#039;cpupower frequency-set -g powersave &amp;amp;&amp;amp; echo powersave &amp;gt; /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference &amp;amp;&amp;amp; cpupower frequency-set -d 800MHz -u 2GHz&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Reload and restart the service after changes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl restart cpu-thermal-control.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Check CPU frequency info:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cpupower frequency-info&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== For Proxmox Users ==&lt;br /&gt;
&lt;br /&gt;
Proxmox defaults to the performance governor, which causes higher CPU temperatures. This service:&lt;br /&gt;
&lt;br /&gt;
* Switches to the powersave governor,&lt;br /&gt;
* Applies the balance_power or powersave EPP,&lt;br /&gt;
* Caps the CPU frequency for cooler, quieter operation.&lt;br /&gt;
&lt;br /&gt;
Ensure the kernel supports CPU frequency scaling, and adjust the frequency caps as needed for your hardware.&lt;br /&gt;
&lt;br /&gt;
Start this service on Proxmox nodes to improve thermal management and power efficiency.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
Check service logs for errors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
journalctl -u cpu-thermal-control.service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Verify permissions and CPU frequency scaling support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Sources ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.kernel.org/doc/html/latest/admin-guide/pm/cpufreq.html Linux cpupower Documentation]&lt;br /&gt;
* [https://www.freedesktop.org/software/systemd/man/systemd.service.html systemd Service Files]&lt;br /&gt;
* [https://www.kernel.org/doc/html/latest/admin-guide/pm/energy_perf_bias.html EPP Documentation]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_admin_user&amp;diff=341</id>
		<title>Create admin user</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_admin_user&amp;diff=341"/>
		<updated>2025-10-10T22:04:36Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Create Admin on Proxmox VE 9 ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Install &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; (if not present) ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt update &amp;amp;&amp;amp; apt install -y sudo&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create an Administrative User ===&lt;br /&gt;
The following script will:&lt;br /&gt;
* Create a new local Linux user (replace &amp;lt;code&amp;gt;asd&amp;lt;/code&amp;gt; and password as needed).&lt;br /&gt;
* Add the user to the &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; group.&lt;br /&gt;
* Create an &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group in Proxmox user management.&lt;br /&gt;
* Assign the &amp;lt;code&amp;gt;Administrator&amp;lt;/code&amp;gt; role to the group.&lt;br /&gt;
* Add the user to the Proxmox permission system with PAM authentication.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
USER=&amp;quot;asd&amp;quot;&lt;br /&gt;
PASS=&amp;quot;asd&amp;quot;&lt;br /&gt;
COMMENT=&amp;quot;System Administrator&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create local user if not existing&lt;br /&gt;
if ! id &amp;quot;$USER&amp;quot; &amp;amp;&amp;gt;/dev/null; then&lt;br /&gt;
  useradd -m -s /bin/bash -G sudo &amp;quot;$USER&amp;quot;&lt;br /&gt;
  echo &amp;quot;$USER:$PASS&amp;quot; | chpasswd&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Create admin group in Proxmox (ignore error if it exists)&lt;br /&gt;
pveum groupadd admins --comment &amp;quot;${COMMENT} group&amp;quot; || true&lt;br /&gt;
&lt;br /&gt;
# Assign Administrator role to the group (root-level permission)&lt;br /&gt;
pveum acl modify / --group admins --role Administrator&lt;br /&gt;
&lt;br /&gt;
# Add user to Proxmox user database (PAM authentication)&lt;br /&gt;
pveum user add &amp;quot;${USER}@pam&amp;quot; --comment &amp;quot;${COMMENT}&amp;quot; --groups admins || true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
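&lt;br /&gt;
To verify the result, you can list the Proxmox users and ACLs with &amp;lt;code&amp;gt;pveum&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# The new user should appear as a member of the admins group&lt;br /&gt;
pveum user list&lt;br /&gt;
# The admins group should hold the Administrator role on /&lt;br /&gt;
pveum acl list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;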
&lt;br /&gt;
=== 3. Remove the Created User and Group ===&lt;br /&gt;
To cleanly remove the user and associated group from both Linux and Proxmox:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
USER=&amp;quot;asd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove local Linux user&lt;br /&gt;
deluser --remove-home &amp;quot;$USER&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove from Proxmox permission system&lt;br /&gt;
pveum user delete &amp;quot;${USER}@pam&amp;quot; || true&lt;br /&gt;
pveum group delete admins || true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. (Optional) Disable Root GUI Access ===&lt;br /&gt;
For improved security, it is recommended to disable GUI access for the default &amp;lt;code&amp;gt;root@pam&amp;lt;/code&amp;gt; account once an administrative user exists:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user modify root@pam --enable 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== To re-enable root GUI access: ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user modify root@pam --enable 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Always update default passwords before production use.&lt;br /&gt;
* The &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group will retain &amp;lt;code&amp;gt;Administrator&amp;lt;/code&amp;gt; privileges assigned through the ACL.&lt;br /&gt;
* Be sure to have at least one active administrative account before disabling root GUI access.&lt;br /&gt;
* All commands must be executed with root privileges (via shell or sudo).&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_admin_user&amp;diff=340</id>
		<title>Create admin user</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_admin_user&amp;diff=340"/>
		<updated>2025-10-10T22:02:25Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Create Admin on Proxmox VE 9 ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Install &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; (if not present) ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt update &amp;amp;&amp;amp; apt install -y sudo&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create an Administrative User ===&lt;br /&gt;
The following script will:&lt;br /&gt;
* Create a new local Linux user (replace &amp;lt;code&amp;gt;asd&amp;lt;/code&amp;gt; and password as needed).&lt;br /&gt;
* Add the user to the &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; group.&lt;br /&gt;
* Create an &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group in Proxmox user management.&lt;br /&gt;
* Assign the &amp;lt;code&amp;gt;Administrator&amp;lt;/code&amp;gt; role to the group.&lt;br /&gt;
* Add the user to the Proxmox permission system with PAM authentication.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
USER=&amp;quot;asd&amp;quot;&lt;br /&gt;
PASS=&amp;quot;asd&amp;quot;&lt;br /&gt;
COMMENT=&amp;quot;System Administrator&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create local user if not existing&lt;br /&gt;
if ! id &amp;quot;$USER&amp;quot; &amp;amp;&amp;gt;/dev/null; then&lt;br /&gt;
  useradd -m -s /bin/bash -G sudo &amp;quot;$USER&amp;quot;&lt;br /&gt;
  echo &amp;quot;$USER:$PASS&amp;quot; | chpasswd&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Create admin group in Proxmox (ignore error if it exists)&lt;br /&gt;
pveum groupadd admins --comment &amp;quot;${COMMENT} group&amp;quot; || true&lt;br /&gt;
&lt;br /&gt;
# Assign Administrator role to the group (root-level permission)&lt;br /&gt;
pveum acl modify / --group admins --role Administrator&lt;br /&gt;
&lt;br /&gt;
# Add user to Proxmox user database (PAM authentication)&lt;br /&gt;
pveum user add &amp;quot;${USER}@pam&amp;quot; --comment &amp;quot;${COMMENT}&amp;quot; --groups admins || true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Remove the Created User and Group ===&lt;br /&gt;
To cleanly remove the user and associated group from both Linux and Proxmox:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
USER=&amp;quot;asd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove local Linux user&lt;br /&gt;
deluser --remove-home &amp;quot;$USER&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove from Proxmox permission system&lt;br /&gt;
pveum user delete &amp;quot;${USER}@pam&amp;quot; || true&lt;br /&gt;
pveum group delete admins || true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. (Optional) Disable Root GUI Access ===&lt;br /&gt;
For improved security, it is recommended to disable GUI access for the default &amp;lt;code&amp;gt;root@pam&amp;lt;/code&amp;gt; account once an administrative user exists:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user modify root@pam --enable 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== To re-enable root GUI access: ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user modify root@pam --enable 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Always update default passwords before production use.&lt;br /&gt;
* The &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group will retain &amp;lt;code&amp;gt;Administrator&amp;lt;/code&amp;gt; privileges assigned through the ACL.&lt;br /&gt;
* Be sure to have at least one active administrative account before disabling root GUI access.&lt;br /&gt;
* All commands must be executed with root privileges (via shell or sudo).&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE 9]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_admin_user&amp;diff=339</id>
		<title>Create admin user</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_admin_user&amp;diff=339"/>
		<updated>2025-10-10T22:02:05Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Install &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; on Proxmox VE 9 ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Install &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; (if not present) ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt update &amp;amp;&amp;amp; apt install -y sudo&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create an Administrative User ===&lt;br /&gt;
The following script will:&lt;br /&gt;
* Create a new local Linux user (replace &amp;lt;code&amp;gt;asd&amp;lt;/code&amp;gt; and password as needed).&lt;br /&gt;
* Add the user to the &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; group.&lt;br /&gt;
* Create an &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group in Proxmox user management.&lt;br /&gt;
* Assign the &amp;lt;code&amp;gt;Administrator&amp;lt;/code&amp;gt; role to the group.&lt;br /&gt;
* Add the user to the Proxmox permission system with PAM authentication.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
USER=&amp;quot;asd&amp;quot;&lt;br /&gt;
PASS=&amp;quot;asd&amp;quot;&lt;br /&gt;
COMMENT=&amp;quot;System Administrator&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create local user if not existing&lt;br /&gt;
if ! id &amp;quot;$USER&amp;quot; &amp;amp;&amp;gt;/dev/null; then&lt;br /&gt;
  useradd -m -s /bin/bash -G sudo &amp;quot;$USER&amp;quot;&lt;br /&gt;
  echo &amp;quot;$USER:$PASS&amp;quot; | chpasswd&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# Create admin group in Proxmox (ignore error if it exists)&lt;br /&gt;
pveum groupadd admins --comment &amp;quot;${COMMENT} group&amp;quot; || true&lt;br /&gt;
&lt;br /&gt;
# Assign Administrator role to the group (root-level permission)&lt;br /&gt;
pveum acl modify / --group admins --role Administrator&lt;br /&gt;
&lt;br /&gt;
# Add user to Proxmox user database (PAM authentication)&lt;br /&gt;
pveum user add &amp;quot;${USER}@pam&amp;quot; --comment &amp;quot;${COMMENT}&amp;quot; --groups admins || true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Remove the Created User and Group ===&lt;br /&gt;
To cleanly remove the user and associated group from both Linux and Proxmox:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
USER=&amp;quot;asd&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove local Linux user&lt;br /&gt;
deluser --remove-home &amp;quot;$USER&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove from Proxmox permission system&lt;br /&gt;
pveum user delete &amp;quot;${USER}@pam&amp;quot; || true&lt;br /&gt;
pveum group delete admins || true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. (Optional) Disable Root GUI Access ===&lt;br /&gt;
For improved security, it is recommended to disable GUI access for the default &amp;lt;code&amp;gt;root@pam&amp;lt;/code&amp;gt; account once an administrative user exists:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user modify root@pam --enable 0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-enabling Root GUI Access ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user modify root@pam --enable 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Always update default passwords before production use.&lt;br /&gt;
* The &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group will retain &amp;lt;code&amp;gt;Administrator&amp;lt;/code&amp;gt; privileges assigned through the ACL.&lt;br /&gt;
* Be sure to have at least one active administrative account before disabling root GUI access.&lt;br /&gt;
* All commands must be executed with root privileges (via shell or sudo).&lt;br /&gt;
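&lt;br /&gt;
To confirm the setup, the Proxmox user database and ACLs can be listed with the standard &amp;lt;code&amp;gt;pveum&amp;lt;/code&amp;gt; list subcommands:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List Proxmox users, groups, and ACL entries&lt;br /&gt;
pveum user list&lt;br /&gt;
pveum group list&lt;br /&gt;
pveum acl list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The new user and the &amp;lt;code&amp;gt;admins&amp;lt;/code&amp;gt; group should appear in the output.&lt;br /&gt;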
&lt;br /&gt;
[[Category:Proxmox VE 9]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=338</id>
		<title>Connecting to a Kubernetes Pod for JMX Debugging</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=338"/>
		<updated>2025-09-05T15:39:39Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Connecting to a Kubernetes Pod for JMX Debugging (Rancher Environment) =&lt;br /&gt;
&lt;br /&gt;
This guide provides a comprehensive walkthrough for developers to configure access to a Rancher-managed Kubernetes cluster for the first time and connect to a Java application for JMX monitoring.&lt;br /&gt;
&lt;br /&gt;
The process involves three main stages:&lt;br /&gt;
# Installing the Kubernetes command-line tool, &#039;&#039;&#039;kubectl&#039;&#039;&#039;.&lt;br /&gt;
# Setting up cluster access using a &#039;&#039;&#039;kubeconfig&#039;&#039;&#039; file from Rancher.&lt;br /&gt;
# Forwarding a local port to the pod to establish a secure JMX connection.&lt;br /&gt;
&lt;br /&gt;
== 1. Prerequisite: Install kubectl ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; is the command-line tool for interacting with the Kubernetes API. Before proceeding, you must install it on your local machine.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
Open a PowerShell terminal &#039;&#039;&#039;as an Administrator&#039;&#039;&#039; and run one of the following commands.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
# Using Chocolatey package manager&lt;br /&gt;
choco install kubernetes-cli&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;Or, using Scoop package manager:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
scoop install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== macOS ===&lt;br /&gt;
On macOS, use the Homebrew package manager.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Linux (Debian/Ubuntu) ===&lt;br /&gt;
On Debian-based systems, use the native &amp;lt;code&amp;gt;apt&amp;lt;/code&amp;gt; package manager.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, verify that &#039;&#039;&#039;kubectl&#039;&#039;&#039; is available in your path by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl version --client&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2. Configure Cluster Access ==&lt;br /&gt;
&lt;br /&gt;
Since this is your first time connecting, you will set up your local configuration from scratch. The kubeconfig file you get from Rancher is specifically configured for your user and its permissions.&lt;br /&gt;
&lt;br /&gt;
=== Create the .kube Directory ===&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; expects its configuration to be in a hidden directory in your user&#039;s home folder.&lt;br /&gt;
&#039;&#039;On Linux or macOS:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p ~/.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;On Windows (in PowerShell):&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
mkdir $env:USERPROFILE\.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Place the Kubeconfig File ===&lt;br /&gt;
You will receive a kubeconfig file from your DevOps team or download it directly from the Rancher UI. Rename this file to &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; and move it into the &amp;lt;code&amp;gt;.kube&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
&#039;&#039;On Linux or macOS:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mv /path/to/your/rancher-kubeconfig.yaml ~/.kube/config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;On Windows (in PowerShell):&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Move-Item C:\path\to\your\rancher-kubeconfig.yaml $env:USERPROFILE\.kube\config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will now be the default configuration file that &#039;&#039;&#039;kubectl&#039;&#039;&#039; uses for all commands.&lt;br /&gt;
&lt;br /&gt;
=== Verify Cluster Access ===&lt;br /&gt;
Test that your configuration is working correctly. Your access is restricted to a specific project, so some cluster-wide commands will fail—this is expected.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List all available cluster contexts (there should be only one)&lt;br /&gt;
kubectl config get-contexts&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# This command will likely FAIL. This is NORMAL.&lt;br /&gt;
# It fails because your role is scoped to a project, not the whole cluster.&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;get pods&amp;lt;/code&amp;gt; command fails because it tries to list pods in the &amp;lt;code&amp;gt;default&amp;lt;/code&amp;gt; namespace, which you may not have access to. Your permissions are tied to the namespaces within your assigned project.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To properly test your connection&#039;&#039;&#039;, you must specify the namespace you have access to.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;lt;your-project-namespace&amp;gt; with the actual namespace name provided to you.&lt;br /&gt;
kubectl get pods -n &amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If this command returns a list of pods (or an empty list with no errors), your access is configured correctly.&lt;br /&gt;
&lt;br /&gt;
== 3. Forward a Port for JMX Connection ==&lt;br /&gt;
&lt;br /&gt;
With cluster access established, you can now create a secure tunnel from your local machine to the JMX port of the Java application running inside a pod.&lt;br /&gt;
&lt;br /&gt;
=== Set Your Default Namespace (Optional but Recommended) ===&lt;br /&gt;
To avoid typing &amp;lt;code&amp;gt;-n &amp;lt;your-project-namespace&amp;gt;&amp;lt;/code&amp;gt; for every command, you can set it as your default for the current session.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl config set-context --current --namespace=&amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now, all subsequent &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; commands in this terminal will automatically target your project&#039;s namespace.&lt;br /&gt;
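&lt;br /&gt;
To confirm which namespace the active context now uses, it can be read back from the kubeconfig:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Print the namespace recorded in the active context&lt;br /&gt;
kubectl config view --minify --output &amp;quot;jsonpath={..namespace}&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;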
&lt;br /&gt;
=== Find the Target Pod Name ===&lt;br /&gt;
First, identify the exact name of the pod you want to connect to.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# If you set your default namespace, you can run this:&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;If you did not set a default, you must specify the namespace:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get pods -n &amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Copy the full name of the pod from the output (e.g., &amp;lt;code&amp;gt;your-java-app-pod-name-xyz&amp;lt;/code&amp;gt;).&lt;br /&gt;
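&lt;br /&gt;
If the namespace contains many pods, the list can be narrowed with a label selector. The &amp;lt;code&amp;gt;app=your-java-app&amp;lt;/code&amp;gt; label below is only an example; use whatever labels your deployment actually sets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show only pods carrying the given label (example label)&lt;br /&gt;
kubectl get pods -l app=your-java-app&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;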
&lt;br /&gt;
=== Start the Port Forwarding Session ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;kubectl port-forward&amp;lt;/code&amp;gt; command to create the tunnel. This command maps a port on your local machine to the JMX port on the pod (assuming the JMX service in the pod is configured to run on port &#039;&#039;&#039;9010&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# The &amp;quot;-n &amp;lt;namespace&amp;gt;&amp;quot; flag is not needed if you set your default namespace above&lt;br /&gt;
kubectl port-forward your-java-app-pod-name-xyz 9010:9010&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; This command will block your terminal and must be left running for the entire duration of your JMX session. The output will confirm the connection is active.&lt;br /&gt;
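&lt;br /&gt;
If local port 9010 is already in use on your machine, any free local port can be mapped to the pod&#039;s port 9010 instead (the local port is the first number in the pair):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Map local port 9011 to port 9010 inside the pod&lt;br /&gt;
kubectl port-forward your-java-app-pod-name-xyz 9011:9010&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In that case, connect your JMX client to &amp;lt;code&amp;gt;localhost:9011&amp;lt;/code&amp;gt;.&lt;br /&gt;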
&lt;br /&gt;
=== Connect with a JMX Client ===&lt;br /&gt;
While the port-forward is running, open your preferred JMX client (such as &#039;&#039;&#039;JConsole&#039;&#039;&#039; or &#039;&#039;&#039;VisualVM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
# Select the option to connect to a remote process.&lt;br /&gt;
# For the connection address or service URL, enter: &amp;lt;code&amp;gt;localhost:9010&amp;lt;/code&amp;gt;&lt;br /&gt;
# Do not specify a username or password unless the JMX service itself is configured to require them.&lt;br /&gt;
&lt;br /&gt;
You should now be connected to the application&#039;s JVM, with access to its live performance metrics.&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=337</id>
		<title>Connecting to a Kubernetes Pod for JMX Debugging</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=337"/>
		<updated>2025-09-05T15:30:09Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Connecting to a Kubernetes Pod for JMX Debugging (Rancher Environment) =&lt;br /&gt;
&lt;br /&gt;
This guide provides a comprehensive walkthrough for developers to configure access to a Rancher-managed Kubernetes cluster for the first time and connect to a Java application for JMX monitoring.&lt;br /&gt;
&lt;br /&gt;
The process involves three main stages:&lt;br /&gt;
# Installing the Kubernetes command-line tool, &#039;&#039;&#039;kubectl&#039;&#039;&#039;.&lt;br /&gt;
# Setting up cluster access using a &#039;&#039;&#039;kubeconfig&#039;&#039;&#039; file from Rancher.&lt;br /&gt;
# Forwarding a local port to the pod to establish a secure JMX connection.&lt;br /&gt;
&lt;br /&gt;
== 1. Prerequisite: Install kubectl ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; is the command-line tool for interacting with the Kubernetes API. Before proceeding, you must install it on your local machine.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
Open a PowerShell terminal &#039;&#039;&#039;as an Administrator&#039;&#039;&#039; and run one of the following commands.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
# Using Chocolatey package manager&lt;br /&gt;
choco install kubernetes-cli&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;Or, using Scoop package manager:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
scoop install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== macOS ===&lt;br /&gt;
On macOS, use the Homebrew package manager.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Linux (Debian/Ubuntu) ===&lt;br /&gt;
On Debian-based systems, use the native &amp;lt;code&amp;gt;apt&amp;lt;/code&amp;gt; package manager.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, verify that &#039;&#039;&#039;kubectl&#039;&#039;&#039; is available in your path by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl version --client&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2. Configure Cluster Access ==&lt;br /&gt;
&lt;br /&gt;
Since this is your first time connecting, you will set up your local configuration from scratch. The kubeconfig file you get from Rancher is specifically configured for your user and its permissions.&lt;br /&gt;
&lt;br /&gt;
=== Create the .kube Directory ===&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; expects its configuration to be in a hidden directory in your user&#039;s home folder.&lt;br /&gt;
&#039;&#039;On Linux or macOS:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p ~/.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;On Windows (in PowerShell):&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
mkdir $env:USERPROFILE\.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Place the Kubeconfig File ===&lt;br /&gt;
You will receive a kubeconfig file from your DevOps team or download it directly from the Rancher UI. Rename this file to &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; and move it into the &amp;lt;code&amp;gt;.kube&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
&#039;&#039;On Linux or macOS:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mv /path/to/your/rancher-kubeconfig.yaml ~/.kube/config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;On Windows (in PowerShell):&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
Move-Item C:\path\to\your\rancher-kubeconfig.yaml $env:USERPROFILE\.kube\config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will now be the default configuration file that &#039;&#039;&#039;kubectl&#039;&#039;&#039; uses for all commands.&lt;br /&gt;
&lt;br /&gt;
=== Verify Cluster Access ===&lt;br /&gt;
Test that your configuration is working correctly. Your access is restricted to a specific project, so some cluster-wide commands will fail—this is expected.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List all available cluster contexts (there should be only one)&lt;br /&gt;
kubectl config get-contexts&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# This command will likely FAIL. This is NORMAL.&lt;br /&gt;
# It fails because your role is scoped to a project, not the whole cluster.&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The &amp;lt;code&amp;gt;get pods&amp;lt;/code&amp;gt; command fails because it tries to list pods in the &amp;lt;code&amp;gt;default&amp;lt;/code&amp;gt; namespace, which you may not have access to. Your permissions are tied to the namespaces within your assigned project.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To properly test your connection&#039;&#039;&#039;, you must specify the namespace you have access to.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Replace &amp;lt;your-project-namespace&amp;gt; with the actual namespace name provided to you.&lt;br /&gt;
kubectl get pods -n &amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If this command returns a list of pods (or an empty list with no errors), your access is configured correctly.&lt;br /&gt;
&lt;br /&gt;
== 3. Forward a Port for JMX Connection ==&lt;br /&gt;
&lt;br /&gt;
With cluster access established, you can now create a secure tunnel from your local machine to the JMX port of the Java application running inside a pod.&lt;br /&gt;
&lt;br /&gt;
=== Set Your Default Namespace (Optional but Recommended) ===&lt;br /&gt;
To avoid typing &amp;lt;code&amp;gt;-n &amp;lt;your-project-namespace&amp;gt;&amp;lt;/code&amp;gt; for every command, you can set it as your default for the current session.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl config set-context --current --namespace=&amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now, all subsequent &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; commands in this terminal will automatically target your project&#039;s namespace.&lt;br /&gt;
&lt;br /&gt;
=== Find the Target Pod Name ===&lt;br /&gt;
First, identify the exact name of the pod you want to connect to.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# If you set your default namespace, you can run this:&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;If you did not set a default, you must specify the namespace:&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get pods -n &amp;lt;your-project-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Copy the full name of the pod from the output (e.g., &amp;lt;code&amp;gt;your-java-app-pod-name-xyz&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
=== Start the Port Forwarding Session ===&lt;br /&gt;
Use the &amp;lt;code&amp;gt;kubectl port-forward&amp;lt;/code&amp;gt; command to create the tunnel. This command maps a port on your local machine to the JMX port on the pod (assuming the JMX service in the pod is configured to run on port &#039;&#039;&#039;9010&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# The &amp;quot;-n &amp;lt;namespace&amp;gt;&amp;quot; flag is not needed if you set your default namespace above&lt;br /&gt;
kubectl port-forward pod/your-java-app-pod-name-xyz 9010:9010&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; This command will block your terminal and must be left running for the entire duration of your JMX session. The output will confirm the connection is active.&lt;br /&gt;
&lt;br /&gt;
=== Connect with a JMX Client ===&lt;br /&gt;
While the port-forward is running, open your preferred JMX client (such as &#039;&#039;&#039;JConsole&#039;&#039;&#039; or &#039;&#039;&#039;VisualVM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
# Select the option to connect to a remote process.&lt;br /&gt;
# For the connection address or service URL, enter: &amp;lt;code&amp;gt;localhost:9010&amp;lt;/code&amp;gt;&lt;br /&gt;
# Do not specify a username or password unless the JMX service itself is configured to require them.&lt;br /&gt;
&lt;br /&gt;
You should now be connected to the application&#039;s JVM, with access to its live performance metrics.&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=336</id>
		<title>Connecting to a Kubernetes Pod for JMX Debugging</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=336"/>
		<updated>2025-09-05T10:58:01Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Connecting to a Kubernetes Pod for JMX Debugging (First-Time Setup) =&lt;br /&gt;
&lt;br /&gt;
This guide provides a comprehensive walkthrough for developers to configure access to a Kubernetes cluster for the first time and connect to a Java application for JMX monitoring.&lt;br /&gt;
&lt;br /&gt;
The process involves three main stages:&lt;br /&gt;
# Installing the Kubernetes command-line tool, &#039;&#039;&#039;kubectl&#039;&#039;&#039;.&lt;br /&gt;
# Setting up cluster access using the provided &#039;&#039;&#039;kubeconfig&#039;&#039;&#039; file.&lt;br /&gt;
# Forwarding a local port to the pod to establish a secure JMX connection.&lt;br /&gt;
&lt;br /&gt;
=== 1. Prerequisite: Install kubectl ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; is the command-line tool for interacting with the Kubernetes API. Before proceeding, you must install it on your local machine.&lt;br /&gt;
&lt;br /&gt;
==== Windows ====&lt;br /&gt;
The recommended method for Windows is to use a package manager. Open a PowerShell terminal &#039;&#039;&#039;as an Administrator&#039;&#039;&#039; and run one of the following commands. [https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ Official Windows Install Docs]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
# Using Chocolatey package manager&lt;br /&gt;
choco install kubernetes-cli&lt;br /&gt;
&lt;br /&gt;
# Or, using Scoop package manager&lt;br /&gt;
scoop install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== macOS ====&lt;br /&gt;
On macOS, the standard installation method is via the Homebrew package manager.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Linux (Debian/Ubuntu) ====&lt;br /&gt;
On Debian-based systems, you can install kubectl using the native &amp;lt;code&amp;gt;apt&amp;lt;/code&amp;gt; package manager. [https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/ Official Linux Install Docs]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, verify that &#039;&#039;&#039;kubectl&#039;&#039;&#039; is available in your path by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl version --client&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Configure Cluster Access ===&lt;br /&gt;
&lt;br /&gt;
Since this is your first time connecting to a Kubernetes cluster, you will set up your local configuration from scratch.&lt;br /&gt;
&lt;br /&gt;
==== Create the .kube Directory ====&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; expects its configuration to be in a hidden directory in your user&#039;s home folder. First, create this directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On Linux or macOS&lt;br /&gt;
mkdir -p ~/.kube&lt;br /&gt;
&lt;br /&gt;
# On Windows (in Command Prompt)&lt;br /&gt;
mkdir %USERPROFILE%\.kube&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Place the Kubeconfig File ====&lt;br /&gt;
You will receive a kubeconfig file from your DevOps team (e.g., &amp;lt;code&amp;gt;my-cluster.yaml&amp;lt;/code&amp;gt;). Rename this file to &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; and move it into the &amp;lt;code&amp;gt;.kube&amp;lt;/code&amp;gt; directory you just created.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On Linux or macOS&lt;br /&gt;
mv /path/to/your/my-cluster.yaml ~/.kube/config&lt;br /&gt;
&lt;br /&gt;
# On Windows (in Command Prompt)&lt;br /&gt;
move C:\path\to\your\my-cluster.yaml %USERPROFILE%\.kube\config&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will now be the default configuration file that &#039;&#039;&#039;kubectl&#039;&#039;&#039; uses for all commands.&lt;br /&gt;
&lt;br /&gt;
==== Verify Cluster Access ====&lt;br /&gt;
Test that your configuration is working correctly by running a &#039;&#039;&#039;kubectl&#039;&#039;&#039; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List all available cluster contexts (there should be only one)&lt;br /&gt;
kubectl config get-contexts&lt;br /&gt;
&lt;br /&gt;
# Test the connection by listing pods in the default namespace&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the last command returns a list of pods (or an empty list with no errors), your access is configured correctly.&lt;br /&gt;
&lt;br /&gt;
=== 3. Forward a Port for JMX Connection ===&lt;br /&gt;
&lt;br /&gt;
With cluster access established, you can now create a secure tunnel from your local machine to the JMX port of the Java application running inside a pod. [https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ Port Forwarding Docs]&lt;br /&gt;
&lt;br /&gt;
==== Find the Target Pod Name ====&lt;br /&gt;
First, identify the exact name of the pod you want to connect to. You may need to specify the namespace if it&#039;s not the default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get pods -n &amp;lt;target-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Copy the full name of the pod from the output (e.g., &amp;lt;code&amp;gt;your-java-app-pod-name-xyz&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
==== Start the Port Forwarding Session ====&lt;br /&gt;
Use the &amp;lt;code&amp;gt;kubectl port-forward&amp;lt;/code&amp;gt; command to create the tunnel. This command maps a port on your local machine to the JMX port on the pod (assuming the JMX service in the pod is configured to run on port &#039;&#039;&#039;9010&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl port-forward pod/your-java-app-pod-name-xyz 9010:9010 -n &amp;lt;target-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; This command will block your terminal and must be left running for the entire duration of your JMX session. The output will confirm the connection is active.&lt;br /&gt;
&lt;br /&gt;
==== Connect with a JMX Client ====&lt;br /&gt;
While the port-forward is running, open your preferred JMX client (such as &#039;&#039;&#039;JConsole&#039;&#039;&#039; or &#039;&#039;&#039;VisualVM&#039;&#039;&#039;).&lt;br /&gt;
# Select the option to connect to a remote process.&lt;br /&gt;
# For the connection address or service URL, enter: &amp;lt;code&amp;gt;localhost:9010&amp;lt;/code&amp;gt;&lt;br /&gt;
# Do not specify a username or password unless the JMX service itself is configured to require them.&lt;br /&gt;
&lt;br /&gt;
You should now be connected to the application&#039;s JVM, with access to its live performance metrics.&lt;br /&gt;
&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Developer-Guides]]&lt;br /&gt;
[[Category:JMX]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=335</id>
		<title>Connecting to a Kubernetes Pod for JMX Debugging</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Connecting_to_a_Kubernetes_Pod_for_JMX_Debugging&amp;diff=335"/>
		<updated>2025-09-05T10:42:02Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;= Connecting to a Kubernetes Pod for JMX Debugging =  This guide provides a comprehensive walkthrough for developers to permanently configure access to a Kubernetes cluster and connect to a Java application for JMX monitoring. This method is suitable for long-term development and debugging needs.  The process involves three main stages: # Installing the Kubernetes command-line tool, &amp;#039;&amp;#039;&amp;#039;kubectl&amp;#039;&amp;#039;&amp;#039;. # Configuring permanent access to the cluster by merging the provided &amp;#039;&amp;#039;&amp;#039;k...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Connecting to a Kubernetes Pod for JMX Debugging =&lt;br /&gt;
&lt;br /&gt;
This guide provides a comprehensive walkthrough for developers to permanently configure access to a Kubernetes cluster and connect to a Java application for JMX monitoring. This method is suitable for long-term development and debugging needs.&lt;br /&gt;
&lt;br /&gt;
The process involves three main stages:&lt;br /&gt;
# Installing the Kubernetes command-line tool, &#039;&#039;&#039;kubectl&#039;&#039;&#039;.&lt;br /&gt;
# Configuring permanent access to the cluster by merging the provided &#039;&#039;&#039;kubeconfig&#039;&#039;&#039; file.&lt;br /&gt;
# Forwarding a local port to the pod to establish a secure JMX connection.&lt;br /&gt;
&lt;br /&gt;
=== 1. Prerequisite: Install kubectl ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kubectl&#039;&#039;&#039; is the command-line tool for interacting with the Kubernetes API. Before proceeding, you must install it on your local machine.&lt;br /&gt;
&lt;br /&gt;
==== Windows ====&lt;br /&gt;
The recommended method for Windows is to use a package manager. Open a PowerShell terminal &#039;&#039;&#039;as an Administrator&#039;&#039;&#039; and run one of the following commands. [https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ Official Windows Install Docs]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;powershell&amp;quot;&amp;gt;&lt;br /&gt;
# Using Chocolatey package manager&lt;br /&gt;
choco install kubernetes-cli&lt;br /&gt;
&lt;br /&gt;
# Or, using Scoop package manager&lt;br /&gt;
scoop install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== macOS ====&lt;br /&gt;
On macOS, the standard installation method is via the Homebrew package manager.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Linux (Debian/Ubuntu) ====&lt;br /&gt;
On Debian-based systems, you can install kubectl with the &amp;lt;code&amp;gt;apt&amp;lt;/code&amp;gt; package manager after adding the official Kubernetes package repository; the one-time repository setup is described in the linked docs. [https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/ Official Linux Install Docs]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# With the Kubernetes apt repository configured:&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install -y kubectl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After installation, verify that &#039;&#039;&#039;kubectl&#039;&#039;&#039; is available in your path by running:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl version --client&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Configure Permanent Cluster Access ===&lt;br /&gt;
&lt;br /&gt;
To ensure &#039;&#039;&#039;kubectl&#039;&#039;&#039; has permanent access, you must merge the connection details from the provided kubeconfig file (e.g., &amp;lt;code&amp;gt;new-cluster.yaml&amp;lt;/code&amp;gt;) into your default configuration file, which is located at &amp;lt;code&amp;gt;~/.kube/config&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Back Up Your Existing Configuration ====&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before you modify your kubeconfig, always create a backup. This prevents the loss of access to other clusters.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On Linux or macOS&lt;br /&gt;
cp ~/.kube/config ~/.kube/config.backup&lt;br /&gt;
&lt;br /&gt;
# On Windows (in Command Prompt)&lt;br /&gt;
copy %USERPROFILE%\.kube\config %USERPROFILE%\.kube\config.backup&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Merge the New and Existing Configurations ====&lt;br /&gt;
The safest way to merge is to use &#039;&#039;&#039;kubectl&#039;&#039;&#039; to combine the files and output a new, merged configuration. This command reads both your old and new files and flattens them into a single valid file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On Linux or macOS&lt;br /&gt;
KUBECONFIG=~/.kube/config:/path/to/your/new-cluster.yaml kubectl config view --flatten &amp;gt; ~/.kube/config_merged&lt;br /&gt;
mv ~/.kube/config_merged ~/.kube/config&lt;br /&gt;
&lt;br /&gt;
# On Windows (in PowerShell)&lt;br /&gt;
$env:KUBECONFIG=&amp;quot;$env:USERPROFILE\.kube\config;\path\to\your\new-cluster.yaml&amp;quot;&lt;br /&gt;
# Write to a temporary file first, then replace the original&lt;br /&gt;
kubectl config view --flatten | Out-File -FilePath $env:USERPROFILE\.kube\config_merged -Encoding utf8&lt;br /&gt;
Move-Item -Force $env:USERPROFILE\.kube\config_merged $env:USERPROFILE\.kube\config&lt;br /&gt;
# Unset the temporary variable&lt;br /&gt;
Remove-Item Env:KUBECONFIG&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Verify and Switch Context ====&lt;br /&gt;
After merging, check that the new cluster &amp;quot;context&amp;quot; is available and switch to it. The context name will be defined within the provided kubeconfig file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List all available cluster contexts&lt;br /&gt;
kubectl config get-contexts&lt;br /&gt;
&lt;br /&gt;
# Switch to the new context to make it active&lt;br /&gt;
kubectl config use-context &amp;lt;new-context-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Test the connection by listing pods (in the default namespace)&lt;br /&gt;
kubectl get pods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If the last command returns a list of pods (or an empty list with no errors), your permanent access is configured correctly.&lt;br /&gt;
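&lt;br /&gt;
Optionally, set a default namespace for the new context so that later commands do not need an explicit &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; flag (the namespace name below is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Make the chosen namespace the default for the active context&lt;br /&gt;
kubectl config set-context --current --namespace=&amp;lt;your-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;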
&lt;br /&gt;
=== 3. Forward a Port for JMX Connection ===&lt;br /&gt;
&lt;br /&gt;
With cluster access established, you can now create a secure tunnel from your local machine to the JMX port of the Java application running inside a pod. [https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ Port Forwarding Docs]&lt;br /&gt;
&lt;br /&gt;
==== Find the Target Pod Name ====&lt;br /&gt;
First, identify the exact name of the pod you want to connect to.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get pods -n &amp;lt;target-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Copy the full name of the pod from the output (e.g., &amp;lt;code&amp;gt;your-java-app-pod-name-xyz&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
==== Start the Port Forwarding Session ====&lt;br /&gt;
Use the &amp;lt;code&amp;gt;kubectl port-forward&amp;lt;/code&amp;gt; command to create the tunnel. This command maps a port on your local machine to the JMX port on the pod (assuming the JMX service in the pod is configured to run on port &#039;&#039;&#039;9010&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl port-forward pod/your-java-app-pod-name-xyz 9010:9010 -n &amp;lt;target-namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; This command will block your terminal and must be left running for the entire duration of your JMX session. The output will confirm the connection is active.&lt;br /&gt;
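&lt;br /&gt;
If the tunnel is active but the client still cannot attach, the JVM inside the pod may not have remote JMX enabled. A typical set of startup flags for an unauthenticated debugging setup looks like the following (an illustrative sketch; the exact flags and port depend on how the application is deployed):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
-Dcom.sun.management.jmxremote&lt;br /&gt;
-Dcom.sun.management.jmxremote.port=9010&lt;br /&gt;
-Dcom.sun.management.jmxremote.rmi.port=9010&lt;br /&gt;
-Dcom.sun.management.jmxremote.authenticate=false&lt;br /&gt;
-Dcom.sun.management.jmxremote.ssl=false&lt;br /&gt;
-Djava.rmi.server.hostname=127.0.0.1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Pinning &amp;lt;code&amp;gt;rmi.port&amp;lt;/code&amp;gt; to the same value as the JMX port matters here, because only that single port is forwarded through the tunnel.&lt;br /&gt;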
&lt;br /&gt;
==== Connect with a JMX Client ====&lt;br /&gt;
While the port-forward is running, open your preferred JMX client (such as &#039;&#039;&#039;JConsole&#039;&#039;&#039; or &#039;&#039;&#039;VisualVM&#039;&#039;&#039;).&lt;br /&gt;
# Select the option to connect to a remote process.&lt;br /&gt;
# For the connection address or service URL, enter: &amp;lt;code&amp;gt;localhost:9010&amp;lt;/code&amp;gt;&lt;br /&gt;
# Do not specify a username or password unless the JMX service itself is configured to require them.&lt;br /&gt;
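&lt;br /&gt;
If your client asks for a full JMX service URL rather than a plain host and port, the standard RMI form for this tunnel is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;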
&lt;br /&gt;
You should now be connected to the application&#039;s JVM, with access to its live performance metrics.&lt;br /&gt;
&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Developer-Guides]]&lt;br /&gt;
[[Category:JMX]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Diagnosing_Resource_Issues_in_a_Kubernetes_Cluster&amp;diff=334</id>
		<title>Diagnosing Resource Issues in a Kubernetes Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Diagnosing_Resource_Issues_in_a_Kubernetes_Cluster&amp;diff=334"/>
		<updated>2025-09-04T12:18:16Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Diagnosing Resource Issues in a Kubernetes Cluster ==&lt;br /&gt;
This guide provides a systematic approach to identifying and diagnosing CPU and memory resource problems within a Kubernetes cluster. It covers checking node and pod resource utilization, from a cluster-wide overview to a detailed analysis of a single node.&lt;br /&gt;
&lt;br /&gt;
=== 1. Prerequisite: Install the Metrics Server ===&lt;br /&gt;
The &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; command is the primary tool for checking real-time resource usage. This command relies on the Metrics Server, which aggregates resource data from each node. Before proceeding, you must ensure it is installed and running.&lt;br /&gt;
&lt;br /&gt;
First, check if the Metrics Server is already deployed:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get deployment metrics-server -n kube-system&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the command does not return a running deployment, install it. The following command deploys the latest version:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; After applying, it may take a few minutes for the Metrics Server to become fully operational and start reporting metrics. [https://k8studio.io/knowledge-base/how-to-find-memory-metrics-for-a-kubernetes-pod/ k8studio]&lt;br /&gt;
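&lt;br /&gt;
You can confirm that the metrics API has been registered and is available before moving on:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# The AVAILABLE column should report True once the Metrics Server is ready&lt;br /&gt;
kubectl get apiservice v1beta1.metrics.k8s.io&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;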
&lt;br /&gt;
=== 2. High-Level Cluster Overview ===&lt;br /&gt;
Start by assessing the overall health of your nodes to identify any that are under pressure. [https://signoz.io/blog/kubectl-top/ signoz]&lt;br /&gt;
&lt;br /&gt;
==== Check Node Utilization ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;kubectl top node&amp;lt;/code&amp;gt; to get a summary of CPU and memory usage for every node in the cluster. This helps you quickly spot a node that is running hotter than others. [https://signoz.io/blog/kubectl-top/ signoz]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl top node&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%&lt;br /&gt;
vm-k3s-wkr-1   350m         8%     2840Mi          73%&lt;br /&gt;
vm-k3s-wkr-2   450m         11%    3150Mi          81%&lt;br /&gt;
vm-k3s-wkr-3   1200m        30%    3500Mi          90%&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check Top Pods Across the Cluster ====&lt;br /&gt;
To find the most resource-intensive pods across all namespaces, use &amp;lt;code&amp;gt;kubectl top pod&amp;lt;/code&amp;gt; combined with the &amp;lt;code&amp;gt;--sort-by&amp;lt;/code&amp;gt; flag. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes+1]&lt;br /&gt;
&lt;br /&gt;
To find the top CPU-consuming pods:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Sorts by the CPU column in descending order&lt;br /&gt;
kubectl top pod --all-namespaces --sort-by=cpu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the top memory-consuming pods:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Sorts by the Memory column in descending order&lt;br /&gt;
kubectl top pod --all-namespaces --sort-by=memory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands direct your attention to the specific applications that are consuming the most resources cluster-wide. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&lt;br /&gt;
=== 3. Deep Dive: Analyzing Pods on a Specific Node ===&lt;br /&gt;
If a particular node shows high utilization (e.g., &amp;lt;code&amp;gt;vm-k3s-wkr-3&amp;lt;/code&amp;gt; from the example above), the next step is to identify which pods on that specific node are responsible.&lt;br /&gt;
&lt;br /&gt;
The following command lists all pods on a designated node and sorts them by memory usage in descending order. This is highly effective for pinpointing the source of node pressure.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Define the target node name&lt;br /&gt;
NODE_NAME=&amp;quot;vm-k3s-wkr-3&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Get a list of pod names on that node&lt;br /&gt;
POD_NAMES=$(kubectl get pods --all-namespaces --field-selector spec.nodeName=${NODE_NAME} -o=custom-columns=NAME:.metadata.name --no-headers)&lt;br /&gt;
&lt;br /&gt;
# Filter the &#039;kubectl top&#039; output to show only those pods, then sort by memory (4th column)&lt;br /&gt;
kubectl top pods --all-namespaces --no-headers | grep -E &amp;quot;${POD_NAMES//\ /|}&amp;quot; | sort -k4 -h -r&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Command Breakdown ====&lt;br /&gt;
* &amp;lt;code&amp;gt;kubectl get pods ...&amp;lt;/code&amp;gt;: This command uses a &amp;lt;code&amp;gt;--field-selector&amp;lt;/code&amp;gt; to retrieve only the pods scheduled on &amp;lt;code&amp;gt;spec.nodeName=${NODE_NAME}&amp;lt;/code&amp;gt;. It outputs just their names. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes]&lt;br /&gt;
* &amp;lt;code&amp;gt;kubectl top pods ...&amp;lt;/code&amp;gt;: This fetches the live CPU and memory usage for all pods in the cluster. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes]&lt;br /&gt;
* &amp;lt;code&amp;gt;grep -E &amp;quot;${POD_NAMES//\ /|}&amp;quot;&amp;lt;/code&amp;gt;: This filters the full &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt; output, showing only the lines that match the pod names running on your target node. The &amp;lt;code&amp;gt;${POD_NAMES//\ /|}&amp;lt;/code&amp;gt; substitution turns space-separated names into a &amp;lt;code&amp;gt;grep -E&amp;lt;/code&amp;gt; alternation pattern (e.g., &amp;lt;code&amp;gt;pod-a|pod-b|pod-c&amp;lt;/code&amp;gt;); because &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; actually emits one name per line, &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt; also treats each line of the pattern as a separate alternative, so the filter works either way.&lt;br /&gt;
* &amp;lt;code&amp;gt;sort -k4 -h -r&amp;lt;/code&amp;gt;: This sorts the final list by the fourth column (MEMORY) in human-readable (&amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt;) and reverse (&amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;) order, placing the heaviest pod at the top. [https://k8studio.io/knowledge-base/how-to-find-memory-metrics-for-a-kubernetes-pod/ k8studio]&lt;br /&gt;
&lt;br /&gt;
=== 4. Inspecting Problematic Pods ===&lt;br /&gt;
Once you have identified a high-resource pod, use the following commands to investigate further.&lt;br /&gt;
&lt;br /&gt;
==== Check Pod Events and Configuration ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;kubectl describe&amp;lt;/code&amp;gt; to check for important events (like &amp;lt;code&amp;gt;OOMKilled&amp;lt;/code&amp;gt;), and to see the pod&#039;s configured resource requests and limits. Comparing actual usage from &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; against these limits is a critical diagnostic step. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl describe pod &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
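&lt;br /&gt;
To compare the configured values against live usage at a glance, you can also print just the resource stanza of each container (a sketch using a generic &amp;lt;code&amp;gt;jsonpath&amp;lt;/code&amp;gt; query; substitute your own pod and namespace):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# One line per container: its name, then its requests/limits map&lt;br /&gt;
kubectl get pod &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt; \&lt;br /&gt;
  -o jsonpath=&#039;{range .spec.containers[*]}{.name}{&amp;quot;\t&amp;quot;}{.resources}{&amp;quot;\n&amp;quot;}{end}&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;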
&lt;br /&gt;
==== Check Application Logs ====&lt;br /&gt;
Application-level errors are often the root cause of high resource usage. Check the pod&#039;s logs for stack traces, memory leak warnings, or other errors. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl logs &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By following this structured process, from a high-level overview to a granular, node-specific analysis, you can efficiently diagnose and resolve most common resource issues in a Kubernetes cluster.&lt;br /&gt;
&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Diagnosing_Resource_Issues_in_a_Kubernetes_Cluster&amp;diff=333</id>
		<title>Diagnosing Resource Issues in a Kubernetes Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Diagnosing_Resource_Issues_in_a_Kubernetes_Cluster&amp;diff=333"/>
		<updated>2025-09-04T12:11:42Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Diagnosing Resource Issues in a Kubernetes Cluster ==&lt;br /&gt;
This guide provides a systematic approach to identifying and diagnosing CPU and memory resource problems within a Kubernetes cluster. It covers checking node and pod resource utilization, from a cluster-wide overview to a detailed analysis of a single node.&lt;br /&gt;
&lt;br /&gt;
=== 1. Prerequisite: Install the Metrics Server ===&lt;br /&gt;
The &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; command is the primary tool for checking real-time resource usage. This command relies on the Metrics Server, which aggregates resource data from each node. Before proceeding, you must ensure it is installed and running.&lt;br /&gt;
&lt;br /&gt;
First, check if the Metrics Server is already deployed:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get deployment metrics-server -n kube-system&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the command does not return a running deployment, install it. The following command deploys the latest version:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; After applying, it may take a few minutes for the Metrics Server to become fully operational and start reporting metrics. [https://k8studio.io/knowledge-base/how-to-find-memory-metrics-for-a-kubernetes-pod/ k8studio]&lt;br /&gt;
&lt;br /&gt;
=== 2. High-Level Cluster Overview ===&lt;br /&gt;
Start by assessing the overall health of your nodes to identify any that are under pressure. [https://signoz.io/blog/kubectl-top/ signoz]&lt;br /&gt;
&lt;br /&gt;
==== Check Node Utilization ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;kubectl top node&amp;lt;/code&amp;gt; to get a summary of CPU and memory usage for every node in the cluster. This helps you quickly spot a node that is running hotter than others. [https://signoz.io/blog/kubectl-top/ signoz]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl top node&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%&lt;br /&gt;
vm-k3s-wkr-1   350m         8%     2840Mi          73%&lt;br /&gt;
vm-k3s-wkr-2   450m         11%    3150Mi          81%&lt;br /&gt;
vm-k3s-wkr-3   1200m        30%    3500Mi          90%&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check Top Pods Across the Cluster ====&lt;br /&gt;
To find the most resource-intensive pods across all namespaces, use &amp;lt;code&amp;gt;kubectl top pod&amp;lt;/code&amp;gt; combined with the &amp;lt;code&amp;gt;--sort-by&amp;lt;/code&amp;gt; flag. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes+1]&lt;br /&gt;
&lt;br /&gt;
To find the top CPU-consuming pods:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Sorts by the CPU column in descending order&lt;br /&gt;
kubectl top pod --all-namespaces --sort-by=cpu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the top memory-consuming pods:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Sorts by the Memory column in descending order&lt;br /&gt;
kubectl top pod --all-namespaces --sort-by=memory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands direct your attention to the specific applications that are consuming the most resources cluster-wide. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&lt;br /&gt;
=== 3. Deep Dive: Analyzing Pods on a Specific Node ===&lt;br /&gt;
If a particular node shows high utilization (e.g., &amp;lt;code&amp;gt;vm-k3s-wkr-3&amp;lt;/code&amp;gt; from the example above), the next step is to identify which pods on that specific node are responsible.&lt;br /&gt;
&lt;br /&gt;
The following command lists all pods on a designated node and sorts them by memory usage in descending order. This is highly effective for pinpointing the source of node pressure.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Define the target node name&lt;br /&gt;
NODE_NAME=&amp;quot;vm-k3s-wkr-3&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Get a list of pod names on that node&lt;br /&gt;
POD_NAMES=$(kubectl get pods --all-namespaces --field-selector spec.nodeName=${NODE_NAME} -o=custom-columns=NAME:.metadata.name --no-headers)&lt;br /&gt;
&lt;br /&gt;
# Filter the &#039;kubectl top&#039; output to show only those pods, then sort by memory (4th column)&lt;br /&gt;
kubectl top pods --all-namespaces --no-headers | grep -E &amp;quot;${POD_NAMES//\ /|}&amp;quot; | sort -k4 -h -r&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Command Breakdown ====&lt;br /&gt;
* &amp;lt;code&amp;gt;kubectl get pods ...&amp;lt;/code&amp;gt;: This command uses a &amp;lt;code&amp;gt;--field-selector&amp;lt;/code&amp;gt; to retrieve only the pods scheduled on &amp;lt;code&amp;gt;spec.nodeName=${NODE_NAME}&amp;lt;/code&amp;gt;. It outputs just their names. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes]&lt;br /&gt;
* &amp;lt;code&amp;gt;kubectl top pods ...&amp;lt;/code&amp;gt;: This fetches the live CPU and memory usage for all pods in the cluster. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes]&lt;br /&gt;
* &amp;lt;code&amp;gt;grep -E &amp;quot;${POD_NAMES//\ /|}&amp;quot;&amp;lt;/code&amp;gt;: This filters the full &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt; output, showing only the lines that match the pod names running on your target node. The &amp;lt;code&amp;gt;${POD_NAMES//\ /|}&amp;lt;/code&amp;gt; substitution turns space-separated names into a &amp;lt;code&amp;gt;grep -E&amp;lt;/code&amp;gt; alternation pattern (e.g., &amp;lt;code&amp;gt;pod-a|pod-b|pod-c&amp;lt;/code&amp;gt;); because &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; actually emits one name per line, &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt; also treats each line of the pattern as a separate alternative, so the filter works either way.&lt;br /&gt;
* &amp;lt;code&amp;gt;sort -k4 -h -r&amp;lt;/code&amp;gt;: This sorts the final list by the fourth column (MEMORY) in human-readable (&amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt;) and reverse (&amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;) order, placing the heaviest pod at the top. [https://k8studio.io/knowledge-base/how-to-find-memory-metrics-for-a-kubernetes-pod/ k8studio]&lt;br /&gt;
&lt;br /&gt;
=== 4. Inspecting Problematic Pods ===&lt;br /&gt;
Once you have identified a high-resource pod, use the following commands to investigate further.&lt;br /&gt;
&lt;br /&gt;
==== Check Pod Events and Configuration ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;kubectl describe&amp;lt;/code&amp;gt; to check for important events (like &amp;lt;code&amp;gt;OOMKilled&amp;lt;/code&amp;gt;), and to see the pod&#039;s configured resource requests and limits. Comparing actual usage from &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; against these limits is a critical diagnostic step. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl describe pod &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check Application Logs ====&lt;br /&gt;
Application-level errors are often the root cause of high resource usage. Check the pod&#039;s logs for stack traces, memory leak warnings, or other errors. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl logs &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By following this structured process, from a high-level overview to a granular, node-specific analysis, you can efficiently diagnose and resolve most common resource issues in a Kubernetes cluster.&lt;br /&gt;
&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Diagnosing_Resource_Issues_in_a_Kubernetes_Cluster&amp;diff=332</id>
		<title>Diagnosing Resource Issues in a Kubernetes Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Diagnosing_Resource_Issues_in_a_Kubernetes_Cluster&amp;diff=332"/>
		<updated>2025-09-04T12:08:55Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;== Diagnosing Resource Issues in a Kubernetes Cluster == This guide provides a systematic approach to identifying and diagnosing CPU and memory resource problems within a Kubernetes cluster. It covers checking node and pod resource utilization, from a cluster-wide overview to a detailed analysis of a single node.  === 1. Prerequisite: Install the Metrics Server === The &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; command is the primary tool for checking real-time resource usage. This comman...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Diagnosing Resource Issues in a Kubernetes Cluster ==&lt;br /&gt;
This guide provides a systematic approach to identifying and diagnosing CPU and memory resource problems within a Kubernetes cluster. It covers checking node and pod resource utilization, from a cluster-wide overview to a detailed analysis of a single node.&lt;br /&gt;
&lt;br /&gt;
=== 1. Prerequisite: Install the Metrics Server ===&lt;br /&gt;
The &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; command is the primary tool for checking real-time resource usage. This command relies on the Metrics Server, which aggregates resource data from each node. Before proceeding, you must ensure it is installed and running.&lt;br /&gt;
&lt;br /&gt;
First, check if the Metrics Server is already deployed:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl get deployment metrics-server -n kube-system&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the command does not return a running deployment, install it. The following command deploys the latest version:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; After applying, it may take a few minutes for the Metrics Server to become fully operational and start reporting metrics. [https://k8studio.io/knowledge-base/how-to-find-memory-metrics-for-a-kubernetes-pod/ k8studio]&lt;br /&gt;
&lt;br /&gt;
=== 2. High-Level Cluster Overview ===&lt;br /&gt;
Start by assessing the overall health of your nodes to identify any that are under pressure. [https://signoz.io/blog/kubectl-top/ signoz]&lt;br /&gt;
&lt;br /&gt;
==== Check Node Utilization ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;kubectl top node&amp;lt;/code&amp;gt; to get a summary of CPU and memory usage for every node in the cluster. This helps you quickly spot a node that is running hotter than others. [https://signoz.io/blog/kubectl-top/ signoz]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl top node&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%&lt;br /&gt;
z-k3s-agent-1   350m         8%     2840Mi          73%&lt;br /&gt;
z-k3s-agent-2   450m         11%    3150Mi          81%&lt;br /&gt;
z-k3s-agent-3   1200m        30%    3500Mi          90%&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check Top Pods Across the Cluster ====&lt;br /&gt;
To find the most resource-intensive pods across all namespaces, use &amp;lt;code&amp;gt;kubectl top pod&amp;lt;/code&amp;gt; combined with the &amp;lt;code&amp;gt;--sort-by&amp;lt;/code&amp;gt; flag. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes+1]&lt;br /&gt;
&lt;br /&gt;
To find the top CPU-consuming pods:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Sorts by the CPU column in descending order&lt;br /&gt;
kubectl top pod --all-namespaces --sort-by=cpu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the top memory-consuming pods:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Sorts by the Memory column in descending order&lt;br /&gt;
kubectl top pod --all-namespaces --sort-by=memory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands direct your attention to the specific applications that are consuming the most resources cluster-wide. [https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&lt;br /&gt;
=== 3. Deep Dive: Analyzing Pods on a Specific Node ===&lt;br /&gt;
If a particular node shows high utilization (e.g., &amp;lt;code&amp;gt;z-k3s-agent-3&amp;lt;/code&amp;gt; from the example above), the next step is to identify which pods on that specific node are responsible.&lt;br /&gt;
&lt;br /&gt;
The following command lists all pods on a designated node and sorts them by memory usage in descending order. This is highly effective for pinpointing the source of node pressure.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Define the target node name&lt;br /&gt;
NODE_NAME=&amp;quot;z-k3s-agent-3&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Get a list of pod names on that node&lt;br /&gt;
POD_NAMES=$(kubectl get pods --all-namespaces --field-selector spec.nodeName=${NODE_NAME} -o=custom-columns=NAME:.metadata.name --no-headers)&lt;br /&gt;
&lt;br /&gt;
# Filter the &#039;kubectl top&#039; output to show only those pods, then sort by memory (4th column)&lt;br /&gt;
kubectl top pods --all-namespaces --no-headers | grep -E &amp;quot;${POD_NAMES//\ /|}&amp;quot; | sort -k4 -h -r&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Command Breakdown ====&lt;br /&gt;
* &amp;lt;code&amp;gt;kubectl get pods ...&amp;lt;/code&amp;gt;: This command uses a &amp;lt;code&amp;gt;--field-selector&amp;lt;/code&amp;gt; to retrieve only the pods scheduled on &amp;lt;code&amp;gt;spec.nodeName=${NODE_NAME}&amp;lt;/code&amp;gt;. It outputs just their names. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes]&lt;br /&gt;
* &amp;lt;code&amp;gt;kubectl top pods ...&amp;lt;/code&amp;gt;: This fetches the live CPU and memory usage for all pods in the cluster. [https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/ kubernetes]&lt;br /&gt;
* &amp;lt;code&amp;gt;grep -E &amp;quot;${POD_NAMES//\ /|}&amp;quot;&amp;lt;/code&amp;gt;: This filters the full &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt; output, showing only the lines that match the pod names running on your target node. The &amp;lt;code&amp;gt;${POD_NAMES//\ /|}&amp;lt;/code&amp;gt; substitution turns space-separated names into a &amp;lt;code&amp;gt;grep -E&amp;lt;/code&amp;gt; alternation pattern (e.g., &amp;lt;code&amp;gt;pod-a|pod-b|pod-c&amp;lt;/code&amp;gt;); because &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; actually emits one name per line, &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt; also treats each line of the pattern as a separate alternative, so the filter works either way.&lt;br /&gt;
* &amp;lt;code&amp;gt;sort -k4 -h -r&amp;lt;/code&amp;gt;: This sorts the final list by the fourth column (MEMORY) in human-readable (&amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt;) and reverse (&amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt;) order, placing the heaviest pod at the top. [https://k8studio.io/knowledge-base/how-to-find-memory-metrics-for-a-kubernetes-pod/ k8studio]&lt;br /&gt;
&lt;br /&gt;
=== 4. Inspecting Problematic Pods ===&lt;br /&gt;
Once you have identified a high-resource pod, use the following commands to investigate further.&lt;br /&gt;
&lt;br /&gt;
==== Check Pod Events and Configuration ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;kubectl describe&amp;lt;/code&amp;gt; to check recent events and container status (for example, a previous &amp;lt;code&amp;gt;OOMKilled&amp;lt;/code&amp;gt; termination), and to see the pod&#039;s configured resource requests and limits. Comparing actual usage from &amp;lt;code&amp;gt;kubectl top&amp;lt;/code&amp;gt; against these limits is a critical diagnostic step.[https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl describe pod &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Check Application Logs ====&lt;br /&gt;
Application-level errors are often the root cause of high resource usage. Check the pod&#039;s logs for stack traces, memory leak warnings, or other errors.[https://last9.io/blog/kubectl-top/ last9]&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl logs &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
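&lt;br /&gt;
If the container has already been restarted (common after an &amp;lt;code&amp;gt;OOMKilled&amp;lt;/code&amp;gt; event), the logs of the crashed instance can be retrieved with the &amp;lt;code&amp;gt;--previous&amp;lt;/code&amp;gt; flag:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show logs from the previous (terminated) container instance&lt;br /&gt;
kubectl logs &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace&amp;gt; --previous&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;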
&lt;br /&gt;
By following this structured process, from a high-level overview to a granular, node-specific analysis, you can efficiently diagnose and resolve most common resource issues in a Kubernetes cluster.&lt;br /&gt;
&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Setting_Up_a_WireGuard_Client_with_Policy-Based_Routing_on_OpenWrt&amp;diff=331</id>
		<title>Setting Up a WireGuard Client with Policy-Based Routing on OpenWrt</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Setting_Up_a_WireGuard_Client_with_Policy-Based_Routing_on_OpenWrt&amp;diff=331"/>
		<updated>2025-09-02T20:20:54Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;== Setting Up a WireGuard Client with Policy-Based Routing on OpenWrt == This guide outlines how to configure an OpenWrt router to connect to a commercial or private WireGuard VPN as a client and then use Policy-Based Routing (PBR) to selectively route traffic from specific devices on your LAN through the VPN tunnel. This allows some devices to benefit from the VPN while others use the standard, faster WAN connection.  === 1. Install Required Packages === First, connect...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Setting Up a WireGuard Client with Policy-Based Routing on OpenWrt ==&lt;br /&gt;
This guide outlines how to configure an OpenWrt router to connect to a commercial or private WireGuard VPN as a client and then use Policy-Based Routing (PBR) to selectively route traffic from specific devices on your LAN through the VPN tunnel. This allows some devices to benefit from the VPN while others use the standard, faster WAN connection.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install Required Packages ===&lt;br /&gt;
First, connect to your router via SSH or use the LuCI web interface (&#039;&#039;&#039;System -&amp;gt; Software&#039;&#039;&#039;) to install the necessary packages.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Update package lists&lt;br /&gt;
opkg update&lt;br /&gt;
&lt;br /&gt;
# Install packages for WireGuard and the PBR application&lt;br /&gt;
opkg install luci-app-wireguard wireguard-tools luci-app-pbr&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Configure the WireGuard Client Interface ===&lt;br /&gt;
Next, we will create the network interface for the WireGuard tunnel.&lt;br /&gt;
&lt;br /&gt;
# Navigate to &#039;&#039;&#039;Network -&amp;gt; Interfaces&#039;&#039;&#039; and click &#039;&#039;&#039;Add new interface...&#039;&#039;&#039;.&lt;br /&gt;
# Give the interface a name, for example, `wg_client`.&lt;br /&gt;
# For the protocol, select &#039;&#039;&#039;WireGuard VPN&#039;&#039;&#039;.&lt;br /&gt;
# Click &#039;&#039;&#039;Create interface&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
On the configuration page that appears, fill out the &#039;&#039;&#039;General Settings&#039;&#039;&#039; tab:&lt;br /&gt;
*   &#039;&#039;&#039;Private Key:&#039;&#039;&#039; Paste the private key for your client.&lt;br /&gt;
*   &#039;&#039;&#039;IP Addresses:&#039;&#039;&#039; Enter the IP address assigned to you by the VPN provider (e.g., `10.100.0.253/24`).&lt;br /&gt;
&lt;br /&gt;
Now, move to the &#039;&#039;&#039;Peers&#039;&#039;&#039; tab and click &#039;&#039;&#039;Add peer&#039;&#039;&#039;:&lt;br /&gt;
*   &#039;&#039;&#039;Public Key:&#039;&#039;&#039; Paste the public key of the VPN server.&lt;br /&gt;
*   &#039;&#039;&#039;Allowed IPs:&#039;&#039;&#039; Enter `0.0.0.0/0` and `::/0`. This tells the interface that it is allowed to route all traffic. The PBR service will decide what traffic actually gets sent here.&lt;br /&gt;
*   &#039;&#039;&#039;Endpoint Host:&#039;&#039;&#039; The domain name or IP address of your VPN server.&lt;br /&gt;
*   &#039;&#039;&#039;Endpoint Port:&#039;&#039;&#039; The port your VPN server is listening on.&lt;br /&gt;
*   &#039;&#039;&#039;Persistent Keepalive:&#039;&#039;&#039; A value of `25` is recommended to keep the connection alive behind NAT.&lt;br /&gt;
&lt;br /&gt;
Click &#039;&#039;&#039;Save&#039;&#039;&#039;, then navigate back to &#039;&#039;&#039;Network -&amp;gt; Interfaces&#039;&#039;&#039; and click &#039;&#039;&#039;Save &amp;amp; Apply&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== Resulting UCI Configuration ====&lt;br /&gt;
Your changes will be saved in `/etc/config/network`. The new section will look similar to this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# In /etc/config/network&lt;br /&gt;
&lt;br /&gt;
config interface &#039;wg_client&#039;&lt;br /&gt;
        option proto &#039;wireguard&#039;&lt;br /&gt;
        option private_key &#039;&amp;lt;YOUR_PRIVATE_KEY&amp;gt;&#039;&lt;br /&gt;
        list addresses &#039;10.100.0.253/24&#039;&lt;br /&gt;
        option mtu &#039;1420&#039; # It is good practice to set this manually&lt;br /&gt;
&lt;br /&gt;
config wireguard_wg_client&lt;br /&gt;
        option public_key &#039;&amp;lt;PEER_PUBLIC_KEY&amp;gt;&#039;&lt;br /&gt;
        list allowed_ips &#039;0.0.0.0/0&#039;&lt;br /&gt;
        list allowed_ips &#039;::/0&#039;&lt;br /&gt;
        option endpoint_host &#039;your-vpn-server.com&#039;&lt;br /&gt;
        option endpoint_port &#039;51820&#039;&lt;br /&gt;
        option persistent_keepalive &#039;25&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Configure the Firewall ===&lt;br /&gt;
A dedicated firewall zone is required to manage traffic for the new WireGuard interface.&lt;br /&gt;
&lt;br /&gt;
# Navigate to &#039;&#039;&#039;Network -&amp;gt; Firewall&#039;&#039;&#039;.&lt;br /&gt;
# Under the &#039;&#039;&#039;Zones&#039;&#039;&#039; section, click &#039;&#039;&#039;Add&#039;&#039;&#039;.&lt;br /&gt;
# Configure the new zone as follows:&lt;br /&gt;
#* &#039;&#039;&#039;Name:&#039;&#039;&#039; Give it a descriptive name, like `wg_fw`.&lt;br /&gt;
#* &#039;&#039;&#039;Input:&#039;&#039;&#039; `REJECT`&lt;br /&gt;
#* &#039;&#039;&#039;Output:&#039;&#039;&#039; `ACCEPT`&lt;br /&gt;
#* &#039;&#039;&#039;Forward:&#039;&#039;&#039; `REJECT`&lt;br /&gt;
#* &#039;&#039;&#039;Covered networks:&#039;&#039;&#039; Select your new `wg_client` interface.&lt;br /&gt;
#* &#039;&#039;&#039;Allow forward to destination zones:&#039;&#039;&#039; Select `wan`.&lt;br /&gt;
#* &#039;&#039;&#039;Allow forward from source zones:&#039;&#039;&#039; Select `lan`.&lt;br /&gt;
&lt;br /&gt;
==== Enable MSS Clamping (Crucial for Stability) ====&lt;br /&gt;
While editing the firewall zone, go to the &#039;&#039;&#039;Advanced Settings&#039;&#039;&#039; tab and check the box for &#039;&#039;&#039;MSS clamping&#039;&#039;&#039;. This prevents VPN-related timeout issues (PMTUD black holes).&lt;br /&gt;
&lt;br /&gt;
Click &#039;&#039;&#039;Save&#039;&#039;&#039;, then &#039;&#039;&#039;Save &amp;amp; Apply&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== Resulting UCI Configuration ====&lt;br /&gt;
Your changes will be saved in `/etc/config/firewall`:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# In /etc/config/firewall&lt;br /&gt;
&lt;br /&gt;
config zone&lt;br /&gt;
        option name &#039;wg_fw&#039;&lt;br /&gt;
        option input &#039;REJECT&#039;&lt;br /&gt;
        option output &#039;ACCEPT&#039;&lt;br /&gt;
        option forward &#039;REJECT&#039;&lt;br /&gt;
        list network &#039;wg_client&#039;&lt;br /&gt;
        option mtu_fix &#039;1&#039; # This is the MSS Clamping setting&lt;br /&gt;
&lt;br /&gt;
config forwarding&lt;br /&gt;
        option src &#039;lan&#039;&lt;br /&gt;
        option dest &#039;wg_fw&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Configure Policy-Based Routing ===&lt;br /&gt;
This is where you will specify which devices on your network should use the VPN tunnel.&lt;br /&gt;
&lt;br /&gt;
# Navigate to &#039;&#039;&#039;Services -&amp;gt; Policy-Based Routing&#039;&#039;&#039;.&lt;br /&gt;
# Ensure the service is &#039;&#039;&#039;Enabled&#039;&#039;&#039; at the top of the page.&lt;br /&gt;
# In the &#039;&#039;&#039;Policies&#039;&#039;&#039; section, click &#039;&#039;&#039;Add&#039;&#039;&#039;.&lt;br /&gt;
# Configure the new policy:&lt;br /&gt;
#* &#039;&#039;&#039;Name:&#039;&#039;&#039; A description for the rule (e.g., `Pterodactyl_VPN`).&lt;br /&gt;
#* &#039;&#039;&#039;Local Address / Subnet:&#039;&#039;&#039; The IP address of the device you want to route through the VPN (e.g., `10.0.1.105/32`). You can also specify a whole subnet.&lt;br /&gt;
#* &#039;&#039;&#039;Interface:&#039;&#039;&#039; In the dropdown, select your `wg_client` interface.&lt;br /&gt;
&lt;br /&gt;
Click &#039;&#039;&#039;Save&#039;&#039;&#039;, then &#039;&#039;&#039;Save &amp;amp; Apply&#039;&#039;&#039;. The PBR service will automatically create the necessary firewall marks and IP rules.&lt;br /&gt;
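&lt;br /&gt;
For reference, the policy created above is stored in `/etc/config/pbr`. A roughly equivalent UCI section (option names follow the &#039;&#039;pbr&#039;&#039; package; verify against your installed version) looks like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# In /etc/config/pbr&lt;br /&gt;
&lt;br /&gt;
config policy&lt;br /&gt;
        option name &#039;Pterodactyl_VPN&#039;&lt;br /&gt;
        option src_addr &#039;10.0.1.105/32&#039;&lt;br /&gt;
        option interface &#039;wg_client&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;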
&lt;br /&gt;
=== 5. Apply and Verify ===&lt;br /&gt;
Reboot your router or restart the network, firewall, and PBR services to ensure all settings are active.&lt;br /&gt;
&lt;br /&gt;
==== Verification Steps ====&lt;br /&gt;
# &#039;&#039;&#039;Check PBR Status:&#039;&#039;&#039; Navigate to &#039;&#039;&#039;Services -&amp;gt; Policy-Based Routing&#039;&#039;&#039; and look at the &amp;quot;Active Policies&amp;quot; table. Your new rule should be present and active.&lt;br /&gt;
# &#039;&#039;&#039;Test from a PBR Client:&#039;&#039;&#039; From the device you specified in the PBR rule (`10.0.1.105`), check your public IP. It should be the IP of your VPN server.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl ifconfig.me&lt;br /&gt;
# Expected output: &amp;lt;Your_VPN_Server_IP&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# &#039;&#039;&#039;Test from a Non-PBR Client:&#039;&#039;&#039; From any other device on your LAN, check your public IP. It should be your regular internet IP.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl ifconfig.me&lt;br /&gt;
# Expected output: &amp;lt;Your_ISP_IP&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Networking]]&lt;br /&gt;
[[Category:OpenWRT]]&lt;br /&gt;
[[Category:VPN]]&lt;br /&gt;
[[Category:WireGuard]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Solving_VPN-Related_Network_Timeouts_on_OpenWrt&amp;diff=330</id>
		<title>Solving VPN-Related Network Timeouts on OpenWrt</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Solving_VPN-Related_Network_Timeouts_on_OpenWrt&amp;diff=330"/>
		<updated>2025-09-02T20:19:22Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Solving VPN-Related Network Timeouts on OpenWrt ==&lt;br /&gt;
This guide documents the diagnosis and resolution of a common network issue: intermittent connection timeouts for specific services when traffic is routed through a VPN tunnel (e.g., WireGuard, ZeroTier) on an OpenWrt router. The root cause is a Path MTU Discovery (PMTUD) black hole, and the solution is to enable TCP MSS Clamping in the firewall.&lt;br /&gt;
&lt;br /&gt;
=== 1. The Problem: Connection Timeouts and Stalls ===&lt;br /&gt;
The primary symptom is that certain TCP connections hang and eventually time out, while others work perfectly.&lt;br /&gt;
&lt;br /&gt;
*   &#039;&#039;&#039;Failing Services:&#039;&#039;&#039; Connections to complex websites or APIs that require a larger packet size for their TLS handshake.&lt;br /&gt;
*   &#039;&#039;&#039;Working Services:&#039;&#039;&#039; Connections to simple websites that transfer very little data (e.g., `ifconfig.me`) and standard ICMP (ping) requests.&lt;br /&gt;
&lt;br /&gt;
This issue arises because VPN encapsulation adds overhead to packets. If a router on the internet path has a smaller MTU (Maximum Transmission Unit) than the encapsulated packet, and it is misconfigured to silently drop oversized packets instead of sending a proper ICMP &amp;quot;Fragmentation Needed&amp;quot; message, a PMTUD black hole is created. The connection stalls because the remote server never learns that it needs to send smaller packets.&lt;br /&gt;
&lt;br /&gt;
=== 2. The Diagnostic Process ===&lt;br /&gt;
A systematic approach using standard network tools can definitively identify a PMTUD black hole.&lt;br /&gt;
&lt;br /&gt;
==== Step 1: Confirm the Scope ====&lt;br /&gt;
The issue was reproduced by running `curl` from a client whose traffic was routed through the VPN tunnel. Simple, low-data sites worked, while complex, high-data sites failed. This is a classic indicator of an MTU-related problem.&lt;br /&gt;
&lt;br /&gt;
==== Step 2: Packet Capture with `tcpdump` ====&lt;br /&gt;
The definitive proof came from capturing the raw packet flow with `tcpdump` on the router. The capture showed a consistent pattern for failing connections:&lt;br /&gt;
# &#039;&#039;&#039;Successful Handshake:&#039;&#039;&#039; The initial TCP three-way handshake (`SYN`, `SYN/ACK`, `ACK`) completed successfully.&lt;br /&gt;
# &#039;&#039;&#039;TLS Negotiation Stall:&#039;&#039;&#039; The connection stalled immediately after the handshake when larger packets (like a TLS certificate) were expected.&lt;br /&gt;
# &#039;&#039;&#039;Selective Acknowledgment (SACK):&#039;&#039;&#039; The client&#039;s kernel sent `SACK` packets. This was the &amp;quot;smoking gun,&amp;quot; as it proved the client was receiving &#039;&#039;some&#039;&#039; data but was acknowledging that other segments were missing.&lt;br /&gt;
# &#039;&#039;&#039;Timeout:&#039;&#039;&#039; The connection eventually hung and was closed by the client.&lt;br /&gt;
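&lt;br /&gt;
This capture can be reproduced on the router itself; a typical invocation (the interface name is an example, substitute your own VPN interface):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Capture TCP traffic on the tunnel interface; -n disables DNS resolution&lt;br /&gt;
tcpdump -i wg_client -n &#039;tcp port 443&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;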
&lt;br /&gt;
=== 3. The Solution: Enable TCP MSS Clamping in OpenWrt ===&lt;br /&gt;
While manually setting the MTU on the VPN interface (e.g., to `1420` for WireGuard) is a necessary first step, it does not always solve the problem if the internet path has a non-standard MTU. The most robust solution is to enable TCP MSS Clamping. This instructs the router to automatically resize TCP segments to prevent fragmentation.&lt;br /&gt;
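&lt;br /&gt;
The value that MSS Clamping enforces follows directly from the tunnel MTU: the TCP MSS is the MTU minus the IPv4 header (20 bytes) and the TCP header (20 bytes). A quick check of the arithmetic for a typical WireGuard MTU:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# MSS = MTU - IPv4 header (20) - TCP header (20)&lt;br /&gt;
MTU=1420&lt;br /&gt;
echo $(( MTU - 20 - 20 ))&lt;br /&gt;
# 1380&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;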
&lt;br /&gt;
On OpenWrt, this is accomplished easily through the LuCI web interface or by editing `/etc/config/firewall`. The key is to add the `mtu_fix` option to the firewall zone handling VPN traffic.&lt;br /&gt;
&lt;br /&gt;
==== Corrected Firewall Zone Configuration ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# In /etc/config/firewall&lt;br /&gt;
&lt;br /&gt;
config zone&lt;br /&gt;
        option name &#039;vpn_zone&#039; #&amp;lt;-- Your VPN zone name&lt;br /&gt;
        # ... other options ...&lt;br /&gt;
        list network &#039;your_vpn_interface&#039; #&amp;lt;-- e.g., &#039;wg_jgy_internal&#039;&lt;br /&gt;
        option mtu_fix &#039;1&#039; # &amp;lt;-- This line is the fix&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This setting is a best practice and should be enabled for &#039;&#039;&#039;all VPN-related firewall zones&#039;&#039;&#039; to ensure reliable network connectivity across any internet path.&lt;br /&gt;
&lt;br /&gt;
[[Category:Networking]]&lt;br /&gt;
[[Category:OpenWRT]]&lt;br /&gt;
[[Category:VPN]]&lt;br /&gt;
[[Category:Troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Solving_VPN-Related_Network_Timeouts_on_OpenWrt&amp;diff=329</id>
		<title>Solving VPN-Related Network Timeouts on OpenWrt</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Solving_VPN-Related_Network_Timeouts_on_OpenWrt&amp;diff=329"/>
		<updated>2025-09-02T20:17:43Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;== Solving VPN-Related Network Timeouts on OpenWrt == This guide documents the diagnosis and resolution of a common network issue: intermittent connection timeouts for specific services when traffic is routed through a VPN tunnel (e.g., WireGuard, ZeroTier) on an OpenWrt router. The root cause is a Path MTU Discovery (PMTUD) black hole, and the solution is to enable TCP MSS Clamping in the firewall.  === 1. The Problem: Connection Timeouts and Stalls === The primary symp...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Solving VPN-Related Network Timeouts on OpenWrt ==&lt;br /&gt;
This guide documents the diagnosis and resolution of a common network issue: intermittent connection timeouts for specific services when traffic is routed through a VPN tunnel (e.g., WireGuard, ZeroTier) on an OpenWrt router. The root cause is a Path MTU Discovery (PMTUD) black hole, and the solution is to enable TCP MSS Clamping in the firewall.&lt;br /&gt;
&lt;br /&gt;
=== 1. The Problem: Connection Timeouts and Stalls ===&lt;br /&gt;
The primary symptom is that certain TCP connections hang and eventually time out, while others work perfectly.&lt;br /&gt;
&lt;br /&gt;
*   &#039;&#039;&#039;Failing Services:&#039;&#039;&#039; Connections to complex websites or APIs that require a larger packet size for their TLS handshake.&lt;br /&gt;
*   &#039;&#039;&#039;Working Services:&#039;&#039;&#039; Connections to simple websites that transfer very little data (e.g., `ifconfig.me`) and standard ICMP (ping) requests.&lt;br /&gt;
&lt;br /&gt;
This issue arises because VPN encapsulation adds overhead to packets. If a router on the internet path has a smaller MTU (Maximum Transmission Unit) than the encapsulated packet, and it is misconfigured to silently drop oversized packets instead of sending a proper ICMP &amp;quot;Fragmentation Needed&amp;quot; message, a PMTUD black hole is created. The connection stalls because the remote server never learns that it needs to send smaller packets.&lt;br /&gt;
&lt;br /&gt;
=== 2. The Diagnostic Process ===&lt;br /&gt;
A systematic approach using standard network tools can definitively identify a PMTUD black hole.&lt;br /&gt;
&lt;br /&gt;
==== Step 1: Confirm the Scope ====&lt;br /&gt;
The issue was reproduced by running `curl` from a client whose traffic was routed through the VPN tunnel. Simple, low-data sites worked, while complex, high-data sites failed. This is a classic indicator of an MTU-related problem.&lt;br /&gt;
&lt;br /&gt;
==== Step 2: Packet Capture with `tcpdump` ====&lt;br /&gt;
The definitive proof came from capturing the raw packet flow with `tcpdump` on the router. The capture showed a consistent pattern for failing connections:&lt;br /&gt;
# &#039;&#039;&#039;Successful Handshake:&#039;&#039;&#039; The initial TCP three-way handshake (`SYN`, `SYN/ACK`, `ACK`) completed successfully.&lt;br /&gt;
# &#039;&#039;&#039;TLS Negotiation Stall:&#039;&#039;&#039; The connection stalled immediately after the handshake when larger packets (like a TLS certificate) were expected.&lt;br /&gt;
# &#039;&#039;&#039;Selective Acknowledgment (SACK):&#039;&#039;&#039; The client&#039;s kernel sent `SACK` packets. This was the &amp;quot;smoking gun,&amp;quot; as it proved the client was receiving &#039;&#039;some&#039;&#039; data but was acknowledging that other segments were missing.&lt;br /&gt;
# &#039;&#039;&#039;Timeout:&#039;&#039;&#039; The connection eventually hung and was closed by the client.&lt;br /&gt;
&lt;br /&gt;
=== 3. The Solution: Enable TCP MSS Clamping in OpenWrt ===&lt;br /&gt;
While manually setting the MTU on the VPN interface (e.g., to `1420` for WireGuard) is a necessary first step, it does not always solve the problem if the internet path has a non-standard MTU. The most robust solution is to enable TCP MSS Clamping. This instructs the router to automatically resize TCP segments to prevent fragmentation.&lt;br /&gt;
&lt;br /&gt;
On OpenWrt, this is accomplished easily through the LuCI web interface or by editing `/etc/config/firewall`. The key is to add the `mtu_fix` option to the firewall zone handling VPN traffic.&lt;br /&gt;
&lt;br /&gt;
==== Corrected Firewall Zone Configuration ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# In /etc/config/firewall&lt;br /&gt;
&lt;br /&gt;
config zone&lt;br /&gt;
        option name &#039;vpn_zone&#039; #&amp;lt;-- Your VPN zone name&lt;br /&gt;
        # ... other options ...&lt;br /&gt;
        list network &#039;your_vpn_interface&#039; #&amp;lt;-- e.g., &#039;wg_jgy_internal&#039;&lt;br /&gt;
        option mtu_fix &#039;1&#039; # &amp;lt;-- This line is the fix&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This setting is a best practice and should be enabled for &#039;&#039;&#039;all VPN-related firewall zones&#039;&#039;&#039; to ensure reliable network connectivity across any internet path.&lt;br /&gt;
&lt;br /&gt;
[[Category:Networking]]&lt;br /&gt;
[[Category:OpenWrt]]&lt;br /&gt;
[[Category:VPN]]&lt;br /&gt;
[[Category:Troubleshooting]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitor_Proxmox_with_Prometheus_Exporter_on_Kubernetes&amp;diff=328</id>
		<title>Monitor Proxmox with Prometheus Exporter on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitor_Proxmox_with_Prometheus_Exporter_on_Kubernetes&amp;diff=328"/>
		<updated>2025-08-29T17:26:05Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Gyurci08 moved page Monitor Proxmox with Prometheus Exporter on Kubernetes to Monitoring PVE 8 via Prometheus on Kubernetes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Monitoring PVE 8 via Prometheus on Kubernetes]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=327</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=327"/>
		<updated>2025-08-29T17:26:03Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Gyurci08 moved page Monitor Proxmox with Prometheus Exporter on Kubernetes to Monitoring PVE 8 via Prometheus on Kubernetes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter (PVE 8) ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. This architecture uses a single exporter instance that is dynamically configured at startup to use unique, per-host API tokens. This provides the operational simplicity of a single deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `pve-node-1`, `pve-node-2`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;pve-node-1&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;pve-node-2&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
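&lt;br /&gt;
Before wiring a token into Kubernetes, it can be verified directly against the Proxmox API (&amp;lt;HOST&amp;gt; and &amp;lt;SECRET&amp;gt; are placeholders; &amp;lt;code&amp;gt;-k&amp;lt;/code&amp;gt; skips certificate verification, matching the &amp;lt;code&amp;gt;verify_ssl: false&amp;lt;/code&amp;gt; setting used later):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# A successful response returns JSON containing the PVE version&lt;br /&gt;
curl -k -H &amp;quot;Authorization: PVEAPIToken=pve-exporter@pve!exporter-token=&amp;lt;SECRET&amp;gt;&amp;quot; \&lt;br /&gt;
  https://&amp;lt;HOST&amp;gt;:8006/api2/json/version&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;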
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. The keys in the secret (`pve-node-1-token`, `pve-node-2-token`) must match the environment variable names used in the `initContainer`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  pve-node-1-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_1&amp;quot;&lt;br /&gt;
  pve-node-2-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_2&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # --- Module for pve-node-1 ---&lt;br /&gt;
    pve-node-1:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_1_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for pve-node-2 ---&lt;br /&gt;
    pve-node-2:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_2_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_NODE_2_TOKEN}|${PVE_NODE_2_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_NODE_1_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-1-token&lt;br /&gt;
        - name: PVE_NODE_2_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-2-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a second YAML file (e.g., `pve-scrape-config.yaml`) for the `ScrapeConfig`. It tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` URL parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-nodes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    # This label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
    prometheus: my-prometheus&lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - pve-node-1.your-domain.com&lt;br /&gt;
        - pve-node-2.your-domain.com&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;pve-node-1&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
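&lt;br /&gt;
As a quick offline sanity check, Rule 2 can be replayed in a few lines of Python. This is only an illustrative sketch: it mimics Prometheus, which fully anchors the relabeling regex and, with no explicit `replacement`, writes `$1` (the first capture group) into the target label.&lt;br /&gt;

```python
import re

# Rule 2's regex from the ScrapeConfig above. Prometheus anchors the
# expression and substitutes the default replacement "$1" (the first
# capture group) into __param_module.
MODULE_RE = re.compile(r"([^.]+)\..*")

def param_module(address):
    # Return what Rule 2 would place in the 'module' URL parameter.
    m = MODULE_RE.fullmatch(address)
    return m.group(1) if m else None

print(param_module("pve-node-1.your-domain.com"))  # pve-node-1
```
Addresses without a dot do not match the anchored regex, so the label is left unchanged for them.&lt;br /&gt;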
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Prometheus]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=326</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=326"/>
		<updated>2025-08-29T17:25:09Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter (PVE 8) ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. A single exporter instance is dynamically configured at startup with unique, per-host API tokens, combining the operational simplicity of one deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `pve-node-1`, `pve-node-2`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;pve-node-1&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;pve-node-2&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. Each key in the Secret must correspond to an environment variable referenced in the `initContainer`: the key `pve-node-1-token` feeds the variable `PVE_NODE_1_TOKEN`, and `pve-node-2-token` feeds `PVE_NODE_2_TOKEN`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  pve-node-1-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_1&amp;quot;&lt;br /&gt;
  pve-node-2-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_2&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # --- Module for pve-node-1 ---&lt;br /&gt;
    pve-node-1:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_1_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for pve-node-2 ---&lt;br /&gt;
    pve-node-2:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_2_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_NODE_2_TOKEN}|${PVE_NODE_2_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_NODE_1_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-1-token&lt;br /&gt;
        - name: PVE_NODE_2_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-2-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
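&lt;br /&gt;
The substitution the `initContainer` performs can be pictured as plain string replacement. Below is a minimal Python sketch (with a made-up token value, not a real secret) of what the sed pipeline does to the ConfigMap template:&lt;br /&gt;

```python
# Minimal sketch of the initContainer's sed step: replace the literal
# ${PVE_NODE_1_TOKEN} placeholder from the ConfigMap template with the
# value the Secret injects via the environment. Token value is made up.
template = 'token_value: "${PVE_NODE_1_TOKEN}"'
env = {"PVE_NODE_1_TOKEN": "11111111-2222-3333-4444-555555555555"}

processed = template
for name, value in env.items():
    processed = processed.replace("${" + name + "}", value)

print(processed)  # token_value: "11111111-2222-3333-4444-555555555555"
```
The processed file lands on the shared `emptyDir` volume, which the main container mounts read-only at `/etc/prometheus`.&lt;br /&gt;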
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a second YAML file (e.g., `pve-scrape-config.yaml`) for the `ScrapeConfig`. It tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` URL parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-nodes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    # This label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
    prometheus: my-prometheus&lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - pve-node-1.your-domain.com&lt;br /&gt;
        - pve-node-2.your-domain.com&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;pve-node-1&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Prometheus]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=325</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=325"/>
		<updated>2025-08-29T17:17:38Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter and Per-Host Tokens ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. A single exporter instance is dynamically configured at startup with unique, per-host API tokens, combining the operational simplicity of one deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `pve-node-1`, `pve-node-2`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;pve-node-1&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;pve-node-2&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. Each key in the Secret must correspond to an environment variable referenced in the `initContainer`: the key `pve-node-1-token` feeds the variable `PVE_NODE_1_TOKEN`, and `pve-node-2-token` feeds `PVE_NODE_2_TOKEN`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  pve-node-1-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_1&amp;quot;&lt;br /&gt;
  pve-node-2-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_2&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # --- Module for pve-node-1 ---&lt;br /&gt;
    pve-node-1:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_1_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for pve-node-2 ---&lt;br /&gt;
    pve-node-2:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_2_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_NODE_2_TOKEN}|${PVE_NODE_2_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_NODE_1_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-1-token&lt;br /&gt;
        - name: PVE_NODE_2_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-2-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
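&lt;br /&gt;
When adding further hosts, the Secret key, the environment variable, and the sed rule must all stay in step. A small hypothetical helper (not part of the manifests) that encodes the naming convention used here, kebab-case Secret key to upper snake-case variable:&lt;br /&gt;

```python
def env_var_for_secret_key(key):
    # 'pve-node-1-token' -> 'PVE_NODE_1_TOKEN', matching the names the
    # initContainer reads from the Secret in the Deployment above.
    return key.replace("-", "_").upper()

for key in ["pve-node-1-token", "pve-node-2-token"]:
    print(key, "->", env_var_for_secret_key(key))
```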
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a second YAML file (e.g., `pve-scrape-config.yaml`) for the `ScrapeConfig`. It tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` URL parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-nodes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    # This label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
    prometheus: my-prometheus&lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - pve-node-1.your-domain.com&lt;br /&gt;
        - pve-node-2.your-domain.com&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;pve-node-1&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
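&lt;br /&gt;
Putting the four rules together, the request Prometheus ultimately issues for each host looks like the one sketched below. This is an illustrative Python sketch only (parameter order in the real request may differ); the exporter reads `module` and `target` from the query string:&lt;br /&gt;

```python
from urllib.parse import urlencode

def scrape_url(target_address):
    # Net effect of the four relabeling rules: the connection goes to the
    # single exporter Service, while 'module' and 'target' are derived
    # from the original static-config address.
    module = target_address.split(".", 1)[0]           # Rule 2
    query = urlencode({"module": module, "target": target_address})
    return "http://pve-exporter.monitoring.svc:9106/pve?" + query

print(scrape_url("pve-node-1.your-domain.com"))
```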
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Prometheus]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=324</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=324"/>
		<updated>2025-08-29T17:16:12Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter and Per-Host Tokens ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. A single exporter instance is dynamically configured at startup with unique, per-host API tokens, combining the operational simplicity of one deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `pve-node-1`, `pve-node-2`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;pve-node-1&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;pve-node-2&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. Each key in the Secret must correspond to an environment variable referenced in the `initContainer`: the key `pve-node-1-token` feeds the variable `PVE_NODE_1_TOKEN`, and `pve-node-2-token` feeds `PVE_NODE_2_TOKEN`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  pve-node-1-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_1&amp;quot;&lt;br /&gt;
  pve-node-2-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_2&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # The token_name is now consistent across all modules.&lt;br /&gt;
    # --- Module for pve-node-1 ---&lt;br /&gt;
    pve-node-1:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_1_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for pve-node-2 ---&lt;br /&gt;
    pve-node-2:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_2_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_NODE_2_TOKEN}|${PVE_NODE_2_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_NODE_1_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-1-token&lt;br /&gt;
        - name: PVE_NODE_2_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-2-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
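The substitution performed by the `initContainer` can be sanity-checked locally. This sketch uses a one-line stand-in for the template and a placeholder token value:&lt;br /&gt;

```shell
# Miniature stand-in for the ConfigMap's pve.yml template
printf 'token_value: "${PVE_NODE_1_TOKEN}"\n' > /tmp/pve-template.yml

# Placeholder standing in for the real value from the Kubernetes Secret
PVE_NODE_1_TOKEN='aaaa-bbbb'

# Same sed invocation the initContainer runs
sed -e "s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g" \
    /tmp/pve-template.yml > /tmp/pve-processed.yml

cat /tmp/pve-processed.yml
```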
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a final YAML file (e.g., `pve-scrape-config.yaml`) for the `ScrapeConfig`. This tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-nodes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    # This label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
    prometheus: my-prometheus &lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - pve-node-1.your-domain.com&lt;br /&gt;
        - pve-node-2.your-domain.com&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;pve-node-1&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
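Putting the four rules together: for each static target, Prometheus ends up requesting `/pve` from the exporter Service with `module` and `target` URL parameters. A sketch that mirrors the module-extraction regex using shell parameter expansion:&lt;br /&gt;

```shell
HOST='pve-node-1.your-domain.com'
MODULE="${HOST%%.*}"   # part before the first dot, mirroring the relabel regex

# What each scrape resolves to after relabeling
echo "scrape address: pve-exporter.monitoring.svc:9106"
echo "metrics path:   /pve"
echo "module param:   ${MODULE}"
echo "target param:   ${HOST}"
```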
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Prometheus]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=323</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=323"/>
		<updated>2025-08-29T17:16:00Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter and Per-Host Tokens ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. This architecture uses a single exporter instance that is dynamically configured at startup to use unique, per-host API tokens. This provides the operational simplicity of a single deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `pve-node-1`, `pve-node-2`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;pve-node-1&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;pve-node-2&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. The Secret keys (`pve-node-1-token`, `pve-node-2-token`) must match the `secretKeyRef` keys from which the `initContainer`&#039;s environment variables are populated.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  pve-node-1-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_1&amp;quot;&lt;br /&gt;
  pve-node-2-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_2&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # The token_name is now consistent across all modules.&lt;br /&gt;
    # --- Module for pve-node-1 ---&lt;br /&gt;
    pve-node-1:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_1_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for pve-node-2 ---&lt;br /&gt;
    pve-node-2:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_2_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_NODE_2_TOKEN}|${PVE_NODE_2_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_NODE_1_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-1-token&lt;br /&gt;
        - name: PVE_NODE_2_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-2-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a final YAML file (e.g., `pve-scrape-config.yaml`) for the `ScrapeConfig`. This tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-nodes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    # This label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
    prometheus: my-prometheus &lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - pve-node-1.your-domain.com&lt;br /&gt;
        - pve-node-2.your-domain.com&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;pve-node-1&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Observability]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=322</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=322"/>
		<updated>2025-08-29T17:09:29Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter and Per-Host Tokens ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. This architecture uses a single exporter instance that is dynamically configured at startup to use unique, per-host API tokens. This provides the operational simplicity of a single deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `pve-node-1`, `pve-node-2`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;pve-node-1&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;pve-node-2&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. The Secret keys (`pve-node-1-token`, `pve-node-2-token`) must match the `secretKeyRef` keys from which the `initContainer`&#039;s environment variables are populated.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  pve-node-1-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_1&amp;quot;&lt;br /&gt;
  pve-node-2-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_PVE_NODE_2&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # The token_name is now consistent across all modules.&lt;br /&gt;
    # --- Module for pve-node-1 ---&lt;br /&gt;
    pve-node-1:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_1_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for pve-node-2 ---&lt;br /&gt;
    pve-node-2:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_NODE_2_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_NODE_1_TOKEN}|${PVE_NODE_1_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_NODE_2_TOKEN}|${PVE_NODE_2_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_NODE_1_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-1-token&lt;br /&gt;
        - name: PVE_NODE_2_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-secrets&lt;br /&gt;
              key: pve-node-2-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a final YAML file (e.g., `pve-scrape-config.yaml`) for the `ScrapeConfig`. This tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-nodes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    # This label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
    prometheus: my-prometheus &lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - pve-node-1.your-domain.com&lt;br /&gt;
        - pve-node-2.your-domain.com&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;pve-node-1&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
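&lt;br /&gt;
If the pod is running but targets later report errors, the exporter logs usually reveal authentication problems against the Proxmox API (using the `app=pve-exporter` label from the deployment):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Tail the exporter logs; look for 401 or permission errors from the PVE API&lt;br /&gt;
kubectl logs -n monitoring -l app=pve-exporter --tail=20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;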
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=321</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=321"/>
		<updated>2025-08-29T17:06:11Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter and Per-Host Tokens ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. This architecture uses a single exporter instance that is dynamically configured at startup to use unique, per-host API tokens. This provides the operational simplicity of a single deployment with the enhanced security of per-host credentials.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Unique Read-Only API Token on Each Proxmox Host ===&lt;br /&gt;
This setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you wish to monitor (e.g., `ahsoka`, `thrawn`). We will use a consistent user and token name across all hosts for simplicity.&lt;br /&gt;
&lt;br /&gt;
Connect to each Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# On your FIRST Proxmox host (e.g., &#039;ahsoka&#039;), create the user and first token:&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# On ALL SUBSEQUENT hosts (e.g., &#039;thrawn&#039;), the user is synced by the cluster.&lt;br /&gt;
# You only need to create a new token with the same name.&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will generate a &#039;&#039;&#039;unique secret value&#039;&#039;&#039; on each host. You must copy the secret value for &#039;&#039;&#039;each&#039;&#039;&#039; host immediately, as you will not be able to see it again.&lt;br /&gt;
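&lt;br /&gt;
To confirm a token was created on a host before moving on, you can list the user&#039;s tokens (the secret value itself is never shown again):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user token list pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;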
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old token.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
# Only run userdel after removing all tokens for that user from all hosts.&lt;br /&gt;
# pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifests ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporter-full.yaml`). This file contains all the necessary Kubernetes resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, populate the `Secret` with the unique token values you generated on each host. The keys in the secret (`ahsoka-token`, `thrawn-token`) must match the environment variable names used in the `initContainer`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# pve-exporter-full.yaml&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  # Populate with the UNIQUE secret values generated on each Proxmox host&lt;br /&gt;
  ahsoka-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_AHSOKA&amp;quot;&lt;br /&gt;
  thrawn-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_THRAWN&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # The token_name is now consistent across all modules.&lt;br /&gt;
    # --- Module for ahsoka ---&lt;br /&gt;
    ahsoka:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_AHSOKA_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
    # --- Module for thrawn ---&lt;br /&gt;
    thrawn:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_THRAWN_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: jgy-pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_AHSOKA_TOKEN}|${PVE_AHSOKA_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_THRAWN_TOKEN}|${PVE_THRAWN_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_AHSOKA_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: ahsoka-token&lt;br /&gt;
        - name: PVE_THRAWN_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: thrawn-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
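&lt;br /&gt;
After deploying, you can confirm that the init container substituted the real token values into the rendered configuration. This is a quick sketch and assumes the exporter image ships a basic shell with `cat`:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Inspect the processed config file inside the running container&lt;br /&gt;
kubectl exec -n monitoring deploy/jgy-pve-exporter -c pve-exporter -- cat /etc/prometheus/pve.yml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;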
&lt;br /&gt;
=== 3. Create the Prometheus Scrape Configuration ===&lt;br /&gt;
Create a final YAML file for the `ScrapeConfig`. This tells Prometheus how to scrape the single exporter for all your Proxmox hosts, dynamically setting the `target` and `module` parameters for each one.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# pve-scrape-config.yaml&lt;br /&gt;
---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-proxmoxes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    prometheus: jgy-prometheus&lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - ahsoka.tatooine.jgy.local&lt;br /&gt;
        - thrawn.tatooine.jgy.local&lt;br /&gt;
        # Add other hosts here&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    # Rule 1: Take the target address and use it as the &#039;target&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
      &lt;br /&gt;
    # Rule 2: Extract the hostname (e.g., &amp;quot;ahsoka&amp;quot;) and use it as the &#039;module&#039; URL parameter.&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039; # Captures the part before the first dot&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
      &lt;br /&gt;
    # Rule 3: Set the &#039;instance&#039; label to the Proxmox host&#039;s address.&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
      &lt;br /&gt;
    # Rule 4: Rewrite the scrape address to point to our single exporter service.&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: jgy-pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Apply and Verify ===&lt;br /&gt;
Apply the two Kubernetes manifests to your cluster.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=jgy-pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
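&lt;br /&gt;
You can also check from the command line via the Prometheus HTTP API. This sketch assumes the Operator&#039;s default `prometheus-operated` service name; adjust it to your setup:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl port-forward -n monitoring svc/prometheus-operated 9090 &amp;amp;&lt;br /&gt;
# &#039;up&#039; should be 1 for every Proxmox instance&lt;br /&gt;
curl -s &#039;http://localhost:9090/api/v1/query?query=up&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;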
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=320</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=320"/>
		<updated>2025-08-29T17:02:30Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Because it is a security best practice to use a unique API token for each host, this guide deploys a single exporter instance configured with a dedicated token and configuration module for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
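&lt;br /&gt;
You can verify the token works before deploying anything by calling the PVE API directly with the token authorization header (format: `PVEAPIToken=USER@REALM!TOKENID=SECRET`). Substitute your host and the secret you just copied:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -k accepts the default self-signed certificate; replace HOST and SECRET&lt;br /&gt;
curl -k -H &#039;Authorization: PVEAPIToken=pve-exporter@pve!exporter-token=SECRET&#039; \&lt;br /&gt;
  https://HOST:8006/api2/json/nodes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;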
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file contains every Kubernetes resource for one exporter deployment, with a configuration module for each Proxmox host. Below is a complete template for two hosts, `ahsoka` and `thrawn`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   `ahsoka-token` / `thrawn-token` in the `Secret`: the unique secret values you generated on each host.&lt;br /&gt;
*   The module names and host addresses: update with your Proxmox hosts&#039; fully qualified domain names or IP addresses.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  ahsoka-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_AHSOKA&amp;quot;&lt;br /&gt;
  thrawn-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_THRAWN&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # --- Module for ahsoka ---&lt;br /&gt;
    ahsoka:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_AHSOKA_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
&lt;br /&gt;
    # --- Module for thrawn ---&lt;br /&gt;
    thrawn:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_THRAWN_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: jgy-pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_AHSOKA_TOKEN}|${PVE_AHSOKA_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_THRAWN_TOKEN}|${PVE_THRAWN_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_AHSOKA_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: ahsoka-token&lt;br /&gt;
        - name: PVE_THRAWN_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: thrawn-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, create the Prometheus `ScrapeConfig` resource that routes a scrape for each host through the single exporter:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-proxmoxes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    prometheus: jgy-prometheus&lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - ahsoka.tatooine.jgy.local&lt;br /&gt;
        - thrawn.tatooine.jgy.local&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039;&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: jgy-pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply both YAML files to your cluster to deploy all resources (saving the `ScrapeConfig` above as, e.g., `pve-scrape-config.yaml`).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
kubectl apply -f pve-scrape-config.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status using the label defined in the deployment&lt;br /&gt;
kubectl get pods -n monitoring -l app=jgy-pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target for each of your Proxmox hosts (e.g., `ahsoka`, `thrawn`) is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `verify_ssl: false` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set it to `true` if you use a valid, trusted certificate.&lt;br /&gt;
*   The `ScrapeConfig` resource requires the Prometheus Operator. If you are not using it, add an equivalent scrape job directly to your `prometheus.yml` file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=319</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=319"/>
		<updated>2025-08-29T17:01:19Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Because it is a security best practice to use a unique API token for each host, this guide deploys a single exporter instance configured with a dedicated token and configuration module for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file contains every Kubernetes resource for one exporter deployment, with a configuration module for each Proxmox host. Below is a complete template for two hosts, `ahsoka` and `thrawn`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   `ahsoka-token` / `thrawn-token` in the `Secret`: the unique secret values you generated on each host.&lt;br /&gt;
*   The module names and host addresses: update with your Proxmox hosts&#039; fully qualified domain names or IP addresses.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  ahsoka-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_AHSOKA&amp;quot;&lt;br /&gt;
  thrawn-token: &amp;quot;UNIQUE_SECRET_VALUE_FOR_THRAWN&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # --- Module for ahsoka ---&lt;br /&gt;
    ahsoka:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_AHSOKA_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
&lt;br /&gt;
    # --- Module for thrawn ---&lt;br /&gt;
    thrawn:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: exporter-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_THRAWN_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: jgy-pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_AHSOKA_TOKEN}|${PVE_AHSOKA_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_THRAWN_TOKEN}|${PVE_THRAWN_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_AHSOKA_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: ahsoka-token&lt;br /&gt;
        - name: PVE_THRAWN_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: thrawn-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# pve-scrape-config-aligned.yaml&lt;br /&gt;
# ---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1alpha1&lt;br /&gt;
kind: ScrapeConfig&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-proxmoxes&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    prometheus: jgy-prometheus&lt;br /&gt;
spec:&lt;br /&gt;
  staticConfigs:&lt;br /&gt;
    - targets:&lt;br /&gt;
        - ahsoka.local&lt;br /&gt;
  metricsPath: /pve&lt;br /&gt;
  relabelings:&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      targetLabel: __param_target&lt;br /&gt;
    - sourceLabels: [__address__]&lt;br /&gt;
      regex: &#039;([^.]+)\..*&#039;&lt;br /&gt;
      targetLabel: __param_module&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - targetLabel: __address__&lt;br /&gt;
      replacement: jgy-pve-exporter.monitoring.svc:9106&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
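The relabelings in the ScrapeConfig above derive the exporter module from the target&#039;s host name: the regex `([^.]+)\..*` keeps the first DNS label, so `ahsoka.local` is scraped with `module=ahsoka`, matching the module name in `pve.yml`. A quick local sanity check of the regex (a sketch, using `sed` as a stand-in for Prometheus relabeling):&lt;br /&gt;

```shell
# Apply the same extended regex the ScrapeConfig uses for __param_module:
# keep only the first DNS label of the target address.
echo 'ahsoka.local' | sed -E 's/([^.]+)\..*/\1/'
# → ahsoka
```

Add one target per host to `staticConfigs`; the host name before the first dot must match a module defined in the exporter&#039;s `pve.yml`.&lt;br /&gt;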
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status for the exporter&lt;br /&gt;
kubectl get pods -n monitoring -l app=jgy-pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that the targets defined by the `jgy-proxmoxes` ScrapeConfig (e.g., `ahsoka.local`) are present and have a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `verify_ssl: false` setting in each module is used because Proxmox VE defaults to a self-signed SSL certificate. Set it to `true` if you use a valid, trusted certificate.&lt;br /&gt;
*   The `ScrapeConfig` resource requires the Prometheus Operator. If you are not using it, add an equivalent scrape job directly to your `prometheus.yml` file, pointing at `jgy-pve-exporter.monitoring.svc:9106`.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=318</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=318"/>
		<updated>2025-08-29T16:59:41Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Since it is a security best practice to use a unique API token for each host, this guide configures a single exporter with a dedicated module and token for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file will contain a separate set of Kubernetes resources for each Proxmox host. Below is the complete template for a host named `ahsoka`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   The `&amp;quot;asd&amp;quot;` values in the `jgy-pve-exporter-secrets` Secret: the API token secret values you just generated on each host.&lt;br /&gt;
*   `token_name`: must match the name of the API token created in step 1 on that host.&lt;br /&gt;
*   The module names (`ahsoka`, `thrawn`): rename to match your own Proxmox hosts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-secrets&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
type: Opaque&lt;br /&gt;
stringData:&lt;br /&gt;
  ahsoka-token: &amp;quot;asd&amp;quot;&lt;br /&gt;
  thrawn-token: &amp;quot;asd&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-config-template&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
data:&lt;br /&gt;
  pve.yml: |&lt;br /&gt;
    # --- Module for ahsoka ---&lt;br /&gt;
    ahsoka:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: ahsoka-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_AHSOKA_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
&lt;br /&gt;
    # --- Module for thrawn ---&lt;br /&gt;
    thrawn:&lt;br /&gt;
      user: pve-exporter@pve&lt;br /&gt;
      token_name: thrawn-token&lt;br /&gt;
      token_value: &amp;quot;${PVE_THRAWN_TOKEN}&amp;quot;&lt;br /&gt;
      verify_ssl: false&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: config-template-volume&lt;br /&gt;
        configMap:&lt;br /&gt;
          name: jgy-pve-exporter-config-template&lt;br /&gt;
      - name: processed-config-volume&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      - name: tmp&lt;br /&gt;
        emptyDir: {}&lt;br /&gt;
      initContainers:&lt;br /&gt;
      - name: init-config-secrets&lt;br /&gt;
        image: busybox:1.36&lt;br /&gt;
        command: [&#039;/bin/sh&#039;, &#039;-c&#039;]&lt;br /&gt;
        args:&lt;br /&gt;
        - |&lt;br /&gt;
          sed -e &amp;quot;s|\${PVE_AHSOKA_TOKEN}|${PVE_AHSOKA_TOKEN}|g&amp;quot; \&lt;br /&gt;
              -e &amp;quot;s|\${PVE_THRAWN_TOKEN}|${PVE_THRAWN_TOKEN}|g&amp;quot; \&lt;br /&gt;
              /etc/config-template/pve.yml &amp;gt; /etc/processed-config/pve.yml&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_AHSOKA_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: ahsoka-token&lt;br /&gt;
        - name: PVE_THRAWN_TOKEN&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-secrets&lt;br /&gt;
              key: thrawn-token&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: config-template-volume&lt;br /&gt;
          mountPath: /etc/config-template&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/processed-config&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--config.file=/etc/prometheus/pve.yml&amp;quot;&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
          protocol: TCP&lt;br /&gt;
        livenessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 10&lt;br /&gt;
          periodSeconds: 15&lt;br /&gt;
        readinessProbe:&lt;br /&gt;
          httpGet:&lt;br /&gt;
            path: /&lt;br /&gt;
            port: http-metrics&lt;br /&gt;
          initialDelaySeconds: 5&lt;br /&gt;
          periodSeconds: 5&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsNonRoot: true&lt;br /&gt;
          runAsUser: 1000&lt;br /&gt;
          readOnlyRootFilesystem: true&lt;br /&gt;
          allowPrivilegeEscalation: false&lt;br /&gt;
          capabilities:&lt;br /&gt;
            drop:&lt;br /&gt;
            - ALL&lt;br /&gt;
        volumeMounts:&lt;br /&gt;
        - name: processed-config-volume&lt;br /&gt;
          mountPath: /etc/prometheus&lt;br /&gt;
          readOnly: true&lt;br /&gt;
        - name: tmp&lt;br /&gt;
          mountPath: /tmp&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: &#039;0&#039;&lt;br /&gt;
            memory: 256Mi&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To monitor additional hosts, add a new module block to the `pve.yml` ConfigMap, a new token key to the `jgy-pve-exporter-secrets` Secret, and a matching environment variable and `sed` expression in the init container, following the `thrawn` example above.&lt;br /&gt;
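The init container&#039;s `sed` substitution can be sanity-checked locally before deploying. This is a sketch using a throwaway template file; the path and token value are illustrative:&lt;br /&gt;

```shell
# Reproduce the init container's substitution step on a local copy of the template.
# The quoted 'EOF' keeps the placeholder literal in the file.
cat > /tmp/pve-template.yml <<'EOF'
ahsoka:
  token_value: "${PVE_AHSOKA_TOKEN}"
EOF

# In the pod this value is injected from the jgy-pve-exporter-secrets Secret.
PVE_AHSOKA_TOKEN='example-secret'

# Same expression the init container runs; the placeholder is replaced in the output.
sed -e "s|\${PVE_AHSOKA_TOKEN}|${PVE_AHSOKA_TOKEN}|g" /tmp/pve-template.yml
# → ahsoka:
# →   token_value: "example-secret"
```

If the `${PVE_AHSOKA_TOKEN}` placeholder survives in the output, the `sed` expression and the template are out of sync.&lt;br /&gt;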
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status for the exporter&lt;br /&gt;
kubectl get pods -n monitoring -l app=jgy-pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that the `jgy-pve-exporter` target is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `verify_ssl: false` setting in each module is used because Proxmox VE defaults to a self-signed SSL certificate. Set it to `true` if you use a valid, trusted certificate.&lt;br /&gt;
*   This manifest does not include a scrape configuration. Add a `ScrapeConfig` (Prometheus Operator) or a scrape job in your `prometheus.yml` that queries `jgy-pve-exporter.monitoring.svc:9106` at path `/pve`, passing a `module` and `target` parameter for each host.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=317</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=317"/>
		<updated>2025-08-29T16:30:53Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Since it is a security best practice to use a unique API token for each host, this guide details how to deploy a dedicated exporter instance for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum userdel pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file will contain a separate set of Kubernetes resources for each Proxmox host. Below is the complete template for a host named `ahsoka`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   `PVE_TOKEN_NAME`: the name of your token (e.g., `exporter-token`).&lt;br /&gt;
*   `PVE_TOKEN_VALUE` (`&amp;lt;token&amp;gt;`): the secret value you just generated.&lt;br /&gt;
*   `ahsoka.tatooine.jgy.local`: your Proxmox host&#039;s fully qualified domain name or IP address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ==         CONFIGURATION FOR PROXMOX HOST: ahsoka                ==&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ---&lt;br /&gt;
# 1. Secret for &amp;quot;ahsoka&amp;quot; - This holds the unique token components for this host.&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The user part of the token&lt;br /&gt;
  PVE_USER: &amp;quot;pve-exporter@pve&amp;quot;&lt;br /&gt;
  # The name (ID) of the API token&lt;br /&gt;
  PVE_TOKEN_NAME: &amp;quot;exporter-token&amp;quot; # e.g., exporter-token&lt;br /&gt;
  # The secret value of the API token&lt;br /&gt;
  PVE_TOKEN_VALUE: &amp;quot;&amp;lt;token&amp;gt;&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 2. Deployment for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter-ahsoka&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_TOKEN_NAME&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_TOKEN_NAME&lt;br /&gt;
        - name: PVE_TOKEN_VALUE&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_TOKEN_VALUE&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: 50m&lt;br /&gt;
            memory: 64Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: 100m&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
---&lt;br /&gt;
# 3. Service for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 4. ServiceMonitor for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;ahsoka.tatooine.jgy.local&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s FQDN or IP&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To monitor additional hosts, copy and paste this entire four-document block into the same file, then perform a find-and-replace for `ahsoka` with your new host&#039;s name (e.g., `thrawn`) and update the new host&#039;s unique credentials and target address.&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status for all exporters, replacing with your host names&lt;br /&gt;
kubectl get pods -n monitoring --selector=&#039;app in (jgy-pve-exporter-ahsoka, jgy-pve-exporter-thrawn)&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that targets for `jgy-pve-exporter-ahsoka` (and any others you deployed) are present and have a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `PVE_VERIFY_SSL: &amp;quot;false&amp;quot;` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to `&amp;quot;true&amp;quot;` if you use a valid, trusted certificate.&lt;br /&gt;
*   The `ServiceMonitor` resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your `prometheus.yml` file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=316</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=316"/>
		<updated>2025-08-29T16:12:08Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Since it is a security best practice to use a unique API token for each host, this guide details how to deploy a dedicated exporter instance for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum user delete pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file will contain a separate set of Kubernetes resources for each Proxmox host. Below is the complete template for a host named `ahsoka`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   `PVE_TOKEN_NAME`: the name of your token (e.g., `exporter-token`).&lt;br /&gt;
*   `PVE_TOKEN_VALUE` (`&amp;lt;token&amp;gt;`): the secret value you just generated.&lt;br /&gt;
*   `ahsoka.tatooine.jgy.local`: your Proxmox host&#039;s fully qualified domain name or IP address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ==         CONFIGURATION FOR PROXMOX HOST: ahsoka                ==&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ---&lt;br /&gt;
# 1. Secret for &amp;quot;ahsoka&amp;quot; - This holds the unique token components for this host.&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The user part of the token&lt;br /&gt;
  PVE_USER: &amp;quot;pve-exporter@pve&amp;quot;&lt;br /&gt;
  # The name (ID) of the API token&lt;br /&gt;
  PVE_TOKEN_NAME: &amp;quot;YOUR_TOKEN_NAME&amp;quot; # e.g., exporter-token&lt;br /&gt;
  # The secret value of the API token&lt;br /&gt;
  PVE_TOKEN_VALUE: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 2. Deployment for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter-ahsoka&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_TOKEN_NAME&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_TOKEN_NAME&lt;br /&gt;
        - name: PVE_TOKEN_VALUE&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_TOKEN_VALUE&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: 50m&lt;br /&gt;
            memory: 64Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: 100m&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
---&lt;br /&gt;
# 3. Service for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 4. ServiceMonitor for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;ahsoka.tatooine.jgy.local&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s FQDN or IP&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To monitor additional hosts, copy and paste this entire four-document block into the same file, then perform a find-and-replace for `ahsoka` with your new host&#039;s name (e.g., `thrawn`) and update the new host&#039;s unique credentials and target address.&lt;br /&gt;
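&lt;br /&gt;
The find-and-replace can also be scripted. The snippet below is only a sketch: it assumes the four-document template is saved separately as `ahsoka-block.yaml`, that the string `ahsoka` occurs nowhere in it except in the names and target to be swapped, and that the new host&#039;s token secret is filled in by hand afterwards.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Append a copy of the four-document block, renamed for host &amp;quot;thrawn&amp;quot;&lt;br /&gt;
printf &#039;\n---\n&#039; &amp;gt;&amp;gt; pve-exporters.yaml&lt;br /&gt;
sed &#039;s/ahsoka/thrawn/g&#039; ahsoka-block.yaml &amp;gt;&amp;gt; pve-exporters.yaml&lt;br /&gt;
# Then edit the new Secret: set PVE_TOKEN_VALUE to thrawn&#039;s own token secret&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;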
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status for all exporters, replacing with your host names&lt;br /&gt;
kubectl get pods -n monitoring -l &#039;app in (jgy-pve-exporter-ahsoka, jgy-pve-exporter-thrawn)&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
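&lt;br /&gt;
You can also test an exporter directly, before involving Prometheus, by port-forwarding to its Service and requesting the `/pve` endpoint yourself (the hostname below is the example target from the manifest above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Forward the Service port locally, then scrape the example target once&lt;br /&gt;
kubectl -n monitoring port-forward svc/jgy-pve-exporter-ahsoka 9106:9106 &amp;amp;&lt;br /&gt;
curl -s &#039;http://localhost:9106/pve?target=ahsoka.tatooine.jgy.local&#039; | head&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A healthy exporter reports `pve_up 1` among its metrics; `pve_up 0` usually points to a bad token or an unreachable Proxmox API.&lt;br /&gt;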
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that targets for `jgy-pve-exporter-ahsoka` (and any others you deployed) are present and have a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `PVE_VERIFY_SSL: &amp;quot;false&amp;quot;` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to `&amp;quot;true&amp;quot;` if you use a valid, trusted certificate.&lt;br /&gt;
*   The `ServiceMonitor` resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your `prometheus.yml` file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
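&lt;br /&gt;
For a plain (non-Operator) Prometheus, the equivalent scrape job would look roughly like the following sketch in `prometheus.yml`; the Service DNS name and target assume the manifests above and may need adjusting for your cluster:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
scrape_configs:&lt;br /&gt;
  - job_name: &#039;pve-ahsoka&#039;&lt;br /&gt;
    metrics_path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target: [&#039;ahsoka.tatooine.jgy.local&#039;]&lt;br /&gt;
    static_configs:&lt;br /&gt;
      - targets: [&#039;jgy-pve-exporter-ahsoka.monitoring.svc:9106&#039;]&lt;br /&gt;
    relabel_configs:&lt;br /&gt;
      - source_labels: [__param_target]&lt;br /&gt;
        target_label: instance&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;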
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=315</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=315"/>
		<updated>2025-08-29T16:07:44Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Since it is a security best practice to use a unique API token for each host, this guide details how to deploy a dedicated exporter instance for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum user delete pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file will contain a separate set of Kubernetes resources for each Proxmox host. Below is the complete template for a host named `ahsoka`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   `YOUR_TOKEN_NAME`: The name of your token (e.g., `exporter-token`).&lt;br /&gt;
*   `YOUR_API_TOKEN_SECRET`: The secret value you just generated.&lt;br /&gt;
*   `ahsoka.tatooine.jgy.local`: Update with your Proxmox host&#039;s fully qualified domain name or IP address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ==         CONFIGURATION FOR PROXMOX HOST: ahsoka                ==&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ---&lt;br /&gt;
# 1. Secret for &amp;quot;ahsoka&amp;quot; - This holds the unique token components for this host.&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The user part of the token&lt;br /&gt;
  PVE_USER: &amp;quot;pve-exporter@pve&amp;quot;&lt;br /&gt;
  # The name (ID) of the API token&lt;br /&gt;
  PVE_TOKEN_NAME: &amp;quot;YOUR_TOKEN_NAME&amp;quot; # e.g., exporter-token&lt;br /&gt;
  # The secret value of the API token&lt;br /&gt;
  PVE_TOKEN_VALUE: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 2. Deployment for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter-ahsoka&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:v3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_TOKEN_NAME&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_TOKEN_NAME&lt;br /&gt;
        - name: PVE_TOKEN_VALUE&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_TOKEN_VALUE&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: 50m&lt;br /&gt;
            memory: 64Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: 100m&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
---&lt;br /&gt;
# 3. Service for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 4. ServiceMonitor for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;ahsoka.tatooine.jgy.local&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s FQDN or IP&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To monitor additional hosts, copy and paste this entire four-document block into the same file, then perform a find-and-replace for `ahsoka` with your new host&#039;s name (e.g., `thrawn`) and update the new host&#039;s unique credentials and target address.&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status for all exporters, replacing with your host names&lt;br /&gt;
kubectl get pods -n monitoring -l &#039;app in (jgy-pve-exporter-ahsoka, jgy-pve-exporter-thrawn)&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that targets for `jgy-pve-exporter-ahsoka` (and any others you deployed) are present and have a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `PVE_VERIFY_SSL: &amp;quot;false&amp;quot;` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to `&amp;quot;true&amp;quot;` if you use a valid, trusted certificate.&lt;br /&gt;
*   The `ServiceMonitor` resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your `prometheus.yml` file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=314</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=314"/>
		<updated>2025-08-29T16:05:14Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines a robust and secure method for deploying the `prometheus-pve-exporter` to a Kubernetes cluster. Since it is a security best practice to use a unique API token for each host, this guide details how to deploy a dedicated exporter instance for each host you wish to monitor.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Each Proxmox Host ===&lt;br /&gt;
This one-time setup must be performed on &#039;&#039;&#039;each&#039;&#039;&#039; Proxmox host you want to monitor. This process ensures a clean permission set and avoids common access control list (ACL) conflicts. Connect to your host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a user named `pve-exporter@pve` for monitoring.&lt;br /&gt;
*   Assign the built-in `PVEAuditor` role to the new user.&lt;br /&gt;
*   Create an API token named `exporter-token`.&lt;br /&gt;
*   Assign the `PVEAuditor` role &#039;&#039;&#039;directly to the API token&#039;&#039;&#039; to override any potential ACL inheritance issues.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 1. Create the user&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# 2. Assign the standard PVEAuditor role to the USER&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEAuditor&lt;br /&gt;
&lt;br /&gt;
# 3. Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&lt;br /&gt;
# 4. THE CRITICAL STEP: Grant the PVEAuditor role DIRECTLY to the API TOKEN&lt;br /&gt;
pveum aclmod / -token &#039;pve-exporter@pve!exporter-token&#039; -role PVEAuditor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The `pveum user token add` command will output the &#039;&#039;&#039;Token ID&#039;&#039;&#039; (e.g., `pve-exporter@pve!exporter-token`) and the &#039;&#039;&#039;Secret Value&#039;&#039;&#039;. Copy the full secret value immediately, as &#039;&#039;&#039;you will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup on a host, first delete the old resources to ensure a clean state.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum aclmod / -delete 1 -token &#039;pve-exporter@pve!exporter-token&#039;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum user delete pve-exporter@pve&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Kubernetes Manifest for Each Host ===&lt;br /&gt;
On your local machine, create a single YAML file (e.g., `pve-exporters.yaml`). This file will contain a separate set of Kubernetes resources for each Proxmox host. Below is the complete template for a host named `ahsoka`.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values:&lt;br /&gt;
*   `jgy-pve-exporter-ahsoka-auth`: Ensure secret names are unique per host.&lt;br /&gt;
*   `YOUR_API_TOKEN_ID`: Use the full Token ID from the previous step (e.g., `pve-exporter@pve!exporter-token`).&lt;br /&gt;
*   `YOUR_API_TOKEN_SECRET`: Use the secret value you just generated.&lt;br /&gt;
*   `ahsoka.tatooine.jgy.local`: Update with your Proxmox host&#039;s fully qualified domain name or IP address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ==         CONFIGURATION FOR PROXMOX HOST: ahsoka                ==&lt;br /&gt;
# ===================================================================&lt;br /&gt;
# ---&lt;br /&gt;
# 1. Secret for &amp;quot;ahsoka&amp;quot; - This holds the unique token for this host.&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The PVE_USER for token auth is the full Token ID&lt;br /&gt;
  PVE_USER: &amp;quot;YOUR_API_TOKEN_ID&amp;quot; # e.g., pve-exporter@pve!exporter-token&lt;br /&gt;
  # The PVE_PASSWORD for token auth is the Token Secret&lt;br /&gt;
  PVE_PASSWORD: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 2. Deployment for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: jgy-pve-exporter-ahsoka&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:v3.5.5&lt;br /&gt;
        args:&lt;br /&gt;
        - &amp;quot;--web.listen-address=:9106&amp;quot;&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9106&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_PASSWORD&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: jgy-pve-exporter-ahsoka-auth&lt;br /&gt;
              key: PVE_PASSWORD&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
        resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            cpu: 50m&lt;br /&gt;
            memory: 64Mi&lt;br /&gt;
          limits:&lt;br /&gt;
            cpu: 100m&lt;br /&gt;
            memory: 128Mi&lt;br /&gt;
---&lt;br /&gt;
# 3. Service for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9106&lt;br /&gt;
    targetPort: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
# 4. ServiceMonitor for the &amp;quot;jgy-pve-exporter-ahsoka&amp;quot; instance&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: jgy-pve-exporter-ahsoka&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: jgy-pve-exporter-ahsoka&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: &amp;quot;http-metrics&amp;quot;&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;ahsoka.tatooine.jgy.local&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s FQDN or IP&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To monitor additional hosts, copy and paste this entire four-document block into the same file, then perform a find-and-replace for `ahsoka` with your new host&#039;s name (e.g., `thrawn`) and update the new host&#039;s unique credentials and target address.&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporters.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pods are running and that Prometheus is successfully scraping the targets.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status for all exporters, replacing with your host names&lt;br /&gt;
kubectl get pods -n monitoring -l &#039;app in (jgy-pve-exporter-ahsoka, jgy-pve-exporter-thrawn)&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that targets for `jgy-pve-exporter-ahsoka` (and any others you deployed) are present and have a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The `PVE_VERIFY_SSL: &amp;quot;false&amp;quot;` setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to `&amp;quot;true&amp;quot;` if you use a valid, trusted certificate.&lt;br /&gt;
*   The `ServiceMonitor` resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your `prometheus.yml` file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the `monitoring` namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=313</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=313"/>
		<updated>2025-08-29T13:39:55Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines how to deploy the `prometheus-pve-exporter` to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Proxmox ===&lt;br /&gt;
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands.&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a role named &amp;lt;code&amp;gt;ExporterRole&amp;lt;/code&amp;gt; with the necessary audit permissions.&lt;br /&gt;
*   Create a user named &amp;lt;code&amp;gt;pve-exporter@pve&amp;lt;/code&amp;gt; specifically for this purpose (it does not require a password).&lt;br /&gt;
*   Assign the read-only role to the new user at the root level (&amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;).&lt;br /&gt;
*   Create an API token named &amp;lt;code&amp;gt;exporter-token&amp;lt;/code&amp;gt; for the user.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the role with read-only privileges and a valid name&lt;br /&gt;
pveum roleadd ExporterRole -privs &amp;quot;Datastore.Audit Sys.Audit&amp;quot;&lt;br /&gt;
# Create the user (password login is not needed for token auth)&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
# Assign the role to the user for the entire datacenter&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role ExporterRole&lt;br /&gt;
# Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. &#039;&#039;&#039;You will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
==== (Optional) Cleanup Script ====&lt;br /&gt;
If you need to re-run the setup, first delete the old resources to avoid errors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pveum user token remove pve-exporter@pve exporter-token&lt;br /&gt;
pveum user delete pve-exporter@pve&lt;br /&gt;
pveum role delete ExporterRole&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== 2. Create the Combined Kubernetes Manifest ===&lt;br /&gt;
On your local machine, create a single YAML file named &amp;lt;code&amp;gt;pve-exporter-full.yaml&amp;lt;/code&amp;gt;. This file contains all the necessary Kubernetes resources. We will store the token ID and the secret in the Kubernetes Secret.&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values for &amp;lt;code&amp;gt;YOUR_API_TOKEN_ID&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;pve-exporter@pve!exporter-token&amp;lt;/code&amp;gt;) and &amp;lt;code&amp;gt;YOUR_API_TOKEN_SECRET&amp;lt;/code&amp;gt; with the values you just generated. Also, update your Proxmox host&#039;s IP address.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-credentials&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The PVE_USER for token auth is the full Token ID&lt;br /&gt;
  PVE_USER: &amp;quot;YOUR_API_TOKEN_ID&amp;quot; # e.g., pve-exporter@pve!exporter-token&lt;br /&gt;
  # The PVE_PASSWORD for token auth is the Token Secret&lt;br /&gt;
  PVE_PASSWORD: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:latest&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9221&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_PASSWORD&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_PASSWORD&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9221&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: http-metrics&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;192.168.1.100&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s IP address&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources at once.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the target.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
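As a quick end-to-end check (a sketch; it assumes the Proxmox IP used in the manifest above), you can port-forward the Service and query the exporter directly:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Forward the exporter Service to localhost in the background&lt;br /&gt;
kubectl -n monitoring port-forward svc/pve-exporter 9221:9221 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# A successful scrape returns a list of pve_* metrics&lt;br /&gt;
curl -s &amp;quot;http://localhost:9221/pve?target=192.168.1.100&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;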
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target named &amp;lt;code&amp;gt;pve-exporter&amp;lt;/code&amp;gt; is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The &amp;lt;code&amp;gt;PVE_VERIFY_SSL: &amp;quot;false&amp;quot;&amp;lt;/code&amp;gt; setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to &amp;lt;code&amp;gt;&amp;quot;true&amp;quot;&amp;lt;/code&amp;gt; if you use a valid, trusted certificate.&lt;br /&gt;
*   The &amp;lt;code&amp;gt;ServiceMonitor&amp;lt;/code&amp;gt; resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your &amp;lt;code&amp;gt;prometheus.yml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the &amp;lt;code&amp;gt;monitoring&amp;lt;/code&amp;gt; namespace. Adjust if you use a different one.&lt;br /&gt;
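If you are not using the Prometheus Operator, an equivalent static scrape job can go directly into &amp;lt;code&amp;gt;prometheus.yml&amp;lt;/code&amp;gt;. The sketch below assumes the Service DNS name &amp;lt;code&amp;gt;pve-exporter.monitoring.svc&amp;lt;/code&amp;gt; implied by the manifest above; adjust the namespace and target IP to your environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
scrape_configs:&lt;br /&gt;
  - job_name: pve&lt;br /&gt;
    metrics_path: /pve&lt;br /&gt;
    static_configs:&lt;br /&gt;
      - targets:&lt;br /&gt;
          - 192.168.1.100  # Proxmox host IP (becomes the ?target= parameter)&lt;br /&gt;
    relabel_configs:&lt;br /&gt;
      # Pass the listed address to the exporter as the target parameter&lt;br /&gt;
      - source_labels: [__address__]&lt;br /&gt;
        target_label: __param_target&lt;br /&gt;
      - source_labels: [__param_target]&lt;br /&gt;
        target_label: instance&lt;br /&gt;
      # Scrape the exporter Service, not the Proxmox host itself&lt;br /&gt;
      - target_label: __address__&lt;br /&gt;
        replacement: pve-exporter.monitoring.svc:9221&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;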
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=312</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=312"/>
		<updated>2025-08-29T13:36:26Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines how to deploy the &amp;lt;code&amp;gt;prometheus-pve-exporter&amp;lt;/code&amp;gt; to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Proxmox ===&lt;br /&gt;
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a role named &amp;lt;code&amp;gt;PVEExporter&amp;lt;/code&amp;gt; with the necessary audit permissions.&lt;br /&gt;
*   Create a user named &amp;lt;code&amp;gt;pve-exporter@pve&amp;lt;/code&amp;gt; specifically for this purpose (it does not require a password).&lt;br /&gt;
*   Assign the read-only role to the new user at the root level (&amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;).&lt;br /&gt;
*   Create an API token named &amp;lt;code&amp;gt;exporter-token&amp;lt;/code&amp;gt; for the user.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the role with read-only privileges&lt;br /&gt;
pveum roleadd PVEExporter -privs &amp;quot;Datastore.Audit Sys.Audit&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create the user (password login is not needed for token auth)&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# Assign the role to the user for the entire datacenter&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEExporter&lt;br /&gt;
&lt;br /&gt;
# Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. &#039;&#039;&#039;You will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
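Before deploying anything to Kubernetes, you can confirm the token works (a sketch; substitute your Proxmox host&#039;s IP and the secret you just copied; &amp;lt;code&amp;gt;-k&amp;lt;/code&amp;gt; skips verification of the default self-signed certificate):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# A valid token returns a JSON document with the PVE version&lt;br /&gt;
curl -k -H &amp;quot;Authorization: PVEAPIToken=pve-exporter@pve!exporter-token=YOUR_API_TOKEN_SECRET&amp;quot; \&lt;br /&gt;
  https://192.168.1.100:8006/api2/json/version&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;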
&lt;br /&gt;
=== 2. Create the Combined Kubernetes Manifest ===&lt;br /&gt;
On your local machine, create a single YAML file named &amp;lt;code&amp;gt;pve-exporter-full.yaml&amp;lt;/code&amp;gt;. This file contains all the necessary Kubernetes resources. We will store the token ID and the secret in the Kubernetes Secret.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values for &amp;lt;code&amp;gt;YOUR_API_TOKEN_ID&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;pve-exporter@pve!exporter-token&amp;lt;/code&amp;gt;) and &amp;lt;code&amp;gt;YOUR_API_TOKEN_SECRET&amp;lt;/code&amp;gt; with the values you just generated. Also, replace the Proxmox host&#039;s IP address in the ServiceMonitor&#039;s &amp;lt;code&amp;gt;target&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-credentials&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The PVE_USER for token auth is the full Token ID&lt;br /&gt;
  PVE_USER: &amp;quot;YOUR_API_TOKEN_ID&amp;quot; # e.g., pve-exporter@pve!exporter-token&lt;br /&gt;
  # The PVE_PASSWORD for token auth is the Token Secret&lt;br /&gt;
  PVE_PASSWORD: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:latest&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9221&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_PASSWORD&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_PASSWORD&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9221&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: http-metrics&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;192.168.1.100&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s IP address&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources at once.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the target.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target named &amp;lt;code&amp;gt;pve-exporter&amp;lt;/code&amp;gt; is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
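If the pod is not running or the target reports as down, the exporter logs usually reveal authentication or TLS problems (a sketch, using the Deployment name from the manifest above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Look for errors such as 401 Unauthorized (bad token) or SSL verification failures&lt;br /&gt;
kubectl logs -n monitoring deployment/pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;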
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The &amp;lt;code&amp;gt;PVE_VERIFY_SSL: &amp;quot;false&amp;quot;&amp;lt;/code&amp;gt; setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to &amp;lt;code&amp;gt;&amp;quot;true&amp;quot;&amp;lt;/code&amp;gt; if you use a valid, trusted certificate.&lt;br /&gt;
*   The &amp;lt;code&amp;gt;ServiceMonitor&amp;lt;/code&amp;gt; resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your &amp;lt;code&amp;gt;prometheus.yml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the &amp;lt;code&amp;gt;monitoring&amp;lt;/code&amp;gt; namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=311</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=311"/>
		<updated>2025-08-29T13:33:43Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines how to deploy the &amp;lt;code&amp;gt;prometheus-pve-exporter&amp;lt;/code&amp;gt; to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Proxmox ===&lt;br /&gt;
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a role named &amp;lt;code&amp;gt;PVEExporter&amp;lt;/code&amp;gt; with the necessary audit permissions.&lt;br /&gt;
*   Create a user named &amp;lt;code&amp;gt;pve-exporter@pve&amp;lt;/code&amp;gt; specifically for this purpose (it does not require a password).&lt;br /&gt;
*   Assign the read-only role to the new user at the root level (&amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;).&lt;br /&gt;
*   Create an API token named &amp;lt;code&amp;gt;exporter-token&amp;lt;/code&amp;gt; for the user.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the role with read-only privileges&lt;br /&gt;
pveum roleadd PVEExporter -privs &amp;quot;Datastore.Audit Sys.Audit&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create the user (password login is not needed for token auth)&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# Assign the role to the user for the entire datacenter&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEExporter&lt;br /&gt;
&lt;br /&gt;
# Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. &#039;&#039;&#039;You will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Combined Kubernetes Manifest ===&lt;br /&gt;
On your local machine, create a single YAML file named &amp;lt;code&amp;gt;pve-exporter-full.yaml&amp;lt;/code&amp;gt;. This file contains all the necessary Kubernetes resources. We will store the token ID and the secret in the Kubernetes Secret.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values for &amp;lt;code&amp;gt;YOUR_API_TOKEN_ID&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;pve-exporter@pve!exporter-token&amp;lt;/code&amp;gt;) and &amp;lt;code&amp;gt;YOUR_API_TOKEN_SECRET&amp;lt;/code&amp;gt; with the values you just generated. Also, replace the Proxmox host&#039;s IP address in the ServiceMonitor&#039;s &amp;lt;code&amp;gt;target&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-credentials&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The PVE_USER for token auth is the full Token ID&lt;br /&gt;
  PVE_USER: &amp;quot;YOUR_API_TOKEN_ID&amp;quot; # e.g., pve-exporter@pve!exporter-token&lt;br /&gt;
  # The PVE_PASSWORD for token auth is the Token Secret&lt;br /&gt;
  PVE_PASSWORD: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:latest&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9221&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_PASSWORD&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_PASSWORD&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9221&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: http-metrics&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;192.168.1.100&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s IP address&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources at once.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the target.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target named &amp;lt;code&amp;gt;pve-exporter&amp;lt;/code&amp;gt; is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The &amp;lt;code&amp;gt;PVE_VERIFY_SSL: &amp;quot;false&amp;quot;&amp;lt;/code&amp;gt; setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to &amp;lt;code&amp;gt;&amp;quot;true&amp;quot;&amp;lt;/code&amp;gt; if you use a valid, trusted certificate.&lt;br /&gt;
*   The &amp;lt;code&amp;gt;ServiceMonitor&amp;lt;/code&amp;gt; resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your &amp;lt;code&amp;gt;prometheus.yml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the &amp;lt;code&amp;gt;monitoring&amp;lt;/code&amp;gt; namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Guides &amp;amp; Tutorials]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=310</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=310"/>
		<updated>2025-08-29T13:33:21Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines how to deploy the &amp;lt;code&amp;gt;prometheus-pve-exporter&amp;lt;/code&amp;gt; to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Proxmox ===&lt;br /&gt;
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a role named &amp;lt;code&amp;gt;PVEExporter&amp;lt;/code&amp;gt; with the necessary audit permissions.&lt;br /&gt;
*   Create a user named &amp;lt;code&amp;gt;pve-exporter@pve&amp;lt;/code&amp;gt; specifically for this purpose (it does not require a password).&lt;br /&gt;
*   Assign the read-only role to the new user at the root level (&amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;).&lt;br /&gt;
*   Create an API token named &amp;lt;code&amp;gt;exporter-token&amp;lt;/code&amp;gt; for the user.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the role with read-only privileges&lt;br /&gt;
pveum roleadd PVEExporter -privs &amp;quot;Datastore.Audit Sys.Audit&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create the user (password login is not needed for token auth)&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# Assign the role to the user for the entire datacenter&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEExporter&lt;br /&gt;
&lt;br /&gt;
# Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve exporter-token&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. &#039;&#039;&#039;You will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create the Combined Kubernetes Manifest ===&lt;br /&gt;
On your local machine, create a single YAML file named &amp;lt;code&amp;gt;pve-exporter-full.yaml&amp;lt;/code&amp;gt;. This file contains all the necessary Kubernetes resources. We will store the token ID and the secret in the Kubernetes Secret.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values for &amp;lt;code&amp;gt;YOUR_API_TOKEN_ID&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;pve-exporter@pve!exporter-token&amp;lt;/code&amp;gt;) and &amp;lt;code&amp;gt;YOUR_API_TOKEN_SECRET&amp;lt;/code&amp;gt; with the values you just generated. Also, replace the Proxmox host&#039;s IP address in the ServiceMonitor&#039;s &amp;lt;code&amp;gt;target&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-credentials&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The PVE_USER for token auth is the full Token ID&lt;br /&gt;
  PVE_USER: &amp;quot;YOUR_API_TOKEN_ID&amp;quot; # e.g., pve-exporter@pve!exporter-token&lt;br /&gt;
  # The PVE_PASSWORD for token auth is the Token Secret&lt;br /&gt;
  PVE_PASSWORD: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:latest&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9221&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_PASSWORD&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_PASSWORD&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9221&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: http-metrics&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;192.168.1.100&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s IP address&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources at once.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the target.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target named &amp;lt;code&amp;gt;pve-exporter&amp;lt;/code&amp;gt; is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The &amp;lt;code&amp;gt;PVE_VERIFY_SSL: &amp;quot;false&amp;quot;&amp;lt;/code&amp;gt; setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to &amp;lt;code&amp;gt;&amp;quot;true&amp;quot;&amp;lt;/code&amp;gt; if you use a valid, trusted certificate.&lt;br /&gt;
*   The &amp;lt;code&amp;gt;ServiceMonitor&amp;lt;/code&amp;gt; resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your &amp;lt;code&amp;gt;prometheus.yml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the &amp;lt;code&amp;gt;monitoring&amp;lt;/code&amp;gt; namespace. Adjust if you use a different one.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=309</id>
		<title>Monitoring PVE 8 via Prometheus on Kubernetes</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_PVE_8_via_Prometheus_on_Kubernetes&amp;diff=309"/>
		<updated>2025-08-29T13:32:21Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;== Monitor Proxmox with Prometheus Exporter on Kubernetes == This guide outlines how to deploy the `prometheus-pve-exporter` to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.  === 1. Create a Read-Only User and API Token on Proxmox === The only action re...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Monitor Proxmox with Prometheus Exporter on Kubernetes ==&lt;br /&gt;
This guide outlines how to deploy the &amp;lt;code&amp;gt;prometheus-pve-exporter&amp;lt;/code&amp;gt; to a Kubernetes cluster to monitor a remote Proxmox VE host using a secure API token. This is the recommended authentication method. The process involves a minimal, one-time setup on the Proxmox host and the deployment of a single multi-document YAML file to Kubernetes.&lt;br /&gt;
&lt;br /&gt;
=== 1. Create a Read-Only User and API Token on Proxmox ===&lt;br /&gt;
The only action required on the Proxmox host is the creation of a dedicated user and an API token for that user. Connect to your Proxmox host via SSH and run the following commands.&lt;br /&gt;
&lt;br /&gt;
The script below will:&lt;br /&gt;
*   Create a role named &amp;lt;code&amp;gt;PVEExporter&amp;lt;/code&amp;gt; with the necessary audit permissions.&lt;br /&gt;
*   Create a user named &amp;lt;code&amp;gt;pve-exporter@pve&amp;lt;/code&amp;gt; specifically for this purpose (it does not require a password).&lt;br /&gt;
*   Assign the read-only role to the new user at the root level (&amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;).&lt;br /&gt;
*   Create an API token named &amp;lt;code&amp;gt;k8s-token&amp;lt;/code&amp;gt; for the user.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the role with read-only privileges&lt;br /&gt;
pveum roleadd PVEExporter -privs &amp;quot;Datacenter.Audit Sys.Audit&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create the user (password login is not needed for token auth)&lt;br /&gt;
pveum useradd pve-exporter@pve&lt;br /&gt;
&lt;br /&gt;
# Assign the role to the user for the entire datacenter&lt;br /&gt;
pveum aclmod / -user pve-exporter@pve -role PVEExporter&lt;br /&gt;
&lt;br /&gt;
# Create the API token for the user&lt;br /&gt;
pveum user token add pve-exporter@pve k8s-token&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The last command will output the token ID and the secret value. Copy the full secret value (the long string of characters) immediately. &#039;&#039;&#039;You will not be able to see it again.&#039;&#039;&#039;&lt;br /&gt;
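&lt;br /&gt;
You can optionally verify the token before moving on. This check is a sketch: substitute your own token secret and your Proxmox host&#039;s address. The &amp;lt;code&amp;gt;PVEAPIToken&amp;lt;/code&amp;gt; authorization header combines the token ID and the secret.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Should return a JSON document with the Proxmox version (-k accepts the self-signed certificate)&lt;br /&gt;
curl -k -H &amp;quot;Authorization: PVEAPIToken=pve-exporter@pve!k8s-token=YOUR_API_TOKEN_SECRET&amp;quot; \&lt;br /&gt;
  &amp;quot;https://YOUR_PVE_HOST:8006/api2/json/version&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;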
&lt;br /&gt;
=== 2. Create the Combined Kubernetes Manifest ===&lt;br /&gt;
On your local machine, create a single YAML file named &amp;lt;code&amp;gt;pve-exporter-full.yaml&amp;lt;/code&amp;gt;. This file contains all the necessary Kubernetes resources. We will store the token ID and the secret in the Kubernetes Secret.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Before saving, replace the placeholder values for &amp;lt;code&amp;gt;YOUR_API_TOKEN_ID&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;pve-exporter@pve!k8s-token&amp;lt;/code&amp;gt;) and &amp;lt;code&amp;gt;YOUR_API_TOKEN_SECRET&amp;lt;/code&amp;gt; with the values you just generated. Also, update your Proxmox host&#039;s IP address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter-credentials&lt;br /&gt;
  namespace: monitoring # Or your preferred namespace&lt;br /&gt;
stringData:&lt;br /&gt;
  # The PVE_USER for token auth is the full Token ID&lt;br /&gt;
  PVE_USER: &amp;quot;YOUR_API_TOKEN_ID&amp;quot; # e.g., pve-exporter@pve!k8s-token&lt;br /&gt;
  # The PVE_PASSWORD for token auth is the Token Secret&lt;br /&gt;
  PVE_PASSWORD: &amp;quot;YOUR_API_TOKEN_SECRET&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: pve-exporter&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: pve-exporter&lt;br /&gt;
        image: prompve/prometheus-pve-exporter:latest&lt;br /&gt;
        ports:&lt;br /&gt;
        - name: http-metrics&lt;br /&gt;
          containerPort: 9221&lt;br /&gt;
        env:&lt;br /&gt;
        - name: PVE_USER&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_USER&lt;br /&gt;
        - name: PVE_PASSWORD&lt;br /&gt;
          valueFrom:&lt;br /&gt;
            secretKeyRef:&lt;br /&gt;
              name: pve-exporter-credentials&lt;br /&gt;
              key: PVE_PASSWORD&lt;br /&gt;
        - name: PVE_VERIFY_SSL&lt;br /&gt;
          value: &amp;quot;false&amp;quot;&lt;br /&gt;
---&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Service&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    app: pve-exporter&lt;br /&gt;
  ports:&lt;br /&gt;
  - name: http-metrics&lt;br /&gt;
    port: 9221&lt;br /&gt;
    targetPort: http-metrics&lt;br /&gt;
---&lt;br /&gt;
apiVersion: monitoring.coreos.com/v1&lt;br /&gt;
kind: ServiceMonitor&lt;br /&gt;
metadata:&lt;br /&gt;
  name: pve-exporter&lt;br /&gt;
  namespace: monitoring&lt;br /&gt;
  labels:&lt;br /&gt;
    release: prometheus # Label must match your Prometheus Operator&#039;s discovery selector&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: pve-exporter&lt;br /&gt;
  endpoints:&lt;br /&gt;
  - port: http-metrics&lt;br /&gt;
    path: /pve&lt;br /&gt;
    params:&lt;br /&gt;
      target:&lt;br /&gt;
      - &amp;quot;192.168.1.100&amp;quot; # &amp;lt;-- Replace with your Proxmox host&#039;s IP address&lt;br /&gt;
    relabelings:&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: instance&lt;br /&gt;
    - sourceLabels: [__param_target]&lt;br /&gt;
      targetLabel: target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Apply the Kubernetes Manifest ===&lt;br /&gt;
Apply the single YAML file to your cluster to deploy all resources at once.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
kubectl apply -f pve-exporter-full.yaml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Verify the Deployment ===&lt;br /&gt;
Check that the pod is running and that Prometheus is successfully scraping the target.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check pod status&lt;br /&gt;
kubectl get pods -n monitoring -l app=pve-exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After a minute, navigate to your Prometheus UI, go to &#039;&#039;&#039;Status -&amp;gt; Targets&#039;&#039;&#039;, and verify that a target named &amp;lt;code&amp;gt;pve-exporter&amp;lt;/code&amp;gt; is present and has a state of &#039;&#039;&#039;UP&#039;&#039;&#039;.&lt;br /&gt;
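&lt;br /&gt;
If the target does not appear, you can query the exporter directly to isolate the problem. This sketch assumes the Service name and namespace from the manifest above; replace the target IP with your Proxmox host&#039;s address.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Forward the exporter Service to your workstation&lt;br /&gt;
kubectl port-forward -n monitoring svc/pve-exporter 9221:9221 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Request metrics for the Proxmox host; a working setup returns pve_* metrics&lt;br /&gt;
curl &amp;quot;http://localhost:9221/pve?target=192.168.1.100&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;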
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
*   The &amp;lt;code&amp;gt;PVE_VERIFY_SSL: &amp;quot;false&amp;quot;&amp;lt;/code&amp;gt; setting is used because Proxmox VE defaults to a self-signed SSL certificate. Set to &amp;lt;code&amp;gt;&amp;quot;true&amp;quot;&amp;lt;/code&amp;gt; if you use a valid, trusted certificate.&lt;br /&gt;
*   The &amp;lt;code&amp;gt;ServiceMonitor&amp;lt;/code&amp;gt; resource is intended for clusters running the Prometheus Operator. If you are not using it, you will need to add the scrape configuration directly to your &amp;lt;code&amp;gt;prometheus.yml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
*   All Kubernetes resources are deployed to the &amp;lt;code&amp;gt;monitoring&amp;lt;/code&amp;gt; namespace. Adjust if you use a different one.&lt;br /&gt;
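&lt;br /&gt;
For a Prometheus instance without the Operator, the equivalent static scrape job might look like the following sketch; the Service DNS name assumes the &amp;lt;code&amp;gt;monitoring&amp;lt;/code&amp;gt; namespace and the Service name used above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
scrape_configs:&lt;br /&gt;
  - job_name: &#039;pve&#039;&lt;br /&gt;
    metrics_path: /pve&lt;br /&gt;
    static_configs:&lt;br /&gt;
      - targets: [&#039;192.168.1.100&#039;]  # Proxmox host to monitor&lt;br /&gt;
    relabel_configs:&lt;br /&gt;
      - source_labels: [__address__]&lt;br /&gt;
        target_label: __param_target&lt;br /&gt;
      - source_labels: [__param_target]&lt;br /&gt;
        target_label: instance&lt;br /&gt;
      - target_label: __address__&lt;br /&gt;
        replacement: pve-exporter.monitoring.svc:9221  # the exporter Service&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;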
&lt;br /&gt;
[[Category:Proxmox VE]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Monitoring]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_macOS_Host_with_Prometheus_Node_Exporter&amp;diff=308</id>
		<title>Monitoring a macOS Host with Prometheus Node Exporter</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_macOS_Host_with_Prometheus_Node_Exporter&amp;diff=308"/>
		<updated>2025-08-29T11:08:47Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Gyurci08 moved page Monitoring a macOS Host with Prometheus Node Exporter to Monitoring a MacOS Host with Prometheus Node Exporter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Monitoring a MacOS Host with Prometheus Node Exporter]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_MacOS_Host_with_Prometheus_Node_Exporter&amp;diff=307</id>
		<title>Monitoring a MacOS Host with Prometheus Node Exporter</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_MacOS_Host_with_Prometheus_Node_Exporter&amp;diff=307"/>
		<updated>2025-08-29T11:08:45Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Gyurci08 moved page Monitoring a macOS Host with Prometheus Node Exporter to Monitoring a MacOS Host with Prometheus Node Exporter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:Prometheus]]&lt;br /&gt;
[[Category:Guides &amp;amp; Tutorials]]&lt;br /&gt;
&lt;br /&gt;
= Monitoring a MacOS Host with Prometheus Node Exporter =&lt;br /&gt;
&lt;br /&gt;
This guide details the process of installing and configuring the Prometheus Node Exporter on a MacOS machine, with a focus on filtering out irrelevant filesystems to ensure clean, actionable alerts. The primary challenge when monitoring MacOS is handling the numerous OS-managed, virtual, and temporary filesystems that can trigger false positive alerts for high disk usage.&lt;br /&gt;
&lt;br /&gt;
== Initial Setup and Problem Diagnosis ==&lt;br /&gt;
=== Installation with Homebrew ===&lt;br /&gt;
The standard method for installing Node Exporter on MacOS is via the [https://brew.sh/ Homebrew] package manager.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install node_exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The Problem: False Positive Alerts ===&lt;br /&gt;
After a default installation, Node Exporter will scrape metrics from all mounted filesystems. On MacOS, this includes many volumes that are nearly full by design, leading to persistent, non-actionable alerts.&lt;br /&gt;
&lt;br /&gt;
Common sources of these false positives include:&lt;br /&gt;
* &#039;&#039;&#039;Xcode Simulator Runtimes:&#039;&#039;&#039; Mounted under &#039;&#039;/Library/Developer/CoreSimulator/&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Virtual Filesystems:&#039;&#039;&#039; Such as &#039;&#039;/dev&#039;&#039; (devfs).&lt;br /&gt;
* &#039;&#039;&#039;Automounter Filesystems:&#039;&#039;&#039; Such as &#039;&#039;/System/Volumes/Data/home&#039;&#039; (autofs).&lt;br /&gt;
* &#039;&#039;&#039;Read-only System Snapshots:&#039;&#039;&#039; The main &#039;&#039;/&#039;&#039; volume is a sealed, read-only snapshot of the OS.&lt;br /&gt;
&lt;br /&gt;
The goal is to filter these out and only monitor user-managed volumes where disk space is a real concern, primarily &#039;&#039;/System/Volumes/Data&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configuration and Troubleshooting ==&lt;br /&gt;
The solution involves a two-part strategy: configuring Node Exporter to exclude noisy filesystems at the source, and writing a precise PromQL alert that only targets the user-managed data volume.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Configure Node Exporter Exclusions ===&lt;br /&gt;
Node Exporter can be configured to ignore specific mount points using a command-line flag. When installed with Homebrew, these flags should be placed in a dedicated arguments file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. Create or Edit the Arguments File:&#039;&#039;&#039; This file is read by the `brew services` launch agent. It should contain one argument per line, &#039;&#039;&#039;without quotes&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano /opt/homebrew/etc/node_exporter.args&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. Add the Exclusion Rule:&#039;&#039;&#039; To comprehensively filter out all known OS-managed, virtual, and temporary filesystems, add the following line to the file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
--collector.filesystem.mount-points-exclude=^/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery|Hardware|xarts|iSCPreboot)|Library/Developer/CoreSimulator/.*)$&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This regular expression tells Node Exporter to ignore any filesystem whose mount point matches these patterns.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. Restart the Service:&#039;&#039;&#039; Apply the new configuration by restarting the Node Exporter service.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services restart node_exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Validate the Configuration ===&lt;br /&gt;
After restarting, verify that the noisy filesystems are no longer being exported. The following command queries the metrics endpoint and greps for the excluded patterns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -s http://localhost:9100/metrics | grep -E &#039;mountpoint=&amp;quot;/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery)|Library/Developer/CoreSimulator/.*)&amp;quot;&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A successful configuration will result in no output from this command.&#039;&#039;&#039;&lt;br /&gt;
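&lt;br /&gt;
To see what remains after filtering, you can list the exported filesystems; ideally only user-managed volumes such as &#039;&#039;/System/Volumes/Data&#039;&#039; are left.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Each line shows one filesystem still being exported&lt;br /&gt;
curl -s http://localhost:9100/metrics | grep &#039;^node_filesystem_size_bytes&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;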
&lt;br /&gt;
=== Step 3: Create a Precise PromQL Alert ===&lt;br /&gt;
With the metrics now clean, the final step is to create an alerting rule in Prometheus that is both simple and highly resistant to false positives. Instead of excluding filesystems in the query, you should explicitly &#039;&#039;include&#039;&#039; only the volume you care about.&lt;br /&gt;
&lt;br /&gt;
The most critical user-managed filesystem on MacOS is &#039;&#039;/System/Volumes/Data&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended PromQL Alerting Rule:&#039;&#039;&#039;&lt;br /&gt;
This query calculates the percentage of used space for the &#039;&#039;&#039;Data&#039;&#039;&#039; volume and will fire only if it exceeds 90%.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;promql&amp;quot;&amp;gt;&lt;br /&gt;
100 - (node_filesystem_avail_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;} / node_filesystem_size_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;}) * 100 &amp;gt; 90&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
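Wrapped in a complete Prometheus rule file, the alert could look like the sketch below; the alert name, &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; duration, and labels are illustrative.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
groups:&lt;br /&gt;
  - name: macos-disk&lt;br /&gt;
    rules:&lt;br /&gt;
      - alert: MacDataVolumeAlmostFull&lt;br /&gt;
        expr: 100 - (node_filesystem_avail_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;} / node_filesystem_size_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;}) * 100 &amp;gt; 90&lt;br /&gt;
        for: 15m&lt;br /&gt;
        labels:&lt;br /&gt;
          severity: warning&lt;br /&gt;
        annotations:&lt;br /&gt;
          summary: &amp;quot;Data volume on {{ $labels.instance }} is over 90% full&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;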
By following this guide, you achieve a robust monitoring setup for MacOS that provides clean data and generates alerts that are always actionable.&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_MacOS_Host_with_Prometheus_Node_Exporter&amp;diff=306</id>
		<title>Monitoring a MacOS Host with Prometheus Node Exporter</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_MacOS_Host_with_Prometheus_Node_Exporter&amp;diff=306"/>
		<updated>2025-08-29T11:08:31Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:Prometheus]]&lt;br /&gt;
[[Category:Guides &amp;amp; Tutorials]]&lt;br /&gt;
&lt;br /&gt;
= Monitoring a MacOS Host with Prometheus Node Exporter =&lt;br /&gt;
&lt;br /&gt;
This guide details the process of installing and configuring the Prometheus Node Exporter on a MacOS machine, with a focus on filtering out irrelevant filesystems to ensure clean, actionable alerts. The primary challenge when monitoring MacOS is handling the numerous OS-managed, virtual, and temporary filesystems that can trigger false positive alerts for high disk usage.&lt;br /&gt;
&lt;br /&gt;
== Initial Setup and Problem Diagnosis ==&lt;br /&gt;
=== Installation with Homebrew ===&lt;br /&gt;
The standard method for installing Node Exporter on MacOS is via the [https://brew.sh/ Homebrew] package manager.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install node_exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The Problem: False Positive Alerts ===&lt;br /&gt;
After a default installation, Node Exporter will scrape metrics from all mounted filesystems. On MacOS, this includes many volumes that are nearly full by design, leading to persistent, non-actionable alerts.&lt;br /&gt;
&lt;br /&gt;
Common sources of these false positives include:&lt;br /&gt;
* &#039;&#039;&#039;Xcode Simulator Runtimes:&#039;&#039;&#039; Mounted under &#039;&#039;/Library/Developer/CoreSimulator/&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Virtual Filesystems:&#039;&#039;&#039; Such as &#039;&#039;/dev&#039;&#039; (devfs).&lt;br /&gt;
* &#039;&#039;&#039;Automounter Filesystems:&#039;&#039;&#039; Such as &#039;&#039;/System/Volumes/Data/home&#039;&#039; (autofs).&lt;br /&gt;
* &#039;&#039;&#039;Read-only System Snapshots:&#039;&#039;&#039; The main &#039;&#039;/&#039;&#039; volume is a sealed, read-only snapshot of the OS.&lt;br /&gt;
&lt;br /&gt;
The goal is to filter these out and only monitor user-managed volumes where disk space is a real concern, primarily &#039;&#039;/System/Volumes/Data&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configuration and Troubleshooting ==&lt;br /&gt;
The solution involves a two-part strategy: configuring Node Exporter to exclude noisy filesystems at the source, and writing a precise PromQL alert that only targets the user-managed data volume.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Configure Node Exporter Exclusions ===&lt;br /&gt;
Node Exporter can be configured to ignore specific mount points using a command-line flag. When installed with Homebrew, these flags should be placed in a dedicated arguments file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. Create or Edit the Arguments File:&#039;&#039;&#039; This file is read by the `brew services` launch agent. It should contain one argument per line, &#039;&#039;&#039;without quotes&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano /opt/homebrew/etc/node_exporter.args&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. Add the Exclusion Rule:&#039;&#039;&#039; To comprehensively filter out all known OS-managed, virtual, and temporary filesystems, add the following line to the file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
--collector.filesystem.mount-points-exclude=^/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery|Hardware|xarts|iSCPreboot)|Library/Developer/CoreSimulator/.*)$&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This regular expression tells Node Exporter to ignore any filesystem whose mount point matches these patterns.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. Restart the Service:&#039;&#039;&#039; Apply the new configuration by restarting the Node Exporter service.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services restart node_exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Validate the Configuration ===&lt;br /&gt;
After restarting, verify that the noisy filesystems are no longer being exported. The following command queries the metrics endpoint and greps for the excluded patterns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -s http://localhost:9100/metrics | grep -E &#039;mountpoint=&amp;quot;/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery)|Library/Developer/CoreSimulator/.*)&amp;quot;&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A successful configuration will result in no output from this command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Step 3: Create a Precise PromQL Alert ===&lt;br /&gt;
With the metrics now clean, the final step is to create an alerting rule in Prometheus that is both simple and highly resistant to false positives. Instead of excluding filesystems in the query, you should explicitly &#039;&#039;include&#039;&#039; only the volume you care about.&lt;br /&gt;
&lt;br /&gt;
The most critical user-managed filesystem on MacOS is &#039;&#039;/System/Volumes/Data&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended PromQL Alerting Rule:&#039;&#039;&#039;&lt;br /&gt;
This query calculates the percentage of used space for the &#039;&#039;&#039;Data&#039;&#039;&#039; volume and will fire only if it exceeds 90%.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;promql&amp;quot;&amp;gt;&lt;br /&gt;
100 - (node_filesystem_avail_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;} / node_filesystem_size_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;}) * 100 &amp;gt; 90&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By following this guide, you achieve a robust monitoring setup for MacOS that provides clean data and generates alerts that are always actionable.&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_macOS_gitlab-runner&amp;diff=305</id>
		<title>Create macOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_macOS_gitlab-runner&amp;diff=305"/>
		<updated>2025-08-29T11:07:56Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Gyurci08 moved page Create macOS gitlab-runner to Create MacOS gitlab-runner&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Create MacOS gitlab-runner]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=304</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=304"/>
		<updated>2025-08-29T11:07:53Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Gyurci08 moved page Create macOS gitlab-runner to Create MacOS gitlab-runner&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner with `brew services` (Simplified Method) ==&lt;br /&gt;
This guide details a simplified method for installing and configuring a GitLab Runner on macOS by using Homebrew&#039;s built-in service management. This is often easier than manually managing `launchd` files.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install GitLab Runner ===&lt;br /&gt;
First, install the GitLab Runner package using Homebrew:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Note on Paths (Apple Silicon vs. Intel) ===&lt;br /&gt;
Installation paths depend on your Mac&#039;s architecture. The shell environment setup in a later step handles this automatically by using `brew shellenv`, but it&#039;s good to be aware of the difference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Apple Silicon (M1/M2/M3):&#039;&#039;&#039; `/opt/homebrew/`&lt;br /&gt;
* &#039;&#039;&#039;Intel:&#039;&#039;&#039; `/usr/local/`&lt;br /&gt;
&lt;br /&gt;
=== 3. Create and Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it is best practice to run the GitLab Runner under a dedicated user account. If you haven&#039;t created one, you can do so in &#039;&#039;&#039;System Settings &amp;gt; Users &amp;amp; Groups&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Once the user (e.g., `runner`) exists, &#039;&#039;&#039;switch to it for all subsequent steps&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
su runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Configure the GitLab Runner ===&lt;br /&gt;
As the `runner` user, configure the runner&#039;s behavior by editing its `config.toml` file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set the shell to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell = &amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
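&lt;br /&gt;
After editing the file, you can run an optional sanity check that the configured runners can reach GitLab with their tokens:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Verifies that the runners in config.toml are registered and can contact GitLab&lt;br /&gt;
gitlab-runner verify&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;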
&lt;br /&gt;
=== 5. Set Up the Shell Environment ===&lt;br /&gt;
This is the most critical step. For the runner&#039;s jobs to find tools and execute correctly, the `runner` user&#039;s shell environment must be configured properly.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the necessary environment variables and path definitions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
# This command sets up Homebrew&#039;s environment, including the correct PATH.&lt;br /&gt;
eval &amp;quot;$(/opt/homebrew/bin/brew shellenv)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Add any other required PATH exports here. The &#039;brew shellenv&#039; command&lt;br /&gt;
# should handle the primary Homebrew paths for your architecture.&lt;br /&gt;
export PATH=&amp;quot;/Users/runner/Library/Android/sdk/platform-tools:${PATH}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures your `.bashrc` configuration is loaded for new shell sessions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
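&lt;br /&gt;
To confirm the environment loads correctly, start a login shell as the `runner` user and inspect a few variables (an illustrative check):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# A login shell sources .bash_profile, which in turn sources .bashrc&lt;br /&gt;
bash -lc &#039;which brew; echo &amp;quot;$JAVA_HOME&amp;quot;&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;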
&lt;br /&gt;
=== 6. Start and Manage the Runner with `brew services` ===&lt;br /&gt;
&#039;&#039;&#039;Crucial Point:&#039;&#039;&#039; You must run these commands as the `runner` user. This ensures the service runs under the correct user account.&lt;br /&gt;
&lt;br /&gt;
To start the service and register it to run automatically at login:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services start gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To stop the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services stop gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the status of all your Homebrew services:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Category:GitLab&amp;diff=303</id>
		<title>Category:GitLab</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Category:GitLab&amp;diff=303"/>
		<updated>2025-08-29T11:06:03Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;Category:Automation &amp;amp; Tooling&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Automation &amp;amp; Tooling]]&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=302</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=302"/>
		<updated>2025-08-29T11:01:40Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner with `brew services` (Simplified Method) ==&lt;br /&gt;
This guide details a simplified method for installing and configuring a GitLab Runner on macOS by using Homebrew&#039;s built-in service management. This is often easier than manually managing `launchd` files.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install GitLab Runner ===&lt;br /&gt;
First, install the GitLab Runner package using Homebrew:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Note on Paths (Apple Silicon vs. Intel) ===&lt;br /&gt;
Installation paths depend on your Mac&#039;s architecture. The shell environment setup in a later step handles this automatically by using `brew shellenv`, but it&#039;s good to be aware of the difference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Apple Silicon (M1/M2/M3):&#039;&#039;&#039; `/opt/homebrew/`&lt;br /&gt;
* &#039;&#039;&#039;Intel:&#039;&#039;&#039; `/usr/local/`&lt;br /&gt;
&lt;br /&gt;
=== 3. Create and Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it is best practice to run the GitLab Runner under a dedicated user account. If you haven&#039;t created one, you can do so in &#039;&#039;&#039;System Settings &amp;gt; Users &amp;amp; Groups&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Once the user (e.g., `runner`) exists, &#039;&#039;&#039;switch to it for all subsequent steps&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Use a login shell so the runner user&#039;s profile is loaded&lt;br /&gt;
su - runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Configure the GitLab Runner ===&lt;br /&gt;
As the `runner` user, configure the runner&#039;s behavior by editing its `config.toml` file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set the shell to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell = &amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 5. Set Up the Shell Environment ===&lt;br /&gt;
This is the most critical step. For the runner&#039;s jobs to find tools and execute correctly, the `runner` user&#039;s shell environment must be configured properly.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the necessary environment variables and path definitions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
# This command sets up Homebrew&#039;s environment, including the correct PATH.&lt;br /&gt;
eval &amp;quot;$(/opt/homebrew/bin/brew shellenv)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Add any other required PATH exports here. The &#039;brew shellenv&#039; command&lt;br /&gt;
# should handle the primary Homebrew paths for your architecture.&lt;br /&gt;
export PATH=&amp;quot;/Users/runner/Library/Android/sdk/platform-tools:${PATH}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures your `.bashrc` configuration is loaded for new shell sessions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 6. Start and Manage the Runner with `brew services` ===&lt;br /&gt;
&#039;&#039;&#039;Crucial Point:&#039;&#039;&#039; You must run these commands as the `runner` user. This ensures the service runs under the correct user account.&lt;br /&gt;
&lt;br /&gt;
To start the service and register it to run automatically at login:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services start gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To stop the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services stop gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
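After starting the service, you can optionally confirm that the runner is registered and can reach your GitLab instance (a quick sanity check; run it as the `runner` user):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Checks each runner in config.toml against the GitLab server&lt;br /&gt;
gitlab-runner verify&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;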
To check the status of all your Homebrew services:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=301</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=301"/>
		<updated>2025-08-29T10:59:37Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner with `brew services` (Simplified Method) ==&lt;br /&gt;
This guide details a simplified method for installing and configuring a GitLab Runner on macOS by using Homebrew&#039;s built-in service management. This is often easier than manually managing `launchd` files.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install GitLab Runner ===&lt;br /&gt;
First, install the GitLab Runner package using Homebrew:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Note on Homebrew Paths (Apple Silicon vs. Intel) ===&lt;br /&gt;
Homebrew uses different installation paths depending on your Mac&#039;s architecture. The shell environment setup in a later step will handle this automatically by using `brew shellenv`, but it&#039;s good to be aware of the difference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Apple Silicon (M1/M2/M3):&#039;&#039;&#039; `/opt/homebrew/`&lt;br /&gt;
* &#039;&#039;&#039;Intel:&#039;&#039;&#039; `/usr/local/`&lt;br /&gt;
&lt;br /&gt;
=== 3. Create and Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it is best practice to run the GitLab Runner under a dedicated user account. If you haven&#039;t created one, you can do so in &#039;&#039;&#039;System Settings &amp;gt; Users &amp;amp; Groups&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Once the user (e.g., `runner`) exists, &#039;&#039;&#039;switch to it for all subsequent steps&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
su runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Configure the GitLab Runner ===&lt;br /&gt;
As the `runner` user, configure the runner&#039;s behavior by editing its `config.toml` file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set the shell to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell=&amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 5. Set Up the Shell Environment ===&lt;br /&gt;
This is the most critical step. For the runner&#039;s jobs to find tools and execute correctly, the `runner` user&#039;s shell environment must be configured properly.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the necessary environment variables and path definitions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
# This command sets up Homebrew&#039;s environment, including the correct PATH.&lt;br /&gt;
eval $(/opt/homebrew/bin/brew shellenv)&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Add any other required PATH exports here. The &#039;brew shellenv&#039; command&lt;br /&gt;
# should handle the primary Homebrew paths for your architecture.&lt;br /&gt;
export PATH=&amp;quot;/Users/runner/Library/Android/sdk/platform-tools:${PATH}&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures your `.bashrc` configuration is loaded for new shell sessions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 6. Start and Manage the Runner with `brew services` ===&lt;br /&gt;
&#039;&#039;&#039;Crucial Point:&#039;&#039;&#039; You must run these commands as the `runner` user. This ensures the service runs under the correct user account.&lt;br /&gt;
&lt;br /&gt;
To start the service and register it to run automatically at login:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services start gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To stop the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services stop gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the status of all your Homebrew services:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=300</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=300"/>
		<updated>2025-08-29T10:58:47Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner with `brew services` (Simplified Method) ==&lt;br /&gt;
This guide details a simplified method for installing and configuring a GitLab Runner on macOS by using Homebrew&#039;s built-in service management. This is often easier than manually managing `launchd` files.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install GitLab Runner ===&lt;br /&gt;
First, install the GitLab Runner package using Homebrew:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Note on Homebrew Paths (Apple Silicon vs. Intel) ===&lt;br /&gt;
Homebrew uses different installation paths depending on your Mac&#039;s architecture. The shell environment setup in a later step will handle this automatically by using `brew shellenv`, but it&#039;s good to be aware of the difference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Apple Silicon (M1/M2/M3):&#039;&#039;&#039; `/opt/homebrew/`&lt;br /&gt;
* &#039;&#039;&#039;Intel:&#039;&#039;&#039; `/usr/local/`&lt;br /&gt;
&lt;br /&gt;
=== 3. Create and Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it is best practice to run the GitLab Runner under a dedicated user account. If you haven&#039;t created one, you can do so in &#039;&#039;&#039;System Settings &amp;gt; Users &amp;amp; Groups&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Once the user (e.g., `runner`) exists, &#039;&#039;&#039;switch to it for all subsequent steps&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
su runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Configure the GitLab Runner ===&lt;br /&gt;
As the `runner` user, configure the runner&#039;s behavior by editing its `config.toml` file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set the shell to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell=&amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 5. Set Up the Shell Environment ===&lt;br /&gt;
This is the most critical step. For the runner&#039;s jobs to find tools and execute correctly, the `runner` user&#039;s shell environment must be configured properly.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the necessary environment variables and path definitions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
# This command sets up Homebrew&#039;s environment, including the correct PATH.&lt;br /&gt;
eval $(/opt/homebrew/bin/brew shellenv)&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Add any other required PATH exports here. The &#039;brew shellenv&#039; command&lt;br /&gt;
# should handle the primary Homebrew paths for your architecture.&lt;br /&gt;
export PATH=&amp;quot;/Users/runner/Library/Android/sdk/platform-tools:${PATH}&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures your `.bashrc` configuration is loaded for new shell sessions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 6. Start and Manage the Runner with `brew services` ===&lt;br /&gt;
&#039;&#039;&#039;Crucial Point:&#039;&#039;&#039; You must run these commands as the `runner` user. This ensures the service runs under the correct user account.&lt;br /&gt;
&lt;br /&gt;
To start the service and register it to run automatically at login:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services start gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To stop the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services stop gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the status of all your Homebrew services:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=299</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=299"/>
		<updated>2025-08-29T10:56:29Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner with `brew services` (Simplified Method) ==&lt;br /&gt;
This guide details a simplified method for installing and configuring a GitLab Runner on macOS by using Homebrew&#039;s built-in service management. This is often easier than manually managing `launchd` files.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install GitLab Runner ===&lt;br /&gt;
First, install the GitLab Runner package using Homebrew:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Note on Homebrew Paths (Apple Silicon vs. Intel) ===&lt;br /&gt;
Homebrew uses different installation paths depending on your Mac&#039;s architecture. The shell environment setup in a later step will handle this automatically by using `brew shellenv`, but it&#039;s good to be aware of the difference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Apple Silicon (M1/M2/M3):&#039;&#039;&#039; `/opt/homebrew/`&lt;br /&gt;
* &#039;&#039;&#039;Intel:&#039;&#039;&#039; `/usr/local/`&lt;br /&gt;
&lt;br /&gt;
=== 3. Create and Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it is best practice to run the GitLab Runner under a dedicated user account. If you haven&#039;t created one, you can do so in &#039;&#039;&#039;System Settings &amp;gt; Users &amp;amp; Groups&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Once the user (e.g., `runner`) exists, switch to it for the following steps:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
su runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Configure the GitLab Runner ===&lt;br /&gt;
As the `runner` user, configure the runner&#039;s behavior by editing its `config.toml` file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set the shell to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell=&amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 5. Set Up the Shell Environment ===&lt;br /&gt;
This is the most critical step. For the runner&#039;s jobs to find tools and execute correctly, the `runner` user&#039;s shell environment must be configured properly.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the necessary environment variables and path definitions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
# This command sets up Homebrew&#039;s environment, including the correct PATH.&lt;br /&gt;
eval $(/opt/homebrew/bin/brew shellenv)&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Add any other required PATH exports here. The &#039;brew shellenv&#039; command&lt;br /&gt;
# should handle the primary Homebrew paths for your architecture.&lt;br /&gt;
export PATH=&amp;quot;/Users/runner/Library/Android/sdk/platform-tools:${PATH}&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures your `.bashrc` configuration is loaded for new shell sessions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 6. Start and Manage the Runner with `brew services` ===&lt;br /&gt;
While still logged in as the `runner` user, use the `brew services` commands to manage the GitLab Runner process. This will automatically create and manage the necessary `launchd` service file for you.&lt;br /&gt;
&lt;br /&gt;
To start the service and register it to run at login:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services start gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To stop the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services stop gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the status of all your Homebrew services:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=298</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=298"/>
		<updated>2025-08-29T10:53:58Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner on macOS via Homebrew ==&lt;br /&gt;
This guide details a method for installing and configuring a GitLab Runner on macOS to run as a dedicated user service.&lt;br /&gt;
&lt;br /&gt;
=== 1. Install GitLab Runner ===&lt;br /&gt;
First, install the GitLab Runner package using Homebrew:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Note on Homebrew Paths (Apple Silicon vs. Intel) ===&lt;br /&gt;
Homebrew uses different installation paths depending on your Mac&#039;s architecture. You &#039;&#039;&#039;must&#039;&#039;&#039; use the correct path in the following configuration files.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Apple Silicon (M1/M2/M3):&#039;&#039;&#039; The path is `/opt/homebrew/`.&lt;br /&gt;
* &#039;&#039;&#039;Intel:&#039;&#039;&#039; The path is `/usr/local/`.&lt;br /&gt;
&lt;br /&gt;
The examples in this guide use the Apple Silicon path. Remember to change `/opt/homebrew/` to `/usr/local/` if you are on an Intel-based Mac.&lt;br /&gt;
&lt;br /&gt;
=== 3. Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it&#039;s best to run the GitLab Runner under a dedicated user account (e.g., `runner`).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
su runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Create a Custom `launchd` Service File ===&lt;br /&gt;
This process uses `launchd`, the standard service manager on macOS. For more general information, see the [https://wiki.jandzsogyorgy.hu/index.php/MacOS_Services macOS Services (launchd)] page.&lt;br /&gt;
&lt;br /&gt;
Create a custom `.plist` file in the user&#039;s `LaunchAgents` directory to manage the runner process.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/Library/LaunchAgents/homebrew.mxcl.gitlab-runner-custom.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Paste the following XML configuration. &#039;&#039;&#039;Important:&#039;&#039;&#039; If you are on an Intel Mac, change the `ProgramArguments` string from `/opt/homebrew/` to `/usr/local/`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE plist PUBLIC &amp;quot;-//Apple//DTD PLIST 1.0//EN&amp;quot; &amp;quot;http://www.apple.com/DTDs/PropertyList-1.0.dtd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;plist version=&amp;quot;1.0&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;dict&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;KeepAlive&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;true/&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;Label&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;homebrew.mxcl.gitlab-runner&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;LegacyTimers&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;true/&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;LimitLoadToSessionType&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;array&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;Aqua&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;Background&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;LoginWindow&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;StandardIO&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;System&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;/array&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;ProcessType&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;Interactive&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;ProgramArguments&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;array&amp;gt;&lt;br /&gt;
                &amp;lt;!-- This path is for Apple Silicon. Change to /usr/local/ for Intel. --&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;/opt/homebrew/opt/gitlab-runner/bin/gitlab-runner&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;run&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;/array&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;RunAtLoad&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;true/&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;WorkingDirectory&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;/Users/runner&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;StandardErrorPath&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;/Users/runner/gitlab-runner.err.log&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;StandardOutPath&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;/Users/runner/gitlab-runner.out.log&amp;lt;/string&amp;gt;&lt;br /&gt;
&amp;lt;/dict&amp;gt;&lt;br /&gt;
&amp;lt;/plist&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 5. Configure the GitLab Runner ===&lt;br /&gt;
Next, configure the runner&#039;s behavior by editing its `config.toml` file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set it to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell = &amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
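After saving the file, you can sanity-check the setup. These commands (assuming the runner was already registered with a valid token) confirm that the runner can authenticate with GitLab and list the runners defined in `config.toml`:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gitlab-runner verify&lt;br /&gt;
gitlab-runner list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;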
&lt;br /&gt;
=== 6. Set Up the Shell Environment ===&lt;br /&gt;
To ensure the shell executor has the correct environment variables, configure the `.bashrc` and `.bash_profile` files for the `runner` user.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add your required environment setup. The `brew shellenv` command and the `PATH` variable are especially important to get right for your architecture.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
# Use the correct path for your architecture.&lt;br /&gt;
# Apple Silicon&lt;br /&gt;
eval $(/opt/homebrew/bin/brew shellenv)&lt;br /&gt;
# Intel&lt;br /&gt;
# eval $(/usr/local/bin/brew shellenv)&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Path (ensure /opt/homebrew/bin or /usr/local/bin is included)&lt;br /&gt;
export PATH=/Users/runner/.rbenv/shims:/Users/runner/Downloads/flutter/bin:/opt/homebrew/bin:/opt/homebrew/opt/ruby/bin:/opt/homebrew/lib/ruby/gems/3.2.0/bin:/Users/runner/.rbenv/shims:/opt/homebrew/bin:/opt/homebrew/sbin:/Library/flutter/bin:/Library/flutter/.pub-cache/bin:/Users/runner/.pub-cache/bin:/Users/runner/Library/Android/sdk/bundle-tool/:/Users/runner/Library/Android/sdk/platform-tools/:/Users/runner/Library/Android/sdk/cmdline-tools/latest/bin/:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin&lt;br /&gt;
    &lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures your `.bashrc` configuration is loaded for new shell sessions.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
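&lt;br /&gt;
To confirm that the shell executor will see this environment, start a fresh login shell (which reads `.bash_profile` and, through it, `.bashrc`) and inspect a variable set above; `JAVA_HOME` here is just an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
bash --login -c &#039;echo $JAVA_HOME; command -v brew&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;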
&lt;br /&gt;
=== 7. Start and Manage the Runner Service ===&lt;br /&gt;
Finally, use `launchctl` to load your custom service file, which will start the GitLab Runner.&lt;br /&gt;
&lt;br /&gt;
To enable and run the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.gitlab-runner-custom.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
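&lt;br /&gt;
To verify that the agent was loaded, list the jobs known to `launchd` and filter for the runner&#039;s label:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl list | grep gitlab-runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;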
&lt;br /&gt;
To disable and stop the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.gitlab-runner-custom.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=297</id>
		<title>Create MacOS gitlab-runner</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Create_MacOS_gitlab-runner&amp;diff=297"/>
		<updated>2025-08-29T10:51:08Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:GitLab]]&lt;br /&gt;
== Installing GitLab Runner on macOS via Homebrew ==&lt;br /&gt;
This guide details a method for installing and configuring a GitLab Runner on macOS to run as a dedicated user service.&lt;br /&gt;
&lt;br /&gt;
=== 1. Switch to the Runner User ===&lt;br /&gt;
For security and isolation, it&#039;s best to run the GitLab Runner under a dedicated user account (e.g., `runner`).&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
su runner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Create a Custom `launchd` Service File ===&lt;br /&gt;
Create a custom `.plist` file in the user&#039;s `LaunchAgents` directory to manage the runner process. Run this command to create and edit the file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/Library/LaunchAgents/homebrew.mxcl.gitlab-runner-custom.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Paste the following XML configuration into the file. This configuration ensures the runner starts at login and stays running.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE plist PUBLIC &amp;quot;-//Apple//DTD PLIST 1.0//EN&amp;quot; &amp;quot;http://www.apple.com/DTDs/PropertyList-1.0.dtd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;plist version=&amp;quot;1.0&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;dict&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;KeepAlive&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;true/&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;Label&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;homebrew.mxcl.gitlab-runner&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;LegacyTimers&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;true/&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;LimitLoadToSessionType&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;array&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;Aqua&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;Background&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;LoginWindow&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;StandardIO&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;System&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;/array&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;ProcessType&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;Interactive&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;ProgramArguments&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;array&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;/opt/homebrew/opt/gitlab-runner/bin/gitlab-runner&amp;lt;/string&amp;gt;&lt;br /&gt;
                &amp;lt;string&amp;gt;run&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;/array&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;RunAtLoad&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;true/&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;WorkingDirectory&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;/Users/runner&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;StandardErrorPath&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;/Users/runner/gitlab-runner.err.log&amp;lt;/string&amp;gt;&lt;br /&gt;
        &amp;lt;key&amp;gt;StandardOutPath&amp;lt;/key&amp;gt;&lt;br /&gt;
        &amp;lt;string&amp;gt;/Users/runner/gitlab-runner.out.log&amp;lt;/string&amp;gt;&lt;br /&gt;
&amp;lt;/dict&amp;gt;&lt;br /&gt;
&amp;lt;/plist&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3. Configure the GitLab Runner ===&lt;br /&gt;
Next, configure the runner&#039;s behavior by editing its main configuration file. Create or edit the `config.toml` file:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.gitlab-runner/config.toml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following configuration. &#039;&#039;&#039;Note:&#039;&#039;&#039; GitLab Runner does not support Zsh as a shell for its jobs, so you must explicitly set it to `bash`.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
concurrent = 3&lt;br /&gt;
check_interval = 30&lt;br /&gt;
[session_server]&lt;br /&gt;
  session_timeout = 1800&lt;br /&gt;
[[runners]]&lt;br /&gt;
  name = &amp;quot;Mac-mini-runner&amp;quot;&lt;br /&gt;
  limit = 1&lt;br /&gt;
  url = &amp;quot;https://gitlab.com/&amp;quot;&lt;br /&gt;
  token = &amp;quot;masked&amp;quot;&lt;br /&gt;
  executor = &amp;quot;shell&amp;quot;&lt;br /&gt;
  shell = &amp;quot;bash&amp;quot;&lt;br /&gt;
  [runners.custom_build_dir]&lt;br /&gt;
  [runners.cache]&lt;br /&gt;
    [runners.cache.s3]&lt;br /&gt;
    [runners.cache.gcs]&lt;br /&gt;
    [runners.cache.azure]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 4. Set Up the Shell Environment ===&lt;br /&gt;
To ensure the shell executor has the correct environment variables and paths, you must configure the `.bashrc` and `.bash_profile` files for the `runner` user.&lt;br /&gt;
&lt;br /&gt;
==== Create and configure `.bashrc` ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bashrc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add your required environment setup. This is crucial for tools like Homebrew, rbenv, Android SDK, and Fastlane to work correctly in CI/CD jobs.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
### Brew ###&lt;br /&gt;
## Silicon&lt;br /&gt;
eval $(/opt/homebrew/bin/brew shellenv)&lt;br /&gt;
&lt;br /&gt;
### Ruby ###&lt;br /&gt;
eval &amp;quot;$(rbenv init -)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
### Extra environments ###&lt;br /&gt;
export LC_ALL=en_US.UTF-8&lt;br /&gt;
export LANG=en_US.UTF-8&lt;br /&gt;
&lt;br /&gt;
# Android&lt;br /&gt;
export ANDROID_HOME=&amp;quot;/Users/runner/Library/Android/sdk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Java&lt;br /&gt;
export JAVA_HOME=&amp;quot;/Applications/Android Studio.app/Contents/jbr/Contents/Home&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Path&lt;br /&gt;
export PATH=/Users/runner/.rbenv/shims:/Users/runner/Downloads/flutter/bin:/opt/homebrew/bin:/opt/homebrew/opt/ruby/bin:/opt/homebrew/lib/ruby/gems/3.2.0/bin:/Users/runner/.rbenv/shims:/opt/homebrew/bin:/opt/homebrew/sbin:/Library/flutter/bin:/Library/flutter/.pub-cache/bin:/Users/runner/.pub-cache/bin:/Users/runner/Library/Android/sdk/bundle-tool/:/Users/runner/Library/Android/sdk/platform-tools/:/Users/runner/Library/Android/sdk/cmdline-tools/latest/bin/:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin&lt;br /&gt;
    &lt;br /&gt;
# FASTLANE&lt;br /&gt;
export FASTLANE_SESSION=masked&lt;br /&gt;
export FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export FASTLANE_USER=&amp;quot;mobil@example.com&amp;quot;&lt;br /&gt;
export FASTLANE_PASSWORD=&amp;quot;masked&amp;quot;&lt;br /&gt;
export SPACESHIP_ONLY_ALLOW_INTERACTIVE_2FA=true&lt;br /&gt;
export SUPPLY_UPLOAD_MAX_RETRIES=5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Create `.bash_profile` to source `.bashrc` ====&lt;br /&gt;
This ensures that your `.bashrc` configuration is loaded every time a new shell session starts.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Add the following lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#####&lt;br /&gt;
# USE &amp;quot;~/.bashrc&amp;quot; for configuration!&lt;br /&gt;
#####&lt;br /&gt;
### Import .bashrc ###&lt;br /&gt;
if [ -f ~/.bashrc ]; then&lt;br /&gt;
    . ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 5. Start and Manage the Runner Service ===&lt;br /&gt;
Finally, use `launchctl` to load your custom service file, which will start the GitLab Runner.&lt;br /&gt;
&lt;br /&gt;
To enable and run the service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.gitlab-runner-custom.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To disable and stop the service (for maintenance or updates):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.gitlab-runner-custom.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=MacOS_Services&amp;diff=296</id>
		<title>MacOS Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=MacOS_Services&amp;diff=296"/>
		<updated>2025-08-29T10:50:17Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: Created page with &amp;quot;Category:MacOS == macOS Services (launchd) ==  === Description === On macOS, `launchd` is the system-wide service manager that starts, stops, and manages daemons and agents. You can interact with it using the `launchctl` command-line tool.  Services are defined in two main types: * &amp;#039;&amp;#039;&amp;#039;LaunchAgents&amp;#039;&amp;#039;&amp;#039;: These are executed for a specific user only when that user logs into a graphical session. * &amp;#039;&amp;#039;&amp;#039;LaunchDaemons&amp;#039;&amp;#039;&amp;#039;: These are invoked when the system boots and run indepen...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
== macOS Services (launchd) ==&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
On macOS, `launchd` is the system-wide service manager that starts, stops, and manages daemons and agents. You can interact with it using the `launchctl` command-line tool.&lt;br /&gt;
&lt;br /&gt;
Services are defined in two main types:&lt;br /&gt;
* &#039;&#039;&#039;LaunchAgents&#039;&#039;&#039;: These are executed for a specific user only when that user logs into a graphical session.&lt;br /&gt;
* &#039;&#039;&#039;LaunchDaemons&#039;&#039;&#039;: These are invoked when the system boots and run independently of any user session.&lt;br /&gt;
&lt;br /&gt;
=== Service File Locations (Namespaces) ===&lt;br /&gt;
The property list (`.plist`) files that define how a service should run are stored in specific directories:&lt;br /&gt;
&lt;br /&gt;
==== System-Wide (Administrator-provided) ====&lt;br /&gt;
* `/Library/LaunchDaemons`: For system-wide daemons.&lt;br /&gt;
* `/Library/LaunchAgents`: For agents that should be available to all users.&lt;br /&gt;
&lt;br /&gt;
==== User-Specific ====&lt;br /&gt;
* `~/Library/LaunchAgents`: For agents provided by and running as the current user.&lt;br /&gt;
&lt;br /&gt;
=== Managing Services with `launchctl` ===&lt;br /&gt;
You can control services using the `launchctl` command.&lt;br /&gt;
&lt;br /&gt;
To enable and start a service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl load /path/to/your/service.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
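&lt;br /&gt;
To inspect what is currently loaded in your session, list the known jobs; each line shows the PID (or `-` if not running), the last exit status, and the label:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;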
&lt;br /&gt;
To disable and stop a service:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
launchctl unload /path/to/your/service.plist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
	<entry>
		<id>https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_MacOS_Host_with_Prometheus_Node_Exporter&amp;diff=295</id>
		<title>Monitoring a MacOS Host with Prometheus Node Exporter</title>
		<link rel="alternate" type="text/html" href="https://wiki.jandzsogyorgy.hu/index.php?title=Monitoring_a_MacOS_Host_with_Prometheus_Node_Exporter&amp;diff=295"/>
		<updated>2025-08-29T10:45:43Z</updated>

		<summary type="html">&lt;p&gt;Gyurci08: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:MacOS]]&lt;br /&gt;
[[Category:Prometheus]]&lt;br /&gt;
[[Category:Guides &amp;amp; Tutorials]]&lt;br /&gt;
&lt;br /&gt;
= Monitoring a macOS Host with Prometheus Node Exporter =&lt;br /&gt;
&lt;br /&gt;
This guide details the process of installing and configuring the Prometheus Node Exporter on a macOS machine, with a focus on filtering out irrelevant filesystems to ensure clean, actionable alerts. The primary challenge when monitoring macOS is handling the numerous OS-managed, virtual, and temporary filesystems that can trigger false positive alerts for high disk usage.&lt;br /&gt;
&lt;br /&gt;
== Initial Setup and Problem Diagnosis ==&lt;br /&gt;
=== Installation with Homebrew ===&lt;br /&gt;
The standard method for installing Node Exporter on macOS is via the [https://brew.sh/ Homebrew] package manager.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install node_exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The Problem: False Positive Alerts ===&lt;br /&gt;
After a default installation, Node Exporter will scrape metrics from all mounted filesystems. On macOS, this includes many volumes that are nearly full by design, leading to persistent, non-actionable alerts.&lt;br /&gt;
&lt;br /&gt;
Common sources of these false positives include:&lt;br /&gt;
* &#039;&#039;&#039;Xcode Simulator Runtimes:&#039;&#039;&#039; Mounted under &#039;&#039;/Library/Developer/CoreSimulator/&#039;&#039;.&lt;br /&gt;
* &#039;&#039;&#039;Virtual Filesystems:&#039;&#039;&#039; Such as &#039;&#039;/dev&#039;&#039; (devfs).&lt;br /&gt;
* &#039;&#039;&#039;Automounter Filesystems:&#039;&#039;&#039; Such as &#039;&#039;/System/Volumes/Data/home&#039;&#039; (autofs).&lt;br /&gt;
* &#039;&#039;&#039;Read-only System Snapshots:&#039;&#039;&#039; The main &#039;&#039;/&#039;&#039; volume is a sealed, read-only snapshot of the OS.&lt;br /&gt;
&lt;br /&gt;
The goal is to filter these out and only monitor user-managed volumes where disk space is a real concern, primarily &#039;&#039;/System/Volumes/Data&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configuration and Troubleshooting ==&lt;br /&gt;
The solution involves a two-part strategy: configuring Node Exporter to exclude noisy filesystems at the source, and writing a precise PromQL alert that only targets the user-managed data volume.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Configure Node Exporter Exclusions ===&lt;br /&gt;
Node Exporter can be configured to ignore specific mount points using a command-line flag. When installed with Homebrew, these flags should be placed in a dedicated arguments file.&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Create or Edit the Arguments File:&#039;&#039;&#039; This file is read by the `brew services` launch agent. It should contain one argument per line, &#039;&#039;&#039;without quotes&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nano /opt/homebrew/etc/node_exporter.args&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Add the Exclusion Rule:&#039;&#039;&#039; To comprehensively filter out all known OS-managed, virtual, and temporary filesystems, add the following line to the file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
--collector.filesystem.mount-points-exclude=^/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery|Hardware|xarts|iSCPreboot)|Library/Developer/CoreSimulator/.*)$&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This regular expression tells Node Exporter to ignore any filesystem whose mount point matches these patterns.&lt;br /&gt;
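&lt;br /&gt;
As a quick local sanity check, the same pattern can be exercised with Bash&#039;s ERE matching (a close approximation of the RE2 engine node_exporter uses); the sample mount points below are illustrative:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Classify sample mount points against the exclusion regex.&lt;br /&gt;
pattern=&#039;^/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery|Hardware|xarts|iSCPreboot)|Library/Developer/CoreSimulator/.*)$&#039;&lt;br /&gt;
for mp in /dev /System/Volumes/VM /System/Volumes/Data; do&lt;br /&gt;
  if [[ $mp =~ $pattern ]]; then&lt;br /&gt;
    echo &amp;quot;excluded: $mp&amp;quot;&lt;br /&gt;
  else&lt;br /&gt;
    echo &amp;quot;kept: $mp&amp;quot;&lt;br /&gt;
  fi&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Only `/System/Volumes/Data` should be reported as kept.&lt;br /&gt;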
&lt;br /&gt;
# &#039;&#039;&#039;Restart the Service:&#039;&#039;&#039; Apply the new configuration by restarting the Node Exporter service.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew services restart node_exporter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Validate the Configuration ===&lt;br /&gt;
After restarting, verify that the noisy filesystems are no longer being exported. The following command queries the metrics endpoint and greps for the excluded patterns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -s http://localhost:9100/metrics | grep -E &#039;mountpoint=&amp;quot;/(dev|private/var/folders|System/Volumes/(Preboot|Update|VM|Recovery|Hardware|xarts|iSCPreboot)|Library/Developer/CoreSimulator/.*)&amp;quot;&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A successful configuration will result in no output from this command.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Step 3: Create a Precise PromQL Alert ===&lt;br /&gt;
With the metrics now clean, the final step is to create a Prometheus alerting rule that is simple and highly resistant to false positives. Instead of excluding filesystems in the query, explicitly &#039;&#039;include&#039;&#039; only the volume you care about.&lt;br /&gt;
&lt;br /&gt;
The most critical user-managed filesystem on macOS is &#039;&#039;/System/Volumes/Data&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended PromQL Alerting Rule:&#039;&#039;&#039;&lt;br /&gt;
This query calculates the percentage of used space for the &#039;&#039;&#039;Data&#039;&#039;&#039; volume and will fire only if it exceeds 90%.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;promql&amp;quot;&amp;gt;&lt;br /&gt;
100 - (node_filesystem_avail_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;} / node_filesystem_size_bytes{mountpoint=&amp;quot;/System/Volumes/Data&amp;quot;}) * 100 &amp;gt; 90&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By following this guide, you achieve a robust monitoring setup for macOS that provides clean data and generates alerts that are always actionable.&lt;/div&gt;</summary>
		<author><name>Gyurci08</name></author>
	</entry>
</feed>