Grafana InfluxDB Raspberry Pi gives you a self-hosted time-series monitoring stack that turns sensor readings, system metrics, and IoT data into live dashboards. InfluxDB 2.x stores measurements with nanosecond timestamps and a compressed time-series engine that handles millions of data points without performance degradation. Grafana connects to InfluxDB as a data source and renders the data as panels, graphs, gauges, and tables that update in real time. This guide covers Docker Compose deployment of both services, Telegraf for Pi system metrics, the Python influxdb-client for custom sensor data, MQTT ingestion from Zigbee2MQTT and ESPHome, and building your first dashboards.
Last tested: Raspberry Pi OS Bookworm Lite 64-bit | April 30, 2026 | Raspberry Pi 4 Model B (4GB) | InfluxDB 2.7 | Grafana 11.1 | Telegraf 1.31
Key Takeaways
- InfluxDB 2.x uses token authentication, organisations, and buckets, not the username/database/password model from v1. All connection strings, Python clients, and Telegraf configs must use a token. Do not follow v1 setup guides for a v2 installation.
- Store InfluxDB data on an SSD, not microSD. InfluxDB continuously compacts and merges time-series shards in the background. That sustained read and write I/O accelerates SD card wear and produces measurably higher query latency on microSD than on an SSD.
- Telegraf is the fastest path to Pi system metrics in InfluxDB. A single Telegraf config file captures CPU, RAM, disk, network, and Pi temperature every 10 seconds with no custom code required.
Grafana InfluxDB Raspberry Pi: How the Stack Works
InfluxDB is a purpose-built time-series database. Every measurement it stores is associated with a timestamp, a measurement name, a set of tags (indexed metadata like device name or location), and one or more fields (the actual numeric values). This structure makes range queries (“give me all temperature readings from the kitchen sensor for the last 24 hours”) extremely fast compared to a general-purpose database.
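Under the hood, writes arrive in InfluxDB line protocol, which encodes exactly this structure: measurement name, comma-separated tags, space, fields, and a nanosecond timestamp. A simplified Python sketch (it ignores the escaping rules real clients apply) shows the shape:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build a simplified InfluxDB line protocol string:
    measurement,tag=value field=value timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "sensor",
    {"sensor_id": "kitchen_bme280"},   # tags: indexed metadata
    {"temperature_c": 21.4},           # fields: the actual values
    1714483200000000000,               # nanosecond timestamp
)
print(line)
# sensor,sensor_id=kitchen_bme280 temperature_c=21.4 1714483200000000000
```

Tags are indexed and cheap to filter on; fields are not indexed, which is why device names belong in tags and readings in fields.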
Grafana connects to InfluxDB as a data source using the Flux query language (InfluxDB 2.x) or InfluxQL (compatibility mode). Each panel on a Grafana dashboard runs a query against the data source and renders the result. Panels update on a configurable interval. The Grafana web interface runs on port 3000; InfluxDB runs on port 8086. Both run in Docker containers, making updates and backups straightforward.
| Component | Role | Port |
|---|---|---|
| InfluxDB 2.x | Time-series database, stores all measurements | 8086 |
| Grafana | Dashboard and visualisation frontend | 3000 |
| Telegraf | Metrics collector: Pi system stats, MQTT bridge | No web UI |
| influxdb-client (Python) | Custom sensor data ingestion from Python scripts | No web UI |
Hardware and OS Preparation
Pi 4 with 4GB RAM is the practical minimum. InfluxDB 2.x uses more memory than v1 for its in-memory index and write buffer. Under sustained load with Telegraf collecting at 10-second intervals and Grafana serving dashboards, expect 600-900MB RAM used between the two containers. The 2GB Pi 4 model works for light use but leaves little headroom for other services running on the same Pi.
Flash Raspberry Pi OS Bookworm Lite 64-bit. After first boot:
sudo apt update && sudo apt full-upgrade -y
# Install Docker and Compose v2
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
sudo apt install docker-compose-plugin -y
Mount an SSD for InfluxDB data. Add it to /etc/fstab using its UUID:
blkid /dev/sda1
# Append a line like this to /etc/fstab:
# UUID=your-uuid-here /mnt/influxdb ext4 defaults,noatime,nofail 0 2
sudo mkdir -p /mnt/influxdb
sudo mount -a
Set a static IP so dashboards are always reachable at the same address:
sudo nmcli connection modify "Wired connection 1" \
ipv4.method manual \
ipv4.addresses 192.168.1.95/24 \
ipv4.gateway 192.168.1.1 \
ipv4.dns 192.168.1.1
sudo nmcli connection up "Wired connection 1"
Deploying InfluxDB and Grafana
Create the project directory and a .env file for the InfluxDB initial credentials:
mkdir -p ~/monitoring && cd ~/monitoring
cat > .env <<EOF
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=change_this_password
INFLUXDB_ADMIN_TOKEN=change_this_long_token_string
INFLUXDB_ORG=home
INFLUXDB_BUCKET=metrics
EOF
chmod 600 .env
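The admin token should be long and random, not a hand-typed string. Any random-string generator works; one option is Python's stdlib `secrets` module:

```python
import secrets

# 32 random bytes, hex-encoded: a 64-character token suitable for
# the INFLUXDB_ADMIN_TOKEN value in .env
token = secrets.token_hex(32)
print(token)
```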
Create compose.yaml:
services:
influxdb:
image: influxdb:2.7
container_name: influxdb
restart: unless-stopped
ports:
- "127.0.0.1:8086:8086"
env_file: .env
environment:
DOCKER_INFLUXDB_INIT_MODE: setup
DOCKER_INFLUXDB_INIT_USERNAME: ${INFLUXDB_ADMIN_USER}
DOCKER_INFLUXDB_INIT_PASSWORD: ${INFLUXDB_ADMIN_PASSWORD}
DOCKER_INFLUXDB_INIT_ORG: ${INFLUXDB_ORG}
DOCKER_INFLUXDB_INIT_BUCKET: ${INFLUXDB_BUCKET}
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: ${INFLUXDB_ADMIN_TOKEN}
volumes:
- /mnt/influxdb/data:/var/lib/influxdb2
- /mnt/influxdb/config:/etc/influxdb2
grafana:
image: grafana/grafana-oss:latest
container_name: grafana
restart: unless-stopped
ports:
- "3000:3000"
volumes:
- grafana-storage:/var/lib/grafana
environment:
GF_SECURITY_ADMIN_PASSWORD: change_this_grafana_password
depends_on:
- influxdb
volumes:
grafana-storage:
docker compose up -d
docker compose logs -f influxdb
Expected result: InfluxDB logs show the setup completing and the server listening on port 8086. Navigate to http://<pi-ip>:3000 and the Grafana login page loads. Log in with username admin and the password set in GF_SECURITY_ADMIN_PASSWORD. Change the Grafana admin password immediately after first login.

Connect Grafana to InfluxDB
In Grafana, go to Connections > Data Sources > Add new data source > InfluxDB. Configure:
- Query language: Flux
- URL: http://influxdb:8086 (use the Docker service name; both containers share the same Docker network)
- Organisation: home
- Token: the value set in INFLUXDB_ADMIN_TOKEN
- Default bucket: metrics
Click Save and Test. Expected result: Grafana returns a green “Data source connected and bucket found” confirmation.
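If the test fails and you want to check the token and organisation independently of any stored data, open Explore in Grafana, select the InfluxDB data source, and run the simplest possible Flux query, which returns rows even on an empty instance:

```flux
buckets()
```

A table listing the metrics bucket (plus the internal _monitoring and _tasks buckets) confirms authentication works; an error here points at the token or organisation value.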
Pi System Metrics with Telegraf
Telegraf is a metrics collection agent that writes directly to InfluxDB. Install it on the Pi host (not inside a container) so it can read system metrics and the Pi’s thermal zone directly:
# Add InfluxData APT repository using the current keyring method
curl -fsSL https://repos.influxdata.com/influxdata-archive_compat.key \
| sudo gpg --dearmor -o /etc/apt/keyrings/influxdata.gpg
echo "deb [signed-by=/etc/apt/keyrings/influxdata.gpg] https://repos.influxdata.com/debian stable main" \
| sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt update && sudo apt install telegraf -y
Create /etc/telegraf/telegraf.d/pi.conf:
[global_tags]
host = "raspberrypi"
[agent]
interval = "10s"
flush_interval = "10s"
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"]
token = "change_this_long_token_string"
organization = "home"
bucket = "metrics"
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.mem]]
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs"]
[[inputs.net]]
interfaces = ["eth0"]
[[inputs.temp]]
# Reads /sys/class/thermal/thermal_zone*/temp
# Includes Pi SoC temperature as thermal_zone0
sudo systemctl enable --now telegraf
sudo systemctl status telegraf
Expected result: After 30 seconds, open the InfluxDB web UI at http://<pi-ip>:8086 and browse the metrics bucket. You should see measurements named cpu, mem, disk, net, and temp appearing with 10-second granularity.
Custom Sensor Data with Python
For custom sensors (INA219 power monitor, BME280 environmental sensor, or any other hardware), write data to InfluxDB using the official Python client. The power monitoring article uses this approach for USB current and voltage logging. The same pattern applies to any sensor.
pip install influxdb-client --break-system-packages
A minimal Python script that writes a measurement to InfluxDB 2.x:
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
import time
INFLUX_URL = "http://localhost:8086"
INFLUX_TOKEN = "change_this_long_token_string"
INFLUX_ORG = "home"
INFLUX_BUCKET = "metrics"
client = InfluxDBClient(url=INFLUX_URL, token=INFLUX_TOKEN, org=INFLUX_ORG)
write_api = client.write_api(write_options=SYNCHRONOUS)
def write_sensor(sensor_name, field, value):
point = (
Point("sensor")
.tag("sensor_id", sensor_name)
.field(field, float(value))
)
write_api.write(bucket=INFLUX_BUCKET, org=INFLUX_ORG, record=point)
# Example: write a temperature reading
write_sensor("kitchen_bme280", "temperature_c", 21.4)
write_sensor("kitchen_bme280", "humidity_pct", 48.2)
client.close()
Wrap this in a loop, add your actual sensor reading code, and run it as a systemd service to collect continuously. See Raspberry Pi Power Monitoring via USB for a complete example using INA219 current sensors with this same InfluxDB write pattern.
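As a sketch, a systemd unit for such a collector might look like the following. The path and script name sensor_logger.py are placeholders for your own setup:

```ini
[Unit]
Description=Sensor to InfluxDB logger
After=network-online.target docker.service

[Service]
ExecStart=/usr/bin/python3 /home/pi/monitoring/sensor_logger.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/sensor-logger.service, then run sudo systemctl enable --now sensor-logger.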
MQTT Data from Zigbee2MQTT and ESPHome
Telegraf’s MQTT consumer plugin subscribes to MQTT topics and writes incoming payloads to InfluxDB. This makes it straightforward to get Zigbee sensor readings and ESPHome device state into Grafana without any custom code.
Add an MQTT consumer block to /etc/telegraf/telegraf.d/mqtt.conf:
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = [
"zigbee2mqtt/+",
"esphome/+/sensor/+/state",
]
data_format = "json"
json_string_fields = []
username = "telegraf"
password = "telegraf-password"
Create a dedicated telegraf MQTT user in Mosquitto with read access to the relevant topics. See Mosquitto MQTT Raspberry Pi for the password file and ACL setup. Telegraf parses the JSON payload and writes each numeric field as an InfluxDB field automatically.
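For example, a typical Zigbee2MQTT temperature sensor payload (field names vary by device) looks like this:

```json
{
  "temperature": 21.4,
  "humidity": 48.2,
  "battery": 97,
  "linkquality": 120
}
```

Telegraf turns each numeric value into an InfluxDB field on the mqtt_consumer measurement, tagged with the source topic, so temperature, humidity, battery, and linkquality all become queryable series without any mapping configuration.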
sudo systemctl restart telegraf
journalctl -u telegraf -f | grep -i "mqtt\|error"
Expected result: Telegraf connects to the MQTT broker and messages appear in the InfluxDB metrics bucket under measurement name mqtt_consumer within 30 seconds of a device reporting. For Zigbee2MQTT and ESPHome device setup, see Zigbee2MQTT Raspberry Pi and ESPHome Raspberry Pi.
Building Dashboards in Grafana
Create a new dashboard in Grafana under Dashboards > New Dashboard > Add panel. Each panel requires a Flux query against the InfluxDB data source.
Pi CPU temperature panel
from(bucket: "metrics")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "temp")
|> filter(fn: (r) => r._field == "temp")
|> filter(fn: (r) => r.host == "raspberrypi")
Set the panel type to Time series. Add a threshold at 80 to show a red line at the 80°C point where the Pi 4 firmware begins throttling the CPU.
Memory usage panel
from(bucket: "metrics")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "mem")
|> filter(fn: (r) => r._field == "used_percent")
Zigbee sensor temperature panel
from(bucket: "metrics")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "mqtt_consumer")
|> filter(fn: (r) => r._field == "temperature")
|> filter(fn: (r) => r.topic =~ /zigbee2mqtt/)
Use the variable selector in the dashboard settings to create a $topic variable populated from InfluxDB tag values. This lets you switch between sensors using a dropdown without editing queries.
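For the variable's query, Flux's schema package can list tag values directly. A query along these lines, assuming the mqtt_consumer data described above, works in the variable editor:

```flux
import "influxdata/influxdb/schema"

schema.tagValues(
  bucket: "metrics",
  tag: "topic",
)
```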
Importing community dashboards
Grafana’s dashboard library at grafana.com/grafana/dashboards contains pre-built dashboards for common Telegraf setups. Dashboard ID 928 (Telegraf system metrics) gives you a complete Pi health dashboard in one import. In Grafana, go to Dashboards > Import, enter the ID, select your InfluxDB data source, and the full dashboard loads with panels already configured.
Retention Policies and Storage Management
InfluxDB 2.x manages data retention through bucket settings. The default bucket has no retention limit, which means data accumulates indefinitely. For a Pi monitoring setup, 30-90 days of raw data is adequate. Set retention when creating the bucket or update it from the InfluxDB UI under Buckets > Edit:
# Look up the bucket ID, then set 90-day retention via the influx CLI
# (influx bucket update selects the bucket by --id, not by name)
docker exec -it influxdb influx bucket list \
--org home \
--token change_this_long_token_string
docker exec -it influxdb influx bucket update \
--id <bucket-id-from-list> \
--retention 90d \
--token change_this_long_token_string
At 10-second intervals, Telegraf writes roughly 8,000-12,000 points per hour across five input plugins. At 90-day retention, expect 15-25GB of compressed storage. On an SSD this is negligible. On microSD it is a problem both for capacity and for write endurance.
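The point count behind that estimate is simple arithmetic; taking the midpoint of the stated range, a quick sanity check looks like this:

```python
points_per_hour = 10_000   # midpoint of the 8,000-12,000 estimate
hours = 24 * 90            # 90-day retention window
total_points = points_per_hour * hours
print(f"{total_points:,} points retained")
# 21,600,000 points retained
```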
Backups and Maintenance
Back up InfluxDB data and Grafana dashboards separately. Stop the InfluxDB container before backing up to ensure consistency:
docker compose stop influxdb
sudo rsync -avh /mnt/influxdb/ /mnt/backup/influxdb/
docker compose start influxdb
Export Grafana dashboards as JSON from the dashboard settings menu. Store the JSON files in version control or alongside the compose configuration. Grafana’s internal database (in the grafana-storage Docker volume) can also be backed up by stopping the container and copying the volume contents.
Keep both services updated:
docker compose pull
docker compose up -d
docker image prune -f
Troubleshooting
Grafana cannot connect to InfluxDB
# Test InfluxDB is reachable from inside the Grafana container
docker exec -it grafana wget -q -O- http://influxdb:8086/health
# Check InfluxDB is running
docker ps | grep influxdb
docker compose logs influxdb | tail -20
If the health check returns {"status":"pass"} but Grafana still fails the connection test, confirm the token in the Grafana data source settings matches the INFLUXDB_ADMIN_TOKEN value in .env exactly, including case. Tokens are case-sensitive and whitespace-sensitive.
Telegraf not writing data
sudo journalctl -u telegraf -n 50 | grep -i "error\|E!"
# Dry-run the inputs without writing anything to InfluxDB
telegraf --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d --test
The most common cause is a wrong token or URL in the Telegraf config. Telegraf logs errors with the prefix E!; a successful write logs nothing, so silence is normal. If Telegraf is running silently but no data appears in InfluxDB, confirm that the bucket name and organisation in the config match the values in .env.
Flux queries return no data
Confirm the time range selector in Grafana covers a period where data exists. Telegraf data starts appearing 10-20 seconds after the service starts. A panel showing “Last 5 minutes” immediately after setup may show no data. Switch to “Last 1 hour” to confirm data is present, then narrow the range.
Verify the measurement name and field names match what InfluxDB is storing. Use the InfluxDB Data Explorer at http://<pi-ip>:8086 to browse the bucket contents and build a working query interactively, then copy it to Grafana.
InfluxDB uses too much RAM
Add memory limits to the InfluxDB service in compose.yaml:
deploy:
resources:
limits:
memory: 512M
512MB is the practical minimum for InfluxDB 2.x under light load. Below this, the kernel OOM-kills the container during compaction operations. If memory pressure is a problem, increase the Telegraf collection interval to 30s or 60s to reduce write volume.
FAQ
Should I use InfluxDB v1 or v2?
Use v2. InfluxDB v1 is in maintenance mode with no new features and a 2025 end-of-life timeline. v2 uses Flux query language, token authentication, and a web UI for data exploration and administration. All current Telegraf and Python client documentation targets v2. The only reason to use v1 is compatibility with legacy systems you cannot change.
Can I use Prometheus instead of InfluxDB?
Yes. Prometheus is a pull-based metrics system. It scrapes endpoints rather than receiving pushed data. It works better than InfluxDB for monitoring services that expose Prometheus metrics natively (many containerised applications do). InfluxDB works better for pushed sensor data, IoT time series, and arbitrary custom measurements. For a Pi monitoring stack where the primary data sources are Telegraf system metrics and custom Python sensor scripts, InfluxDB is the simpler path. A mixed setup using both is also common in larger homelab environments.
How do I share Grafana dashboards outside my home network?
Put Caddy in front of Grafana as a reverse proxy with automatic HTTPS. Grafana binds to port 3000 and Caddy proxies an HTTPS domain to it. Anonymous access can be enabled in Grafana’s config for read-only public dashboards, or leave authentication required for private use. See Caddy Reverse Proxy Raspberry Pi for the full setup.
Can Grafana and InfluxDB run on the same Pi as Home Assistant?
Yes, if the Pi has 8GB RAM and an NVMe SSD. Running Home Assistant Supervised alongside InfluxDB and Grafana on a Pi 4 with 4GB is possible but tight. The Supervisor health checks penalise Docker configurations that deviate from its expectations, and adding more containers increases that risk. A cleaner approach for Home Assistant users is to run the Grafana add-on inside Home Assistant and point it at a standalone InfluxDB container running on the same or a different Pi.
How long does it take for Telegraf data to appear in Grafana?
With a 10-second collection interval, the first data points appear in InfluxDB within 20-30 seconds of starting Telegraf. Grafana panels refresh on the dashboard’s refresh interval (default 30 seconds for new panels). Set the Grafana panel refresh to 10 seconds to see near-real-time updates. For live monitoring of Pi temperature during an overclocking session or stress test, 10-second refresh with a 5-minute time window gives useful real-time feedback.
References
- https://docs.influxdata.com/influxdb/v2/
- https://grafana.com/docs/grafana/latest/
- https://docs.influxdata.com/telegraf/v1/
- https://grafana.com/grafana/dashboards/928
About the Author
Chuck Wilson has been programming and building with computers since the Tandy 1000 era. His professional background includes CAD drafting, manufacturing line programming, and custom computer design. He runs PidiyLab in retirement, documenting Raspberry Pi and homelab projects that he actually deploys and maintains on real hardware. Every article on this site reflects hands-on testing on specific hardware and OS versions, not theoretical walkthroughs.
Last tested hardware: Raspberry Pi 4 Model B (4GB), USB 3.0 SSD. Last tested OS: Raspberry Pi OS Bookworm Lite 64-bit. InfluxDB 2.7, Grafana 11.1, Telegraf 1.31.

