Raspberry Pi Cluster: Build, Configure, and Run Distributed Workloads


A Raspberry Pi cluster connects multiple Pi boards over a gigabit switch, assigns each a static hostname and IP, and runs distributed workloads through MicroK8s or MPI. A four-node cluster runs containerized services, parallel computing jobs, and multi-node Kubernetes deployments for under $300 in hardware. This guide covers the hardware, Bookworm-compatible network setup, MicroK8s and MPICH installation, and the workloads a Pi cluster handles well.

Last tested: Raspberry Pi OS Bookworm Lite 64-bit | May 2025 | Raspberry Pi 4 Model B (4GB) x4 | MicroK8s 1.30, MPICH 4.1, Python 3.11

Key Takeaways

  • Static IP configuration on Bookworm uses NetworkManager, not dhcpcd.conf. The dhcpcd method is deprecated and absent from Bookworm. Use nmcli or set DHCP reservations on your router. Any cluster guide referencing dhcpcd.conf will not work on current Raspberry Pi OS.
  • MicroK8s requires snap, which is not installed by default on Raspberry Pi OS Bookworm. Install snap first (sudo apt install snapd), reboot, then install MicroK8s. Skipping the reboot causes the snap daemon to fail silently.
  • The cgroup configuration path depends on the OS release, not the Pi model. Fresh Bookworm images (Pi 4 and Pi 5 alike) use /boot/firmware/cmdline.txt; older Bullseye-era images use /boot/cmdline.txt. Editing the wrong file produces no error, but cgroups will not be enabled and MicroK8s nodes will show as NotReady.

Hardware for a Raspberry Pi Cluster

The minimum viable Raspberry Pi cluster is three nodes: one head node and two workers. Four nodes is the practical starting point because it gives enough workers to demonstrate actual load distribution. Pi 4 (4GB) is the current baseline. Pi 5 is worth the extra cost for new builds because its faster CPU and PCIe lane reduce inter-node job dispatch latency noticeably at higher node counts.

Per-node hardware: Raspberry Pi 4 (4GB) or Pi 5 (8GB). 5V/3A USB-C PSU per board (Pi 4), or 5V/5A per board (Pi 5). A multi-port USB-C charger rated for the total load simplifies cable management. microSD card, 32GB minimum, Class 10 or better. Cat6 patch cable per node.

Shared hardware: Gigabit Ethernet switch with enough ports for all nodes plus uplink. A stackable cluster case (GeeekPi or similar) keeps the build compact and routes airflow; the Turing Pi 2 clusterboard is an alternative if you build with Compute Modules rather than full-size boards. For Pi 5 nodes, the official active cooler is strongly recommended over passive heatsinks because sustained cluster workloads push the Pi 5 thermal ceiling faster than single-board use.

Optional upgrades: A PoE HAT per node (paired with a PoE-capable switch) eliminates separate power supplies entirely and reduces cable count substantially. A USB 3.0 SSD per node instead of SD cards eliminates the most common failure point and roughly doubles sequential read throughput. NVMe via an M.2 HAT+ on the Pi 5 is the fastest storage option if budget allows.

Raspberry Pi cluster node architecture diagram: head node, gigabit switch, worker nodes, and MicroK8s orchestration layer

Expected result: All nodes powered on, connected to the switch, and showing link lights on every port. A managed switch’s port status page should show all ports at 1Gbps. If any port negotiates at 100Mbps, replace the cable before continuing. Cluster performance is network-bound at scale and a single slow link shows up in every benchmark.
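
You can also confirm the negotiated speed from the Pi side rather than the switch. A minimal check, assuming the wired interface is eth0 (ethtool is not part of the Lite image and may need installing first):

sudo apt install -y ethtool
ethtool eth0 | grep -i speed    # should report Speed: 1000Mb/s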

Network and OS Setup for a Raspberry Pi Cluster

Flash identical Raspberry Pi OS Bookworm Lite 64-bit images to each SD card using Raspberry Pi Imager. In the Imager advanced settings, assign a unique hostname to each card before writing: node0 for the head node, node1 through node3 for workers. Set the same username and password across all nodes. Enable SSH. Do not use wpa_supplicant.conf for WiFi — set credentials in Imager if WiFi is needed, but wired Ethernet is strongly preferred for cluster use.

After all nodes boot, assign static IPs using NetworkManager on each node. On the head node:

sudo nmcli con mod "Wired connection 1" \
  ipv4.addresses 192.168.50.1/24 \
  ipv4.gateway 192.168.50.254 \
  ipv4.dns "8.8.8.8 8.8.4.4" \
  ipv4.method manual
sudo nmcli con up "Wired connection 1"

Repeat on each worker, incrementing the IP: 192.168.50.2/24 for node1, 192.168.50.3/24 for node2, and so on. Verify connectivity from the head node:

ping -c 4 192.168.50.2
ping -c 4 192.168.50.3
ping -c 4 192.168.50.4

Add all nodes to /etc/hosts on the head node so hostnames resolve without DNS:

192.168.50.1  node0
192.168.50.2  node1
192.168.50.3  node2
192.168.50.4  node3

Distribute this /etc/hosts file to all worker nodes:

for i in 1 2 3; do
  scp /etc/hosts pi@node${i}:/tmp/hosts
  ssh pi@node${i} "sudo cp /tmp/hosts /etc/hosts"
done

Generate an ed25519 SSH key on the head node and distribute it to all workers for passwordless authentication:

ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
for i in 1 2 3; do
  ssh-copy-id pi@node${i}
done

Expected result: ssh pi@node1 from the head node connects without a password prompt. ping node2 resolves by hostname. All four nodes respond to ping from each other. If a worker does not respond, check that NetworkManager applied the static IP with nmcli con show on that node.
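
A quick loop from the head node checks SSH, hostname resolution, and the applied static IP in one pass. This is a sketch assuming the pi username and worker hostnames used above:

for i in 1 2 3; do
  ssh pi@node${i} "hostname; hostname -I"    # should print the node name and its 192.168.50.x address
done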

Cluster Software: MicroK8s and MPI

Two orchestration paths cover most Raspberry Pi cluster use cases. MicroK8s handles containerized workloads and is the right choice for web services, CI pipelines, and anything that runs in Docker containers. MPICH handles parallel computing jobs (simulation, data processing, distributed number crunching) and communicates directly between nodes without a container layer. Install both or choose based on your workload.

Before installing MicroK8s, enable cgroups. On Bookworm, edit /boot/firmware/cmdline.txt (older Bullseye images keep the file at /boot/cmdline.txt instead). Append to the existing single line without creating a new line:

cgroup_enable=memory cgroup_memory=1
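
If you prefer to script the edit, a guarded one-liner appends the parameters only when they are not already present. A sketch; adjust the path if your image still uses /boot/cmdline.txt:

CMDLINE=/boot/firmware/cmdline.txt
sudo grep -q cgroup_enable=memory $CMDLINE || \
  sudo sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' $CMDLINE   # append to the single existing line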

Reboot all nodes after editing. Then install snap and MicroK8s on every node:

sudo apt install -y snapd
sudo reboot
# After the reboot, continue on each node:
sudo snap install microk8s --classic
sudo usermod -aG microk8s $USER
newgrp microk8s   # picks up the new group without logging out
microk8s status --wait-ready

On the head node, generate a join token and use it to add each worker:

# On head node (node0):
microk8s add-node

# Copy the printed join command, then on each worker:
microk8s join 192.168.50.1:25000/<token>

Verify all nodes joined successfully:

microk8s kubectl get nodes

Expected result: All four nodes appear with status Ready. If any node shows NotReady, check cgroup configuration on that node with cat /proc/cgroups | grep memory — the enabled column should show 1.
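
To watch the scheduler actually spread work across the workers, a throwaway deployment is enough. A sketch using an arbitrary deployment name (hello) and the stock nginx image:

microk8s kubectl create deployment hello --image=nginx
microk8s kubectl scale deployment hello --replicas=8
microk8s kubectl get pods -o wide         # the NODE column should span all four nodes
microk8s kubectl delete deployment hello  # clean up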

For MPI parallel computing, install MPICH on all nodes:

sudo apt install -y mpich

Create a machinefile on the head node listing every node's hostname and slot count (the head node contributes its four cores as well):

node0:4
node1:4
node2:4
node3:4

Test MPI across all nodes with a simple hostname broadcast:

mpirun -np 16 -machinefile ~/machinefile hostname

Expected result: 16 lines of output, four per node, confirming MPI dispatched processes across the full cluster. If any node is missing from the output, verify that passwordless SSH works from the head node to that worker and that MPICH is installed on it.
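
The hostname test only proves process launch. To go one step further, a minimal MPI program can be compiled on the head node and copied to the same path on every worker, since no shared filesystem is assumed here. This is a sketch: the file name mpi_hello.c is arbitrary, and mpicc may need the libmpich-dev package on Bookworm.

sudo apt install -y libmpich-dev        # provides mpicc if it is not already present
cat > ~/mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */
    MPI_Get_processor_name(name, &len);      /* node hostname */
    printf("rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
EOF
mpicc ~/mpi_hello.c -o ~/mpi_hello
for i in 1 2 3; do scp ~/mpi_hello pi@node${i}:~/ ; done   # same path on every worker
mpirun -np 16 -machinefile ~/machinefile ~/mpi_hello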

Workloads a Raspberry Pi Cluster Handles Well

A Pi cluster is not a general-purpose speed upgrade. Workloads that parallelize cleanly across nodes benefit directly. Workloads that are single-threaded, GPU-dependent, or require high-bandwidth shared memory do not. Understanding that boundary before building avoids frustration.

CI/CD pipelines are the highest-utility workload for a home cluster. Distribute build jobs across nodes using Jenkins or Gitea Actions. A four-node Pi 4 cluster running parallel compile jobs cuts build times proportionally to the number of nodes, which is a genuine speedup for ARM-native code. Cross-compiling for x86 targets is slower and less useful.

Kubernetes learning is the second highest-utility use. Running a real multi-node MicroK8s cluster teaches service mesh behavior, pod scheduling, rolling deployments, and node failure handling in a way that a single-node minikube setup cannot. Jeff Geerling’s Pi cluster project at jeffgeerling.com is the most complete public reference for real-world Pi cluster Kubernetes work and is worth reading before designing a larger deployment.

MPI parallel computing covers scientific workloads: Monte Carlo simulations, matrix operations, and distributed data processing. A four-node Pi 4 cluster benchmarks around 14-16 GFlops with HPL, which is modest but sufficient to learn MPI programming patterns. NVMe storage on each node shifts the storage bottleneck out of the picture and makes the CPU the limiting factor, which is where it should be for compute jobs.

For monitoring the cluster, Grafana and Prometheus deploy cleanly on MicroK8s and give per-node CPU, memory, temperature, and network throughput dashboards. See Grafana InfluxDB Raspberry Pi: Complete Monitoring Stack Setup Guide for the full stack setup. For network-level visibility, AdGuard Home on the head node handles DNS for the cluster subnet. For a distributed file system accessible across all nodes, a Samba NAS on the head node is the simplest path.
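
If you want metrics without leaving MicroK8s, one low-effort route is the observability addon, which bundles Prometheus and Grafana. A sketch, assuming the addon is available in your MicroK8s release; the Grafana service name varies by version, so look it up rather than hard-coding it:

microk8s enable observability
microk8s kubectl get pods -n observability    # wait until everything is Running
microk8s kubectl get svc -n observability     # note the Grafana service name
# then forward it to your workstation, for example:
# microk8s kubectl port-forward -n observability svc/<grafana-service> 3000:80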

FAQ

Can you mix Raspberry Pi 4 and Pi 5 nodes in the same Raspberry Pi cluster?

Yes, with caveats. Both run 64-bit Raspberry Pi OS Bookworm, and MicroK8s handles mixed-performance nodes through its scheduler. MPI jobs are more sensitive because mpirun assigns equal process slots per node by default: a Pi 5 node finishes its share sooner, and the whole job then waits on the slower Pi 4 nodes. Adjust slot counts in the machinefile to reflect actual throughput ratios, or benchmark first and set proportional slots.
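
For example, a mixed machinefile might look like the sketch below, where node1 is assumed to be the Pi 5 and gets twice the slots of the Pi 4 nodes; the 2:1 ratio is illustrative, so benchmark your own nodes before committing to numbers:

node0:4
node1:8
node2:4
node3:4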

What happens when a worker node fails mid-job?

Depends on the workload type. MicroK8s reschedules pods automatically when a node goes NotReady, assuming pod disruption budgets allow it. MPI jobs are not fault-tolerant by default. If a worker dies mid-mpirun, the entire job fails. For fault-tolerant MPI, use checkpoint/restart libraries or run shorter jobs with results written to shared storage between steps. Kubernetes workloads are the better choice when uptime matters.

How much power does a 4-node Raspberry Pi cluster draw?

A four-node Pi 4 cluster draws roughly 10-12W at idle and 25-30W under sustained CPU load. A four-node Pi 5 cluster draws 15-18W idle and 45-55W under load due to the Pi 5’s higher TDP. Factor the switch into total draw (typically 5-10W for an 8-port gigabit switch). A PoE setup consolidates power but the switch must be rated for the combined node draw. See Raspberry Pi Power Monitoring via USB for measuring actual draw per node.

Is Pi 4 or Pi 5 better for a new cluster build?

Pi 5 for any new purchase. The performance-per-dollar advantage is clear at cluster scale: the Pi 5 is roughly twice as fast per node, and the PCIe interface enables NVMe storage that eliminates SD card failures. The higher idle power draw is the only real downside. If the cluster will run 24/7 on a tight power budget, Pi 4 nodes are a reasonable compromise. For a detailed comparison, see Raspberry Pi 5 vs Pi 4: The Honest Breakdown.

How do you scale a Raspberry Pi cluster beyond four nodes?

Add nodes incrementally. Flash the new SD card with the same Bookworm image and hostname convention, assign the next static IP, distribute the SSH key from the head node, add the new hostname to /etc/hosts on all existing nodes, then run microk8s add-node on the head node and join from the new worker. MPI machinefile gets one new line per node. There is no rebuild required. Clusters of 8-16 Pi 4 nodes running MicroK8s are well-documented in the community. The Turing Pi 2 board consolidates up to four Compute Modules into a mini-ITX form factor if physical space is the constraint.
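
As a sketch, the head-node side of those steps condenses to a few commands once the new board is flashed with the node4 hostname, booted, and given its static IP (node4 and 192.168.50.5 are assumptions following the convention above):

NEW=node4; NEWIP=192.168.50.5
echo "${NEWIP}  ${NEW}" | sudo tee -a /etc/hosts      # add the new node to the head node's hosts file
for i in 1 2 3; do                                    # push the updated file to existing workers
  scp /etc/hosts pi@node${i}:/tmp/hosts
  ssh pi@node${i} "sudo cp /tmp/hosts /etc/hosts"
done
ssh-copy-id pi@${NEW}                                 # passwordless SSH to the new node
scp /etc/hosts pi@${NEW}:/tmp/hosts && ssh pi@${NEW} "sudo cp /tmp/hosts /etc/hosts"
echo "${NEW}:4" >> ~/machinefile                      # 4 MPI slots, one per core
microk8s add-node                                     # run the printed join command on node4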

About the Author

Chuck Wilson has been programming and building with computers since the Tandy 1000 era. His professional background includes CAD drafting, manufacturing line programming, and custom computer design. He runs PidiyLab in retirement, documenting Raspberry Pi and homelab projects that he actually deploys and maintains on real hardware. Every article on this site reflects hands-on testing on specific hardware and OS versions, not theoretical walkthroughs.

Last tested hardware: Raspberry Pi 4 Model B (4GB) x4. Last tested OS: Raspberry Pi OS Bookworm Lite 64-bit. MicroK8s 1.30, MPICH 4.1, Python 3.11.

