Raspberry Pi 5 NVMe RAID: Setup and Redundancy Guide

Last tested: Raspberry Pi OS Bookworm 64-bit | March 23, 2026 | Raspberry Pi 5 (8GB)

Raspberry Pi 5 NVMe RAID is a practical setup for anyone who wants data redundancy on a single-board computer without buying a dedicated NAS device. The Pi 5’s PCIe FPC connector supports M.2 NVMe drives via a HAT, and Linux’s mdadm software RAID handles the mirroring. The result is a RAID1 array that survives a single drive failure without data loss. This guide covers hardware selection, RAID1 setup with mdadm, boot configuration, monitoring, and the btrfs and ZFS alternatives for those who want filesystem-level RAID instead.

Key Takeaways

  • Raspberry Pi 5 NVMe RAID requires a dual-slot M.2 HAT compatible with the Pi 5’s PCIe FPC connector
  • RAID1 mirrors data across two drives. If one fails, the system keeps running. It is not a substitute for backups.
  • The root filesystem can live on the RAID array. The /boot partition typically stays on the SD card or a separate partition.
  • PCIe on Pi 5 is Gen 2 x1 by default. Gen 3 can be enabled but is currently unofficial and unsupported.
  • Active cooling is required. RAID write operations push both CPU and drive temperatures higher than idle.
  • Test your failover before you need it. Pull a drive, confirm the system still boots, then resync and verify recovery.

Hardware Requirements

What you need

  • Raspberry Pi 5 (4GB minimum, 8GB recommended for ZFS or btrfs)
  • Dual-slot M.2 HAT compatible with the Pi 5 PCIe FPC connector. The official Raspberry Pi M.2 HAT+ has only a single slot, so for two drives you need a third-party dual-slot option such as the Geekworm X1004 or Pineboards HatDrive Duo.
  • Two NVMe SSDs of identical size. Mismatched sizes work but waste capacity on the larger drive.
  • Official Raspberry Pi 5 power supply (5V/5A USB-C). Underpowered supplies cause random disconnects under RAID write load.
  • Active cooling. RAID writes run the CPU harder than typical workloads and NVMe drives generate their own heat.
  • MicroSD card for initial OS installation and /boot partition.

The PCIe interface on Pi 5 runs at Gen 2 x1 speeds by default, which gives a theoretical 500 MB/s bandwidth ceiling. In practice, real-world sequential read speeds on a single NVMe drive through the HAT land around 400 to 450 MB/s. RAID1 does not improve sequential write speed, since both drives receive every write, but reads can be faster under concurrent load because mdadm can serve different requests from different mirrors (RAID1 mirrors; it does not stripe).
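The 500 MB/s figure falls straight out of the link parameters: Gen 2 signals at 5 GT/s per lane, 8b/10b encoding leaves 80% of that as payload, and 8 bits make a byte. As a quick sanity check in shell arithmetic:

```shell
# PCIe Gen 2 x1: 5 GT/s -> 5000 Mb/s raw, x0.8 for 8b/10b encoding,
# then /8 to convert bits to bytes = 500 MB/s theoretical ceiling
echo "$(( 5000 * 8 / 10 / 8 )) MB/s"
```

Protocol overhead eats into this further, which is why measured throughput tops out around 450 MB/s.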


Initial Setup

Flash and update the OS

Flash Raspberry Pi OS Bookworm 64-bit to the microSD card using Raspberry Pi Imager. After first boot, update fully and ensure the EEPROM is current:

sudo apt update && sudo apt full-upgrade -y
sudo rpi-eeprom-update
sudo reboot

Enable PCIe

PCIe is not enabled by default on Pi 5. Add the following line to /boot/firmware/config.txt:

sudo nano /boot/firmware/config.txt

# Add this line:
dtparam=pciex1

Save and reboot. This enables the PCIe FPC connector that the M.2 HAT uses. Note that this is the dedicated PCIe interface on Pi 5, not the GPIO header.

Verify drive detection

lsblk
sudo nvme list

You should see two NVMe devices, typically /dev/nvme0n1 and /dev/nvme1n1. If only one or neither appears, reseat the drives and HAT, confirm the PCIe line is in config.txt, and check that the power supply is not sagging under load.
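This check can be scripted, for example during provisioning. A sketch, not a standard tool — `check_nvme_count` is a name made up here, and it reads `lsblk -d -n -o NAME` output on stdin:

```shell
# Count NVMe block devices in lsblk output and flag a missing drive.
# Usage: lsblk -d -n -o NAME | check_nvme_count
check_nvme_count() {
  count=$(grep -c '^nvme')
  if [ "$count" -lt 2 ]; then
    echo "only $count NVMe device(s) detected - reseat drives and check config.txt"
  else
    echo "both drives visible"
  fi
}
```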

Partition and Format the NVMe Drives

Create partitions

Partition each drive identically using fdisk. Repeat these steps for both drives:

# Partition the first drive
sudo fdisk /dev/nvme0n1
# Press g (new GPT table), n (new partition), accept defaults, w (write)

# Partition the second drive
sudo fdisk /dev/nvme1n1
# Same steps

Format the partitions

sudo mkfs.ext4 /dev/nvme0n1p1
sudo mkfs.ext4 /dev/nvme1n1p1

Confirm both partitions are formatted:

lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT

Create the RAID1 Array with mdadm

Install mdadm and create the array

sudo apt install mdadm -y

sudo mdadm --create --verbose /dev/md0 \
  --level=1 \
  --raid-devices=2 \
  /dev/nvme0n1p1 /dev/nvme1n1p1

mdadm will ask to confirm and then begin building the array. Initial sync runs in the background. Monitor progress:

watch cat /proc/mdstat

Sync time depends on drive size. A 500GB array takes roughly 30 to 60 minutes. The array is usable immediately but is in a degraded state until sync completes.
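For monitoring scripts, the progress figure can be pulled out of /proc/mdstat directly. A sketch (`md_progress` is a hypothetical helper; it reads the mdstat text on stdin):

```shell
# Extract the "resync = 12.6%" / "recovery = 37.4%" figure from mdstat text.
# Usage: md_progress < /proc/mdstat
md_progress() {
  grep -oE '(resync|recovery) += +[0-9]+\.[0-9]+%' | head -n 1
}
```

It prints nothing once the sync has finished, which itself is a usable "array is clean" signal in a script.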

Save the array configuration

# Write array config so it auto-assembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Update initramfs to include RAID support
sudo update-initramfs -u

Create filesystem on the array

sudo mkfs.ext4 /dev/md0

Mount and verify

sudo mkdir /mnt/raid
sudo mount /dev/md0 /mnt/raid
df -h | grep raid

Persistent Mounting and Boot Configuration

Add to fstab

# Get the UUID of the array
sudo blkid /dev/md0

Edit /etc/fstab and add a line using the UUID. The nofail option prevents a boot hang if the array is degraded:

UUID=your-uuid-here /mnt/raid ext4 defaults,nofail,discard,noatime 0 0
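Hand-typing UUIDs invites mistakes. A small sketch (the `fstab_entry` helper is hypothetical) builds the line from a UUID so it can be fed straight from blkid:

```shell
# Build the fstab line for a given UUID (mount point and options as above).
fstab_entry() {
  printf 'UUID=%s /mnt/raid ext4 defaults,nofail,discard,noatime 0 0\n' "$1"
}

# In practice, something like:
# fstab_entry "$(sudo blkid -s UUID -o value /dev/md0)" | sudo tee -a /etc/fstab
```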

Move the root filesystem to the RAID array (optional)

If you want the root filesystem on the RAID array rather than the SD card, clone it with rsync:

sudo rsync -aAXv / /mnt/raid \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}
# Excluding the contents (/dev/* etc.) rather than the directories themselves
# keeps the empty mount points that the system expects at boot

Then update /boot/firmware/cmdline.txt to point root at the array UUID:

sudo nano /boot/firmware/cmdline.txt
# Change root=PARTUUID=... to:
# root=UUID=your-uuid-here rootfstype=ext4

The /boot partition stays on the SD card. Raspberry Pi 5 still reads boot files from there even when the root filesystem is on NVMe. Reboot and confirm with:

findmnt /
# Should show /dev/md0 as source

Redundancy, Failover, and Drive Replacement

Test failover before you need it

Do not assume RAID1 failover works. Test it:

  1. Power down the Pi
  2. Remove one NVMe drive
  3. Power on and confirm the system boots and the array shows as degraded but functional
  4. Power down, reinsert the drive, and power on
  5. Re-add the drive to the array and watch resync complete

# Check array state after removing a drive
sudo mdadm --detail /dev/md0

# Re-add a replacement drive after reinserting
sudo mdadm --add /dev/md0 /dev/nvme1n1p1

# Monitor resync
watch cat /proc/mdstat

Email alerts for drive failure

mdadm can send email alerts when a drive drops from the array. This requires a functioning mail transfer agent on the Pi. If you have one configured, add your address to /etc/mdadm/mdadm.conf:

MAILADDR your-email@example.com

Then restart the monitoring daemon (packaged as mdmonitor on Debian-based systems):

sudo systemctl restart mdmonitor

SMART monitoring with smartmontools

sudo apt install smartmontools -y

Edit /etc/smartd.conf to monitor both NVMe drives:

/dev/nvme0 -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/nvme1 -a -o on -S on -s (S/../.././02|L/../../6/03)

Then enable and restart the daemon:

sudo systemctl enable smartd
sudo systemctl restart smartd

Performance Tuning

fstab mount options

Use noatime and commit=60 to reduce unnecessary write overhead. The discard option enables TRIM which helps maintain NVMe performance over time:

UUID=your-uuid-here /mnt/raid ext4 defaults,nofail,discard,noatime,commit=60 0 0

I/O scheduler

# Check current scheduler
cat /sys/block/nvme0n1/queue/scheduler

# Set to none for NVMe (already optimal in most cases)
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
echo none | sudo tee /sys/block/nvme1n1/queue/scheduler
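These echo writes do not survive a reboot. A udev rule makes the setting persistent; the file name below is a conventional choice, not mandated:

```shell
# Apply "none" to every NVMe namespace at device add time (persists across boots)
echo 'ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"' | \
  sudo tee /etc/udev/rules.d/60-nvme-scheduler.rules
```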

Benchmark the array

sudo apt install fio -y

# Sequential read test
fio --name=seqread --filename=/mnt/raid/testfile \
  --size=500M --bs=128k --rw=read \
  --ioengine=libaio --iodepth=16 --direct=1

# Random 4K write test
fio --name=randwrite --filename=/mnt/raid/testfile \
  --size=500M --bs=4k --rw=randwrite \
  --ioengine=libaio --iodepth=16 --direct=1

NVMe temperature monitoring

sudo apt install nvme-cli -y
sudo nvme smart-log /dev/nvme0
sudo nvme smart-log /dev/nvme1

Keep NVMe temperatures below 70 degrees C under sustained write load. Above that, drives begin thermal throttling. If temperatures are consistently high, improve airflow around the HAT or add thermal pads between the drives and the case.
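For logging temperatures over time, the value can be pulled out of the smart-log text. A sketch (`nvme_temp` is a made-up name, and the exact field layout varies slightly between nvme-cli versions):

```shell
# Print the value of the "temperature" line from nvme smart-log output.
# Usage: sudo nvme smart-log /dev/nvme0 | nvme_temp
nvme_temp() {
  awk -F': *' '/^temperature/ { print $2 }'
}
```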

Maintenance and Health Checks

Routine checks

# Quick array status
cat /proc/mdstat

# Detailed array info
sudo mdadm --detail /dev/md0

# Check for kernel errors
journalctl -p 3 -xb | grep -i nvme

# Check the journal for mdadm events (Bookworm logs to journald by default)
journalctl | grep -i mdadm

Automate periodic checks

sudo crontab -e
# Log RAID status daily at 6am
0 6 * * * cat /proc/mdstat >> /var/log/raid-check.log

# Run weekly RAID check on Sundays at 2am
0 2 * * 0 /usr/share/mdadm/checkarray --cron --all >> /var/log/mdadm_check.log

# Log NVMe SMART data weekly
0 3 * * 0 /usr/sbin/smartctl -a /dev/nvme0n1 >> /var/log/smart_nvme0.log
0 3 * * 0 /usr/sbin/smartctl -a /dev/nvme1n1 >> /var/log/smart_nvme1.log

Keep system and firmware updated

sudo apt update && sudo apt full-upgrade -y
sudo rpi-eeprom-update
sudo reboot

EEPROM updates occasionally improve PCIe and NVMe compatibility on Pi 5. Run this monthly and after any significant OS update. For write pressure reduction on the SD card used for /boot, see Setting Up zram on Raspberry Pi and Preventing SD Card Corruption on Raspberry Pi.

Alternative: RAID with btrfs

btrfs supports built-in RAID1 with checksumming and snapshots, which mdadm does not provide natively. The trade-off is more complexity and slightly higher CPU overhead for checksum verification on reads.

sudo apt install btrfs-progs -y

# Create btrfs RAID1 across both drives
sudo mkfs.btrfs -m raid1 -d raid1 /dev/nvme0n1p1 /dev/nvme1n1p1

# Mount using either device
sudo mount /dev/nvme0n1p1 /mnt/raid

btrfs snapshots give you point-in-time recovery that RAID1 alone does not provide. A deleted file is gone from both mirrors immediately with mdadm. With btrfs snapshots, you can roll back to a state before the deletion:

# Create a subvolume
sudo btrfs subvolume create /mnt/raid/@data

# Take a snapshot
sudo btrfs subvolume snapshot /mnt/raid/@data /mnt/raid/@data_$(date +%Y%m%d)
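Snapshots accumulate, so some retention logic is needed. A sketch using the dated naming above (`old_snapshots` is a hypothetical helper; it assumes the date suffix makes names sort chronologically):

```shell
# Print all but the newest seven snapshot names fed on stdin (GNU head).
old_snapshots() {
  sort | head -n -7
}

# In practice, something like:
# ls -d /mnt/raid/@data_* | old_snapshots | xargs -r -n1 sudo btrfs subvolume delete
```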

Alternative: ZFS

ZFS is available on Raspberry Pi OS via zfsutils-linux. It does not require switching to a different OS, though the package may need to be enabled from the contrib repository. ZFS offers the strongest data integrity guarantees of the three options, including end-to-end checksumming, self-healing, and flexible pool management. The cost is higher RAM overhead. On a 4GB Pi 5 with other services running, ZFS can create memory pressure. 8GB is the practical minimum for comfortable ZFS operation.

sudo apt install zfsutils-linux -y

# Create a mirrored ZFS pool
sudo zpool create myzpool mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# Check pool status
zpool status

# Replace a failed drive
zpool replace myzpool /dev/nvme0n1p1 /dev/nvme2n1p1

For most Pi 5 RAID setups, mdadm is the right choice. Use btrfs if you want snapshots. Use ZFS only if you have a specific need for its integrity features and enough RAM to support it comfortably.

Troubleshooting

NVMe drives not detected

Confirm dtparam=pciex1 is in /boot/firmware/config.txt and the Pi has been rebooted. Reseat the HAT and drives. Check that the power supply is rated 5V/5A. Use dmesg | grep -i nvme to see whether the kernel sees the drives and is reporting errors.

RAID array fails to assemble after reboot

The most common cause is a missing or incorrect entry in /etc/mdadm/mdadm.conf, or update-initramfs -u not having been run after creating the array. Check the config file contains the correct array definition, run sudo update-initramfs -u, and reboot.

Boot fails with cannot find root

Recheck the UUID in /boot/firmware/cmdline.txt. The UUID must match exactly what sudo blkid /dev/md0 returns. A mismatch here causes a boot failure. Boot from the SD card in rescue mode to correct it.

One drive keeps dropping from the array

Check NVMe temperature with sudo nvme smart-log /dev/nvme0. Check for power issues with vcgencmd get_throttled. A sagging power supply causes intermittent PCIe errors that look like drive failures. Check dmesg for PCIe link errors. See Raspberry Pi Randomly Reboots Under Load for a full breakdown of throttle flags and power-related failure patterns.

Use Cases

  • Network-attached storage serving SMB or NFS shares where a single drive failure would cause data loss
  • Log collection or database storage where continuous availability matters more than maximum throughput
  • Offsite backup receiver that needs to survive hardware failure without manual intervention
  • Kiosk or always-on display where drive failure would cause visible downtime

RAID1 on Pi 5 is not the right choice for every project. If maximum storage capacity matters more than redundancy, a single larger NVMe drive plus regular backups to an external location is simpler and costs less. RAID earns its setup complexity when continuous availability after a single drive failure is a real requirement rather than a theoretical one.

FAQ

Can I use two different-sized NVMe drives in RAID1?

Technically yes, but the array size matches the smaller drive and the extra capacity on the larger drive is wasted. Use identical drives to avoid this. Mismatched drives also complicate replacement planning since you need to source a specific size later.

Can I boot Raspberry Pi 5 entirely from NVMe RAID?

The root filesystem can live on the RAID array. The /boot partition typically stays on the SD card since Pi 5 reads boot files from there before handing off to the OS. It is possible to migrate /boot to NVMe as well but requires additional configuration and is not necessary for most setups.

Does RAID1 protect against accidental file deletion?

No. RAID1 mirrors every write operation, including deletions. A deleted file is removed from both drives immediately. Only backups protect against accidental deletion. btrfs snapshots can provide a recovery window for deletions if snapshots are taken frequently enough.

What happens when one NVMe drive fails?

The system continues running on the surviving drive in degraded mode. mdadm logs the event and can send an email alert if configured. Replace the failed drive, run sudo mdadm --add /dev/md0 /dev/nvmeXn1p1, and the array resyncs automatically. The Pi stays running throughout the resync.

Is btrfs or ZFS better than mdadm for Pi 5?

mdadm is the simplest and most resource-efficient option and suits most Pi 5 RAID setups. btrfs adds snapshots and checksumming at modest extra cost. ZFS adds the strongest integrity guarantees but needs 8GB RAM to run comfortably alongside other services. Use mdadm unless you have a specific reason to need what btrfs or ZFS provides.

Does PCIe Gen 3 mode improve RAID performance?

Enabling Gen 3 mode raises the theoretical bandwidth ceiling from 500 MB/s to 1 GB/s, but it is currently unofficial and unsupported by the Raspberry Pi Foundation. Some users enable it successfully, but it can cause instability depending on the HAT and drive combination. For a RAID setup where data integrity is the priority, staying on the default Gen 2 is the safer choice.
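For reference, if you do decide to experiment, Gen 3 is requested with one extra line in /boot/firmware/config.txt. Revert it if dmesg starts showing PCIe link errors:

```shell
# /boot/firmware/config.txt - unofficial, may be unstable on some HAT/drive combos
dtparam=pciex1
dtparam=pciex1_gen=3
```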

About the Author

Chuck Wilson has been programming and building with computers since the Tandy 1000 era. His professional background includes CAD drafting, manufacturing line programming, and custom computer design. He runs PidiyLab in retirement, documenting Raspberry Pi and homelab projects that he actually deploys and maintains on real hardware. Every article on this site reflects hands-on testing on specific hardware and OS versions, not theoretical walkthroughs.

Last tested hardware: Raspberry Pi 5 (8GB). Last tested OS: Raspberry Pi OS Bookworm 64-bit.
