RAID Linux VPS Setup For Optimal Speed

If you use a Linux VPS that handles high-traffic apps and databases, disk performance can become a bottleneck. Redundant Array of Independent Disks (RAID) provides a way to combine multiple storage devices into a single logical volume. It delivers higher speed, fault tolerance, or a balance of both. In this guide, you will learn a step-by-step RAID Linux VPS Setup For Optimal Speed.

For optimal speed, two of the most effective configurations are RAID 0 and RAID 10. Continue reading on PerLod Hosting to learn how to build a storage configuration that optimizes speed without compromising stability.

RAID Linux VPS Setup For Optimal Speed

In this guide, we will use Linux software RAID (mdadm) on both Debian- and RHEL-based distros. Note that hardware RAID is controlled by the host, not by your VPS; here, we use software RAID inside the VPS.

You must have two or more separate block devices, such as /dev/vdb and /dev/vdc. If you see only one disk, ask your provider to attach more.

Remember to keep backups regardless of RAID level. Also, for speed-focused arrays, it is recommended to create a dedicated data mount point, for example, /data rather than putting / on RAID.

Now, let’s understand the key points of RAID 0 and RAID 10, and then start the setup steps.

RAID 0 and RAID 10 Key Points

The core concept of RAID 0 is striping, and of RAID 1 is mirroring. Here are the key points, so you understand what you will build and when to choose which level.

RAID 0 (Striping):

  • Goal: Maximum Throughput. Because data is written to and read from all disks at once, the speed can be nearly the sum of the individual disks’ speeds. This is perfect for large and sequential file operations.
  • Minimum Disks: 2.
  • Fault Tolerance: None. There is no redundancy. Every piece of data is critical. If any single disk in the array fails, the entire dataset becomes corrupted and unrecoverable because parts of every file are missing.
  • Capacity: 100% Usable. Since no space is used for backup copies, all the disk space is available for data.
  • Use Cases: It’s for data you can afford to lose but need to process very quickly. You should keep in mind that you should never use it as your only copy of important data.

RAID 10 (Striped Mirrors), also called RAID 1+0:

  • Goal: High Throughput and Safety. You get the read and write speed benefits of striping across multiple pairs, plus the high reliability of mirroring.
  • Minimum Disks: 4, in pairs.
  • Fault Tolerance: Excellent. The array can survive the failure of multiple disks, as long as no entire mirror pair is lost.
  • Capacity: 50% Usable. Because every byte of data is duplicated on another disk, you lose half your raw capacity.
  • Use Cases: The Gold Standard for Performance. This is ideal for any important workload where speed and uptime are essential.
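The capacity arithmetic above can be made concrete with a minimal shell sketch. The function name usable_gib and the example disk sizes are illustrative, not from any tool:

```shell
# Usable capacity by RAID level: RAID 0 keeps all raw space,
# RAID 10 mirrors every byte and so keeps half.
usable_gib() {
  level=$1; disks=$2; per_disk_gib=$3
  case "$level" in
    0)  echo $(( disks * per_disk_gib )) ;;
    10) echo $(( disks * per_disk_gib / 2 )) ;;
  esac
}

usable_gib 0 2 100    # 2 x 100 GiB in RAID 0  -> 200 GiB usable
usable_gib 10 4 100   # 4 x 100 GiB in RAID 10 -> 200 GiB usable
```

Note that four disks in RAID 10 give the same usable space as two of those disks in RAID 0, but with fault tolerance.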

Let’s dive into the RAID setup.

List Block Devices on Linux VPS

First of all, you must list block devices. You can use the following command:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,FSTYPE,MODEL

In the output, you should see at least two unmounted devices, for example, /dev/vdb and /dev/vdc. If you don’t, you must attach more disks.
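If the host has many devices, you can filter the lsblk output down to RAID-candidate disks with a small awk helper. The function name unmounted_disks and the sample device names below are illustrative:

```shell
# Filter `lsblk -dn -o NAME,TYPE,MOUNTPOINT` output down to whole
# disks that have no mountpoint (candidates for RAID members).
unmounted_disks() {
  awk '$2 == "disk" && $3 == "" { print "/dev/" $1 }'
}

# On a real VPS you would pipe lsblk in:
#   lsblk -dn -o NAME,TYPE,MOUNTPOINT | unmounted_disks
# Demo with hypothetical lsblk output:
printf 'vda disk /\nvdb disk\nvdc disk\n' | unmounted_disks
# prints /dev/vdb and /dev/vdc (vda is excluded because it is mounted at /)
```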

Install Required Tools for RAID on Linux VPS

At this point, you must install the required tools, including the software RAID tool, partitioning, filesystems, and performance testing on your Linux VPS. To do this, you can run the following commands depending on your OS:

Debian / Ubuntu

sudo apt update
sudo apt install mdadm parted xfsprogs e2fsprogs fio -y

RHEL / AlmaLinux / Rocky Linux

sudo dnf update -y
sudo dnf install mdadm parted xfsprogs e2fsprogs fio -y

Note: If you want to add the array to the boot process later, you must persist the config (mdadm.conf) and update the initramfs.

In Debian/Ubuntu:

sudo update-initramfs -u

In RHEL:

sudo dracut -H -f

Choose mdadm Chunk Size – Essential for Speed

At this point, you must choose a chunk size. The mdadm chunk size controls how data is split across members for striping. Here are the factors to consider:

The best general choice for sequential throughput is 512 KiB (i.e., --chunk=512).

Also, filesystem tuning should match:

  • XFS: set su (stripe unit) equal to the chunk size, and sw (stripe width) equal to the number of data devices.
  • ext4: compute stride = chunk / block_size. The block size is typically 4 KiB → 512 KiB / 4 KiB = 128, and stripe-width = stride × data devices.

The data devices are the disks that hold unique data simultaneously. RAID 0 with 2 disks has sw=2 (data devices = 2). RAID 10 with 4 disks in the default near layout also has sw=2 (two data stripes, each mirrored).
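The stride and stripe-width arithmetic can be checked with a few lines of shell. The values here match the 512 KiB chunk / 4 KiB block example above:

```shell
# ext4 tuning math: stride = chunk / block_size,
# stripe_width = stride * data_devices.
chunk_kib=512
block_kib=4
data_devices=2   # RAID 0 with 2 disks, or RAID 10 (n2) with 4 disks

stride=$(( chunk_kib / block_kib ))
stripe_width=$(( stride * data_devices ))

echo "stride=${stride} stripe-width=${stripe_width}"   # stride=128 stripe-width=256
```

These are the numbers passed to mkfs.ext4 later in this guide.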

RAID Partition Member Disks

At this point, it is recommended to create a dedicated RAID partition on each disk rather than using the whole disk directly; it makes future maintenance easier. Let's look at examples for RAID 0 and RAID 10. Be aware that this will erase the partition tables on these disks.

Two Disks for RAID 0: /dev/vdb and /dev/vdc

This will create a single GPT partition spanning the disk and mark it for RAID:

for d in /dev/vdb /dev/vdc; do
  sudo parted -s "$d" mklabel gpt
  sudo parted -s -a optimal "$d" mkpart primary 1MiB 100%
  sudo parted -s "$d" set 1 raid on
done

Four Disks for RAID 10: /dev/vdb through /dev/vde

for d in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
  sudo parted -s "$d" mklabel gpt
  sudo parted -s -a optimal "$d" mkpart primary 1MiB 100%
  sudo parted -s "$d" set 1 raid on
done

You can verify it by using this command:

lsblk -o NAME,SIZE,TYPE,PARTTYPENAME /dev/vdb /dev/vdc /dev/vdd /dev/vde

Create a RAID Array To Improve Performance

At this point, you can create a RAID array that combines multiple disks to work together as one. With mdadm, you can set up and monitor arrays easily.

For RAID 0 (2 disks), you can run the command below:

sudo mdadm --create /dev/md0 \
  --level=0 \
  --raid-devices=2 \
  --chunk=512 \
  /dev/vdb1 /dev/vdc1

Flags meaning:

  • --level=0: striping (no redundancy).
  • --raid-devices=2: two members.
  • --chunk=512: 512 KiB stripe unit per disk.

For RAID 10 (4 disks, default layout), you can run:

sudo mdadm --create /dev/md0 \
  --level=10 \
  --raid-devices=4 \
  --chunk=512 \
  /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1

Note: Default layout is near=2 (n2), which mirrors pairs and stripes across pairs.

To monitor the RAID status, you can run the following command:

cat /proc/mdstat

To check detailed info about the RAID, you can use the command below:

sudo mdadm --detail /dev/md0

Create the RAID Filesystem

When you create a RAID array, you also need to make a filesystem on top of it so you can store files. To get the best speed, the filesystem should be tuned to match how the RAID stripes data across disks. You can choose:

  • XFS if you want high throughput (fast with large files).
  • ext4 is a reliable all-round choice and works fine even without tuning.

Let’s see how to create them.

1. XFS (recommended for throughput):

For RAID 0 with 2 data devices, you can run the following command:

sudo mkfs.xfs -f -d su=512k,sw=2 /dev/md0

For RAID 10 with n2 over 4 disks, you can run the following command:

sudo mkfs.xfs -f -d su=512k,sw=2 /dev/md0

2. ext4 with --chunk=512 (512 KiB) and 4 KiB filesystem blocks: stride = 128:

For RAID 0 with 2 data devices, you can run:

sudo mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0

For RAID 10 with n2, you can run:

sudo mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0

Mount RAID Device and Persist Across Reboots

After creating your RAID and filesystem, you need to mount it so you can use it, and make sure it’s available after every reboot. You can do this by mounting the RAID device to a folder like /data and then adding its UUID to the /etc/fstab file.

Create the folder and mount the RAID device with the following commands:

sudo mkdir -p /data
sudo mount /dev/md0 /data

Find the UUID of the RAID device:

sudo blkid /dev/md0

Then, add it to /etc/fstab, pasting the UUID from the previous command after UUID=.

For XFS:

echo 'UUID= /data xfs defaults,nofail 0 0' | sudo tee -a /etc/fstab

For ext4:

echo 'UUID= /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab

Note: The nofail option is added so that if the RAID isn’t ready at boot time, your system will still start normally without getting stuck.
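To avoid pasting the UUID by hand, the fstab line can be composed from the blkid output. The helper name fstab_line and the sample UUID below are illustrative:

```shell
# Build an fstab entry for /data from a UUID and a filesystem type.
fstab_line() {
  printf 'UUID=%s /data %s defaults,nofail 0 2\n' "$1" "$2"
}

# On a real VPS, you would feed in the real UUID:
#   fstab_line "$(sudo blkid -s UUID -o value /dev/md0)" ext4 | sudo tee -a /etc/fstab
# Demo with a made-up UUID:
fstab_line 3f1d9a2e-1b2c-4d5e-8f90-123456789abc ext4
```

Composing the line in one place keeps quoting and field-order mistakes out of /etc/fstab.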

Persist mdadm configuration and initramfs

To make sure your RAID array is recognized automatically at boot, you need to save its configuration and update the boot images. This way, the operating system can assemble the array early when starting up.

In Debian and Ubuntu, you can run:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

In RHEL, you can run:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

If the array is required early in boot, update boot images:

sudo update-initramfs -u   # Debian/Ubuntu
sudo dracut -H -f          # RHEL

Run Quick Performance Test After RAID Setup

After setting up your RAID, you can run quick tests with fio to check speed and confirm everything works as expected. You can test sequential reads/writes (big files) and random I/O (small files) to compare RAID 0 and RAID 10 performance.

You can use fio for simple and repeatable checks.

For sequential throughput (2 GiB), you can run:

fio --name=seqwrite --directory=/data --size=2G --bs=1M --rw=write --ioengine=libaio --direct=1 --iodepth=16
fio --name=seqread --directory=/data --size=2G --bs=1M --rw=read --ioengine=libaio --direct=1 --iodepth=16

For random I/O (4 KiB, mixed), you can run:

fio --name=randrw --directory=/data --size=1G --bs=4k --rw=randrw --rwmixread=50 --ioengine=libaio --direct=1 --iodepth=32

RAID 0 should show the highest raw throughput but zero safety, and RAID 10 should be close to RAID 0 for reads and strong for writes, while also tolerating disk failures.

Basic Maintenance for RAID Health

To keep your RAID healthy, you should monitor its status, test what happens if a disk fails, and know how to add a replacement disk. You can also grow the filesystem if you expand the array later.

To check health, you can use:

cat /proc/mdstat
sudo mdadm --detail /dev/md0

To simulate a member failure (RAID 10 only; RAID 0 cannot survive one), you can run:

sudo mdadm /dev/md0 --fail /dev/vdb1
sudo mdadm /dev/md0 --remove /dev/vdb1

To add a replacement disk, you need to partition a new disk the same way and then run the commands below:

sudo mdadm /dev/md0 --add /dev/vdf1
cat /proc/mdstat

If you expand the array later, you can grow the filesystem:

sudo resize2fs /dev/md0   # ext4
sudo xfs_growfs /data     # XFS (online grow while mounted)

Tips for RAID VPS Performance Setup

To get the best performance, consider the following best practices:

  • Use XFS for large, sequential workloads.
  • Pick a larger chunk size (such as 512 KiB) for throughput.
  • Store metadata/logs separately when possible.
  • Regularly monitor with mdstat and mdadm.
  • Remember, RAID 0 is fast but unsafe, and RAID 10 balances speed and safety.

FAQs

What’s the difference between RAID 0 and RAID 10?

RAID 0 stripes data for maximum speed, but if one disk fails, all data is lost. RAID 10 combines speed and redundancy by mirroring and striping, so it’s fast and safer.

Which filesystem should I use for RAID: XFS or ext4?

XFS is best for large, sequential workloads and high throughput. ext4 works well for general-purpose use.

Does RAID replace backups?

No. RAID can improve performance and, in some cases, provide redundancy, but it does not replace regular backups. Always back up your VPS data.

Conclusion

Setting up RAID on Linux VPS is one of the best ways to optimize speed and performance, especially for data-heavy workloads. RAID 0 offers raw throughput, while RAID 10 balances both speed and safety. Pairing the right RAID level with the right filesystem, like XFS or ext4, ensures your VPS runs smoothly under demanding tasks.

If you want the benefits of a properly tuned RAID setup, you can consider PerLod VPS Hosting Service. PerLod provides high-performance Linux VPS solutions designed for speed, reliability, and scalability.

We hope this guide, RAID Linux VPS Setup For Optimal Speed, is useful for you. Subscribe to our X and Facebook channels to get the latest articles and setup guides.

For further reading:

How to Secure an OpenSearch Cluster on Ubuntu

Implement Microsegmentation in Linux Server

Zero-Downtime Live Patching Linux kernel
