NVMe vs SSD on Linux VPS

Maximize Linux VPS Speed: NVMe vs SSD Benchmarks and Optimization

Choosing between NVMe and SATA SSD storage for a Linux VPS comes down to one key difference: performance. NVMe drives reach up to 7,000 MB/s, while SATA SSDs top out around 550 MB/s. But raw numbers alone don't show how this affects your actual workloads.

This guide shows you how to benchmark both drive types on your Linux VPS hosting and optimize them for maximum performance.

Performance Difference Between NVMe vs SSD on Linux VPS

NVMe connects directly to the CPU through PCIe, while SATA SSDs use the older SATA interface, which is limited to 600 MB/s. In real tests, NVMe hits 7,050 MB/s for sequential reads compared to 540 MB/s for a SATA SSD, and delivers 1,200,000 IOPS versus 98,000 for SATA.

Here are the key performance metrics for NVMe vs SATA SSD on a Linux VPS:

Metric                     SATA SSD    NVMe SSD (PCIe 4.0)
Sequential Read            540 MB/s    7,050 MB/s
Sequential Write           500 MB/s    6,400 MB/s
Random Read IOPS (4k)      98,000      1,200,000
Random Write IOPS (4k)     88,000      950,000
Read Latency               120 μs      15 μs
Write Latency              180 μs      20 μs

The latency difference matters most for database workloads: in MongoDB query tests, NVMe cut latency from 8.2 ms to 1.1 ms.
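To make that gap concrete, a quick back-of-the-envelope calculation (assuming the MongoDB figures above and fully serial queries, which is a simplification) shows how per-query latency compounds:

```shell
# Hypothetical serial workload: 10,000 queries, one after another.
# 8.2 ms per query on SATA SSD vs 1.1 ms on NVMe (figures from the text).
awk 'BEGIN {
    queries = 10000
    sata_ms = 8.2; nvme_ms = 1.1
    printf "SATA SSD: %.1f s total\n", queries * sata_ms / 1000
    printf "NVMe:     %.1f s total\n", queries * nvme_ms / 1000
}'
```

Real databases overlap queries, so the absolute times shrink, but the roughly 7x ratio between the two drives stays the same.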

Install Benchmark Tools To Test and Optimize NVMe vs SSD

To test and optimize NVMe and SATA SSD performance on a Linux VPS, first install benchmark tools. Here, we assume your Linux VPS runs Ubuntu. Use the commands below:

sudo apt update
sudo apt install fio ioping hdparm nvme-cli sysstat iotop -y

Tools explanation:

  • fio: Disk I/O benchmark tool
  • ioping: Real-time latency tester
  • hdparm: Drive speed and info checker
  • nvme-cli: NVMe management utility
  • sysstat: Performance monitoring
  • iotop: Process I/O monitor

Once the installation completes, list all block devices with the command below to identify your drives:

lsblk -d -o NAME,SIZE,ROTA,TYPE,TRAN

In the output, you will see each device's name, size, rotation flag (ROTA, 0 for non-rotating drives), type, and transport protocol (nvme or sata).
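As a sketch, the transport column can be used to classify each drive automatically. The heredoc below stands in for real `lsblk -d -o NAME,TRAN` output; the device names and values are illustrative:

```shell
# Classify drives by transport protocol from lsblk-style NAME TRAN output.
# The heredoc mimics `lsblk -d -o NAME,TRAN`; values are illustrative.
while read -r name tran; do
    case "$tran" in
        nvme) echo "$name: NVMe drive (PCIe)";;
        sata) echo "$name: SATA drive (SSD or HDD - check ROTA)";;
        *)    echo "$name: other transport ($tran)";;
    esac
done <<'EOF'
nvme0n1 nvme
sda sata
EOF
```

In practice you would pipe `lsblk -dn -o NAME,TRAN` into the same loop instead of the heredoc.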

For NVMe-specific information, you can use the command below to display all NVMe devices with namespace IDs, capacities, and model information:

sudo nvme list

Real-World Speed Tests for NVMe vs SSD

At this point, you can run real-world speed tests for NVMe vs SSD. They measure four essential metrics:

  • Sequential read and write speed for large files.
  • Random IOPS for database operations.
  • Mixed workloads for web servers.
  • Latency for response times.

You can run each test on both drive types to see the actual performance difference on your Linux VPS.
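As a convenience, the four tests can be wrapped in a small driver script. This is only a sketch: by default it prints the fio command lines it would run (set DRY_RUN=0 to execute them), and the test-file path /mnt/test.dat is an assumption you should adjust for your system:

```shell
#!/bin/sh
# Sketch: build the four fio command lines used in this guide.
# DRY_RUN=1 (default) only prints the commands; DRY_RUN=0 actually runs fio.
DRY_RUN=${DRY_RUN:-1}
FILE=/mnt/test.dat   # assumed test-file path; adjust to a mounted filesystem
COMMON="--filename=$FILE --size=10G --direct=1 --ioengine=libaio --numjobs=4 --runtime=60 --time_based --group_reporting"

run() {
    # $1 = job name, $2 = per-test flags
    echo "fio --name=$1 $COMMON $2"
    if [ "$DRY_RUN" = "0" ]; then
        sudo fio --name="$1" $COMMON $2
    fi
}

run seq_read   "--rw=read --bs=128k --iodepth=32"
run seq_write  "--rw=write --bs=128k --iodepth=32"
run rand_read  "--rw=randread --bs=4k --iodepth=256"
run rand_write "--rw=randwrite --bs=4k --iodepth=256"
```

Run it once against each drive type and compare the results side by side.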

1. Sequential Read/Write Performance:

Test sequential read speed with the command below:

sudo fio --name=seq_read --filename=/dev/nvme0n1 --rw=read --direct=1 --bs=128k --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Test sequential write speed with the following command:

sudo fio --name=seq_write --filename=/dev/nvme0n1 --rw=write --direct=1 --bs=128k --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Warning: The command above writes directly to the raw device and destroys any data on it. Use a test file instead:

sudo fio --name=seq_write --filename=/mnt/test.dat --size=10G --rw=write --direct=1 --bs=128k --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

The --size=10G parameter limits the test file to 10 GB so the test doesn't fill the entire disk.

2. Random Read/Write IOPS:

Test random read IOPS with the command below:

sudo fio --name=rand_read --filename=/mnt/test.dat --size=10G --rw=randread --direct=1 --bs=4k --ioengine=libaio --iodepth=256 --numjobs=4 --runtime=60 --time_based --group_reporting

Test random write IOPS with the following command:

sudo fio --name=rand_write --filename=/mnt/test.dat --size=10G --rw=randwrite --direct=1 --bs=4k --ioengine=libaio --iodepth=256 --numjobs=4 --runtime=60 --time_based --group_reporting

Random write tests show how drives perform under heavy database loads.

3. Mixed Workload Testing:

Test 70% read and 30% write mix with the command below:

sudo fio --name=mixed_rw --filename=/mnt/test.dat --size=10G --rw=randrw --rwmixread=70 --direct=1 --bs=4k --ioengine=libaio --iodepth=64 --numjobs=4 --runtime=60 --time_based --group_reporting

4. Latency Testing with IOPing:

You can measure real-time latency with the following command:

sudo ioping -c 100 /mnt/

NVMe drives typically show latency under 100 μs, while SATA SSDs range from 100 to 300 μs.

For a fast read speed check, you can use the hdparm command:

sudo hdparm -tT /dev/nvme0n1

Optimizing and Monitoring NVMe vs SSD on Linux VPS

At this point, you can optimize and monitor NVMe vs SSD on Linux VPS so you get consistent performance under real workloads.

I/O Scheduler Configuration

The I/O scheduler determines how read and write requests are queued and dispatched to storage devices.

To check the current scheduler, you can use the command below:

cat /sys/block/nvme0n1/queue/scheduler

In the output, you will see the available schedulers with the active one in brackets:

[none] mq-deadline kyber
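If you script this check across many devices, the active scheduler can be extracted from that string with sed. The sample value below is hard-coded for illustration; in practice you would read it from /sys/block/<device>/queue/scheduler:

```shell
# Extract the active scheduler (the bracketed entry) from the
# /sys/block/<dev>/queue/scheduler format. Sample string is illustrative;
# normally: sched_line=$(cat /sys/block/nvme0n1/queue/scheduler)
sched_line="[none] mq-deadline kyber"
active=$(echo "$sched_line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "Active scheduler: $active"
```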

Optimal Scheduler Settings for NVMe and SSD:

For NVMe drives, you can configure them with the command below:

echo none | sudo tee /sys/block/nvme0n1/queue/scheduler

The none scheduler works best for NVMe because the drive's controller reorders requests internally more efficiently than the kernel's scheduler can.

For SATA SSDs, you can use the command below:

echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler

The mq-deadline scheduler prevents I/O delays while keeping good performance for SATA devices.

To make the scheduler settings permanent, create a udev rules file:

sudo nano /etc/udev/rules.d/60-scheduler.rules

Add these lines to the file:

ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"

  • KERNEL=="nvme[0-9]n[0-9]": Matches NVMe devices such as nvme0n1, nvme1n1, etc.
  • ATTR{queue/rotational}=="0": Matches non-rotating devices (SSDs), so the rule skips spinning disks.

NVMe-Specific Tuning

NVMe drives need different settings than SATA SSDs to reach their full potential. Here are the most common optimizations you can consider for NVMe:

1. Disable Autonomous Power State Transitions: For consistent low latency, you can disable power management. Open the Grub file with the command below:

sudo nano /etc/default/grub

Add nvme_core.default_ps_max_latency_us=0 to the GRUB_CMDLINE_LINUX line, keeping any parameters already there:

GRUB_CMDLINE_LINUX="nvme_core.default_ps_max_latency_us=0"

Update GRUB and reboot with the commands below:

sudo update-grub
sudo reboot

This prevents the NVMe drive from entering power-saving states that increase latency.

2. Increase Queue Depth: Higher queue depth allows more I/O requests to be queued, which improves throughput.

Check the current queue depth with the command below:

cat /sys/block/nvme0n1/queue/nr_requests

Increase for high-concurrency workloads with the following command:

echo 1024 | sudo tee /sys/block/nvme0n1/queue/nr_requests

3. Monitor NVMe Health: You can use the following command to check temperature, percentage used, available spare capacity, and critical warnings:

sudo nvme smart-log /dev/nvme0n1
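For scripted monitoring, the key health fields can be pulled out with awk. The heredoc below mimics the smart-log output format with illustrative values; in practice you would pipe `sudo nvme smart-log /dev/nvme0n1` into the same awk program:

```shell
# Pull key health fields from nvme smart-log style "field : value" output.
# The heredoc mimics the real format; the values are illustrative.
awk -F': *' '
/critical_warning/ { warn = $2 }
/^temperature/     { temp = $2 }
/percentage_used/  { used = $2 }
END { printf "temp=%s used=%s warnings=%s\n", temp, used, warn }
' <<'EOF'
critical_warning                    : 0
temperature                         : 38 C
percentage_used                     : 3%
EOF
```

A non-zero critical_warning or a percentage_used approaching 100% means the drive needs attention.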

Tip: For more advanced NVMe tuning, you can check this guide on NVMe Optimization for Linux Servers.

Filesystem Optimization

How you mount and maintain your filesystem directly affects NVMe and SSD performance. These settings reduce unnecessary write operations and keep drives fast over time through proper space management.

1. Mount Options for SSDs and NVMe:

Edit /etc/fstab with the command below to add performance options:

sudo nano /etc/fstab

Add the noatime and discard options to the relevant entry:

/dev/nvme0n1p1  /mnt  ext4  defaults,noatime,discard  0  2

  • noatime: Skips tracking file access times to reduce writes.
  • discard: Enables automatic (continuous) TRIM to maintain SSD speed.

Apply changes with the following command:

sudo mount -o remount /mnt
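It's worth confirming the options actually took effect, for example with findmnt -no OPTIONS /mnt. A sketch of such a check follows; the sample option string is illustrative and would normally come from findmnt:

```shell
# Check that noatime is present in a mount's option string.
# In practice: opts=$(findmnt -no OPTIONS /mnt). Sample value is illustrative.
opts="rw,noatime,discard"
case ",$opts," in
    *,noatime,*) echo "noatime is active";;
    *)           echo "WARNING: noatime missing";;
esac
```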

2. Enable Periodic TRIM: As an alternative to the continuous discard mount option, you can enable the fstrim timer, which runs TRIM weekly to maintain SSD performance:

sudo systemctl enable fstrim.timer
sudo systemctl start fstrim.timer

Performance Monitoring

At this point, you can track which processes are using disk I/O so you can identify bottlenecks and confirm your NVMe or SSD tuning is actually working.

Monitor which processes are using disk I/O with the command below:

sudo iotop -o

View detailed I/O statistics by using the following command:

iostat -x 1
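To spot a saturated drive at a glance, you can filter the %util column (the last field of each device line). This sketch assumes a single header line for simplicity, and the heredoc stands in for real iostat output with illustrative values; in practice you would pipe `iostat -x` into the same awk program:

```shell
# Flag devices whose %util (last column of iostat -x) exceeds 80%.
# The heredoc mimics iostat device lines; values are illustrative.
awk 'NR > 1 && $NF + 0 > 80 { print $1 " is " $NF "% utilized" }' <<'EOF'
Device   r/s   w/s   %util
nvme0n1  1200  800   95.20
sda      10    5     12.00
EOF
```

Sustained utilization near 100% on a SATA SSD, while the same workload leaves an NVMe drive mostly idle, is a strong signal the SATA interface is the bottleneck.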

FAQs

What is the main difference between NVMe and SSD?

The main difference is in performance. NVMe uses PCIe for speeds up to 7,000 MB/s, while SATA SSDs are limited to 600 MB/s.

How do I check if my Linux VPS uses NVMe or SSD?

Run lsblk -d -o NAME,TRAN. If the TRAN column shows nvme, you have an NVMe drive; sata indicates a SATA drive (check the ROTA column to confirm it is an SSD rather than an HDD).

Conclusion

NVMe vs SSD benchmarks on a Linux VPS show NVMe drives deliver roughly 10x faster sequential speeds and 12x higher random IOPS than SATA SSDs. That performance matters most for databases, where it drops query latency from over 8 ms to around 1 ms.

By applying simple tuning like the none scheduler and proper mount options, you ensure your Linux VPS hosting gets the maximum performance from your hardware.

To see this performance in action, you can consider upgrading to a high-performance provider like Perlod Hosting.

We hope you enjoyed this guide on NVMe vs SSD on Linux VPS. Subscribe to our X and Facebook channels to get the latest articles.
