
Implement QoS at Hardware Level for VPS
Quality of Service (QoS) on a Virtual Private Server (VPS) describes how well your server resources, such as CPU, RAM, and network bandwidth, are managed and guaranteed by the hosting provider. QoS determines how stable, fast, and reliable your VPS stays, even when other users share the same physical server. In this guide, you will learn how to implement QoS at the hardware level for a VPS.
You will learn how to control CPU, memory, disk, and network usage on your VPS so that heavy jobs like backups don’t slow down important services like SSH or websites. The following guide steps are safe for modern Ubuntu or Debian (systemd + cgroups v2) and work on most KVM-based VPSs.
Note: On a VPS, you only control what happens inside your own virtual machine. You can set limits on CPU, memory, disk, and network usage, but you cannot force the host system to give you more physical resources. QoS helps you stop one heavy job from slowing down everything else.
At PerLod Hosting, we prioritize hardware-level QoS to guarantee consistent speed, uptime, and efficiency for all our clients.
Prerequisites and Quick Health Checks for Hardware QoS VPS
Before applying QoS, you need to confirm your VPS supports the required features. You must check your permissions, system version, cgroup support, main network interface, and storage device.
First, make sure you have root access:
sudo -v
Then, check if your system uses cgroups v2, which is needed for modern resource control:
mount | grep cgroup2 || cat /sys/fs/cgroup/cgroup.controllers
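If cgroups v2 is active, the second command prints the available controllers on one line. The exact list varies by kernel and distribution, but it should at least include cpu, io, and memory, similar to:
cpuset cpu io memory hugetlb pids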
Find your main network interface, like eth0 or ens3, and store it in $IF:
IF=$(ip route get 1.1.1.1 | awk '{for(i=1;i<=NF;i++) if ($i=="dev"){print $(i+1); exit}}')
echo "Interface: $IF"
Also, list your block devices to identify which disk to apply I/O limits to:
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT,ROTA
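On a typical KVM VPS, the output looks similar to the following (names and sizes will differ). A ROTA value of 0 means a non-rotational disk (SSD/NVMe); 1 means an HDD:
NAME   TYPE  SIZE MOUNTPOINT ROTA
vda    disk   80G               0
└─vda1 part   80G /             0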
Note: If your provider blocks Traffic Control (tc) or custom qdiscs, you’ll see errors; skip the network section in that case.
Quick Improvements To Apply Immediately for Hardware QoS VPS
Some simple tricks can give you quick improvements without a deep setup. By adjusting CPU and disk priorities (niceness/ionice) and enabling smarter network queuing (fq_codel), you can reduce slowdowns and network lag right away.
CPU Niceness: This runs a program with a lower CPU priority so other tasks stay smooth.
nice -n 10 long_running_command &
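If the job is already running, you can lower its priority after the fact with renice. Replace <PID> with the target process ID:
renice -n 10 -p <PID>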
I/O Niceness: This lowers a program’s disk access priority for an already-running process. It mainly helps on HDDs; it has little effect on NVMe (see the troubleshooting section below). Replace <PID> with the target process ID:
ionice -c3 -p <PID>
Or, you can use:
ionice -c3 long_io_heavy_command
Smarter NIC Queueing: It enables fq_codel, which reduces network latency spikes.
sudo modprobe sch_fq_codel
sudo tc qdisc replace dev "$IF" root fq_codel
tc -s qdisc show dev "$IF"
Steps To Implement QoS at Hardware Level for VPS
At this point, you can follow the steps below to configure CPU QoS with systemd, memory QoS, disk I/O QoS, and network QoS.
Let’s dive into the details.

1. CPU QoS with systemd and cgroups v2
Here, you control how much CPU each task or service can use. You can also set limits for single commands or create a special “slice” for background jobs. This keeps critical services responsive even when heavy processes run.
You can use the following command, which runs a program capped at 30% CPU with limited weight:
sudo systemd-run --scope -p CPUQuota=30% -p CPUWeight=50 \
-p IOWeight=50 -- /usr/bin/rsync -a /data/ /remote/
Or, you can create a “background-batch.slice” with low CPU and I/O weight, plus memory limits:
sudo tee /etc/systemd/system/background-batch.slice >/dev/null <<'EOF'
[Unit]
Description=Low-priority background workloads
[Slice]
# 1..10000 (default 100). Lower weight => less CPU share under contention.
CPUWeight=50
IOWeight=50
# Optional memory safety rails:
# MemoryHigh is a soft throttle point; MemoryMax is a hard cap.
MemoryHigh=1G
MemoryMax=2G
EOF
Once you are done, reload the system services to apply the changes:
sudo systemctl daemon-reload
sudo systemctl start background-batch.slice
Note: You can run one-off commands in the slice or assign services to it permanently, which gives you easy, repeatable resource control.
Run a one-off command inside the slice:
sudo systemd-run --scope -p Slice=background-batch.slice -- \
bash -lc 'nice -n 10 ionice -c3 long_heavy_task'
For example, place the backup service permanently into the slice:
sudo systemctl edit backup.service
Add:
[Service]
Slice=background-batch.slice
CPUQuota=30%
Next, apply the changes:
sudo systemctl daemon-reload
sudo systemctl restart backup.service
To verify CPU controls, you can run the command below:
systemctl show backup.service -p CPUQuotaPerSecUSec,CPUWeight,Slice,MemoryHigh,MemoryMax
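You can also watch slices, scopes, and services consume CPU, memory, and I/O in real time with systemd-cgtop, which lists the cgroup hierarchy sorted by usage:
systemd-cgtop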
2. Memory QoS (Avoid Swap Floods and OOM)
At this step, you can prevent memory-hungry apps from using up all RAM and crashing your VPS. By setting soft and hard memory limits for each service, you can protect your system from slowdowns or out-of-memory errors.
Example for a heavy app:
sudo systemctl edit myapp.service
Add the following lines to the file:
[Service]
MemoryHigh=2G
MemoryMax=3G
- MemoryHigh=2G: Soft limit; above this point, the kernel throttles the service and reclaims its memory aggressively.
- MemoryMax=3G: Hard cap; the service is never allowed to exceed this (the kernel OOM-kills it if it tries).
This prevents one process from crashing the entire server by consuming all available memory.
Apply and verify the changes:
sudo systemctl daemon-reload
sudo systemctl restart myapp.service
systemctl show myapp.service -p MemoryHigh,MemoryMax
You can check whether CPU, I/O, or memory is under pressure with the kernel's PSI (Pressure Stall Information) files:
cat /proc/pressure/cpu
cat /proc/pressure/io
cat /proc/pressure/memory
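Each pressure file prints output similar to the lines below (the values here are illustrative). The avg10, avg60, and avg300 fields show the percentage of time tasks were stalled over the last 10, 60, and 300 seconds; "some" means at least one task was stalled, while "full" (reported for io and memory) means all tasks were stalled at once:
some avg10=0.00 avg60=0.12 avg300=0.05 total=1204567
full avg10=0.00 avg60=0.03 avg300=0.01 total=450210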
3. Disk I/O QoS (cgroups v2 via systemd)
Heavy disk usage can make your VPS unresponsive. In this step, you can limit the read/write speed or number of operations per second for services. This ensures background jobs like backups don’t overwhelm your storage.
First, identify the block device from the prerequisites step, for example, /dev/vda. You can limit read/write speeds (bandwidth) or IOPS (operations per second) per service.
sudo systemctl edit backup.service
Add the following content. Choose one set or mix carefully:
[Service]
# Bandwidth caps:
IOReadBandwidthMax=/dev/vda 10M
IOWriteBandwidthMax=/dev/vda 10M
# OR IOPS caps (useful for SSD/NVMe):
# IOReadIOPSMax=/dev/vda 800
# IOWriteIOPSMax=/dev/vda 600
# Relative I/O weight (1..10000); lower => less priority:
IOWeight=50
Tip: If your root FS is on a partition (e.g., /dev/vda2), use that device path in the directives.
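If you are unsure which device backs your root filesystem, findmnt shows the source partition and lsblk shows its parent disk (the /dev/vda2 path below is just an example):
findmnt -no SOURCE /
lsblk -no PKNAME /dev/vda2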
Then, apply and verify the changes:
sudo systemctl daemon-reload
sudo systemctl restart backup.service
systemctl show backup.service -p IOReadBandwidthMax,IOWriteBandwidthMax,IOWeight
4. Network QoS (Control Traffic)
At this point, you can organize your network traffic. Important services like SSH and HTTPS always get priority. Both outgoing and incoming traffic can be shaped to keep your VPS responsive.
We will keep latency low with fq_codel or CAKE, guarantee fair share for SSH/HTTP(S), and put backups and uploads in a “bulk” class.
Note: Replace 100mbit with your actual VM uplink.
- Egress shaping with HTB + fq_codel:
Load modules with the command below:
sudo modprobe -a sch_htb sch_fq_codel cls_u32
Root HTB with a default “bulk” class (30):
sudo tc qdisc replace dev "$IF" root handle 1: htb default 30
sudo tc class replace dev "$IF" parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
High-priority class for SSH/HTTPS responses:
sudo tc class replace dev "$IF" parent 1:1 classid 1:10 htb rate 10mbit ceil 100mbit prio 0
sudo tc qdisc replace dev "$IF" parent 1:10 fq_codel
Bulk class for large transfers:
sudo tc class replace dev "$IF" parent 1:1 classid 1:30 htb rate 5mbit ceil 50mbit prio 7
sudo tc qdisc replace dev "$IF" parent 1:30 fq_codel
Classify egress by source ports (services on your VPS). SSH (22), HTTPS (443), and HTTP (80) get high priority:
sudo tc filter add dev "$IF" protocol ip parent 1: prio 1 u32 \
match ip sport 22 0xffff flowid 1:10
sudo tc filter add dev "$IF" protocol ip parent 1: prio 1 u32 \
match ip sport 443 0xffff flowid 1:10
sudo tc filter add dev "$IF" protocol ip parent 1: prio 1 u32 \
match ip sport 80 0xffff flowid 1:10
Backups (rsync port 873) with low priority (bulk):
sudo tc filter add dev "$IF" protocol ip parent 1: prio 5 u32 \
match ip sport 873 0xffff flowid 1:30
Show stats (packets or bytes per class):
tc -s class show dev "$IF"
tc -s qdisc show dev "$IF"
Note: If your workload sends backups over SSH (port 22), simple port matching can’t distinguish them from interactive SSH. In that case, see the nftables marking step below.
- Ingress shaping (IFB mirror):
To shape incoming traffic (downloads to your VPS), redirect ingress to an IFB device, and apply HTB.
Load modules:
sudo modprobe -a ifb sch_htb sch_fq_codel cls_u32 act_mirred
sudo ip link add ifb0 type ifb
sudo ip link set dev ifb0 up
Attach ingress qdisc on $IF and redirect to ifb0:
sudo tc qdisc add dev "$IF" handle ffff: ingress
sudo tc filter add dev "$IF" parent ffff: protocol ip prio 10 u32 \
match u32 0 0 action mirred egress redirect dev ifb0
Shape on ifb0:
sudo tc qdisc replace dev ifb0 root handle 2: htb default 20
sudo tc class replace dev ifb0 parent 2: classid 2:1 htb rate 100mbit ceil 100mbit
sudo tc class replace dev ifb0 parent 2:1 classid 2:10 htb rate 20mbit ceil 100mbit # prio
sudo tc class replace dev ifb0 parent 2:1 classid 2:20 htb rate 10mbit ceil 50mbit # bulk
sudo tc qdisc replace dev ifb0 parent 2:10 fq_codel
sudo tc qdisc replace dev ifb0 parent 2:20 fq_codel
Classify ingress by destination ports (services on your VPS):
sudo tc filter add dev ifb0 protocol ip parent 2: prio 1 u32 \
match ip dport 22 0xffff flowid 2:10
sudo tc filter add dev ifb0 protocol ip parent 2: prio 1 u32 \
match ip dport 443 0xffff flowid 2:10
sudo tc filter add dev ifb0 protocol ip parent 2: prio 5 u32 \
match ip dport 873 0xffff flowid 2:20
Display stats:
tc -s class show dev ifb0
- Mark “bulk” connections with nftables (optional):
If backups also use SSH (port 22), you can use nftables to mark flows (by IP, app, or user) and then classify them separately.
Create a dedicated table and chain (the chain specification is quoted so the shell doesn’t interpret the braces and semicolon):
sudo nft add table inet qos
sudo nft add chain inet qos postrouting '{ type filter hook postrouting priority mangle; }'
For example, mark all traffic going to your backup host as bulk (mark=0x2). Note that tc’s fw classifier matches the packet mark, so the rule sets meta mark rather than ct mark:
BACKUP_IP=203.0.113.50
sudo nft add rule inet qos postrouting ip daddr $BACKUP_IP meta mark set 0x2
In tc, send fwmark 0x2 to bulk class:
sudo tc filter add dev "$IF" parent 1: protocol ip prio 2 handle 0x2 fw flowid 1:30
Note: You can refine the nft rules further, for example by matching the owning UID via meta skuid or specific application ports, as in the sketch below.
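As a minimal sketch, assuming your backup jobs run as a dedicated user with UID 1001 (adjust to your setup), the following rule marks all locally generated traffic from that user as bulk:
sudo nft add rule inet qos postrouting meta skuid 1001 meta mark set 0x2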
- CAKE (If Available): A modern replacement for fq_codel+HTB. Often easier and smarter for fairness.
sudo modprobe sch_cake || echo "CAKE not available; keep fq_codel/HTB."
Egress only, which is simple and fair across flows:
sudo tc qdisc replace dev "$IF" root cake bandwidth 100mbit besteffort
Ingress, which still needs IFB. You must do the IFB redirect, then run CAKE on ifb0:
sudo tc qdisc replace dev ifb0 root cake bandwidth 100mbit besteffort
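You can confirm CAKE took effect the same way as before; the qdisc line should now show cake with the configured bandwidth (the second command applies only if you set up the IFB mirror):
tc -s qdisc show dev "$IF"
tc -s qdisc show dev ifb0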
Make Network QoS Persistent (systemd service)
Network shaping rules normally reset after a reboot. You can create a script and a systemd service so your network QoS settings automatically apply every time your VPS starts.
First, create a script “/usr/local/sbin/qos-net.sh” that sets up all tc rules:
sudo install -m 755 -d /usr/local/sbin
sudo tee /usr/local/sbin/qos-net.sh >/dev/null <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
IF=$(ip route get 1.1.1.1 | awk '{for(i=1;i<=NF;i++) if ($i=="dev"){print $(i+1); exit}}')
# Stop mode: remove all shaping and exit (used by the unit's ExecStop)
if [ "${1:-start}" = "stop" ]; then
  tc qdisc del dev "$IF" root 2>/dev/null || true
  tc qdisc del dev "$IF" ingress 2>/dev/null || true
  ip link del ifb0 2>/dev/null || true
  exit 0
fi
# Modules (-a loads several modules in one call)
modprobe -a sch_htb sch_fq_codel cls_u32 || true
modprobe -a ifb act_mirred || true
# Clean previous
tc qdisc del dev "$IF" root 2>/dev/null || true
tc qdisc del dev "$IF" ingress 2>/dev/null || true
ip link del ifb0 2>/dev/null || true
# Build fresh (HTB + fq_codel + IFB ingress)
tc qdisc replace dev "$IF" root handle 1: htb default 30
tc class replace dev "$IF" parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
tc class replace dev "$IF" parent 1:1 classid 1:10 htb rate 10mbit ceil 100mbit prio 0
tc qdisc replace dev "$IF" parent 1:10 fq_codel
tc class replace dev "$IF" parent 1:1 classid 1:30 htb rate 5mbit ceil 50mbit prio 7
tc qdisc replace dev "$IF" parent 1:30 fq_codel
tc filter add dev "$IF" protocol ip parent 1: prio 1 u32 match ip sport 22 0xffff flowid 1:10
tc filter add dev "$IF" protocol ip parent 1: prio 1 u32 match ip sport 443 0xffff flowid 1:10
tc filter add dev "$IF" protocol ip parent 1: prio 1 u32 match ip sport 80 0xffff flowid 1:10
tc filter add dev "$IF" protocol ip parent 1: prio 5 u32 match ip sport 873 0xffff flowid 1:30
ip link add ifb0 type ifb
ip link set dev ifb0 up
tc qdisc add dev "$IF" handle ffff: ingress
tc filter add dev "$IF" parent ffff: protocol ip prio 10 u32 match u32 0 0 \
action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root handle 2: htb default 20
tc class replace dev ifb0 parent 2: classid 2:1 htb rate 100mbit ceil 100mbit
tc class replace dev ifb0 parent 2:1 classid 2:10 htb rate 20mbit ceil 100mbit
tc class replace dev ifb0 parent 2:1 classid 2:20 htb rate 10mbit ceil 50mbit
tc qdisc replace dev ifb0 parent 2:10 fq_codel
tc qdisc replace dev ifb0 parent 2:20 fq_codel
tc filter add dev ifb0 protocol ip parent 2: prio 1 u32 match ip dport 22 0xffff flowid 2:10
tc filter add dev ifb0 protocol ip parent 2: prio 1 u32 match ip dport 443 0xffff flowid 2:10
tc filter add dev ifb0 protocol ip parent 2: prio 5 u32 match ip dport 873 0xffff flowid 2:20
EOF
Make the script executable:
sudo chmod +x /usr/local/sbin/qos-net.sh
Then, add a systemd service “qos-net.service” so these rules apply automatically at boot:
sudo tee /etc/systemd/system/qos-net.service >/dev/null <<'EOF'
[Unit]
Description=VPS Network QoS (HTB + fq_codel + IFB)
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/qos-net.sh
ExecStop=/usr/local/sbin/qos-net.sh stop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Apply and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable --now qos-net.service
You can verify the settings after reboot:
tc -s qdisc show dev "$IF"
tc -s class show dev "$IF"
tc -s class show dev ifb0
Hardware QoS VPS End-to-End Example for a Backup Service
This is an example that combines everything, including CPU, memory, disk, and network limits applied to a backup service. The goal is to keep your VPS fast and responsive for users, even while heavy backup tasks run in the background.
The backup service is placed in the background slice with:
- CPUQuota=30%
- Disk capped at 10 MB/s
- Memory limited to 2 GB
sudo systemctl edit backup.service
Add the following content:
[Service]
Slice=background-batch.slice
CPUQuota=30%
IOReadBandwidthMax=/dev/vda 10M
IOWriteBandwidthMax=/dev/vda 10M
MemoryHigh=1G
MemoryMax=2G
Then, apply the changes:
sudo systemctl daemon-reload
sudo systemctl restart backup.service
Next, apply the network shaping from above (qos-net.service), with SSH/HTTPS in the high-priority class and rsync (873) in bulk.
Monitoring during a live backup:
# CPU & per-process
pidstat 1
# Disk stats
iostat -x 1
# Network class usage
tc -s class show dev "$IF" | egrep '1:10|1:30'
tc -s class show dev ifb0 | egrep '2:10|2:20'
Monitor and Troubleshoot QoS VPS
Once QoS is in place, you need to monitor it and troubleshoot issues. This step gives you tools and commands to see which processes use the most resources, check pressure on CPU, memory, and disk, and verify that QoS is working correctly.
Check CPU hogs with:
top -H -o %CPU
pidstat -u -t 1
Check disk loads with:
iostat -x 1
cat /proc/pressure/io
Check Memory usage with:
free -h
cat /proc/pressure/memory
List active connections with:
ss -tnp | head
Display packet or byte counts per QoS class with:
tc -s qdisc show dev "$IF"
tc -s class show dev "$IF"
tc -s class show dev ifb0
Troubleshoot common problems for QoS VPS:
- NIC name differs (ens3, enp1s0 vs eth0). Always set $IF programmatically as shown in the guide steps.
- ionice has no effect: expected on NVMe with “none/mq-deadline” schedulers. Use cgroup I/O caps instead.
- If CAKE is not available, keep HTB + fq_codel.
- LXC/containers: tc/IFB may not be permitted inside a container; in that case you need host-level QoS. You can check with the probe below.
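A minimal probe, assuming no interface named qostest already exists: if creating and deleting a throwaway IFB device succeeds, IFB-based ingress shaping is permitted in your environment:
sudo modprobe ifb 2>/dev/null || true
sudo ip link add qostest type ifb && sudo ip link del qostest \
  && echo "IFB available" || echo "IFB blocked in this environment"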
QoS VPS Rollback and Cleanup
If you want to reset your QoS changes, this step shows how to remove network shaping rules and reset service limits back to defaults. It’s your “reset button” in case something goes wrong or you want to start fresh.
To restore default settings, you must disable and remove network QoS:
sudo systemctl disable --now qos-net.service
sudo tc qdisc del dev "$IF" root 2>/dev/null || true
sudo tc qdisc del dev "$IF" ingress 2>/dev/null || true
sudo ip link del ifb0 2>/dev/null || true
Then, reset service limits, which removes all custom limits and restores defaults:
sudo systemctl revert backup.service
Or clear a specific property by setting it to an empty value (add --runtime if you want the change to last only until reboot):
sudo systemctl set-property backup.service CPUQuota=
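To confirm the limits are gone, check that the drop-in files were removed and the properties are back to their defaults:
systemctl cat backup.service
systemctl show backup.service -p MemoryMax,Slice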
FAQs
What is Quality of Service (QoS) in a VPS?
QoS is the process of managing how your VPS resources (CPU, memory, disk, and network) are used, so one heavy task doesn’t slow down everything else on the server.
Why is QoS important for VPS hosting?
Without QoS, backups, updates, or large uploads can cause downtime or slow response for important services like websites or databases. QoS keeps your VPS responsive under load.
Can I fully control QoS on my VPS?
You can control resource usage inside your VPS (guest OS), but you cannot change how the hosting provider allocates physical resources on the hypervisor.
What tools are commonly used for VPS QoS?
– systemd/cgroups v2: for CPU, memory, and disk control.
– tc (traffic control): for network shaping and prioritization.
– ionice/nice: for quick priority adjustments.
Conclusion
Implementing Quality of Service (QoS) at the hardware level helps you make the most of your VPS. You have now learned how to control CPU, memory, disk, and network usage so that heavy jobs like backups don’t slow down important services like SSH or websites: systemd and cgroups v2 handle CPU, memory, and disk, while tc handles the network. We hope you enjoyed this guide on “Implement QoS at Hardware Level for VPS”.
Subscribe to our X and Facebook channels to get the latest articles and updates.
Our hosting environment fully supports advanced QoS techniques, so you can fine-tune resources to match your business needs. If you’re looking for a VPS provider that gives you both power and control, check out our flexible VPS plans.
For further reading:
Migrate to Managed VPS without Downtime