High-Performance Reverse Proxy Setup with HAProxy
HAProxy is one of the most powerful open-source TCP/HTTP load balancers and reverse proxy solutions. In this guide, you will learn to install, configure, and optimize HAProxy as a high-performance reverse proxy. If you are running your websites or applications on a VPS or dedicated server from PerLod Hosting, HAProxy is a great way to put a fast and stable reverse proxy in front of your services.
HAProxy operates at both Layer 4 (TCP) and Layer 7 (HTTP) of the OSI model, which gives you a high degree of flexibility in traffic management.
- TCP Mode: In this mode, HAProxy functions as a transparent TCP proxy, forwarding packets between clients and servers without inspecting packet contents.
- HTTP Mode: When configured in HTTP mode, HAProxy gains full visibility into HTTP messages, enabling advanced features including URL-based routing, header manipulation, cookie insertion, content switching, SSL termination, and session persistence.
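As a quick illustration, here is a minimal sketch of both modes in one configuration fragment. The IPs, ports, and backend names are placeholders, and the global and defaults sections shown later in this guide are still required:
# Layer 4 (TCP) mode: forward MySQL traffic without inspecting it
frontend mysql_front
    mode tcp
    bind *:3306
    default_backend mysql_servers

backend mysql_servers
    mode tcp
    server db1 192.168.1.50:3306 check

# Layer 7 (HTTP) mode: route requests based on the Host header
frontend web_front
    mode http
    bind *:80
    acl is_api hdr(host) -i api.example.com
    use_backend api_servers if is_api
    default_backend web_servers

backend web_servers
    mode http
    server web1 192.168.1.10:80 check

backend api_servers
    mode http
    server api1 192.168.1.20:8080 check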
Now proceed to the following steps to complete the High-Performance Reverse Proxy Setup with HAProxy.
Requirements for High-Performance Reverse Proxy Setup with HAProxy
Before you start, make sure your server and network match these requirements:
- A VPS or dedicated server with root or sudo access and supported OS such as Ubuntu 22.04/24.04 or a compatible RHEL-based distro.
- Open inbound ports including 80/TCP (HTTP) and 443/TCP (HTTPS). Also, you can open 8404/TCP for the HAProxy stats page, which must be restricted by IP or authentication.
- Allow outbound access from HAProxy to your backend servers, for example, 192.168.1.10:80, 192.168.1.20:8080, etc.
- If you use HTTPS, you also need DNS and SSL in place:
- A domain pointing to your HAProxy public IP.
- If you use certbot --standalone, port 80 must be reachable for HTTP-01 validation.
- At least one backend web server or app that is reachable from HAProxy.
- If you enable health checks like /health, make sure your app returns HTTP 200 on that endpoint.
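For example, you can confirm from the HAProxy host that a backend and its health endpoint respond as expected (adjust the IP and path to your environment):
# Check that the backend answers on port 80
curl -I http://192.168.1.10/
# Check that the health endpoint returns HTTP 200
curl -i http://192.168.1.10/health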
Tip: If you need a high-performance VPS server, you can check PerLod, which offers flexible plans for your needs.
Install HAProxy as a High-Performance Reverse Proxy
HAProxy is available in the official Ubuntu repositories. For production deployments, you can use the PPA repository to access the latest version.
To install the latest HAProxy on Ubuntu 24.04 or 22.04, run the system update and install the software properties management tool with the commands below:
sudo apt update && sudo apt upgrade -y
sudo apt install --no-install-recommends software-properties-common -y
Add HAProxy 3.2 LTS PPA repository with the command below:
sudo add-apt-repository ppa:vbernat/haproxy-3.2 -y
Run the system update and install HAProxy 3.2 LTS version:
sudo apt update
sudo apt install haproxy=3.2.* -y
You can also use the default repository to install HAProxy:
sudo apt update
sudo apt install haproxy -y
Verify the installation by checking its version:
haproxy -v
On Red Hat-based distributions, HAProxy is available through default repositories via the DNF or YUM package manager.
Install HAProxy on CentOS 8/9 or RHEL 8/9 by using the commands below:
sudo dnf update -y
sudo dnf install haproxy -y
Verify the installation:
haproxy -v
Check installed package details with the command below:
rpm -qi haproxy
The installation creates the HAProxy user and group, establishes the configuration directory at /etc/haproxy/, and sets up systemd service management.
Once your installation is completed, enable and start the HAProxy service with the commands below:
sudo systemctl enable haproxy
sudo systemctl start haproxy
To verify the HAProxy service is active and running, use the command below:
sudo systemctl status haproxy
Basic HAProxy Reverse Proxy Configuration
The main HAProxy configuration file is located at /etc/haproxy/haproxy.cfg. In this step, we want to show you a minimal configuration for a reverse proxy setup with two backend web servers.
Before making changes, you must create a backup of the default configuration:
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.backup
Then, edit the HAProxy main config file with the command below:
sudo nano /etc/haproxy/haproxy.cfg
For a minimal reverse proxy configuration, you can edit the file as shown below:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# Run HAProxy as unprivileged user for security
user haproxy
group haproxy
# Run as background daemon process
daemon
# Maximum concurrent connections (adjust based on system resources)
maxconn 4096
# Logging configuration - send logs to local syslog
log /dev/log local0
log /dev/log local1 notice
# Chroot to restricted directory for enhanced security
chroot /var/lib/haproxy
# Create stats socket for runtime management
stats socket /var/run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
# SSL/TLS configuration for modern security
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
# Diffie-Hellman parameter size for SSL
tune.ssl.default-dh-param 2048
#---------------------------------------------------------------------
# Common defaults for all proxy sections
#---------------------------------------------------------------------
defaults
# Operating mode - http enables Layer 7 features
mode http
# Inherit logging configuration from global section
log global
# Enable detailed HTTP request logging
option httplog
# Do not log null connections (health checks, monitoring)
option dontlognull
# Enable HTTP connection close on backend side
option http-server-close
# Add X-Forwarded-For header with client IP address
option forwardfor except 127.0.0.0/8
# Enable redistribution of requests on server failures
option redispatch
# Number of connection retry attempts
retries 3
# Timeout settings for optimal performance and security
timeout connect 10s # Time to establish backend connection
timeout client 1m # Maximum client inactivity time
timeout server 1m # Maximum server response time
timeout http-request 10s # Maximum time to receive complete HTTP request
timeout http-keep-alive 10s # HTTP keep-alive timeout
timeout check 10s # Health check timeout
timeout queue 1m # Maximum time in queue when all servers busy
#---------------------------------------------------------------------
# Statistics dashboard (optional but recommended)
#---------------------------------------------------------------------
frontend stats
# Listen on port 8404 for statistics page
bind *:8404
# Enable statistics module
stats enable
# Set statistics URI path
stats uri /stats
# Auto-refresh interval
stats refresh 30s
# Restrict admin access to localhost
acl allowed_network src 127.0.0.1 192.168.1.0/24
stats admin if allowed_network
# Hide HAProxy version for security
stats hide-version
# Set statistics page title
stats realm HAProxy\ Statistics
#---------------------------------------------------------------------
# Frontend configuration - Client facing proxy
#---------------------------------------------------------------------
frontend http_front
# Listen on all interfaces, port 80
bind *:80
# Set maximum connections for this frontend
maxconn 2000
# Default backend for unmatched requests
default_backend web_servers
#---------------------------------------------------------------------
# Backend configuration - Pool of web servers
#---------------------------------------------------------------------
backend web_servers
# Load balancing algorithm
# Options: roundrobin, leastconn, source, uri, etc.
balance roundrobin
# Enable health checks on all servers
option httpchk GET /
# Expect HTTP 200 response from health checks
http-check expect status 200
# Server definitions
# Format: server <name> <ip>:<port> [options]
server web1 192.168.1.10:80 check inter 3s fall 3 rise 2
server web2 192.168.1.11:80 check inter 3s fall 3 rise 2
server web3 192.168.1.12:80 check inter 3s fall 3 rise 2
After the configuration setup, it is recommended to validate the syntax before restarting HAProxy. Validate configuration file syntax with the command below:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
You can use the command below to validate configs with warnings:
sudo haproxy -c -V -f /etc/haproxy/haproxy.cfg
In your output, you must see:
Configuration file is valid
If your HAProxy configuration is valid, you can restart it with the command below:
sudo systemctl restart haproxy
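After the restart, you can send a few test requests through the proxy to confirm it responds; the stats page (or the backend access logs) will show how they are distributed across the servers. Replace your-server-ip with the HAProxy address:
for i in $(seq 1 6); do
  curl -s -o /dev/null -w "%{http_code}\n" http://your-server-ip/
done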
SSL and TLS Termination Configuration for HAProxy
SSL/TLS termination means HAProxy handles the HTTPS encryption and decryption itself, which makes the whole system faster and lets you manage SSL certificates in one central place.
For testing or Let’s Encrypt integration, you can generate certificates using Certbot. Install Certbot with the commands below:
sudo apt install certbot -y # Ubuntu/Debian
sudo dnf install certbot -y # CentOS/RHEL
Get the certificates with the command below:
sudo certbot certonly --standalone -d example.com -d www.example.com
Certificate files are stored in the following locations:
/etc/letsencrypt/live/example.com/fullchain.pem
/etc/letsencrypt/live/example.com/privkey.pem
HAProxy requires certificates in PEM format with the certificate and private key concatenated into a single file. Create a directory for HAProxy certificates:
sudo mkdir -p /etc/haproxy/certs
Combine the certificate and private key with the following command:
sudo cat /etc/letsencrypt/live/example.com/fullchain.pem \
/etc/letsencrypt/live/example.com/privkey.pem | \
sudo tee /etc/haproxy/certs/example.com.pem
Set the correct permissions with the commands below:
sudo chmod 600 /etc/haproxy/certs/example.com.pem
sudo chown haproxy:haproxy /etc/haproxy/certs/example.com.pem
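Let's Encrypt certificates expire every 90 days, so it is a good idea to rebuild the combined PEM file and reload HAProxy on every renewal. A minimal sketch of a certbot deploy hook, assuming the paths used above, could look like this:
sudo nano /etc/letsencrypt/renewal-hooks/deploy/haproxy.sh
#!/bin/bash
# Rebuild the combined certificate and reload HAProxy after each renewal
DOMAIN="example.com"
cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem \
    /etc/letsencrypt/live/$DOMAIN/privkey.pem \
    > /etc/haproxy/certs/$DOMAIN.pem
chmod 600 /etc/haproxy/certs/$DOMAIN.pem
chown haproxy:haproxy /etc/haproxy/certs/$DOMAIN.pem
systemctl reload haproxy
Make the script executable with sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/haproxy.sh; certbot runs deploy hooks automatically after every successful renewal.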
Now, open the HAProxy config file and add the following settings after the global and defaults sections. You can place the HTTPS and HTTP frontends before your backend web_servers block:
#---------------------------------------------------------------------
# HTTPS Frontend with SSL Termination
#---------------------------------------------------------------------
frontend https_front
# Listen on port 443 with SSL enabled
bind *:443 ssl crt /etc/haproxy/certs/example.com.pem alpn h2,http/1.1
# Maximum connections for HTTPS frontend
maxconn 2000
# Add security headers
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
http-response set-header X-Frame-Options "SAMEORIGIN"
http-response set-header X-Content-Type-Options "nosniff"
http-response set-header X-XSS-Protection "1; mode=block"
http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
# Set header to indicate HTTPS was used
http-request set-header X-Forwarded-Proto https
# Default backend for HTTPS requests
default_backend web_servers
#---------------------------------------------------------------------
# HTTP Frontend - Redirect to HTTPS
#---------------------------------------------------------------------
frontend http_front
bind *:80
# Redirect all HTTP traffic to HTTPS
http-request redirect scheme https code 301 unless { ssl_fc }
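After reloading HAProxy, you can verify both the redirect and the TLS termination (replace example.com with your domain):
# Should return a 301 redirect to HTTPS
curl -I http://example.com/
# Should return the response over HTTPS together with the security headers
curl -I https://example.com/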
For hosting multiple domains, HAProxy supports Server Name Indication (SNI) to select appropriate certificates. Create a certificate bundle directory with the command below:
sudo mkdir -p /etc/haproxy/certs
Copy all domain certificates to the directory:
sudo cp domain1.pem /etc/haproxy/certs/
sudo cp domain2.pem /etc/haproxy/certs/
In the HAProxy main config file, update frontend configuration:
frontend https_front
bind *:443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
HAProxy automatically selects the appropriate certificate based on the SNI value sent by the client.
HAProxy Load Balancing Algorithms
HAProxy supports multiple load balancing algorithms, which are optimized for different traffic patterns.
1. Round Robin: Distributes requests sequentially across all servers, which gives each server an equal traffic share:
backend web_servers
balance roundrobin
server web1 192.168.1.10:80 check
server web2 192.168.1.11:80 check
server web3 192.168.1.12:80 check
2. Least Connections: Routes requests to the server with the fewest active connections. It is best for backends with varying request processing times or long-lived connections:
backend app_servers
balance leastconn
server app1 192.168.1.20:8080 check
server app2 192.168.1.21:8080 check
server app3 192.168.1.22:8080 check
3. Source IP Hash: Routes requests from the same client IP to the same backend server, which provides simple session persistence without cookies:
backend persistent_servers
balance source
server srv1 192.168.1.30:80 check
server srv2 192.168.1.31:80 check
4. Weighted Distribution: It tells HAProxy to send more traffic to stronger servers and less traffic to weaker ones, based on the weight you set.
backend weighted_servers
balance roundrobin
# High-capacity server receives 3x traffic
server powerful1 192.168.1.40:80 check weight 300
# Standard servers receive 1x traffic
server standard1 192.168.1.41:80 check weight 100
server standard2 192.168.1.42:80 check weight 100
HAProxy Health Checks
Health checks ensure traffic is routed only to operational servers, which maintains high availability.
1. TCP Health Checks: Basic TCP connection check, which verifies the server accepts connections on the specified port:
backend tcp_servers
mode tcp
balance roundrobin
server srv1 192.168.1.50:3306 check
server srv2 192.168.1.51:3306 check
2. HTTP Health Checks: Advanced health checks using HTTP requests with expected response validation:
backend http_servers
balance leastconn
# Enable HTTP health checks
option httpchk GET /health
# Expect HTTP 200 status code
http-check expect status 200
# Servers with health check configuration
server web1 192.168.1.60:80 check inter 5s fall 3 rise 2
server web2 192.168.1.61:80 check inter 5s fall 3 rise 2
3. Advanced HTTP Health Checks: Multi-step health checks with custom headers and response validation:
backend advanced_health
balance roundrobin
option httpchk
# Send custom HTTP health check request
http-check send meth GET uri /api/health ver HTTP/1.1 hdr Host www.example.com
# Expect specific status codes
http-check expect status 200-299
# Validate response contains specific string
http-check expect string "healthy"
server api1 192.168.1.70:8080 check
server api2 192.168.1.71:8080 check
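To confirm what the health check will see, you can reproduce the same request manually from the HAProxy host (the IP, port, and path are the example values from above):
curl -i -H "Host: www.example.com" http://192.168.1.70:8080/api/health
The response should have a 2xx status code and contain the string "healthy"; otherwise HAProxy marks the server as DOWN.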
HAProxy Logging Configuration
Logging is how you record what HAProxy is doing, so you can monitor traffic, troubleshoot problems faster, and review events for security checks.
In this step, you can configure your system’s logging service (rsyslog) to collect HAProxy logs into dedicated files, then enable logging inside HAProxy, and finally set up log rotation so the log files don’t fill up your disk.
Create rsyslog configuration for HAProxy with the command below:
sudo nano /etc/rsyslog.d/49-haproxy.conf
Add the following configuration:
# Enable UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
$UDPServerAddress 127.0.0.1
# HAProxy logging template
$template Haproxy,"%msg%\n"
# Log all HAProxy messages to dedicated file
local0.* -/var/log/haproxy/haproxy.log;Haproxy
# Log only notice level and above to admin log
local0.notice -/var/log/haproxy/haproxy-admin.log;Haproxy
# Don't send HAProxy logs to other log files
& stop
Create the HAProxy log directory and set appropriate permissions with the commands below:
sudo mkdir -p /var/log/haproxy
sudo chown syslog:adm /var/log/haproxy
Restart the rsyslog service and verify that rsyslog is listening:
sudo systemctl restart rsyslog
sudo netstat -ulnp | grep 514
Ensure HAProxy configuration includes logging directives with this config:
global
log 127.0.0.1:514 local0
log 127.0.0.1:514 local1 notice
defaults
log global
option httplog
After restarting HAProxy, logs appear in /var/log/haproxy/haproxy.log.
Create a logrotate configuration with the command below:
sudo nano /etc/logrotate.d/haproxy
Add rotation policy to the file:
/var/log/haproxy/*.log {
daily
rotate 14
missingok
notifempty
compress
delaycompress
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
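You can test the rotation policy without touching the real log files by running logrotate in debug mode, which only prints what it would do:
sudo logrotate --debug /etc/logrotate.d/haproxy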
HAProxy Smart Routing and Blocking Rules: Access Control Lists (ACLs)
ACL rules in HAProxy let you match on things like the URL path, the domain name (Host header), the visitor's IP, or the User-Agent, and then route the request to the right backend or block it if needed.
You can define an ACL, then use it with actions like use_backend (route) or http-request deny (block).
Basic ACL Syntax: ACLs consist of a name and a matching condition:
# ACL format: acl <name> <criterion> [flags] <value>
# Check if request path starts with /api
acl is_api path_beg /api
# Check if Host header matches domain
acl is_domain hdr(host) -i example.com
# Check if client IP is in specific range
acl internal_network src 192.168.1.0/24
Common ACL Use Cases include:
1. Path-Based Routing:
frontend http_front
bind *:80
# Define ACLs for different paths
acl url_static path_beg /static /images /css /js
acl url_api path_beg /api
acl url_admin path_beg /admin
# Route to different backends based on path
use_backend static_servers if url_static
use_backend api_servers if url_api
use_backend admin_servers if url_admin
# Default backend for unmatched requests
default_backend web_servers
2. IP-Based Access Control:
frontend secure_front
bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
# Define allowed IP ranges
acl allowed_ips src 192.168.1.0/24 10.0.0.0/8
# Define admin paths
acl admin_path path_beg /admin
# Block admin access from outside allowed IPs
http-request deny if admin_path !allowed_ips
default_backend web_servers
3. Host Header-Based Routing:
frontend http_front
bind *:80
# Define ACLs for different domains
acl host_app1 hdr(host) -i app1.example.com
acl host_app2 hdr(host) -i app2.example.com
acl host_api hdr(host) -i api.example.com
# Route to different backends
use_backend app1_servers if host_app1
use_backend app2_servers if host_app2
use_backend api_servers if host_api
default_backend web_servers
4. User-Agent Filtering:
frontend http_front
bind *:80
# Block requests from known bad bots
acl bad_bot hdr_sub(user-agent) -i bot crawler scraper
# Deny bad bot requests
http-request deny if bad_bot
default_backend web_servers
HAProxy Session Persistence (Sticky Sessions)
Session persistence means that once a user is sent to a specific backend server, HAProxy will keep sending that same user back to the same server for future requests. This is important for apps that store user state on one server, because switching servers can log users out.
Cookie-based persistence: In this method, HAProxy adds a small cookie (like SERVERID) to the user’s browser so it can remember which backend server handled the first request.
backend web_servers
balance roundrobin
# Insert session cookie
cookie SERVERID insert indirect nocache
# Servers with cookie identifiers
server web1 192.168.1.10:80 check cookie web1
server web2 192.168.1.11:80 check cookie web2
server web3 192.168.1.12:80 check cookie web3
Cookie Options:
- insert: HAProxy inserts the cookie.
- indirect: Cookie not sent to backend servers.
- nocache: Prevents caching of responses with cookies.
IP-based persistence (stick tables): In this method, HAProxy remembers the client by IP address and keeps routing that IP to the same backend server, without using cookies. Stick tables are basically an in-memory table that HAProxy uses to store this mapping.
backend web_servers
balance roundrobin
# Create stick table for IP-based persistence
stick-table type ip size 1m expire 30m
# Track client source IP
stick on src
# Backend servers
server web1 192.168.1.10:80 check
server web2 192.168.1.11:80 check
server web3 192.168.1.12:80 check
Stick Table Parameters:
- type ip: Use IPv4 addresses as keys.
- size 1m: Store up to 1,048,576 entries.
- expire 30m: Entries expire after 30 minutes of inactivity.
- stick on src: Use client source IP as tracking key.
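Assuming the stats socket configured earlier, you can inspect the stick table at runtime to see which client IPs are currently pinned to which server:
echo "show table web_servers" | sudo socat stdio /var/run/haproxy/admin.sock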
Rate Limiting and DDoS Protection in HAProxy
Rate limiting protects backend servers from abuse and DoS attacks. It’s a simple but very effective way to reduce abuse and slow down basic DoS/DDoS-style traffic floods.
Basic rate limiting (per IP): Here, HAProxy tracks each client IP and counts how many HTTP requests it makes in the last 10 seconds. If an IP goes above your limit, HAProxy blocks it with HTTP 429 Too Many Requests.
frontend http_front
bind *:80
# Create stick table to track request rates
stick-table type ip size 1m expire 10s store http_req_rate(10s)
# Track client requests
http-request track-sc0 src
# Deny clients exceeding 20 requests per 10 seconds
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
default_backend web_servers
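A quick way to verify the limit is to fire more than 20 requests within 10 seconds from one client and count the response codes; the later requests should return 429 (replace your-server-ip accordingly):
for i in $(seq 1 30); do curl -s -o /dev/null -w "%{http_code}\n" http://your-server-ip/; done | sort | uniq -c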
Advanced rate limiting (multiple counters): HAProxy tracks several signals at once. If a client looks abusive in any of these metrics, HAProxy flags it and blocks it.
frontend http_front
bind *:80
# Stick table with multiple counters
stick-table type ip size 200k expire 30s store gpc0,conn_cur,conn_rate(10s),http_req_rate(10s),bytes_out_rate(30s),http_err_rate(10s)
# Track client activity
http-request track-sc1 src
# Define abuse detection ACLs
acl conn_rate_abuse sc1_conn_rate gt 20
acl req_rate_abuse sc1_http_req_rate gt 50
acl err_rate_abuse sc1_http_err_rate gt 20
acl data_rate_abuse sc1_bytes_out_rate gt 20000000
# Mark abuser flag
acl mark_as_abuser sc1_inc_gpc0 gt 0
# Deny abusive clients
http-request deny deny_status 429 if mark_as_abuser req_rate_abuse or conn_rate_abuse or err_rate_abuse or data_rate_abuse
default_backend web_servers
Tarpit (slow down instead of block): A tarpit doesn’t instantly reject the attacker; it intentionally slows them down. This wastes the attacker’s resources and reduces pressure on your backends, while still letting real users through more easily.
backend slow_down
mode http
timeout tarpit 5s
http-request tarpit
errorfile 500 /etc/haproxy/errors/429.http
You can create the custom error file /etc/haproxy/errors/429.http with the following content. Note the blank line between the headers and the body, which is required because HAProxy sends the file as a raw HTTP response:
HTTP/1.1 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/plain
Retry-After: 5

Too Many Requests. Please retry after 5 seconds.
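The slow_down backend above is not referenced anywhere yet. As one possible wiring, reusing the slow_down and web_servers backends and the request-rate threshold from the earlier examples, a frontend could send flagged clients to the tarpit instead of denying them outright:
frontend http_front
    bind *:80
    # Track request rate per client IP
    stick-table type ip size 200k expire 30s store http_req_rate(10s)
    http-request track-sc1 src
    acl req_rate_abuse sc1_http_req_rate gt 50
    # Tarpit abusive clients, everyone else goes to the normal backend
    use_backend slow_down if req_rate_abuse
    default_backend web_servers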
HAProxy Statistics and Monitoring
HAProxy includes a built-in stats dashboard that shows live traffic and backend health, so you can quickly spot overload, failing servers, or unusual spikes. You can enable it by adding a small frontend stats block in your HAProxy config file.
Add statistics frontend to configuration:
frontend stats
bind *:8404
mode http
# Enable statistics
stats enable
# Set statistics URI
stats uri /stats
# Auto-refresh every 30 seconds
stats refresh 30s
# Restrict access to admin from specific IPs
acl allowed_network src 127.0.0.1 192.168.1.0/24
stats admin if allowed_network
# Hide HAProxy version for security
stats hide-version
# Set authentication (optional)
stats auth admin:SecurePassword123
# Page title
stats realm HAProxy\ Statistics
You can access the statistics page at:
http://your-server-ip:8404/stats
The statistics dashboard displays essential metrics, including:
- Session rate
- Session total
- Queue current
- Queue max
- Server status
- Health check status
- Active sessions
- Bytes transferred
- Response codes
Runtime API: The Runtime API lets you manage servers live using the admin socket without needing a full reload.
# Enable Runtime API socket
global
stats socket /var/run/haproxy/admin.sock mode 660 level admin
# Use socat to send commands
# Show backend server status
echo "show servers state" | sudo socat stdio /var/run/haproxy/admin.sock
# Disable server for maintenance
echo "disable server web_servers/web1" | sudo socat stdio /var/run/haproxy/admin.sock
# Enable server after maintenance
echo "enable server web_servers/web1" | sudo socat stdio /var/run/haproxy/admin.sock
# Set server to drain mode (finish existing connections, accept no new ones)
echo "set server web_servers/web1 state drain" | sudo socat stdio /var/run/haproxy/admin.sock
# View statistics
echo "show stat" | sudo socat stdio /var/run/haproxy/admin.sock
HAProxy Performance Tuning
Performance tuning means adjusting both the Linux system and HAProxy settings so they can handle lots of concurrent users without running out of resources or slowing down.
For system-level tuning, you can increase system limits for file descriptors:
sudo nano /etc/security/limits.conf
Add the following lines to the file:
haproxy soft nofile 65536
haproxy hard nofile 65536
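Note that /etc/security/limits.conf only applies to PAM login sessions; a service started by systemd does not read it. HAProxy normally sizes its own file-descriptor limit from maxconn, but if you need to raise the cap for the service explicitly, a systemd drop-in is one option:
sudo systemctl edit haproxy
Add the following lines to the drop-in file and save it:
[Service]
LimitNOFILE=65536
Then restart the service:
sudo systemctl restart haproxy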
Configure kernel parameters by editing the sysctl configuration:
sudo nano /etc/sysctl.conf
Add performance parameters to the file:
# Increase maximum number of connection tracking entries
net.netfilter.nf_conntrack_max = 262144
# Increase local port range
net.ipv4.ip_local_port_range = 1024 65535
# Enable TCP fast open
net.ipv4.tcp_fastopen = 3
# Increase maximum backlog
net.core.somaxconn = 4096
# Increase TCP max orphans
net.ipv4.tcp_max_orphans = 65536
# Increase max syn backlog
net.ipv4.tcp_max_syn_backlog = 20480
# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1
# Reduce TIME_WAIT sockets
net.ipv4.tcp_fin_timeout = 30
Apply changes with the command below:
sudo sysctl -p
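You can confirm that individual values were applied, for example:
sysctl net.core.somaxconn net.ipv4.tcp_fastopen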
HAProxy tuning (inside haproxy.cfg): You can tune HAProxy limits so it can actually accept the load:
global
# Maximum concurrent connections globally
# Formula: (RAM_GB * 1024 * 1024 * 1024) / (16 * 1024) / 2
maxconn 20000
# Number of threads (match CPU core count)
nbthread 4
defaults
# Maximum connections per frontend
maxconn 10000
backend web_servers
# Maximum connections per server
server web1 192.168.1.10:80 check maxconn 500
server web2 192.168.1.11:80 check maxconn 500
Connection pooling (HTTP keep-alive): Enabling keep-alive allows HAProxy to reuse existing HTTP connections instead of opening a new connection for every request. This reduces TCP handshake overhead, lowers backend connection churn, and usually improves throughput and latency:
defaults
option http-keep-alive
timeout http-keep-alive 10s
backend web_servers
option http-keep-alive
server web1 192.168.1.10:80 check
Security Best Practices for HAProxy Setup
At this point, you can follow these security best practices for HAProxy configuration.
1. Firewall Configuration: You can restrict access to HAProxy ports.
Allow HTTP/HTTPS traffic with the command below:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
Allow stats port only from a specific network:
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port protocol="tcp" port="8404" accept'
Then, reload the firewall to apply the changes:
sudo firewall-cmd --reload
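The commands above use firewalld, which is common on RHEL-based systems. If your Ubuntu server uses UFW instead, the equivalent rules might look like this:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow from 192.168.1.0/24 to any port 8404 proto tcp
sudo ufw reload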
2. Security Headers: You can add security headers to all responses in the HAProxy config file:
frontend https_front
bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
# Security headers
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
http-response set-header X-Frame-Options "DENY"
http-response set-header X-Content-Type-Options "nosniff"
http-response set-header X-XSS-Protection "1; mode=block"
http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
http-response set-header Permissions-Policy "geolocation=(), microphone=(), camera=()"
default_backend web_servers
3. Restricting Access to Sensitive Paths: Implement path-based access restrictions in the HAProxy config file:
frontend http_front
bind *:80
# Define restricted paths
acl admin_path path_beg /admin
acl api_path path_beg /api
# Define allowed networks
acl internal_network src 192.168.1.0/24 10.0.0.0/8
# Block external access to admin
http-request deny if admin_path !internal_network
default_backend web_servers
Complete Production HAProxy Configuration Example
Here is a full sample HAProxy configuration that combines the most common production best practices in one place:
#---------------------------------------------------------------------
# Global settings - Process-wide configuration
#---------------------------------------------------------------------
global
# Security: Run as unprivileged user
user haproxy
group haproxy
# Process management
daemon
maxconn 20000
nbthread 4
# Logging
log /dev/log local0
log /dev/log local1 notice
# Security hardening
chroot /var/lib/haproxy
# Runtime API and statistics
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
# SSL/TLS configuration
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
ssl-default-server-options ssl-min-ver TLSv1.2
tune.ssl.default-dh-param 2048
#---------------------------------------------------------------------
# Default settings for all sections
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
maxconn 10000
# Timeout configuration
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-request 10s
timeout http-keep-alive 10s
timeout check 10s
timeout queue 1m
timeout tunnel 1h
# Error files
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
#---------------------------------------------------------------------
# Statistics dashboard
#---------------------------------------------------------------------
frontend stats
bind *:8404
mode http
stats enable
stats uri /stats
stats refresh 30s
stats hide-version
stats realm HAProxy\ Statistics
stats auth admin:ChangeThisPassword
# Restrict admin access
acl allowed_network src 127.0.0.1 192.168.1.0/24
stats admin if allowed_network
#---------------------------------------------------------------------
# HTTP Frontend - Redirect to HTTPS
#---------------------------------------------------------------------
frontend http_front
bind *:80
# Rate limiting
stick-table type ip size 100k expire 30s store http_req_rate(10s)
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
# Redirect all HTTP to HTTPS
http-request redirect scheme https code 301
#---------------------------------------------------------------------
# HTTPS Frontend - Main application entry point
#---------------------------------------------------------------------
frontend https_front
bind *:443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
# Rate limiting
stick-table type ip size 100k expire 30s store http_req_rate(10s)
http-request track-sc1 src
http-request deny deny_status 429 if { sc_http_req_rate(1) gt 100 }
# Security headers
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
http-response set-header X-Frame-Options "SAMEORIGIN"
http-response set-header X-Content-Type-Options "nosniff"
http-response set-header X-XSS-Protection "1; mode=block"
http-response set-header Referrer-Policy "strict-origin-when-cross-origin"
# Set forwarding headers
http-request set-header X-Forwarded-Proto https
http-request set-header X-Forwarded-Port %[dst_port]
# Path-based routing ACLs
acl url_static path_beg /static /images /css /js /fonts
acl url_api path_beg /api
acl url_admin path_beg /admin
# IP-based access control
acl internal_network src 192.168.1.0/24 10.0.0.0/8
# Block external admin access
http-request deny if url_admin !internal_network
# Route to backends
use_backend static_servers if url_static
use_backend api_servers if url_api
use_backend admin_servers if url_admin
default_backend web_servers
#---------------------------------------------------------------------
# Main web application backend
#---------------------------------------------------------------------
backend web_servers
balance leastconn
# Session persistence
cookie SERVERID insert indirect nocache
# Health checks
option httpchk GET /health
http-check expect status 200
# Connection settings
option http-keep-alive
# Servers
server web1 192.168.1.10:80 check inter 3s fall 3 rise 2 maxconn 500 cookie web1
server web2 192.168.1.11:80 check inter 3s fall 3 rise 2 maxconn 500 cookie web2
server web3 192.168.1.12:80 check inter 3s fall 3 rise 2 maxconn 500 cookie web3
#---------------------------------------------------------------------
# API backend
#---------------------------------------------------------------------
backend api_servers
balance roundrobin
# API health checks
option httpchk GET /api/health
http-check expect status 200
# Servers
server api1 192.168.1.20:8080 check inter 5s fall 3 rise 2 maxconn 300
server api2 192.168.1.21:8080 check inter 5s fall 3 rise 2 maxconn 300
#---------------------------------------------------------------------
# Static content backend
#---------------------------------------------------------------------
backend static_servers
balance roundrobin
# Longer timeouts for large file transfers
timeout server 5m
# Basic health check
option httpchk HEAD /
# Servers
server static1 192.168.1.30:80 check inter 10s
server static2 192.168.1.31:80 check inter 10s
#---------------------------------------------------------------------
# Admin backend
#---------------------------------------------------------------------
backend admin_servers
balance source
# Sticky sessions based on IP
stick-table type ip size 10k expire 1h
stick on src
# Health checks
option httpchk GET /admin/health
http-check expect status 200
# Server
server admin1 192.168.1.40:8000 check inter 5s
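As with the earlier examples, validate the file and then reload HAProxy. A reload applies the new configuration without dropping established connections, and the expose-fd listeners option on the stats socket lets the new process take over the listening sockets:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy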
Troubleshooting Common HAProxy Issues
This step helps you quickly troubleshoot the most common HAProxy issues in production.
Issue 1. Service Won’t Start: You can check syntax validation and service logs:
# Validate configuration
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Check systemd logs
sudo journalctl -u haproxy -n 50 --no-pager
# Check service status
sudo systemctl status haproxy
Common causes:
- Syntax errors in the configuration file.
- Ports are already in use by another service.
- Insufficient permissions on certificate files.
- Missing certificate files.
Issue 2. 503 Service Unavailable Errors: Indicates no healthy backend servers available:
# Check backend server status
echo "show stat" | sudo socat stdio /var/run/haproxy/admin.sock | grep backend_name
# View real-time logs
sudo tail -f /var/log/haproxy/haproxy.log
Solutions:
- Verify backend servers are running and accessible.
- Check that the health check configuration matches the application requirements.
- Review firewall rules between HAProxy and backends.
- Verify SSL certificate validity if using SSL health checks.
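To check the first two points directly from the HAProxy host, you can run the health check request against a backend manually (adjust the address and path to your configuration):
curl -i http://192.168.1.10:80/health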
Issue 3. Connection Timeouts: Review and adjust timeout settings:
# Check for timeout-related termination flags in logs
sudo tail -f /var/log/haproxy/haproxy.log | grep -E "(cR|cD|sC|sH|sD)"
Common timeout-related flags (the first two characters of the termination state field in each log line):
- cR: Client-side timeout while waiting for the complete request.
- cD: Client-side timeout during the data phase.
- sC: Server-side timeout while connecting to the backend.
- sH: Server-side timeout while waiting for the response headers.
- sD: Server-side timeout during the data phase.
Adjust relevant timeout values in the configuration based on application requirements.
Issue 4. SSL/TLS Handshake Failures: Verify certificate configuration:
# Test SSL certificate
openssl s_client -connect your-server:443 -servername your-domain.com
# Check certificate expiration
openssl x509 -in /etc/haproxy/certs/example.com.pem -noout -dates
# Verify certificate chain
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt /etc/haproxy/certs/example.com.pem
Common causes:
- Incomplete certificate chain.
- Expired certificates.
- Incorrect file permissions.
- Missing intermediate certificates.
FAQs
Which HAProxy Mode Should I Use? HTTP mode or TCP mode?
Use HTTP mode for websites and APIs because it supports routing by host, path, headers, redirects, and Layer 7 features. Use TCP mode for non-HTTP services.
Where do I place the SSL frontend block in the HAProxy config file?
Put it in /etc/haproxy/haproxy.cfg after global and defaults, and before backend sections.
Why do I get 503 Service Unavailable in HAProxy?
Most of the time, it means HAProxy considers all backend servers DOWN, usually because health checks are failing or network access to the backends is blocked.
Conclusion
HAProxy is fast, flexible, and has many features, so it works well for everything from small projects to huge websites. If you use the setup examples and best practices in this guide, you can build a reverse proxy that is stable, secure, and able to handle very large traffic with very low delay.
We hope you enjoy this guide. Subscribe to our X and Facebook channels to get the latest updates and articles.