Resolve Conntrack Table Full Errors

Stop Table Full Errors: Conntrack Configuration for High-Traffic Dedicated Servers

Connection tracking (conntrack) table full errors are among the most common network issues on high-traffic Linux servers. When you see the “nf_conntrack: table full, dropping packet” message in your system logs, it means your server has reached its connection tracking capacity and is actively dropping network packets. This causes serious problems such as failed connections, network timeouts, and slow application performance.

This guide from PerLod Hosting explains why conntrack tables fill up under load, how to size them for your workload, and how to prevent connection drops on production servers and dedicated server environments.

What is Connection Tracking?

Connection tracking, or Conntrack, is a kernel module within the Linux Netfilter framework that keeps state information about active network connections. The conntrack system creates a hash table that stores details about each connection, including protocol type, source IP address, source port, destination IP address, destination port, and connection state.

This tracking is essential for NAT and stateful firewall operations. Without conntrack, your server cannot perform NAT translation or enforce firewall rules that distinguish between new connections and established traffic.
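As a quick illustration, stateful rules like the following only work because conntrack records each connection's state; the SSH rule is just an example service:

# Accept return traffic for connections conntrack already knows about
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow brand-new inbound SSH connections (example)
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT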

The conntrack table supports tracking for six protocols:

  • TCP
  • UDP
  • ICMP
  • DCCP
  • SCTP
  • GRE

Each tracked connection consumes about 304 to 320 bytes of kernel memory. For example, tracking 1 million simultaneous connections requires 320 MB of RAM.

Why Do Conntrack Tables Fill Up?

Understanding why conntrack tables fill up helps you prevent the problem before it affects your environment. Here are the most common conntrack table full causes:

1. High Connection Volume: High-traffic servers often track 100,000 to 200,000 connections under normal load. Once the tracking limit is reached, the server drops packets for both new and existing connections. Dedicated servers and VM hosts are especially vulnerable: running multiple virtual machines generates massive traffic volumes that fill the tracking table quickly.

2. Microservices and Container Environments: Microservices create far more connections than traditional applications. Kubernetes pods constantly generate NAT entries by talking to each other, and NodePort services make this worse by performing NAT twice.

3. Excessive Timeout Values: Linux defaults keep connections in memory too long. Established TCP connections are tracked for 5 full days by default, and even generic entries stay for 10 minutes. This wastes space by remembering completed connections long after they have stopped communicating.

4. DDoS Attacks and Malicious Traffic: Attackers flood your server with fake connections to fill up the table, so real users get blocked. Even a sudden jump in legitimate traffic can fill the table.

5. Default Limits Too Low: Most Linux systems default to a limit of 65,536 connections. Even with 8 GB of RAM, the system may only allow around 260,000 connections, which is often not enough for high-traffic servers.

Here is a table showing how low limits affect different servers:

Server Type                 Typical Connections    Default Limit    Problem
Light web server            10,000 to 30,000       65,536           Adequate
Medium web server           50,000 to 100,000      65,536           Occasional drops
Heavy dedicated server      100,000 to 200,000     65,536           Frequent drops
Kubernetes node             200,000 to 500,000     65,536           Critical failures
High-traffic NAT gateway    500,000 and above      65,536           Severe packet loss

Right-Sizing Conntrack Table

Right-sizing the conntrack table requires understanding both your architecture and available system resources. The goal is to balance memory consumption with connection capacity.

1. Memory-Based Calculation: The standard formula for calculating maximum conntrack entries is based on available RAM.

Here is the formula:

CONNTRACK_MAX = RAM (in bytes) / 16384 / 2 (on 64-bit systems)

Examples for practical calculations include:

4GB Server:

4 × 1024^3 / 16384 / 2 = 131,072 connections

8GB Server:

8 × 1024^3 / 16384 / 2 = 262,144 connections

16GB Server:

16 × 1024^3 / 16384 / 2 = 524,288 connections

Since each conntrack entry consumes about 320 bytes, you can verify memory requirements with:

Memory Required = max connections × 320 bytes
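As a quick sanity check, the following sketch computes both values on the server itself. It assumes a 64-bit system and reads MemTotal from /proc/meminfo:

# Sketch: derive a suggested nf_conntrack_max from installed RAM (64-bit assumed)
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)   # MemTotal is reported in kB
conntrack_max=$(( ram_kb * 1024 / 16384 / 2 ))
echo "Suggested nf_conntrack_max: ${conntrack_max}"
echo "Memory at a full table: $(( conntrack_max * 320 / 1024 / 1024 )) MB"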

2. Hash Table Sizing: The conntrack system uses a hash table with buckets to store connection entries. The size of the hash table directly affects performance: larger hash tables mean shorter bucket chains and faster connection lookups.

The relationship between maximum connections and hash table size has changed across Linux kernel versions:

Older Kernels pre-5.15:

hashsize = nf_conntrack_max / 4

This creates an average bucket size of 4 entries when the table is full.

Modern Kernels 5.15 and above:

hashsize = nf_conntrack_max

This creates an average bucket size of 1 to 2 entries, which improves performance.

Ideally, each bucket holds one or two entries and never more than eight. Long chains within a bucket force the kernel to search further, which slows performance.
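The following sketch picks a hash size for a chosen maximum based on the running kernel, using the 1:1 ratio on 5.15+ and 1:4 on older kernels; the version parsing is a simple approximation:

# Sketch: choose a hashsize for a given nf_conntrack_max based on kernel version
max=1048576
major=$(uname -r | cut -d. -f1)
minor=$(uname -r | cut -d. -f2 | cut -d- -f1)
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 15 ]; }; then
    hashsize=$max              # 5.15 and newer: 1:1 ratio
else
    hashsize=$(( max / 4 ))    # older kernels: 1:4 ratio
fi
echo "nf_conntrack_max=$max hashsize=$hashsize"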

3. Size Recommendations by Workload: You can choose your conntrack maximum based on your server’s role and traffic patterns.

For light load servers:

  • Maximum: 131,072 connections.
  • Hashsize: 32,768.
  • Memory: Around 42 MB.
  • Use case: Development servers and small websites.

For medium load servers:

  • Maximum: 262,144 to 524,288 connections.
  • Hashsize: 65,536 to 131,072.
  • Memory: Around 84 to 168 MB.
  • Use case: Standard web applications and database servers.

For heavy load dedicated servers:

  • Maximum: 524,288 to 1,048,576 connections.
  • Hashsize: 131,072 to 262,144.
  • Memory: Around 168 to 336 MB.
  • Use case: High-traffic web servers, application servers, and dedicated hosting.

For Kubernetes nodes:

  • Maximum: 524,288 to 1,048,576 connections.
  • Hashsize: 524,288 to 1,048,576 (1:1 ratio).
  • Memory: Around 168 to 336 MB.
  • Use case: Container orchestration and microservices platforms.

For NAT gateways and firewalls:

  • Maximum: 1,048,576 to 2,097,152 connections.
  • Hashsize: 262,144 to 524,288.
  • Memory: Around 336 to 672 MB.
  • Use case: Network edge devices and multi-tenant hosting.

Note: For dedicated servers, you can start with at least 524,288 connections and monitor your usage. Since these servers usually have plenty of RAM, it is safer to set high limits to handle heavy traffic.

Resolve Conntrack Table Full Errors

To fix a full conntrack table, you need two steps: a quick fix to stop current errors and a permanent change to prevent them later.

To resolve this issue, follow the steps below.

Check Current Conntrack Usage

First, you must check your current conntrack usage and status. To do this, you can use the commands below.

Check the maximum allowed connections with the command below:

cat /proc/sys/net/netfilter/nf_conntrack_max

Check current active connections with the following command:

cat /proc/sys/net/netfilter/nf_conntrack_count

Calculate current conntrack usage percentage with the command below:

echo "scale=2; $(cat /proc/sys/net/netfilter/nf_conntrack_count) / $(cat /proc/sys/net/netfilter/nf_conntrack_max) * 100" | bc

Note: If your table is more than 80% full, you must increase the limit immediately.

To view the current hash table size, you can use the commands below:

# Check bucket count
cat /proc/sys/net/netfilter/nf_conntrack_buckets

# Alternative command
cat /sys/module/nf_conntrack/parameters/hashsize

To monitor connection tracking in real-time, you can use the following commands:

# Continuous monitoring (updates every 1 second)
watch -n 1 "conntrack -C"

# View event log of connection changes
conntrack --event

# View events with timestamps
conntrack --event -o timestamp

Also, check your system logs for conntrack table full errors:

# View recent kernel messages
dmesg | grep conntrack
dmesg -T | grep "table full"

# Check systemd journal
journalctl -k | grep conntrack

Immediately Increase Conntrack Connection Limits

Now you can use the commands below to immediately increase limits without rebooting. These changes take effect instantly but do not persist after reboot:

# Increase maximum connections to 1 million
sysctl -w net.netfilter.nf_conntrack_max=1048576

# Increase hash table size to 1 million (modern kernels)
sysctl -w net.netfilter.nf_conntrack_buckets=1048576

For older kernels where the buckets parameter cannot be changed via sysctl, you can use this command:

# Set hashsize directly (use 1/4 of max for older kernels)
echo 262144 > /sys/module/nf_conntrack/parameters/hashsize

Confirm the new maximum connection limit and the new bucket count with the commands below:

sysctl net.netfilter.nf_conntrack_max
cat /sys/module/nf_conntrack/parameters/hashsize

Make Conntrack Table Configuration Permanent

Making these changes permanent across reboots requires modifying system configuration files, but persisting them is tricky: the system often tries to apply your new limits before the connection tracking module is even loaded, which causes them to fail silently.

To make your configuration persistent, follow the steps below.

Pre-load the Conntrack Module

Make sure the nf_conntrack module loads early in the boot process by adding it to the modules configuration. To do this, run the command below:

echo "nf_conntrack" >> /etc/modules-load.d/modules.conf

This makes the module load via systemd-modules-load.service before sysctl settings are applied.
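After the next boot, or after loading the module manually with modprobe nf_conntrack, you can confirm it is present and that the boot-time service ran:

# Confirm the module is loaded and the modules-load service succeeded
lsmod | grep nf_conntrack
systemctl status systemd-modules-load.service --no-pager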

Configure Sysctl Parameters for Conntrack Settings

Create a dedicated sysctl configuration file for conntrack settings with the command below:

# Create conntrack configuration file
cat > /etc/sysctl.d/10-conntrack.conf << EOF
# Conntrack table sizing
net.netfilter.nf_conntrack_max=1048576
net.netfilter.nf_conntrack_buckets=1048576

# Timeout optimizations
net.netfilter.nf_conntrack_tcp_timeout_established=21600
net.netfilter.nf_conntrack_tcp_timeout_time_wait=60
net.netfilter.nf_conntrack_tcp_timeout_close_wait=20
net.netfilter.nf_conntrack_tcp_timeout_fin_wait=30
net.netfilter.nf_conntrack_generic_timeout=120
EOF

Load the configuration and verify settings are applied without rebooting with the commands below:

sysctl -p /etc/sysctl.d/10-conntrack.conf
sysctl net.netfilter.nf_conntrack_max
sysctl net.netfilter.nf_conntrack_buckets

Alternatively, you can use a udev rule to apply settings when the module loads:

cat > /etc/udev/rules.d/91-nf_conntrack.rules << 'EOF'
ACTION=="add", SUBSYSTEM=="module", KERNEL=="nf_conntrack", \
RUN+="/usr/lib/systemd/systemd-sysctl --prefix=/net/netfilter"
EOF

This rule tells the system to apply your connection tracking settings exactly when the module finishes loading.

For CentOS 7 systems, you can configure the hash table size via module parameters. Create a modprobe configuration with the command below:

echo "options nf_conntrack expect_hashsize=131072 hashsize=262144" > /etc/modprobe.d/firewalld-sysctls.conf

Restart the firewall to reload the module and verify the hash size:

systemctl restart firewalld
cat /sys/module/nf_conntrack/parameters/hashsize

Timeout Tuning for Faster Entry Recycling

Reducing timeout values clears old connections faster, which frees up space without needing to increase the total limit. Check your current settings with the command below:

sysctl -a | grep nf_conntrack_tcp_timeout

The most impactful timeout adjustments include:

1. TCP Established Timeout: The default TCP established timeout is 432,000 seconds (5 days). This is too long for most apps. You can reduce it to 6 hours with the command below:

# Temporary change
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=21600

# Permanent change
echo "net.netfilter.nf_conntrack_tcp_timeout_established=21600" >> /etc/sysctl.d/10-conntrack.conf

2. TCP TIME_WAIT Timeout: Connections in TIME_WAIT state consume entries for 120 seconds by default. Reduce to 60 seconds with the following command:

sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=60

3. TCP CLOSE_WAIT and FIN_WAIT: These transitional states should clear quickly with the commands below:

# CLOSE_WAIT: reduce from 60s to 20s
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait=20

# FIN_WAIT: reduce from 120s to 30s
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_fin_wait=30

4. Generic Connection Timeout: The generic timeout applies to connections without specific protocol tracking.

The default is 600 seconds. You can reduce it to 120 seconds with the command below:

sysctl -w net.netfilter.nf_conntrack_generic_timeout=120

5. UDP Timeouts: UDP connections time out faster than TCP by default. You can fine-tune them based on your application:

# Standard UDP timeout (default 30s)
sysctl -w net.netfilter.nf_conntrack_udp_timeout=30

# UDP stream timeout (default 180s)
sysctl -w net.netfilter.nf_conntrack_udp_timeout_stream=60

To create a complete timeout configuration, you can use the commands below:

cat >> /etc/sysctl.d/10-conntrack.conf << EOF

# Timeout optimizations
net.netfilter.nf_conntrack_generic_timeout=120
net.netfilter.nf_conntrack_tcp_timeout_established=21600
net.netfilter.nf_conntrack_tcp_timeout_time_wait=60
net.netfilter.nf_conntrack_tcp_timeout_close_wait=20
net.netfilter.nf_conntrack_tcp_timeout_fin_wait=30
net.netfilter.nf_conntrack_tcp_timeout_last_ack=30
net.netfilter.nf_conntrack_udp_timeout=30
net.netfilter.nf_conntrack_udp_timeout_stream=60
EOF

# Apply changes
sysctl -p /etc/sysctl.d/10-conntrack.conf

Note: Be careful when changing timeouts. If you make them too short, you might accidentally cut off valid connections that are meant to stay open.
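To confirm that shorter timeouts actually take effect, you can inspect the remaining lifetime of tracked entries; in conntrack -L output, the third column is the entry's remaining timeout in seconds:

# Show the five longest-lived TCP entries (remaining timeout and state)
conntrack -L -p tcp 2>/dev/null | awk '{print $3, $4}' | sort -n | tail -5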

Bypassing Connection Tracking for Performance

You can completely skip tracking for high-volume traffic that doesn’t need it by using RAW table rules. This saves CPU power and keeps your table from filling up, because these rules block tracking before it even starts.

When You Can Use NOTRACK:

  • High-volume web traffic on dedicated servers where stateful filtering is not required.
  • Trusted internal network communication.
  • Backup traffic between servers.
  • BGP, OSPF, and other routing protocol traffic.

Dropping unwanted traffic early in the RAW table can lower CPU usage by roughly 10 to 15% in published benchmarks, which frees up capacity for real work on dedicated servers.

To apply NOTRACK rules, you can use the commands below:

# Bypass tracking for web traffic on port 80
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK

# Bypass tracking for HTTPS traffic on port 443
iptables -t raw -A PREROUTING -p tcp --dport 443 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 443 -j NOTRACK

# Bypass tracking for trusted internal subnet
iptables -t raw -A PREROUTING -s 10.0.0.0/8 -d 10.0.0.0/8 -j NOTRACK
iptables -t raw -A OUTPUT -s 10.0.0.0/8 -d 10.0.0.0/8 -j NOTRACK

Always apply these rules in both directions, incoming and outgoing, or the traffic won’t be fully ignored.
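Once the NOTRACK rules are in place, you can verify that the bypassed traffic no longer appears in the tracking table:

# With NOTRACK active, no new entries for port 80 should appear
conntrack -L -p tcp 2>/dev/null | grep -c "dport=80"

# The overall count should stay flat while untracked traffic flows
watch -n 1 "cat /proc/sys/net/netfilter/nf_conntrack_count"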

Important Limitations You Must Consider:

  • NAT does not function for NOTRACK traffic.
  • Stateful firewall rules cannot track NOTRACK connections.
  • You must implement alternative security measures for untracked traffic.

You can make RAW table rules persistent by adding them to your firewall configuration. For iptables, you can save the rules and restore them at boot (on Debian/Ubuntu, the iptables-persistent package loads /etc/iptables/rules.v4 automatically):

# Save current rules
iptables-save > /etc/iptables/rules.v4

# Restore rules on boot
iptables-restore < /etc/iptables/rules.v4
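If you manage the firewall with nftables instead of iptables, a rough equivalent of the NOTRACK rules is sketched below; it assumes a reasonably recent nftables with named hook priorities, and the table name is illustrative:

# Sketch: nftables equivalent of the NOTRACK rules above
nft add table inet rawtable
nft add chain inet rawtable prerouting '{ type filter hook prerouting priority raw; }'
nft add chain inet rawtable output '{ type filter hook output priority raw; }'
nft add rule inet rawtable prerouting tcp dport 80 notrack
nft add rule inet rawtable prerouting tcp dport 443 notrack
nft add rule inet rawtable output tcp sport 80 notrack
nft add rule inet rawtable output tcp sport 443 notrack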

Set up Monitoring and Alerting for Conntrack

You can easily set up monitoring rules to detect conntrack issues before they affect your workload. Create a monitoring script with the command below:

cat > /usr/local/bin/check-conntrack.sh << 'EOF'
#!/bin/bash

MAX=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
COUNT=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
PERCENT=$((COUNT * 100 / MAX))

if [ $PERCENT -gt 80 ]; then
    echo "WARNING: Conntrack usage at ${PERCENT}% (${COUNT}/${MAX})"
    # Add alerting logic here (email, Slack, etc.)
fi
EOF

Make the script executable:

chmod +x /usr/local/bin/check-conntrack.sh

You can also schedule automatic checks with cron with the command below:

# Check every 5 minutes (preserve any existing crontab entries)
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/check-conntrack.sh") | crontab -

To enable conntrack event logging for debugging, you can use the commands below:

# Log invalid connections
iptables -A INPUT -m conntrack --ctstate INVALID -j LOG --log-prefix "CONNTRACK_DROP: "

# View logged events
dmesg | grep CONNTRACK_DROP

View detailed connection statistics with the following commands:

# List all tracked connections
conntrack -L

# Count connections by state
conntrack -L | awk '{print $4}' | sort | uniq -c | sort -rn

# List connections for specific IP
conntrack -L -s 192.168.1.100

# Show connections to external destination
conntrack -L -d 8.8.8.8

# Filter by protocol
conntrack -L -p tcp
conntrack -L -p udp

Tip: In emergency situations, you can manually clear connection tracking entries. Use this carefully, because it disrupts active connections.

# Flush ALL connections (disrupts all active connections)
conntrack -F

# Delete connections from specific source IP
conntrack --delete --orig-src 192.168.1.100

# Delete connections to specific destination
conntrack --delete --orig-dst 8.8.8.8

# Delete specific protocol connections from an IP
conntrack --delete --orig-src 192.168.1.100 -p tcp

Note: Only use the flush command in emergencies. It instantly kills every single active connection on your server.

Example Conntrack Table Configuration for Dedicated Servers

Here is a complete conntrack table configuration suitable for high-traffic dedicated servers:

# Step 1: Pre-load conntrack module
echo "nf_conntrack" > /etc/modules-load.d/modules.conf

# Step 2: Create comprehensive sysctl configuration
cat > /etc/sysctl.d/10-conntrack.conf << EOF
# Connection tracking table sizing for dedicated server
# Supports up to 1M simultaneous connections (~320MB RAM)
net.netfilter.nf_conntrack_max=1048576
net.netfilter.nf_conntrack_buckets=1048576

# Timeout optimizations to free entries faster
net.netfilter.nf_conntrack_generic_timeout=120
net.netfilter.nf_conntrack_tcp_timeout_established=21600
net.netfilter.nf_conntrack_tcp_timeout_time_wait=60
net.netfilter.nf_conntrack_tcp_timeout_close_wait=20
net.netfilter.nf_conntrack_tcp_timeout_fin_wait=30
net.netfilter.nf_conntrack_tcp_timeout_last_ack=30
net.netfilter.nf_conntrack_tcp_timeout_syn_recv=60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent=120
net.netfilter.nf_conntrack_udp_timeout=30
net.netfilter.nf_conntrack_udp_timeout_stream=60

# Enable connection accounting (optional, for monitoring)
net.netfilter.nf_conntrack_acct=1

# Enable timestamping (optional, for debugging)
net.netfilter.nf_conntrack_timestamp=1
EOF

# Step 3: Apply configuration immediately
sysctl -p /etc/sysctl.d/10-conntrack.conf

# Step 4: Verify settings
echo "Conntrack Max: $(cat /proc/sys/net/netfilter/nf_conntrack_max)"
echo "Conntrack Buckets: $(cat /proc/sys/net/netfilter/nf_conntrack_buckets)"
echo "Current Count: $(cat /proc/sys/net/netfilter/nf_conntrack_count)"

# Step 5: Test persistence by rebooting
echo "Configuration complete. Reboot to verify persistence."

This connection tracking configuration is best for dedicated servers handling web hosting, application servers, database clusters, or container orchestration platforms.

Prevent Conntrack Connection Drops

In this step, you can also use these best practices to prevent future conntrack issues on dedicated servers and production environments.

1. Capacity Planning: You must monitor your conntrack usage patterns over time. Track peak connection counts during high-traffic periods.

To log the connection count every minute for 24 hours, you can run the loop below (for example inside a screen or tmux session so it keeps running):

for i in {1..1440}; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $(cat /proc/sys/net/netfilter/nf_conntrack_count)" >> /var/log/conntrack-usage.log
    sleep 60
done

You can analyze the log to identify peak usage. To find the maximum connection count, you can run:

sort -k3 -n /var/log/conntrack-usage.log | tail -1

Calculate average usage with the following command:

awk '{sum+=$3; count++} END {print sum/count}' /var/log/conntrack-usage.log

Set your max limit to at least 150% of your highest traffic to leave safety room for sudden spikes.
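For example, a small sketch that reads the peak from the usage log collected above and suggests a limit at 150% of it:

# Sketch: suggest a limit at 150% of the observed peak connection count
peak=$(awk '{print $3}' /var/log/conntrack-usage.log | sort -n | tail -1)
echo "Observed peak: ${peak}"
echo "Suggested nf_conntrack_max: $(( peak * 3 / 2 ))"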

2. Application-Level Optimization: Badly written apps can fill up your tracking table by managing connections poorly. Consider the following recommendations:

  • Connection Pooling: Make your apps reuse existing connections instead of opening new ones every time. Using tools like database pools and keep-alive settings can lower the load on your tracking table.
  • Graceful Shutdown: Ensure apps close connections cleanly. A proper FIN close lets the tracking entry expire within seconds, while an abandoned connection can linger for the full established timeout.
  • Connection Limits: Set strict limits on how many connections each app can make. This stops a single busy service from hogging the entire table and blocking everyone else (a sketch follows this list).
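One simple way to enforce such a cap at the network layer is the iptables connlimit match; the port and threshold below are illustrative:

# Sketch: reject more than 100 simultaneous connections to port 80 from a single source IP
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset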

3. Network Architecture Considerations: Dedicated servers hosting multiple services or VMs benefit from strategic network architecture:

  • Service Isolation: Put busy services on their own network interfaces. This keeps their tracking load separate and makes problems easier to spot.
  • Load Balancing: Spread traffic across multiple servers instead of overloading one. Load balancers handle heavy connections much better than the standard kernel tracking.
  • Kubernetes: Switch to eBPF networking like Cilium. It bypasses the tracking table for many tasks, which reduces memory usage.

4. Security Hardening: Implement security measures to prevent connection table issues:

Rate Limiting: Use iptables or firewalld rate limiting to restrict new connection attempts per IP address:

# Limit new connections to 100 per minute per IP
# Note: xt_recent tracks only 20 packets per source by default; load it with a larger
# list (e.g. modprobe xt_recent ip_pkt_list_tot=100) before using --hitcount 100
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 100 -j DROP

SYN Flood Protection: Enable TCP SYN cookies to handle SYN flood attacks without consuming conntrack entries:

# Enable SYN cookies
sysctl -w net.ipv4.tcp_syncookies=1
echo "net.ipv4.tcp_syncookies=1" >> /etc/sysctl.d/10-conntrack.conf

Connection Tracking Zones: For complex dedicated server environments with multiple network namespaces or containers, you can use conntrack zones to isolate tracking between contexts. This prevents one namespace from exhausting the global table.
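A minimal sketch of zone assignment with the CT target in the raw table; the interface names are placeholders:

# Sketch: place traffic from two interfaces into separate conntrack zones
iptables -t raw -A PREROUTING -i eth1 -j CT --zone 1
iptables -t raw -A PREROUTING -i eth2 -j CT --zone 2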

5. Regular Maintenance: Make checking these settings a regular habit:

Weekly Reviews: Examine conntrack statistics weekly to identify trends:

# Weekly report script
cat > /usr/local/bin/conntrack-weekly-report.sh << 'EOF'
#!/bin/bash

echo "Conntrack Weekly Report - $(date)"
echo "=================================="
echo "Current Count: $(cat /proc/sys/net/netfilter/nf_conntrack_count)"
echo "Maximum Allowed: $(cat /proc/sys/net/netfilter/nf_conntrack_max)"
echo "Usage Percentage: $(echo "scale=2; $(cat /proc/sys/net/netfilter/nf_conntrack_count) / $(cat /proc/sys/net/netfilter/nf_conntrack_max) * 100" | bc)%"
echo ""
echo "Top Connection States:"
conntrack -L 2>/dev/null | awk '{print $4}' | sort | uniq -c | sort -rn | head -10
echo ""
echo "Top Source IPs:"
conntrack -L 2>/dev/null | grep -oP 'src=\K[^ ]+' | sort | uniq -c | sort -rn | head -10
EOF

chmod +x /usr/local/bin/conntrack-weekly-report.sh

Scaling: As your traffic grows, increase limits incrementally rather than waiting for exhaustion. Monitor the growth rate and project future needs.

Documentation: Document your conntrack configuration decisions, including why specific limits and timeouts were chosen for your workload. This helps future administrators understand the configuration logic.

FAQs

What does “nf_conntrack: table full, dropping packet” mean?

It means your server has hit its connection tracking limit and is dropping packets for new connections.

How many connections should my server support?

It depends on your workload. Busy web servers need 100,000 to 200,000 or higher. Dedicated servers should start with at least 524,288.

How do I know if my conntrack is full?

You can use the cat /proc/sys/net/netfilter/nf_conntrack_count command. If it’s over 80% of the max value, increase limits immediately.

Conclusion

Connection tracking errors can be resolved with the right configuration. For high-traffic production and dedicated servers, follow the guidelines discussed above to stop table full errors and keep your Linux servers running smoothly.

We hope you enjoy this guide. Subscribe to our X and Facebook channels to get the latest updates and articles.

For further reading:

Why Vertical Scaling Fails and What To Do Next

Containerizing AI inference with NVIDIA Triton Inference Server
