Edge Computing with Customized Bare Metal Servers

Edge computing means running applications or processing data close to where it’s generated, for example, on a local server, instead of sending everything to a faraway cloud data center. This reduces latency, saves bandwidth, and can improve security and reliability. This guide walks you through setting up edge computing on bare metal servers.

If you are looking for reliable dedicated servers to power your edge computing infrastructure, you can check PerLod Hosting, which provides high-performance bare metal servers with global data center locations, perfect for building low-latency edge networks.

Overview: Edge Computing on Bare Metal Servers

In this guide, we want to build a high-performance, low-latency edge computing network. This setup places small, fast servers near your users and links them securely to a main data center. Users connect to the nearest edge server for quick responses while important data stays safe in the core, which makes the application faster, more reliable, and easier to scale.

We’re building an edge computing system using a few tools, including:

  • Ubuntu 24.04 Bare Metal Server: We’re running everything directly on real computers, not virtual machines, using Ubuntu Linux version 24.04.
  • WireGuard VPN: It securely connects your edge servers and the main server so they can talk safely over the internet.
  • K3s Lightweight Kubernetes: Helps run and manage small apps, which is a smaller and faster Kubernetes version for edge devices.
  • Traefik with Nginx: These are used to control web traffic. Traefik decides where requests go, and Nginx can store small copies of responses to make websites faster.
  • Redis Edge Cache: A fast temporary database that stores data close to the edge so apps can get it quickly.
  • PostgreSQL core DB: The main database at your central data center. You can create read replicas at the edge to speed up reads.
  • Prometheus Node Exporter with Grafana Agent: These tools help you monitor your systems.

In this guide, we use EDGE_A and EDGE_B as the edge servers. For example, one in Germany and one in the Netherlands. And the CORE is the main data center server.

Planning and Prerequisites to Deploy Edge Computing on Dedicated Servers

A successful edge computing setup requires careful planning and execution. Here are the key planning decisions:

Latency Budget (Speed Targets): Reads served from the edge must be very fast; writes that travel back to the core can tolerate slightly higher latency.

Data Strategy: If data is already stored at the edge, serve it locally for speed. When users save or update data, send it to the central system by choosing:

  • Synchronous: Wait until the core confirms the write, which is more consistent.
  • Asynchronous: Send it later, which is faster, but may show old data briefly.

DNS Routing: Use latency-based or geo DNS so users automatically connect to the nearest edge server. If you host DNS yourself, start simple:

  • Use region-based subdomains.
  • Add smart routing based on geography or speed later.

Hardware Recommendation:

  • NICs with offloads: Network cards that can handle some tasks by themselves, reducing CPU work.
  • NVMe drives: Super-fast storage for caching frequently used data.
  • RAM: At least 16–32 GB for smooth caching and processing.
  • BIOS settings: Set the system to Performance mode, and if you need ultra-low latency, disable deep C-states.
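BIOS menus differ between vendors, but you can also check and cap deep C-states from the OS. The following is a minimal sketch assuming a GRUB-based Ubuntu install on an Intel CPU; the kernel parameters shown are the usual ones for limiting C-states:

# Check the deepest C-state currently allowed (Intel CPUs expose this via intel_idle)
cat /sys/module/intel_idle/parameters/max_cstate

# Cap deep C-states at boot by prepending kernel parameters to GRUB (run once),
# then regenerate the GRUB config and reboot for the change to take effect
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="intel_idle.max_cstate=1 processor.max_cstate=1 /' /etc/default/grub
sudo update-grub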

You must measure the current latency from user-representative locations to your origin server. To collect these baseline metrics, use the commands below:

# ICMP Latency: Measures round-trip time and packet loss to the origin.
ping -c 20 origin.example.com

# Path Quality: A powerful tool to diagnose latency, jitter, and packet loss on each hop of the route.
sudo apt update && sudo apt install mtr -y
mtr -rwzbc 200 origin.example.com

# TCP Throughput: Tests the maximum network throughput and how it is affected by latency.
sudo apt install iperf3 -y
iperf3 -c origin.example.com -t 30

# HTTP Load Testing: Simulates multiple users to measure your web server's latency and performance under load.
sudo snap install hey --classic || true
hey -z 30s -c 50 https://origin.example.com/health

Save these metrics as your before numbers.

Prepare OS and Optimize Network on EDGE and CORE Servers

You must prepare your servers (Edge and Core) by running the system update, installing required tools, and optimizing the network. Run the system update and install the required tools with the commands below:

sudo apt update && sudo apt upgrade -y
sudo apt install -y jq curl git unzip ethtool net-tools htop vim tmux ufw

Set the correct timezone on your servers:

sudo timedatectl set-timezone UTC
sudo timedatectl set-ntp true

Also, set the CPU governor to performance, which keeps the CPU at its maximum frequency at all times, with the commands below:

sudo apt install linux-tools-common linux-tools-generic -y
for c in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance | sudo tee $c; done
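To confirm the change took effect, check the governor on every core (cpupower comes from the linux-tools packages installed above):

cpupower frequency-info --policy
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c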

Now you must configure the sysctl parameters to optimize TCP buffer sizes, enable modern congestion control (BBR), and reduce TCP timeout delays with the following command:

sudo tee /etc/sysctl.d/99-edge-tuning.conf >/dev/null <<'EOF'
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.ip_local_port_range = 10000 65000
net.core.somaxconn = 65535
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
EOF

Apply the changes with the following command:

sudo sysctl --system
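Before moving on, confirm that BBR is available and has actually been selected as the congestion control algorithm:

sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control   # should print bbr
lsmod | grep tcp_bbr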

You can check and optionally disable generic receive offload (GRO), which favors throughput but can sometimes add latency. Check current offloads with the command below:

sudo ethtool -k $(ip -o link show | awk -F': ' '$2!="lo"{print $2; exit}')

Enable or disable specific offloads if needed. For example, this turns GRO and LRO off (replace eno1 with your NIC name):

sudo ethtool -K eno1 gro off lro off
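Note that ethtool changes do not survive a reboot. One way to persist them is a small systemd oneshot unit; this is a minimal sketch that assumes your NIC is eno1, and the unit name nic-offloads.service is our own choice:

sudo tee /etc/systemd/system/nic-offloads.service >/dev/null <<'EOF'
[Unit]
Description=Disable GRO/LRO on the primary NIC
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K eno1 gro off lro off

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now nic-offloads.service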

Set up WireGuard for Secure and Fast Edge Overlay

We want to use WireGuard to create an encrypted, peer-to-peer overlay network. This mesh gives your services a private, routable IP space to communicate securely between the core and edges, as if they were on the same physical local network, regardless of their global location.

Install WireGuard on all Edge and Core servers with the command below:

sudo apt install wireguard qrencode -y

Generate the key pairs on all servers with the following commands:

wg genkey | tee ~/wg.key | wg pubkey | tee ~/wg.pub
chmod 600 ~/wg.key

The overlay IPs assigned in this guide are:

  • CORE: 10.88.0.1/24
  • EDGE_A: 10.88.0.11/24
  • EDGE_B: 10.88.0.12/24

Now you must configure the Core server. The following configuration defines the core node’s overlay IP and lists all edge servers that are allowed to connect:

sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 10.88.0.1/24
ListenPort = 51820
PrivateKey = <CORE_PRIVATE_KEY>

# Edge A
[Peer]
PublicKey = <EDGE_A_PUB>
AllowedIPs = 10.88.0.11/32

# Edge B (optional)
[Peer]
PublicKey = <EDGE_B_PUB>
AllowedIPs = 10.88.0.12/32
EOF

Then, configure the Edge_A server, which defines the edge server’s IP and points it to the core server’s public IP as its endpoint:

sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 10.88.0.11/24
ListenPort = 51820
PrivateKey = <EDGE_A_PRIVATE_KEY>

[Peer]
PublicKey = <CORE_PUB>
AllowedIPs = 10.88.0.0/24
Endpoint = <CORE_PUBLIC_IP>:51820
PersistentKeepalive = 15
EOF

You can do this for the Edge_B server similarly.
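If you prefer not to paste keys by hand, here is a small sketch: print each server’s public key with cat ~/wg.pub, then substitute each machine’s own private key into its config with sed (shown for the core; adjust the placeholder name on the edges):

# On CORE, fill in the private key generated earlier
sudo sed -i "s|<CORE_PRIVATE_KEY>|$(cat ~/wg.key)|" /etc/wireguard/wg0.conf

# Paste the public keys printed by 'cat ~/wg.pub' on each edge into <EDGE_A_PUB> and <EDGE_B_PUB>
sudo chmod 600 /etc/wireguard/wg0.conf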

Start and enable WireGuard on all servers and check the status with the commands below:

sudo systemctl enable --now wg-quick@wg0
wg show

Test the connectivity across the overlay network:

ping -c 4 10.88.0.1    # from EDGE Servers
ping -c 4 10.88.0.11   # from CORE to EDGE_A

Deploy k3s Clusters at the Autonomous Edge

In this step, we want to deploy K3s in an Autonomous Edge pattern, where each edge location runs its own independent single-node cluster. This provides maximum flexibility; if the core data center fails, the edges continue to operate using their local cached data and compute.

Install K3s on Edge servers with the following commands:

curl -sfL https://get.k3s.io | sudo sh -s - server \
  --disable traefik \
  --write-kubeconfig-mode 644 \
  --tls-san $(curl -s ifconfig.me) \
  --flannel-backend=wireguard-native

This will disable the built-in Traefik and use WireGuard for the internal Pod network.

Check that Edge servers are ready and list all system pods with the commands below:

sudo kubectl get nodes -o wide
sudo kubectl get pods -A
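K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. To run kubectl as a regular user instead of prefixing every command with sudo, copy it into place:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config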

You can optionally install K3s on your Core server for server-side workloads:

curl -sfL https://get.k3s.io | sudo sh -s - server --disable traefik

Routing and Accelerating Traffic at the Edge

We want to deploy a two-layer system, including Traefik as the intelligent ingress router that understands Kubernetes and handles TLS termination, and a high-performance Nginx micro-cache placed directly in front of the application.

Install Traefik via Helm, which is a package manager for Kubernetes used to deploy complex applications:

sudo snap install helm --classic || curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

kubectl create ns ingress
helm repo add traefik https://traefik.github.io/charts
helm repo update

helm upgrade --install traefik traefik/traefik -n ingress \
  --set service.type=NodePort \
  --set ports.web.nodePort=30080 \
  --set ports.websecure.nodePort=30443 \
  --set logs.general.level=INFO
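Before continuing, verify that the Traefik pod is running and that its NodePorts (30080/30443) are exposed:

kubectl -n ingress get pods,svc -o wide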

Then, deploy Nginx Micro-Cache as a DaemonSet by creating the YAML file:

sudo nano edge-cache.yaml

Add the following configuration to the file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-cache
  namespace: ingress
spec:
  selector:
    matchLabels:
      app: edge-cache
  template:
    metadata:
      labels:
        app: edge-cache
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 8081
          hostPort: 8081
        volumeMounts:
        - name: cfg
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: cfg
        configMap:
          name: edge-cache-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: edge-cache-cm
  namespace: ingress
data:
  cache.conf: |
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=EDGE:100m max_size=2g inactive=10m use_temp_path=off;

    server {
      listen 8081;
      resolver 10.43.0.10;  # k3s default cluster DNS; public resolvers cannot resolve *.svc.cluster.local names
      proxy_connect_timeout 2s;
      proxy_read_timeout 5s;

      location / {
        proxy_cache EDGE;
        proxy_cache_valid 200 1m;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504 updating;
        proxy_ignore_headers Set-Cookie;
        add_header X-Cache $upstream_cache_status;

        # upstream = local k8s service (app), fallback to core if needed
        set $upstream http://app.default.svc.cluster.local:8080;
        proxy_pass $upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
      }
    }

Apply the configuration with the command below:

kubectl apply -f edge-cache.yaml

This runs Nginx on each edge server at port 8081 as a micro-cache in front of your app service.
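Before wiring Traefik to it, confirm the DaemonSet is running and that Nginx is listening on the host network of the edge node:

kubectl -n ingress get ds,pods -o wide
ss -tlnp | grep 8081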

Next, configure a Traefik IngressRoute that forwards traffic to the Nginx cache:

sudo nano ingressroute.yaml

Add the following configuration to the file:

apiVersion: traefik.io/v1alpha1  # Traefik v3 CRD group; use traefik.containo.us/v1alpha1 on older v2 installs
kind: IngressRoute
metadata:
  name: app-route
  namespace: ingress
spec:
  entryPoints:
  - web
  - websecure
  routes:
  - match: Host(`edge.example.com`)
    kind: Rule
    services:
    - name: edge-cache-svc
      port: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: edge-cache-svc
  namespace: ingress   # same namespace as the edge-cache DaemonSet so the selector can match its pods
spec:
  type: ClusterIP
  ports:
  - port: 8081
    targetPort: 8081
  selector:
    app: edge-cache   # matches DaemonSet labels

Apply the configuration with the command below:

kubectl apply -f ingressroute.yaml
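To confirm Traefik picked up the route and that the Service actually found the Nginx pods, check the IngressRoute and the Service endpoints:

kubectl -n ingress get ingressroute
kubectl -n ingress get endpoints edge-cache-svc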

Create a Sample Application for Latency Testing

To verify the setup, we want to create a sample application that acts as a real service, allowing us to test the complete data path from user to cache to app and back.

Create a simple Python HTTP server deployment and a Kubernetes Service to expose it within the cluster with the command below:

sudo nano app.yaml

Add the following configuration to the file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: python:3.12-slim
        command: ["python", "-m", "http.server", "8080"]
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: default
spec:
  selector:
    app: app
  ports:
  - port: 8080
    targetPort: 8080

Apply the configuration and verify the application pods are running and the service is correctly targeting them:

kubectl apply -f app.yaml
kubectl get svc,pods -n default -o wide
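With the app in place, you can exercise the full path through the micro-cache from the edge node itself (127.0.0.1:8081 works because the cache DaemonSet uses hostNetwork). The second request should be served from cache within the 1-minute TTL configured above:

curl -sI http://127.0.0.1:8081/ | grep -i x-cache   # expect X-Cache: MISS on the first request
curl -sI http://127.0.0.1:8081/ | grep -i x-cache   # expect X-Cache: HIT on the repeat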

Deploy Redis Cache At the Edge

You can now deploy a Redis cache at the edge for fast read operations and define clear paths for write operations that must persist to the core database.

Use the commands below to launch a Redis deployment and service in the ‘data’ namespace for the application to use as a local, hot cache:

kubectl create ns data
kubectl -n data create deployment redis --image=redis:7 --port=6379
kubectl -n data expose deployment redis --type=ClusterIP --port=6379
kubectl -n data get svc redis

Your app can read from Redis first; cache misses go to the core API. You can periodically warm the cache or fill on demand.
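A quick in-cluster smoke test confirms the cache is reachable at its service DNS name; it should print PONG:

kubectl -n data run redis-ping --rm -it --restart=Never --image=redis:7 -- \
  redis-cli -h redis.data.svc.cluster.local ping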

Writes: Send to CORE

  • For strong consistency, POST or PUT to api.core.example.com over WireGuard.
  • For eventual consistency, enqueue writes at the edge (for example, in a Redis Stream or a local Kafka topic) and replicate them to the core asynchronously.

Note: Keep important systems and data in the main core server. Send mostly read-only data like feature flags, product lists, or user profiles to edge caches for faster access, with a TTL so they stay fresh.
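As a hedged illustration of the synchronous path, a write can simply target the core over its WireGuard address; the port, the /orders path, and the JSON payload below are placeholders for whatever API your core actually exposes:

# Synchronous write: wait for the core to confirm before replying to the user
curl -s -X POST http://10.88.0.1:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"item":"sku-123","qty":1}'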

Smart DNS Routing for Edge Computing Setup

The final step is to put the user on the right edge. You can start with a simple and effective strategy using regional subdomains like eu.example.com. This manually directs users in Europe to your German edge, and users in the US to your American edge.

This can later be upgraded to a managed DNS service with latency-based routing, which continuously measures and directs users to the absolute closest healthy endpoint.
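Once the regional records exist, a quick check from any client confirms that each subdomain resolves to the intended edge (us.example.com is only an illustration of an additional region):

dig +short eu.example.com   # should return EDGE_A's public IP
dig +short us.example.com   # should return the edge serving US users, if you add one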

Configure Firewall Rules for Edge Computing Setup

You must only allow what you need, including:

  • Public: 80/443 (Traefik).
  • WireGuard: 51820/udp between edges and core.
  • SSH: locked to your IPs.
  • NodePorts (30080/30443) only if you’re directly exposing NodePorts; otherwise, use a proper LB/NAT mapping.

Apply the baseline rules with UFW:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 51820/udp

Enable the firewall and check the status:

sudo ufw enable
sudo ufw status
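The rules above open SSH to the whole internet. To honor the "SSH locked to your IPs" recommendation, replace the broad rule with one scoped to your admin address (fill in the <YOUR_ADMIN_IP> placeholder yourself):

sudo ufw delete allow 22/tcp
sudo ufw allow from <YOUR_ADMIN_IP> to any port 22 proto tcp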

Monitor Metrics on Edge and Core Servers

We want to use the Prometheus Node Exporter to collect bare-metal server metrics and the Grafana Agent to securely ship those metrics to a central observability platform.

Install and configure the Node exporter on your dedicated server with the commands below:

cd /tmp
curl -LO https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz
tar xzf node_exporter-1.8.2.linux-amd64.tar.gz
sudo mv node_exporter-1.8.2.linux-amd64/node_exporter /usr/local/bin/
sudo useradd -rs /bin/false nodeexp || true

Create a systemd unit file for it with the command below:

sudo tee /etc/systemd/system/node_exporter.service >/dev/null <<'EOF'
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=nodeexp
Group=nodeexp
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
EOF

Apply the changes and enable the Node exporter service:

sudo systemctl daemon-reload
sudo systemctl enable --now node_exporter

Verify the Node Exporter is listening on its default port:

ss -tlnp | grep 9100
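You can also pull a sample metric to confirm data is actually being exported:

curl -s http://localhost:9100/metrics | grep -m5 '^node_cpu_seconds_total'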

Follow the Grafana Agent setup Docs and connect it to your Prometheus remote_write endpoint. At the very least, make sure it collects metrics from Node Exporter and Traefik.
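The exact schema depends on the agent version, so treat the following as a rough sketch rather than the official configuration: a static-mode Grafana Agent config (placed at /etc/grafana-agent.yaml, the path used by the Debian package) that scrapes the local Node Exporter and forwards everything to a remote_write endpoint. The endpoint URL and credentials are placeholders.

sudo tee /etc/grafana-agent.yaml >/dev/null <<'EOF'
metrics:
  global:
    scrape_interval: 15s
  configs:
    - name: edge
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']
      remote_write:
        - url: https://<YOUR_PROMETHEUS_ENDPOINT>/api/v1/write
          basic_auth:
            username: <USER>
            password: <TOKEN>
EOF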

Benchmark Metrics After Edge Computing Setup

At this point, you must re-run the measurements and compare the “before” and “after” results to show the latency reduction, throughput improvement, and overall stability of your new edge computing architecture.

From a client near the EDGE_A server, run the commands below:

# Edge DNS in place (eu.example.com -> EDGE_A)
ping -c 20 eu.example.com
mtr -rwzbc 200 eu.example.com

# HTTP through Traefik->Nginx cache->app
hey -z 30s -c 100 http://eu.example.com/

Look for lower average and worst-case response times compared to the origin, more requests served from the Nginx cache (X-Cache: HIT), and steady performance with few or no errors.

FAQs

Why use dedicated servers instead of cloud instances for edge computing?

Dedicated servers offer consistent performance, hardware-level control, and predictable network behavior, which are essential for low-latency edge workloads.

Can I use virtual machines or containers instead of bare-metal servers for edge computing?

Yes. Containers via Kubernetes or K3s are ideal for edge workloads due to their lightweight nature. However, if you need maximum performance and hardware-level optimizations, bare-metal deployment is better.

How does WireGuard help in edge computing?

WireGuard creates a secure, high-speed VPN overlay connecting your edge nodes and core data center. It ensures encrypted traffic between all locations with minimal CPU overhead.

Conclusion

Implementing edge computing on dedicated servers is one of the most effective strategies for reducing network latency and improving service reliability. By bringing compute resources closer to end-users, organizations can achieve faster response times, lower bandwidth usage, and enhanced scalability.

Do not forget to check PerLod’s global dedicated server plans for setting up a powerful edge computing platform that reduces network latency for users worldwide.

We hope you found this guide on setting up edge computing on bare metal servers useful. Subscribe to our X and Facebook channels to get the latest updates and articles on edge computing.

For further reading:

Setting Up a Game Server on a VPS

How to Set Up a Video Streaming Server

Case study in Fintech Startup Dedicated Server
