Why Bare Metal Kubernetes LoadBalancer is Pending and How to Fix it
Running Kubernetes in the cloud is easy: you ask for a LoadBalancer, and the cloud provider gives you an IP address automatically. On bare metal, it doesn’t work that way. Since there is no cloud controller to handle the networking, your services often get stuck in a Pending state forever. In this guide, we will explain why this happens and fix the most common bare metal Kubernetes networking issues.
By the end of this tutorial from PerLod Hosting, you will have a network setup on your dedicated server that works just as smoothly as the cloud.
Cloud and Bare Metal Kubernetes Networking Differences
The main difference between cloud Kubernetes, like EKS, GKE, or AKS, and bare metal is in the network infrastructure layer.
In a cloud environment, the cloud provider’s controller manager automatically detects Service resources with type: LoadBalancer. It then provisions a cloud load balancer, such as a Google Cloud Load Balancer, and assigns a public IP address to it. This process is transparent to the user.
In a bare metal environment, such as when running a cluster on a dedicated server or a local VM, this automation does not exist. Kubernetes does not include a native implementation of network load balancers. So, if you create a Service of type: LoadBalancer, Kubernetes will wait for a third-party controller to provide an IP, which causes the service to stay in the Pending state.
Additionally, bare metal clusters have direct access to physical network interfaces (NICs) without a hypervisor layer. This offers higher performance but requires manual management of ARP resolution, routing, and BGP peering if you want to advertise routes to the wider internet.
Kubernetes Networking Failures on Bare Metal
When you manage your own servers, you lose the automatic safety nets that cloud providers offer, and Kubernetes cannot find the network resources it expects. When deploying on bare metal, you will encounter these specific Kubernetes networking failures:
1. Pending External IPs: The most common issue is that Services defined as type: LoadBalancer never get an EXTERNAL-IP.
The symptom of this issue is that when you run the kubectl get svc command, it displays <pending> under the EXTERNAL-IP column forever (see the sample output after this list). This happens because no external controller is running to assign an IP address from your network pool to the Service.
2. NodePort Limitations: Without a LoadBalancer, users fall back to NodePort. NodePort only exposes services on high ports; the default range is 30000 to 32767. Because you cannot expose port 80 or 443 directly, NodePort is a poor fit for web traffic.
3. Hairpinning (NAT Loopback) Failures: The symptom is that pods inside the cluster cannot access the Service via its public IP, even though external clients can. This is a hairpin mode issue where the CNI (Container Network Interface) or kube-proxy does not correctly route traffic leaving a node back to itself via the external address.
4. CNI Conflicts: The symptom of CNI conflicts is that pod-to-pod communication fails or DNS resolution fails.
On bare metal servers, network plugins like Calico or Flannel often get confused. They might accidentally pick an IP range that your office or data center is already using, or try to send traffic through the wrong network interface.
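For reference, the Pending symptom from issue 1 typically looks like the output below; the service name, cluster IP, and port are hypothetical placeholders:

kubectl get svc my-app
NAME     TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-app   LoadBalancer   10.96.120.15   <pending>     80:31245/TCP   5m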
Once you have identified the bare metal Kubernetes networking issues, you can proceed to the following steps to fix them.
Fix Kubernetes Networking Failures on Bare Metal
After you identify the bare metal Kubernetes networking failures, you can manually add the network components that your server is missing. To fix these Kubernetes networking issues, you must install a software load balancer and properly configure your ingress layer.
1. Deploy MetalLB: MetalLB acts as the missing LoadBalancer for bare metal. It assigns IPs from a pool you define and uses standard protocols, including ARP or BGP, to make them reachable.
If you are using kube-proxy in IPVS mode, you must enable strict ARP so MetalLB can answer ARP requests:
kubectl edit configmap -n kube-system kube-proxy
This opens the kube-proxy configuration, and you must look for strictARP and set it to true.
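Inside the editor, the relevant setting sits under the ipvs section of the kube-proxy configuration; once changed, it should look like this:

ipvs:
  strictARP: true

If you prefer not to edit the ConfigMap interactively, a common alternative is to patch it with sed:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system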
Then, install MetalLB with the command below:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
This creates the metallb-system namespace and deploys the necessary components.
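You can confirm the installation by checking that the MetalLB controller and speaker pods reach the Running state:

kubectl get pods -n metallb-system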
Next, you must define which IPs MetalLB is allowed to hand out. Create a file named metallb-config.yaml with the following configuration:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250 # Replace with your dedicated server's available reserved IPs
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
The IPAddressPool defines the address range, and the L2Advertisement tells MetalLB to use Layer 2 (ARP) to advertise these IPs to the local network.
Apply the configuration with the command below:
kubectl apply -f metallb-config.yaml
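To confirm MetalLB is now handing out addresses, you can create a quick test service; the deployment name web-test below is just a placeholder:

kubectl create deployment web-test --image=nginx
kubectl expose deployment web-test --port=80 --type=LoadBalancer
kubectl get svc web-test

The EXTERNAL-IP column should now show an address from the pool you defined instead of pending.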
Tip: For a complete MetalLB setup guide and advanced configuration, check this guide on Setting Up MetalLB for Load Balancing on Bare Metal Kubernetes.
2. Use the NGINX Ingress Controller with MetalLB: Now that you have a working LoadBalancer, you should use an Ingress Controller to route traffic on ports 80 and 443.
Install the NGINX Ingress Controller with the command below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
Note: We use the cloud deploy manifest, not baremetal, because we now have MetalLB acting as a cloud provider.
The Ingress Controller Service will request a LoadBalancer IP, and MetalLB will assign one. Now, any traffic hitting the assigned IP is routed to your Ingress controller.
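You can verify that the controller’s Service received an address from MetalLB (with the official manifest, it is named ingress-nginx-controller in the ingress-nginx namespace):

kubectl get svc -n ingress-nginx ingress-nginx-controller

From there, a minimal Ingress resource routes a hostname to a backend Service; the names and hostname below are placeholders for your own workloads:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-test
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-test
            port:
              number: 80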
3. Troubleshooting Commands: If networking is still failing, use the commands below to find the layer where traffic is dropping.
Verify that pods have IPs and are assigned to the correct nodes:
kubectl get pods -o wide --all-namespaces
If IPs are pending or match the host IP, your CNI is misconfigured.
Check that your Service actually knows which Pod IPs to send traffic to:
kubectl get endpoints service-name
If endpoints are empty, the Service selector doesn’t match the Pod labels.
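To compare the two, inspect the Service selector and the Pod labels side by side (service-name is a placeholder for your own Service):

kubectl describe service service-name
kubectl get pods --show-labels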
On a dedicated server, you need to check if packets are actually arriving at the physical interface:
tcpdump -i any -nn port 80
If services work by IP but not by name, check CoreDNS to verify internal cluster DNS resolution:
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
If this times out, your CNI or firewall rules are blocking UDP port 53.
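Note that the dnsutils pod used above is not created by default; it is a small debugging pod from the Kubernetes DNS troubleshooting documentation that you can deploy first if it is missing:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml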
Tip: If your External IP is still stuck in pending or is assigned but unreachable after trying these steps, check this guide to Discover and Fix MetalLB External IP Issues for detailed Kubernetes networking troubleshooting.
FAQs
Why do my server logs show the node’s IP instead of the real visitor’s IP?
By default, kube-proxy source-NATs traffic as it forwards it between nodes, so your application sees a node IP instead of the client IP. To see the actual client IP in your logs, add externalTrafficPolicy: Local to your Service YAML, but note that this might make load balancing uneven.
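As a minimal sketch, assuming a simple web Service (the name and labels are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web-test
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local # preserves the client source IP
  selector:
    app: web-test
  ports:
  - port: 80
    targetPort: 80

With Local, only nodes that run a matching pod receive traffic, which is why load spreading can become uneven.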
Why can my pods ping IP addresses but not open websites?
This is usually caused by the host firewall blocking internal traffic. Ensure your firewall allows UDP port 53 and permits traffic on the Kubernetes network interface.
Can I use multiple network cables (NICs) on my nodes?
Yes. Using separate interfaces for private and public traffic is best practice. However, CNI plugins often pick the wrong one by default, so you must explicitly set the correct interface in the CNI configuration.
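For example, if you run Calico and want it to use a private interface such as eth1, one option (assuming the calico-node DaemonSet is installed in kube-system) is to set its IP autodetection method:

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth1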
Conclusion
Bare metal Kubernetes networking issues happen because the platform lacks a built-in load balancer implementation. To fix this, you must install MetalLB and configure it with a valid IP pool that matches your network. For a stable dedicated server, always check the service status and logs after every change to confirm everything is working.
We hope you enjoy this guide on fixing Kubernetes networking failures. Subscribe to our X and Facebook channels to get the latest updates and articles.