Setting Up Docker Swarm on VPS Clusters
A Docker Swarm VPS cluster setup connects multiple VPS servers into a single cluster that runs Docker containers. Docker Swarm manages these servers as one system, allowing you to deploy, scale, and manage containers across all VPS nodes.
This tutorial guides you step by step through the entire process of setting up a Docker Swarm cluster on VPS nodes.
At PerLod Hosting, we provide optimized VPS clusters with preconfigured Docker support, which makes it easy for developers to launch and scale Swarm environments securely and efficiently.
Prerequisites For Docker Swarm VPS Cluster Setup
The first and most essential step is to build a strong foundation, so you must prepare your infrastructure by ensuring you have the right servers, operating systems, and the correct network configuration.
Here are the prerequisites you will need for the Docker Swarm cluster setup:
- You will need at least three VPS servers for this setup. Use one or three manager nodes (an odd number is best for production) and one or more worker nodes.
Tip: If you are looking for the best VPS providers, our Flexible VPS Hosting Solutions offer VPS plans with high-speed SSD storage and dedicated network interfaces perfect for building Docker Swarm clusters.
- In this guide, we use Ubuntu 22.04 LTS, but you can also use other modern Linux versions.
- Make sure you have SSH access with a sudo user on each server.
- All your nodes should be able to reach each other over a private or public network, have unique hostnames, and have their clocks synchronized (for example, via NTP).
For Docker Swarm to work properly, make sure these ports are open and allowed between all nodes:
- TCP 2377: Used for Swarm management traffic between the manager and nodes.
- TCP and UDP 7946: Used for communication between nodes.
- UDP 4789: Used for VXLAN overlay networks.
- Application ports: for example, 80 and 443 if you’re running web applications.
Now you can proceed to the following steps to start your Docker Swarm VPS Cluster Setup.
System Updates and Hostname Configuration on Each Node
Each node must be prepared to ensure all systems are up-to-date with the latest security patches and can be uniquely identified within the network.
On every VPS node, use the following commands to run the system update and upgrade:
sudo apt update
sudo apt upgrade -y
Set the hostname on each node with the following commands:
# On manager
sudo hostnamectl set-hostname swarm-manager-1
# On first worker
sudo hostnamectl set-hostname swarm-worker-1
# On second worker
sudo hostnamectl set-hostname swarm-worker-2
You can log out and back in to see that the new hostname is applied in your shell prompt.
Also, check the hostname with the following command:
hostnamectl
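Optionally, if your nodes cannot resolve each other by hostname, you can map the hostnames to their IP addresses in /etc/hosts on every node. The addresses below are assumptions that match the example manager IP used later in this guide; replace them with the real IPs of your servers:

# Example /etc/hosts entries (assumed IPs, adjust to your network)
echo "10.0.0.10 swarm-manager-1" | sudo tee -a /etc/hosts
echo "10.0.0.11 swarm-worker-1"  | sudo tee -a /etc/hosts
echo "10.0.0.12 swarm-worker-2"  | sudo tee -a /etc/hosts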
Important Note: You must run all commands in this section on every VPS node, both managers and workers.
Install Docker Engine on Each VPS Node
For a stable and consistent Swarm cluster, every node must run the same version of Docker, installed from the same source. Here we will show you how to install Docker from the official Docker repository.
If you already have Docker installed on your servers, it is recommended to remove the old Docker packages with the command below:
sudo apt remove -y docker docker-engine docker.io containerd runc || true
Install the required packages and dependencies with the command below:
sudo apt install ca-certificates curl gnupg lsb-release -y
Use the following commands to create the keyrings directory and add Docker’s official GPG key and repository:
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Give read permission to the Docker GPG key so APT can use it:
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Run the system update and install Docker engine and the required plugins with the following commands:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io \
docker-buildx-plugin docker-compose-plugin -y
Enable and start Docker engine service with the following commands:
sudo systemctl enable docker
sudo systemctl start docker
To check if Docker is active and running, use the command below:
sudo systemctl status docker
To test that Docker works correctly, you can run the command below, which should print the hello message:
sudo docker run hello-world
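Since every node should run the same Docker version, it is worth comparing the installed versions across your servers. A minimal check using the standard Docker CLI:

# Print the installed Docker Engine version on this node
sudo docker version --format '{{.Server.Version}}'

Run it on each node and confirm the output matches before you continue.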
You must perform the above steps on every server that will join the Swarm, both managers and workers.
Configure Host Firewall for Docker Swarm
In this step, you must configure the UFW firewall on Ubuntu to secure your nodes while allowing essential Swarm and application traffic.
Note: If you are not using UFW, you must apply these same port rules within your existing firewall.
Install and enable the UFW firewall with the following commands:
sudo apt install ufw -y
# Allow SSH so you do not lock yourself out
sudo ufw allow OpenSSH
sudo ufw enable
Then, apply the following rules on every node in the cluster to allow them to communicate:
# Swarm management
sudo ufw allow 2377/tcp
# Node communication
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
# Overlay network
sudo ufw allow 4789/udp
# Example application ports (HTTP and HTTPS)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
You can use the command below to check your UFW status:
sudo ufw status
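To confirm that the Swarm management port is actually reachable between nodes, you can run a quick TCP check from a worker against the manager once the Swarm is initialized in the next section. This is a simple sketch that assumes the netcat (nc) utility is installed and uses the example manager IP from this guide:

# From a worker node, test TCP reachability of the Swarm management port on the manager
nc -zv 10.0.0.10 2377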
Initialize Docker Swarm on Manager Node
At this point, you are ready to create the Swarm cluster by initializing the first manager node, which becomes the control plane for the entire cluster. These commands must be run only on the single server you have chosen as the primary manager.
First, find the main IP address of the manager, which the worker nodes will use to reach it:
ip addr show
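If you prefer a one-liner, the command below prints all of the host’s addresses; on a VPS with multiple interfaces, pick the address on the network your nodes share:

hostname -I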
We assume that the manager’s IP address is 10.0.0.10. Run the command below to initialize the Swarm:
sudo docker swarm init --advertise-addr 10.0.0.10
This command starts a new Swarm with the IP address that other nodes will use to connect to this manager.
If successful, the output will confirm the swarm initialization and provide a join command:
Swarm initialized: current node (xxxxx) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-... 10.0.0.10:2377
Important: Save this docker swarm join command; you will need it to add worker nodes.
You can also view the tokens later with the following commands:
sudo docker swarm join-token worker
sudo docker swarm join-token manager
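If you suspect a join token has leaked, you can rotate it at any time. Existing nodes stay in the Swarm, but new nodes must use the newly generated token:

# Generate a new worker join token (the old token stops working)
sudo docker swarm join-token --rotate worker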
Add Worker Nodes To Docker Swarm
Now that the manager node is active and listening for connections, you can join the worker nodes. Each worker will authenticate with the manager using the unique token generated during initialization and then join the secure overlay network.
On each worker node, run the join command that Docker printed:
sudo docker swarm join \
--token SWMTKN-1-abc123def456... \
10.0.0.10:2377
This connects the worker to the manager at 10.0.0.10 on port 2377. If successful, you will see a message confirming the node joined as a worker.
To verify all nodes have successfully joined, return to the manager node and run the command below:
sudo docker node ls
You should see output similar to this:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
a1b2c3d4e5f6g7h8i9j0 swarm-manager-1 Ready Active Leader
k1l2m3n4o5p6q7r8s9t0 swarm-worker-1 Ready Active
u1v2w3x4y5z6a7b8c9d0 swarm-worker-2 Ready Active
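For production, you typically want three manager nodes so the cluster keeps quorum if one manager fails. Whether this fits depends on your server count; as an optional example, you can promote an existing worker to a manager (and demote it later) from the current manager node:

# Promote a worker to a manager (run on an existing manager)
sudo docker node promote swarm-worker-1
# Demote it back to a worker if needed
sudo docker node demote swarm-worker-1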
Container Communication: Create an Overlay Network for Services
In this step, you must create the virtual network that will enable secure communication between containers. In Docker Swarm, this is achieved through an overlay network. This network spans all nodes in the cluster, providing a private subnet where your services can discover and talk to each other, isolated from the underlying host network.
On the manager node, run the following command to create the network:
sudo docker network create \
--driver overlay \
--attachable \
app-overlay
Once you are done, list all Docker networks to confirm that it was created successfully:
sudo docker network ls
In your output, you should see app-overlay in the list with its driver listed as overlay.
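You can also inspect the network to see its subnet and, later, the services attached to it. If your nodes communicate over an untrusted network, overlay data-plane encryption is an option; the second network name below is only an illustrative example:

# Inspect the overlay network
sudo docker network inspect app-overlay
# Optional: create an encrypted overlay for untrusted networks (example name)
sudo docker network create --driver overlay --opt encrypted --attachable app-overlay-secure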
Deploy a Sample Service with Swarm: A Multi-Container Nginx Demo
At this point, your Swarm cluster is ready to run containerized applications. Here we will demonstrate this by deploying a highly available Nginx service.
Run these commands on the manager node.
You can deploy Nginx with the command below:
sudo docker service create \
--name nginx-web \
--replicas 3 \
--publish 80:80 \
--network app-overlay \
nginx:alpine
Explanation of Flags:
- --name nginx-web: The name of the service within the Swarm.
- --replicas 3: Instructs Swarm to run 3 container instances (tasks) and distribute them across available nodes.
- --publish 80:80: Publishes port 80 on every node in the cluster and routes it to port 80 inside the containers. This is the Swarm routing mesh in action.
- --network app-overlay: Attaches the service containers to the overlay network you created.
- nginx:alpine: The lightweight container image to use.
Check the status of all services with the command below:
sudo docker service ls
View detailed status and see which nodes the replicas were placed on:
sudo docker service ps nginx-web
In the output, you should see each replica listed as “Running” on different nodes.
To test the application, open your web browser and navigate to the public IP address of any node in your Swarm on port 80, for example:
http://node-ip
You will see the Nginx welcome page. The Swarm routing mesh will automatically load-balance your request to one of the three running replicas, regardless of which node is actually hosting the container.
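You can also test the routing mesh from the command line; requests against any node’s IP should return the same Nginx response regardless of where the replicas run. The IP below is the example manager address used in this guide:

# Fetch only the response headers from any node's IP
curl -I http://10.0.0.10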
Deploy Services with Docker Stack and Compose File with Swarm
Docker Stack enables you to define your entire application stack, including multiple services, networks, and volumes, in a single, version-controlled Docker Compose YAML file. This method is more suitable for real-world applications.
On the manager node, create a directory and the compose file for Swarm with the commands below:
sudo mkdir -p ~/swarm-demo
cd ~/swarm-demo
sudo nano docker-compose.yml
Add the following content to the file:
version: "3.9"
services:
web:
image: nginx:alpine
deploy:
replicas: 3
restart_policy:
condition: on-failure
placement:
constraints:
- node.role == worker
ports:
- "80:80"
networks:
- app-overlay
visualizer:
image: dockersamples/visualizer:latest
ports:
- "8080:8080"
deploy:
placement:
constraints:
- node.role == manager
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
- app-overlay
networks:
app-overlay:
external: true
This example shows a multi-service stack, including a web service and a visual tool to monitor your Swarm.
From the directory containing your Swarm Compose file, run the command below to deploy the stack:
sudo docker stack deploy -c docker-compose.yml swarm-demo
List all stacks in the Swarm with the command below:
sudo docker stack ls
Check the services running inside your stack by using the following command:
sudo docker stack services swarm-demo
Check tasks of the web service with:
sudo docker service ps swarm-demo_web
To access the applications:
- Nginx Web Server: Open http://ANY_NODE_IP:80 in your browser to see the Nginx welcome page, load-balanced across the cluster.
- Swarm Visualizer: Open http://MANAGER_IP:8080 to see a dynamic visual map of your Swarm nodes and the containers running on them.
Manage and Scale the Docker Swarm
A Docker Swarm cluster requires ongoing management to ensure high availability, respond to load, and deploy updates seamlessly. All these management commands are executed on a manager node.
To horizontally scale the web service up to 5 replicas to handle increased load, you can use the command below:
sudo docker service scale swarm-demo_web=5
Verify the new replicas have been scheduled and are running with the command below:
sudo docker service ps swarm-demo_web
To perform a rolling update of the service to a new container image, for example, a new Nginx version, you can use the command below:
sudo docker service update \
--image nginx:1.27-alpine \
swarm-demo_web
Docker Swarm will automatically perform a rolling update, replacing containers one by one without downtime.
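You can also control how the rolling update proceeds. The flags below are standard docker service update options; the specific values are only example choices:

# Update two tasks at a time, wait 10s between batches, and roll back automatically on failure
sudo docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --update-failure-action rollback \
  --image nginx:1.27-alpine \
  swarm-demo_web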
If an update faces an issue, you can quickly revert to the previous version of the service with the command below:
sudo docker service rollback swarm-demo_web
To safely prepare a node for maintenance, you can evacuate its containers. For example, for Swarm worker 1, set the node’s availability to drain:
sudo docker node update --availability drain swarm-worker-1
Check the node status to confirm the change:
sudo docker node ls
Swarm will stop tasks on this node and reschedule them on other available nodes.
After maintenance, return the node to an active state with the following command:
sudo docker node update --availability active swarm-worker-1
Inspect State and Logs for Docker Swarm
Docker Swarm provides a comprehensive set of inspection and logging commands to help you monitor node health, debug service issues, and track application behavior.
You can get detailed information about a specific node’s configuration, status, and resources with the following command:
sudo docker node inspect swarm-manager-1
sudo docker node inspect swarm-worker-1
For a more human-readable and summarized output, you can use the --pretty flag:
sudo docker node inspect --pretty swarm-manager-1
View the complete configuration and current state of a service, including its network settings, task template, and update status with the command below:
sudo docker service inspect swarm-demo_web
View the service logs from all replicas (containers) with the command below:
sudo docker service logs swarm-demo_web
To tail the logs in real-time, use the -f flag:
sudo docker service logs -f swarm-demo_web
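To limit the output when a service has been running for a while, you can combine the standard log filters; the values below are just examples:

# Show only the last 50 lines from the past 10 minutes
sudo docker service logs --tail 50 --since 10m swarm-demo_web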
Clean Up the Docker Swarm Cluster
If the Swarm cluster is no longer needed, you can clean it up to ensure that all services are properly stopped, network resources are released, and nodes are safely removed from the cluster.
Note: These commands will permanently remove your deployed services and the Swarm cluster itself.
On the manager node, remove the entire stack and all its services with the command below:
sudo docker stack rm swarm-demo
Wait for the operation to complete and verify all services are removed:
sudo docker service ls
The output should no longer show any swarm-demo_* services.
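Optionally, since no services use it anymore, you can also remove the overlay network created earlier in this guide:

sudo docker network rm app-overlay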
On each worker node, run the following command to leave the Swarm:
sudo docker swarm leave
On the manager node, use the --force flag to leave and dissolve the entire Swarm cluster:
sudo docker swarm leave --force
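After the nodes have left the Swarm, you can optionally prune unused Docker resources on each server. Note that this removes stopped containers, unused networks, and dangling images, so only run it if you no longer need them:

sudo docker system prune -f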
FAQs
What is the difference between Docker Swarm and Kubernetes?
Docker Swarm is a native Docker clustering solution that is simpler and lighter, while Kubernetes is a more advanced and feature-rich orchestration platform with a steeper learning curve.
Can I deploy Compose files directly to a Swarm cluster?
Yes. Docker Swarm supports Compose version 3 and higher using Docker Stack.
Does Swarm automatically handle load balancing?
Yes. Swarm provides an internal load balancer known as the Routing Mesh that distributes requests among service replicas automatically.
Conclusion
Docker Swarm VPS Cluster Setup transforms a collection of servers into a powerful and unified container orchestration platform. You have learned how to install Docker, initialize a Swarm, connect worker nodes, create overlay networks, and deploy services seamlessly across multiple nodes.
We hope you found this guide helpful. Subscribe to our X and Facebook channels to get the latest articles on Docker Swarm.
For further reading:
AI Serving with Docker on Dedicated Servers