Using Servers for High-Performance 3D Rendering Farms
In this guide, you will learn to build a scalable 3D render farm on dedicated servers, aimed at artists, studios, and technical directors who need serious rendering power. We cover two paths:
- Production Grade: A render farm built around OpenCue, the open-source render manager originally developed at Sony Pictures Imageworks, to coordinate many GPU or CPU servers working together for fast, large-scale rendering.
- Quick Start (Minimal Setup): A simple setup that uses SSH and GNU Parallel to split rendering tasks across servers right away; great for small teams or quick tests, with almost nothing extra to install.
All examples in this guide use Ubuntu 22.04 or 24.04 LTS on every server from PerLod Hosting, with NVIDIA GPUs on the worker machines for GPU rendering. Blender’s Cycles engine is used as the main example, but you can easily switch to other render engines like Arnold, V-Ray, or Redshift.
By the end of this guide, you will have a fully working render farm that can grow with your needs, speed up rendering times, and give you complete control over your servers.
Architecture and Prerequisites to Build a Scalable 3D Render Farm on Dedicated Servers
Before we begin building a 3D rendering farm, it is essential to understand the architecture. Here are the roles for servers:
- Controller: This server runs the render manager (OpenCue), along with the database and shared storage (NFS).
- Workers: Each worker is a separate server that runs the render daemon and headless renderers such as Blender or other engines.
- Submitter: This is your workstation or CI system that sends rendering jobs to the farm.
Make sure all nodes have proper hostnames or DNS records; static IP addresses are preferred for stability. Use at least a Gigabit network; 10 GbE is recommended for large scenes or heavy asset transfers.
For accounts and paths, you must:
- Create a service user named render with the same UID/GID on all nodes.
- Use a shared root directory /srv/render on the controller, mounted to workers at /mnt/render.
- Store project files under:
/srv/render/projects/<show>/<shot>
This setup ensures consistent paths, smooth file sharing, and reliable job management across all nodes.
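As a concrete illustration of the path convention, the snippet below builds a shot directory path from show and shot names (the names `demo` and `sh010` are hypothetical; `RENDER_ROOT` is `/srv/render` on the controller or `/mnt/render` on a worker):

```shell
# Hypothetical show/shot names; the layout matches the convention above.
RENDER_ROOT="${RENDER_ROOT:-/srv/render}"
SHOW=demo
SHOT=sh010
PROJECT_DIR="$RENDER_ROOT/projects/$SHOW/$SHOT"
echo "$PROJECT_DIR"
```

Because every node resolves the same relative layout under its mount point, job commands can use identical paths on every machine.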
Install Required Packages on All Nodes For 3D Rendering Farm
On all servers, you must install the required packages, create the render user account, and configure the firewall.
Enable automatic time synchronization by using the command below:
sudo timedatectl set-ntp true
Run the system update and install the required packages with the following commands:
sudo apt update && sudo apt upgrade -y
sudo apt install build-essential curl wget git unzip jq python3-pip htop ufw nfs-common -y
Create the render user account with fixed IDs for running services consistently across multiple computers:
sudo groupadd -g 1500 render || true
sudo useradd -m -u 1500 -g 1500 -s /bin/bash render || true
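Mismatched UIDs or GIDs across nodes break file ownership on the NFS share, so it is worth verifying them. A small helper function (ours, for illustration) that you can paste on each node:

```shell
# Verify that a user has the expected UID/GID; run this on every node.
check_ids() {
  local user=$1 want_uid=$2 want_gid=$3
  [ "$(id -u "$user" 2>/dev/null)" = "$want_uid" ] && \
  [ "$(id -g "$user" 2>/dev/null)" = "$want_gid" ]
}
check_ids render 1500 1500 && echo "render IDs OK" || echo "render IDs differ"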
Configure the firewall by blocking all incoming connections and allowing all outgoing connections by default:
sudo ufw default deny incoming
sudo ufw default allow outgoing
Allow SSH connections on all nodes, and allow NFS (port 2049) so workers can reach the share on the controller:
sudo ufw allow OpenSSH
sudo ufw allow 2049
Once you are done, enable the firewall with the command below:
sudo ufw enable
Installing NVIDIA GPU Drivers on Workers
To enable GPU acceleration, the correct NVIDIA drivers must be installed on all worker nodes. You can use Ubuntu’s built-in tools to automatically find and install the best driver for your hardware. To do this, run the commands below:
sudo apt install ubuntu-drivers-common -y
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
Reboot your workers and verify the NVIDIA drivers with the commands below:
sudo reboot
nvidia-smi
Note: If you are using Ubuntu 24.04, it’s best to follow Ubuntu’s official documentation for installing and managing packages, instead of relying on random third-party PPAs.
Install Docker and NVIDIA Container Toolkit on Workers
We recommend installing Docker together with the NVIDIA Container Toolkit, which lets you run GPU-accelerated applications inside clean, reproducible containers. This is a convenient way to deploy renderers and other GPU workloads.
On all worker nodes, use the commands below to add the Docker repository and install it:
sudo apt install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release; echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Add the current user and the render user to the docker group with the following commands (log out and back in for the group change to take effect):
sudo usermod -aG docker $USER
sudo usermod -aG docker render
Install NVIDIA container toolkit on all worker nodes with the following commands:
sudo apt update && sudo apt install curl gnupg2 -y
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit.gpg
curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit.gpg] https://#' \
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install nvidia-container-toolkit -y
Configure Docker to use NVIDIA’s special runtime, which allows containers to access the GPU:
sudo nvidia-ctk runtime configure --runtime=docker
Restart Docker to apply the changes and run a test container to verify the GPU is correctly accessible from inside Docker:
sudo systemctl restart docker
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
Set Up Shared Storage with NFS on Controller and Worker Nodes
You must configure an NFS share so the controller and worker nodes can access the same storage. The controller acts as the NFS server, exporting a directory that the workers mount. This is essential for a render farm: scene files, assets, and rendered output frames all live on the share.
On the controller node, set up the NFS server with the commands below (replace 10.0.0.0/8 in the export with your actual private subnet):
sudo apt install nfs-kernel-server -y
sudo mkdir -p /srv/render
sudo chown -R render:render /srv/render
echo "/srv/render 10.0.0.0/8(rw,sync,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -rav
sudo systemctl enable --now nfs-server
On your worker nodes, set up the NFS client with the following commands:
sudo mkdir -p /mnt/render
echo "controller.example.com:/srv/render /mnt/render nfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a
Verify that the NFS mount works correctly for the render user with the following command:
sudo -u render bash -lc 'mkdir -p /mnt/render/tests && echo ok > /mnt/render/tests/ping.txt'
Install Blender for Headless Rendering on Worker Nodes
In this step, you install Blender on the worker nodes for command-line, headless rendering. You can use a native install or a Docker-based setup; the latter is recommended for GPU rendering.
Method 1: Blender Native Install
Download and install a stable Blender tarball package by using the commands below:
BLVER=4.1.0
cd /opt && sudo mkdir -p blender && cd blender
sudo wget https://mirror.clarkson.edu/blender/release/Blender${BLVER%.*}/blender-$BLVER-linux-x64.tar.xz
sudo tar xf blender-*.tar.xz
sudo ln -s /opt/blender/blender-*/blender /usr/local/bin/blender
Test Blender CLI help with the command below:
blender -h | head
Method 2: Blender Docker-based Install
This method is recommended for GPU nodes. Use the following command to run Blender in a container with the assets mounted:
docker run --rm -it --gpus all \
-v /mnt/render:/mnt/render \
ghcr.io/nytimes/blender:latest blender -v
Render with Cycles on a GPU Using Blender’s Command Line
To use the full power of your GPU for fast, headless rendering with Cycles, specify the device with Blender's command-line flags. Cycles-specific options, such as the device type (OPTIX, CUDA, or HIP), are passed after the double-dash separator (--), which marks the end of Blender's own arguments.
For example, render frame 1 of a .blend file on the GPU (OPTIX) to PNG with the command below. Note that Blender processes its arguments in order, so the output options (-o, -F, -x) must come before -f, which triggers the render:
blender -b /mnt/render/projects/demo/scene.blend -E CYCLES \
-o /mnt/render/projects/demo/out/frame_#### \
-F PNG -x 1 -f 1 \
-- --cycles-device OPTIX --cycles-print-stats
This command leverages the dedicated NVIDIA GPUs in our worker servers for fast and hardware-accelerated rendering.
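The `####` in the output path is Blender's frame-number placeholder: each `#` is one zero-padded digit, so frame 7 becomes `frame_0007.png`. The equivalent padding in shell, useful when scripting around the rendered files, looks like this:

```shell
# Blender expands '####' to the frame number, zero-padded to four digits.
for f in 1 12 240; do
  printf 'frame_%04d.png\n' "$f"
done
# prints:
# frame_0001.png
# frame_0012.png
# frame_0240.png
```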
Building a Blender Render Farm with OpenCue
To manage and scale headless Blender rendering, you need a render manager; OpenCue is a proven open-source choice.
Set up OpenCue on the Controller Node
OpenCue is made up of several key parts, including:
- Cuebot: The main scheduler and control system.
- PostgreSQL: The database that stores job, task, and system information.
- RQD: The worker daemon that runs on each render node and executes rendering tasks.
- CueGUI and CueWeb: The user interfaces for submitting and managing jobs.
In production setups, OpenCue is usually deployed with Docker Compose or Kubernetes. Here is a simplified Docker Compose example for the controller that shows the general structure. Be sure to update versions and configuration according to the official OpenCue documentation:
Create the Docker compose YAML file for the OpenCue controller:
sudo nano docker-compose.yml
Add the following configuration to the file:
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: cuebot
      POSTGRES_USER: cue
      POSTGRES_PASSWORD: cuepass
    volumes:
      - cue-pg:/var/lib/postgresql/data
    networks: [cue]
  cuebot:
    image: opencue/cuebot:latest
    environment:
      CUEBOT_DB_URL: jdbc:postgresql://postgres:5432/cuebot
      CUEBOT_DB_USER: cue
      CUEBOT_DB_PASSWORD: cuepass
    depends_on: [postgres]
    ports: ["8443:8443", "8080:8080"]  # adjust per docs
    networks: [cue]
  cueweb:
    image: opencue/cueweb:latest
    environment:
      CUEBOT_HOSTNAME: cuebot
    depends_on: [cuebot]
    ports: ["8081:8080"]
    networks: [cue]
networks:
  cue: {}
volumes:
  cue-pg: {}
Bring up the Docker compose file with the command below:
docker compose up -d
Set up OpenCue RQD Agent on Worker Nodes
Install the OpenCue RQD agent on every worker node and connect it to your Cuebot server. For example, here is a containerized worker using host GPU and NFS:
docker run -d --name rqd --restart unless-stopped --gpus all \
-v /mnt/render:/mnt/render \
-e CUEBOT_HOSTNAME=controller.example.com \
opencue/rqd:latest
For service mapping, set up an OpenCue service for Blender whose launch command runs Blender in headless mode. Then submit jobs using CueGUI or pycue.
Configure and Submit a Blender Render Job in OpenCue
To submit a Blender render in OpenCue, you create a job that includes a command describing how to render a single frame. Then, OpenCue uses this command as a template to generate one task per frame and distribute them across all available worker nodes.
The important part is the {FRAME} variable: OpenCue replaces this placeholder with the actual frame number for each task at render time.
Here is an example command for a frame task:
/blender/blender -b /mnt/render/projects/demo/scene.blend -E CYCLES \
-o /mnt/render/projects/demo/out/frame_#### \
-F PNG -x 1 -f {FRAME} \
-- --cycles-device OPTIX --cycles-print-stats
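To make the substitution concrete, here is a sketch of what one task command looks like once {FRAME} is expanded. The function name is ours, not part of OpenCue, and the paths are shortened for readability:

```shell
# Hypothetical helper mimicking OpenCue's {FRAME} expansion: the scheduler
# substitutes the placeholder before each task runs on a worker.
render_cmd() {
  printf 'blender -b scene.blend -E CYCLES -o out/frame_#### -F PNG -f %d\n' "$1"
}
render_cmd 42
```

Each frame in the job's range becomes one such command, so a 240-frame job yields 240 independent tasks that OpenCue can schedule on any free worker.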
Quick Start: Use SSH and GNU Parallel To Build a Basic Rendering Farm
For situations where you need a distributed render farm immediately, without a full render manager, you can use SSH and GNU Parallel. This method sends individual Blender render commands directly to multiple worker nodes, using their combined GPU power to speed up rendering. Set up passwordless SSH from the controller to each worker, and install GNU Parallel on the controller:
sudo apt install parallel -y
On the controller node, create a host list with the command below:
cat > ~/workers.txt <<'EOF'
render@worker01
render@worker02
render@worker03
EOF
Then distribute frames with Parallel. The --jobs 2 option runs two renders per worker at a time; adjust it to match each node's GPU capacity:
seq 1 240 | parallel --jobs 2 --sshloginfile ~/workers.txt \
'blender -b /mnt/render/projects/demo/scene.blend -E CYCLES \
-o /mnt/render/projects/demo/out/frame_#### -F PNG -x 1 -f {} \
-- --cycles-device OPTIX --cycles-print-stats'
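When per-frame SSH overhead matters, you can instead hand each worker one contiguous block of frames and render it with Blender's -s/-e/-a range flags. A small helper (ours, for illustration) that splits a range into roughly equal chunks:

```shell
# Split frames [start..end] into N roughly equal contiguous chunks, one
# per worker; feed each pair to 'blender ... -s START -e END -a'.
chunk_frames() {
  local start=$1 end=$2 workers=$3
  local total=$(( end - start + 1 ))
  local size=$(( (total + workers - 1) / workers ))
  local s=$start e
  while [ "$s" -le "$end" ]; do
    e=$(( s + size - 1 ))
    if [ "$e" -gt "$end" ]; then e=$end; fi
    echo "$s $e"
    s=$(( e + 1 ))
  done
}
chunk_frames 1 240 3
# prints:
# 1 80
# 81 160
# 161 240
```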
FAQs
What is a rendering farm?
A render farm is a cluster of computers that work together to render 3D scenes faster than a single machine.
Can I build a render farm with only CPUs, or do I need GPUs?
You can use CPU-only nodes, but GPU nodes accelerate rendering in software like Blender Cycles with CUDA, OptiX, or other GPU-accelerated renderers.
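On mixed farms, a per-node wrapper can pick the Cycles device at run time. A minimal sketch, assuming a working nvidia-smi indicates a usable NVIDIA GPU (fall back to CPU otherwise):

```shell
# Choose a Cycles device flag based on whether an NVIDIA GPU is visible
# on this node; pass the result after '--' as --cycles-device "$DEVICE".
pick_device() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo OPTIX
  else
    echo CPU
  fi
}
pick_device
```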
How do I scale my rendering farm?
Add more worker nodes, ensuring each node has consistent software, drivers, and network access to the shared storage. OpenCue automatically detects and distributes jobs to new nodes.
Conclusion
Building a scalable 3D render farm on dedicated servers, with proper GPU drivers, shared storage, and a render manager, lets you handle complex scenes and animations much faster. Choose the lightweight SSH and GNU Parallel setup for smaller projects, or a production-grade OpenCue cluster for large-scale work.
We hope you enjoyed this guide. Subscribe to our X and Facebook channels to get the latest articles on 3D render farm servers.
For further reading:
Explore Dedicated Servers Infrastructure for Video Streaming