How to Set Up JupyterHub on a GPU Server
JupyterHub lets multiple users run Jupyter notebooks on one server at the same time. Instead of each team member buying their own GPU hardware, everyone can share one GPU server and run notebooks from any web browser. This guide shows you how to set up a JupyterHub GPU server using Docker.
You will learn how to install the NVIDIA drivers, set up Docker with GPU support, and configure JupyterHub so each user gets their own isolated notebook environment with full GPU access.
Hosting option: PerLod Hosting is ideal for JupyterHub setup because it offers bare-metal performance, full root access, and 24/7 support.
Requirements for JupyterHub GPU Server Setup
Before you begin the setup, make sure you have the following:
- Operating System: Ubuntu 22.04 or Ubuntu 24.04 LTS.
- GPU: NVIDIA GPU (RTX 4090, A5000, A6000, or similar).
- Docker: Docker 19.03 or newer.
- NVIDIA Drivers: Installed and working. Run the nvidia-smi command to verify them.
- Root Access: SSH access with sudo or root permissions.
If you need a GPU server, PerLod Hosting offers dedicated GPU servers with RTX 4090, A5000, and A6000 GPUs. These come pre-installed with Ubuntu and offer full root access, making them ideal for JupyterHub setups.
Connect to your server via SSH and check if the NVIDIA GPU is detected with the commands below:
ssh root@YOUR_SERVER_IP
nvidia-smi
In your output, you should see a table showing your GPU name, driver version, and CUDA version. If this command fails, you need to install the NVIDIA drivers first.
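If the driver is missing, Ubuntu's ubuntu-drivers tool can install the recommended one for your card. This is a minimal sketch; the exact driver package depends on your GPU, and a reboot is required before nvidia-smi will work:

```shell
ubuntu-drivers devices            # list detected GPUs and the recommended driver
sudo ubuntu-drivers autoinstall   # install the recommended NVIDIA driver
# Reboot afterward so the new driver loads:
# sudo reboot
```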
Run a system update and upgrade to make sure you have the latest security patches and software versions:
sudo apt update -y
sudo apt upgrade -y
Step 1. Install Docker for JupyterHub GPU Setup
As mentioned, we want to set up JupyterHub with Docker, which lets you run JupyterHub and user notebooks in isolated containers.
If you have old versions of Docker installed on your server, remove them with the command below:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do
sudo apt-get remove $pkg
done
Then, add Docker’s official repository with the following commands:
sudo apt install ca-certificates curl -y
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker and the plugins with the commands below:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Verify Docker is installed by checking its version:
docker --version
To run Docker commands without sudo, add your user to the Docker group:
sudo usermod -aG docker $USER
Log out and log in again to apply the changes.
Step 2. Install NVIDIA Container Toolkit for JupyterHub GPU Setup
To let Docker containers use your GPU, you must install the NVIDIA Container Toolkit. Without it, your notebooks cannot access GPU power for training models.
Add the NVIDIA repository with the commands below:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
Install the toolkit by using the commands below:
sudo apt update
sudo apt install nvidia-container-toolkit -y
Configure Docker to use the NVIDIA runtime with the command below:
sudo nvidia-ctk runtime configure --runtime=docker
This command changes /etc/docker/daemon.json, so Docker knows how to talk to your GPU.
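After this step, /etc/docker/daemon.json should contain an entry similar to the following (your file may include additional keys):

```json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```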
Restart Docker to apply the changes:
sudo systemctl restart docker
Test GPU access in Docker with the following command:
docker run --rm --gpus all nvidia/cuda:12.5.0-base-ubuntu22.04 nvidia-smi
You should see the same GPU table as before. This means Docker can now use your GPU.
Step 3. Set up the JupyterHub Project Folder
Create a project directory for your JupyterHub setup and move into it with the commands below:
mkdir -p ~/jupyterhub-gpu
cd ~/jupyterhub-gpu
In your project directory, create a Docker network for JupyterHub with the command below:
docker network create jupyterhub-network
Also, create a volume for storing JupyterHub data:
docker volume create jupyterhub-data
These prepare the foundation for your multi-user notebook server.
Step 4. Create JupyterHub Configuration File
In this step, you must create a configuration file which tells JupyterHub how to behave.
Create the JupyterHub config file with your desired text editor:
sudo nano jupyterhub_config.py
Add the following content to the file:
from dockerspawner import DockerSpawner
import os
# Use Docker to spawn user notebooks
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
# GPU-enabled PyTorch image from NVIDIA
c.DockerSpawner.image = 'nvcr.io/nvidia/pytorch:24.04-py3'
# Connect to the JupyterHub network
c.DockerSpawner.network_name = 'jupyterhub-network'
# Enable GPU access for all containers
c.DockerSpawner.extra_host_config = {
'runtime': 'nvidia',
'device_requests': [
{
'Driver': 'nvidia',
'Count': -1, # Use all GPUs
'Capabilities': [['gpu']]
}
]
}
# Environment variables for GPU access
c.DockerSpawner.environment = {
'NVIDIA_VISIBLE_DEVICES': 'all',
'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility'
}
# Remove containers when a user's server stops
c.DockerSpawner.remove = True
# User storage - each user gets their own folder
c.DockerSpawner.volumes = {
'jupyterhub-user-{username}': '/home/jovyan/work'
}
# Hub settings
c.JupyterHub.hub_ip = '0.0.0.0'
c.JupyterHub.hub_connect_ip = 'jupyterhub'
# Allow anyone to sign up (change this for production)
c.JupyterHub.authenticator_class = 'nativeauthenticator.NativeAuthenticator'
c.NativeAuthenticator.open_signup = True
This configuration file does several important things, including:
- Uses DockerSpawner to create a container for each user.
- Uses the NVIDIA PyTorch image, which already has CUDA and PyTorch installed.
- Enables GPU access through the NVIDIA runtime.
- Gives each user their own storage volume.
- Allows users to sign up with a username and password.
Tip: If you want to run inference servers alongside JupyterHub, you can check this guide on AI Serving with Docker on a Dedicated Server to learn how to deploy vLLM or Triton on the same GPU host for a complete AI workload platform.
Step 5. Set up JupyterHub Docker Compose File
With Docker Compose, you can easily start JupyterHub with all the right settings.
Create the Docker compose YAML file with your desired text editor:
sudo nano docker-compose.yml
Add the following content to the file:
version: '3.8'
services:
jupyterhub:
build: .
container_name: jupyterhub
ports:
- "8000:8000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py
- jupyterhub-data:/srv/jupyterhub/data
environment:
- DOCKER_NETWORK_NAME=jupyterhub-network
networks:
- jupyterhub-network
restart: unless-stopped
volumes:
jupyterhub-data:
external: true
networks:
jupyterhub-network:
external: true
This will tell Docker how to run JupyterHub and connect it to the network and storage you created earlier.
Step 6. Create a Dockerfile for JupyterHub
At this point, create the Dockerfile, which builds the JupyterHub image with the packages it needs:
sudo nano Dockerfile
Add the following content to the Dockerfile:
FROM jupyterhub/jupyterhub:4.1
# Install Docker spawner and native authenticator
RUN pip install dockerspawner jupyterhub-nativeauthenticator
# Copy configuration
COPY jupyterhub_config.py /srv/jupyterhub/jupyterhub_config.py
WORKDIR /srv/jupyterhub
CMD ["jupyterhub", "-f", "/srv/jupyterhub/jupyterhub_config.py"]
Step 7. Start and Access JupyterHub
Now that you have configured everything, you can build the Docker image and start JupyterHub with the commands below:
docker compose build
docker compose up -d
To check if it is running, use the command below:
docker compose ps
You should see the JupyterHub container in a “running” state.
To access JupyterHub, open your web browser and go to:
http://YOUR_SERVER_IP:8000
You will see the JupyterHub login page. Since you enabled open signup, you can create a new account by clicking the signup link.
After logging in, JupyterHub will start a new Docker container for you with GPU access. This might take a minute the first time because Docker needs to download the NVIDIA PyTorch image.
Step 8. Test GPU Access through JupyterLab
Once you are inside JupyterLab, open a new notebook and run this code to check GPU access:
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPU Name: {torch.cuda.get_device_name(0)}")
print(f"GPU Memory: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.1f} GB")
You should see your GPU name and memory. If CUDA shows as “True”, everything is working correctly.
You can also run nvidia-smi in a terminal cell:
!nvidia-smi
This shows the full GPU status, including temperature, memory usage, and running processes.
How to Enable HTTPS for JupyterHub Security?
You should not run JupyterHub without HTTPS on a public network. HTTPS protects passwords and data as they travel between browsers and your server.
Option 1: Let’s Encrypt (Free SSL)
If you have a domain name pointing to your server, you can use Let’s Encrypt for free SSL certificates.
Add these lines with your domain name to your jupyterhub_config.py file. Because JupyterHub runs in a container, also mount the certificate directory into it, for example by adding /etc/letsencrypt:/etc/letsencrypt:ro under volumes in docker-compose.yml:
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/yourdomain.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/yourdomain.com/fullchain.pem'
Install Certbot and request a certificate (standalone mode needs port 80 to be free) by using the commands below:
sudo apt install certbot -y
sudo certbot certonly --standalone -d yourdomain.com
Option 2: Self-Signed Certificate
For internal use or testing, you can create a self-signed certificate with the command below:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout jupyterhub.key -out jupyterhub.crt \
-subj "/CN=localhost"
Add these to your configuration, making sure the key and certificate files are mounted into the JupyterHub container at the paths you reference:
c.JupyterHub.ssl_key = '/path/to/jupyterhub.key'
c.JupyterHub.ssl_cert = '/path/to/jupyterhub.crt'
For a complete security checklist for your GPU server, including container hardening and NVIDIA MIG isolation, you can check this guide on Best Practices for GPU Hosting Environments Security.
GPU Time-Slicing for More Users (Optional)
If you have many users but limited GPUs, you can use GPU time-slicing, which lets multiple users share one GPU by taking turns.
With time-slicing, if you have one GPU and set four time slices, four users can work on that GPU at the same time. Each user gets a fraction of the GPU’s power.
This is useful when users run light tasks like inference or small training jobs. For heavy training, each user should get a dedicated GPU.
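If you instead want each user pinned to a dedicated GPU, you can set NVIDIA_VISIBLE_DEVICES per user from a pre_spawn_hook (a real JupyterHub spawner option). The sketch below is an illustration, not part of this guide's config: NUM_GPUS and the hash-based mapping are assumptions you should adapt to your server.

```python
# Sketch for jupyterhub_config.py: pin each user to a single GPU.
# NUM_GPUS and the hash-based mapping are assumptions for illustration.
import zlib

NUM_GPUS = 2  # set to the number of GPUs on your server

def assign_gpu(username: str, num_gpus: int = NUM_GPUS) -> int:
    """Deterministically map a username to a GPU index."""
    return zlib.crc32(username.encode()) % num_gpus

def pre_spawn_hook(spawner):
    # Limit the user's container to one GPU instead of all of them
    gpu = assign_gpu(spawner.user.name)
    spawner.environment["NVIDIA_VISIBLE_DEVICES"] = str(gpu)

# In jupyterhub_config.py, register the hook:
# c.Spawner.pre_spawn_hook = pre_spawn_hook
```

Note that a hash can map two users to the same GPU; for strict one-user-per-GPU exclusivity you would need to track live assignments instead.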
Troubleshooting Common Issues in JupyterHub GPU Setup
Here are the most common errors in JupyterHub GPU server setup and their solutions:
GPU Not Visible in Container: If nvidia-smi works on the host but not in the container, check these things:
- Make sure the NVIDIA Container Toolkit is installed.
- Check that Docker is configured with the NVIDIA runtime.
- Verify the extra_host_config has the correct runtime settings.
Container Fails to Start: If user containers fail to start:
- Check if the Docker image exists with the docker images command.
- Look at logs with the docker compose logs jupyterhub command.
- Make sure the network exists with the docker network ls command.
Connection Refused Error: If you cannot connect to port 8000:
- Check if the container is running with docker compose ps.
- Check firewall settings with sudo ufw status.
- Allow port 8000 with sudo ufw allow 8000.
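The checks above can be combined into a quick health-check script. This is a sketch that assumes the container, network, and image names used in this guide; it only reports status and changes nothing:

```shell
#!/bin/sh
# Quick health check for the JupyterHub GPU stack described in this guide
docker info >/dev/null 2>&1 \
  && echo "docker daemon: OK" || echo "docker daemon: NOT RUNNING"
docker network ls --format '{{.Name}}' 2>/dev/null | grep -qx jupyterhub-network \
  && echo "network: OK" || echo "network: MISSING"
docker ps --format '{{.Names}}' 2>/dev/null | grep -qx jupyterhub \
  && echo "hub container: RUNNING" || echo "hub container: NOT RUNNING"
docker run --rm --gpus all nvidia/cuda:12.5.0-base-ubuntu22.04 nvidia-smi >/dev/null 2>&1 \
  && echo "container GPU access: OK" || echo "container GPU access: FAILED"
```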
Why Use PerLod Hosting for JupyterHub GPU Setup?
PerLod Hosting provides dedicated GPU servers that work great for JupyterHub deployments. Here is why PerLod is a good choice:
- Full Root Access
- Bare-Metal Performance
- Ubuntu Pre-installed
- NVMe Storage
- 24/7 Support
- Predictable Pricing
PerLod offers GPU servers with RTX 4090 (24GB), A5000 (24GB), and A6000 (48GB) GPUs, which are perfect for machine learning teams who need reliable GPU access.
FAQs
What is JupyterHub?
JupyterHub is a multi-user server that manages Jupyter notebook servers for many users. Each user gets their own notebook environment while sharing the same GPU hardware.
Why do I need Docker for JupyterHub?
Docker isolates each user’s notebook in a separate container, which prevents users from interfering with each other.
What GPU should I use for JupyterHub?
For small teams doing inference or fine-tuning, an RTX 4090 (24GB) works well. For larger models or more users, consider an A6000 (48GB) or multiple GPUs.
Conclusion
JupyterHub GPU Server Setup gives your team a powerful platform for machine learning and data science work. With Docker and the NVIDIA Container Toolkit, each user gets their own isolated environment with full GPU access.
For teams that need reliable GPU servers, PerLod Hosting offers dedicated GPU servers with full root access, NVMe storage, and 24/7 support. This lets you focus on building AI applications instead of managing infrastructure.
We hope you found this guide helpful. Subscribe to our X and Facebook channels to get the latest updates and articles on GPU and AI hosting.