Debian/Ubuntu Deployment Guide¶
Tip
This guide has been tested on Ubuntu 24.04 LTS, but it also applies to Ubuntu 20.04/22.04+ and Debian 11/12+.
The instructions target x86-64 systems. For other architectures (such as ARM64), adjust download links and commands accordingly.
Run every command in this tutorial as the root user.
1. Environment preparation¶
1.1 Update package lists¶
Update existing packages:
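Refresh the package index and upgrade installed packages with apt:

```shell
apt update
apt upgrade -y
```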
1.2 Enable time synchronization¶
```shell
apt install -y chrony
systemctl restart systemd-timedated
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp true
```
1.3 Configure the firewall¶
Tip
Run this step on every node in the cluster; otherwise the nodes cannot communicate with each other.
Refer to /etc/crane/config.yaml for port configuration details.
Ubuntu ships with UFW by default (on Debian it may need to be installed first). Disable it with:
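With UFW this is a single command:

```shell
ufw disable
```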
If the firewall must remain active, allow these ports:
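The exact port numbers are defined in `/etc/crane/config.yaml`; the values below are hypothetical examples, so substitute the ports from your own configuration:

```shell
# Example only: replace 10011/10010 with the ports from /etc/crane/config.yaml
ufw allow 10011/tcp   # e.g. the cranectld listen port
ufw allow 10010/tcp   # e.g. the craned listen port
ufw reload
```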
1.4 Disable SELinux (optional)¶
Debian and Ubuntu use AppArmor rather than SELinux by default, so this step only applies if SELinux is installed and enforcing on your system.

```shell
# Temporarily disable (resets after reboot)
setenforce 0
# Permanently disable
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
```
1.5 Choose the cgroup version (optional)¶
Ubuntu 20.04 uses cgroup v1 by default, while Ubuntu 22.04 and 24.04 default to cgroup v2.
CraneSched supports both cgroup v1 and cgroup v2. However, using GRES on a cgroup v2 system requires additional configuration; see the eBPF guide for the required steps.
1.5.1 Configure cgroup v1¶
If you cannot build the eBPF components and still need GRES, you can switch back to cgroup v1:
```shell
# Set kernel boot arguments to switch to cgroup v1
# (grubby is not available on Debian/Ubuntu; edit the GRUB defaults instead)
sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller /' /etc/default/grub
update-grub
# Reboot to apply the change
reboot
# Verify the version
mount | grep cgroup
```
1.5.2 Configure cgroup v2¶
```shell
# Verify that child cgroups expose resource controllers (expect cpu, io, memory, etc.)
cat /sys/fs/cgroup/cgroup.subtree_control
# Enable controllers for child cgroups
echo '+cpuset +cpu +io +memory +pids' > /sys/fs/cgroup/cgroup.subtree_control
```
As noted earlier, see the eBPF guide if you plan to use GRES on cgroup v2.
2. Install the toolchain¶
Your toolchain must meet these minimum versions:
- CMake ≥ 3.24
- clang++ ≥ 19
- g++ ≥ 14
2.1 GCC/G++¶
Tip
If your distribution already provides an up-to-date GCC (for example Ubuntu 24.04+ or the Ubuntu Toolchain PPA), install it directly.
Build and install GCC 14:

```shell
apt install -y build-essential
wget https://ftp.gnu.org/gnu/gcc/gcc-14.3.0/gcc-14.3.0.tar.gz
tar -xf gcc-14.3.0.tar.gz
cd gcc-14.3.0
./contrib/download_prerequisites
mkdir build && cd build
../configure --prefix=/opt/gcc-14 --enable-checking=release --enable-languages=c,c++ --disable-multilib
make -j$(nproc)
make install
```

Switch the default GCC with `update-alternatives`:
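A minimal sketch, assuming GCC 14 was installed under `/opt/gcc-14` as above (the priority value 100 is arbitrary):

```shell
# Register the new gcc as an alternative, with g++ following as a slave link
update-alternatives --install /usr/bin/gcc gcc /opt/gcc-14/bin/gcc 100 \
    --slave /usr/bin/g++ g++ /opt/gcc-14/bin/g++
# Confirm the active compiler
gcc --version
```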
2.2 CMake¶
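Ubuntu 24.04 packages a CMake recent enough to meet the 3.24 requirement; on older releases, one option is a binary release from cmake.org (the version and x86-64 filename below are examples):

```shell
# On Ubuntu 24.04+, the distribution package is sufficient:
apt install -y cmake
# On older releases, install a binary release from cmake.org instead, e.g.:
# wget https://github.com/Kitware/CMake/releases/download/v3.28.3/cmake-3.28.3-linux-x86_64.sh
# sh cmake-3.28.3-linux-x86_64.sh --prefix=/usr/local --skip-license
cmake --version
```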
2.3 Other build tools¶
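A sketch of the remaining tools, assuming Ninja is used as the CMake generator (as in section 5.1) and that clang 19, which is newer than most distribution packages, comes from the upstream LLVM APT repository:

```shell
apt install -y ninja-build git
# clang 19 can be installed via the LLVM APT repository (apt.llvm.org), e.g.:
# wget https://apt.llvm.org/llvm.sh && chmod +x llvm.sh && ./llvm.sh 19
```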
3. Install project dependencies¶
```shell
apt install -y \
    libssl-dev \
    libcurl4-openssl-dev \
    libpam0g-dev \
    zlib1g-dev \
    libaio-dev \
    libsystemd-dev \
    libelf-dev \
    libsubid-dev
```
Info
libsubid-dev is unavailable on Ubuntu 22.04 and older releases. Build and install shadow 4.0+ from https://github.com/shadow-maint/shadow/releases/.
4. Install and configure MongoDB¶
MongoDB is only required on the control node.
See the Database Configuration Guide for step-by-step instructions.
5. Install and configure CraneSched¶
5.1 Build and install¶
Configure and build CraneSched:
```shell
git clone https://github.com/PKUHPC/CraneSched.git
cd CraneSched

# For cgroup v1
cmake -G Ninja -S . -B build
cmake --build build

# For cgroup v2
cmake -G Ninja -DCRANE_ENABLE_CGROUP_V2=true -S . -B build
cmake --build build

# For cgroup v2 with eBPF GRES support
cmake -G Ninja -DCRANE_ENABLE_CGROUP_V2=true -DCRANE_ENABLE_BPF=true -S . -B build
cmake --build build
```
Install the built binaries:
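Assuming the default CMake install prefix, the binaries built above can be installed with:

```shell
cmake --install build
```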
Tip
We recommend deploying CraneSched with DEB packages. Refer to the Packaging Guide for details.
For multi-node installations, follow the Multi-node Deployment Guide.
5.2 Configure the PAM module¶
Configuring PAM is optional but recommended in production clusters to control user access.
See the PAM Module Configuration Guide for details.
5.3 Configure the cluster¶
Refer to the Cluster Configuration Guide for configuration options.
6. Start CraneSched¶
Using systemd (Recommended)¶
Control node only: create the crane user (done automatically when installing from DEB packages):

```shell
groupadd --system crane 2>/dev/null || true
useradd --system --gid crane --shell /usr/sbin/nologin --create-home crane 2>/dev/null || true
```
Then start services:
```shell
systemctl daemon-reload
systemctl enable cranectld --now   # Control node
systemctl enable craned --now      # Compute node
```