
Getting Started with k3d-gpu


Welcome to k3d-gpu! This guide walks you through the quickstart steps to build and deploy a GPU-enabled Kubernetes cluster using Docker, k3d, and NVIDIA CUDA.

Step 1: Install Requirements

  • Docker (20.10+)
  • NVIDIA Container Toolkit (nvidia-docker2) or Docker with --gpus support
  • k3d v5+
  • NVIDIA GPU driver (latest)
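
Before building, you can confirm each prerequisite with a few quick checks. The CUDA image tag in the last command is just an example; any recent nvidia/cuda base tag works:

docker --version    # expect 20.10 or newer
k3d version         # expect v5.x
nvidia-smi          # driver check: should list your GPU(s)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi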

Step 2: Build the GPU Image

git clone https://github.com/88plug/k3d-gpu.git
cd k3d-gpu
./build.sh
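
If the build finishes cleanly, the image will be available locally. Assuming build.sh tags it as cryptoandcoffee/k3d-gpu (the name used in Step 3), you can verify with:

docker images cryptoandcoffee/k3d-gpu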

Step 3: Create the Cluster

k3d cluster create gpu-cluster \
  --image cryptoandcoffee/k3d-gpu \
  --gpus all
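
k3d adds the new cluster to your kubeconfig and switches the current context automatically, so you can confirm the node is up right away:

kubectl cluster-info
kubectl get nodes -o wide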

Step 4: Deploy NVIDIA Device Plugin

kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0/nvidia-device-plugin.yml
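
Once the plugin's DaemonSet pod is running, the node should advertise an nvidia.com/gpu resource. You can check that, then launch a short-lived test pod to exercise the GPU. This is a minimal sketch; the pod name is arbitrary and the CUDA image tag is just an example:

kubectl get pods -n kube-system | grep nvidia
kubectl describe nodes | grep nvidia.com/gpu

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      # example CUDA base image; any recent tag that ships nvidia-smi works
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
EOF

When the pod completes, kubectl logs gpu-test should print the familiar nvidia-smi table. Clean up with kubectl delete pod gpu-test.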

📘 Additional Resources

🧭 Why Use k3d-gpu Instead of Traditional Setups?

Traditional Kubernetes environments that support GPUs often involve heavy virtualization layers, complex drivers, and poor local reproducibility. k3d-gpu flips this model by offering a Docker-native, CUDA-enabled Kubernetes cluster you can spin up in seconds.

Whether you're training deep learning models, running high-throughput image processing pipelines, or benchmarking CUDA libraries, k3d-gpu lets you develop and iterate locally without the headaches of VM setup or cloud lock-in. You get GPU scheduling, full visibility of your NVIDIA hardware from inside pods, and full compatibility with frameworks like PyTorch, TensorFlow, ONNX, and RAPIDS, all on your laptop.

📌 Use Cases for k3d-gpu

  • 🧠 Train machine learning models with GPU acceleration in local K8s
  • 🔬 Run scientific computing workloads like CUDA-X, TensorRT, and JAX
  • 🧪 Develop and test distributed ML frameworks (Ray, Dask, Horovod)
  • ⚙️ Deploy Helm charts that require GPU nodes
  • 🔁 Test GitOps CI/CD pipelines for GPU-bound microservices
