# Getting Started with k3d-gpu
Welcome to k3d-gpu! This guide walks you through the quickstart steps to build and deploy a GPU-enabled Kubernetes cluster using Docker, k3d, and NVIDIA CUDA.
## Prerequisites

- Docker (20.10+)
- `nvidia-docker2` or Docker with `--gpus` support
- `k3d` v5+
- NVIDIA GPU driver (latest)
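Before building, it can save time to confirm the prerequisites above are actually in place. A quick sanity check might look like this (the CUDA image tag is only an example; any CUDA base image works):

```shell
# Check tool versions against the prerequisites list
docker --version   # expect 20.10 or newer
k3d version        # expect v5.x
nvidia-smi         # confirms the NVIDIA driver is loaded and sees the GPU

# Confirms Docker can pass the GPU through to a container
# (image tag is illustrative, not prescribed by this repo)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If the last command prints the same GPU table as the host `nvidia-smi`, Docker's GPU passthrough is working.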
## Quickstart

1. Clone the repository and build the image:

```shell
git clone https://github.com/88plug/k3d-gpu.git
cd k3d-gpu
./build.sh
```

2. Create a GPU-enabled cluster:

```shell
k3d cluster create gpu-cluster \
  --image cryptoandcoffee/k3d-gpu \
  --gpus all
```

3. Install the NVIDIA device plugin so Kubernetes can discover and schedule GPUs:

```shell
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0/nvidia-device-plugin.yml
```

## Resources

- 📦 nvidia-docker GitHub
- 📘 k3d documentation
- 📘 Kubernetes NVIDIA GPU support
- 📘 NVIDIA CUDA Toolkit Documentation
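Once the cluster is up and the device plugin has registered the `nvidia.com/gpu` resource, a minimal smoke test is to run a one-shot pod that requests a GPU and prints `nvidia-smi`. This sketch assumes kubectl v1.23+ (for `--for=jsonpath`); the pod name and image tag are illustrative:

```shell
# One-shot pod requesting a single GPU via the device plugin's resource name
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
EOF

# Wait for the pod to finish, then read its output
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/gpu-smoke-test --timeout=180s
kubectl logs gpu-smoke-test
```

If the logs show the GPU table, scheduling and passthrough are working end to end. If the pod stays `Pending`, the device plugin has likely not registered the GPU resource yet.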
## Why k3d-gpu?

Traditional GPU-enabled Kubernetes environments often involve heavy virtualization layers, complex driver setup, and poor local reproducibility. k3d-gpu flips this model by offering a Docker-native, CUDA-enabled Kubernetes cluster you can spin up in seconds.

Whether you're training deep learning models, running high-throughput image-processing pipelines, or benchmarking CUDA libraries, k3d-gpu lets you develop and iterate locally without the headaches of VM setup or cloud lock-in. You get GPU scheduling, NVIDIA device visibility, and full compatibility with frameworks like PyTorch, TensorFlow, ONNX, and RAPIDS, all on your laptop.
## Use cases

- 🧠 Train machine learning models with GPU acceleration in local K8s
- 🔬 Run scientific computing workloads like CUDA-X, TensorRT, and JAX
- 🧪 Develop and test distributed ML frameworks (Ray, Dask, Horovod)
- ⚙️ Deploy Helm charts that require GPU nodes
- 🚀 Test GitOps CI/CD pipelines for GPU-bound microservices