A complete CI/CD pipeline that deploys a static movie website to a virtual Kubernetes cluster created by VIND (Virtual-cluster IN Docker), all running inside a GitHub Actions runner – no cloud infrastructure needed.

VIND stands for Virtual-cluster IN Docker. It is a lightweight tool built on top of vCluster that spins up a fully functional, certified Kubernetes cluster inside a single Docker container. Think of it as "Kubernetes inside Docker" – but unlike other local K8s solutions, VIND creates virtual clusters that are isolated, disposable, and very fast to boot.

VIND uses the Docker driver of vCluster to provision an entire Kubernetes control plane plus worker node as a container on your machine (or in CI). There is no VM, no heavy hypervisor, and no special kernel requirement beyond a few standard network modules.
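The "standard network modules" are the ones the CI workflow later loads with modprobe (overlay, bridge, br_netfilter). As a sketch, here is a read-only check you can run before creating a cluster; note that modules compiled directly into the kernel will not appear in /proc/modules and will report as missing even though they are available:

```shell
#!/bin/sh
# Read-only check for the kernel network modules VIND relies on.
# Module list taken from the CI workflow in this README; modules built
# into the kernel (not loaded as modules) also report as "missing".
status=""
for mod in overlay bridge br_netfilter; do
  if grep -qw "^${mod}" /proc/modules 2>/dev/null; then
    status="${status}${mod}:loaded "
  else
    status="${status}${mod}:missing "   # try: sudo modprobe <module>
  fi
done
echo "$status"
```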
| Feature | VIND (vCluster Docker) | kind | Minikube |
|---|---|---|---|
| Architecture | Virtual cluster running inside a single Docker container | Kubernetes nodes as Docker containers | Full VM (or Docker container) with kubelet |
| Startup Time | ~30–60 seconds | ~60–90 seconds | ~2–5 minutes |
| Resource Usage | Very lightweight – shares host kernel, minimal overhead | Moderate – each "node" is a container | Heavy – runs a full VM by default |
| Isolation | Namespace-level virtual clusters; multiple clusters on one host easily | Container-level isolation | VM-level or container-level isolation |
| Multi-Cluster | Trivial β spin up multiple virtual clusters in seconds | Possible but heavier (each cluster = set of containers) | Possible via profiles, but slow and resource-heavy |
| CI/CD Friendly | Excellent β designed for ephemeral, disposable clusters in pipelines | Good β commonly used in CI | Usable but slower and more brittle in CI |
| Kubernetes Certified | Yes (CNCF conformant) | Yes (CNCF conformant) | Yes (CNCF conformant) |
| Driver | Docker | Docker | Docker, VirtualBox, HyperKit, KVM, etc. |
| Best For | CI/CD pipelines, dev/test, ephemeral environments, multi-tenancy | Local development, CI testing | Local development, full-cluster simulation |
- **Faster than kind** – optimized container-based architecture boots clusters in seconds.
- **Sleep & Wake** – pause clusters to save resources, resume them instantly when needed.
- **Built-in UI** – free vCluster Platform UI for visual cluster management.
- **Load Balancers Out of the Box** – automatic LoadBalancer services without extra setup (no MetalLB needed).
- **Docker Native** – leverages Docker's networking and storage directly.
- **Pull-through Cache** – faster image pulls via the local Docker daemon, with no redundant downloads.
- **Hybrid Nodes** – join external nodes (even cloud instances) to your local cluster via VPN.
- **Snapshots** – save and restore cluster state (coming soon).
- **Ephemeral by design:** Create a cluster, run tests, deploy apps, tear it down – all in one pipeline run. Zero leftover resources.
- **Multi-tenancy ready:** Need 5 isolated environments? Spin up 5 virtual clusters on the same Docker host without multiplying resource usage.
- **Production-like:** Despite being lightweight, VIND provides a fully conformant Kubernetes API – your manifests, Helm charts, and kubectl commands work exactly as they would on a real cluster.
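The multi-tenancy point can be sketched in a few lines of shell. This is illustrative rather than from the project: the cluster names are made up, and creation is skipped when the vcluster CLI is not installed, so the sketch runs anywhere. `--connect=false` keeps kubectl's context on the host while clusters are created.

```shell
#!/bin/sh
# Spin up five isolated virtual environments on one Docker host.
# Names (env-1..env-5) are examples; the create command only runs
# when the vcluster CLI is actually present.
attempted=""
for i in 1 2 3 4 5; do
  attempted="${attempted} env-${i}"
  if command -v vcluster >/dev/null 2>&1; then
    vcluster create "env-${i}" --connect=false
  else
    echo "vcluster CLI not found; would run: vcluster create env-${i} --connect=false"
  fi
done
echo "requested:${attempted}"
```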
- Docker must be installed and running on your machine.
macOS:

```bash
brew install loft-sh/tap/vcluster
```

Linux:

```bash
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && \
sudo install -c -m 0755 vcluster /usr/local/bin && \
rm -f vcluster
```

Windows: Download the latest binary from vCluster Releases and add it to your PATH.

Tell vCluster to use Docker as the backend – this is what makes it VIND:

```bash
vcluster use driver docker
```

Create your first cluster:

```bash
sudo vcluster create test-cluster
```

That's it! You now have a fully functional Kubernetes cluster running inside Docker. Verify with:

```bash
kubectl get nodes
vcluster ls
```

Step 1: Create a `values.yaml` file with your node configuration:
```yaml
experimental:
  docker:
    nodes:
      - name: "worker-1"
        ports:
          - "9090:9090"
      - name: "worker-2"
        volumes:
          - "/tmp/data:/data"
        env:
          - "NODE_ROLE=worker"
```

Step 2: If using volumes, make sure the mount path exists:

```bash
mkdir -p /tmp/data
```

Step 3: Create the multi-node cluster:

```bash
sudo vcluster create multi-node-cluster --values values.yaml
```

| Command | Description |
|---|---|
| `vcluster ls` | List all virtual clusters |
| `vcluster connect <name>` | Connect to a virtual cluster |
| `vcluster disconnect` | Disconnect from a virtual cluster |
| `vcluster delete <name>` | Delete a virtual cluster |
| `vcluster use driver docker` | Switch to the Docker driver (VIND) |
For more details, refer to the official VIND documentation: https://github.com/loft-sh/vind
This project demonstrates a real-world use case of VIND – deploying a fully functional static website called CineVerse to a virtual Kubernetes cluster, entirely within a GitHub Actions CI/CD pipeline.
```
├── index.html            # CineVerse website – curated movies & series
├── styles.css            # Dark-themed responsive stylesheet
├── Dockerfile            # nginx:alpine container serving static files
├── k8s/
│   ├── deployment.yaml   # Kubernetes Deployment manifest
│   └── service.yaml      # Kubernetes Service manifest
├── .github/
│   └── workflows/
│       └── deploy.yaml   # Full CI/CD pipeline using VIND
├── .dockerignore
└── README.md
```
A beautiful, responsive, Netflix-style static movie website featuring:
- Trending section with the latest blockbusters
- Top Rated Movies – The Shawshank Redemption, The Godfather, Interstellar, and more
- Top Rated Series – Breaking Bad, Game of Thrones, Stranger Things, and more
- Genre browser, newsletter signup, and a polished dark theme
- All poster images sourced from TMDB
The pipeline (`.github/workflows/deploy.yaml`) runs on every push or pull request to `main`. Here's what it does, step by step:
Checkout code → Build Docker image (nginx:alpine + static files) → Push to GHCR

The image is tagged with the short commit SHA and pushed to GitHub Container Registry (ghcr.io).
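The tag derivation can be sketched in plain shell. The image path (`ghcr.io/OWNER/REPO/cineverse`) is a placeholder, and the commit SHA below is a sample value; in the real workflow the SHA comes from the `GITHUB_SHA` environment variable that Actions provides.

```shell
#!/bin/sh
# Derive the short-SHA image tag (sketch; real CI reads $GITHUB_SHA).
COMMIT_SHA="0123456789abcdef0123456789abcdef01234567"  # sample value
SHORT_SHA="$(printf '%s' "$COMMIT_SHA" | cut -c1-7)"
IMAGE="ghcr.io/OWNER/REPO/cineverse:${SHORT_SHA}"      # OWNER/REPO/name are placeholders
echo "$IMAGE"
```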
Install vcluster CLI → Load kernel modules (overlay, bridge, br_netfilter) → vcluster create demo

This is the core step – VIND spins up a virtual Kubernetes cluster called demo right inside the GitHub Actions runner. The `vcluster use driver docker` command tells vCluster to use the Docker driver (VIND mode). Within ~30 seconds, you have a fully working kubectl-ready cluster.
Create GHCR pull secret → sed-inject image tag into manifest → kubectl apply → Wait for rollout

The deployment manifest uses `IMAGE_PLACEHOLDER`, which gets replaced with the actual GHCR image path at runtime. Kubernetes pulls the image, starts the pod, and the site is live inside the cluster.
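The sed injection can be demonstrated in isolation. The miniature manifest below is written to /tmp so nothing in the repo is touched, and the image path is a placeholder; the real workflow runs the same substitution against `k8s/deployment.yaml`.

```shell
#!/bin/sh
# Demonstrate the IMAGE_PLACEHOLDER substitution on a throwaway manifest.
# Uses GNU sed's -i; on macOS the equivalent is `sed -i ''`.
IMAGE="ghcr.io/OWNER/REPO/cineverse:abc1234"   # placeholder image path
cat > /tmp/deployment.yaml <<'EOF'
      containers:
        - name: web
          image: IMAGE_PLACEHOLDER
EOF
sed -i "s|IMAGE_PLACEHOLDER|${IMAGE}|g" /tmp/deployment.yaml
grep "image:" /tmp/deployment.yaml
```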
Port-forward svc → Start ngrok tunnel → Print public URL → Keep alive for 10 minutes

Since the cluster runs inside CI, we use ngrok to create a temporary public tunnel. The workflow prints the live URL so you can open it in your browser and see the deployed website.
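One way a workflow can discover the public URL is ngrok's local inspection API at http://127.0.0.1:4040/api/tunnels. The extraction step can be sketched against a canned response so no live tunnel is needed; the JSON shape below follows that API, but the URL itself is made up.

```shell
#!/bin/sh
# Extract the public URL from an ngrok /api/tunnels-style response.
# RESPONSE is a canned sample; with a live tunnel you would use:
#   RESPONSE=$(curl -s http://127.0.0.1:4040/api/tunnels)
RESPONSE='{"tunnels":[{"public_url":"https://abcd-1-2-3-4.ngrok-free.app","proto":"https"}]}'
URL="$(printf '%s' "$RESPONSE" | grep -o 'https://[^"]*')"
echo "Public URL: $URL"
```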
```bash
vcluster delete demo --force
```

The virtual cluster is destroyed at the end – leaving zero footprint on the runner.
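In GitHub Actions this kind of guaranteed teardown is typically a final step guarded with `if: always()`. The same guarantee in plain shell is an EXIT trap, sketched below; the delete command is only echoed (not executed), so the sketch is safe to run without a cluster.

```shell
#!/bin/sh
# Guarantee teardown even when a step fails: an EXIT trap fires on any exit.
# The delete command is echoed rather than executed in this sketch.
result="$(
  sh -c '
    trap "echo vcluster delete demo --force" EXIT
    echo "deploy and test steps run here"
    exit 1   # simulate a failing pipeline step; the trap still fires
  '
)" || true
echo "$result"
```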
```
┌──────────┐   ┌──────────────┐   ┌──────────────────┐   ┌────────────┐   ┌───────────┐
│ Checkout │──▶│ Build & Push │──▶│   Create VIND    │──▶│ Deploy to  │──▶│  Expose   │
│   Code   │   │ Docker Image │   │  Cluster (demo)  │   │ Kubernetes │   │ via ngrok │
└──────────┘   └──────────────┘   └──────────────────┘   └────────────┘   └───────────┘
                                                                                │
                                                                          ┌─────▼─────┐
                                                                          │  Cleanup  │
                                                                          │ (always)  │
                                                                          └───────────┘
```
The pipeline needs one secret to be configured manually. `GITHUB_TOKEN` is already provided automatically by GitHub Actions – you don't need to do anything for it.
| Secret | Required Action | Purpose |
|---|---|---|
| `GITHUB_TOKEN` | None – auto-provided | Used for GHCR login and image pull secrets. GitHub injects this automatically into every workflow run. |
| `NGROK_AUTH_TOKEN` | Manual setup required | Your ngrok auth token, needed to create the public tunnel to access the deployed site. |
- Go to https://ngrok.com and sign up for a free account (or log in if you already have one).
- After logging in, go to the Your Authtoken page: https://dashboard.ngrok.com/get-started/your-authtoken.
- Click Copy to copy your auth token.
- Now go to your GitHub repository and click Settings (top tab).
- In the left sidebar, expand Secrets and variables, then click Actions.
- Click the "New repository secret" button.
- Set the Name to `NGROK_AUTH_TOKEN`.
- Paste the auth token you copied from ngrok into the Secret field.
- Click "Add secret".
That's it! The secret is now available to your workflow.
- Fork or clone this repository.
- Set up the `NGROK_AUTH_TOKEN` secret following the steps above.
- Push to `main` or open a Pull Request – the pipeline triggers automatically.
- Go to the Actions tab, open the running workflow, and wait for it to complete.
- Open the "Show public URL" step in the workflow logs.
- Click the ngrok URL – your CineVerse website is live!
Once the pipeline completes, the workflow logs will show something like:
```
============================================
          YOUR WEBSITE IS LIVE!
============================================
  https://xxxx-xx-xxx-xxx-xx.ngrok-free.app
============================================
   Tunnel will stay open for ~10 minutes
============================================
```
Open the URL and you'll see the fully deployed CineVerse website running on a Kubernetes cluster that was created from scratch – all in under 5 minutes.
- Website: HTML5 + CSS3 (static, no frameworks)
- Container: nginx:alpine
- Orchestration: Kubernetes via VIND (vCluster Docker driver)
- Registry: GitHub Container Registry (GHCR)
- CI/CD: GitHub Actions
- Tunnel: ngrok v3
- Images: TMDB API (poster images)
This project is for educational and demonstration purposes.