- Environment-specific configs: Terraform modules for `brownfunk` and `beefunk` provision consistent VM definitions, matching documented IP ranges and resource allocations. Inventory generation via `local_file` and `inventory.tftpl` accurately reflects the dynamic mapping to Ansible inventory.
- Local state: Current configuration uses local Terraform state files, which can lead to conflicts in multi-operator or CI environments.
- Recommended: Adopt a remote backend (e.g., Terraform Cloud, S3 with DynamoDB locking, or HashiCorp Consul) to ensure state locking, versioning, and secure storage. This prevents drift and supports collaboration.
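As a sketch of the S3 option, a backend block with DynamoDB locking might look like the following (bucket, key, region, and table names are all placeholders to adapt):

```hcl
terraform {
  backend "s3" {
    bucket         = "homelab-tf-state"          # hypothetical bucket name
    key            = "brownfunk/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # table with a LockID partition key
    encrypt        = true
  }
}
```

Each environment directory would use its own `key` so state files stay separate while sharing one lock table.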
- Duplication: The `brownfunk` and `beefunk` directories contain nearly identical resources for VMs and LXC containers.
- Recommended: Extract a reusable Terraform module for VM provisioning that parameterizes `vm_fleet`, `lxc_fleet`, and environment-specific variables. This reduces duplication, simplifies future environment additions, and centralizes changes.
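A minimal sketch of that extraction, assuming a `modules/proxmox-fleet` directory (the module path and variable types are illustrative):

```hcl
# modules/proxmox-fleet/variables.tf
variable "vm_fleet"  { type = map(any) }
variable "lxc_fleet" { type = map(any) }
variable "environment" { type = string }

# terraform/brownfunk/main.tf -- each environment becomes a thin wrapper
module "fleet" {
  source      = "../modules/proxmox-fleet"
  vm_fleet    = var.vm_fleet
  lxc_fleet   = var.lxc_fleet
  environment = "brownfunk"
}
```

The shared resource definitions move into the module; each environment directory keeps only its variable values.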
- Inventory regeneration: Terraform writes the Ansible inventory via `local_file`, but no automation triggers downstream Ansible runs.
- Recommended:
  - Use a `null_resource` with `triggers = { terraform = timestamp() }` and a provisioner to call `ansible-playbook` after apply.
  - Or integrate the Terraform and Ansible workflows via Terraform Cloud or CI pipelines that consume the generated inventory.
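The `null_resource` approach from the first option can be sketched as follows (the inventory and playbook paths are assumptions about the repo layout):

```hcl
resource "null_resource" "ansible_run" {
  # timestamp() changes on every apply, so this resource is always replaced
  # and the provisioner re-runs after each provisioning pass
  triggers = {
    terraform = timestamp()
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ../ansible/inventory.ini ../ansible/site.yml"
  }
}
```

Note that `timestamp()` forces a run on every apply; if that is too aggressive, the trigger could instead hash the generated inventory file so Ansible only runs when the inventory actually changes.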
- Provider version: The `proxmox` provider is pinned at v0.94.0. While pinning is good, consider adding a `required_version` constraint for the Terraform CLI and tracking provider upgrades.
- Recommended: Define `required_version` in the `terraform { }` block, update `required_providers` to the latest stable minor versions, and schedule periodic provider audits.
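A sketch of the combined version constraints (the CLI minimum and provider source address are assumptions; keep whichever provider source the repo already uses):

```hcl
terraform {
  required_version = ">= 1.5.0"   # assumed minimum CLI version

  required_providers {
    proxmox = {
      source  = "bpg/proxmox"     # assumed source address
      version = "~> 0.94.0"       # allows 0.94.x patch releases only
    }
  }
}
```

The `~>` pessimistic constraint permits patch upgrades while still protecting against surprise minor-version changes.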
- Naming conventions: VM names follow documented prefixes. However, tags or labels at the Proxmox VM resource level are not utilized.
- Recommended: Use `tags` or custom attributes (if supported by the provider) to annotate VMs with roles (nas, media, net, dev) and environment (brownfunk, beefunk) for easier filtering in the Proxmox UI and API.
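As an illustrative sketch, assuming the provider in use exposes a `tags` argument on its VM resource (the resource type, name, and tag format here are hypothetical and must be checked against the provider's documentation):

```hcl
resource "proxmox_vm_qemu" "media" {
  # ...existing VM arguments...
  name = "bf-media-01"
  tags = "media;brownfunk"   # some providers take a semicolon-separated string
}
```

With role and environment encoded as tags, the Proxmox UI's tag filter and API queries can select, for example, all `media` VMs across both environments.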
| Task | Issue | Fix |
|---|---|---|
| Shell tasks without idempotency guards | Many `shell` or `command` tasks lack `creates`, `removes`, or `changed_when: false`, causing repeated execution and slow runs | Add appropriate `creates`/`removes` arguments or set `changed_when: false` to mark tasks as no-ops when already applied |
| NFS mounts use default hard mounts | Media client mounts default to hard mounts, which can hang indefinitely if the NAS is down | Use `opts: "rw,soft,intr,bg,nfsvers=4.1"` to enable soft, backgrounded mounts with appropriate timeouts |
| LUKS unlock tasks leak secrets | Passing `luks_password` inline in shell tasks exposes secrets in logs | Use `no_log: true` and reference vault-protected vars in templated tasks to hide sensitive data |
| Proxmox SSH hacks instead of API modules | SSH remote-exec is used to edit LXC configs instead of leveraging Ansible's Proxmox modules | Replace SSH hacks with `community.general.proxmox_lxc` (or similar) modules for native resource management |
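The first three fixes above can be sketched in task form (all paths, hostnames, and device names are illustrative placeholders):

```yaml
# Idempotency guard: skip the download if the target file already exists
- name: Fetch media tool archive
  ansible.builtin.shell: curl -fsSL https://example.com/tool.tar.gz -o /opt/tool.tar.gz
  args:
    creates: /opt/tool.tar.gz

# Secret hygiene: vault-protected var, and no_log keeps it out of run output
- name: Unlock LUKS data volume
  ansible.builtin.shell: echo "{{ luks_password }}" | cryptsetup luksOpen /dev/sdb1 data
  args:
    creates: /dev/mapper/data
  no_log: true

# Soft NFS mount so clients fail gracefully when the NAS is unreachable
- name: Mount NAS media export
  ansible.posix.mount:
    path: /mnt/media
    src: nas.lan:/export/media
    fstype: nfs
    opts: "rw,soft,intr,bg,nfsvers=4.1"
    state: mounted
```

The `creates` guard doubles as the skip condition for the LUKS task, since the mapper device only exists once the volume is open.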
| Recommendation | Details |
|---|---|
| Router VM Viability | Leverage nested virtualization to run an OpenWrt or OPNsense VM with 512 MB–1 GB RAM and 1–2 vCPUs using a `qcow2` image |
| Terraform Provision | Add a new `vm_fleet` entry for the router VM in `terraform/<env>/variables.tf` and apply via `terraform apply` |
| Ansible Setup | Create `ansible/tasks/router.yml` to install VLAN, WireGuard, and Tailscale; configure VLAN subinterfaces via netplan |
| VPN & Routing | Advertise VLAN subnets via Tailscale (`--advertise-routes`), configure WireGuard; forward UDP ports 41641 and 51820 |
| Benefits & Risks | Enables network segmentation and centralized VPN termination; adds complexity and resource overhead; ensure security |
Add an optional OpenWrt or OPNsense VM to enable VLAN segmentation in your homelab network.
Viability: Current hardware supports nested virtualization. A lightweight VM with 512 MB–1 GB RAM and 1–2 vCPUs running a qcow2 image of OpenWrt or OPNsense is feasible.
Terraform: In `terraform/<env>/variables.tf`, add a `vm_fleet` entry, e.g.:

```hcl
"bf-router-01" = {
  id        = 190
  ip        = "192.168.1.1"
  cores     = 1
  memory    = 512
  disk_size = 2
}
```

Then run `terraform apply` to provision the router VM and regenerate the inventory.
Ansible: Create `ansible/tasks/router.yml` to:
- Install packages (`vlan`, `wireguard`, `tailscale`).
- Configure VLAN subinterfaces (e.g., `eth0.10`, `eth0.20`) via `networkd` or `netplan` templates.
- Deploy firewall rules and VPN configuration.
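A sketch of those tasks, assuming a Debian-based router VM (the template filename and handler name are assumptions, and the `tailscale` package additionally requires Tailscale's apt repository to be configured first):

```yaml
# ansible/tasks/router.yml -- illustrative task list
- name: Install VLAN and VPN packages
  ansible.builtin.apt:
    name:
      - vlan
      - wireguard
      - tailscale
    state: present
    update_cache: true

- name: Deploy netplan VLAN configuration
  ansible.builtin.template:
    src: templates/99-vlans.yaml.j2   # template name is an assumption
    dest: /etc/netplan/99-vlans.yaml
    mode: "0600"
  notify: apply netplan
```

The corresponding handler would run `netplan apply` so interface changes only take effect when the rendered config actually changes.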
VLAN Setup:
- Define VLAN interfaces (`eth0.<VID>`) with IPs (e.g., `192.168.10.1/24`).
- Map subnets: IoT (10), Guest (20), Management (30).
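The VLAN mapping above can be sketched as a netplan file (gateway addresses per VLAN follow the `192.168.<VID>.1/24` pattern stated above; the parent NIC name `eth0` is taken from the examples):

```yaml
# /etc/netplan/99-vlans.yaml -- sketch of the three-VLAN layout
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
  vlans:
    eth0.10:            # IoT
      id: 10
      link: eth0
      addresses: [192.168.10.1/24]
    eth0.20:            # Guest
      id: 20
      link: eth0
      addresses: [192.168.20.1/24]
    eth0.30:            # Management
      id: 30
      link: eth0
      addresses: [192.168.30.1/24]
```

Each `vlans` entry must reference a parent interface defined under `ethernets`, which is why `eth0` appears in both sections.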
Tailscale / WireGuard:
- On the router VM, advertise the VLAN subnets: `tailscale up --authkey <key> --advertise-routes=192.168.10.0/24,192.168.20.0/24`
- Forward UDP ports `41641` (Tailscale) and `51820` (WireGuard) from the physical router to the VM's IP.
- Adjust the physical gateway rules for the new router VM.
Advantages:
- Segmented networks for security and traffic isolation.
- Centralized VPN termination within homelab.
- Flexible per-VLAN access policies.
Considerations:
- Adds Terraform/Ansible complexity.
- Router VM resource overhead.
- Secure the router VM management plane and monitor usage.