A NixOS-based operating system image with a pre-configured Automata deployment for on-premises and edge computing.
AutomataOS combines:
- k3s single-node cluster for container orchestration
- Automata deployment from the pinned `v0.2.12-ce` Automata source tree
- PostgreSQL via Helm chart
- Temporal workflow orchestration
- cert-manager for certificate provisioning
- Traefik via the bundled k3s integration
Supported build targets:
- `x86_64-linux`
- `aarch64-linux`
Supported host architectures:
- `x86_64`/`amd64`
- `aarch64`/`arm64`
Supported build paths:
- Native Nix builds for host-native workflows
- Docker builds for portable, containerized builds
- Podman builds for portable, containerized builds
Supported output packages:
- `automata-image` for the existing VM runner bundle
- `automata-iso` for a bootable ISO you can write to USB media or mount in QEMU
- `automata-qcow-efi` for a QEMU-ready qcow2 disk image
- `automata-raw-efi` for a raw EFI disk image you can flash to USB storage
`make build` and `./scripts/build.sh` choose a default Linux target from the host CPU architecture:

- `x86_64`/`amd64` hosts default to `x86_64-linux`
- `aarch64`/`arm64` hosts default to `aarch64-linux`
If your host does not have a local Nix setup that can build the selected Linux target, use the Docker or Podman flow instead.
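The host-to-target mapping above can be sketched as a small shell function. This is an illustrative sketch of the described behavior, not the actual `build.sh` implementation:

```sh
#!/bin/sh
# Map a host CPU architecture (as reported by `uname -m`) to the
# default Nix build target, mirroring the mapping described above.
default_target() {
  case "$1" in
    x86_64|amd64)  echo "x86_64-linux" ;;
    aarch64|arm64) echo "aarch64-linux" ;;
    *)             echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Use the host architecture unless TARGET_SYSTEM is already set.
TARGET_SYSTEM="${TARGET_SYSTEM:-$(default_target "$(uname -m)")}"
echo "building for: $TARGET_SYSTEM"
```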
```
automataos/
├── flake.nix             # Nix flake entry point
├── flake.lock            # Locked dependencies, including Automata tag pin
├── Containerfile.build   # Containerized builder for Docker and Podman
├── Makefile              # Build, run, and maintenance commands
├── modules/
│   ├── automata.nix      # Automata + k3s + Helm deployment config
│   └── qemu-guest.nix    # QEMU guest support and image import
├── scripts/
│   ├── build.sh          # Native build helper with target selection
│   └── run-vm.sh         # Run a built VM with host CPU settings
└── README.md
```
- Nix with flakes enabled for native builds
- Docker or Podman for containerized builds
- QEMU for VM testing
- 4 GB+ RAM recommended
- 20 GB+ disk space recommended
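A quick preflight check for these tools might look like the following sketch. The flakes check is an assumption that your Nix configuration lives in one of the usual `nix.conf` locations:

```sh
#!/bin/sh
# Report which prerequisite tools from the list above are available.
have() { command -v "$1" >/dev/null 2>&1; }

for tool in nix docker podman qemu-system-x86_64; do
  if have "$tool"; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done

# Native builds also need flakes enabled in the Nix config
# (assumed config locations; yours may differ).
for f in /etc/nix/nix.conf "$HOME/.config/nix/nix.conf"; do
  if [ -f "$f" ] && grep -q flakes "$f"; then
    echo "nix flakes enabled in $f"
  fi
done
```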
Build the host-native target:

```sh
nix build
```

Build an explicit target:

```sh
nix build .#packages.x86_64-linux.automata-image
nix build .#packages.aarch64-linux.automata-image
nix build .#packages.x86_64-linux.automata-iso
nix build .#packages.x86_64-linux.automata-qcow-efi
nix build .#packages.aarch64-linux.automata-raw-efi
```

Use the helper script:
```sh
./scripts/build.sh
TARGET_SYSTEM=x86_64-linux ./scripts/build.sh
TARGET_SYSTEM=aarch64-linux ./scripts/build.sh
TARGET_PACKAGE=automata-iso ./scripts/build.sh
TARGET_PACKAGE=automata-qcow-efi ./scripts/build.sh
TARGET_PACKAGE=automata-raw-efi ./scripts/build.sh
```

Docker and Podman run the same flake build inside `nixos/nix:latest` and map each target to the matching container platform:

- `x86_64-linux` -> `linux/amd64`
- `aarch64-linux` -> `linux/arm64`
`make build-docker` and `make build-podman` validate the selected flake output inside the builder image. They do not create a host-side `./result` symlink by themselves.
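The target-to-platform mapping used by the container builds can be sketched as a shell function. This is illustrative only; the Makefile's actual variable wiring may differ:

```sh
#!/bin/sh
# Translate a Nix target system into the matching container platform,
# as used for --platform in the docker/podman builds.
platform_for() {
  case "$1" in
    x86_64-linux)  echo "linux/amd64" ;;
    aarch64-linux) echo "linux/arm64" ;;
    *)             echo "unknown target: $1" >&2; return 1 ;;
  esac
}

TARGET_SYSTEM="${TARGET_SYSTEM:-x86_64-linux}"
echo "container platform: $(platform_for "$TARGET_SYSTEM")"
```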
Build with Docker:

```sh
make build-docker TARGET_SYSTEM=x86_64-linux
make build-docker TARGET_SYSTEM=aarch64-linux
```

Build with Podman:

```sh
make build-podman TARGET_SYSTEM=x86_64-linux
make build-podman TARGET_SYSTEM=aarch64-linux
make build-podman TARGET_SYSTEM=x86_64-linux TARGET_PACKAGE=automata-iso
make build-podman TARGET_SYSTEM=x86_64-linux TARGET_PACKAGE=automata-qcow-efi
make build-podman TARGET_SYSTEM=aarch64-linux TARGET_PACKAGE=automata-raw-efi
```

Export a real disk image from the Podman build into `dist/`:
```sh
make export-podman-iso TARGET_SYSTEM=x86_64-linux
make export-podman-iso TARGET_SYSTEM=aarch64-linux
make export-podman-qcow-efi TARGET_SYSTEM=x86_64-linux
make export-podman-qcow-efi TARGET_SYSTEM=aarch64-linux
make export-podman-raw-efi TARGET_SYSTEM=x86_64-linux
make export-podman-raw-efi TARGET_SYSTEM=aarch64-linux
```

Run the raw container commands directly:
```sh
docker build --platform linux/amd64 --build-arg TARGET_SYSTEM=x86_64-linux --build-arg TARGET_PACKAGE=automata-qcow-efi -f Containerfile.build .
docker build --platform linux/arm64 --build-arg TARGET_SYSTEM=aarch64-linux --build-arg TARGET_PACKAGE=automata-raw-efi -f Containerfile.build .
podman build --platform linux/amd64 --build-arg TARGET_SYSTEM=x86_64-linux --build-arg TARGET_PACKAGE=automata-qcow-efi -f Containerfile.build .
podman build --platform linux/arm64 --build-arg TARGET_SYSTEM=aarch64-linux --build-arg TARGET_PACKAGE=automata-raw-efi -f Containerfile.build .
```

```sh
make help                    # Show all available commands
make build                   # Build host-native Linux target
make build-x86_64            # Build x86_64-linux image
make build-aarch64           # Build aarch64-linux image
make build-iso               # Build a bootable ISO image
make build-qcow-efi          # Build a QEMU-ready qcow2 disk image
make build-raw-efi           # Build a raw EFI disk image for USB flashing
make build-docker            # Build in Docker
make build-podman            # Build in Podman
make export-podman           # Copy the selected Podman-built artifact to dist/
make export-podman-iso       # Build/export an ISO image to dist/
make export-podman-qcow-efi  # Build/export a qcow2 image to dist/
make export-podman-raw-efi   # Build/export a raw EFI image to dist/
make dry-run                 # Dry-run the selected build target
make run                     # Run VM with default settings
make run-host                # Run VM with host CPU (recommended)
make run-test                # Run VM with temporary disk
make stop                    # Stop running VM
make clean                   # Remove built artifacts
make clean-disk              # Remove disk images
make format                  # Format Nix files
make show                    # Show flake outputs
make update                  # Update flake inputs
make shell                   # Enter development shell
```

Useful variables:
```sh
TARGET_SYSTEM=x86_64-linux
TARGET_PACKAGE=automata-iso
CONTAINER_RUNTIME=docker
CONTAINER_PLATFORM=linux/amd64
BUILD_CONTAINER_FILE=Containerfile.build
BUILD_IMAGE_TAG=automataos-build:x86_64-linux-automata-iso-v0.1.0
```

After a native `nix build`, or after producing a compatible local `result` symlink, run the VM with:
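The `BUILD_IMAGE_TAG` value follows an `automataos-build:<system>-<package>-<version>` pattern. Composing it from the other variables could be sketched as follows; the `v0.1.0` version string is taken from the example value above:

```sh
#!/bin/sh
# Compose a builder image tag from the build variables, matching the
# automataos-build:<system>-<package>-<version> pattern shown above.
TARGET_SYSTEM="${TARGET_SYSTEM:-x86_64-linux}"
TARGET_PACKAGE="${TARGET_PACKAGE:-automata-iso}"
VERSION="${VERSION:-v0.1.0}"

BUILD_IMAGE_TAG="automataos-build:${TARGET_SYSTEM}-${TARGET_PACKAGE}-${VERSION}"
echo "$BUILD_IMAGE_TAG"
```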
```sh
nix run .#run-vm
nix run .#run-vm-host
./result/bin/run-nixos-vm
```

The helper script still works for a host-local `result`:
```sh
./scripts/run-vm.sh
./scripts/run-vm.sh -cpu host -smp 4
./scripts/run-vm.sh -nographic
```

Use the exported disk images directly:
```sh
make run-qemu TARGET_SYSTEM=aarch64-linux
make run-qemu TARGET_SYSTEM=x86_64-linux
make run-qemu-iso TARGET_SYSTEM=x86_64-linux
make run-qemu-disk TARGET_SYSTEM=aarch64-linux
make run-qemu-disk TARGET_SYSTEM=x86_64-linux QEMU_IMAGE=/absolute/path/to/custom-image.qcow2
make test-qemu TARGET_SYSTEM=aarch64-linux
make test-qemu TARGET_SYSTEM=x86_64-linux
make test-qemu TARGET_SYSTEM=x86_64-linux QEMU_IMAGE=/absolute/path/to/custom-image.qcow2
```

The host-QEMU targets auto-select a proven bootable artifact per architecture:
- `aarch64-linux` defaults to the exported `automata-raw-efi` image and uses the host `qemu-system-aarch64` binary with EFI firmware.
- `x86_64-linux` defaults to the exported `automata-iso` image and uses the host `qemu-system-x86_64` binary.
- Disk boots default to `-snapshot` so downloaded artifacts stay unchanged while you test them.
- Set `QEMU_IMAGE=/absolute/path/to/image.qcow2` to boot a specific local qcow2 file instead of auto-discovering an exported artifact.
- Exported `.img` and `.qcow2` files are made owner-writable so UTM and other hypervisors can attach them as writable disks. ISO files remain read-only.
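The per-architecture auto-selection described above can be sketched as a shell function. This is a sketch of the stated rules, not the Makefile's actual logic:

```sh
#!/bin/sh
# Pick the default bootable artifact and host QEMU binary for a target,
# following the auto-selection rules described above.
default_boot() {
  case "$1" in
    aarch64-linux) echo "automata-raw-efi qemu-system-aarch64" ;;
    x86_64-linux)  echo "automata-iso qemu-system-x86_64" ;;
    *)             return 1 ;;
  esac
}

# Split the result into artifact name and QEMU binary.
set -- $(default_boot aarch64-linux)
echo "artifact=$1 qemu=$2"
```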
You can still invoke QEMU manually if you prefer:
```sh
qemu-system-x86_64 -cdrom dist/x86_64-linux/automata-iso-v0.1.0/*.iso
qemu-system-x86_64 -drive if=pflash,format=raw,readonly=on,file=/opt/homebrew/share/qemu/edk2-x86_64-code.fd -snapshot -drive file=dist/x86_64-linux/automata-qcow-efi-v0.1.0/*.qcow2,if=virtio,format=qcow2
qemu-system-aarch64 -bios /opt/homebrew/share/qemu/edk2-aarch64-code.fd -snapshot -drive file=dist/aarch64-linux/automata-raw-efi-v0.1.0/*.img,if=virtio,format=raw
```
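The firmware paths in the commands above are Homebrew locations on macOS. A small helper to locate EDK2 firmware across common install locations might look like this; the candidate directories are assumptions, so check where your QEMU package installs firmware:

```sh
#!/bin/sh
# Locate a QEMU firmware file by checking a list of directories.
# $1 is the firmware file name; remaining args are directories to search.
find_firmware() {
  name="$1"; shift
  for dir in "$@"; do
    if [ -f "$dir/$name" ]; then
      echo "$dir/$name"
      return 0
    fi
  done
  return 1
}

# Candidate locations (assumptions: Homebrew on macOS, common Linux paths).
find_firmware edk2-aarch64-code.fd \
  /opt/homebrew/share/qemu /usr/local/share/qemu /usr/share/qemu \
  || echo "EDK2 aarch64 firmware not found; check your QEMU install" >&2
```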
```sh
sudo dd if=dist/x86_64-linux/automata-iso-v0.1.0/*.iso of=/dev/<usb-device> bs=16M status=progress conv=fsync
sudo dd if=dist/aarch64-linux/automata-raw-efi-v0.1.0/*.img of=/dev/<usb-device> bs=16M status=progress conv=fsync
```

- System boots and starts k3s.
- `automata-deploy` waits for the cluster to be ready.
- The deployment creates namespaces and installs supporting services.
- The Automata Helm chart is deployed from `/etc/automata/source/deploy/k8s`.
- `automata-import` imports `/var/lib/automata/automata-local.tar` when present.
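The `automata-deploy` wait-for-ready step above can be sketched as a poll over `kubectl get nodes` output. This is a sketch of the idea only; the actual `automata-deploy` unit may check readiness differently:

```sh
#!/bin/sh
# Return success once the given `kubectl get nodes --no-headers` output
# shows at least one node whose STATUS column is "Ready".
node_ready() {
  echo "$1" | awk '$2 == "Ready" { ok = 1 } END { exit !ok }'
}

# Example poll loop (would run inside the VM):
# while ! node_ready "$(kubectl get nodes --no-headers)"; do sleep 5; done
```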
After the VM boots and the deployment completes:
```
https://automata.<vm-ip>.nip.io/
```

Find the VM IP from inside the VM:

```sh
hostname -I
```

Read the generated admin password:

```sh
cat /var/log/automata-deploy.log
```

Edit `modules/automata.nix` to customize the k3s and Automata deployment.
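The `nip.io` URL above embeds the VM's IP. Deriving it from `hostname -I` could be sketched as follows, assuming the first reported address is the reachable one (multi-homed VMs may differ):

```sh
#!/bin/sh
# Build the Automata URL from an IP, following the nip.io scheme above.
automata_url() {
  echo "https://automata.$1.nip.io/"
}

# Inside the VM, take the first address from `hostname -I`:
# ip=$(hostname -I | awk '{print $1}')
# automata_url "$ip"
automata_url 192.168.64.5
```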
Load a custom Automata image into the VM:
```sh
scp automata-local.tar root@<vm-ip>:/var/lib/automata/
k3s ctr images import /var/lib/automata/automata-local.tar
```

```sh
nix flake show
nix flake update
nix develop
nix fmt
```

```sh
nix build .#packages.x86_64-linux.automata-image
nix build .#packages.aarch64-linux.automata-image
nix build .#packages.x86_64-linux.automata-iso
nix build .#packages.x86_64-linux.automata-qcow-efi
nix build .#packages.aarch64-linux.automata-raw-efi
```

Use the containerized builder:
```sh
make build-docker TARGET_SYSTEM=aarch64-linux
make build-podman TARGET_SYSTEM=x86_64-linux
```

Inspect or reset the VM disk:

```sh
ls -la vm-disk/nixos.qcow2
rm -rf vm-disk
./scripts/run-vm.sh
```

Check k3s and the deployment:

```sh
systemctl status k3s
journalctl -u k3s -f
journalctl -u automata-deploy -f
```

Re-apply the Automata manifests:

```sh
kubectl apply -f /etc/automata/source/deploy/k8s
```