Note
Make sure host-based encryption is enabled in the subscription before you start.
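Enabling it generally means registering the EncryptionAtHost feature on the subscription first; a sketch with the Azure CLI (run against your own subscription):

```shell
# Register the Encryption at Host feature for the subscription
az feature register --namespace Microsoft.Compute --name EncryptionAtHost
# Wait until the state shows "Registered", then refresh the resource provider
az feature show --namespace Microsoft.Compute --name EncryptionAtHost --query properties.state
az provider register --namespace Microsoft.Compute
```

Registration can take a few minutes to propagate before VM deployments pick it up.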
Generate the .auto.tfvars from the template:

```shell
cp config/template.tfvars .auto.tfvars
```

Set your public IP address in the allowed_source_address_prefixes variable using CIDR notation:
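For example, if your public IP were 203.0.113.10 (a placeholder address), the entry would look like:

```hcl
allowed_source_address_prefixes = ["203.0.113.10/32"]
```

The /32 suffix restricts access to that single address.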
Create a temporary key for the Virtual Machine:

```shell
mkdir .keys && ssh-keygen -f .keys/azure
```

Deploy the resources:

```shell
terraform init
terraform apply
```

Connect to the VM and mount the data disk. The script at scripts/mount_new_disk.sh can be adapted for this.
Important
Make sure the mount is persistent after reboots.
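One common way to make the mount persist is an /etc/fstab entry keyed by filesystem UUID; a sketch, where the device name and mount point are assumptions matching the rest of this guide:

```shell
# Find the UUID of the data disk filesystem (device name is an assumption)
sudo blkid /dev/sdc1
# Append an fstab entry; "nofail" prevents boot hangs if the disk is detached
echo 'UUID=<uuid-from-blkid>  /data/disk1  ext4  defaults,nofail  0  2' | sudo tee -a /etc/fstab
# Verify the entry mounts cleanly without rebooting
sudo mount -a
```

Using the UUID instead of the device name avoids breakage when Azure reorders block devices across reboots.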
If storing secrets locally on disk is unavoidable, extra protections should be provisioned.
Important
When implementing advanced features, check any limits and restrictions that might apply:
- Tunneling from the origin to destination
- Restrict origin addresses at the destination (IP, SNI)
- Proper file permissions setup
- Strong admin user access control
- Disk encryption with customer-managed key (CMK)
- Platform-specific encryption technology (Azure Encryption-at-Host, ADE)
- Use HSM
Complex approaches:
- Use a custom kernel module to change root access permissions (SELinux, AppArmor)
- Security events monitoring (SIEM)
- Auditing
Other approaches (not as effective, side effects):
- Encrypted locally, but with the password in the same filesystem (a chicken-and-egg problem)
- Create the secret files with a hidden prefix (".")
- Use a random name for the files
There are different options for disk encryption, as described in this article, which also includes a comparison table.
Given this threat, there are several ways to increase the security of local secrets.
Log in as the super user:

```shell
sudo su -
```

Create the system user with the -r option (manual pages):

```shell
# A system user does not have a password or a home dir, and is unable to log in
useradd -r litapp
```

Create the application directory and assign ownership:

```shell
mkdir /opt/litapp
chown -R litapp /opt/litapp
```

Switch to the litapp user:

```shell
sudo -u litapp -s
```

Enter the directory and create the sample key:

```shell
cd /opt/litapp
ssh-keygen -f sample_rsa
```

Once the sample key is created, restrict access to the files to read-only:
Tip
The execute permission is required to cd into the directory
```shell
# Owner read-only for the files
chmod 400 /opt/litapp/sample_rsa
chmod 400 /opt/litapp/sample_rsa.pub
# Owner read and execute for the directory
chmod 500 /opt/litapp
```

For advanced protection against the root user, a custom kernel module might have to be written, such as with SELinux or AppArmor.
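The permission scheme above can be sanity-checked with stat; a minimal sketch in a throwaway directory (the paths are placeholders, not the real /opt/litapp):

```shell
# Reproduce the permission scheme in a scratch directory
dir=$(mktemp -d)
touch "$dir/sample_rsa"
chmod 400 "$dir/sample_rsa"   # owner read-only file
chmod 500 "$dir"              # owner read+execute directory
# Print the octal modes to confirm
stat -c '%a' "$dir/sample_rsa"   # prints 400
stat -c '%a' "$dir"              # prints 500
```

The same stat invocations can be run against /opt/litapp on the VM to confirm the real setup.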
Key Vaults might have limited capabilities for keys.
Important
This project uses a Key Vault with Private Link to test CMK scenarios (in case there are network restrictions)
A SIEM-like approach can be used to monitor these directories and react to user actions that could potentially compromise the secrets.
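One concrete option is Linux auditd; a sketch, assuming the auditd package is installed (the watch path and key name follow this guide's example):

```shell
# Watch the secrets directory for writes and attribute changes
sudo auditctl -w /opt/litapp -p wa -k litapp-secrets
# Review matching events later
sudo ausearch -k litapp-secrets
```

To make the rule survive reboots it would go in /etc/audit/rules.d/ instead of being set with auditctl.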
Create the user:
```shell
sudo adduser newusername
sudo usermod -aG sudo newusername
```

Verify:

```shell
groups newusername
su - newusername
sudo whoami
```

Set the SSH authentication key for the "newusername" account:
```shell
# On your server (logged in as an existing sudo user):
sudo mkdir -p /home/newusername/.ssh
sudo nano /home/newusername/.ssh/authorized_keys
```

Then change the ownership so the files belong to the new user:

```shell
sudo chown -R newusername:newusername /home/newusername/.ssh
sudo chmod 700 /home/newusername/.ssh
sudo chmod 600 /home/newusername/.ssh/authorized_keys
```

Edit the SSH config:

```shell
sudo nano /etc/ssh/sshd_config
```

Enable password authentication:

```
PasswordAuthentication yes
```

Restart the service:

```shell
sudo systemctl restart ssh
```

This section's main reference is DigitalOcean's guide on the sudoers file.
The user linda will be configured with limited privileges.
Create a standard user:
```shell
sudo adduser linda
```

Create the group on which to manage permissions:

```shell
sudo groupadd developers
sudo usermod -a -G developers linda
```

Grant SSH access to the user:

```shell
# Logged in as "linda"
sudo su - linda
mkdir -p /home/linda/.ssh
nano /home/linda/.ssh/authorized_keys
```

Verify the current permissions state:
```shell
# Confirm current privileges
id && groups && sudo -l
# Validate sudo policy health
sudo visudo -c || echo "Sudoers has errors - use console or pkexec visudo to repair."
# Verify absolute paths for commands you intend to allow (rules match full paths)
command -v systemctl
```

If required, remove the user from the sudo group:

```shell
sudo deluser username sudo
```

Privileged command executions are recorded by default in the /var/log/auth.log file.
The following commands can be used to search for commands executed by a specific user:
```shell
sudo grep 'CWD=/home/username' /var/log/auth.log
sudo grep 'PWD=/home/username' /var/log/auth.log
sudo grep "sudo:.*username" /var/log/auth.log
```

Edit a policy fragment:

```shell
# Edit or create a fragment safely
sudo visudo -f /etc/sudoers.d/99-developers
```

Validate the fragment:

```shell
sudo visudo -cf /etc/sudoers.d/99-developers
```

Create least-privilege rules (scoped access):
```
# Allow developers group to run apt commands
%developers ALL=(ALL) /usr/bin/apt-get, /usr/bin/apt, /usr/bin/apt-cache, /usr/bin/dpkg
# Allow developers group to mount/unmount with VeraCrypt
%developers ALL=(ALL) NOPASSWD: /usr/bin/veracrypt
```

Apply the correct permissions:

```shell
sudo chown root:root /etc/sudoers.d/99-developers
sudo chmod 0440 /etc/sudoers.d/99-developers
```

It's also possible to use aliases:
```
# /etc/sudoers.d/webops (edit with visudo -f)
# Command alias for power actions
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot
# Users allowed to run POWER
User_Alias GROUPTWO = brent, doris, eric
GROUPTWO ALL = POWER
```

To execute a disk migration:
- Create a snapshot of VM1 disk
- Create a new disk from the snapshot
- Attach the new disk to VM2
- Mount the disk
The script at scripts/mount_existing_disk.sh can be adapted for this.
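The migration steps above can be sketched with the Azure CLI; the snapshot and disk names below are placeholders, and the VM2 name is assumed to follow this project's naming pattern:

```shell
# 1. Snapshot the source data disk (disk name is a placeholder)
az snapshot create --resource-group rg-litware323-workload \
  --name snap-vm1-data --source <vm1-data-disk>
# 2. Create a new managed disk from the snapshot
az disk create --resource-group rg-litware323-workload \
  --name disk-vm2-data --source snap-vm1-data
# 3. Attach the new disk to VM2
az vm disk attach --resource-group rg-litware323-workload \
  --vm-name vm-litware323-vm2 --name disk-vm2-data
```

After attaching, the disk still has to be mounted inside the guest (step 4 of the procedure).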
Sample commands to get updated images:

```shell
# Canonical
az vm image list-offers --location eastus2 --publisher Canonical --output table
az vm image list --location eastus2 --publisher Canonical --offer ubuntu-22_04-lts --sku server --all --output table
# SUSE
az vm image list --location eastus2 --publisher SUSE --offer sles-15-sp7 --sku gen2 --architecture x64 --all --output table
```

This section will implement rsync for file synchronization. VM1 will connect to VM2 and pull the data. Both VMs must have rsync installed.
Both machines will have LUN0 mounted on /data/disk1. The initialization script will reboot to ensure mount persistence.
Run the following on both machines:

```shell
# Check cloud-init execution status
cloud-init status
# Verify that the mount is persistent after the reboot
echo "hello" > /data/disk1/hello.txt
# Ensure rsync is installed and versions are compatible
rsync --version
# (VM1 only) Verify network connectivity
telnet <VM2> 22
```

On the remote server VM2, create the rsync user:
```shell
sudo useradd -m -s /bin/bash rsync-test
# sudo passwd rsync-test
sudo -u rsync-test mkdir -p /home/rsync-test/.ssh
sudo -u rsync-test chmod 700 /home/rsync-test/.ssh
sudo -u rsync-test touch /home/rsync-test/.ssh/authorized_keys
# Optionally, lock the key to rsync
sudo nano /home/rsync-test/.ssh/authorized_keys
```

```
command="rsync --server --daemon .",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAC3NzaC1...
```

On the local server VM1, create a key pair:

```shell
ssh-keygen -f rsync
```

Copy the public key to the remote server VM2, under user rsync-test inside file .ssh/authorized_keys.
Check the login:

```shell
ssh -i rsync rsync-test@vm2.litware.internal
```

If needed, update the SSH public key of the admin user on VM1:

```shell
az vm user update \
  --resource-group rg-litware323-workload \
  --name vm-litware323-vm1 \
  --username azureuser \
  --ssh-key-value .keys/azure.pub
```

Procedure:
- Stop all services
- Create a VM snapshot (entire VM) and/or backup files before copy
- Lock all files
- Create users and groups on the copy target
- Dry run before the cutover
Create the test files on VM2:
```shell
sudo setfacl -R -m "u:azureuser:rwx" /data/disk1/
touch /data/disk1/file1.txt /data/disk1/file2.txt /data/disk1/file3.txt
```

Pull the files from the remote. These are some options to consider:

- -c enables checksum-based file comparison, which is more thorough than just comparing file sizes and modification times
- -h human-readable format
- -a archive (permissions, timestamps, links, etc.)
- -v verbose
- -z enables compression during file transfer
- -P combines two switches: --partial keeps partially transferred files; --progress shows transfer progress
- -r recursive
- -e specify the remote shell to use (default is SSH)
- --dry-run perform a trial run without making any changes
- --stats give some file-transfer stats
Grant permissions:

```shell
# VM2 rsync-test
sudo setfacl -R -m "u:rsync-test:rx" /data/disk1/
getfacl /data/disk1/
# VM1 azureuser
sudo setfacl -R -m "u:azureuser:rwx" /data/disk001/
getfacl /data/disk001/
```

Execute the copy:
Important
The running user must have the necessary privileges to apply the file permissions and ownership. The option -a (archive) alone is not enough. Running as root might be a good approach.
```shell
rsync -rchav --stats -e 'ssh -i ./rsync' rsync-test@vm2.litware.internal:/data/disk1/ /data/disk001
```