- use Linux (some syscalls in code are Linux-only)
- VM-to-VM communication: Forwards packets between VMs based on MAC addresses (software switching)
- VM-to-Physical NIC: Forwards packets from VMs to the physical network interface
- Physical NIC-to-VM: Receives packets from the physical NIC and delivers them to the appropriate VM
- MAC learning: Learns VM MAC addresses from the first packet and maintains a forwarding table
- High performance: Uses DPDK for zero-copy, low-latency packet processing
Packet flow:
- VM → VM: Packet from VM1's virtio TX queue → vhost-switch → VM2's virtio RX queue
- VM → Physical: Packet from VM's virtio TX queue → vhost-switch → Physical NIC TX
- Physical → VM: Packet from Physical NIC RX → vhost-switch → VM's virtio RX queue
The switch worker loop continuously:
- Drains packets from physical NIC RX queues and delivers to VMs
- Drains packets from VM virtio TX queues and routes them (to other VMs or physical NIC)
- for VM setup, see setup/setup_vm.md
# make sure to run this even if using TAP or ovs-dpdk (it applies CPU settings + Intel NIC config)
./setup/cpu/slice_cpu.sh tap
./setup/init-dpdk.sh
# reserve hugepages
# 24576 × 2 MB = 48 GB mem for hugepages
sudo sysctl -w vm.nr_hugepages=24576
grep Huge /proc/meminfo
# mount hugepage FS
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
# check mounts
mount | grep huge
# load VFIO kernel modules
sudo modprobe vfio
sudo modprobe vfio-pci
lsmod | grep vfio
sudo dmesg | grep -e DMAR -e IOMMU
# check NICs
sudo dpdk-devbind.py --status
# unmount
sudo umount -l /dev/hugepages
# Clean up any leftover hugepage files from previous runs
sudo umount -l /mnt/huge
sudo rm -f /mnt/huge/*
sudo rm -rf /dev/shm/rte_* # remove shm
sudo rm -f /dev/hugepages/tas_memory-
- copy .env.template to .env and fill in the values
- ETH_RX_CORES, ETH_TX_CORES, VHOST_RX_CORES, VHOST_TX_CORES are the number of cores to use for the fast path, configurable in .env
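An illustrative .env fragment using the variables named above — the values here are placeholders, not recommendations; pick core counts that match your slice_cpu.sh layout:

```
# example .env — values are illustrative
ETH_RX_CORES=2
ETH_TX_CORES=2
VHOST_RX_CORES=2
VHOST_TX_CORES=2
```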
# debug
tmux new -s dpdk
sudo ./build_and_run.sh debug 32 1
# terminal 2
tail -f switch.log
# test
sudo ./build_and_run.sh test 32 1
# kill process
sudo ps aux | grep vhost-switch | grep -v grep | awk '{print $2}' | xargs kill -9
# Or install and run from PATH
sudo ninja -C build install
sudo vhost-switch -l 2-3 -n 4 -b 0000:01:00.0 -- --portmask 0x1 --socket-file /mnt/huge/sock0 --stats 1
- run ./build_and_run.sh to check it builds and runs
- spin up a CH VM to test the TCP stack works
- eth0/ens4 sends an ARP packet every second, which is handy for testing vhost connectivity
- Linux refuses to send ICMP until ARP resolves.
- pretend the VM's gateway is 02:00:00:00:00:01
sudo ip neigh replace 10.10.1.1 lladdr 02:00:00:00:00:01 dev ens4 nud permanent
# or
sudo ip neigh del 10.10.1.1 dev ens4
sudo ip neigh add 10.10.1.1 lladdr 02:00:00:00:00:01 dev ens4 nud permanent
# On node1:
sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:17:00.0 -- --forward-mode=txonly --tx-first
# On node2 (in another terminal):
sudo tcpdump -i enp23s0f0np0 -n
- VM will now send TCP/UDP pkts asking for 8.8.8.8
- ping packets will also show up in the capture