This repo now contains two benchmark harnesses:
- `storage_perf_windows.ps1`: Windows guest benchmark runner using `diskspd` + Windows perf counters.
- `storage_perf.py`: Linux guest benchmark runner using `fio` + `iostat` + `/proc/stat`.
If your target is Windows guest behavior, use the PowerShell harness first.
Per workload + iteration:

- Throughput: `read_bw_mib_s`, `write_bw_mib_s`, `total_bw_mib_s`
- IOPS: `read_iops`, `write_iops`, `total_iops`
- Latency: `read_lat_mean_us`, `write_lat_mean_us`, plus `io_await_*` from disk counters
- Queue depth pressure: `io_queue_mean`, `io_queue_p95`
- Utilization proxy: `io_util_mean_pct`, `io_util_p95_pct`, `io_util_max_pct`
- CPU iowait:
  - Linux: populated from `/proc/stat`
  - Windows: not available natively, left blank (`cpu_iowait_*`), with `cpu_busy_*` included
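The `*_mean` / `*_p95` / `*_max` summaries above can all be derived from per-interval counter samples. A minimal sketch of that reduction (not the harness's actual code; `summarize` and the nearest-rank p95 are illustrative choices):

```python
import statistics

def summarize(samples):
    """Reduce a list of per-interval counter samples (e.g. disk %util)
    to the mean/p95/max shape the io_util_* columns use."""
    ordered = sorted(samples)
    # Nearest-rank p95: the value at index ceil(0.95 * n) - 1.
    idx = max(0, -(-len(ordered) * 95 // 100) - 1)
    return {
        "mean": statistics.fmean(samples),
        "p95": ordered[idx],
        "max": ordered[-1],
    }

print(summarize([10, 12, 11, 95, 13, 12, 11, 14, 12, 13]))
```

Note how a single 95% spike barely moves the mean but dominates p95 and max; that is why the harness reports all three.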
Outputs per run:

- `metadata.json`
- `workloads.json`
- `summary.csv`
- `results.csv` (same raw rows as `summary.csv`, for easier downstream naming)
- `dashboard.html` (single-run web dashboard)
- `dashboard.pdf` (single-run PDF report, best-effort via headless Edge/Chrome)
- `raw/` logs and counter snapshots
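Because `results.csv` carries the raw per-iteration rows, downstream aggregation is a few lines of stdlib Python. A sketch, assuming a `workload` column sits alongside the metric columns (check the actual header row of your `results.csv`):

```python
import csv
from collections import defaultdict
from statistics import fmean

def mean_iops_per_workload(path):
    """Group raw result rows by workload and average total_iops.
    The 'workload' column name is an assumption; adjust to match
    the real header row."""
    values = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            values[row["workload"]].append(float(row["total_iops"]))
    return {name: fmean(vals) for name, vals in values.items()}
```

The same pattern works for any of the metric columns listed above.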
- Windows guest with admin shell
- `diskspd.exe` on `PATH`, or let the harness auto-install it (default behavior)
- PowerShell 5.1+ or 7+
If `diskspd` is missing, the harness now tries, in order:

- Existing known folders (`.\tools\diskspd`, the local app data install dir, common Program Files paths)
- `winget` package install
- Official ZIP download/install (`https://aka.ms/diskspd`) into the local install dir
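The lookup order above is a first-hit scan over known locations before falling back to installation. A Python sketch of the idea (the harness itself is PowerShell; `find_diskspd` is illustrative, not its actual code):

```python
import os
import shutil

def find_diskspd(install_dir):
    """First-hit scan mirroring the lookup order described above:
    PATH, then .\\tools\\diskspd, then the local auto-install dir.
    Returns None so the caller can fall through to winget / ZIP."""
    candidates = [
        shutil.which("diskspd.exe") or shutil.which("diskspd"),
        os.path.join(".", "tools", "diskspd", "diskspd.exe"),
        os.path.join(install_dir, "diskspd.exe"),
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```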
```powershell
Set-Location C:\bench\storage-perf
.\storage_perf_windows.ps1 `
  -Label virtio-scsi `
  -TargetPath C:\bench\testfile.dat `
  -Size 20G `
  -DurationSeconds 60 `
  -WarmupSeconds 10 `
  -Repeat 3 `
  -OutputDir .\results
```

Then switch the VM disk bus to virtio-blk and run:
```powershell
.\storage_perf_windows.ps1 `
  -Label virtio-blk `
  -TargetPath C:\bench\testfile.dat `
  -Size 20G `
  -DurationSeconds 60 `
  -WarmupSeconds 10 `
  -Repeat 3 `
  -OutputDir .\results
```

Then compare the two runs:

```powershell
.\storage_perf_windows.ps1 -Compare `
  -A .\results\virtio-scsi-20260211-120000 `
  -B .\results\virtio-blk-20260211-123000 `
  -CompareOutput .\compare-scsi-vs-blk.csv
```

Compare mode also writes a matching HTML dashboard next to the CSV (same name, `.html` extension).
If browser-based PDF export is available, compare mode also writes a matching `.pdf`.
- `-DiskInstance "0 C:"` to pin which `PhysicalDisk(*)` instance gets summarized
- `-WorkloadsJson .\my_workloads.json` to override the workload matrix
- `-ExtraDiskSpdArg "-h"` (or repeated) to pass extra `diskspd` flags
- `-AutoInstallDiskSpd $false` to disable auto-install attempts
- `-DiskSpdInstallDir "$env:LOCALAPPDATA\diskspd"` to control the local auto-install location
- `-GeneratePdf $false` to skip PDF generation
- `-PdfBrowserPath "C:\Program Files\Microsoft\Edge\Application\msedge.exe"` to force a specific browser binary for PDF export
```json
[
  { "name": "seq_read_1m_qd32", "rw": "read", "bs": "1M", "o": 32, "t": 1 },
  { "name": "rand_rw_4k_70r_qd32", "rw": "randrw", "bs": "4K", "o": 32, "t": 1, "rwmixread": 70 }
]
```

Supported `rw` values: `read`, `write`, `randread`, `randwrite`, `randrw`.
Use `t` (threads) and `o` (outstanding I/O per thread) to explore low- vs. high-concurrency regimes, where backend differences usually appear.
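The effective queue depth of a workload is simply `t * o`, so the same total depth can be reached with one deep thread or several shallow ones. A small illustration (the second workload, with `t: 4`, is a hypothetical variant, not part of the sample matrix above):

```python
def effective_qd(workload):
    """Total outstanding I/O a workload keeps in flight:
    threads * outstanding I/O per thread."""
    return workload["t"] * workload["o"]

# Same total depth, different concurrency shape; backends can react
# very differently to these two even though effective QD matches.
for w in [{"name": "seq_read_1m_qd32", "t": 1, "o": 32},
          {"name": "rand_rw_4k_70r_qd32_4t", "t": 4, "o": 8}]:
    print(w["name"], "effective QD:", effective_qd(w))
```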
If you also want Linux-side confirmation:

```shell
python3 storage_perf.py run --label virtio-scsi --filename /mnt/bench/testfile --size 20G
python3 storage_perf.py run --label virtio-blk --filename /mnt/bench/testfile --size 20G
python3 storage_perf.py compare --a results/virtio-scsi-... --b results/virtio-blk-...
```

- Keep VM CPU/memory topology identical between `virtio-scsi` and `virtio-blk`.
- Keep backing storage identical (same host disk, cache mode, qcow/raw format).
- Reboot between variants, or at least clear caches and let background IO settle.
- Use same workload file size and test path.
- Keep test duration long enough for tail latency to stabilize (60s+ is a good start).
- Run at least 3 iterations and compare means + variability.
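For the last point, per-iteration means and spread plus the B-over-A delta are usually enough to judge whether a difference is real. A sketch, assuming you have already pulled one metric's per-iteration values out of each run's `results.csv` (the numbers in the example call are made up):

```python
from statistics import fmean, stdev

def delta_pct(a_vals, b_vals):
    """Compare two sets of per-iteration results (e.g. total_iops for
    virtio-scsi vs. virtio-blk): per-variant mean and sample stdev,
    plus the percentage change of B relative to A."""
    a_mean, b_mean = fmean(a_vals), fmean(b_vals)
    return {
        "a_mean": a_mean, "a_stdev": stdev(a_vals),
        "b_mean": b_mean, "b_stdev": stdev(b_vals),
        "delta_pct": 100.0 * (b_mean - a_mean) / a_mean,
    }

print(delta_pct([41000, 42500, 41800], [45200, 44600, 45900]))
```

If the two stdev values are large relative to the delta, run more iterations before drawing a conclusion.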