
feat: interrupt-driven VirtIO net PCI via GICv2m MSI-X#260

Merged
ryanbreen merged 7 commits into main from feat/pci-msi-networking
Mar 11, 2026

Conversation

@ryanbreen
Owner

Summary

  • True MSI-X interrupt-driven networking for VirtIO net PCI on ARM64/Parallels — replaces timer-polling for packet receive notification
  • NAPI-style dual interrupt suppression (device-level VRING_AVAIL_F_NO_INTERRUPT + GIC SPI disable) prevents GICv2m level-triggered SPI storm that caused system lockup
  • Deferred SPI enable — MSI-X configured at PCI level during init, GIC SPI only enabled after init_common() drains all pending work
  • IRQ-safe locks via DAIF save/restore on ARP cache and NET_CONFIG prevent deadlock between thread context and softirq handler
  • Linux virtqueue_enable_cb() pattern in re_enable_irq() for race-free interrupt re-enablement
  • GPU compositing improvements — per-window texture pool, simplified BWM
  • Process exit cleanup — free page tables, stacks, reparent children on exit
  • All 23 tests pass including DNS (34ms) and HTTP fetch. MSI-X interrupts confirmed active.
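The dual-suppression and race-free re-enable flow above can be modeled off-target. This is a minimal host-side simulation of the Linux virtqueue_enable_cb() pattern, assuming a simplified vring (the `VringSim` type and its fields are illustrative, not the driver's actual structures): clear the suppression flag, then re-check the used ring, and if the device raced in new buffers while notifications were off, report "more work" so the caller drains instead of sleeping.

```rust
// Hypothetical sketch of the virtqueue_enable_cb() re-enable pattern.
const VRING_AVAIL_F_NO_INTERRUPT: u16 = 1;

struct VringSim {
    avail_flags: u16,    // driver-owned avail ring flags
    used_idx: u16,       // device-written used-ring index
    last_seen_used: u16, // driver's consumption cursor
}

impl VringSim {
    fn disable_cb(&mut self) {
        self.avail_flags |= VRING_AVAIL_F_NO_INTERRUPT;
    }

    /// Returns true if interrupts were re-enabled cleanly, false if
    /// work raced in while suppressed and must be drained first.
    fn enable_cb(&mut self) -> bool {
        self.avail_flags &= !VRING_AVAIL_F_NO_INTERRUPT;
        // Race check: did the device publish new used entries
        // in the window before the flag clear became visible?
        self.used_idx == self.last_seen_used
    }
}

fn main() {
    let mut vq = VringSim { avail_flags: 0, used_idx: 0, last_seen_used: 0 };
    vq.disable_cb();
    vq.used_idx = 3;          // device delivered 3 packets while suppressed
    assert!(!vq.enable_cb()); // caller must drain before sleeping
    vq.last_seen_used = 3;    // backlog processed
    assert!(vq.enable_cb());  // now safe to wait for the next MSI
    println!("ok");
}
```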

Test plan

  • Full 23/23 test suite passes on Parallels ARM64 VM (via quick-test.sh)
  • DNS resolution works (34ms, improved from 58ms with polling)
  • HTTP fetch completes successfully
  • MSI-X interrupts confirmed firing via one-shot diagnostic counter
  • No lockups or SPI storms during init or steady-state operation
  • ARP/ICMP work correctly with IRQ-safe lock wrappers

🤖 Generated with Claude Code

ryanbreen and others added 7 commits March 11, 2026 05:44
Three-phase plan for replacing timer-based network polling with
interrupt-driven packet delivery on ARM64:
- Phase 1: VirtIO net PCI MSI on Parallels (GICv2m, proven path)
- Phase 2: E1000 on VMware (ITS or ACPI _PRT approaches)
- Phase 3: Generic PCI interrupt framework + dynamic dispatch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace timer-based 100Hz polling with true MSI interrupt delivery for
the VirtIO net PCI driver on Parallels (ARM64). Packets are now processed
immediately on arrival instead of waiting up to 10ms for the next poll.

- net_pci.rs: Add setup_net_pci_msi() following the proven xHCI/GPU MSI
  pattern — probe GICv2m at 0x0225_0000, allocate SPI, program MSI
  address/data, disable INTx, configure edge-triggered GIC delivery.
  Update handle_interrupt() with disable→clear→ack→enable SPI cycle.
  Add MSI interrupt counter (msi_interrupt_count()) for diagnostics.
- exception.rs: Add VirtIO net PCI to SPI dispatch chain in handle_irq().
- timer_interrupt.rs: Conditionalize polling — only poll when no MSI IRQ
  is configured (e1000 on VMware still uses polling fallback).
- procfs: Expose net_msi_irqs in /proc/stat on ARM64.
- build.rs: Remove cargo:warning= diagnostic messages.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
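The "program MSI address/data" step above reduces to computing one address/data pair per SPI. A hedged sketch: the 0x0225_0000 frame base comes from this commit, the 0x040 offset is the standard GICv2m MSI_SETSPI_NS doorbell register, and the SPI interrupt ID used below is an assumed allocation, not the driver's actual one.

```rust
// Sketch of the GICv2m MSI address/data computation. A device that
// writes `data` to `address` raises the SPI with that interrupt ID.
const GICV2M_FRAME_BASE: u64 = 0x0225_0000; // probe address from this PR
const GICV2M_MSI_SETSPI_NS: u64 = 0x040;    // standard GICv2m doorbell offset

fn msi_addr_data(spi_intid: u32) -> (u64, u32) {
    // GICv2m takes the raw SPI INTID as the MSI data payload.
    (GICV2M_FRAME_BASE + GICV2M_MSI_SETSPI_NS, spi_intid)
}

fn main() {
    // Hypothetical SPI allocated from the frame's MSI_TYPER range.
    let (addr, data) = msi_addr_data(77);
    assert_eq!(addr, 0x0225_0040);
    assert_eq!(data, 77);
    println!("MSI address={:#x} data={}", addr, data);
}
```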
…rent children

Previously, process page tables, stacks, and pending old page tables were
only freed when waitpid reaped the zombie. This leaked significant memory
for orphaned processes. Now exit_process() and handle_thread_exit() free
these resources immediately after CoW refcount cleanup.

Also reparents children to init (PID 1) on exit, cleans up window buffers
from WINDOW_REGISTRY so the compositor stops reading freed pages, and
removes noisy GPU/BWM perf logging (counters still available via GDB).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
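The reparent-to-init step can be sketched as a plain walk over a process table; the `Process` struct and map layout here are illustrative stand-ins, not the kernel's actual types:

```rust
use std::collections::HashMap;

const INIT_PID: u32 = 1;

struct Process {
    parent: u32,
}

/// On exit of `dying`, hand all of its children to init (PID 1)
/// so a blocking waitpid in init can still reap them.
fn reparent_children(table: &mut HashMap<u32, Process>, dying: u32) {
    for (pid, proc_) in table.iter_mut() {
        if *pid != dying && proc_.parent == dying {
            proc_.parent = INIT_PID;
        }
    }
}

fn main() {
    let mut table = HashMap::new();
    table.insert(1, Process { parent: 0 });
    table.insert(10, Process { parent: 1 });
    table.insert(20, Process { parent: 10 }); // orphan-to-be
    reparent_children(&mut table, 10);
    assert_eq!(table[&20].parent, INIT_PID);
    println!("ok");
}
```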
init: Replace waitpid(WNOHANG) + yield_now() busy-yield loop with
blocking waitpid(0). Init now sleeps in the kernel until a child
actually exits, instead of spinning one full CPU core forever.

bcheck: After tests complete, the results display loop was a tight
render/present cycle with no sleep — hundreds of FPS of identical
content burning 30-40% CPU. Now only redraws on scroll input and
sleeps 50ms when idle. Also handles CloseRequested event.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
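The init change swaps a spin loop for a blocking wait. The same shape in host Rust, with a Condvar standing in for the kernel's blocking waitpid (the names and counter are illustrative):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Blocking waitpid(0) analogue: sleep on a condvar until at least one
/// "child exit" has been recorded, then return the count. Replaces the
/// old waitpid(WNOHANG) + yield_now() spin that burned a full core.
fn reap_blocking(state: &(Mutex<u32>, Condvar)) -> u32 {
    let (lock, cvar) = state;
    let mut exited = lock.lock().unwrap();
    while *exited == 0 {
        exited = cvar.wait(exited).unwrap(); // sleeps, no busy CPU
    }
    *exited
}

fn main() {
    let state = Arc::new((Mutex::new(0u32), Condvar::new()));
    let child_state = Arc::clone(&state);
    let child = thread::spawn(move || {
        thread::sleep(Duration::from_millis(20));
        let (lock, cvar) = &*child_state;
        *lock.lock().unwrap() += 1; // "child exited"
        cvar.notify_one();
    });
    assert_eq!(reap_blocking(&state), 1);
    child.join().unwrap();
    println!("reaped");
}
```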
Plan to eliminate all CPU pixel copying from the rendering pipeline.
Linux probe proved per-frame path is SUBMIT_3D -> SET_SCANOUT ->
RESOURCE_FLUSH with zero TRANSFER_TO_HOST_3D. The "per-window textures
don't work" finding was a bug in our resource setup, not a Parallels
limitation.

Four phases: fix per-window textures, GPU compositing in BWM, client
GPU rendering (bounce), and GPU text rendering (font atlas).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Standalone C test that creates 2 TEXTURE_2D resources (400x300),
renders RED to one and BLUE to the other via separate SUBMIT_3D
batches, then composites both as textured quads onto the display
surface in a third SUBMIT_3D pass.

Proves per-window VirGL textures work on Parallels hardware, which
unblocks GPU-only compositing in BWM (Phase 1 of the GPU rendering
attack plan).

Build on Linux probe: gcc -O2 -o virgl_multi_texture_test virgl_multi_texture_test.c -ldrm

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ression

Implement true MSI-X interrupt-driven networking for VirtIO net PCI on
ARM64/Parallels, replacing timer-polling for packet receive notification.

Key changes:
- NAPI-style dual interrupt suppression (VRING_AVAIL_F_NO_INTERRUPT +
  GIC SPI disable) prevents GICv2m level-triggered SPI storm
- Deferred SPI enable: MSI-X configured at PCI level during init but
  GIC SPI only enabled after init_common() drains all pending work
- IRQ-safe locks via DAIF save/restore on ARP cache and NET_CONFIG to
  prevent deadlock between thread context and softirq handler
- Linux virtqueue_enable_cb() pattern in re_enable_irq() for race-free
  interrupt re-enablement with used-ring check
- GPU compositing improvements: per-window texture pool, simplified BWM
- Multi-texture VirGL test updates for Linux probe VM

All 23 tests pass including DNS (34ms) and HTTP fetch. MSI-X interrupts
confirmed firing via one-shot diagnostic counter.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
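The IRQ-safe lock pattern can also be modeled off-target. This host-side sketch substitutes an AtomicBool for the real DAIF interrupt masking; the actual driver saves the DAIF bits, masks, takes the spinlock, and restores on release, which this only imitates. The key invariant is restoring the *saved* state rather than blindly unmasking, so nested callers that already had interrupts off stay correct.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Stand-in for the CPU's interrupt-mask state (DAIF.I on ARM64).
static IRQS_MASKED: AtomicBool = AtomicBool::new(false);

/// Mask "interrupts" and return the previous mask state.
fn local_irq_save() -> bool {
    IRQS_MASKED.swap(true, Ordering::SeqCst)
}

/// Restore the previously saved mask state. Never blindly unmask:
/// the caller may already have been running with interrupts off.
fn local_irq_restore(saved: bool) {
    IRQS_MASKED.store(saved, Ordering::SeqCst);
}

/// Take a lock with interrupts masked, so the softirq handler cannot
/// preempt a thread-context holder and deadlock on the same lock.
fn with_irqsave_lock<T>(data: &Mutex<T>, f: impl FnOnce(&mut T)) {
    let saved = local_irq_save();
    {
        let mut guard = data.lock().unwrap();
        f(&mut guard);
    } // lock released before interrupts come back on
    local_irq_restore(saved);
}

fn main() {
    let arp_cache = Mutex::new(Vec::<u32>::new());
    with_irqsave_lock(&arp_cache, |c| c.push(0x0a00_0001));
    assert!(!IRQS_MASKED.load(Ordering::SeqCst)); // mask state restored
    assert_eq!(arp_cache.lock().unwrap().len(), 1);
    println!("ok");
}
```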
@ryanbreen ryanbreen merged commit d5ebb75 into main Mar 11, 2026
@ryanbreen ryanbreen deleted the feat/pci-msi-networking branch March 11, 2026 22:17