feat: add #simulate command#6

Open

rnbguy wants to merge 10 commits into verse-lab:veil-2.0-preview from rnbguy:feat/simulate

Conversation


@rnbguy rnbguy commented Mar 16, 2026

Random-walk state exploration for Veil -- runs random traces, checking the invariants at each step.

It finds shallow invariant violations faster than #model_check (exhaustive BFS), but it is not complete.
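The random-walk idea can be sketched roughly as follows; this is a minimal illustration, and all names here (`randomWalk`, `next`, `invs`) are illustrative placeholders rather than Veil's actual API:

```lean
-- Hypothetical sketch of a random-walk invariant check, NOT Veil's implementation.
-- `next` enumerates successor states, `invs` are Bool-valued invariants,
-- `fuel` bounds the trace length (cf. maxSteps).
partial def randomWalk {σ : Type} [Inhabited σ]
    (next : σ → List σ) (invs : List (σ → Bool))
    (s : σ) (fuel : Nat) (g : StdGen) (acc : List σ := []) : Option (List σ) :=
  if invs.any (fun inv => !inv s) then
    some (s :: acc).reverse            -- violation: return the counterexample trace
  else
    match fuel, next s with
    | 0, _ => none                     -- step budget exhausted
    | _, [] => none                    -- deadlock: no successor states
    | fuel' + 1, succs =>
      let (i, g') := randNat g 0 (succs.length - 1)
      randomWalk next invs succs[i]! fuel' g' (s :: acc)
```

An outer loop (cf. maxTraces) would rerun this from the initial state with fresh randomness until a violation is found or the trace budget runs out.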

Usage

```lean
-- basic
#simulate {}

-- with type/theory instantiation
#simulate { node := Fin 4 } { nextNode := fun n => n + 1 }

-- with config
#simulate {} (seed := 42, maxTraces := 1000, maxSteps := 50)
```

Benchmark

Search time only (Lean loading overhead subtracted) on my machine:

| Example | #model_check interpreted | #simulate |
| --- | --- | --- |
| DieHard | 1.4s | 111ms |
| RiverCrossing | 2.4s | 5ms |
| BuggyCircularBuffer | 0.9s | 1ms |
| Traffic | 1.2s | 2ms |
| MutexViolation | 22s | 378ms |

All five have known violations.

Disclaimer

Contains LLM-generated code.

@dranov
Contributor

dranov commented Mar 16, 2026

Thank you, @rnbguy! This looks good.

I'll have time to look at this more closely and merge it later in the week, after the OOPSLA deadline. (For future maintainability, I want to make sure #simulate and #model_check share as much code as possible.)

However, I'm wondering whether you're running into a bug with #model_check. We have two modes of operation for the model checker: (1) compiled and (2) interpreted.

By default, #model_check is supposed to run the model checker in interpreted mode while doing the compilation in the background (which can take quite long, as you're seeing). If the interpreted mode finds a violation, that gets displayed immediately; there's no waiting for compilation to finish.

For me, #model_check finds a violation within 1 second for all the benchmarks in your table. The timing you're seeing makes me think that somehow only the compiled mode is running for you.

What do you see when you run #model_check? Is it something like this? (This shows the interpreted model checker running — states are being explored — whilst compilation happens in the background.)

[screenshot: the interpreted model checker exploring states while compilation runs in the background]

@rnbguy
Author

rnbguy commented Mar 16, 2026

hey @dranov! Good luck with the OOPSLA deadline 🍀 I am just playing around with Veil 😄, so there is no rush.

You're correct. I was using the CLI (`lake lean <example>.lean`), so I'm sure it included the compilation too.

I just ran with `#model_check interpreted {} {}` and also validated the numbers in VS Code.

| Example | #model_check interpreted | #simulate |
| --- | --- | --- |
| DieHard | 1.4s | 111ms |
| RiverCrossing | 2.4s | 5ms |
| BuggyCircularBuffer | 0.9s | 1ms |
| Traffic | 1.2s | 2ms |
| MutexViolation | 22s | 378ms |

Thanks for taking the time to point this out. 🙌🏼

@rnbguy rnbguy force-pushed the feat/simulate branch 2 times, most recently from 211e20b to f822ebe on March 16, 2026 at 05:10
@dranov
Contributor

dranov commented Mar 25, 2026

@rnbguy Apologies for the delay. I'll let @zqy1018 handle integrating this. He developed and is in charge of the model checker in Veil.

We'd want #simulate to have a soundness proof, similar to the soundness and completeness proof of #model_check's new version, and that might require a rewrite. @zqy1018 will look into it.
