feat(sim): playback speed, edge animation, template scenarios #4
albertgwo wants to merge 14 commits into feat/interactive-simulation
Conversation
| "dependencies": {
|   "@ruminaider/flowprint-editor": "workspace:*",
|   "@ruminaider/flowprint-engine": "workspace:*",
|   "@ruminaider/flowprint-schema": "workspace:*",
|   "react": "^19.0.0",
packages/app now depends directly on @ruminaider/flowprint-engine, so the dependency graph defined in CLAUDE.md (app may only depend on flowprint-editor + flowprint-schema) is violated, which introduces an undocumented boundary breach; can we remove the direct engine dep and expose simulation through the approved editor layer instead?
Finding type: AI Coding Guidelines | Severity: 🟠 Medium
Prompt for AI Agents:
In packages/app/package.json around lines 19-23, the package was changed to add a direct
dependency on @ruminaider/flowprint-engine which violates the CLAUDE.md dependency rules
(app may only depend on flowprint-editor + flowprint-schema). Remove the
"@ruminaider/flowprint-engine": "workspace:*" entry from dependencies. Then search the
app source for any direct imports/usages of @ruminaider/flowprint-engine and refactor
those files to import the needed simulation API from @ruminaider/flowprint-editor
instead (or add adapter functions in the editor package if necessary). After these
changes, run a build and tests to ensure no direct engine references remain and update
package.json lockfiles if needed.
| const handleSelectScenario = useCallback(
|   (scenario: TemplateScenario | null) => {
|     setRulesDataMap(scenario?.rulesData ?? {})
|   },
|   [],
handleSelectScenario writes scenario rules (or {}) directly into the shared rulesDataMap that the editor reads, and nothing restores the project's rules after the simulation panel closes, so selecting a template permanently hides the real project rules; can we keep scenario rules scoped to the simulation instead of mutating rulesDataMap?
Finding type: Logical Bugs | Severity: 🔴 High
Prompt for AI Agents:
In packages/app/src/App.tsx around lines 43 to 47, the handleSelectScenario callback
writes scenario.rulesData into the shared rulesDataMap which permanently overwrites
project rules. Change this by introducing a new state const [simulationRules,
setSimulationRules] = useState<RulesDataMap>({}) and modify handleSelectScenario to call
setSimulationRules(scenario?.rulesData ?? {}) instead of setRulesDataMap. Then compute
an effectiveRules variable (e.g. const effectiveRules = simulation.isActive ?
simulationRules : rulesDataMap) and pass effectiveRules to FlowprintEditor as the
rulesDataMap prop. Finally, ensure simulation.stop() and the simulation panel's close path
call setSimulationRules({}) so project rules are preserved when simulation ends.
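The fix described in the prompt can be sketched as a pure selector (the names selectEffectiveRules and RulesDataMap below are illustrative, not the actual App.tsx code):

```typescript
type RulesDataMap = Record<string, unknown>

// Pick which rules the editor should see: scenario rules only while a
// simulation is active, the project's own rules otherwise. The project
// map is never mutated, so closing the panel restores it automatically.
function selectEffectiveRules(
  simulationActive: boolean,
  simulationRules: RulesDataMap,
  projectRules: RulesDataMap,
): RulesDataMap {
  return simulationActive && Object.keys(simulationRules).length > 0
    ? simulationRules
    : projectRules
}
```

Because the project map is returned by reference rather than overwritten, no restore step is needed when the panel closes.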
Commit 728ac41 addressed this comment by introducing simulationRules state and keeping scenario-specific rules isolated from the shared rulesDataMap. The new effectiveRules combine project rules with simulation rules only when needed and FlowprintEditor now receives those scoped rules, and all simulation stop/close paths clear simulationRules to restore project data.
| const colors: Record<string, { bg: string; fg: string }> = {
|   completed: { bg: '#a6e3a1', fg: '#1e1e2e' },
|   matched: { bg: '#89b4fa', fg: '#1e1e2e' },
|   activated: { bg: '#b4befe', fg: '#1e1e2e' },
|   error: { bg: '#f38ba8', fg: '#1e1e2e' },
Simulation handlers emit statuses such as fixture, timeout, skipped, no-match, and fired (simulator.ts/walker.ts) but StatusBadge only maps completed|matched|activated|error|handled|reached|default, so those traces always drift to the generic default badge with no UI mapping when new statuses slip in; can we widen the literal union/badge table to cover every emitted status so the panel stays in sync?
Finding type: Type Inconsistency | Severity: 🟢 Low
Prompt for AI Agents:
In packages/app/src/components/SimulationPanel.tsx around lines 89 to 106, the
StatusBadge function only maps a limited set of statuses (completed, matched, activated,
error, handled, reached, default) which causes simulator-emitted statuses like fixture,
timeout, skipped, no-match, fired to fall back to the generic default badge. Update
StatusBadge to include explicit entries for all statuses emitted by the simulator (at
least fixture, timeout, skipped, no-match, fired) with appropriate bg/fg colors, and
change the component prop typing to use the simulator's official status union type if
available (or accept string and normalize to lower-case). If a shared type exists (e.g.,
from simulator.ts/walker.ts), import and use it so the badge mapping stays in sync;
otherwise add a comment linking to the source of truth and ensure unknown statuses still
render a sensible fallback.
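A minimal sketch of the widened badge table as a pure lookup, assuming the extra statuses listed above; the added colors (teal, peach, yellow, mauve, overlay grey) are illustrative picks, not the project's palette:

```typescript
type BadgeColors = { bg: string; fg: string }

// Table covering the statuses the simulator is said to emit; entries
// beyond completed/matched/activated/error are hypothetical additions.
const STATUS_COLORS: Record<string, BadgeColors> = {
  completed: { bg: '#a6e3a1', fg: '#1e1e2e' },
  matched: { bg: '#89b4fa', fg: '#1e1e2e' },
  activated: { bg: '#b4befe', fg: '#1e1e2e' },
  error: { bg: '#f38ba8', fg: '#1e1e2e' },
  fixture: { bg: '#94e2d5', fg: '#1e1e2e' },
  timeout: { bg: '#fab387', fg: '#1e1e2e' },
  skipped: { bg: '#9399b2', fg: '#1e1e2e' },
  'no-match': { bg: '#f9e2af', fg: '#1e1e2e' },
  fired: { bg: '#cba6f7', fg: '#1e1e2e' },
}

const DEFAULT_COLORS: BadgeColors = { bg: '#6c7086', fg: '#1e1e2e' }

// Normalize to lower-case and fall back for any status the table misses,
// so unknown statuses still render a sensible badge.
function statusColors(status: string): BadgeColors {
  return STATUS_COLORS[status.toLowerCase()] ?? DEFAULT_COLORS
}
```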
| hit_policy: 'first',
| inputs: ['order.total_amount'],
| rules: [
|   {
|     when: { 'order.total_amount': { gte: 200 } },
rulesData['rules/routing.rules.yaml'] (hit_policy/inputs/rules) is copy-pasted into each order_routing scenario (see same block at lines 125‑141 and 158‑174), so changing routing logic requires editing three literal blocks; can we extract this routing rules data into a shared helper/constant and reuse it instead?
Finding type: Code Dedup and Conventions | Severity: 🟢 Low
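One way to share the block, sketched with a hypothetical constant and helper (the `then` clause shown is invented for illustration; only the `when` condition appears in the diff):

```typescript
// Hypothetical shared routing rules; the real entries live in the
// order_routing scenarios' rulesData blocks.
const ORDER_ROUTING_RULES = {
  hit_policy: 'first',
  inputs: ['order.total_amount'],
  rules: [
    // The `then` payload here is a made-up placeholder.
    { when: { 'order.total_amount': { gte: 200 } }, then: { route: 'manual_review' } },
  ],
} as const

// Each scenario calls this instead of pasting the literal, so the
// routing logic is edited in a single place.
function withRoutingRules(
  rulesData: Record<string, unknown>,
): Record<string, unknown> {
  return { ...rulesData, 'rules/routing.rules.yaml': ORDER_ROUTING_RULES }
}
```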
| hit_policy: 'first',
| inputs: ['visit.type'],
| rules: [
|   {
|     when: { 'visit.type': { eq: 'routine' } },
The rulesData['rules/provider-assignment.rules.yaml'] block in patient_intake.existing-routine is identical to the one in patient_intake.partial-coverage-urgent (and the scheduling block below is also duplicated); can we share these provider/scheduling rule definitions so updating the logic in one place keeps both scenarios aligned?
Finding type: Code Dedup and Conventions | Severity: 🟢 Low
| // Rules evaluation (browser-safe, for standalone rules testing)
| export { evaluateRules } from './rules/core.js'
| export type { ExpressionEvaluator } from './rules/core.js'
| export type {
evaluateRules now re-exports from rules/core, so /browser exposes the (doc, ExecutionContext[, expressionEvaluator]) signature instead of the previous (doc, input) helper; any consumer importing from @ruminaider/flowprint-engine/browser expecting the old signature will stop compiling/executing. Can we keep the adapter exposed or keep exporting the original browser evaluator to maintain the old API?
Finding type: Breaking Changes | Severity: 🔴 High
Prompt for AI Agents:
In packages/engine/src/browser.ts around lines 11-14, the exported evaluateRules was
changed to re-export the core implementation and now exposes the new signature (doc,
ExecutionContext[, expressionEvaluator]), which breaks the previous browser API that
accepted (doc, input). Restore the old browser adapter by importing the core
evaluateRules under a different name (e.g., evaluateRulesCore) and then export a wrapper
named evaluateRules that accepts the original parameters (doc, input[,
expressionEvaluator]), constructs an ExecutionContext from the input when necessary (and
forwards an optional expressionEvaluator), and calls the core function. Also ensure the
exported types for evaluateRules and any related types reflect the original
browser-facing signatures so existing consumers continue to compile.
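The wrapper pattern might look like the sketch below; evaluateRulesCore here is a stand-in stub, and the ExecutionContext shape is assumed rather than taken from rules/core.ts:

```typescript
// Assumed shapes for illustration only; the real types live in rules/core.ts.
type ExecutionContext = { input: Record<string, unknown> }
type RulesDoc = Record<string, unknown>
type Evaluator = (expr: string, ctx: ExecutionContext) => unknown

// Stand-in for the core implementation that would be imported under a
// different name (e.g. `import { evaluateRules as evaluateRulesCore }`).
function evaluateRulesCore(
  doc: RulesDoc,
  ctx: ExecutionContext,
  evaluator?: Evaluator,
): Record<string, unknown> {
  return { doc, ctx, evaluator: evaluator !== undefined }
}

// Browser-facing wrapper preserving the old (doc, input) signature:
// build an ExecutionContext from the raw input, then delegate to core.
function evaluateRules(
  doc: RulesDoc,
  input: Record<string, unknown>,
  evaluator?: Evaluator,
): Record<string, unknown> {
  return evaluateRulesCore(doc, { input }, evaluator)
}
```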
| const simState = useNodeHighlight(nodeId)
| const simAnimation = useSimulationAnimation()
|
| // When stepping forward, delay the "active" glow until the particle arrives
| const isDelayedActive = simState === 'active' && simAnimation.isForwardStep
isDelayedActive now depends on simAnimation.isForwardStep, but the animation context/provider defaults to {isForwardStep: true} when no config is supplied (SimulationContext.tsx:38‑40 / FlowprintEditor default config). Any caller that only provides nodeHighlights without simulation animation info now always has isDelayedActive true, so active highlights are postponed even when no playback step is happening; can we start with isForwardStep:false (or otherwise guard against the default) so highlights aren't delayed outside actual forward steps?
Finding type: Logical Bugs | Severity: 🔴 High
Prompt for AI Agents:
In packages/editor/src/nodes/NodeShell.tsx around lines 33 to 38, the NodeShell function
computes isDelayedActive using simAnimation.isForwardStep which can default to true when
the SimulationContext is unconfigured, causing highlights to be delayed incorrectly.
Change the logic to treat missing or default simAnimation as not a forward step: e.g.
read isForwardStep with optional chaining and an explicit boolean default (const
isDelayedActive = simState === 'active' && (simAnimation?.isForwardStep === true)), or
otherwise guard with !!simAnimation && simAnimation.isForwardStep. Ensure the import/use
at lines ~34-36 still works and that no other code paths rely on the previous truthy
default. If preferred, alternatively change the SimulationContext provider default to
isForwardStep:false and update usages accordingly.
Commit bfd2b10 addressed this comment by making the SimulationContext default animation config start with isForwardStep: false, so consumers without an explicit animation state no longer force isDelayedActive to true. It also updates the related renderer test to assert the new default, ensuring highlights won’t be delayed when no forward step is active.
| // Check if fixture signals an error and node has error.catch
| const isErrorFixture =
|   fixture === '_error' ||
|   (typeof fixture === 'object' &&
|     fixture !== null &&
|     '_error' in (fixture as Record<string, unknown>) &&
|     (fixture as Record<string, unknown>)._error === true)
| const catchTarget = node.error?.catch
| const next = isErrorFixture && catchTarget ? catchTarget : node.next
|
| return {
|   node_id: nodeId,
Simulator now routes _error fixtures straight to node.error.catch without checking that catch points to an existing error node, but runner still rejects non-error targets, so a doc with an invalid catch reference would take the handler in simulation but blow up in real execution; can we mirror the runner's isErrorNode guard before routing? The relevant code is here where isErrorFixture, catchTarget and next are computed. Also review related tests in packages/engine/src/tests/simulator.test.ts which exercise error.catch routing to ensure simulation matches runtime validation.
Finding type: Logical Bugs | Severity: 🔴 High
Prompt for AI Agents:
In packages/engine/src/simulator/simulator.ts around lines 109-123, the logic that
computes isErrorFixture, catchTarget and next routes _error fixtures to node.error.catch
even when the catch target isn't an actual error node. Refactor by adding the same guard
the runner uses (e.g., an isErrorNode check or resolving catchTarget to a node and
confirming its type) before treating catchTarget as the next node; if the check fails,
keep next = node.next and status = 'completed'. Also update or add assertions in
packages/engine/src/__tests__/simulator.test.ts to ensure simulation mirrors runtime
validation when error.catch references are invalid.
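A sketch of the guarded routing as a pure function; SimNode, errorCatch, and resolveNext are illustrative stand-ins for the engine's actual node model and isErrorNode helper:

```typescript
// Minimal node model for illustration only.
type SimNode = { id: string; type: string; next?: string; errorCatch?: string }

// Stand-in for the runner's guard: only nodes of type 'error' qualify.
function isErrorNode(node: SimNode | undefined): boolean {
  return node?.type === 'error'
}

// Mirror the runner: take error.catch only when the fixture signals an
// error AND the catch target resolves to a real error node; otherwise
// fall back to the node's normal next pointer.
function resolveNext(
  node: SimNode,
  isErrorFixture: boolean,
  nodesById: Map<string, SimNode>,
): string | undefined {
  const catchTarget = node.errorCatch
  if (isErrorFixture && catchTarget && isErrorNode(nodesById.get(catchTarget))) {
    return catchTarget
  }
  return node.next
}
```

With this guard, an invalid catch reference falls back to `next` in simulation, matching what the runner would enforce at runtime.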
Commit 06f19ff addressed this comment by importing isErrorNode and verifying catchTarget resolves to a real error node before routing _error fixtures, so invalid references now fall back to the node’s next step and only valid catches change the status to error-caught.
| function buildCumulativeContext(
|   steps: {
|     stepOutput?: { nodeId: string; value: unknown }
|     branchOutputs?: Record<string, unknown>
|   }[],
|   upToIndex: number,
| ): Record<string, unknown> {
|   const ctx: Record<string, unknown> = {}
|   for (let i = 0; i <= upToIndex; i++) {
|     const step = steps[i]
|     if (step?.stepOutput) ctx[step.stepOutput.nodeId] = step.stepOutput.value
|     if (step?.branchOutputs) {
buildCumulativeContext copies the loop/merge logic (iterate steps, apply stepOutput and branchOutputs entries) already implemented in useSimulation.buildTraceSnapshots (lines ~149-157); any future change to how cumulative contexts are computed would need updating in both places, so can we share a helper or rely on the precomputed snapshots instead of duplicating the merge?
Finding type: Code Dedup and Conventions | Severity: 🟢 Low
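The shared helper could be extracted along these lines (mergeStepOutputs is a hypothetical name; the StepLike shape mirrors the parameter type shown in the diff):

```typescript
type StepLike = {
  stepOutput?: { nodeId: string; value: unknown }
  branchOutputs?: Record<string, unknown>
}

// Single shared merge for both call sites: fold step outputs and branch
// outputs into one context object, in step order, up to an index.
function mergeStepOutputs(
  steps: StepLike[],
  upToIndex: number,
): Record<string, unknown> {
  const ctx: Record<string, unknown> = {}
  for (let i = 0; i <= upToIndex && i < steps.length; i++) {
    const step = steps[i]
    if (step?.stepOutput) ctx[step.stepOutput.nodeId] = step.stepOutput.value
    if (step?.branchOutputs) Object.assign(ctx, step.branchOutputs)
  }
  return ctx
}
```

Both buildCumulativeContext and the snapshot builder could then delegate to this one function.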
| // Previous step's node holds active appearance during particle transit
| if (i > 0) {
|   const prevNode = steps[i - 1]
|   if (prevNode && prevNode.status !== 'error') {
|     snapshot[prevNode.node_id] = 'departing'
|     if (prevNode.branchNodeIds) {
|       for (const branchId of prevNode.branchNodeIds) {
|         snapshot[branchId] = 'departing'
|       }
|     }
|   }
When iterating steps the previous node (prevNode) and its branch nodes are set to 'departing' here. However, elsewhere the current step's node is immediately set to 'active', so when prevNode.node_id === step.node_id (e.g. parallel hub entered → completed) the 'departing' highlight gets overwritten and never renders. To preserve the departing visual for such cases, skip reassigning to 'active' when IDs match, or otherwise preserve the 'departing' state when prevNode.node_id === step.node_id. Note: the code that sets the node to 'active' is outside this hunk — review that assignment in conjunction with this block as the two together cause the overwrite.
Finding type: Logical Bugs | Severity: 🟠 Medium
Prompt for AI Agents:
In packages/app/src/hooks/useSimulation.ts around lines 138-148, the logic in this block
sets prevNode and its branch nodes to 'departing', but elsewhere the current step's node
is immediately set to 'active', which overwrites 'departing' when prevNode.node_id ===
step.node_id. Modify this code and the separate location that assigns 'active' so that
when prevNode.node_id === step.node_id (or when snapshot[id] === 'departing') you do not
override the 'departing' state: either skip setting 'active' for matching IDs or guard
the 'active' assignment with a check that preserves 'departing'. Apply the minimal
change to ensure the departing visual is not lost (prefer adding an if-check before
assignment) and add a short comment explaining why the guard is necessary.
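The guard can be isolated as a small helper, sketched here with an illustrative markActive name:

```typescript
type Highlight = 'active' | 'departing' | 'visited'

// Guarded assignment: never let the current step's 'active' overwrite a
// 'departing' mark left by the previous step on the same node id (the
// parallel hub entered -> completed case described above).
function markActive(
  snapshot: Record<string, Highlight>,
  nodeId: string,
): Record<string, Highlight> {
  if (snapshot[nodeId] !== 'departing') snapshot[nodeId] = 'active'
  return snapshot
}
```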
Force-pushed from 2cb121b to c271bb7
…cenarios

Add simulation playback polish with three major features:
1. Edge animation: animated signal/particle traverses edges using SMIL animateMotion with multi-layered energy bead (Catppuccin green). Pre-computed trace snapshots give O(1) highlight lookups per step.
2. Playback speed control: 0.5x-4x speed selector in SimulationPanel with interval scaling for auto-play and particle duration.
3. Template scenarios: 26 example scenarios across 10 templates with prefilled input, fixtures, and embedded rules data. Switch fixture support added to engine for label-based case matching.
The active node now stays visually neutral during the particle transit, then lights up when the animation completes. Uses CSS animation-delay with animation-fill-mode:both to hold the pre-animation state.
Use var(--fp-bg-node) and var(--fp-border-default) at keyframe 0% so the node looks unchanged during the delay, not transparent.
The simulator's onAction handler always routed to node.next, ignoring
error.catch. This meant templates with error handlers (CI/CD, insurance)
never took the error path. Added _error fixture convention to trigger
error.catch routing.
Fixed CI/CD build-failure scenario to use { _error: true } so it routes
to handle_build_failure. Fixed insurance counter-offer scenario that
infinite-looped (static fixture always returned 'Counter-offer') by
renaming to senior-review with 'Accepted' outcome.
Added e2e test validating all 24 template×scenario pairs reach terminal
nodes without crashes, no-match fallthrough, or infinite loops.
The auto-play timer (1500/speed ms) didn't account for the particle animation duration (1200/speed ms) or the 300ms CSS glow transition. At 2x+ speed, the next step fired before the particle finished or the node glow appeared, making edges look unanimated. The step interval is now particle duration + 500ms settle time, ensuring the particle completes and the node glow is visible before advancing:
- 1x: 1700ms (was 1500ms)
- 2x: 1100ms (was 750ms)
- 4x: 900ms (was 375ms)
Parallel hubs now emit an "entered" step before branch steps so the hub node lights up before branches fan out. The edge snapshot builder skips re-animating edges for the hub's "completed" step to avoid wrong visuals.
The hub→join edge physically crosses over branch nodes in the layout, causing the particle animation to visually "run over" branches. Now skip edge animation when the previous step is a parallel "completed" step (i.e. we're arriving at the join target).
Collapse parallel branch steps into a single grouped "completed" step with branchNodeIds so all branches highlight simultaneously instead of sequentially. Fix edge particle animations in both step-through and autoplay by making the SMIL trigger effect depend on highlight state and stepKey, and restructure autoplay to use stepForward() via refs for identical behavior in both modes.
- Add branchNodeIds/branchOutputs to SimulationStep type
- Collapse per-branch steps into grouped completed step in onParallel
- Update buildTraceSnapshots for simultaneous branch highlighting
- Add stepKey to SimulationAnimationConfig for animation remounting
- Fix EdgeSimulationOverlay useEffect deps for reliable SMIL triggers
- Add parallel branch highlight test
Edge overlay now fades out after the particle arrives instead of persisting until the next step. Source node holds its active glow during particle transit via a new 'departing' highlight state, then transitions to visited.
Force-pushed from f7dfd91 to 4037c72
| /**
|  * Build a lookup from "source->target" to React Flow edge ID.
|  * Mirrors the ID pattern in layout-engine.ts: `e-${source}-${target}-${index}`
|  */
| function buildEdgeLookup(doc: FlowprintDocument): Map<string, string> {
|   const schemaEdges = getEdges(doc)
|   const lookup = new Map<string, string>()
|   for (let i = 0; i < schemaEdges.length; i++) {
|     const edge = schemaEdges[i]
|     if (!edge) continue
|     const key = `${edge.source}->${edge.target}`
|     // First occurrence wins (matches computeEdges index ordering)
buildEdgeLookup maps each ${source}->${target} to a single edge ID, so for graphs that have multiple edges between the same nodes (e.g. the triage switch in packages/cli/templates/patient-intake/blueprint.flowprint.yaml has both the Urgent and Routine cases targeting assign_provider), only the first edge ID can ever be highlighted and EdgeSimulationOverlay will animate the wrong (or never the) case-specific edge.
Finding type: Logical Bugs | Severity: 🔴 High
Prompt for AI Agents:
In packages/app/src/hooks/useSimulation.ts around lines 45 to 59, the buildEdgeLookup
function creates a Map<string,string> and only stores the first edge ID for a given
`${source}->${target}` key, which drops multi-edge cases and causes incorrect edge
highlighting. Refactor buildEdgeLookup to return Map<string,string[]> (store an array of
all edge IDs for each key). Then update usages in buildTraceSnapshots (the logic around
lines ~96-129) to handle arrays: when looking up an edge, iterate the array and choose a
best match (fall back to the first element if no better heuristic), and when animating
fan-out edges, mark all IDs in the array as traversing. Ensure types are updated (Edge
lookup map type and any edgeId variable handling) and keep behavior unchanged when only
one edge exists.
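A sketch of the multi-edge lookup, using a simplified SchemaEdge shape in place of the real getEdges(doc) result:

```typescript
type SchemaEdge = { source: string; target: string }

// Keep every edge ID per source->target pair instead of first-wins, so
// parallel edges between the same two nodes stay addressable. The
// `e-${source}-${target}-${index}` pattern mirrors layout-engine.ts.
function buildEdgeLookup(edges: SchemaEdge[]): Map<string, string[]> {
  const lookup = new Map<string, string[]>()
  edges.forEach((edge, i) => {
    const key = `${edge.source}->${edge.target}`
    const id = `e-${edge.source}-${edge.target}-${String(i)}`
    const ids = lookup.get(key)
    if (ids) ids.push(id)
    else lookup.set(key, [id])
  })
  return lookup
}
```

Callers then iterate the array and pick a best match (falling back to the first entry), or mark every ID as traversing when fanning out.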
Commit 06f19ff addressed this comment by introducing the new EdgeLookups structure that maintains both a basic source→target map and a case-specific map. buildTraceSnapshots now consults edgeLookup.byCase for transitions with matched_case before falling back to the pair map, ensuring switch case edges that share the same source/target are resolved to the correct edge ID.
| const handleInputChange = useCallback(
|   (value: string) => {
|     setInputText(value)
|     if (selectedScenarioId) {
|       setSelectedScenarioId(null)
|       onSelectScenario?.(null)
|     }
|   },
|   [selectedScenarioId, onSelectScenario],
| )
|
| const handleFixturesChange = useCallback(
handleInputChange/handleFixturesChange always call onSelectScenario(null) after any manual edit, so App immediately resets rulesDataMap to {} (keep in mind handleSelectScenario in App only sets the scenario's rulesData), meaning editing a scenario input silently drops the rules such as rules/routing.rules.yaml/rules/provider-assignment.rules.yaml and the simulation no longer matches the template; can we keep the selected scenario's rules data (or ask before clearing) when the user tweaks the JSON?
Finding type: Logical Bugs | Severity: 🔴 High
Prompt for AI Agents:
In packages/app/src/components/SimulationPanel.tsx around lines 174 to 191, the
handleInputChange and handleFixturesChange callbacks unconditionally clear the selected
scenario by calling setSelectedScenarioId(null) and onSelectScenario?.(null), which
causes the app to drop the scenario's rulesData. Change both handlers to prompt the user
before clearing the selected scenario: if selectedScenarioId is set, show a confirmation
(e.g. window.confirm) asking whether to abandon the selected scenario (and its rules)
before clearing; only call setSelectedScenarioId(null) and onSelectScenario?.(null) when
the user confirms. If the user cancels, keep the selected scenario (do not clear or call
onSelectScenario), but still update the inputText/fixturesText locally. This preserves
the scenario.rulesData unless the user explicitly agrees to discard it.
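The confirm-before-clear decision can be kept testable by injecting the prompt (window.confirm in the component), as in this illustrative helper; shouldClearScenario is not an existing function:

```typescript
// Decide whether a manual edit should clear the selected scenario.
// The confirm callback stands in for window.confirm so the logic can
// be exercised without a browser.
function shouldClearScenario(
  selectedScenarioId: string | null,
  confirm: () => boolean,
): boolean {
  if (selectedScenarioId === null) return false // nothing to clear
  return confirm() // clear only when the user agrees to drop the rules
}
```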
Commit 728ac41 addressed this comment by preventing handleInputChange and handleFixturesChange from invoking onSelectScenario(null), so editing no longer clears the parent-selected scenario and its rules, and by merging scenario-specific rules into simulationRules so scenario data persists even while the local selectedScenarioId is null.
Prevents delayed node highlights when no SimulationAnimationProvider is present, since the default config should not trigger forward-step animation behavior.
Scenario rules now write to a separate simulationRules state instead of overwriting the shared rulesDataMap. Editing input JSON no longer silently drops rules. Simulation rules are cleared on stop/close.
albertgwo left a comment
Comprehensive Review: feat(sim) — playback speed, edge animation, template scenarios
Reviewed with 4 specialized agents analyzing code quality, error handling, test coverage, and type design across all 21 changed files.
Summary
This PR adds substantial, well-architected functionality. The snapshot-based precomputation approach (buildTraceSnapshots) is a strong design choice — O(n) build, O(1) per-step access. The SMIL edge animation with animateMotion is creative, and the template scenarios provide excellent out-of-box experience.
The test suite is particularly impressive — template scenario e2e tests running real simulations, comprehensive useSimulation hook behavioral tests, and thorough departing node highlight coverage.
Key concerns:
- Silent failure pattern — The PR consistently favors "keep running" over "tell the user what happened." Several catch blocks and fallbacks swallow errors that should at minimum be logged.
- Missing changesets — CLAUDE.md requires changesets for each PR that changes a package. Both @ruminaider/flowprint-engine (behavioral change) and @ruminaider/flowprint-editor (new exports) need them.
- Unguarded public API — setPlaybackSpeed accepts any number including 0, negative, NaN, and Infinity.
| Severity | Count |
|---|---|
| Critical | 3 |
| Important | 7 |
| Suggestion | 3 |
See inline comments for details.
| error: `Rules file "${rulesRef.file}" not found in rulesData`,
| detail: {
|   file: rulesRef.file,
|   hitPolicy: 'none',
[Critical — Behavioral breaking change] Missing rules file now silently continues with empty output.
This changes engine behavior from returning an error step to returning a completed step with {}. Any consumer relying on the error status for missing rules will now get silent empty results.
A user who misconfigures a rules path gets a green "completed" badge with empty output and no indication that rules evaluation was skipped. They'll waste time debugging blueprint logic when the actual problem is a missing data file.
Also: this behavioral change in a published package requires a changeset per CLAUDE.md. No .changeset/*.md files are in this diff. Both @ruminaider/flowprint-engine (minor — behavioral change) and @ruminaider/flowprint-editor (minor — new exports) need changesets.
Suggestion: Include a warning field in the detail so the UI can surface it:
    return {
      detail: {
        file: rulesRef.file,
        hitPolicy: 'none',
        matchedCount: 0,
        output: {},
        warning: `Rules file "${rulesRef.file}" not found — using empty output`,
      },
      output: {},
    }

| setCurrentStep((prev) => {
|   const clamped = Math.max(0, Math.min(step, totalSteps - 1))
|   setIsForwardStep(clamped > prev)
|   return clamped
[Critical — React anti-pattern] setIsForwardStep called inside a setCurrentStep updater function.
The updater function passed to setCurrentStep should be a pure function of prev. Scheduling a separate state update from within it produces side effects that depend on React's internal batching behavior.
Suggestion: Use the existing currentStepRef to avoid the issue entirely:
    const goToStep = useCallback(
      (step: number) => {
        const clamped = Math.max(0, Math.min(step, totalSteps - 1))
        setIsForwardStep(clamped > currentStepRef.current)
        setCurrentStep(clamped)
      },
      [totalSteps],
    )

Since currentStepRef is already maintained in this file, this is both safer and cleaner.
| smilElements.forEach((el) => {
|   try {
|     ;(el as SVGAnimationElement).beginElement()
|   } catch {
[Critical — Silent error swallowing] Bare catch {} swallows all SMIL errors indiscriminately.
The comment says "Ignore if element was removed before this frame" but the catch has zero discrimination — it also swallows TypeError (wrong cast), SecurityError, or any future browser-specific errors. If beginElement() fails for a non-removal reason, the animation silently doesn't play with no debugging signal.
Suggestion: Catch the specific expected DOMException:
    try {
      ;(el as SVGAnimationElement).beginElement()
    } catch (err: unknown) {
      if (
        err instanceof DOMException &&
        (err.name === 'InvalidStateError' || err.name === 'NotFoundError')
      ) {
        return // Expected: element removed before rAF fired
      }
      console.warn('[EdgeSimulationOverlay] Unexpected SMIL error:', err)
    }

| // Review #22: chained setTimeout instead of setInterval for auto-play
| const handleSetPlaybackSpeed = useCallback((speed: number) => {
|   setPlaybackSpeed(speed)
|   playbackSpeedRef.current = speed
[Important — Unguarded input] setPlaybackSpeed accepts any number with no validation.
While the UI only offers [0.5, 1, 2, 4], the programmatic interface is unguarded:
- speed = 0 → Math.max(400, 1200/0) = Infinity → auto-play timer never fires
- speed < 0 → negative timeout → browser clamps to 0 → tight loop
- speed = NaN → propagates NaN through all timing
Suggestion: Add a guard:
    const handleSetPlaybackSpeed = useCallback((speed: number) => {
      if (!Number.isFinite(speed) || speed <= 0) return
      const clamped = Math.max(0.25, Math.min(speed, 8))
      setPlaybackSpeed(clamped)
      playbackSpeedRef.current = clamped
    }, [])

Or narrow the type: type PlaybackSpeed = 0.5 | 1 | 2 | 4.
| const emptyHighlights: NodeHighlightMap = {}
| const emptyEdgeHighlights: EdgeHighlightMap = {}
| const defaultAnimationConfig: SimulationAnimationConfig = {
|   isForwardStep: true,
[Important — Default mismatch] This fallback sets isForwardStep: true, while SimulationContext.tsx:40 defines the context default as isForwardStep: false.
Components reading useSimulationAnimation() see different defaults depending on whether they're inside a FlowprintEditor or reading the raw context. This isForwardStep: true could cause edge overlays to render particle animation when no simulation is running (if an edge happened to have 'traversing' highlight state).
Suggestion: Align both defaults to isForwardStep: false.
| }
|
| // Strategy 3: find any edge targeting this node in the schema
| if (!edgeId) {
[Important — May animate wrong edge] Strategy 3 (wildcard edge lookup) picks the first edge targeting step.node_id regardless of source.
In a graph where multiple nodes route to the same target, this may animate an edge the simulation didn't actually traverse — creating a misleading visual.
When all 3 strategies fail, edge animation is silently skipped with no logging, making it hard to debug.
Suggestion: Consider removing Strategy 3 (accept no animation > wrong animation), and add dev-mode logging:
    if (!edgeId && process.env.NODE_ENV === 'development') {
      console.debug(
        `[buildTraceSnapshots] No edge: "${prevStep.node_id}" → "${step.node_id}" (step ${String(i)})`,
      )
    }

| @@ -0,0 +1,143 @@
| import { useEffect, useId, useRef } from 'react'
[Important — Test gap] This new 143-line component has zero unit tests.
EdgeSimulationOverlay is the sole visual entry point for edge animations. It contains conditional rendering logic, useEffect hooks triggering SMIL beginElement(), and requestAnimationFrame cleanup. A rendering regression would be invisible to the test suite.
Suggested tests:
- Returns null when highlight is undefined
- Returns null when highlight is 'traversed'
- Returns null when isForwardStep is false
- Renders SVG elements when highlight is 'traversing' and isForwardStep is true
| rulesEvaluation?: RulesEvaluationDetail
| expressionEvaluation?: ExpressionEvaluationDetail
| stepOutput?: { nodeId: string; value: unknown }
| branchNodeIds?: string[]
[Suggestion — Type design] branchNodeIds and branchOutputs are semantically coupled but independently optional.
They always appear together (only on parallel completed steps). The type permits one without the other, which could lead to incomplete highlights if either is refactored independently.
Suggestion: Group into a single optional field:
parallelResult?: {
branchNodeIds: string[]
branchOutputs: Record<string, unknown>
}
| } | ||
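A quick sketch of how the grouped field keeps the two arrays coupled (the surrounding step type is a simplified assumption, not the real trace schema):

```typescript
// With the grouped shape from the suggestion, both branch fields live or die
// together: a step can no longer carry branchNodeIds without branchOutputs.
interface ParallelResult {
  branchNodeIds: string[]
  branchOutputs: Record<string, unknown>
}

interface TraceStepHighlights {
  parallelResult?: ParallelResult
}

function branchHighlights(step: TraceStepHighlights): string[] {
  // A single optional check now covers both fields.
  return step.parallelResult?.branchNodeIds ?? []
}
```

Consumers that need both pieces read them from one object, so a refactor can't drop one half without a type error.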
|
|
||
| export function getScenarios(docName: string): TemplateScenario[] { | ||
| return TEMPLATE_SCENARIOS[docName] ?? [] |
There was a problem hiding this comment.
[Suggestion] getScenarios returns [] for both "known template with no scenarios" and "unknown template name" — indistinguishable.
If a developer misspells a template name, the scenario dropdown silently doesn't appear.
Suggestion: Add a dev-mode warning:
export function getScenarios(docName: string): TemplateScenario[] {
const scenarios = TEMPLATE_SCENARIOS[docName]
if (scenarios === undefined && process.env.NODE_ENV === 'development') {
console.warn(`[template-scenarios] No entry for "${docName}"`)
}
return scenarios ?? []
}
| prevStep?.type === 'parallel' && prevStep?.status === 'completed' | ||
| if (i > 0 && !isParallelRevisit && !isParallelJoin && prevStep) { | ||
| // Strategy 1: direct edge from previous step | ||
| let edgeId = edgeLookup.get(`${prevStep.node_id}->${step.node_id}`) |
There was a problem hiding this comment.
[Suggestion — Test gap] Only Strategy 1 (direct linear edges) is tested for edgeHighlights.
Strategies 2 (routing via prevStep.next, line 111) and 3 (wildcard by target, line 116) exist for error routing and complex switch topologies but have no test coverage.
Also: buildEdgeLookup with duplicate source→target pairs (e.g., a switch with two cases pointing to the same target) is untested. The "first occurrence wins" invariant (line 57) is worth verifying.
Suggested tests: A doc with a switch routing through an intermediate node (Strategy 2), and an error catch path (Strategy 3).
When a switch node has multiple cases targeting the same node (e.g. triage_assessment → assign_provider for both Urgent and Routine), the simulation now resolves the correct edge using matched_case from the trace step rather than always selecting the first edge found. buildEdgeLookup returns both byPair (source→target) and byCase (source:caseIndex) maps. Edge resolution checks the case-specific lookup first when matched_case is present, falling back to byPair.
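The two-map resolution described above can be sketched as follows; the `Edge` shape and field names (`source`, `target`, `caseIndex`) are assumptions for illustration, not the real schema:

```typescript
// Hedged sketch of the byPair/byCase lookup: case-specific resolution first,
// then fall back to the plain source->target map ("first occurrence wins").
interface Edge {
  id: string
  source: string
  target: string
  caseIndex?: number
}

const edgeKey = (source: string, target: string) => `${source}->${target}`
const caseKey = (source: string, caseIndex: number) => `${source}:${caseIndex}`

interface EdgeLookups {
  byPair: Map<string, string>
  byCase: Map<string, string>
}

function buildEdgeLookup(edges: Edge[]): EdgeLookups {
  const byPair = new Map<string, string>()
  const byCase = new Map<string, string>()
  for (const edge of edges) {
    const pair = edgeKey(edge.source, edge.target)
    // First occurrence wins, mirroring the invariant called out in the review.
    if (!byPair.has(pair)) byPair.set(pair, edge.id)
    if (edge.caseIndex !== undefined) {
      byCase.set(caseKey(edge.source, edge.caseIndex), edge.id)
    }
  }
  return { byPair, byCase }
}

function resolveEdge(
  lookups: EdgeLookups,
  source: string,
  target: string,
  matchedCase?: number,
): string | undefined {
  // Prefer the case-specific lookup so duplicate source->target pairs
  // resolve to the branch the trace actually took.
  if (matchedCase !== undefined) {
    const hit = lookups.byCase.get(caseKey(source, matchedCase))
    if (hit !== undefined) return hit
  }
  return lookups.byPair.get(edgeKey(source, target))
}
```

With this shape, a switch whose Urgent and Routine cases both target the same node animates the case the trace recorded, while steps without `matched_case` keep the old behavior.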
The simulator now validates that an error.catch target is both a real node in the document and an error-type node before routing to it. This aligns with the runner's behavior in walker.ts, which performs the same guard. Without this check, a misconfigured error.catch pointing at a non-error node would silently route there in simulation but throw in production.
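The guard can be sketched as a small predicate; the node shape here is a simplifying assumption, and `walker.ts` is only referenced, not reproduced:

```typescript
// Sketch of the error.catch validation described above: the catch target must
// both exist in the document and be an error-type node before routing to it.
interface FlowNode {
  id: string
  type: string
}

function isValidErrorCatchTarget(
  nodes: FlowNode[],
  catchTarget: string,
): boolean {
  const target = nodes.find((n) => n.id === catchTarget)
  // Misconfigured catch targets (missing node, or a non-error node) are
  // rejected here, matching the runner's guard rather than silently routing.
  return target !== undefined && target.type === 'error'
}
```

A failed check leaves the simulation on its normal path instead of following the misconfigured catch.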
Extract edgeKey()/caseKey() helpers to centralize map key formats. Replace "Strategy N" comments with ones explaining why each fallback exists. Hoist empty EdgeLookups to a static constant. Collapse the three-variable error-catch chain into a single boolean guard.
User description
Summary
- `animateMotion` with a multi-layered energy bead (Catppuccin Mocha green). Incoming-edge model with 3-strategy lookup handles linear chains, parallel branches, and hub nodes. Pre-computed trace snapshots give O(1) highlight lookups per step.
- `max(400, 1200/speed)`.
Test plan
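The timing formula in the summary can be sketched as a one-line helper (the function name is hypothetical; only the `max(400, 1200/speed)` formula comes from the PR):

```typescript
// Hypothetical helper for the summary's formula: the glow/animation duration
// shrinks as playback speed rises, but is clamped to a 400 ms floor.
function animationDurationMs(speed: number): number {
  return Math.max(400, 1200 / speed)
}
```

At 1x this yields 1200 ms, at 2x 600 ms, and at 3x or faster it stays pinned at the 400 ms floor.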
Generated description
Enable animated signal playback by extending `useSimulation` with edge tracking, playback speed controls, and forward-step metadata, plus wiring those outputs through `SimulationPanel`, the editor contexts, and the edge/node renderers so the SMIL bead overlay and glow timing follow the simulation rhythm. Seed the built-in templates with curated `TemplateScenario` presets and engine-aware fixtures/rules so the app can run realistic flows immediately while tests verify scenario coverage and new fixture edge cases.
Extends `useSimulation` to build node/edge snapshots, playback timing data, and context for the new animation context while plumbing it through `SimulationPanel`, the editor/provider stack, and the node/edge renderers so forward playback shows traversing edges, delayed node glows, and speed controls that keep auto-play in sync.
Modified files (14)
Modified files (1)
Seeds `getScenarios` with inputs, fixtures, and rules data, exposing those presets to the app, and extending engine fixtures/rules handling plus targeted tests so each built-in flow can be exercised end-to-end without manual setup.
Modified files (7)