
Add memory mapper support #327

Open
mullinmax wants to merge 10 commits into main from memory-mapper-support

Conversation

Contributor

@mullinmax mullinmax commented Mar 11, 2026

Description

Adds the routes and features required to broadcast memory snapshots to a memory mapper tool on the LAN.

@mullinmax mullinmax self-assigned this Mar 11, 2026
Copilot AI review requested due to automatic review settings March 11, 2026 21:22
@github-actions
Contributor

Developer build links:
Sys11

https://raw.githubusercontent.com/warped-pinball/vector/pr-update-artifacts/pr-artifacts/pr-327/sys11-update.json

Sys11 (Tiny)

https://raw.githubusercontent.com/warped-pinball/vector/pr-update-artifacts/pr-artifacts/pr-327/sys11-tiny-update.json

WPC

https://raw.githubusercontent.com/warped-pinball/vector/pr-update-artifacts/pr-artifacts/pr-327/wpc-update.json

EM

https://raw.githubusercontent.com/warped-pinball/vector/pr-update-artifacts/pr-artifacts/pr-327/em-update.json

WhiteStar

https://raw.githubusercontent.com/warped-pinball/vector/pr-update-artifacts/pr-artifacts/pr-327/whitestar-update.json

DataEast

https://raw.githubusercontent.com/warped-pinball/vector/pr-update-artifacts/pr-artifacts/pr-327/data-east-update.json

Contributor

Copilot AI left a comment


Pull request overview

Adds an admin-controlled debug feature to broadcast RAM snapshots over UDP, backed by a new authenticated API route and scheduler support.

Changes:

  • Add “Broadcast Memory Snapshot” switch to the admin UI and wire it to a new API endpoint.
  • Implement /api/memory/toggle-broadcast and a scheduled UDP broadcaster for SRAM snapshots.
  • Add unschedule() to the phew.server scheduler and document the new route in docs/routes.md.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.

Show a summary per file
File Description
src/common/web/js/admin.js Adds UI initialization for a new broadcast toggle that calls the backend API.
src/common/web/html/admin.html Adds a debug switch control for enabling/disabling memory snapshot broadcasting.
src/common/phew/server.py Introduces unschedule() to remove scheduled tasks.
src/common/backend.py Implements snapshot broadcasting over UDP and an authenticated toggle route using the scheduler.
docs/routes.md Adds the new API route to the route documentation.


Comment thread src/common/phew/server.py
Comment on lines +244 to +246
def unschedule(func):
    global _scheduled_tasks
    _scheduled_tasks = [t for t in _scheduled_tasks if t[0] != func]

Copilot AI Mar 11, 2026


unschedule() reassigns _scheduled_tasks to a new list. run_scheduled() iterates _scheduled_tasks and then mutates the global list later in the loop (_scheduled_tasks.remove(t) / _scheduled_tasks[i] = ...). If unschedule() runs concurrently (e.g., via an HTTP route) this can cause ValueError/IndexError and crash the scheduler coroutine. Avoid reassigning the list (keep identity stable) and make add/remove operations safe while run_scheduled() is iterating (e.g., apply pending ops inside run_scheduled, or pause scheduling while mutating).
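A minimal sketch of one identity-stable variant (a standalone illustration, not the project's actual fix; note that in-place slice assignment alone still does not make index-based mutation inside run_scheduled() fully safe, it only preserves list identity for concurrent iterators):

```python
# Sketch only: keep the task list's object identity stable so a
# concurrently running iterator keeps seeing the same list object.
# Entries are assumed to be tuples whose first element is the
# scheduled function, matching the snippet quoted above.
_scheduled_tasks = []


def unschedule(func):
    # Mutate the existing list in place via slice assignment instead
    # of rebinding the global name to a brand-new list.
    _scheduled_tasks[:] = [t for t in _scheduled_tasks if t[0] != func]


def demo():
    def task_a():
        pass

    def task_b():
        pass

    original = _scheduled_tasks
    _scheduled_tasks.extend([(task_a, 100), (task_b, 200)])
    unschedule(task_a)
    # The list object is unchanged; only its contents differ
    return _scheduled_tasks is original, [t[0].__name__ for t in _scheduled_tasks]
```

A fuller fix, as the comment suggests, would queue add/remove operations and apply them inside run_scheduled() between iterations.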

Comment thread src/common/backend.py
Comment on lines +1702 to +1728
def broadcast_memory_snapshot():
    import discovery

    ram_access = bytes(uctypes.bytearray_at(SRAM_DATA_BASE, SRAM_DATA_LENGTH))
    chunk_size = 256
    offset = 0

    while offset < len(ram_access):
        chunk = ram_access[offset : offset + chunk_size]
        # Prepend 4-byte offset header to each chunk
        message = offset.to_bytes(4, "big") + chunk
        discovery.send_sock.sendto(message, ("255.255.255.255", 2040))
        offset += chunk_size

    return


@add_route("/api/memory/toggle-broadcast", auth=True)
def app_memory_broadcast(request):
    data = request.data
    if data.get("enable", False):
        # add function call to scheduler
        freq = data.get("frequency_ms", 100)
        schedule(broadcast_memory_snapshot, phase_ms=0, frequency_ms=freq)  # broadcast every freq milliseconds
    else:
        # remove function call from scheduler
        unschedule(broadcast_memory_snapshot)
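For reference, a listener on the receiving side can reassemble these datagrams by splitting off the 4-byte big-endian offset header. This is a minimal sketch of the framing only; the port (2040) and chunk size (256 bytes) are taken from the code above, and the helper names are hypothetical:

```python
import struct


def parse_snapshot_datagram(message: bytes):
    """Split a broadcast datagram into (offset, payload).

    Each datagram is a 4-byte big-endian offset followed by up to
    256 bytes of RAM contents starting at that offset.
    """
    offset = struct.unpack(">I", message[:4])[0]
    return offset, message[4:]


def reassemble(datagrams, total_length):
    """Place each received chunk at its offset inside a snapshot buffer."""
    snapshot = bytearray(total_length)
    for message in datagrams:
        offset, payload = parse_snapshot_datagram(message)
        snapshot[offset : offset + len(payload)] = payload
    return bytes(snapshot)
```

Because each chunk carries its own absolute offset, lost or reordered UDP datagrams only leave gaps rather than corrupting the whole snapshot.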

Copilot AI Mar 11, 2026


broadcast_memory_snapshot() allocates a full bytes(...) copy of SRAM and then sends the entire snapshot in a tight loop every time it runs. With the default 100ms frequency this is likely to monopolize the scheduler (and can cause allocation pressure / watchdog issues), impacting core tasks like poll_fast. Consider broadcasting incrementally across scheduler ticks (one chunk per run), and/or enforce a much larger minimum interval.

Suggested change
(removes the inline-loop implementation of broadcast_memory_snapshot / app_memory_broadcast quoted above and replaces it with an incremental version:)

MEM_SNAPSHOT_CHUNK_SIZE = const(256)

_memory_broadcast_buf = None
_memory_broadcast_offset = 0
_memory_broadcast_active = False


def broadcast_memory_snapshot():
    import discovery

    global _memory_broadcast_buf, _memory_broadcast_offset, _memory_broadcast_active

    # Initialize a new snapshot if one is not already in progress
    if not _memory_broadcast_active:
        try:
            _memory_broadcast_buf = bytes(uctypes.bytearray_at(SRAM_DATA_BASE, SRAM_DATA_LENGTH))
        except MemoryError:
            # If we cannot allocate the snapshot buffer, skip this cycle
            _memory_broadcast_buf = None
            _memory_broadcast_offset = 0
            _memory_broadcast_active = False
            return
        _memory_broadcast_offset = 0
        _memory_broadcast_active = True

    if not _memory_broadcast_buf:
        return

    # If we've reached the end of the buffer, reset state so the next call starts a new snapshot
    if _memory_broadcast_offset >= len(_memory_broadcast_buf):
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False
        return

    # Send a single chunk per scheduler invocation to avoid monopolizing the scheduler
    start = _memory_broadcast_offset
    end = start + MEM_SNAPSHOT_CHUNK_SIZE
    if end > len(_memory_broadcast_buf):
        end = len(_memory_broadcast_buf)
    chunk = _memory_broadcast_buf[start:end]
    # Prepend 4-byte offset header to each chunk
    message = start.to_bytes(4, "big") + chunk
    discovery.send_sock.sendto(message, ("255.255.255.255", 2040))
    _memory_broadcast_offset = end

    # When we've finished sending the entire buffer, clear state so a new snapshot will start next time
    if _memory_broadcast_offset >= len(_memory_broadcast_buf):
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False


@add_route("/api/memory/toggle-broadcast", auth=True)
def app_memory_broadcast(request):
    global _memory_broadcast_buf, _memory_broadcast_offset, _memory_broadcast_active

    data = request.data
    if data.get("enable", False):
        # Reset broadcast state whenever we (re)enable broadcasting
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False
        # Add function call to scheduler with a minimum interval to avoid overload
        freq = data.get("frequency_ms", 100)
        if freq < 250:
            freq = 250
        schedule(broadcast_memory_snapshot, phase_ms=0, frequency_ms=freq)  # broadcast every freq milliseconds
    else:
        # Remove function call from scheduler and clear state
        unschedule(broadcast_memory_snapshot)
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False

Comment thread src/common/backend.py
Comment on lines +1702 to +1714
def broadcast_memory_snapshot():
    import discovery

    ram_access = bytes(uctypes.bytearray_at(SRAM_DATA_BASE, SRAM_DATA_LENGTH))
    chunk_size = 256
    offset = 0

    while offset < len(ram_access):
        chunk = ram_access[offset : offset + chunk_size]
        # Prepend 4-byte offset header to each chunk
        message = offset.to_bytes(4, "big") + chunk
        discovery.send_sock.sendto(message, ("255.255.255.255", 2040))
        offset += chunk_size

Copilot AI Mar 11, 2026


discovery.send_sock can be None (e.g., after LowMemoryMode closes sockets) and this function bypasses discovery._setup_sockets()/discovery._send(). This will raise and then keep failing on every scheduled run. Re-initialize sockets (or guard/unschedule on failure) before calling sendto.
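A defensive wrapper along these lines could guard the send. This is a hypothetical sketch: the socket-provider callable and the unschedule-on-failure policy are assumptions for illustration, not the discovery module's actual API beyond what the review mentions:

```python
def safe_broadcast(sock_provider, message, addr, on_fatal=None):
    """Send a datagram only if a socket is actually available.

    sock_provider: callable returning a socket-like object or None
                   (stands in for discovery.send_sock, which may be
                   None after LowMemoryMode closes sockets).
    on_fatal:      optional callback invoked when sending raises,
                   e.g. to unschedule the broadcaster instead of
                   failing again on every scheduled run.
    Returns True if the message was sent.
    """
    sock = sock_provider()
    if sock is None:
        return False  # sockets are down; skip this cycle quietly
    try:
        sock.sendto(message, addr)
        return True
    except OSError:
        if on_fatal is not None:
            on_fatal()
        return False
```

In the scheduled broadcaster, on_fatal could call unschedule(broadcast_memory_snapshot) so a persistently failing socket stops the task rather than raising on every tick.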

Comment thread src/common/backend.py
    data = request.data
    if data.get("enable", False):
        # add function call to scheduler
        freq = data.get("frequency_ms", 100)

Copilot AI Mar 11, 2026


The toggle route schedules a new task every time it receives enable truthy; repeated calls can stack duplicate scheduled entries for the same function. Consider de-duplicating (unschedule first or check existing tasks) and validate/coerce frequency_ms to an int with sane bounds to prevent accidental overload.

Suggested change
(replaces the line `freq = data.get("frequency_ms", 100)` with:)

        raw_freq = data.get("frequency_ms", 100)
        try:
            freq = int(raw_freq)
        except (TypeError, ValueError):
            freq = 100
        # Clamp frequency to sane bounds to avoid overload or absurd delays
        if freq < 10:
            freq = 10
        elif freq > 60000:
            freq = 60000
        # Ensure we don't accumulate multiple scheduled entries for the same function
        unschedule(broadcast_memory_snapshot)

Comment on lines +397 to +402
      const payload = { enable: memoryBroadcastToggle.checked ? 1 : 0 };
      try {
        await window.smartFetch("/api/memory/toggle-broadcast", payload, true);
      } catch (error) {
        console.error("Failed to toggle memory broadcast", error);
        memoryBroadcastToggle.checked = !memoryBroadcastToggle.checked;

Copilot AI Mar 11, 2026


This handler assumes smartFetch failures will throw, but fetch() only throws on network errors. Non-2xx responses (including 401/500) won’t hit catch, so the UI can remain checked even though the backend rejected the change. Capture the Response from smartFetch and revert the checkbox if !response.ok.

Suggested change
(replaces the handler body quoted above with:)

      const previousChecked = !memoryBroadcastToggle.checked;
      const payload = { enable: memoryBroadcastToggle.checked ? 1 : 0 };
      try {
        const response = await window.smartFetch(
          "/api/memory/toggle-broadcast",
          payload,
          true,
        );
        if (!response.ok) {
          console.error(
            "Failed to toggle memory broadcast, server returned status",
            response.status,
          );
          memoryBroadcastToggle.checked = previousChecked;
        }
      } catch (error) {
        console.error("Failed to toggle memory broadcast", error);
        memoryBroadcastToggle.checked = previousChecked;

Comment on lines +533 to +535
<b>Broadcast Memory Snapshots:</b> enables streaming of memory over
network for debugging purposes. Use with tools like memory mapper
</label>

Copilot AI Mar 11, 2026


Broadcasting raw RAM snapshots over UDP is inherently sensitive (it can include credentials/tokens or other secrets in memory) and is visible to anyone on the local network. The UI copy should include an explicit warning about sensitive data and recommending use only on trusted networks / during active debugging.

Suggested change
(replaces the label text quoted above with:)

  <b>Broadcast Memory Snapshots (advanced, sensitive):</b> streams raw
  memory over the network for debugging purposes (for use with tools like
  memory mapper).
  <span style="color: #e73434; font-weight: bold">Warning:</span>
  may include passwords, tokens, or other secrets in RAM. Enable only on
  trusted networks during active debugging and turn off when not in use.
</label>

Comment thread docs/routes.md