Pull request overview
Adds an admin-controlled debug feature to broadcast RAM snapshots over UDP, backed by a new authenticated API route and scheduler support.
Changes:
- Add “Broadcast Memory Snapshot” switch to the admin UI and wire it to a new API endpoint.
- Implement `/api/memory/toggle-broadcast` and a scheduled UDP broadcaster for SRAM snapshots.
- Add `unschedule()` to the `phew.server` scheduler and document the new route in `docs/routes.md`.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/common/web/js/admin.js | Adds UI initialization for a new broadcast toggle that calls the backend API. |
| src/common/web/html/admin.html | Adds a debug switch control for enabling/disabling memory snapshot broadcasting. |
| src/common/phew/server.py | Introduces `unschedule()` to remove scheduled tasks. |
| src/common/backend.py | Implements snapshot broadcasting over UDP and an authenticated toggle route using the scheduler. |
| docs/routes.md | Adds the new API route to the route documentation. |
```python
def unschedule(func):
    global _scheduled_tasks
    _scheduled_tasks = [t for t in _scheduled_tasks if t[0] != func]
```
unschedule() reassigns _scheduled_tasks to a new list. run_scheduled() iterates _scheduled_tasks and then mutates the global list later in the loop (_scheduled_tasks.remove(t) / _scheduled_tasks[i] = ...). If unschedule() runs concurrently (e.g., via an HTTP route) this can cause ValueError/IndexError and crash the scheduler coroutine. Avoid reassigning the list (keep identity stable) and make add/remove operations safe while run_scheduled() is iterating (e.g., apply pending ops inside run_scheduled, or pause scheduling while mutating).
```python
def broadcast_memory_snapshot():
    import discovery

    ram_access = bytes(uctypes.bytearray_at(SRAM_DATA_BASE, SRAM_DATA_LENGTH))
    chunk_size = 256
    offset = 0

    while offset < len(ram_access):
        chunk = ram_access[offset : offset + chunk_size]
        # Prepend 4-byte offset header to each chunk
        message = offset.to_bytes(4, "big") + chunk
        discovery.send_sock.sendto(message, ("255.255.255.255", 2040))
        offset += chunk_size

    return


@add_route("/api/memory/toggle-broadcast", auth=True)
def app_memory_broadcast(request):
    data = request.data
    if data.get("enable", False):
        # add function call to scheduler
        freq = data.get("frequency_ms", 100)
        schedule(broadcast_memory_snapshot, phase_ms=0, frequency_ms=freq)  # broadcast every freq milliseconds
    else:
        # remove function call from scheduler
        unschedule(broadcast_memory_snapshot)
```
broadcast_memory_snapshot() allocates a full bytes(...) copy of SRAM and then sends the entire snapshot in a tight loop every time it runs. With the default 100ms frequency this is likely to monopolize the scheduler (and can cause allocation pressure / watchdog issues), impacting core tasks like poll_fast. Consider broadcasting incrementally across scheduler ticks (one chunk per run), and/or enforce a much larger minimum interval.
Suggested change (replacing the quoted block above):

```python
MEM_SNAPSHOT_CHUNK_SIZE = const(256)
_memory_broadcast_buf = None
_memory_broadcast_offset = 0
_memory_broadcast_active = False


def broadcast_memory_snapshot():
    import discovery
    global _memory_broadcast_buf, _memory_broadcast_offset, _memory_broadcast_active

    # Initialize a new snapshot if one is not already in progress
    if not _memory_broadcast_active:
        try:
            _memory_broadcast_buf = bytes(uctypes.bytearray_at(SRAM_DATA_BASE, SRAM_DATA_LENGTH))
        except MemoryError:
            # If we cannot allocate the snapshot buffer, skip this cycle
            _memory_broadcast_buf = None
            _memory_broadcast_offset = 0
            _memory_broadcast_active = False
            return
        _memory_broadcast_offset = 0
        _memory_broadcast_active = True

    if not _memory_broadcast_buf:
        return

    # If we've reached the end of the buffer, reset state so the next call starts a new snapshot
    if _memory_broadcast_offset >= len(_memory_broadcast_buf):
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False
        return

    # Send a single chunk per scheduler invocation to avoid monopolizing the scheduler
    start = _memory_broadcast_offset
    end = start + MEM_SNAPSHOT_CHUNK_SIZE
    if end > len(_memory_broadcast_buf):
        end = len(_memory_broadcast_buf)
    chunk = _memory_broadcast_buf[start:end]
    # Prepend 4-byte offset header to each chunk
    message = start.to_bytes(4, "big") + chunk
    discovery.send_sock.sendto(message, ("255.255.255.255", 2040))
    _memory_broadcast_offset = end

    # When we've finished sending the entire buffer, clear state so a new snapshot will start next time
    if _memory_broadcast_offset >= len(_memory_broadcast_buf):
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False


@add_route("/api/memory/toggle-broadcast", auth=True)
def app_memory_broadcast(request):
    global _memory_broadcast_buf, _memory_broadcast_offset, _memory_broadcast_active
    data = request.data
    if data.get("enable", False):
        # Reset broadcast state whenever we (re)enable broadcasting
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False
        # Add function call to scheduler with a minimum interval to avoid overload
        freq = data.get("frequency_ms", 100)
        if freq < 250:
            freq = 250
        schedule(broadcast_memory_snapshot, phase_ms=0, frequency_ms=freq)  # broadcast every freq milliseconds
    else:
        # Remove function call from scheduler and clear state
        unschedule(broadcast_memory_snapshot)
        _memory_broadcast_buf = None
        _memory_broadcast_offset = 0
        _memory_broadcast_active = False
```
```python
def broadcast_memory_snapshot():
    import discovery

    ram_access = bytes(uctypes.bytearray_at(SRAM_DATA_BASE, SRAM_DATA_LENGTH))
    chunk_size = 256
    offset = 0

    while offset < len(ram_access):
        chunk = ram_access[offset : offset + chunk_size]
        # Prepend 4-byte offset header to each chunk
        message = offset.to_bytes(4, "big") + chunk
        discovery.send_sock.sendto(message, ("255.255.255.255", 2040))
        offset += chunk_size
```
discovery.send_sock can be None (e.g., after LowMemoryMode closes sockets) and this function bypasses discovery._setup_sockets()/discovery._send(). This will raise and then keep failing on every scheduled run. Re-initialize sockets (or guard/unschedule on failure) before calling sendto.
```python
data = request.data
if data.get("enable", False):
    # add function call to scheduler
    freq = data.get("frequency_ms", 100)
```
The toggle route schedules a new task every time it receives enable truthy; repeated calls can stack duplicate scheduled entries for the same function. Consider de-duplicating (unschedule first or check existing tasks) and validate/coerce frequency_ms to an int with sane bounds to prevent accidental overload.
Suggested change (replacing `freq = data.get("frequency_ms", 100)`):

```python
raw_freq = data.get("frequency_ms", 100)
try:
    freq = int(raw_freq)
except (TypeError, ValueError):
    freq = 100
# Clamp frequency to sane bounds to avoid overload or absurd delays
if freq < 10:
    freq = 10
elif freq > 60000:
    freq = 60000
# Ensure we don't accumulate multiple scheduled entries for the same function
unschedule(broadcast_memory_snapshot)
```
```javascript
const payload = { enable: memoryBroadcastToggle.checked ? 1 : 0 };
try {
  await window.smartFetch("/api/memory/toggle-broadcast", payload, true);
} catch (error) {
  console.error("Failed to toggle memory broadcast", error);
  memoryBroadcastToggle.checked = !memoryBroadcastToggle.checked;
}
```
This handler assumes smartFetch failures will throw, but fetch() only throws on network errors. Non-2xx responses (including 401/500) won’t hit catch, so the UI can remain checked even though the backend rejected the change. Capture the Response from smartFetch and revert the checkbox if !response.ok.
Suggested change (replacing the quoted handler above):

```javascript
const previousChecked = !memoryBroadcastToggle.checked;
const payload = { enable: memoryBroadcastToggle.checked ? 1 : 0 };
try {
  const response = await window.smartFetch(
    "/api/memory/toggle-broadcast",
    payload,
    true,
  );
  if (!response.ok) {
    console.error(
      "Failed to toggle memory broadcast, server returned status",
      response.status,
    );
    memoryBroadcastToggle.checked = previousChecked;
  }
} catch (error) {
  console.error("Failed to toggle memory broadcast", error);
  memoryBroadcastToggle.checked = previousChecked;
}
```
```html
<b>Broadcast Memory Snapshots:</b> enables streaming of memory over
network for debugging purposes. Use with tools like memory mapper
</label>
```
Broadcasting raw RAM snapshots over UDP is inherently sensitive (it can include credentials/tokens or other secrets in memory) and is visible to anyone on the local network. The UI copy should include an explicit warning about sensitive data and recommend use only on trusted networks during active debugging.
Suggested change (replacing the quoted label text above; the closing `</label>` is kept so the markup stays well-formed):

```html
<b>Broadcast Memory Snapshots (advanced, sensitive):</b> streams raw memory over the network for debugging purposes (for use with tools like memory mapper).
<span style="color: #e73434; font-weight: bold">Warning:</span>
may include passwords, tokens, or other secrets in RAM. Enable only on trusted networks during active debugging and turn off when not in use.
</label>
```
Description
Adds the routes and features required to broadcast memory snapshots to a memory mapper tool on the LAN.