Releases: audric/GeuReflector
v1.3.8 — Fix MQTT periodic full-status republish stuck after first fire
Critical fix
STATUS_INTERVAL periodic full-status republish was firing only once at startup, regardless of the configured interval. Operators relying on periodic MQTT snapshot refreshes (rather than the always-retained snapshot from the very first fire) should upgrade.
Root cause
The expiry lambda re-armed m_mqtt_status_timer via setEnable(true), but Async::Timer::setEnable() is guarded by if (do_enable && !m_is_enabled) and m_is_enabled is never reset to false on expiry. So the call was a no-op, the application erased the entry from its dispatch map, and the timer never fired again.
reset() does an unconditional delTimer + addTimer, which is what was intended.
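The failure mode can be sketched in a few lines. This is a simplified model of the guard described above, not the real `Async::Timer` class, but it shows why `setEnable(true)` from inside the expiry handler is a no-op while `reset()` re-arms:

```python
# Hypothetical model of the Async::Timer guard described above.
class Timer:
    def __init__(self):
        self.m_is_enabled = True   # set when the timer is first armed
        self.armed = True          # "present in the dispatch map"

    def _expire(self):
        # Dispatcher removes the entry on expiry...
        self.armed = False
        # ...but m_is_enabled is never reset to false: the bug's precondition.

    def setEnable(self, do_enable):
        # Mirrors the guard: if (do_enable && !m_is_enabled)
        if do_enable and not self.m_is_enabled:
            self.armed = True      # never reached after expiry

    def reset(self):
        # Unconditional delTimer + addTimer: always re-arms.
        self.m_is_enabled = True
        self.armed = True

t = Timer()
t._expire()
t.setEnable(True)
assert t.armed is False   # no-op: the timer stays dead
t.reset()
assert t.armed is True    # reset() actually re-arms
```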
Why existing coverage missed it
test_29_mqtt_full_status accepted ">=1 message, retained or fresh" — the always-present retained snapshot from the very first fire was enough to make it pass even with the timer dead.
New test_43_full_status_periodic_republish filters out msg.retain == True and requires ≥3 fresh publishes within a 4s window with STATUS_INTERVAL=1000, plus an inter-arrival cadence check (each gap within ±50% of the configured interval). This catches the regression class directly.
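The shape of that assertion can be sketched as follows (`cadence_ok` is a hypothetical helper for illustration; the real check lives in `tests/test_mqtt_deltas.py`): keep only non-retained publishes and require every inter-arrival gap within ±50% of the configured interval.

```python
# Hypothetical cadence check: filter retained messages, require at least
# min_fresh fresh publishes with gaps within ±tolerance of the interval.
def cadence_ok(messages, interval_ms=1000, min_fresh=3, tolerance=0.5):
    fresh = [m for m in messages if not m["retain"]]
    if len(fresh) < min_fresh:
        return False                      # timer dead after the first fire
    times = [m["t_ms"] for m in fresh]
    lo = interval_ms * (1 - tolerance)
    hi = interval_ms * (1 + tolerance)
    return all(lo <= b - a <= hi for a, b in zip(times, times[1:]))

# A healthy periodic republish passes; a lone retained snapshot fails.
healthy = [{"retain": False, "t_ms": t} for t in (0, 1010, 1990, 3020)]
dead = [{"retain": True, "t_ms": 0}]
assert cadence_ok(healthy) and not cadence_ok(dead)
```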
What changed
- `src/svxlink/reflector/Reflector.cpp` — single-line fix (`setEnable(true)` → `reset(...)`)
- `tests/test_mqtt_deltas.py` — new `test_43_full_status_periodic_republish`
Upgrade
Drop-in. No config changes, no protocol changes, no lockstep requirement.
v1.3.7 — Per-client liveness over MQTT + critical frame-size fix
Critical fix
Twin and trunk peers were flapping every few minutes after v1.3.6, with logs showing "Connection closed by remote peer" alternating with "RX timeout, disconnecting". v1.3.7 fixes that — see "Frame-size fix" below. Operators on v1.3.6 should upgrade.
Highlights
Feature: per-client live deltas on every reflector's MQTT broker
A satellite reflector (or a twinned parent) can now run its own dashboard against its own MQTT broker and see full per-client activity for everyone in the visible mesh — not just locally-connected clients. Connect, disconnect, rx-status (squelch / signal level), and the rich client-status blob all flow as live MQTT events.
For every client visible to a reflector — local, on a connected satellite, or on its twin — the broker now emits:
peer/<peer_id>/client/<callsign>/connected (ephemeral)
peer/<peer_id>/client/<callsign>/disconnected (ephemeral)
peer/<peer_id>/client/<callsign>/rx (retained, 500 ms debounced)
peer/<peer_id>/client/<callsign>/status (retained)
peer/<peer_id>/talker/<tg>/{start,stop} (ephemeral)
Local-side topics gain symmetry: client/<call>/rx becomes retained (was ephemeral) and a new retained client/<call>/status surfaces the rich status blob as a per-client topic instead of only inside nodes/local.
A dashboard wanting full activity subscribes to:
- `client/+/{connected,disconnected,rx,status}` — local clients
- `peer/+/client/+/{connected,disconnected,rx,status}` — every peer's clients
- `talker/+/start|stop` and `peer/+/talker/+/start|stop` — local and peer talkers
Feature: rx debounce for retained MQTT topics
peer/<id>/client/<call>/rx is sender-side debounced at 500 ms per local callsign, so the wire and the broker retained-store don't see 50 Hz native rx-update churn. Live cap of 2 Hz per client per peer is comfortable for human-readable meters and bounded enough for thousands of clients.
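A per-callsign sender-side debounce of this kind can be sketched as below (a simplified model, not the actual `MqttPublisher` code): at most one rx publish per callsign per 500 ms window, capping churn at 2 Hz.

```python
# Hypothetical per-callsign debounce model: swallow updates arriving
# within DEBOUNCE_MS of the last published one for the same callsign.
DEBOUNCE_MS = 500

class RxDebouncer:
    def __init__(self):
        self.last_sent = {}            # callsign -> last publish time (ms)

    def should_publish(self, callsign, now_ms):
        last = self.last_sent.get(callsign)
        if last is not None and now_ms - last < DEBOUNCE_MS:
            return False               # inside the window: suppress
        self.last_sent[callsign] = now_ms
        return True

d = RxDebouncer()
# 50 Hz native rx updates (every 20 ms) for one second:
sent = sum(d.should_publish("N0CALL", t) for t in range(0, 1000, 20))
assert sent == 2                       # only 2 Hz reaches the broker
```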
Feature: peer-namespaced talker MQTT topics
External (peer-side) talker events move from the flat talker/<tg>/start with an external=true payload flag to peer/<peer_id>/talker/<tg>/{start,stop}, matching the rest of the per-peer namespace. Local-only talker/<tg>/... keeps today's behavior. Dashboards parsing the external flag will need a small update.
Cleanup: MsgTrunk* → MsgPeer* rename (wire-equivalent)
The eight existing peer-protocol message types (115–122) flow over trunk, satellite, and twin links — "Trunk" was a misleading prefix for what is really a shared peer protocol. Renamed throughout the codebase. Wire IDs and field layouts are unchanged, so this is wire-equivalent: a v1.3.7 build and a v1.3.6 build interoperate identically at the protocol level.
docs/TRUNK_PROTOCOL.md is now docs/PEER_PROTOCOL.md.
Configuration: MQTT_NAME mandatory in twin mode
When two twin reflectors share a single MQTT broker (a supported deployment), per-reflector MQTT subtree namespacing relies on MQTT_NAME. v1.3.7 fails fast at startup if [TWIN_x] and [MQTT] are both configured but MQTT_NAME is empty — preventing silent retained-topic collision on nodes/local.
Frame-size fix
MsgPeerNodeList (type 121) gained two new per-client vectors in v1.3.6:
- `status_blobs` (b4b06f9) — rich per-client JSON, ~1 KB per client
- `sat_ids` (4bb6477) — per-client satellite attribution
…but MAX_POSTAUTH_FRAME_SIZE stayed at 32 KiB since it was inherited from upstream. A reflector with even ~30 clients now produces a NodeList frame larger than 32 KiB. The receiver's Async::FramedTcpConnection::onDataReceived rejects oversized frames with DR_PROTOCOL_ERROR, closing the connection. Both sides reconnect, hello succeeds, the next NodeList tears the link down again. Loop.
Fix: MAX_POSTAUTH_FRAME_SIZE raised from 32 KiB → 4 MiB. Comfortably fits thousands of clients with rich status while keeping the receive-buffer ceiling bounded.
Reported by Volodymyr (ur3qjw) after the v1.3.6 update on issue #3.
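The arithmetic behind the flap can be sketched with an illustrative model of a length-prefixed framed receive path (an assumption for illustration, not the real `Async::FramedTcpConnection` code):

```python
# Hypothetical model of a framed receive path with a size ceiling:
# a declared frame length above the ceiling is a protocol error.
import struct

OLD_MAX = 32 * 1024          # pre-fix ceiling, inherited from upstream
NEW_MAX = 4 * 1024 * 1024    # v1.3.7 ceiling

def check_frame(header4, max_frame_size):
    (length,) = struct.unpack(">I", header4)   # 4-byte length prefix (assumed)
    if length > max_frame_size:
        return "DR_PROTOCOL_ERROR"             # receiver closes the link
    return "ok"

# ~30 clients with ~1 KB status blobs already exceeds 32 KiB:
nodelist_len = struct.pack(">I", 30 * 1100)
assert check_frame(nodelist_len, OLD_MAX) == "DR_PROTOCOL_ERROR"
assert check_frame(nodelist_len, NEW_MAX) == "ok"
```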
What changed
- Four new wire types `MsgPeerClient{Connected,Disconnected,Rx,Status}` (125–128) for per-client deltas. Routed over satellite and twin links only — trunk peers continue to exchange nodelist snapshots + talker events.
- New `Reflector::fanoutClient*` family hooked at the existing local-MQTT emit points (connect / disconnect / rx / status). `SatelliteLink`, `SatelliteClient`, and `TwinLink` gain symmetric `sendClient*` and `handleMsgPeerClient*`.
- `MqttPublisher` gains `onClientStatus`, `publishPeerClientEvent`, `clearPeerClientRetained`. `onRxUpdate` is now retained.
- `Reflector::onTrunkTalkerUpdated` slot extended with `peer_id` so the new peer-namespaced talker topics fire correctly.
- `TGHandler` ordering bug in `clearTrunkTalkerForTG` (peer_id was erased before the signal fired) fixed in passing.
Tests
39/39 trunk + 6/6 mqtt-deltas + 13/13 twin + 6/6 logging = 64 tests, all passing. Integration coverage added for: parent ↔ satellite per-client deltas, twin-axis liveness, retained-rx late-subscriber bootstrap, 500 ms debounce invariant, satellite-filter respect, trunk-peer absence, MQTT_NAME twin-mode validation.
Known limitation — snapshot retained-housekeeping deferred
A planned cleanup-on-snapshot mechanism that would prune retained peer/<id>/client/<call>/{rx,status} for callsigns that disappeared between two consecutive MsgPeerNodeList snapshots was deferred. Calling mosquitto_publish from inside Reflector::onPeerNodeList reliably crashes the reflector with Assertion sock >= 0 after a trunk heartbeat timeout — suspected libmosquitto background-thread interaction. Out-of-band broker housekeeping (e.g. periodic mosquitto retained-message audit) is the recommended hygiene strategy at scale until the in-process path is understood.
Known follow-up — SatelliteLink::sendClientConnected filter on tg=0
V2 clients authenticate with tg=0 (TG selection happens later via select_tg), and SatelliteLink::sendClientConnected calls filterPassesTg(tg) directly — so a non-empty SATELLITE_FILTER currently suppresses every connect event. The fanoutClient{Disconnected,Rx,Status} paths use tg == 0 || filterPassesTg(tg) to default-allow; sendClientConnected should mirror that. KNOWN ISSUE comment in place at the call site (SatelliteLink::sendClientConnected).
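The intended predicate can be sketched as follows (`filterPassesTg` is modeled here as a simple membership check, an assumption for illustration):

```python
# Hypothetical sketch of the default-allow predicate the other fanout
# paths already use: tg == 0 (V2 client before select_tg) always passes.
def passes(tg, filter_tgs):
    def filterPassesTg(t):
        # Stand-in for the real TgFilter match; empty filter allows all.
        return not filter_tgs or t in filter_tgs
    return tg == 0 or filterPassesTg(tg)

assert passes(0, {2620, 2621})      # connect event with tg=0: allowed
assert passes(2620, {2620, 2621})   # matching TG: allowed
assert not passes(9, {2620, 2621})  # non-matching TG: suppressed
```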
⚠️ Lockstep upgrade required
The frame-size bump only takes effect when a peer's binary has it. Mixed deployments (one side v1.3.6, other side v1.3.7) still flap on the unupgraded side. Both peers in any twin or trunk pair must be upgraded together.
The MsgTrunk* → MsgPeer* rename is wire-equivalent, so it does NOT add any compatibility concern.
The new MsgPeerClient* types (125–128) are gracefully ignored by older builds (verified at every dispatch site), so they don't break older peers — older peers just lose the live-deltas benefit on the channels they receive.
Issues addressed
- v1.3.6 follow-up reported by Volodymyr (ur3qjw) — twin-link flap on production deployments after the v1.3.6 update.
v1.3.6 — Multi-satellite roster parity with sat_id attribution
Highlights
Feature: every reflector and satellite renders the same nodelist
Before this release, a satellite's /status showed only its own clients; a parent reflector's /status.satellites[<id>] had no nodes array; and trunk peers had no way to tell whether a partner-roster entry was on the peer reflector itself or on one of its satellites.
Now, a parent reflector with two satellites (S1, S2) and a far-trunk peer (B) all see the same set of callsigns, with each entry tagged so consumers can tell where it physically lives:
- refA's `/status`: parent-local clients + every connected satellite's contribution, surfaced under `/status.satellites[<sat_id>].nodes`
- S1's `/status`: S1-local clients (under `/status.nodes`) + the parent's combined view (under `/status.satellite.parent_nodes`), excluding S1's own contribution to avoid self-echo; entries from refA carry no `sat_id`, entries from S2 carry `sat_id="S2"`
- B's `/status`: every callsign in the refA tree under `/status.trunks[<section>].nodes`, with `sat_id=""` for refA-local and `sat_id="<id>"` for satellite-attached
Recipient-relative semantics: an empty sat_id always means "on the sender of this list," so the same wire field works on the trunk, twin and satellite paths without a global namespace.
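The recipient-side resolution can be sketched like this (`attribute` is a hypothetical helper name for illustration):

```python
# Hypothetical sketch of recipient-relative sat_id resolution: when
# consuming a received node list, an empty sat_id means "lives on the
# sender of this list".
def attribute(entries, sender_id):
    # entries: (callsign, sat_id) pairs as carried on the wire
    return {cs: (sid if sid else sender_id) for cs, sid in entries}

# refA sends B its combined view: refA-local clients carry sat_id="",
# satellite-attached ones carry their satellite's id.
roster = attribute([("N0LOCL", ""), ("N0SAT1", "S1")], sender_id="refA")
assert roster == {"N0LOCL": "refA", "N0SAT1": "S1"}
```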
Also in this release
- Rich per-client status blob in `MsgTrunkNodeList` (#3): trunk peers now render partner nodes with the same fidelity as local ones — rx/tx config, `qth`, `monitoredTGs`, `restrictedTG`, `protoVer`, etc.
- Default `STATUS_INTERVAL` raised to 30 s (was 1 s). The retained MQTT status payload was firing every second, faster than any typical dashboard polling cadence.
What changed
- `MsgTrunkNodeList` (type 121) now carries 7 vectors. The two additions — `status_blobs` (b4b06f9) and `sat_ids` (4bb6477) — are documented in `docs/TRUNK_PROTOCOL.md`.
- `Reflector::sendNodeListToAllPeers` builds a single combined view (parent-local + every satellite's stamped roster) and fans it out to trunks/twin in full and to each satellite minus that sat's own contribution (self-echo guard), filtered by `SATELLITE_FILTER`.
- `SatelliteLink` (parent side) and `SatelliteClient` (satellite side) gain `sendNodeList` / `handleMsgTrunkNodeList`; the satellite path now carries type 121 in both directions.
- Tests: 35 → 39. Three `/status`-level checks (sat-attached client visible on parent, on far-trunk peer with `sat_id`, parent-local visible on satellite) plus a multi-sat wire-level cross-visibility test exercising `sat_id` stamping and the self-echo guard.
⚠️ Wire compatibility — lockstep upgrade required
Type 121 took two lockstep bumps in this release (5 → 6 → 7 vectors). All trunk, twin and satellite peers in a mesh must be upgraded together. Older fork builds that know type 121 but expect fewer vectors will fail to unpack and silently empty their partner roster until upgraded. Pre-jay peers (no type 121 at all) keep ignoring it as before.
Issues closed
- #3 — partner nodes need parity with local clients
v1.3.5 — Trunk owner-relay: three-way mesh conversations now work
Highlights
Fix: trunk is now a true audio mesh
Before this release, a reflector that received trunk audio/talker/flush for a TG it owned broadcast the audio only to its local V2 clients and satellites — never to other trunk peers. That made three-way conversations impossible whenever the speaker and the intended listener sat on different non-owner reflectors.
Concrete example that used to silently fail:
- refA (prefix `240`), refB (prefix `262`), refC (prefix `222`), all fully trunked
- A client on refA transmits on TG 2626 (owned by refB, longest-prefix match)
- A client on refB hears it — ✅
- A client on refC hears nothing — ❌ (refB delivered locally and stopped)
Now, when TrunkLink::handleMsgTrunk* runs on the TG's owner, Reflector re-forwards the event to every other trunk peer with interest (shared / cluster / peer-interest), via the existing onLocal* filter. Same-link exclusion keeps it single-hop and loop-free.
Scales to any mesh size
"Three-way" is just the smallest interesting case — the fanout is N-wide by construction. With owner B and N−1 non-owners in a full mesh:
- sender → B directly (prefix match)
- B fans out to the other N−2 peers (source link excluded)
- each non-owner delivers to its local clients and stops (non-owners do not relay → no loops)
Complexity per audio frame on the owner is O(N) TCP sends — inherent to any mesh fanout, not a new limitation. 4, 5, 20 reflectors all work on the same code path.
Caveat: peer interest (m_peer_interested_tgs) is populated when a peer emits TrunkTalkerStart/TrunkAudio on a TG, so a non-owner's clients must PTT on a TG at least once before the owner will forward that TG back to them. Same semantics as test_17 — not new.
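The owner-side relay above can be sketched as a filter over the trunk links (a simplified model, with a per-link interest set standing in for `m_peer_interested_tgs`):

```python
# Hypothetical sketch of the owner fanout: forward to every interested
# trunk peer except the source link. Non-owners never relay, so the
# mesh stays single-hop and loop-free by construction.
def owner_fanout(trunk_links, src_link, tg):
    return [
        link for link in trunk_links
        if link is not src_link and tg in link["interested_tgs"]
    ]

refA = {"name": "refA", "interested_tgs": {2626}}
refC = {"name": "refC", "interested_tgs": {2626}}
refD = {"name": "refD", "interested_tgs": set()}   # never PTT'd on 2626

# Audio from refA on TG 2626 arrives at the owner: forwarded to refC
# only (source excluded, uninterested peers skipped).
out = owner_fanout([refA, refC, refD], src_link=refA, tg=2626)
assert [l["name"] for l in out] == ["refC"]
```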
What changed
- `Reflector::isLocalTG(tg)` — longest-prefix match across `m_local_prefixes` vs. `m_all_prefixes`; true iff our prefix wins globally.
- `Reflector::forwardTrunkAudio/Flush/TalkerStart/TalkerStop ToOtherTrunks(src, …)` — iterate `m_trunk_links`, skip `src`, call the matching `onLocal*` on each peer link.
- `TrunkLink::handleMsgTrunk{TalkerStart,TalkerStop,Audio,Flush}` — after existing local handling, if owner, call the fanout helper.
- Dynamic `addTrunkLink` / `removeTrunkLink` now refresh `m_all_prefixes` too.
Tests
- New `test_32_three_way_conversation` — one distinct V2 client per reflector, all select the same TG, then each talks in round-robin. Asserts all six sender→listener pairs deliver UDP audio/flush. 2-pass structure (prime peer interest, then measure) + periodic TCP heartbeats so idle listener sockets don't hit `HEARTBEAT_RX_CNT_RESET`.
- Third test callsign `N0THRD` added to `topology.py::TEST_CLIENTS`. Distinct callsigns are required because the owner's per-TG trunk-talker slot is keyed only by TG — two non-owners forwarding with the same callsign cannot be disambiguated. In real-world amateur use callsigns are globally unique, so this is purely a test-harness constraint.
35/35 trunk tests pass. No wire-protocol change.
v1.3.4 — SATELLITE_FILTER: opt-in bidirectional TG scope for satellite links
Highlights
New: SATELLITE_FILTER
A satellite can now narrow the set of TGs it participates in, in both directions, via a single [GLOBAL] config key on the satellite side:
`SATELLITE_FILTER=24*,262*,2427-2438`

Grammar matches the existing TgFilter used by TRUNK_x links: exact TG, `24*` prefix, `2427-2438` range, comma-separated. Empty or absent ⇒ no filtering (pre-existing behavior).
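That grammar can be sketched as a small matcher (a hypothetical re-implementation for illustration, not the C++ TgFilter code):

```python
# Hypothetical TgFilter-style matcher: exact TGs, "24*" prefixes,
# "2427-2438" ranges, comma-separated; empty spec means allow-all.
def tg_filter(spec):
    terms = [t.strip() for t in spec.split(",") if t.strip()]
    def matches(tg):
        if not terms:
            return True                          # empty filter: allow all
        for term in terms:
            if term.endswith("*"):
                if str(tg).startswith(term[:-1]):
                    return True                  # prefix term
            elif "-" in term:
                lo, hi = map(int, term.split("-"))
                if lo <= tg <= hi:
                    return True                  # range term
            elif term == str(tg):
                return True                      # exact term
        return False
    return matches

f = tg_filter("24*,262*,2427-2438")
assert f(2400) and f(26201) and f(2430)
assert not f(9000)
assert tg_filter("")(123)                        # absent filter passes all
```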
Bidirectional scope:
- Outbound (satellite → parent): the satellite suppresses local events for non-matching TGs before they ever leave.
- Inbound (parent → satellite): the satellite advertises the filter to the parent via `MsgTrunkFilter` (type 122) right after authenticating; the parent then skips forwarding non-matching TGs back.
Backwards compatible. Older parents silently ignore MsgTrunkFilter (unknown message types are skipped per the protocol). In that case the satellite-side outbound suppression still works, but the parent keeps forwarding all TGs — the satellite has no local signal it's in this state, so operators should verify by checking the parent's /status (see below).
Observability
The active filter surfaces as satellites[<id>].filter in the parent's SatelliteLink::statusJson(). This single field feeds all three observer surfaces for free:
- HTTP `/status.satellites[<id>].filter`
- MQTT retained `status` topic
- Redis `live:satellite:<id>` snapshot
Key is omitted when no filter is active.
Tests
Two new integration tests in test_trunk.py:
- `test_16b` — satellite sends `MsgTrunkFilter`; the parent forwards the matching cluster TG but drops the non-matching one.
- `test_16c` — active filter appears under `/status.satellites[<id>].filter`, and clears when the satellite sends an empty filter.
Trunk suite is now 34/34; twin and Redis suites unchanged.
Credits
Original satellite-filter patch contributed by Jens DJ1JAY / FM-Funknetz.
v1.3.3 — Logging facade + TWIN/trunk roster observability
Highlights
Logging facade
geulog:: replaces ad-hoc TRUNK_DEBUG flags and direct stderr writes. Async worker, 7 subsystems (core, client, trunk, twin, satellite, redis, mqtt), configurable via LOG= in [GLOBAL] and live-reloaded over the command PTY. Documented in docs/LOGGING.md.
Twin protocol observability (fixes #3)
Twin-connected reflectors now exchange the full connected-station roster over the [TWIN] link (reusing MsgTrunkNodeList, no new wire type).
- `/status` exposes a new `twin` object: connection/hello state for both directions, peer id, priorities, and `twin.nodes` — the partner's roster.
- MQTT publishes the twin partner's roster under `nodes/<peer_id>`.
- Redis `pushPeerNode` / tombstones also cover twin partners.
- Cleared automatically when the twin link goes fully inactive.
/status parity for trunk peers
/status.trunks[SECTION].nodes now surfaces the per-peer roster that was already flowing to MQTT and Redis. Dashboards can attribute every node to a specific reflector from a single /status call.
Other
- `docs/LOGGING.md` sysop reference (linked from README).
- Redis peer-node mirror + input sanitization on trunk-received strings.
- MQTT init fixed to work in satellite mode; retained per-client status blobs published.
- reflector: fix `clientStatus()` `isMember` check to target `nodes` subtree.
Caveat (pre-existing)
In PAIRED trunk mode, both paired peers share one peerId(), so the latest roster arrival overwrites the prior one. MQTT, Redis, and /status.trunks[X].nodes all reflect this. Fixing it requires per-connection peer_id attribution and is tracked as future work.
v1.3.2 — Redis peer-node mirror + trunk string sanitization
Extends the jay-port node-list feature so incoming peer rosters land in
Redis alongside the existing MQTT publish, and hardens the trunk input
path against malformed strings from untrusted peers.
Redis peer-node mirror
Every reflector was already pushing local clients/talkers into
live:client:* / live:talker:*. Peer node lists received over the
trunk (MsgTrunkNodeList, type 121) were previously fanned out to MQTT
only — consumers reading Redis got a partial view of the mesh.
This release mirrors them too. For each peer roster entry the
reflector now writes:
<prefix>:live:peer_node:<peer_id>:<callsign> (HSET, 60s TTL)
peer_id = sanitized hello id
callsign = sanitized callsign
tg = current talk group
updated_at = unix ts
lat/lon = only when finite and in range
qth_name = only when non-empty
The periodic refreshLiveExpire heartbeat keeps the TTL alive. A
per-peer callsign cache diffs successive snapshots so dropped
callsigns are DEL'd immediately rather than waiting for expiry, and
a full trunk-down (both directions inactive) deletes all of that
peer's entries up-front.
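The snapshot diff can be sketched as a pure function (`diff_snapshot` is a hypothetical helper for illustration; the real code also writes tg, qth, and coordinate fields):

```python
# Hypothetical sketch of the per-peer snapshot diff: compare two
# consecutive rosters so dropped callsigns are DEL'd immediately
# instead of waiting out the 60 s TTL.
def diff_snapshot(prefix, peer_id, prev, cur):
    key = lambda cs: f"{prefix}:live:peer_node:{peer_id}:{cs}"
    to_del = [key(cs) for cs in sorted(prev - cur)]   # DEL right away
    to_set = [key(cs) for cs in sorted(cur)]          # HSET + 60 s TTL
    return to_del, to_set

prev = {"N0ONE", "N0TWO"}
cur = {"N0TWO", "N0THREE"}
to_del, to_set = diff_snapshot("gr", "refB", prev, cur)
assert to_del == ["gr:live:peer_node:refB:N0ONE"]
assert to_set == [
    "gr:live:peer_node:refB:N0THREE",
    "gr:live:peer_node:refB:N0TWO",
]
```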
String sanitization at trunk receive
All untrusted strings arriving over the trunk are now sanitized in
TrunkLink::handleMsgTrunkNodeList before they hit Redis keys, MQTT
payloads, or log output:
| Field | Policy | Cap |
|---|---|---|
| `peer_id` (hello id) | strip control chars + `:`, truncate | 64 bytes |
| `callsign` | strip control chars + `:`, truncate | 32 bytes |
| `qth_name` | strip control chars, keep UTF-8, truncate | 64 bytes |
| `lat` / `lon` | require finite + in [-90,90] / [-180,180] | — |
Entries whose callsign becomes empty after sanitization are dropped
with a single *** WARN line summarising the count. Entries with
out-of-range coordinates keep the callsign but lose lat/lon.
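That policy can be sketched as follows (a hypothetical re-implementation, not the C++ receive path; the `:` strip matters because `:` is the Redis key separator):

```python
# Hypothetical sketch of the sanitization policy: strip control chars,
# optionally strip ':', truncate to a per-field byte cap; coordinates
# must be finite and in range or they are dropped (callsign kept).
import math

def sanitize(s, cap, strip_colon=True):
    kept = "".join(
        c for c in s
        if ord(c) >= 0x20 and not (strip_colon and c == ":")
    )
    # Byte-cap truncation; "ignore" drops a partial trailing UTF-8 char.
    return kept.encode("utf-8")[:cap].decode("utf-8", "ignore")

def sane_coords(lat, lon):
    ok = (all(map(math.isfinite, (lat, lon)))
          and -90 <= lat <= 90 and -180 <= lon <= 180)
    return (lat, lon) if ok else None

assert sanitize("N0:CALL\x07", 32) == "N0CALL"
assert sanitize("A" * 100, 64) == "A" * 64
assert sane_coords(float("nan"), 0.0) is None   # lat/lon dropped
assert sane_coords(50.1, 8.6) == (50.1, 8.6)
```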
TrunkLink disconnect ordering
onInboundDisconnected and onDisconnected (outbound) previously
fired Reflector::onTrunkStateChanged(..., up=false) before
clearing m_inbound_con / m_ob_hello_received. Consumers that
inspected isActive() in the state-change callback — including the
Redis peer-node cleanup added here — saw stale state. Reordered so
the notification fires after the per-direction state reset.
TrunkLink::isActive() is now public so external state-change
consumers can check overall link liveness.
Tests
Two new classes in tests/test_redis.py:
- `RedisPeerNodeTest.test_peer_node_list_creates_and_updates_hashes` — inbound node list populates `live:peer_node:*`, a shrunk list DELs dropped callsigns and refreshes remaining ones.
- `RedisPeerNodeTest.test_hostile_strings_are_sanitized` — 6 adversarial entries exercise control-char / colon stripping, over-length truncation, dropped entries, NaN/out-of-range coordinates.
- `RedisPeerNodeTest.test_peer_nodes_cleared_on_trunk_disconnect` — closing the inbound trunk with outbound unroutable clears all `peer_node` keys.
Harness changes:
- `tests/topology_redis.py` + `generate_redis_configs.py` expose the trunk port (45302) on the Redis-test reflector so the host can inject frames.
- `tests/test_trunk.py` gains `build_node_list()` and `TrunkPeer.send_node_list()` wire helpers (reused by the new tests).
All 13 Redis tests and all 30 main trunk tests pass on this release.
Upgrade notes
No config changes required.
- If you weren't using `[REDIS]`, behaviour is unchanged.
- If you were using `[REDIS]`, you gain the new `live:peer_node:*` keys automatically. Existing `live:client:*` / `live:talker:*` / `live:trunk:*` schemas are untouched.
Fully wire-compatible with v1.3.1 peers. Older peers sending node
lists with characters that this version strips will see the
sanitized form in Redis; the raw form still flows to MQTT exactly as
before.
v1.3.1 — TwinLink satellite audio/flush forwarding fix
Fixes an asymmetry between TrunkLink and TwinLink: audio and flush frames arriving over the [TWIN] mirror are now forwarded to satellites attached to the receiving twin, matching the behaviour already present for trunk-delivered audio.
What changed
- `TwinLink::handleMsgTrunkAudio` now calls `Reflector::forwardAudioToSatellitesExcept(nullptr, tg, audio)` after broadcasting to local UDP clients.
- `TwinLink::handleMsgTrunkFlush` now calls `Reflector::forwardFlushToSatellitesExcept(nullptr, tg)`.
Before this fix, a satellite attached to one twin would see MsgTrunkTalkerStart/Stop for traffic originated on the partner (those go through TGHandler → Reflector::onTrunkTalkerUpdated, which already forwards to satellites), but the audio frames and flush marker were dropped silently. The result was a ghost talker with no voice.
Tests
Two new integration tests in tests/test_twin.py exercising the client → TWIN → satellite audio path:
- `test_09_satellite_handshake_with_twin_member` — a `SatellitePeer` handshakes with a twin member and appears in its `/status.satellites` listing.
- `test_10_satellite_receives_twin_mirrored_audio` — a V2 client on `ref2` transmits on TG `26201`; the satellite on `ref1` receives `MsgTrunkTalkerStart` and `MsgTrunkAudio` frames via the TWIN-mirror path.
The twin topology generator (tests/generate_configs.py) now emits a [SATELLITE] section on the designated twin parent (ref1) and maps its port in the test compose file. SatellitePeer.connect_satellite accepts an explicit port= override so twin tests can target the non-default port without duplicating the class.
All 10 twin integration tests pass against this release.
Upgrade notes
No config changes required. The fix only affects reflectors running both [TWIN] and [SATELLITE] on the same node — existing twin-only or satellite-only deployments are unaffected.
v1.3.0 — TWIN (HA-pair) protocol
A new [TWIN] link type that pairs two reflectors sharing a LOCAL_PREFIX and makes them appear as one logical trunk peer to the rest of the mesh. External reflectors see the pair via a single [TRUNK_x] section with PAIRED=1 and a multi-host HOST= list; each frame of a transmission sticks to one socket, with instant failover to the other on TCP failure.
Highlights
- `[TWIN]` section pointing at the partner reflector — both twins declare the same `LOCAL_PREFIX` and the twin link mirrors full `TGHandler` state (local-client talker, external-trunk-talker, audio, flush, roster). 2 s TX / 5 s RX heartbeat on `TWIN_LISTEN_PORT` (default 5304).
- `PAIRED=1` flag on `[TRUNK_x]` — the external peer opens outbound connections to every host in a comma-separated `HOST=` list and accepts inbound from each. Sending uses sticky-per-transmission selection with instant failover (no holdoff).
- New messages: `MsgTwinExtTalkerStart/Stop` (types 123/124), `MsgTrunkHello::ROLE_TWIN=2`. `TGHandler` gains `setTrunkTalkerForTGViaPeer`, `clearTrunkTalkersForPeer`, `peerIdForTG` for per-peer attribution of external trunk-talker state.
- Inbound validation hardened: twin inbound rejects role-mismatch, HMAC failure, and `local_prefix` mismatch before handoff. `sendMsg*` paths guarded against writing to closed sockets; inbound disconnect signal clears the handle so reconnects aren't rejected.
Configuration
See docs/TWIN_PROTOCOL.md for the full spec. Commented examples in svxreflector.conf.in under [TWIN] and the PAIRED=1 external-trunk example.
Tests
8 new integration tests in tests/test_twin.py, wired into run_tests.sh (runs after the existing trunk suite on a separately generated 4-reflector topology):
- Twin handshake (authenticated hello exchange)
- No HMAC / prefix mismatch on startup
- `PAIRED=1` trunk reports `connected=True` on the non-pair side
- Return leg from each pair member to the non-pair reflector
- Clean startup logs (no `ERROR [TWIN]`)
- Kill / restart partner → RX timeout + re-handshake
- Kill one pair member → external peer stays connected via the other (sticky failover)
- End-to-end audio mirror: V2 client on one twin talks; V2 client on the other receives UDP audio + flush
Non-goals
- Quorum witness for split-brain prevention (brief artifacts during twin-link outage are tolerated and documented).
- UDP multicast for audio between same-LAN twins (TCP framing is fine at expected bandwidths).
- More than two nodes per twin group.
See docs/TWIN_PROTOCOL.md for details.
v1.2.0 — Redis-backed config store
Optional Redis backend for runtime configuration and live state — the reflector can now be driven by a web dashboard without restarts, while remaining byte-compatible with the previous .conf-only behavior when no [REDIS] section is present.
Highlights
- Users, password-groups, cluster TGs, and per-trunk dynamic settings (`BLACKLIST_TGS`, `ALLOW_TGS`, `TG_MAP`) can be sourced from Redis instead of the config file.
- Trunk peers themselves (host / port / secret / remote_prefix / peer_id) can be added and removed at runtime via a pub/sub event — no reflector restart.
- Change notifications via `<KEY_PREFIX>:config.changed`; the reflector reloads only the affected scope (users / cluster / trunk:<section> / all).
- Live state is published to Redis (`live:client:*`, `live:talker:*`, `live:trunk:*`) through a bounded queue drained every 75 ms with 60 s TTLs refreshed every 30 s. The audio path never blocks on Redis.
- Resilience: startup exits cleanly if Redis is unreachable; mid-flight disconnects reconnect with exponential backoff (1 s → 30 s cap) and emit a full reload on resume.
- Migration tool: `svxreflector --import-conf-to-redis [--dry-run]` copies the relevant sections from an existing `.conf` into Redis. Idempotent.
- Mute state exposed in the `/status` JSON as a per-trunk `muted` array (for dashboard display). Mute commands continue to go through the PTY.
- `/status` additions: new `redis.live_queue_size` and `redis.dropped_live_writes` counters.
Configuration
Add a [REDIS] section to svxreflector.conf — see docs/REDIS.md for the schema reference, dashboard operation cookbook, and failure modes. The commented template is also in svxreflector.conf.in.
Dependencies
- Build: `libhiredis-dev`
- Runtime: `libhiredis` shared library
Both are available in Debian / Ubuntu / Alpine package archives. The Dockerfile has been updated accordingly.
Tests
- 10 new Redis integration tests (parallel harness; `tests/run_redis_tests.sh up|test|down`).
- Legacy trunk suite: 27/27 passing (up from 23/27). Three pre-existing bugs resolved:
  - Test harness was missing `COMMAND_PTY=` in the generated config, so every PTY-driven test silently no-op'd.
  - `statusJson::active_talkers` dropped TGs that had been remapped by `TG_MAP` because the per-peer filter used the remote prefix.
  - Python `MsgTrunkNodeList` parser used `u32` vector-length prefixes; the C++ wire format uses `u16`.
Non-goals (out of scope)
- The web dashboard itself. This release provides the reflector-side contract only.
- Password hashing at rest (plaintext, matching the `.conf` convention).
- Audit log of configuration changes.
See `docs/REDIS.md` for full details.