A small ESP-NOW–based sensor network for environmental data logging, designed to survive field power cuts and resume logging automatically, with explicit time sync between mothership and nodes.
- docs/README.md - documentation index
- firmware/README.md - firmware structure overview
- firmware/nodes/README.md - node project overview
- firmware/mothership/src/README.md - mothership source layout and build target
- firmware/nodes/sensor-node/src/README.md - node source layout and build target
- CONTRIBUTING.md - repo organization and contribution rules
The system is built around:
- Mothership (ESP32-S3):
  - Runs a Wi-Fi access point (`Logger001`)
  - Exposes a web UI dashboard
  - Manages node discovery, pairing, deployment, and unpairing
  - Stores incoming sensor data to CSV on an SD card
  - Keeps time via a DS3231 RTC
  - Periodically sends TIME_SYNC messages to nodes
  - Persists node state (paired/deployed, IDs, names, wake interval) in NVS
  - Tracks time health per node (how fresh the last TIME_SYNC is)
- Sensor Nodes (ESP32-C3 Mini, one or more):
  - Measure environmental variables such as:
    - Air temperature (DS18B20 backend)
    - Soil volumetric water content + soil temperature (ADS1115 + thermistors backend)
  - Communicate with the mothership using ESP-NOW
  - Use a DS3231 RTC + Alarm 1 to drive their sampling interval
  - Persist state in NVS:
    - Mothership MAC
    - Deployed flag
    - Wake interval
    - RTC sync flag
    - Last time sync (Unix timestamp)
  - Automatically request time sync (`REQUEST_TIME`) when needed
  - Designed to resume operation automatically after a power cut (with RTC coin cell)
The ultrasonic anemometer documentation is maintained in the following aligned documents:
- docs/NODE-PCB-OVERVIEW.md — Hardware Design Document
- docs/NODE-FIRMWARE_NOTES.md — Firmware Architecture Document
- hardware/ultrasonic_anemometer/docs/MECHANICAL_DESIGN.md — Mechanical Design Document
- hardware/ultrasonic_anemometer/docs/TOF_WORKOUT_GUIDE.md — TOF geometry/constants and validation workflow
- ESP-NOW “mesh-ish” communication
  - Data: node → mothership (`sensor_data_message_t`)
  - Control: mothership → node (`PAIR_NODE`, `DEPLOY_NODE`, `SET_SCHEDULE`, `UNPAIR_NODE`, `TIME_SYNC`)
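To make the packet taxonomy concrete, here is a minimal sketch of the message tags and the data packet. The message names come from this README; the numeric tag values, field names, and field sizes in `sensor_data_message_t` are illustrative assumptions, not the repo's actual definitions. One real constraint worth noting: an ESP-NOW payload is limited to 250 bytes, so any such struct must fit within that.

```cpp
#include <cassert>
#include <cstdint>

// Message tags seen in this README; the explicit values are assumptions.
enum MessageType : uint8_t {
    MSG_DISCOVER_REQUEST = 0,
    MSG_DISCOVER_RESPONSE,
    MSG_PAIRING_REQUEST,
    MSG_PAIRING_RESPONSE,
    MSG_PAIR_NODE,
    MSG_DEPLOY_NODE,
    MSG_SET_SCHEDULE,
    MSG_UNPAIR_NODE,
    MSG_REQUEST_TIME,
    MSG_TIME_SYNC,
    MSG_SENSOR_DATA
};

// Hypothetical shape of the node -> mothership data packet; field names are
// guesses consistent with the CSV columns this README describes.
struct sensor_data_message_t {
    uint8_t  type;            // MSG_SENSOR_DATA
    char     nodeId[16];      // e.g. "NODE_001"
    char     sensorType[24];  // e.g. "SOIL1_VWC"
    float    value;           // measured value
    uint32_t unixTime;        // DS3231 timestamp
};
```

A struct of this shape fits comfortably under the 250-byte ESP-NOW payload limit, leaving room for growth.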
- DS3231 RTC integration (both sides)
  - Accurate timestamps on measurements
  - Configurable wake/sampling intervals via Alarm 1 on the node DS3231
  - Mothership DS3231 is the time authority; nodes sync to it
- TIME_SYNC protocol
  - Nodes send `REQUEST_TIME` when:
    - They’re bound to a mothership but `rtcSynced == false`, or
    - More than 24 h has passed since their last time sync
  - Mothership responds with `TIME_SYNC` carrying DS3231 time
  - Mothership can also broadcast fleet-wide time sync periodically
  - UI shows “Fresh / OK / Stale / Unknown” time health per node
- Sensor backend abstraction
  - Nodes use a small registry of logical “sensor slots”:

    ```cpp
    struct SensorSlot {
      const char* label;      // e.g. "DS18B20_TEMP_1", "SOIL1_VWC"
      const char* sensorType; // e.g. "DS18B20", "SOIL_VWC", "SOIL_TEMP"
    };
    extern SensorSlot g_sensors[];
    extern size_t g_numSensors;
    ```

  - Each backend populates slots and implements `read(index, float&)`:
    - DS18B20 backend (`sensors_ds18b20.*`)
      - Scans a OneWire bus and registers one slot per DS18B20:
        - Labels like `DS18B20_TEMP_1`, `DS18B20_TEMP_2`, …
        - Type string typically `DS18B20`
    - Soil moisture + temperature backend (`soil_moist_temp.*`)
      - Uses one ADS1115 on the root I²C bus (no mux) to provide:
        - `SOIL1_VWC` (ADS ch0) – θv from mV via polynomial calibration
        - `SOIL2_VWC` (ADS ch1)
        - `SOIL1_TEMP` (ADS ch2) – thermistor with Steinhart–Hart fit
        - `SOIL2_TEMP` (ADS ch3)
  - `sendSensorData()` walks `g_sensors[0..g_numSensors-1]` and sends one `SENSOR_DATA` packet per slot; the CSV uses the `label`/`sensorType` string as the `sensor_type` column.
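A compilable model of this registry, with a registration helper and a read dispatch, might look like the following. `registerSlot()`, `readSlot()`, and the fixed `MAX_SLOTS` capacity are illustrative names, not the repo's actual API; real backends would talk to DallasTemperature or the ADS1115 inside the read path.

```cpp
#include <cassert>
#include <cstddef>

struct SensorSlot {
    const char* label;       // e.g. "DS18B20_TEMP_1", "SOIL1_VWC"
    const char* sensorType;  // e.g. "DS18B20", "SOIL_VWC", "SOIL_TEMP"
};

static const size_t MAX_SLOTS = 16;  // capacity is an assumption
SensorSlot g_sensors[MAX_SLOTS];
size_t g_numSensors = 0;

// Called by each backend at init to claim a logical slot.
bool registerSlot(const char* label, const char* sensorType) {
    if (g_numSensors >= MAX_SLOTS) return false;
    g_sensors[g_numSensors++] = SensorSlot{label, sensorType};
    return true;
}

// Models the backend's read(index, float&); here it returns a dummy value
// instead of querying real hardware.
bool readSlot(size_t index, float& out) {
    if (index >= g_numSensors) return false;
    out = 21.5f;  // stand-in for an actual measurement
    return true;
}
```

With this shape, `sendSensorData()` only needs the loop `for (size_t i = 0; i < g_numSensors; ++i)` and never cares which backend owns a slot.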
- CSV logging to SD card (mothership)
  - Single `datalog.csv` with rows:

    ```
    timestamp,node_id,node_name,mac,sensor_type,value
    ```

    `sensor_type` is a free string such as:
    - `DS18B20_TEMP_1`, `DS18B20_TEMP_2`
    - `SOIL1_VWC`, `SOIL2_VWC`
    - `SOIL1_TEMP`, `SOIL2_TEMP`
  - Periodic mothership heartbeats: `timestamp,MOTHERSHIP,<mac>,STATUS,ACTIVE`
  - Optional TIME_SYNC fleet events logged as `TIME_SYNC_FLEET`
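A sketch of formatting one row in that column order. `formatCsvRow()` is an illustrative helper, not the repo's actual function, and the two-decimal value formatting is an assumption; the real firmware writes to the SD card file rather than returning a string.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Builds one datalog.csv row: timestamp,node_id,node_name,mac,sensor_type,value
std::string formatCsvRow(const char* timestamp, const char* nodeId,
                         const char* nodeName, const char* mac,
                         const char* sensorType, float value) {
    char buf[160];
    std::snprintf(buf, sizeof(buf), "%s,%s,%s,%s,%s,%.2f",
                  timestamp, nodeId, nodeName, mac, sensorType, value);
    return std::string(buf);
}
```

Because `sensor_type` is a free string taken from the slot label, new backends add new row kinds without any schema change on the mothership.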
- Web UI dashboard (from mothership)
  - View live RTC time (ticking in the browser)
  - Set global wake interval (1–60 minutes) → broadcasts `SET_SCHEDULE` to paired/deployed nodes
  - Node Manager:
    - See all nodes with state chips (Unpaired / Paired / Deployed)
    - Per-node “time health” (Fresh / OK / Stale / Unknown)
    - Configure & Start (ID, name, interval, Start/Stop/Unpair)
  - Node discovery (“Discover Nodes” button)
  - Download CSV log
- Persistent state in NVS
  - Mothership:
    - Paired/deployed nodes (`paired_nodes` namespace)
    - Node metadata (`node_meta` namespace: `id_<firmwareId>`, `name_<firmwareId>`)
    - Global wake interval (`ui` namespace)
  - Nodes:
    - Node state enum
    - `rtcSynced`, `deployedFlag`
    - `g_intervalMin` (wake interval)
    - `mothershipMAC`
    - `lastTimeSyncUnix`
- Power-loss resilience
  - After a hard power cut, both sides reload their state from NVS
  - If the node RTC still has valid time (coin cell present), deployed nodes resume sending data without manual intervention
  - If RTC power is lost, nodes fall back to a safe state and ask for fresh time
```
+---------------------------+                 +------------------------------+
|      Sensor Node(s)       |     ESP-NOW     |          Mothership          |
|      (ESP32-C3 Mini)      |  <----------->  |        (ESP32-S3, AP)        |
+---------------------------+                 +------------------------------+
- Firmware ID (e.g. NODE_001)                 - Wi-Fi AP "Logger001"
- DS3231 RTC + Alarm 1                        - DS3231 RTC
- (Future) RTC INT → FET/wake                 - SD card (datalog.csv)
- NVS (MAC, deployedFlag, etc.)               - ESP-NOW manager
- Sensor backends:                            - Web UI (HTTP server)
    DS18B20                                   - Node Manager + TIME_SYNC
    soil_moist_temp (ADS1115)
- Packets:                                    Control packets:
    DISCOVER_REQUEST                            DISCOVER_RESPONSE / SCAN
    PAIRING_REQUEST                             PAIR_NODE / PAIRING_RESPONSE
    REQUEST_TIME                                DEPLOY_NODE
    SENSOR_DATA                                 SET_SCHEDULE
                                                UNPAIR_NODE
                                                TIME_SYNC (+ fleet broadcast)
```
(Current implementation polls the DS3231 Alarm 1 flag in firmware; no GPIO wiring to INT is required yet, but the design is ready for INT→FET / wake pin.)
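The polled Alarm 1 handling can be modeled as below. `FakeRtc` is a stand-in for the DS3231 driver (the real node would query the chip's status register over I²C, e.g. via RTClib's alarm helpers); the loop shape and the clear-before-send ordering are illustrative.

```cpp
#include <cassert>

// Stand-in for the DS3231 driver's alarm-flag accessors.
struct FakeRtc {
    bool alarm1Flag = false;
    bool alarmFired() const { return alarm1Flag; }
    void clearAlarm() { alarm1Flag = false; }
};

int g_samplesSent = 0;

// One pass of the node's main loop: if Alarm 1 fired, clear the flag first
// (so a slow send cannot mask the next alarm edge), then sample and send.
void pollAlarmOnce(FakeRtc& rtc) {
    if (rtc.alarmFired()) {
        rtc.clearAlarm();
        ++g_samplesSent;  // stands in for sendSensorData()
    }
}
```

Keeping the poll this small is what makes the future INT→FET wake pin a drop-in change: the same handler runs whether the flag was polled or the chip's INT line woke the MCU.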
At a high level, nodes are in one of three effective states:
- Unpaired
  - No mothership MAC known.
  - Node periodically sends DISCOVER_REQUEST and PAIRING_REQUEST broadcasts.
- Paired / Bound
  - Mothership MAC known and stored in NVS.
  - Node is “owned” but not yet deployed.
  - RTC may or may not be synced (rtcSynced flag).
- Deployed
  - Mothership MAC known.
  - deployedFlag == true.
  - RTC has been synced from a DEPLOY_NODE or TIME_SYNC message.
  - Node arms the DS3231 Alarm 1 based on g_intervalMin and sends data on each alarm.
Internally:
```cpp
bool hasMothershipMAC(); // derived from stored MAC in NVS
bool rtcSynced;          // true once time is set via DEPLOY or TIME_SYNC
bool deployedFlag;       // persisted "this node is deployed" flag

enum NodeState {
  STATE_UNPAIRED = 0, // no mothership MAC known
  STATE_PAIRED   = 1, // has mothership MAC, but not deployed
  STATE_DEPLOYED = 2  // has mothership MAC + deployed flag set
};
```
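The two persisted flags collapse into one effective state. A minimal sketch of that derivation (the helper name `effectiveState()` is illustrative; the enum is repeated here so the snippet is self-contained):

```cpp
#include <cassert>

enum NodeState {
  STATE_UNPAIRED = 0, // no mothership MAC known
  STATE_PAIRED   = 1, // has mothership MAC, but not deployed
  STATE_DEPLOYED = 2  // has mothership MAC + deployed flag set
};

// Without a mothership MAC nothing else matters; with one, the deployed
// flag decides between PAIRED and DEPLOYED.
NodeState effectiveState(bool hasMothershipMAC, bool deployedFlag) {
    if (!hasMothershipMAC) return STATE_UNPAIRED;
    return deployedFlag ? STATE_DEPLOYED : STATE_PAIRED;
}
```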
```cpp
struct NodeInfo {
  uint8_t   mac[6];
  String    nodeId;         // firmware ID (e.g. "NODE_001")
  String    nodeType;       // e.g. "AIR_SOIL"
  uint32_t  lastSeen;       // millis() of last packet
  bool      isActive;       // auto-false after 5 min silence
  NodeState state;          // UNPAIRED / PAIRED / DEPLOYED
  uint8_t   channel;

  // User-facing meta (from NVS "node_meta")
  String userId;            // numeric ID, e.g. "001"
  String name;              // friendly name, e.g. "North Hedge 01"

  // Time sync health
  uint32_t lastTimeSyncMs;  // millis() when last TIME_SYNC was sent
};
```

The Node Manager page uses lastTimeSyncMs to show a small “time health” pill per node:
- Fresh: < 6 h since last TIME_SYNC
- OK: 6–24 h
- Stale: > 24 h
- Unknown: no TIME_SYNC yet
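The classification driven by those thresholds can be sketched as follows. Treating `lastTimeSyncMs == 0` as "never synced" is an assumption, as is the helper name `classifyTimeHealth()`.

```cpp
#include <cassert>
#include <cstdint>

enum TimeHealth { HEALTH_FRESH, HEALTH_OK, HEALTH_STALE, HEALTH_UNKNOWN };

// Maps a TIME_SYNC age to the UI pill: Fresh < 6 h, OK 6–24 h, Stale > 24 h,
// Unknown when no TIME_SYNC has ever been sent.
TimeHealth classifyTimeHealth(uint32_t nowMs, uint32_t lastTimeSyncMs) {
    if (lastTimeSyncMs == 0) return HEALTH_UNKNOWN;  // assumed sentinel
    uint32_t ageMs = nowMs - lastTimeSyncMs;         // unsigned math handles wrap
    const uint32_t H6  =  6UL * 3600UL * 1000UL;
    const uint32_t H24 = 24UL * 3600UL * 1000UL;
    if (ageMs < H6)   return HEALTH_FRESH;
    if (ageMs <= H24) return HEALTH_OK;
    return HEALTH_STALE;
}
```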
The firmware/nodes/sensor-node firmware currently exposes:
- DS18B20 backend (`sensors_ds18b20.*`)
  - One OneWire bus on `DS18B20_PIN`
  - All DS18B20s on the bus are registered:
    - Slots like `DS18B20_TEMP_1`, `DS18B20_TEMP_2`, …
  - Each slot is read via DallasTemperature and sent as its own SENSOR_DATA packet
- Soil moisture + temp backend (`soil_moist_temp.*`)
  - One ADS1115 on the root I²C bus (same as the RTC)
  - Channels are used as:
    - ch0 → SOIL1_VWC (Probe 1 moisture, calibrated to θv)
    - ch1 → SOIL2_VWC (Probe 2 moisture)
    - ch2 → SOIL1_TEMP (Probe 1 thermistor → °C)
    - ch3 → SOIL2_TEMP (Probe 2 thermistor → °C)
  - Moisture uses polynomial coefficients ported from the earlier MicroPython logger
  - Thermistors use Steinhart–Hart fits based on your anchor measurements (cold/room/warm)
- On each DS3231 alarm:
  1. The node checks it is STATE_DEPLOYED, `rtcSynced == true`, and has a mothership MAC.
  2. It iterates `g_sensors` and sends one SENSOR_DATA packet per slot.
  3. The mothership logs one CSV row per packet.
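The thermistor → °C path can be sketched with the Steinhart–Hart equation, 1/T = A + B·ln(R) + C·ln(R)³. The coefficients below are generic textbook values for a 10 kΩ NTC, NOT the repo's fitted cold/room/warm anchors, and the divider math assumes a hypothetical 10 kΩ reference resistor fed from 3.3 V with the thermistor on the low side.

```cpp
#include <cassert>
#include <cmath>

// Steinhart–Hart conversion: thermistor resistance (ohms) -> degrees C.
// A/B/C below are generic 10k-NTC values, not the repo's fitted anchors.
double thermistorC(double thermistorOhms) {
    const double A = 1.009249522e-3;
    const double B = 2.378405444e-4;
    const double C = 2.019202697e-7;
    double lnR  = std::log(thermistorOhms);
    double invT = A + B * lnR + C * lnR * lnR * lnR;  // 1/T in 1/K
    return 1.0 / invT - 273.15;                       // Kelvin -> Celsius
}

// ADS1115 reading (mV) -> thermistor resistance, for an assumed divider:
// 3.3 V supply, 10k fixed resistor on top, thermistor to ground.
double dividerOhms(double milliVolts) {
    const double vRef = 3300.0;   // supply, mV (assumption)
    const double rRef = 10000.0;  // fixed resistor, ohms (assumption)
    return rRef * milliVolts / (vRef - milliVolts);
}
```

With these coefficients a 10 kΩ reading lands near room temperature, which is a quick sanity check when porting the real fitted anchors in.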
Node-initiated TIME_SYNC
- If `hasMothershipMAC() && !rtcSynced`:
  - Send REQUEST_TIME every ~30 s until TIME_SYNC arrives.
- If `rtcSynced == true`:
  - If more than 24 h have passed since lastTimeSyncUnix, send another REQUEST_TIME (rate-limited to at most once per 30 s).
- When TIME_SYNC is received:
  - The node sets the DS3231, `rtcSynced = true`, `lastTimeSyncUnix = dt.unixtime()`.
  - Persists everything to NVS.
  - If STATE_DEPLOYED and an interval is set, it re-arms the DS3231 alarm.
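The "should I send REQUEST_TIME now?" decision combines the three rules above (no sync yet, sync older than 24 h, ~30 s rate limit). A sketch, with an illustrative name and argument list; the firmware spreads this logic over its main loop rather than one predicate:

```cpp
#include <cassert>
#include <cstdint>

// Returns true when a REQUEST_TIME should go out this loop iteration.
// nowUnix/lastTimeSyncUnix are RTC seconds; nowMs/lastRequestMs are millis().
bool shouldRequestTime(bool hasMothershipMAC, bool rtcSynced,
                       uint32_t nowUnix, uint32_t lastTimeSyncUnix,
                       uint32_t nowMs, uint32_t lastRequestMs) {
    if (!hasMothershipMAC) return false;                  // nobody to ask
    if (nowMs - lastRequestMs < 30000UL) return false;    // ~30 s rate limit
    if (!rtcSynced) return true;                          // initial sync
    return (nowUnix - lastTimeSyncUnix) > 24UL * 3600UL;  // stale > 24 h
}
```

Putting the rate limit ahead of both triggers matches the "every ~30 s until TIME_SYNC arrives" behaviour: an unsynced node retries steadily instead of flooding the mothership.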
Fleet-wide TIME_SYNC
- On the mothership, `espnow_loop()` calls `broadcastTimeSyncIfDue(false)`:
  - If more than 24 h since the last fleet sync, broadcast TIME_SYNC to all PAIRED/DEPLOYED nodes.
  - Log the event and a CSV row.
- `broadcastTimeSyncIfDue(true)` forces an immediate fleet sync (e.g. from the UI).
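A minimal model of `broadcastTimeSyncIfDue(force)`. The explicit `nowMs` parameter, the `0 == never synced` sentinel, and the counter standing in for the actual ESP-NOW broadcast are all assumptions for the sake of a testable sketch:

```cpp
#include <cassert>
#include <cstdint>

uint32_t g_lastFleetSyncMs = 0;  // 0 = never (assumed sentinel)
int g_fleetBroadcasts = 0;       // stands in for broadcast + CSV row

// Broadcast when forced, on first call, or when the last fleet-wide
// sync is more than 24 h old.
void broadcastTimeSyncIfDue(bool force, uint32_t nowMs) {
    const uint32_t H24 = 24UL * 3600UL * 1000UL;
    bool due = (g_lastFleetSyncMs == 0) || (nowMs - g_lastFleetSyncMs > H24);
    if (force || due) {
        ++g_fleetBroadcasts;
        g_lastFleetSyncMs = nowMs;
    }
}
```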
Normal Power Cut (RTC coin cell OK)
- NVS state retained; node DS3231 keeps time.
- On reboot:
  - Mothership reloads nodes from NVS.
  - Node reloads its state: often STATE_DEPLOYED with a valid RTC.
  - Node re-arms Alarm 1 based on stored g_intervalMin.
  - Alarm-driven sends resume without manual intervention.
RTC Lost Power (no coin cell / dead cell)
- If the DS3231 lost its backup supply, `rtc.lostPower() == true` at boot.
- The node clears rtcSynced, deployedFlag, and lastTimeSyncUnix.
- It keeps mothershipMAC.
- Effective state becomes STATE_PAIRED.
- The node sends REQUEST_TIME messages until re-synced and re-deployed.
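The boot-time demotion can be sketched as below. The `PersistedState` struct and `applyRtcLostPower()` name are illustrative; the firmware reads these fields from NVS and asks the DS3231 driver whether backup power was lost.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative snapshot of the node's NVS-backed state.
struct PersistedState {
    bool hasMothershipMAC;
    bool deployedFlag;
    bool rtcSynced;
    uint32_t lastTimeSyncUnix;
};

// If the RTC lost its backup supply, forget all time-derived state but keep
// the mothership binding, so recovery needs a re-sync, not a re-pair.
void applyRtcLostPower(PersistedState& s, bool rtcLostPower) {
    if (!rtcLostPower) return;  // coin cell OK: resume exactly as stored
    s.rtcSynced = false;
    s.deployedFlag = false;
    s.lastTimeSyncUnix = 0;
    // s.hasMothershipMAC intentionally preserved -> effective STATE_PAIRED
}
```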
```powershell
# from repo root
& "$env:USERPROFILE\.platformio\penv\Scripts\platformio.exe" run -e esp32s3
& "$env:USERPROFILE\.platformio\penv\Scripts\platformio.exe" run -e esp32s3 -t upload
& "$env:USERPROFILE\.platformio\penv\Scripts\platformio.exe" device monitor -e esp32s3
```

Then:
- Connect to Logger001 (default password logger123)
- Open http://192.168.4.1/ in a browser
```powershell
# from repo root (sensor-node)
& "$env:USERPROFILE\.platformio\penv\Scripts\platformio.exe" run -d .\firmware\nodes\sensor-node -e esp32c3
& "$env:USERPROFILE\.platformio\penv\Scripts\platformio.exe" run -d .\firmware\nodes\sensor-node -e esp32c3 -t upload
& "$env:USERPROFILE\.platformio\penv\Scripts\platformio.exe" device monitor -d .\firmware\nodes\sensor-node -e esp32c3
```

The current node platformio.ini pins upload_port and monitor_port to COM3; adjust that file if your port differs.
On first boot you should see logs like:
```
STATE_UNPAIRED ...
📡 Discovery request sent
⏰ Bound but RTC unsynced → requesting initial TIME_SYNC
⏰ Time sync request sent
```
Once the mothership is running and you click “Discover Nodes” in the UI, the node will appear in /nodes.
- ✅ Robust node state model (Unpaired / Paired / Deployed)
- ✅ End-to-end Pair / Deploy / Stop / Unpair flows via web UI
- ✅ CSV logging to SD card with node ID + friendly name
- ✅ Web UI for discovery, control, and RTC management
- ✅ NVS persistence on both mothership and nodes
- ✅ Recovery from full power cuts (with RTC coin cell present)
- ✅ Explicit REQUEST_TIME / TIME_SYNC handshake per node
- ✅ Fleet-wide periodic TIME_SYNC
- ✅ Per-node time-health indicators in the Node Manager
- ✅ Modular sensor backend system:
- DS18B20 OneWire air temperature
- ADS1115-based soil moisture + soil temperature (2 probes)
- Wire DS3231 INT → FET / wake pin; enable true alarm-driven deep sleep
- Per-node wake intervals instead of a global broadcast
- Additional sensor backends (BME280, PAR, ultrasonic wind, etc.)
- OTA firmware updates (at least for the mothership)
- Diagnostic charts in web UI (per-node sparklines)
- Data ingestion helpers for R/Python pipelines