Feat: Proactive Weather Sentinel using apscheduler#72
DivyanshuChipa wants to merge 3 commits into master from
Conversation
… and open-meteo

- Added apscheduler and httpx to requirements.txt.
- Created `backend/tasks.py` with an async job to check morning weather (filtered strictly by > 2.0 mm rain or > 40.0 °C heat) and invoke the AI using a hidden system prompt.
- Handled dynamic location configuration by updating `get_default_location()` in `backend/users.py` with a fallback to Maheshpura coordinates.
- Integrated the scheduler gracefully using FastAPI's modern `@asynccontextmanager` lifespan in `backend/server.py` and scheduled it to run at 07:00 AM IST.
- Saved AI-generated weather warnings to the DB under "Family Group" and broadcast the payload asynchronously to all connected users.

Co-authored-by: DivyanshuChipa <211708943+DivyanshuChipa@users.noreply.github.com>
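The strict thresholds above (> 2.0 mm rain or > 40.0 °C heat) could be captured in a small helper like the following sketch. The function name `is_extreme_weather` is an illustrative assumption, not code from this PR; the `precipitation_sum` and `temperature_2m_max` keys match what Open-Meteo's daily forecast endpoint returns when those variables are requested.

```python
# Minimal sketch of the extreme-weather filter described above.
# Thresholds are strict (>): values exactly at the limit do not trigger.

RAIN_THRESHOLD_MM = 2.0
HEAT_THRESHOLD_C = 40.0

def is_extreme_weather(daily: dict) -> bool:
    """Return True if today's forecast crosses either threshold."""
    rain = daily.get("precipitation_sum", [0.0])[0]
    heat = daily.get("temperature_2m_max", [0.0])[0]
    return rain > RAIN_THRESHOLD_MM or heat > HEAT_THRESHOLD_C
```

A calm forecast (say 0.4 mm and 31 °C) is filtered out; only a day strictly above either limit would wake the AI.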
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode: when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 39962fa120
backend/tasks.py
Outdated
```python
)

# Trigger AI in background (we use the main loop or asyncio thread if needed)
ai_reply = ask_ai(prompt=system_prompt, config=config, history=[], sender="System")
```
Offload AI call from scheduler coroutine
`check_morning_weather` is an async APScheduler job, but it invokes `ask_ai` directly; `ask_ai` performs blocking `requests.post` calls (including a 100 s fallback timeout), so when an alert is triggered this can block the main asyncio event loop and stall WebSocket/HTTP handling for all users until the model call returns. Run the AI call in a worker thread (e.g., `await asyncio.to_thread(...)`) or make the AI client fully async.
Useful? React with 👍 / 👎.
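The offloading pattern Codex suggests can be sketched as below. The `ask_ai` body here is a stand-in that simulates the blocking HTTP round-trip; only the `asyncio.to_thread` call mirrors the actual fix.

```python
import asyncio
import time

# Stand-in for the blocking ask_ai call (which does requests.post internally);
# the sleep simulates the HTTP round-trip to the model.
def ask_ai(prompt: str) -> str:
    time.sleep(0.05)
    return f"reply to: {prompt}"

async def check_morning_weather() -> str:
    # Offload the blocking call to a worker thread so the event loop
    # keeps serving WebSocket/HTTP traffic while the model responds.
    return await asyncio.to_thread(ask_ai, "weather warning prompt")

result = asyncio.run(check_morning_weather())
print(result)  # reply to: weather warning prompt
```

`asyncio.to_thread` (Python 3.9+) runs the function in the default thread-pool executor and awaits its result, which is the lightest-touch way to keep a synchronous client out of the event loop.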
backend/tasks.py
Outdated
| # Trigger AI in background (we use the main loop or asyncio thread if needed) | ||
| ai_reply = ask_ai(prompt=system_prompt, config=config, history=[], sender="System") | ||
|
|
||
| if ai_reply: |
Guard proactive alerts when AI is disabled
This path treats any non-empty `ai_reply` as sendable, but `ask_ai` returns a non-empty admin-disabled string when `ai_enabled` is false (the default config), so extreme-weather runs will broadcast an "AI processing is currently disabled" chat message to the Family Group instead of skipping the proactive alert. Add an explicit `ai_enabled` check before calling `ask_ai` (or filter this sentinel response) to avoid noisy false alerts.
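A guard along the lines Codex describes might look like this sketch. The `should_broadcast` helper, the sentinel string, and the `config` shape are illustrative assumptions about the PR's code, not its actual identifiers.

```python
# Hypothetical guard: skip the proactive alert when AI is disabled,
# and filter out the admin-disabled sentinel reply just in case.
AI_DISABLED_SENTINEL = "AI processing is currently disabled"

def should_broadcast(ai_reply: str, config: dict) -> bool:
    if not config.get("ai_enabled", False):
        return False  # never call/broadcast when the admin has AI off
    return bool(ai_reply) and AI_DISABLED_SENTINEL not in ai_reply
```

Checking `ai_enabled` before invoking the model is the cheaper fix; filtering the sentinel is a belt-and-braces fallback in case the config check is missed elsewhere.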
- Update `ask_ai` invocation in `backend/tasks.py` to use `await asyncio.to_thread(ask_ai, ...)` to ensure the synchronous AI generation does not block the main async event loop.

Co-authored-by: DivyanshuChipa <211708943+DivyanshuChipa@users.noreply.github.com>
- Replace initial `backend/tasks.py` implementation with Codex's refined logic featuring offline delivery tracking (`create_delivery_entries`) and structured standard logging.
- Update `backend/users.py` with Codex's config helper `set_default_location` and streamlined exception handling in `get_default_location`.

Co-authored-by: DivyanshuChipa <211708943+DivyanshuChipa@users.noreply.github.com>
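The fallback behaviour described for `get_default_location` could take roughly this shape. The config layout and the zeroed coordinates are placeholders, not the PR's real values (the actual code falls back to Maheshpura's coordinates).

```python
import logging

logger = logging.getLogger(__name__)

# Placeholder fallback; the real code stores Maheshpura's actual coordinates.
FALLBACK_LOCATION = {"name": "Maheshpura", "lat": 0.0, "lon": 0.0}

def get_default_location(config: dict) -> dict:
    """Return the configured location, falling back to the hardcoded default."""
    try:
        loc = config["default_location"]
        return {"name": loc["name"], "lat": float(loc["lat"]), "lon": float(loc["lon"])}
    except (KeyError, TypeError, ValueError) as exc:
        # Streamlined handling: log once, return the safe fallback.
        logger.warning("Bad location config (%s); using fallback", exc)
        return FALLBACK_LOCATION
```

Catching the narrow trio `(KeyError, TypeError, ValueError)` keeps genuinely unexpected errors visible while still surviving a missing or malformed config entry.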
This change implements the Level 3 Proactive Weather Sentinel as per the requirements.
A scheduled task checks the Open-Meteo API daily at 07:00 AM IST. If extreme conditions (precipitation_sum > 2.0 mm or temperature_2m_max > 40.0 °C) are detected in today's forecast, it dynamically triggers the Lumir AI engine using a strict hidden prompt. The AI's warning is saved to the SQLite database and gracefully broadcast via WebSocket to all connected users, dynamically derived from `get_all_users()`. The feature safely logs network/logic errors to prevent crashing the main FastAPI application and correctly manages its dependencies (apscheduler, httpx).

PR created automatically by Jules for task 1831087400239551450 started by @DivyanshuChipa
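The lifespan integration described above can be sketched with the pattern below. `FakeScheduler` stands in for APScheduler's `AsyncIOScheduler` (the real code would also pass an IST-aware cron trigger for 07:00), so everything beyond the `@asynccontextmanager` shape is an illustrative assumption.

```python
import asyncio
from contextlib import asynccontextmanager

# Stand-in for apscheduler.schedulers.asyncio.AsyncIOScheduler;
# records jobs and a running flag so the lifecycle is observable.
class FakeScheduler:
    def __init__(self):
        self.running = False
        self.jobs = []
    def add_job(self, func, hour, minute):
        self.jobs.append((func, hour, minute))
    def start(self):
        self.running = True
    def shutdown(self):
        self.running = False

async def check_morning_weather():
    pass  # real job fetches the Open-Meteo forecast and may call the AI

scheduler = FakeScheduler()

@asynccontextmanager
async def lifespan(app):
    # Startup: register the daily 07:00 job and start the scheduler.
    scheduler.add_job(check_morning_weather, hour=7, minute=0)
    scheduler.start()
    yield
    # Shutdown: stop the scheduler so the app exits cleanly.
    scheduler.shutdown()

async def demo():
    async with lifespan(None):
        assert scheduler.running
    assert not scheduler.running

asyncio.run(demo())
```

With FastAPI, the same `lifespan` function is passed as `FastAPI(lifespan=lifespan)`; FastAPI enters the context on startup and exits it on shutdown, which is exactly the "graceful" integration the PR describes.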