Releases: EudaLabs/HealthWithSevgi
v1.5.11 — Lighthouse re-audit (100 / 100 / 100 on three categories) + 100 % code documentation
Highlights
- Lighthouse re-audit sweep — re-ran the production preview build after adding `robots.txt`, swapping the eager recharts preload for per-route lazy chunks, enabling sourcemaps, and preview-proxying `/api`. Scores moved from the Sprint 5 baseline of 93 / 100 / 96 / 91 (Perf / A11y / BP / SEO) to 91 / 100 / 100 / 100 — +4 Best Practices, +9 SEO, kept Accessibility at 100.
- JSDoc + docstring coverage is now 100 % on both sides. Frontend JSDoc went from ~64 % to 38 / 38 = 100 %; backend `interrogate` went from 23.9 % to 188 / 188 = 100 %. Tests stayed green (191 / 191).
- Week 11 jury showcase placeholder — new wiki page with the required-deliverables checklist, live surfaces, 10-min deck outline, FE prep checklist, and risk register.
Notable code changes
- `frontend/public/robots.txt` added (was missing — SEO 91 → 100).
- `frontend/vite.config.ts`: `build.sourcemap: true`; `modulePreload.resolveDependencies` drops `vendor-charts` from the HTML preload list; new `preview.proxy` so Lighthouse audits against `pnpm preview` can reach `/api/*`.
- `frontend/src/App.tsx`: Steps 2/3/4/7 all lazy (recharts no longer eagerly loaded on the Step 1 landing — ≈70 KiB less initial JS); loading skeleton gets `min-height: 60vh`.
- `frontend/src/components/NavBar.tsx`: navbar uses `/logo-192.png` with `fetchPriority="high"` + `decoding="async"` (~35 KiB less).
- `frontend/src/styles/globals.css`: nav-dropdown trigger uses a black-alpha background so the white label clears 4.5 : 1 WCAG AA on the green nav.
- `frontend/src/pages/Step1ClinicalContext.tsx`: drops `opacity: 0.85` on the active-step description (was blending to 4.34 : 1).
- +143 backend docstrings across `app/**` (package modules, Pydantic schemas, routers, services, specialty registry); +10 frontend JSDoc blocks across the App shell, components, Step 4, glossary, and the legend colour map.
Evidence artifacts
- `docs/reports/Sprint5_Lighthouse.report.html` / `.json` — re-audit (21 Apr).
- `docs/reports/Sprint5_Lighthouse.report.baseline.html` / `.json` — baseline snapshot (20 Apr, pre-re-audit).
- `docs/reports/Sprint5_Lighthouse_Report.png` — re-audit screenshot.
- `docs/reports/Sprint5_Lighthouse_Report.baseline.png` — baseline screenshot (for before/after).
- `docs/reports/coverage/backend-docstring-coverage.txt` + `backend-docstring-badge.svg` — interrogate output.
- `docs/reports/coverage/frontend-jsdoc-coverage.txt` — per-directory JSDoc scan.
Wiki: Sprint-5 page now shows both Lighthouse screenshots side by side, the Reports table carries the coverage + re-audit artifacts, the Releases table lists v1.5.11, and the new [[Final Submission]] page collects the Week 11 jury deliverables.
Not user-facing
- 191 / 191 pytest passed after the docstring sweep. No runtime code changed — only new docstrings / JSDoc blocks, frontend lazy-load plumbing, and the Vite/CSS tweaks called out above.
v1.5.10 — Step 7 label fix
UI fix
- Step 7 AI Clinical Assessment card badge now correctly shows Gemma 4 instead of the stale Gemini 2.5 Flash label. The default provider switched to Gemma 4 in v1.5.8 but this label was missed.
v1.5.9 — Step 7 AI insights reliability
Bug fix
- Step 7 AI Clinical Assessment no longer goes blank after the spinner.
Gemma 4 reasoning calls occasionally exceeded the old 45s httpx timeout and surfaced as a silent `ReadTimeout('')`, leaving the card empty.
Changes
- Backend per-call LLM timeout: 45s → 200s
- Backend now retries transient failures (ReadTimeout, TransportError, 429, 5xx) once with jittered exponential backoff before falling back to the template.
- Backend logs the exception `repr` so the real error class is visible.
- Frontend axios timeout: 120s → 450s
- Step 7 now renders an "assessment unavailable, reload to retry" warning instead of a blank space when the LLM falls back to the template.
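The retry shape described above (one retry on transient failures, with jittered exponential backoff before falling back to the template) can be sketched as a small helper. This is an illustrative sketch, not the actual service code: `call_with_retry`, `backoff_delay`, and the delay parameters are assumed names and values, and the real backend also treats HTTP 429/5xx responses as transient.

```python
import random
import time

# Error types worth one retry; the real service also retries on
# httpx.TransportError and on 429 / 5xx responses.
TRANSIENT = (TimeoutError, ConnectionError)

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: uniform in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retry(fn, retries: int = 1, sleep=time.sleep):
    """Call fn(); on a transient failure, retry up to `retries` times with jittered backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except TRANSIENT:
            if attempt == retries:
                raise  # caller then falls back to the static template
            sleep(backoff_delay(attempt))
```

With `retries=1` this matches the behaviour in the note: one extra attempt, then the template fallback.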
Verification
- 5 consecutive `/api/insights/<model_id>` calls all returned `source=gemini` for all three tasks (`ethics_insight`, `case_studies`, `eu_ai_act_insights`). Times: 87s, 291s, 79s, 90s, 99s.
v1.5.8 — Gemma 4 default + MIT License
Highlights
Default LLM provider switched to Gemma 4 via Google AI Studio. Clinical insight generation (Step 7 Ethics, EU AI Act enrichment, case studies) now uses `gemma-4-26b-a4b-it` — a 26B Mixture-of-Experts model with only 4B active parameters per token, balancing clinical-reasoning quality with free-tier throughput.
Repository licensed under MIT. The informal "Academic / all rights reserved" note is replaced with a standard MIT LICENSE file so the project is usable as a portfolio reference and contributions are unambiguously welcome.
Changes
LLM / Insights
- `backend/app/services/insight_service.py`: default `GEMINI_MODEL` → `gemma-4-26b-a4b-it`
- Response parser now filters out `thought=true` parts (Gemma 4 returns chain-of-thought in a dedicated part) so the final answer is extracted correctly. Works transparently for any reasoning model, including future Gemini variants with thinking enabled.
- System instructions (clinical safety framing) confirmed to work on Gemma 4 via AI Studio.
- `.env.example` documents the new default; users can still pin a specific model via `GEMINI_MODEL`.
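The thought-part filtering can be illustrated with a minimal sketch. The dict shape below (`thought` and `text` keys) is a simplification of the API response parts, and `extract_answer` is a hypothetical name, not the actual parser:

```python
def extract_answer(parts: list) -> str:
    """Join the text of non-thought parts.

    Reasoning models emit chain-of-thought in parts flagged
    thought=True; the final answer must exclude those parts.
    """
    return "".join(
        p.get("text", "")
        for p in parts
        if not p.get("thought", False)
    )
```

Because parts without a `thought` flag pass through unchanged, the same filter is a no-op for non-reasoning models, which is why it "works transparently" across providers.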
Licensing
- Added `LICENSE` (MIT) at repo root. GitHub now detects the license as "MIT" (previously "Other").
- README badge: `License-Academic` → `License-MIT`.
- `frontend/package.json` declares `"license": "MIT"`.
Runtime requirements (HF Space)
Set `GEMINI_API_KEY` as a Secret in the Hugging Face Space settings (Variables and secrets → New secret). Without it, the provider falls back to static templates — no behaviour regression; only the AI-generated narrative disappears.
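The fallback rule amounts to a tiny provider selection; `pick_provider` below is a hypothetical helper for illustration, not the actual backend code:

```python
import os

def pick_provider(env=os.environ) -> str:
    """Use the LLM provider only when GEMINI_API_KEY is set and non-empty;
    otherwise fall back to static templates (no crash, just no AI narrative)."""
    return "gemini" if env.get("GEMINI_API_KEY") else "template"
```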
Release details
Upstream diff: v1.5.7...v1.5.8
v1.5.7 — Sprint 5 Polish: Brand Identity, Docker, A11y 100, JSDoc
Sprint 5 Polish
Brand identity
- New HealthWithSevgi logo wired up: navbar mark (44×44 white rounded tile over green header), favicon family (16/32/48 ICO + 16/32 PNG + 180 Apple Touch), OpenGraph + Twitter previews at 192px
- Logo is alpha-centroid-centred (not just bbox-centred) so the heart + figure sit visually in the middle of the tile even with the leaf flourish on the right
- `site.webmanifest` (PWA-ready) with theme colour `#1a7a4c`
Accessibility
- Lighthouse Accessibility: 91 → 100
- Fixed 5 `color-contrast` violations: removed the blanket `opacity: 0.45` that flattened locked wizard step labels to 1.79:1. Opacity is now scoped to the step-number circle; labels use `--text-secondary` (5.85:1). Session-privacy footer uses `--text-secondary` too.
- Added missing `<main>` landmark (`landmark-one-main`)
- Full WCAG math + before/after in `docs/wiki/Accessibility-Log.md`
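The ratios quoted above follow the standard WCAG 2.x contrast math, which is easy to reproduce. This sketch is generic WCAG luminance/contrast math plus a simple alpha blend, not project code, and the colours in the test are illustrative rather than the app's actual palette:

```python
def _lum(rgb):
    """WCAG 2.x relative luminance from 0-255 sRGB channels."""
    def chan(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """Contrast ratio (lighter + 0.05) / (darker + 0.05); AA body text needs >= 4.5."""
    l1, l2 = sorted((_lum(fg), _lum(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def blend(fg, bg, alpha):
    """Colour the user actually sees when fg is drawn at `alpha` over bg."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))
```

Blending text toward its background via opacity is exactly how a blanket `opacity: 0.45` can push an otherwise-passing label below 4.5:1, which is why scoping the opacity (rather than darkening the colour) fixed the violations.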
Docker
- `docker-compose.yml` pulls `ghcr.io/eudalabs/healthwithsevgi:latest` when available, falls back to a local multi-stage build
- Added healthcheck (probes `/api/specialties` every 10 s), `container_name`, and `restart: unless-stopped`
- Measured startup: 8 seconds — well inside the Sprint 5 ≤30 s target
Code documentation
- Added JSDoc to every frontend export in `api/`, `pages/`, and `components/charts/` — 100 % coverage (38 JSDoc blocks for 37 exports), clearing the Sprint 5 ≥80 % target
Sprint 5 planning
- `docs/seng430-sprints/sprint-5-task-distribution.md`: 24-task breakdown across BE (Efe + Berat), FE (Batu + Burak), QA (Berfin) with a 10-day schedule
Reports (new)
- `docs/reports/Sprint5_Lighthouse_Report.png` (+ `.report.html`, `.report.json`)
- `docs/reports/Sprint5_Docker_Running.png`
- `docs/reports/Sprint5_Logo_Navbar.png`
No runtime changes to
- Backend API contract
- ML pipelines
- Training / certificate flows
v1.5.6 — Sprint 4 Final: EU AI Act Checklist Fix
Sprint 4 Final
Fix
- EU AI Act compliance checklist now has exactly 8 items with 2 pre-checked on load, matching Sprint 4 spec (removed entry that caused 9-item/3-pre-checked state)
v1.5.5 — PDF Certificate Text Overflow Fix
Fixed
- PDF text overflow: EU AI Act checklist items (e.g. "Dataset licensing verified — 18/20 datasets...") now wrap properly across lines using ReportLab Paragraph objects instead of plain strings
Improved
- F1 Score added to subgroup fairness table (was on web UI but missing from PDF)
- Metric values color-coded (green/amber/red) in the Value column, matching the Status column
- Footer version updated to v1.5
- Page numbers added at bottom of every page
Verified
- 191/191 backend tests passing
v1.5.4 — Fix Learn More links + popover positioning
Fixed
- Learn More links now clickable: the Portal popover's click-outside handler was treating clicks inside the popover as "outside" (since the portal is not a DOM child of the wrapper). Added `popoverRef` to the check so popover interactions work correctly.
- Popover edge clamping: popovers near the left/right screen edges now clamp their position so they don't overflow the viewport.
v1.5.3 — Fix popover clipping + broken links
Fixed
- InfoTip popover clipping: popover now renders via React Portal at `document.body` level with `position: fixed` — no longer clipped by `overflow: hidden` containers (charts, column mapper, badge areas)
- 2 broken Learn More links: fixed Google ML Crash Course URLs for "Overfitting" and "Train/Test Split" that were returning 404