From f9b960db2f6c67bbd1444b8706a18c5dce0bed58 Mon Sep 17 00:00:00 2001 From: Henrik Andersson Date: Sat, 28 Feb 2026 08:53:01 +0100 Subject: [PATCH 1/2] Add product roadmap with feature pages and overview generator --- roadmap/README.md | 40 +++++++++++++++++ roadmap/features/automatic-reports.md | 24 ++++++++++ roadmap/features/band-pass-filtering.md | 23 ++++++++++ roadmap/features/baseline-comparisons.md | 24 ++++++++++ roadmap/features/custom-metrics.md | 23 ++++++++++ roadmap/features/ensemble-support.md | 24 ++++++++++ roadmap/features/forecast-lead-time.md | 24 ++++++++++ roadmap/features/network-models.md | 24 ++++++++++ roadmap/features/not-planned.md | 22 +++++++++ roadmap/features/outlier-detection.md | 24 ++++++++++ roadmap/features/rolling-skill.md | 24 ++++++++++ roadmap/features/skill-aggregation.md | 24 ++++++++++ roadmap/features/vertical-profiles.md | 24 ++++++++++ roadmap/features/web-app.md | 24 ++++++++++ roadmap/scripts/generate_overview.py | 57 ++++++++++++++++++++++++ roadmap/templates/overview.md.j2 | 21 +++++++++ 16 files changed, 426 insertions(+) create mode 100644 roadmap/README.md create mode 100644 roadmap/features/automatic-reports.md create mode 100644 roadmap/features/band-pass-filtering.md create mode 100644 roadmap/features/baseline-comparisons.md create mode 100644 roadmap/features/custom-metrics.md create mode 100644 roadmap/features/ensemble-support.md create mode 100644 roadmap/features/forecast-lead-time.md create mode 100644 roadmap/features/network-models.md create mode 100644 roadmap/features/not-planned.md create mode 100644 roadmap/features/outlier-detection.md create mode 100644 roadmap/features/rolling-skill.md create mode 100644 roadmap/features/skill-aggregation.md create mode 100644 roadmap/features/vertical-profiles.md create mode 100644 roadmap/features/web-app.md create mode 100644 roadmap/scripts/generate_overview.py create mode 100644 roadmap/templates/overview.md.j2 diff --git a/roadmap/README.md 
b/roadmap/README.md new file mode 100644 index 000000000..217057ad5 --- /dev/null +++ b/roadmap/README.md @@ -0,0 +1,40 @@ +# ModelSkill Product Roadmap + +This roadmap outlines the current and future direction of ModelSkill — a toolkit for evaluating simulation model quality by comparing results against observations. + +For questions or feature requests, please open a [GitHub Discussion](https://github.com/DHI/modelskill/discussions). + +--- + + +## Delivered + + +- **[Baseline Model Comparisons](features/baseline-comparisons.md)** — Compare any model against synthetic baselines (mean, persistence) to quantify the added value of a simulation. +- **[Custom Metrics](features/custom-metrics.md)** — Define domain-specific quality metrics that integrate fully into all skill tables and reports. +- **[Spatial and Temporal Skill Aggregation](features/skill-aggregation.md)** — Assess model performance by geographic region, time period, season, or any custom grouping to identify where and when a model performs well or poorly. + +## In Development + + +- **[Network Model Support](features/network-models.md)** — Compare MIKE 1D hydraulic network simulations against observations at network nodes, covering collection systems, water distribution, and river networks. + +## Planned + + +- **[Vertical Profile Assessment](features/vertical-profiles.md)** — Validate 3D models by comparing against depth-varying observations such as temperature and salinity profiles. + +## Under Consideration + + +- **[Automatic Report Generation](features/automatic-reports.md)** — Generate standardised model skill assessment reports in HTML, PDF, or PowerPoint from a single command. +- **[Band-Pass Filtering](features/band-pass-filtering.md)** — Separate model skill assessment into slow dynamics and fast dynamics to understand where a model captures trends versus peaks. 
+- **[Ensemble and Probabilistic Forecast Support](features/ensemble-support.md)** — Evaluate ensemble model runs using established probabilistic scoring methods alongside standard deterministic metrics. +- **[Forecast Lead-Time Analysis](features/forecast-lead-time.md)** — Assess how model skill degrades with forecast horizon to optimise forecast update frequency and communicate prediction reliability. +- **[Outlier Detection](features/outlier-detection.md)** — Automatically identify suspect observations using model-observation differences to improve data quality and skill assessment reliability. +- **[Rolling Skill Assessment](features/rolling-skill.md)** — Track how model skill evolves over time using moving windows to detect performance trends and seasonal patterns. +- **[Web Application](features/web-app.md)** — Browser-based interface for model skill assessment, accessible to users without Python experience. + +## Not Planned + +See [features considered out of scope](features/not-planned.md). diff --git a/roadmap/features/automatic-reports.md b/roadmap/features/automatic-reports.md new file mode 100644 index 000000000..fad5a2907 --- /dev/null +++ b/roadmap/features/automatic-reports.md @@ -0,0 +1,24 @@ +--- +title: "Automatic Report Generation" +status: "Under Consideration" +category: "Visualization & Reporting" +summary: "Generate standardised model skill assessment reports in HTML, PDF, or PowerPoint from a single command." +--- + +## Value Proposition + +Model validation is often documented in reports that follow a consistent structure: overview of data, skill tables, scatter plots, time series comparisons, and summary conclusions. Today, assembling these reports is a manual process that takes hours and introduces risk of copy-paste errors. 
+ +Automated report generation would reduce the time from completed analysis to delivered report from hours to minutes, improve consistency across projects, and free modellers to focus on interpretation rather than formatting. + +## What This Enables + +- Generate a complete validation report from a completed model-observation comparison with a single command +- Output to HTML (for web viewing), PDF (for formal deliverables), or PowerPoint (for presentations) +- Customise report templates to match organisational branding and structure +- Include all standard elements: data overview, skill tables, scatter plots, time series, spatial maps +- Support batch report generation for multi-scenario or multi-site assessments + +## Current Status + +Under consideration. Early thinking has explored integration with MS Word templates and web-based report generation. No implementation timeline has been set. diff --git a/roadmap/features/band-pass-filtering.md b/roadmap/features/band-pass-filtering.md new file mode 100644 index 000000000..48f94f38b --- /dev/null +++ b/roadmap/features/band-pass-filtering.md @@ -0,0 +1,23 @@ +--- +title: "Band-Pass Filtering" +status: "Under Consideration" +category: "Analysis & Metrics" +summary: "Separate model skill assessment into slow dynamics and fast dynamics to understand where a model captures trends versus peaks." +--- + +## Value Proposition + +A model might reproduce long-term trends well but miss short-term peaks, or vice versa. Band-pass filtering allows modellers to decompose the signal into frequency bands and assess skill separately for each — for example, tidal vs. surge components in water level, or diurnal vs. seasonal cycles in temperature. + +This provides deeper diagnostic insight into model behaviour, helping modellers understand not just how good a model is overall, but which physical processes it captures well and which need improvement. 
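+As an illustration of the idea (this is a standalone sketch, not ModelSkill functionality), a slow/fast split can be approximated with a moving-average low-pass filter; in practice a proper Butterworth or FFT band-pass would be used, and the `band_split` helper and 25-hour window below are illustrative assumptions:

```python
import numpy as np
import pandas as pd

def band_split(series: pd.Series, window: str = "25h") -> tuple[pd.Series, pd.Series]:
    """Split a series into a slow (low-pass) component and a fast residual."""
    slow = series.rolling(window, center=True, min_periods=1).mean()
    return slow, series - slow

def rmse(a: pd.Series, b: pd.Series) -> float:
    return float(np.sqrt(((a - b) ** 2).mean()))

# Synthetic hourly signal: a slow trend plus a fast 12 h oscillation.
t = pd.date_range("2026-01-01", periods=240, freq="h")
hours = np.arange(240)
obs = pd.Series(np.sin(2 * np.pi * hours / 240) + 0.3 * np.sin(2 * np.pi * hours / 12), index=t)
mod = pd.Series(np.sin(2 * np.pi * hours / 240), index=t)  # captures the trend, misses the fast band

obs_slow, obs_fast = band_split(obs)
mod_slow, mod_fast = band_split(mod)
print(f"slow-band RMSE: {rmse(obs_slow, mod_slow):.3f}")  # small: the trend is captured
print(f"fast-band RMSE: {rmse(obs_fast, mod_fast):.3f}")  # large: the peaks are missed
```

A single overall RMSE would blur these two numbers together; the split makes the diagnosis explicit.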
+ +## What This Enables + +- Decompose observed and modelled time series into frequency bands (e.g., high-frequency peaks, low-frequency trends) +- Assess model skill separately for each band to diagnose where performance is strong or weak +- Compare model versions to see whether improvements target the right frequency range +- Support tidal/surge separation, seasonal/event decomposition, and similar domain-specific analyses + +## Current Status + +Under consideration. Identified in community discussions as a valuable diagnostic capability. No implementation timeline has been set. diff --git a/roadmap/features/baseline-comparisons.md b/roadmap/features/baseline-comparisons.md new file mode 100644 index 000000000..08ab6d917 --- /dev/null +++ b/roadmap/features/baseline-comparisons.md @@ -0,0 +1,24 @@ +--- +title: "Baseline Model Comparisons" +status: "Delivered" +category: "Analysis & Metrics" +summary: "Compare any model against synthetic baselines (mean, persistence) to quantify the added value of a simulation." +--- + +## Value Proposition + +When presenting model results to stakeholders, a common question is: "How much better is this model than a simple guess?" Baseline comparisons answer that question by letting you evaluate your model against trivial reference models — such as using the observed mean or simply repeating the last known measurement. + +This turns model validation from "the RMSE is 0.3 m" (which means little on its own) into "our model reduces error by 40% compared to using the historical average" — a statement that clearly communicates the value of the modelling investment. 
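+The idea can be illustrated in plain NumPy (a conceptual sketch, not the ModelSkill API; the synthetic series and the `mse` helper are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(42)
obs = np.cumsum(rng.normal(0.0, 0.1, 200))     # synthetic observed water level (random walk)
model = obs + rng.normal(0.0, 0.05, 200)       # a skilful model: small random error

mean_baseline = np.full_like(obs, obs.mean())  # constant prediction: the observed average
persistence = np.roll(obs, 1)                  # repeat the previous observation forward
persistence[0] = obs[0]

def mse(pred: np.ndarray) -> float:
    return float(np.mean((pred - obs) ** 2))

# Skill score: 1 is perfect, 0 is no better than the baseline, negative is worse.
# Against the mean baseline this is the Nash-Sutcliffe Efficiency.
nse = 1 - mse(model) / mse(mean_baseline)
skill_vs_persistence = 1 - mse(model) / mse(persistence)
print(f"NSE: {nse:.2f}, skill vs persistence: {skill_vs_persistence:.2f}")
```

Note that persistence is usually a much harder baseline to beat than the mean, which is why reporting both gives a fairer picture of model value.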
+ +## What This Enables + +- Compare any simulation against a **mean baseline** (constant prediction equal to the observed average) +- Compare against a **persistence baseline** (repeat the last observation forward) +- Define **custom baselines** such as climatological averages or seasonal means +- Compute standard skill scores (e.g., Nash-Sutcliffe Efficiency) that inherently measure improvement over a reference +- Present model value in terms stakeholders can immediately understand + +## Current Status + +Delivered in ModelSkill v1.3. Users can create synthetic baseline references and include them alongside real model results in skill assessments. diff --git a/roadmap/features/custom-metrics.md b/roadmap/features/custom-metrics.md new file mode 100644 index 000000000..a1df8174b --- /dev/null +++ b/roadmap/features/custom-metrics.md @@ -0,0 +1,23 @@ +--- +title: "Custom Metrics" +status: "Delivered" +category: "Analysis & Metrics" +summary: "Define domain-specific quality metrics that integrate fully into all skill tables and reports." +--- + +## Value Proposition + +Different domains have different standards for what constitutes a good model. Coastal engineers may need scatter index, wastewater modellers may need peak timing error, and hydrologists may need specific flow duration curve metrics. Rather than being limited to a fixed set of statistics, ModelSkill allows organisations to define their own metrics and use them exactly like built-in ones. + +This means teams can align model validation with their organisation's quality standards, regulatory requirements, or client-specific acceptance criteria — without any workarounds. 
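+To make this concrete, a custom metric is typically just a small function of matched observed and modelled arrays. The sketch below shows a scatter index in that style; it is self-contained for illustration, and the exact mechanism for passing custom metrics into skill tables is described in the ModelSkill documentation:

```python
import numpy as np

def scatter_index(obs: np.ndarray, model: np.ndarray) -> float:
    """Scatter index: RMSE normalised by the mean observed value."""
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return float(rmse / abs(np.mean(obs)))

obs = np.array([1.0, 2.0, 3.0, 4.0])
model = np.array([1.1, 1.9, 3.2, 3.8])
print(f"SI = {scatter_index(obs, model):.3f}")  # SI = 0.063
```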
+ +## What This Enables + +- Define a new metric and make it available throughout ModelSkill with minimal setup +- Use custom metrics in all skill tables, aggregations, and reports alongside built-in metrics +- Share metric definitions across teams to ensure consistent evaluation standards +- Support both standard and directional (e.g., wind direction, wave direction) custom metrics + +## Current Status + +Delivered in ModelSkill v1.2. Custom metrics are defined once and automatically appear in all skill assessment results. diff --git a/roadmap/features/ensemble-support.md b/roadmap/features/ensemble-support.md new file mode 100644 index 000000000..457edce2a --- /dev/null +++ b/roadmap/features/ensemble-support.md @@ -0,0 +1,24 @@ +--- +title: "Ensemble and Probabilistic Forecast Support" +status: "Under Consideration" +category: "Analysis & Metrics" +summary: "Evaluate ensemble model runs using established probabilistic scoring methods alongside standard deterministic metrics." +--- + +## Value Proposition + +Ensemble forecasting — running multiple model realisations to capture uncertainty — is increasingly common in operational hydrology, meteorology, and coastal forecasting. Standard deterministic metrics (RMSE, bias) cannot properly evaluate probabilistic predictions. Proper scoring rules like the Continuous Ranked Probability Score (CRPS) are needed to assess whether the ensemble spread accurately reflects forecast uncertainty. + +Supporting ensemble evaluation would position ModelSkill as a complete validation toolkit for both deterministic and probabilistic forecasting workflows. + +## What This Enables + +- Load and compare ensemble model runs (multiple realisations) against observations +- Compute probabilistic metrics: CRPS, rank histograms, reliability diagrams, Brier score +- Assess ensemble spread calibration — is the ensemble over-confident or under-confident? 
+- Compare deterministic and ensemble forecasts using appropriate metrics for each +- Produce ensemble-specific visualisations (spaghetti plots, fan charts, probability exceedance curves) + +## Current Status + +Under consideration. This feature would require expanding ModelSkill's internal structure to handle multiple ensemble members and adding probabilistic scoring metrics. No implementation timeline has been set. diff --git a/roadmap/features/forecast-lead-time.md b/roadmap/features/forecast-lead-time.md new file mode 100644 index 000000000..c1dce0c35 --- /dev/null +++ b/roadmap/features/forecast-lead-time.md @@ -0,0 +1,24 @@ +--- +title: "Forecast Lead-Time Analysis" +status: "Under Consideration" +category: "Analysis & Metrics" +summary: "Assess how model skill degrades with forecast horizon to optimise forecast update frequency and communicate prediction reliability." +--- + +## Value Proposition + +Operational forecasting systems produce predictions at varying lead times — a 6-hour forecast is typically more accurate than a 48-hour forecast. Understanding exactly how skill degrades with lead time is essential for deciding how often to update forecasts and for communicating to end users how far ahead they can trust predictions. + +This capability would enable forecast managers to make data-driven decisions about operational scheduling and to set appropriate confidence levels for different forecast horizons. + +## What This Enables + +- Evaluate model skill as a function of forecast lead time (e.g., skill at T+6h, T+12h, T+24h, T+48h) +- Identify the forecast horizon beyond which predictions become unreliable +- Compare lead-time degradation across different model versions or configurations +- Support decisions about forecast update frequency and operational scheduling +- Produce lead-time skill curves for inclusion in forecast service documentation + +## Current Status + +Under consideration. 
The underlying architecture can accommodate lead-time analysis, and early design thinking has been done. No implementation timeline has been set. diff --git a/roadmap/features/network-models.md b/roadmap/features/network-models.md new file mode 100644 index 000000000..b4b586662 --- /dev/null +++ b/roadmap/features/network-models.md @@ -0,0 +1,24 @@ +--- +title: "Network Model Support" +status: "In Development" +category: "Domain Expansion" +summary: "Compare MIKE 1D hydraulic network simulations against observations at network nodes, covering collection systems, water distribution, and river networks." +--- + +## Value Proposition + +Hydraulic network models — for urban drainage, water distribution, and river systems — are a major part of DHI's modelling portfolio. Currently, validating these models against observed data requires custom scripts and manual workflows. Adding native support for MIKE 1D results brings the same structured, reproducible validation workflow to network modellers that already exists for coastal and marine models. + +This reduces the effort required to produce quality-assured model deliverables and ensures consistent validation standards across all model types within an organisation. + +## What This Enables + +- Load MIKE 1D simulation results (Res1D files) as model results +- Match network model outputs against point observations at specific nodes, reaches, or catchments +- Apply the full suite of ModelSkill metrics and visualisations to network model validation +- Compare multiple network model scenarios side by side +- Produce standardised skill assessments for urban drainage, water supply, and river modelling projects + +## Current Status + +In active development. Reading of MIKE 1D result files is already supported. Integration with ModelSkill's validation workflow is underway. 
diff --git a/roadmap/features/not-planned.md b/roadmap/features/not-planned.md new file mode 100644 index 000000000..f56630f39 --- /dev/null +++ b/roadmap/features/not-planned.md @@ -0,0 +1,22 @@ +--- +title: "Not Planned" +status: "Not Planned" +category: "Out of Scope" +summary: "Features that have been considered and determined to be outside ModelSkill's scope." +--- + +## Extreme Value Analysis + +Statistical analysis of extreme events (return periods, GEV fitting, peaks-over-threshold) is a specialised domain with dedicated statistical tools designed for this purpose. ModelSkill focuses on routine skill assessment of model outputs against observations, not on the statistical characterisation of extremes. Users needing extreme value analysis should apply those methods to ModelSkill's matched datasets using established statistical packages. + +## Deterministic Wave Analysis + +Wave crossing analysis, spectral decomposition, and other signal-processing techniques for wave data are best served by dedicated wave analysis tooling. ModelSkill evaluates how well a model reproduces observed values — it does not perform signal-level analysis of the data itself. + +## Project-Specific File Formats + +ModelSkill supports widely used data formats: NetCDF, CSV, tabular data, and MIKE file formats (dfs0, dfs2, dfsu). Supporting niche or project-specific file formats would serve only a small number of users while requiring ongoing maintenance. Users with non-standard formats should convert their data to a supported format before loading into ModelSkill. + +## Timezone-Aware Timestamps + +Model comparison workflows assume all data is in a consistent time reference (typically UTC). Adding timezone handling throughout the matching and comparison process would introduce significant complexity without clear benefit for skill assessment, since the prerequisite for meaningful comparison is that observation and model data are already on the same time reference. 
diff --git a/roadmap/features/outlier-detection.md b/roadmap/features/outlier-detection.md new file mode 100644 index 000000000..2d572a680 --- /dev/null +++ b/roadmap/features/outlier-detection.md @@ -0,0 +1,24 @@ +--- +title: "Outlier Detection" +status: "Under Consideration" +category: "Analysis & Metrics" +summary: "Automatically identify suspect observations using model-observation differences to improve data quality and skill assessment reliability." +--- + +## Value Proposition + +Observation data often contains errors — sensor malfunctions, biofouling, transmission glitches — that can distort skill assessments. Currently, identifying and removing these outliers is a manual process. Using the model-observation difference as a diagnostic signal can flag suspect data points systematically, improving both the reliability of skill metrics and the quality of observation datasets. + +This benefits both model validation (more accurate skill scores) and data management (systematic quality control of observation networks). + +## What This Enables + +- Flag observations that deviate significantly from model predictions as potential outliers +- Review flagged points before deciding whether to include or exclude them from skill assessment +- Apply configurable thresholds based on standard deviations, percentiles, or domain-specific criteria +- Improve consistency and defensibility of skill assessments by documenting data quality decisions +- Feed back data quality insights to observation network operators + +## Current Status + +Under consideration. Identified in community discussions as a practical need for working with real-world observation data. No implementation timeline has been set. 
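+A minimal sketch of the residual-based flagging idea (not existing ModelSkill functionality; the `flag_outliers` helper and its MAD-based threshold are illustrative assumptions):

```python
import numpy as np

def flag_outliers(obs: np.ndarray, model: np.ndarray, n_sigma: float = 3.0) -> np.ndarray:
    """Flag observations whose model-observation residual is anomalously large.

    A robust spread estimate (median absolute deviation) is used so that the
    threshold itself is not inflated by the outliers it is meant to catch.
    """
    residual = obs - model
    deviation = np.abs(residual - np.median(residual))
    robust_std = 1.4826 * np.median(deviation)  # MAD -> std for Gaussian residuals
    return deviation > n_sigma * robust_std

obs = np.array([1.00, 1.10, 0.90, 1.05, 9.00, 1.00])  # one obvious sensor spike
model = np.ones(6)
print(flag_outliers(obs, model))  # only the 9.00 reading is flagged
```

In a real workflow the flagged points would be presented for review rather than dropped automatically, in line with the "review before excluding" principle above.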
diff --git a/roadmap/features/rolling-skill.md b/roadmap/features/rolling-skill.md new file mode 100644 index 000000000..f325aa2aa --- /dev/null +++ b/roadmap/features/rolling-skill.md @@ -0,0 +1,24 @@ +--- +title: "Rolling Skill Assessment" +status: "Under Consideration" +category: "Analysis & Metrics" +summary: "Track how model skill evolves over time using moving windows to detect performance trends and seasonal patterns." +--- + +## Value Proposition + +A single skill score for an entire simulation period can mask important temporal patterns. Model performance may drift over time due to changing boundary conditions, seasonal effects, or data assimilation quality. Rolling skill assessment reveals these trends, enabling operational teams to detect when a model starts underperforming and take corrective action. + +This is particularly valuable for operational forecasting systems where continuous performance monitoring is essential for maintaining service quality. + +## What This Enables + +- Compute skill metrics over a moving time window to produce a skill time series +- Detect performance degradation or improvement trends over the simulation period +- Identify seasonal patterns in model skill (e.g., consistently lower skill in winter) +- Monitor operational model performance in near-real-time dashboards +- Compare rolling skill between model versions to verify that updates improve performance consistently, not just on average + +## Current Status + +Under consideration. Identified in community discussions as a valuable operational monitoring capability. No implementation timeline has been set. 
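+The moving-window idea can be sketched in a few lines of pandas (synthetic data; `rolling_rmse` is an illustrative helper, not ModelSkill API):

```python
import numpy as np
import pandas as pd

t = pd.date_range("2026-01-01", periods=365, freq="D")
rng = np.random.default_rng(7)
obs = pd.Series(np.sin(2 * np.pi * np.arange(365) / 365), index=t)
# Model error grows through the year, simulating slowly drifting performance.
model = obs + rng.normal(0.0, np.linspace(0.05, 0.4, 365))

def rolling_rmse(obs: pd.Series, model: pd.Series, window: str = "30D") -> pd.Series:
    return ((model - obs) ** 2).rolling(window).mean().pow(0.5)

skill = rolling_rmse(obs, model)
print(f"RMSE, first 30 days: {skill.iloc[29]:.2f}")
print(f"RMSE, last 30 days:  {skill.iloc[-1]:.2f}")
```

The resulting skill time series is exactly what a monitoring dashboard would plot: a single period-average score would hide the drift entirely.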
diff --git a/roadmap/features/skill-aggregation.md b/roadmap/features/skill-aggregation.md new file mode 100644 index 000000000..5f1d0b195 --- /dev/null +++ b/roadmap/features/skill-aggregation.md @@ -0,0 +1,24 @@ +--- +title: "Spatial and Temporal Skill Aggregation" +status: "Delivered" +category: "Analysis & Metrics" +summary: "Assess model performance by geographic region, time period, season, or any custom grouping to identify where and when a model performs well or poorly." +--- + +## Value Proposition + +A single overall skill score can hide important patterns. A model might perform well on average but fail during storm events, in shallow areas, or during specific seasons. Skill aggregation lets managers and modellers break down performance by any relevant dimension — time, space, or custom categories — to understand exactly where model investments are paying off and where improvements are needed. + +This supports informed decision-making about model deployment: for example, identifying that a model is reliable in summer but not winter directly informs operational planning. + +## What This Enables + +- **Temporal aggregation**: Assess skill by month, season, year, or custom time periods +- **Spatial aggregation**: Compute skill on a regular grid to produce spatial maps of model quality +- **Category-based grouping**: Group by custom attributes such as station type, basin, depth zone, or operational regime +- **Multi-level analysis**: Combine groupings (e.g., skill by station and season) for detailed performance breakdowns +- Produce skill maps and heatmaps that communicate spatial performance patterns at a glance + +## Current Status + +Delivered. Users can group skill results by time period, category, or geographic grid when running assessments. 
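+Conceptually, these groupings behave like a pandas group-by over matched model-observation data. The standalone sketch below illustrates the mechanics with synthetic data and invented column names; the actual ModelSkill API is documented separately:

```python
import numpy as np
import pandas as pd

t = pd.date_range("2026-01-01", periods=365, freq="D")
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "obs": np.sin(2 * np.pi * np.arange(365) / 365),
    "station": np.where(np.arange(365) % 2 == 0, "north", "south"),
}, index=t)
df["model"] = df["obs"] + rng.normal(0.0, 0.1, 365)
df["sq_err"] = (df["model"] - df["obs"]) ** 2

# RMSE by calendar month (temporal), by station (categorical), and combined.
by_month = np.sqrt(df.groupby(df.index.month)["sq_err"].mean())
by_station = np.sqrt(df.groupby("station")["sq_err"].mean())
by_both = np.sqrt(df.groupby([df.index.quarter, "station"])["sq_err"].mean())
print(by_station)
```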
diff --git a/roadmap/features/vertical-profiles.md b/roadmap/features/vertical-profiles.md new file mode 100644 index 000000000..dd4c0391c --- /dev/null +++ b/roadmap/features/vertical-profiles.md @@ -0,0 +1,24 @@ +--- +title: "Vertical Profile Assessment" +status: "Planned" +category: "Domain Expansion" +summary: "Validate 3D models by comparing against depth-varying observations such as temperature and salinity profiles." +--- + +## Value Proposition + +Three-dimensional hydrodynamic models produce results that vary with depth — temperature stratification, salinity gradients, and current profiles are critical outputs for environmental assessments, aquaculture siting, and water quality management. Today, validating these depth-varying outputs requires significant manual effort to align model layers with observation depths. + +Supporting vertical profiles natively will allow modellers to validate 3D model performance with the same ease as surface-level comparisons, providing confidence in the full water column representation. + +## What This Enables + +- Compare model output at multiple depths against profiling instruments (CTD casts, ADCP profiles) +- Assess skill as a function of depth to identify where in the water column the model performs best +- Visualise observed vs modelled profiles side by side +- Aggregate skill across depth bins for summary reporting +- Support different vertical grid types used in 3D models + +## Current Status + +Planned for a future release. The underlying architecture supports this type of data. Design work is needed to define how profile observations are loaded and compared. 
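+A sketch of the depth-binned skill idea, using `pd.cut` to assign observations to water-column layers; the layer names and values are invented for illustration, and this is not the proposed ModelSkill API:

```python
import numpy as np
import pandas as pd

# One matched CTD cast: observed vs modelled temperature at irregular depths (m).
cast = pd.DataFrame({
    "depth": [0.5, 2.0, 5.0, 8.0, 12.0, 18.0, 25.0, 35.0],
    "obs":   [18.1, 18.0, 17.6, 15.2, 12.4, 10.8, 10.1, 9.9],
    "model": [18.0, 17.8, 17.9, 16.0, 13.5, 11.0, 10.2, 10.0],
})

layer = pd.cut(cast["depth"], bins=[0, 5, 15, 40], labels=["surface", "pycnocline", "deep"])
cast["sq_err"] = (cast["model"] - cast["obs"]) ** 2
rmse_by_layer = np.sqrt(cast.groupby(layer, observed=True)["sq_err"].mean())
print(rmse_by_layer)  # largest error where stratification changes fastest
```

Binning by depth is what turns a profile comparison into a summary table; the design work mentioned above concerns how such bins and profile observations are defined natively.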
diff --git a/roadmap/features/web-app.md b/roadmap/features/web-app.md new file mode 100644 index 000000000..a379d8b76 --- /dev/null +++ b/roadmap/features/web-app.md @@ -0,0 +1,24 @@ +--- +title: "Web Application" +status: "Under Consideration" +category: "Platform" +summary: "Browser-based interface for model skill assessment, accessible to users without Python experience." +--- + +## Value Proposition + +ModelSkill's current workflow requires Python programming skills, which limits its audience to technical modellers. A web-based interface would make model validation accessible to project managers, clients, and domain experts who need to review model quality but do not write code. + +This would broaden the user base, enable self-service model evaluation, and reduce the bottleneck of having skilled Python users produce every validation analysis. + +## What This Enables + +- Upload observation and model result files through a browser interface +- Run matching and skill assessment without writing any code +- Explore interactive scatter plots, time series, and skill tables in the browser +- Share validation results via URL with colleagues and clients +- Serve as a demonstration and onboarding tool for new ModelSkill users + +## Current Status + +Under consideration. No technical approach has been selected. Both rapid prototyping options and production-grade solutions are being evaluated. No implementation timeline has been set. 
diff --git a/roadmap/scripts/generate_overview.py b/roadmap/scripts/generate_overview.py new file mode 100644 index 000000000..2faf44c19 --- /dev/null +++ b/roadmap/scripts/generate_overview.py @@ -0,0 +1,57 @@ +# /// script +# requires-python = ">=3.10" +# dependencies = ["jinja2", "pyyaml"] +# /// +"""Generate roadmap/README.md from feature page YAML frontmatter.""" + +from pathlib import Path + +import yaml +from jinja2 import Environment, FileSystemLoader + +ROADMAP_DIR = Path(__file__).resolve().parent.parent +FEATURES_DIR = ROADMAP_DIR / "features" +TEMPLATE_DIR = ROADMAP_DIR / "templates" + +STATUSES = [ + "Delivered", + "In Development", + "Planned", + "Under Consideration", + "Not Planned", +] + + +def parse_frontmatter(path: Path) -> dict: + """Extract YAML frontmatter from a markdown file.""" + text = path.read_text(encoding="utf-8") + if not text.startswith("---"): + raise ValueError(f"No YAML frontmatter found in {path}") + _, frontmatter, _ = text.split("---", 2) + meta = yaml.safe_load(frontmatter) + meta["filename"] = path.name + return meta + + +def main(): + features = [] + for path in sorted(FEATURES_DIR.glob("*.md")): + features.append(parse_frontmatter(path)) + + features_by_status: dict[str, list[dict]] = {s: [] for s in STATUSES} + for f in features: + status = f["status"] + if status in features_by_status: + features_by_status[status].append(f) + + env = Environment(loader=FileSystemLoader(TEMPLATE_DIR), keep_trailing_newline=True) + template = env.get_template("overview.md.j2") + readme = template.render(statuses=STATUSES, features_by_status=features_by_status) + + output_path = ROADMAP_DIR / "README.md" + output_path.write_text(readme, encoding="utf-8") + print(f"Generated {output_path}") + + +if __name__ == "__main__": + main() diff --git a/roadmap/templates/overview.md.j2 b/roadmap/templates/overview.md.j2 new file mode 100644 index 000000000..d600789f3 --- /dev/null +++ b/roadmap/templates/overview.md.j2 @@ -0,0 +1,21 @@ +# ModelSkill 
Product Roadmap + +This roadmap outlines the current and future direction of ModelSkill — a toolkit for evaluating simulation model quality by comparing results against observations. + +For questions or feature requests, please open a [GitHub Discussion](https://github.com/DHI/modelskill/discussions). + +--- +{% for status in statuses %} +{%- if features_by_status[status] %} + +## {{ status }} + +{% if status == "Not Planned" -%} +See [features considered out of scope](features/{{ features_by_status[status][0].filename }}). +{%- else -%} +{% for f in features_by_status[status] %} +- **[{{ f.title }}](features/{{ f.filename }})** — {{ f.summary }} +{%- endfor %} +{%- endif %} +{%- endif %} +{%- endfor %} From c397ccfb44410166da4bee2edfc787f3876c40f4 Mon Sep 17 00:00:00 2001 From: Henrik Andersson Date: Sat, 28 Feb 2026 09:17:09 +0100 Subject: [PATCH 2/2] Shorten roadmap intro Shortened the description of the ModelSkill roadmap. --- roadmap/templates/overview.md.j2 | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/roadmap/templates/overview.md.j2 b/roadmap/templates/overview.md.j2 index d600789f3..158c22b12 100644 --- a/roadmap/templates/overview.md.j2 +++ b/roadmap/templates/overview.md.j2 @@ -1,8 +1,6 @@ -# ModelSkill Product Roadmap +# ModelSkill Roadmap -This roadmap outlines the current and future direction of ModelSkill — a toolkit for evaluating simulation model quality by comparing results against observations. - -For questions or feature requests, please open a [GitHub Discussion](https://github.com/DHI/modelskill/discussions). +This roadmap outlines the current and future direction of ModelSkill. --- {% for status in statuses %}