[Feature]: Admin metrics dashboard improvements for library owners #2451

@kemister85

Description

Problem Description

Summary

Library owners who are actively optimizing documentation for Context7 benchmark scores need deeper metrics and programmatic access to close the feedback loop. The current admin Metrics tab provides high-level usage counts but lacks the granularity, time range controls, and API access needed to identify gaps, measure impact, and automate improvement workflows.

Current limitations

  1. Fixed time range — The usage chart appears limited to ~30 days with no controls to adjust the window. There is no way to view 7-day, 90-day, or custom date ranges.
  2. No query-level data — "Topic Queries" shows no data. There is no visibility into what users are asking, which queries return poor results, or which topics have the highest demand.
  3. No snippet-level data — No insight into which documentation snippets are retrieved most/least frequently, or which snippets are returned but fail to answer the query.
  4. MCP client breakdown is aggregate only — The bar chart shows daily totals per client, but there is no way to drill into request types, session depth, or trends over longer periods.
  5. No programmatic access — All metrics are locked behind the web UI. There is no API to pull this data into CI/CD pipelines, reporting tools, or automation scripts.
  6. No correlation with benchmark — Metrics and benchmark scores exist on separate tabs with no way to overlay them or see how documentation changes affect both usage and scores over time.
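To make point 5 concrete, here is the kind of programmatic access I have in mind. Everything below is hypothetical — the endpoint, path shape, and parameters do not exist today; this just sketches what a CI script could call:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameters -- nothing here is a real Context7
# API; it only illustrates the kind of programmatic access requested.
BASE = "https://context7.com/api/v1"

def metrics_url(library_id: str, metric: str, start: str, end: str,
                resolution: str = "daily") -> str:
    """Build a request URL for a (hypothetical) metrics endpoint."""
    query = urlencode({"start": start, "end": end, "resolution": resolution})
    return f"{BASE}/libraries/{library_id}/metrics/{metric}?{query}"

print(metrics_url("vercel/next.js", "queries", "2025-01-01", "2025-03-31"))
```

A stable URL scheme like this would let reporting tools and pipelines pull metrics without scraping the web UI.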

Proposed Solution

Requested features

1. Configurable time range and resolution

  • Date range picker (7d, 30d, 90d, YTD, custom range)
  • Resolution toggle (hourly, daily, weekly, monthly)
  • Comparison mode (e.g., this month vs last month) to measure the impact of documentation updates
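The comparison mode could be as simple as a percent-change readout between two windows. A rough sketch (the numbers and function are illustrative only, not an existing feature):

```python
# Illustrative only: how "comparison mode" could report the impact of a
# docs update -- percent change in request totals, current window vs. the
# previous one.
def percent_change(current: list[int], previous: list[int]) -> float:
    """Percent change in total requests between two equal-length windows."""
    prev_total = sum(previous)
    if prev_total == 0:
        return float("inf") if sum(current) else 0.0
    return 100.0 * (sum(current) - prev_total) / prev_total

# e.g. this month vs. last month of daily request counts
print(percent_change([120, 150, 130], [100, 100, 100]))  # 33.333...
```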

2. Query and topic analytics (highest priority)

  • Most frequently requested topics — ranked list of what users ask about this library, with request counts
  • Failed/low-confidence queries — queries where Context7 returned no result or low-relevance snippets. This is the most direct signal for documentation gaps and the fastest path to improving benchmark scores
  • Query trend over time — see if a topic is growing in demand
  • Query-to-snippet mapping — which doc snippets were served for each query, enabling targeted rewrites
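To show how little raw data this would take, here is a sketch of the topic ranking and gap detection over hypothetical log records. The record shape (`query`, `topic`, `top_score`) and the 0.5 threshold are assumptions, not a real Context7 export format:

```python
from collections import Counter

# Hypothetical query log records -- field names and scores are made up.
logs = [
    {"query": "app router middleware", "topic": "routing", "top_score": 0.91},
    {"query": "edge runtime limits", "topic": "runtime", "top_score": 0.22},
    {"query": "dynamic routes", "topic": "routing", "top_score": 0.88},
    {"query": "ISR revalidate", "topic": "caching", "top_score": 0.0},
]

# Most frequently requested topics, ranked by request count.
top_topics = Counter(r["topic"] for r in logs).most_common()

# Failed/low-confidence queries: the direct signal for documentation gaps.
LOW_CONFIDENCE = 0.5  # assumed threshold for illustration
gaps = [r["query"] for r in logs if r["top_score"] < LOW_CONFIDENCE]

print(top_topics)  # [('routing', 2), ('runtime', 1), ('caching', 1)]
print(gaps)        # ['edge runtime limits', 'ISR revalidate']
```

Even just an export of `(query, topic, top_score)` tuples would let owners build this themselves.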

3. Snippet retrieval analytics

  • Most retrieved snippets — top N snippets by retrieval count
  • Unretrieved snippets — indexed content that is never served (candidates for restructuring or better keyword coverage)
  • Snippet performance — if any relevance/feedback signal exists, surface it per snippet
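The "unretrieved snippets" report is conceptually just a set difference between what was indexed and what ever appears in retrieval logs. A minimal sketch with fabricated IDs:

```python
# Fabricated snippet IDs -- illustration only, not a real data model.
indexed = {"snippet-001", "snippet-002", "snippet-003", "snippet-004"}
retrieved_log = ["snippet-001", "snippet-003", "snippet-001"]

# Top-N by retrieval count would come from a tally like this.
retrieval_counts: dict[str, int] = {}
for sid in retrieved_log:
    retrieval_counts[sid] = retrieval_counts.get(sid, 0) + 1

# Indexed content that is never served: candidates for restructuring.
never_served = sorted(indexed - set(retrieved_log))

print(never_served)      # ['snippet-002', 'snippet-004']
print(retrieval_counts)  # {'snippet-001': 2, 'snippet-003': 1}
```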

4. Benchmark score integration

  • Score history over time — plot benchmark score on the same timeline as usage metrics
  • Per-question score tracking — show how each benchmark question's score changes after re-indexing
  • Re-index event markers — annotate the timeline when re-indexing occurred so score changes can be correlated with specific documentation updates
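With re-index markers on the timeline, the interesting number per event is the score change across it. A sketch of that correlation, with entirely fabricated dates and scores:

```python
from datetime import date

# Fabricated benchmark score history and re-index dates, for illustration.
score_history = {
    date(2025, 1, 1): 62.0,
    date(2025, 2, 1): 63.0,
    date(2025, 3, 1): 71.0,
}
reindex_events = [date(2025, 2, 15)]

def score_delta_around(event: date) -> float:
    """Score change from the last measurement before a re-index event to
    the first measurement after it."""
    before = max(d for d in score_history if d < event)
    after = min(d for d in score_history if d > event)
    return score_history[after] - score_history[before]

print(score_delta_around(reindex_events[0]))  # 8.0
```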

5. Expanded MCP client analytics

  • Wider time range — apply the same date range controls to the MCP client chart
  • Per-client trend lines — line chart showing adoption trajectory per client over time
  • Request type breakdown — which MCP tools (resolve-library-id, get-library-docs) are called per client
  • Session metrics — average requests per conversation/session, indicating depth of usage
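The session-depth metric is a small aggregation over per-request logs. A sketch assuming each request carries a client name and a session ID (field names and values are invented):

```python
from collections import defaultdict

# Hypothetical per-request log -- client/session/tool fields are assumed.
requests = [
    {"client": "Cursor", "session": "s1", "tool": "resolve-library-id"},
    {"client": "Cursor", "session": "s1", "tool": "get-library-docs"},
    {"client": "Cursor", "session": "s2", "tool": "get-library-docs"},
    {"client": "Claude Code", "session": "s3", "tool": "get-library-docs"},
]

# Count requests per (client, session) pair.
per_session: dict[tuple[str, str], int] = defaultdict(int)
for r in requests:
    per_session[(r["client"], r["session"])] += 1

# Average requests per session, per client: the "depth of usage" signal.
by_client: dict[str, list[int]] = defaultdict(list)
for (client, _), n in per_session.items():
    by_client[client].append(n)

avg_depth = {c: sum(ns) / len(ns) for c, ns in by_client.items()}
print(avg_depth)  # {'Cursor': 1.5, 'Claude Code': 1.0}
```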

Alternatives Considered

No response

Priority

Nice to have

Additional Context

Thanks for taking the time to review 😄

Metadata


Labels: enhancement (New feature or request)
