This project exposes ticket data from `csv/data.csv` through REST, MCP, and agent tools.
Use this guide when building prompts or automation for use-case demo idea generation.
- Source of truth: `csv/data.csv` (read-only in the current implementation).
- The backend normalizes many BMC headers into the typed `Ticket` model (`backend/tickets.py`).
- Do not assume every CSV column is mapped; use only the exposed normalized fields.
- Always treat missing values as unknown, not false.
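The normalization and missing-value rules above can be sketched as follows. This is a minimal illustration, not the actual `backend/tickets.py` implementation: the header names in `HEADER_MAP` are hypothetical examples of BMC-style columns, and the real mapping and `Ticket` model are richer.

```python
from typing import Optional

# Hypothetical subset of a BMC header mapping; the real one lives in backend/tickets.py.
HEADER_MAP = {
    "Incident ID": "ticket_id",
    "Priority": "priority",
    "Assigned Group": "assigned_group",
}

def normalize_value(raw: Optional[str]) -> Optional[str]:
    """Treat empty or blank CSV cells as unknown (None), never as False or ''."""
    if raw is None or raw.strip() == "":
        return None
    return raw.strip()

def normalize_row(row: dict) -> dict:
    """Map raw BMC headers to normalized field names, keeping unknowns as None."""
    return {field: normalize_value(row.get(header))
            for header, field in HEADER_MAP.items()}
```

For example, `normalize_row({"Incident ID": "INC001", "Priority": " High ", "Assigned Group": ""})` yields `assigned_group: None` rather than an empty string, so downstream filters can distinguish "unassigned" from "unknown" explicitly.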
- Call `csv_ticket_fields` to discover available fields.
- Call `csv_ticket_stats` to get high-level distributions (status, priority, city, group).
- Narrow data with `csv_list_tickets` filters (`status`, `assigned_group`, `has_assignee`).
- Use `csv_search_tickets` for text scenarios (problem patterns, products, cities, notes).
- Call `csv_get_ticket` only for deep dives on specific IDs.
Available tools: `csv_ticket_fields`, `csv_ticket_stats`, `csv_list_tickets`, `csv_search_tickets`, `csv_get_ticket`.
All tools are available via `POST /mcp` (`tools/list` and `tools/call`).
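A client call to the `/mcp` endpoint can be sketched as below. The base URL, the JSON-RPC 2.0 envelope, and the exact response shape are assumptions about this server, not documented guarantees; only the tool names and filter parameters come from this guide.

```python
import json
import urllib.request

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for POST /mcp (assumed envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def call_tool(base_url: str, name: str, arguments: dict) -> dict:
    """POST the request to the MCP endpoint and return the parsed JSON response."""
    payload = json.dumps(build_tool_call(name, arguments)).encode()
    req = urllib.request.Request(
        f"{base_url}/mcp",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (hypothetical host): narrow to unassigned tickets before deep dives.
# call_tool("http://localhost:8000", "csv_list_tickets", {"has_assignee": False})
```

Start with `csv_ticket_fields` and `csv_ticket_stats` before issuing filtered or per-ticket calls, per the usage order above.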
When generating project ideas:
- Start from real evidence in tickets (priority, volume, repeated keywords, bottlenecks).
- Explicitly reference ticket IDs used as evidence.
- Prefer one menu point per project idea.
- Return both:
- Human summary
- Structured rows (JSON) for table rendering
Suggested output schema:

    {
      "rows": [
        {
          "menu_point": "Smart Routing",
          "project_name": "Auto Assignment Optimizer",
          "summary": "Reduces unassigned high-priority incidents.",
          "agent_prompt": "Analyze unassigned critical/high tickets and propose routing rules.",
          "ticket_ids": "id1,id2,id3",
          "csv_evidence": "24 high-priority tickets without assignee in top 2 groups."
        }
      ]
    }

- Never invent tickets, IDs, or field values.
- If the dataset is insufficient, say so clearly.
- Keep responses deterministic and auditable: include filtering logic used.
- Prefer concise tables over long prose when showing candidate projects.
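The guardrails above can be enforced mechanically. A minimal validation sketch against the suggested output schema, assuming the key names shown in the example payload (the check names and error messages here are illustrative, not part of the project):

```python
# Required keys, taken from the suggested output schema above.
REQUIRED_KEYS = {
    "menu_point", "project_name", "summary",
    "agent_prompt", "ticket_ids", "csv_evidence",
}

def validate_rows(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload matches the schema."""
    rows = payload.get("rows")
    if not isinstance(rows, list) or not rows:
        return ["payload must contain a non-empty 'rows' list"]
    problems = []
    for i, row in enumerate(rows):
        missing = REQUIRED_KEYS - set(row)
        if missing:
            problems.append(f"row {i} missing keys: {sorted(missing)}")
        # Every idea must cite real ticket IDs as evidence.
        if not row.get("ticket_ids"):
            problems.append(f"row {i} cites no ticket IDs")
    return problems
```

Running this check before rendering keeps responses auditable: any row without cited ticket IDs is rejected rather than displayed.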