
Commit 7ac9bb4

Authored by abossard, SubSonic731, and Copilot
Kba draft review fixes (#20)

* kba-draft implemented

* Removed test files, cleaned up the structure, updated README.md, created the learning_mechanism.md plan, and applied design fixes

* feat: add search questions generation with database migration and UI

  Database & Backend:
  - Add search_questions column migration in operations.py (ALTER TABLE for existing databases)
  - Add /api/kba/drafts/{id}/replace endpoint in app.py
  - Fix backward compatibility in kba_service.py (_table_to_draft, _draft_to_table)
  - Add search questions generation to the replace_draft workflow
  - Fix NULL constraint errors by ensuring empty strings for required fields
  - Update related_tickets validation: accept INC + 9-12 digits (was fixed at 12)

  Frontend:
  - Add Text component import to KBADrafterPage.jsx (fix TypeError)
  - Add full-screen blur overlay with centered spinner during KBA generation
  - Show the overlay for both new draft creation and replacement operations
  - Update styles: loadingOverlay with backdrop-filter blur effect

  Documentation:
  - Update kba_prompts.py: clarify related_tickets format with examples
  - Update GENERAL.md: correct related_tickets format specification

  Fixes #1 - KBA drafts not loading (missing DB column)
  Fixes #2 - Replace endpoint not found (405 error)
  Fixes #3 - Ticket ID validation too strict

* View tickets in a popup

* feat(kba-drafter): add ability to reset reviewed KBAs back to draft

  - Add "Zurück zu Entwurf" button for reviewed-status KBAs
  - Add handleUnreview() handler to update status from "reviewed" to "draft"
  - Import ArrowUndo24Regular icon for the unreview action
  - Allow users to continue editing KBAs after review without deletion

  This enables editing of reviewed KBAs that need changes before publishing.

* feat(kba-drafter): add ticket viewer, unreview, status filter, and UI improvements

  - Add ticket viewer dialog to display original incident details
    * New "Ticket" button in the KBA header with DocumentSearch icon
    * Modal dialog showing incident data (ID, summary, status, priority, assignee, notes, resolution)
    * Backend endpoint /api/csv-tickets/by-incident/<incident_id> for incident ID lookup
    * Frontend API function getCSVTicketByIncident()
  - Add unreview functionality for reviewed KBAs
    * "Zurück zu Entwurf" button with ArrowUndo icon
    * Allows resetting reviewed KBAs back to draft status for further editing
  - Redesign the KBA overview list
    * Replace the corner delete button with an overflow menu (⋮)
    * Horizontal layout: content left, status badge right-aligned, menu button
    * Menu component with delete option
  - Add status filter dropdown to the KBA overview
    * Filter options: All, draft, reviewed, published
    * Dropdown in the card header for easy filtering
  - Align the EditableList "Add" button width with the input fields
    * Use invisible placeholder buttons for exact width matching
    * Ensures consistent layout regardless of the allowReorder setting

  Files modified:
  - frontend/src/features/kba-drafter/KBADrafterPage.jsx
  - frontend/src/features/kba-drafter/components/EditableList.jsx
  - frontend/src/services/api.js
  - backend/app.py

* fix(kba): fix draft deletion bug and add collapsible AutoGenSettings

  - Fix delete draft error: use response.items instead of response.drafts
  - Make the AutoGenSettings card collapsible with a chevron icon
    - Starts collapsed to reduce visual dominance
    - Smooth slide-down animation when expanded
    - Status badge visible in the collapsed header
    - Clickable header with keyboard support (Enter key)

* fix(kba): auto-scroll to top when opening a draft

  When clicking a draft from the list after scrolling down, the page now automatically scrolls to the top with a smooth animation, so users always start at the beginning of the draft content.

* feat: replace browser confirms with custom modal dialogs for unsaved changes

  Replace native window.confirm() with the ConfirmDialog component for better UX consistency and a modern appearance. Adds a centered warning modal when the user attempts to discard unsaved changes (close draft, switch to preview, or load a different draft).

  Changes:
  - Add unsavedChangesDialogOpen and pendingAction states
  - Update toggleEditMode, loadDraft, and handleClose to trigger the modal
  - Add handleDiscardChanges and handleCancelDiscard handlers
  - Add ConfirmDialog with warning intent at the end of the component

* fix: address code review issues and add KBA drafter e2e tests

  Fixes:
  - Fix CSV folder case mismatch (CSV -> csv) in app.py and operations.py
  - Remove the duplicate get_ticket_by_incident_id method in csv_data.py
  - Replace inefficient len(session.exec().all()) with SQL COUNT(*) in kba_service.py
  - Replace hardcoded placeholder credentials with env var lookups in kba_service.py
  - Fix the scheduler swallowing exceptions (remove bare raise, return None)
  - Add a settings reload at the start of each scheduler run to fix a race condition
  - Add generation_warnings field to surface search-questions failures to users
  - Add schema migration for the generation_warnings column

  Tests:
  - Add 19 Playwright e2e tests for the KBA Drafter feature covering: page load, navigation, LLM health status, draft generation, draft display, draft list, editing, review workflow, duplicate handling, and backend API integration

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add LiteLLM fallback, Playwright tests, and remove the OpenAI hard dependency

  - LiteLLM is now the default LLM backend (no .env or API key needed)
  - Multistage model fallback chain: claude-sonnet-4 → gpt-4o → gpt-4o-mini
  - The OpenAI SDK is still used when OPENAI_API_KEY is explicitly set
  - agents.py and the workbench service use ChatLiteLLM when no OpenAI key is set
  - Added csv_ticket_stats and csv_sla_breach_tickets to the agent tools
  - Added KBA Drafter to the Playwright nav tests and menu screenshots
  - Added e2e tests: publish, delete, status filter, ticket viewer
  - 32 unit tests + 5 live integration tests for the LLM service
  - Updated .env.example with LiteLLM-first documentation

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: SubSonic731 <alessandro.roschi@bit.admin.ch>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
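The commit message relaxes related_tickets validation to "INC" plus 9 to 12 digits. A minimal sketch of such a check (the helper name and regex are illustrative assumptions, not the repo's actual validator):

```python
import re

# Hedged sketch: accepts "INC" followed by 9-12 digits, per the commit note.
# The pattern anchors both ends so no extra characters slip through.
TICKET_ID_RE = re.compile(r"INC\d{9,12}")


def is_valid_ticket_id(ticket_id: str) -> bool:
    """Return True if the ID matches the relaxed INC + 9-12 digit format."""
    return TICKET_ID_RE.fullmatch(ticket_id) is not None
```

`fullmatch` (rather than `match`) ensures a trailing 13th digit or suffix is rejected, matching the "9-12 digits" bound.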
1 parent eb824e6 · commit 7ac9bb4

72 files changed

Lines changed: 18705 additions & 136 deletions


.env.example

Lines changed: 30 additions & 4 deletions
```diff
@@ -1,10 +1,36 @@
-# OpenAI Configuration
+# LiteLLM Configuration (Default LLM backend)
+# Works out of the box with GitHub Copilot — no API keys needed.
+# Supports 100+ providers: GitHub Copilot, Ollama, Anthropic, etc.
+# Docs: https://docs.litellm.ai/docs/providers
+
+# Primary model (default: github_copilot/gpt-4o)
+# LITELLM_MODEL=github_copilot/gpt-4o
+
+# Fallback chain (comma-separated, tried in order if primary fails)
+# LITELLM_FALLBACK_MODELS=github_copilot/claude-sonnet-4,github_copilot/gpt-4o,github_copilot/gpt-4o-mini
+
+# OpenAI Configuration (Optional — overrides LiteLLM when set)
 # Get your API key from: https://platform.openai.com/api-keys
+# If set, uses OpenAI SDK directly with beta.chat.completions.parse()
 
-OPENAI_API_KEY=your-openai-api-key-here
-OPENAI_MODEL=gpt-4o-mini
-# Optional override
+# OPENAI_API_KEY=your-openai-api-key-here
+# OPENAI_MODEL=gpt-4o-mini
 # OPENAI_BASE_URL=https://api.openai.com/v1
 
+# Knowledge Base Publishing Configuration (for KBA Drafter)
+# FileSystem Adapter (MVP - writes markdown files)
+KB_FILE_BASE_PATH=./kb_published
+KB_FILE_CREATE_CATEGORIES=true
+
+# SharePoint Adapter (future - not yet implemented)
+# KB_SHAREPOINT_SITE_URL=https://company.sharepoint.com/sites/KB
+# KB_SHAREPOINT_CLIENT_ID=your-client-id
+# KB_SHAREPOINT_CLIENT_SECRET=your-client-secret
+
+# ITSM/ServiceNow Adapter (future - not yet implemented)
+# KB_ITSM_INSTANCE_URL=https://company.service-now.com
+# KB_ITSM_USERNAME=your-username
+# KB_ITSM_PASSWORD=your-password
+
 # Optional: Frontend build path override
 # FRONTEND_DIST=/path/to/custom/frontend/dist
```
CSV/data.csv

Lines changed: 207 additions & 0 deletions
Large diffs are not rendered by default.

README.md

Lines changed: 28 additions & 84 deletions
````diff
@@ -16,9 +16,9 @@
 ## Tech stack at a glance
 - Backend: Quart, Pydantic 2, MCP JSON-RPC, Async SSE (`backend/app.py`)
 - Business logic: `TaskService` + models in `backend/tasks.py`
-- LLM Integration: Ollama with local models (`backend/ollama_service.py`)
+- LLM Integration: OpenAI (`backend/llm_service.py`)
 - Frontend: React 18, Vite, FluentUI components, feature-first structure under `frontend/src/features`
-- Tests: Playwright E2E (`tests/e2e/app.spec.js`, `tests/e2e/ollama.spec.js`)
+- Tests: Playwright E2E (`tests/e2e/app.spec.js`)
 
 ## Documentation
 
@@ -33,18 +33,27 @@ All deep-dive guides now live under `docs/` for easier discovery:
 - [Troubleshooting](docs/TROUBLESHOOTING.md) – common issues and fixes for setup, dev, and tests
 - [CSV AI Guidance](docs/CSV_AI_GUIDANCE.md) – how AI agents should query and reason over CSV ticket data
 
+### KBA Drafter Documentation
+
+> **NEW:** LLM-powered Knowledge Base Article generator with OpenAI integration
+
+- **[Feature Overview](docs/KBA_DRAFTER_OVERVIEW.md)** – Architecture, components, API endpoints, testing
+- **[Quick Start](docs/KBA_DRAFTER_QUICKSTART.md)** – Fastest path to generating your first KBA
+- **[Technical Guide](docs/KBA_DRAFTER.md)** – Complete implementation details
+- **[Publishing Guide](docs/KBA_PUBLISHING.md)** – How to publish KBAs to different KB systems
+
 
 
 
 
 
 ## 5-minute quick start (TL;DR)
 1. Clone the repo: `git clone <your-fork-url> && cd python-quart-vite-react`
-2. Run the automated bootstrap: `./setup.sh` (creates the repo-level `.venv`, installs frontend deps, installs Playwright, checks for Ollama)
-3. (Optional) Install Ollama for LLM features: `curl -fsSL https://ollama.com/install.sh | sh && ollama pull llama3.2:1b`
+2. Run the automated bootstrap: `./setup.sh` (creates the repo-level `.venv`, installs frontend deps, installs Playwright)
+3. Configure OpenAI API key in `.env` for LLM features (see KBA Drafter documentation)
 4. Start all servers: `./start-dev.sh` *(or)* use the VS Code "Full Stack: Backend + Frontend" launch config
 5. Open `http://localhost:3001/usecase_demo_1` and start documenting your usecase demo idea on that page
-6. (Optional) Test Ollama integration: `curl -X POST http://localhost:5001/api/ollama/chat -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Say hello"}]}'`
+6. Test KBA health endpoint: `curl http://localhost:5001/api/kba/health`
 7. (Optional) Run the Playwright suite from the repo root: `npm run test:e2e`
 
 ## Detailed setup (first-time users)
@@ -66,34 +75,28 @@ npx playwright install chromium
 ```
 > Debian/Ubuntu users may also need `npx playwright install-deps` for browser libs.
 
-### 4. Ollama (optional - for LLM features)
-```bash
-# Install Ollama
-curl -fsSL https://ollama.com/install.sh | sh
+### 4. OpenAI API Key (for KBA Drafter)
 
-# Pull the lightweight model
-ollama pull llama3.2:1b
+Add your OpenAI API key to `.env`:
 
-# Verify installation
-ollama list
+```bash
+OPENAI_API_KEY=sk-proj-your-key-here
+OPENAI_MODEL=gpt-4o-mini
 ```
 
-The app works without Ollama, but LLM endpoints (`/api/ollama/*`) will return 503 errors. For production use, consider:
-- **llama3.2:1b** (~1.3GB) — Fast, good for testing and simple tasks
-- **llama3.2:3b** (~2GB) — Better quality, still fast
-- **qwen2.5:3b** (~2GB) — Alternative with strong performance
+Get your API key from [platform.openai.com/api-keys](https://platform.openai.com/api-keys).
 
-> The `setup.sh` script checks for Ollama and provides installation instructions if not found.
+> The KBA Drafter requires OpenAI configured in `.env` to function.
 
 ## Run & verify
 
 ### Option A — Manual terminals
 1. **Backend:** `source .venv/bin/activate && cd backend && python app.py` → serves REST + MCP on `http://localhost:5001`
 2. **Frontend:** `cd frontend && npm run dev` → launches Vite dev server on `http://localhost:3001`
-3. **Ollama (optional):** `ollama serve` → runs LLM server on `http://localhost:11434`
+3. **OpenAI (for KBA Drafter):** Configure `.env` with `OPENAI_API_KEY` → enables LLM-powered KBA generation
 
 ### Option B — Helper script
-`./start-dev.sh` (verifies dependencies, starts backend + frontend + Ollama if available, stops all on Ctrl+C)
+`./start-dev.sh` (verifies dependencies, starts backend + frontend, stops all on Ctrl+C)
 
 ### Option C — VS Code
 Use the “Full Stack: Backend + Frontend” launch config to start backend + frontend with attached debuggers.
@@ -122,10 +125,7 @@ docker run --rm -p 5001:5001 quart-react-demo
 - **Usecase Demo tab (`/usecase_demo_1`):** Main demo page for documenting usecase demo ideas with editable prompts and background agent runs.
 - **Fields tab (`/fields`):** Lists mapped CSV schema fields available to UI/MCP/agent flows.
 - **Agent tab (`/agent`):** Chat-style agent interface for CSV ticket analysis.
-- **Ollama API (backend only):**
-  - `POST /api/ollama/chat` — Chat with local LLM (supports conversation history)
-  - `GET /api/ollama/models` — List available models
-  - Also exposed via MCP tools: `ollama_chat`, `list_ollama_models`
+- **KBA Drafter tab (`/kba-drafter`):** Generate Knowledge Base Articles from tickets using OpenAI
 
 ## Architecture cheat sheet
 - Shows how to keep REST and MCP JSON-RPC in a single Quart process
@@ -162,57 +162,6 @@ TaskService + Pydantic models (backend/tasks.py)
 | `npm run test:e2e` | Run all Playwright E2E tests |
 | `npm run test:e2e:ui` | Run tests in interactive UI mode |
 | `npm run test:e2e:report` | View test results report |
-| `npm run ollama:pull` | Download llama3.2:1b model |
-| `npm run ollama:start` | Start Ollama server manually |
-| `npm run ollama:status` | Check if Ollama is running |
-
-## Example Ollama API calls
-
-```bash
-# List available models
-curl http://localhost:5001/api/ollama/models
-
-# Simple chat
-curl -X POST http://localhost:5001/api/ollama/chat \
-  -H "Content-Type: application/json" \
-  -d '{
-    "messages": [
-      {"role": "user", "content": "What is Python?"}
-    ],
-    "model": "llama3.2:1b",
-    "temperature": 0.7
-  }'
-
-# Conversation with history
-curl -X POST http://localhost:5001/api/ollama/chat \
-  -H "Content-Type: application/json" \
-  -d '{
-    "messages": [
-      {"role": "user", "content": "My name is Alice"},
-      {"role": "assistant", "content": "Nice to meet you, Alice!"},
-      {"role": "user", "content": "What is my name?"}
-    ],
-    "model": "llama3.2:1b"
-  }'
-
-# Via MCP JSON-RPC
-curl -X POST http://localhost:5001/mcp \
-  -H "Content-Type: application/json" \
-  -d '{
-    "jsonrpc": "2.0",
-    "method": "tools/call",
-    "params": {
-      "name": "ollama_chat",
-      "arguments": {
-        "messages": [{"role": "user", "content": "Hello!"}]
-      }
-    },
-    "id": 1
-  }'
-```
-
-- Node.js 18+
-- `cd frontend && npm install`
 
 ## Testing
 
@@ -234,13 +183,11 @@ npm run test:e2e:report
 
 **Test suites:**
 - `tests/e2e/app.spec.js` — Dashboard, tasks, SSE streaming
-- `tests/e2e/ollama.spec.js` — LLM chat, model listing, validation (requires Ollama)
 
 Tests rely on:
 - Sample tasks being present
 - Stable `data-testid` attributes in the React components
 - SSE payload shape `{ time, date, timestamp }`
-- Ollama running on `localhost:11434` with `llama3.2:1b` model (for Ollama tests)
 
 1. **Backend:** `source .venv/bin/activate && cd backend && python app.py` → serves REST + MCP on `http://localhost:5001`
 2. **Frontend:** `cd frontend && npm run dev` → launches Vite dev server on `http://localhost:3001`
@@ -253,9 +200,7 @@ Tests rely on:
 | `source .venv/bin/activate` fails | Recreate the env: `rm -rf .venv && python3 -m venv .venv && pip install -r backend/requirements.txt` |
 | `npm install` errors | `npm cache clean --force && rm -rf node_modules package-lock.json && npm install` |
 | Playwright browser install fails | `sudo npx playwright install-deps && npx playwright install` |
-| Ollama not found | Install: `curl -fsSL https://ollama.com/install.sh \| sh` then `ollama pull llama3.2:1b` |
-| Ollama connection error | Start server: `ollama serve` or check if running: `curl http://localhost:11434/api/tags` |
-| LLM responses are slow | Try a smaller model (`llama3.2:1b` is fastest) or ensure GPU acceleration is enabled |
+| OpenAI API errors | Check `.env` has valid `OPENAI_API_KEY`, verify at `curl http://localhost:5001/api/kba/health` |
 
 See [docs/TROUBLESHOOTING.md](docs/TROUBLESHOOTING.md) for more detailed solutions.
 
@@ -264,9 +209,8 @@ See [docs/TROUBLESHOOTING.md](docs/TROUBLESHOOTING.md) for more detailed solutions.
 2. Extend the SSE stream to broadcast task stats (remember to update `connectToTimeStream` consumers)
 3. Persist data with SQLite or Postgres instead of `_tasks_db`
 4. Add more Playwright specs (filters, SSE error handling, MCP flows)
-5. **Build a chat UI:** Create `frontend/src/features/ollama/OllamaChat.jsx` with FluentUI components and connect to `/api/ollama/chat`
-6. **Smart task descriptions:** Use Ollama to auto-generate task descriptions from titles
-7. **Task summarization:** Summarize completed tasks using LLM
-8. **Multi-model comparison:** Let users select different Ollama models and compare responses
+5. **Smart task descriptions:** Use OpenAI to auto-generate task descriptions from titles
+6. **Task summarization:** Summarize completed tasks using LLM
+7. **KBA enhancements:** Add multi-language support, SharePoint integration
 
 Happy coding! 🎉
````

backend/=3.10.4

Lines changed: 9 additions & 0 deletions
```diff
@@ -0,0 +1,9 @@
+Collecting APScheduler
+Downloading apscheduler-3.11.2-py3-none-any.whl.metadata (6.4 kB)
+Collecting tzlocal>=3.0 (from APScheduler)
+Downloading tzlocal-5.3.1-py3-none-any.whl.metadata (7.6 kB)
+Downloading apscheduler-3.11.2-py3-none-any.whl (64 kB)
+Downloading tzlocal-5.3.1-py3-none-any.whl (18 kB)
+Installing collected packages: tzlocal, APScheduler
+
+Successfully installed APScheduler-3.11.2 tzlocal-5.3.1
```
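The APScheduler install log above backs the auto-generation scheduler, and the commit message notes that settings are now reloaded at the start of each run to fix a race condition. A minimal sketch of that reload-per-run pattern; the function names and settings shape are assumptions, not the repo's actual code:

```python
# In the repo, the job would be registered with APScheduler, e.g.:
#   from apscheduler.schedulers.background import BackgroundScheduler
#   BackgroundScheduler().add_job(scheduled_run, "interval", minutes=30)


def load_settings() -> dict:
    """Hypothetical stand-in for reading AutoGen settings from the database."""
    return {"enabled": True, "batch_size": 3}


def scheduled_run() -> list[str]:
    # Reload settings at the start of every run so edits made in the UI
    # between runs take effect instead of racing a stale snapshot.
    settings = load_settings()
    if not settings["enabled"]:
        return []
    # Placeholder for draft generation, capped by the configured batch size.
    return [f"draft-{i}" for i in range(settings["batch_size"])]
```

Fetching settings inside the job body, rather than capturing them at scheduler start-up, is the fix the commit describes.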

backend/agent_workbench/service.py

Lines changed: 15 additions & 12 deletions
```diff
@@ -75,13 +75,21 @@ def on_tool_error(self, error: BaseException, *, run_id: Any, **kwargs: Any) ->
 # ============================================================================
 
 def _build_llm(model: str, api_key: str, base_url: str = "") -> Any:
-    from langchain_openai import ChatOpenAI
-    return ChatOpenAI(
-        model=model,
-        api_key=api_key,
-        base_url=base_url or None,
-        temperature=0.0,
-    )
+    if api_key:
+        from langchain_openai import ChatOpenAI
+        return ChatOpenAI(
+            model=model,
+            api_key=api_key,
+            base_url=base_url or None,
+            temperature=0.0,
+        )
+    else:
+        from langchain_litellm import ChatLiteLLM
+        litellm_model = os.getenv("LITELLM_MODEL", "github_copilot/gpt-4o")
+        return ChatLiteLLM(
+            model=litellm_model,
+            temperature=0.0,
+        )
 
 
 def _build_react_agent(llm: Any, tools: list[Any], system_prompt: str) -> Any:
@@ -147,11 +155,6 @@ def __init__(
     @property
     def llm(self) -> Any:
         if self._llm is None:
-            if not self._api_key:
-                raise ValueError(
-                    "OPENAI_API_KEY is required to run agents. "
-                    "Set it via environment variable or pass openai_api_key."
-                )
             self._llm = _build_llm(self._model, self._api_key, self._base_url)
         return self._llm
```