- Python 3.11+ installed
- Node.js 18+ installed
- OpenAI API Key (platform.openai.com/api-keys)
```bash
# Navigate to backend directory
cd backend

# Copy environment template
cp ../.env.example ../.env

# Install Python dependencies
pip install -r requirements.txt
```

Edit `.env` and add your OpenAI credentials:
```bash
# OpenAI API Key (required)
OPENAI_API_KEY=sk-proj-your-key-here

# Model (required - must support structured output)
OPENAI_MODEL=gpt-4o-mini
```

Get an API key:
- Go to platform.openai.com/api-keys
- Create a new secret key
- Copy the key into `.env`
Supported Models:
- `gpt-4o-mini` (recommended, cost-effective)
- `gpt-4o` (higher quality, more expensive)
```bash
# Navigate to frontend directory
cd ../frontend

# Install Node dependencies
npm install
```

Start the backend:

```bash
cd backend
python app.py
```

The backend will start on http://localhost:5000.

Start the frontend:

```bash
cd frontend
npm run dev
```

The frontend will start on http://localhost:5173.
- Open Browser: Navigate to http://localhost:5173
- Click "KBA Drafter" Tab: Second tab in navigation
- Check LLM Status: Green badge = ready, Yellow = not available
- Enter Ticket UUID or INC Number: Use a UUID or INC number from `csv/data.csv`
- Generate KBA: Click the "KBA Generieren" button
- Review & Edit: Edit the generated draft
- Mark as Reviewed: Click "Als geprüft markieren"
- Publish: Click "Veröffentlichen" once reviewed
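The review workflow above can be captured as a tiny state machine. A minimal sketch — the status names `draft`, `reviewed`, `published` and the action names are assumptions inferred from the UI labels, not necessarily the backend's actual field values:

```python
# Hypothetical status values inferred from the UI workflow above;
# the backend's real status field may use different names.
TRANSITIONS = {
    "draft": {"mark_reviewed": "reviewed"},
    "reviewed": {"publish": "published"},
}

def advance(status: str, action: str) -> str:
    """Return the next status, or raise if the action is not allowed."""
    try:
        return TRANSITIONS[status][action]
    except KeyError:
        raise ValueError(f"cannot {action!r} from status {status!r}")
```

Note that publishing from `draft` is rejected, mirroring the UI's requirement to mark a draft as reviewed first.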
```bash
curl http://localhost:5000/api/kba/health
```

Expected response:

```json
{
  "llm_available": true,
  "llm_provider": "openai",
  "model": "gpt-4o-mini"
}
```

If `llm_available` is `false`:
- Check `OPENAI_API_KEY` in `.env`
- Verify the key is valid at platform.openai.com
- Check OpenAI API status
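The health check above is easy to script. A minimal stdlib sketch — the endpoint path and JSON field names are taken from this guide:

```python
import json
from urllib.request import urlopen

def summarize_health(payload: dict) -> str:
    """Turn the /api/kba/health JSON payload into a one-line status."""
    if payload.get("llm_available"):
        return f"ready: {payload.get('llm_provider')}/{payload.get('model')}"
    return "not available: check OPENAI_API_KEY in .env"

def check_health(url: str = "http://localhost:5000/api/kba/health") -> str:
    # Requires the backend from this guide to be running locally.
    with urlopen(url) as resp:
        return summarize_health(json.load(resp))
```

`summarize_health` is split out so the interpretation logic can be tested without a running backend.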
```bash
# OpenAI Configuration (required)
OPENAI_API_KEY=sk-proj-your-key-here
OPENAI_MODEL=gpt-4o-mini

# Database (optional)
KBA_DATABASE_URL=sqlite:///./data/kba.db
```

Edit `.env`:
```bash
# Cost-effective (recommended)
OPENAI_MODEL=gpt-4o-mini

# Higher quality
OPENAI_MODEL=gpt-4o
```

Model Requirements:
- Must support OpenAI Structured Output
- Released after August 2024
- See KBA_OPENAI_INTEGRATION.md for details
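For context, OpenAI Structured Output is requested by passing a strict JSON schema as `response_format`. A sketch of what such a schema might look like — the KBA field names here are illustrative assumptions; the real schema lives in `backend/kba_schemas.py`:

```python
# Illustrative only: the actual KBA fields are defined in backend/kba_schemas.py.
KBA_RESPONSE_FORMAT = {
    "type": "json_schema",
    "json_schema": {
        "name": "kba_draft",
        "strict": True,  # strict mode is why a post-August-2024 model is required
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "problem": {"type": "string"},
                "solution": {"type": "string"},
            },
            "required": ["title", "problem", "solution"],
            "additionalProperties": False,
        },
    },
}
```

With `strict: True`, the model is constrained to emit JSON matching this schema exactly, which is what makes the generated drafts reliably parseable.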
```bash
# Health check
curl http://localhost:5000/api/kba/health

# List guidelines
curl http://localhost:5000/api/kba/guidelines

# Generate draft (replace the UUID with a real ticket UUID)
curl -X POST http://localhost:5000/api/kba/drafts \
  -H "Content-Type: application/json" \
  -d '{
    "ticket_id": "550e8400-e29b-41d4-a716-446655440000",
    "user_id": "test@example.com"
  }'
```

- Open DevTools Console (F12)
- Generate a KBA draft
- Check Network tab for API calls
- Verify no errors in Console
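The draft-generation curl call above can also be issued from Python. A minimal stdlib sketch — send the returned request with `urllib.request.urlopen` while the backend is running:

```python
import json
from urllib.request import Request

def build_draft_request(ticket_id: str, user_id: str,
                        base: str = "http://localhost:5000") -> Request:
    """Build the POST /api/kba/drafts request shown in the curl example."""
    body = json.dumps({"ticket_id": ticket_id, "user_id": user_id}).encode()
    return Request(f"{base}/api/kba/drafts", data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")
```

Building the `Request` separately from sending it keeps the payload logic testable without a live server.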
Solution:
- Check that `.env` contains `OPENAI_API_KEY=sk-proj-...`
- Restart the backend after editing `.env`
- Verify the key is valid at platform.openai.com
Solution:

```bash
# Check OpenAI API health
curl http://localhost:5000/api/kba/health

# Check OpenAI status page
# https://status.openai.com/
```

Solution:
- Wait 60 seconds and retry
- Check your OpenAI usage at platform.openai.com/usage
- Consider upgrading your OpenAI plan
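The usual programmatic remedy for rate-limit (429) errors is retrying with exponential backoff. A minimal sketch — `RuntimeError` stands in for whatever rate-limit exception your HTTP client raises:

```python
import time

def backoff_delays(retries: int = 3, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(cap, base * (2 ** i)) for i in range(retries)]

def call_with_retry(fn, retries: int = 3):
    """Call fn, sleeping between attempts; RuntimeError stands in for a 429."""
    for delay in backoff_delays(retries):
        try:
            return fn()
        except RuntimeError:
            time.sleep(delay)
    return fn()  # final attempt; any exception now propagates to the caller
```

Adding random jitter to each delay is a common refinement when many clients share one API key.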
Check:
- Backend logs for error details
- OpenAI API key is valid
- Ticket UUID exists in `csv/data.csv`
- Guideline files exist in `docs/kba_guidelines/`
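The "ticket UUID exists" check above can be automated as a preflight. A minimal sketch — the ID column name `uuid` is an assumption; match it to the actual header of `csv/data.csv`:

```python
import csv
import io

def ticket_exists(csv_text: str, ticket_id: str, id_column: str = "uuid") -> bool:
    """Return True if any row's id_column equals ticket_id."""
    # id_column="uuid" is a guess; adjust to your csv/data.csv header.
    reader = csv.DictReader(io.StringIO(csv_text))
    return any(row.get(id_column) == ticket_id for row in reader)
```

In practice you would pass `Path("csv/data.csv").read_text()` as `csv_text`.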
Solution:

```bash
# Check backend is running
curl http://localhost:5000/api/health

# If not, start backend
cd backend && python app.py
```

Check:

```bash
# Check npm is running
ps aux | grep vite

# If not, start frontend
cd frontend && npm run dev

# Check browser console for errors (F12)
```

```
┌─────────────────┐
│    Frontend     │  React + FluentUI
│   (Port 5173)   │
└────────┬────────┘
         │ HTTP REST
         │
┌────────▼────────┐
│     Backend     │  Quart (async Flask)
│   (Port 5000)   │
└────────┬────────┘
         │
    ┌────┴────┬──────────┐
    │         │          │
┌───▼───┐ ┌──▼──────┐ ┌─▼────────┐
│  CSV  │ │ SQLite  │ │  OpenAI  │
│ Data  │ │  (KBA)  │ │  (LLM)   │
└───────┘ └─────────┘ └──────────┘
```
Guidelines are markdown files that provide context to the LLM:
```
docs/kba_guidelines/
├── GENERAL.md          # Always included
├── VPN.md              # VPN issues
├── PASSWORD_RESET.md   # Password/account
└── NETWORK.md          # Network issues
```
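How the loader might pick files, given that `GENERAL.md` is always included. A sketch — the mapping keys are assumptions; the real `CATEGORY_MAP` lives in `backend/guidelines_loader.py`:

```python
# Assumed mapping; see backend/guidelines_loader.py for the real CATEGORY_MAP.
CATEGORY_MAP = {
    "vpn": "VPN.md",
    "password": "PASSWORD_RESET.md",
    "network": "NETWORK.md",
}

def select_guidelines(category: str) -> list[str]:
    """GENERAL.md is always included; a category file is added when mapped."""
    files = ["GENERAL.md"]
    extra = CATEGORY_MAP.get(category.lower())
    if extra:
        files.append(extra)
    return files
```

Unmapped categories fall back to the general guideline alone, so generation still works for ticket types without a dedicated file.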
Adding New Guidelines:
- Create a `.md` file in `docs/kba_guidelines/`
- Add frontmatter:

  ```markdown
  ---
  category: EMAIL
  priority: 10
  tags: [outlook, email]
  ---

  # Email Troubleshooting Guide
  ...
  ```

- Update `CATEGORY_MAP` in `backend/guidelines_loader.py`
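Frontmatter of the shape above can be split off in a few lines. A simplified sketch — the real loader may use a proper YAML parser; this one treats every value as a string:

```python
def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the markdown body."""
    if not text.startswith("---"):
        return {}, text
    _, fm, body = text.split("---", 2)
    meta = {}
    for line in fm.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()  # values stay strings, e.g. "10"
    return meta, body.lstrip()
```

A list value like `tags: [outlook, email]` comes back as the literal string `"[outlook, email]"` here; a YAML library would parse it into a real list.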
- Use gpt-4o-mini for cost-effectiveness
- Guidelines caching: Already implemented
- Database: SQLite fine for <1000 drafts, use PostgreSQL for more
Edit `backend/kba_prompts.py`:

```python
def build_kba_prompt(...):
    return f"""
# YOUR CUSTOM PROMPT
...
"""
```

- Update `backend/kba_models.py`:

  ```python
  class KBADraft(BaseModel):
      new_field: str = Field(description="...")
  ```

- Update the JSON schema in `backend/kba_schemas.py`
- Update the prompt in `kba_prompts.py`
- Update the frontend `KBADrafterPage.jsx`
```bash
# Use curl for quick tests (local Ollama endpoint)
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:1b",
    "prompt": "Generate a KBA for VPN issues",
    "stream": false
  }'
```

- Use PostgreSQL instead of SQLite
- Set proper CORS origins
- Use a production ASGI server (Hypercorn; Quart is async and needs ASGI, not WSGI)
- Enable HTTPS
- Set up monitoring (audit logs)
- Configure backup for database
- Use larger LLM model for quality
- Add authentication/authorization
- Rate limiting on API
```bash
# .env.production
OPENAI_API_KEY=sk-proj-your-production-key
OPENAI_MODEL=gpt-4o
KBA_DATABASE_URL=postgresql://user:pass@db:5432/kba
```

- Customize Guidelines: Edit files in `docs/kba_guidelines/`
- Test with Real Data: Use your actual ticket CSV
- Tune Prompts: Adjust `kba_prompts.py` for your needs
- Add Categories: Create category-specific guidelines
- Integrate Publishing: Implement SharePoint/Confluence export
- Full Documentation: docs/KBA_DRAFTER.md
- OpenAI Docs: https://platform.openai.com/docs
- Quart Docs: https://quart.palletsprojects.com/
- FluentUI: https://react.fluentui.dev/
For issues:
- Check backend terminal for errors
- Check browser console (F12)
- Review the audit trail: `GET /api/kba/drafts/{id}/audit`