An AI-powered full-stack application that generates high-quality project documentation from source code repositories. Connect a GitHub repo, let specialized micro-agents analyze the codebase, architecture, dependencies, and APIs, and get structured README documentation in minutes, powered by multi-provider LLMs, OpenAI-compatible endpoints, or locally hosted models served through Ollama.
- Project Overview
- Architecture
- Get Started
- Project Structure
- Usage Guide
- LLM Provider Configuration
- Inference Benchmarks
- Model Capabilities
- Environment Variables
- Technology Stack
- Troubleshooting
- License
DocuBot shows how agentic AI can be applied to one of the most time-consuming software tasks: documentation. The application analyzes real project evidence from a repository and uses specialized micro-agents to generate structured, context-aware README documentation that is more accurate and maintainable than traditional single-prompt generation.
The application supports a flexible inference layer, allowing it to work with OpenAI, Groq, OpenRouter, custom OpenAI-compatible APIs, and local Ollama deployments. This makes it practical for cloud-based teams, enterprise environments, and privacy-sensitive local setups alike.
This makes DocuBot suitable for:
- Enterprise teams — integrate with internal gateways, hosted APIs, or private inference infrastructure
- Local experimentation — run documentation generation with self-hosted models through Ollama
- Hardware benchmarking — measure SLM throughput on Apple Silicon, CUDA, or Intel Gaudi hardware
- Repository Analysis: Users provide a GitHub repository URL. The system clones and analyzes the codebase structure, dependencies, and configuration files.
- Multi-Agent Processing: 9 specialized micro-agents work in parallel to extract different aspects: project overview, features, architecture, API endpoints, error handling, configuration, deployment, and troubleshooting.
- Evidence-Based Generation: The system collects concrete evidence from the codebase (dependencies, Docker files, config files) to ensure factually accurate documentation.
- Quality Validation: A QA agent validates all sections against evidence to prevent hallucinations and ensure documentation quality.
- Automated PR Creation: Optionally creates a GitHub Pull Request with the generated README using the Model Context Protocol (MCP).
The platform supports multiple LLM providers (OpenAI, Groq, Ollama, OpenRouter, or any OpenAI-compatible API), allowing teams to choose the best option for their deployment needs. The backend uses LangGraph for workflow orchestration and provides real-time processing updates via Server-Sent Events.
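Every provider behind the universal client speaks the same OpenAI chat-completions request shape; only the base URL, API key, and model name change per provider. A minimal sketch of that payload (the function name is illustrative, not DocuBot's actual client API):

```python
import json

def build_chat_request(model: str, prompt: str,
                       temperature: float = 0.1, max_tokens: int = 1000) -> str:
    """Build an OpenAI-style chat-completions request body.

    The same JSON shape works against OpenAI, Groq, OpenRouter, Ollama's
    /v1 endpoint, or any other OpenAI-compatible API.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a documentation writer."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("gpt-4o", "Summarize this repository.")
```

Swapping providers then amounts to pointing this request at a different base URL with a different key, which is exactly what the `.env` settings control.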
This application uses a micro-agent architecture where specialized agents collaborate to generate comprehensive documentation. The React frontend communicates with a FastAPI backend that orchestrates the multi-agent workflow through LangGraph. The backend integrates with multiple LLM providers through a universal client, enabling flexible deployment options across cloud APIs and local models.
```mermaid
graph TB
    subgraph "Client Layer"
        A[React Web UI<br/>Port 3000]
    end
    subgraph "Backend Layer"
        B[FastAPI Backend<br/>Port 5001]
    end
    subgraph "Workflow Orchestration"
        C[LangGraph Workflow]
        D[State Management]
    end
    subgraph "Micro-Agents (9 Agents)"
        E1[Code Explorer<br/>Overview & Features]
        E2[API Reference<br/>Endpoint Extraction]
        E3[Call Graph<br/>Architecture]
        E4[Error Analysis<br/>Troubleshooting]
        E5[Env Config<br/>Configuration]
        E6[Dependency Analyzer<br/>Prerequisites & Deploy]
        E7[Planner<br/>Section Planning]
        E8[Mermaid<br/>Diagram Generation]
        E9[QA Validator<br/>Quality Check]
    end
    subgraph "Services"
        F[LLM Service]
        G[Git Service]
        H[Evidence Aggregator]
    end
    subgraph "External Services"
        I[LLM Providers<br/>OpenAI/Groq/Ollama/OpenRouter/Custom]
        J[GitHub<br/>PR Creation via MCP]
    end
    A -->|Submit Repo URL| B
    B -->|Initialize Workflow| C
    C -->|Clone Repo| G
    C -->|Execute Agents| E1
    E1 --> E2 --> E3 --> E4 --> E5 --> E6
    E6 --> H
    H --> E7
    E7 --> E8 --> E9
    E1 -.->|Request LLM| F
    E2 -.->|Request LLM| F
    E3 -.->|Request LLM| F
    E4 -.->|Request LLM| F
    E5 -.->|Request LLM| F
    E6 -.->|Request LLM| F
    E7 -.->|Request LLM| F
    E8 -.->|Request LLM| F
    E9 -.->|Request LLM| F
    F -->|API Request| I
    E9 -->|Final README| D
    D -->|Stream Progress| B
    B -->|SSE Updates| A
    B -->|Create PR| J
    G -->|Repo Files| H
    style A fill:#e1f5ff
    style B fill:#fff4e1
    style C fill:#ffe1f5
    style D fill:#e8f5e9
    style E1 fill:#f0f0ff
    style E2 fill:#f0f0ff
    style E3 fill:#f0f0ff
    style E4 fill:#f0f0ff
    style E5 fill:#f0f0ff
    style E6 fill:#f0f0ff
    style E7 fill:#f0f0ff
    style E8 fill:#f0f0ff
    style E9 fill:#f0f0ff
    style F fill:#ffe1f5
    style G fill:#ffe1f5
    style H fill:#ffe1f5
    style I fill:#e1ffe1
    style J fill:#e1ffe1
```
Service Components:
- React Web UI (Port 3000) - Provides repository URL input, real-time agent progress tracking with Server-Sent Events, generated README preview with syntax highlighting, and a PR creation interface
- FastAPI Backend (Port 5001) - Handles API routing, orchestrates workflow execution, manages job state, and serves JSON/SSE responses to the frontend
- LangGraph Workflow - Orchestrates sequential execution of the 9 micro-agents, manages state transitions, handles interrupts for monorepo project selection, and checkpoints workflow state
- Micro-Agents (9 Specialized Agents):
  - Code Explorer: Analyzes project structure to write the Overview & Features sections
  - API Reference: Extracts API endpoints and routes from code
  - Call Graph: Maps component relationships for the Architecture section
  - Error Analysis: Identifies error handlers for the Troubleshooting section
  - Env Config: Discovers configuration files for the Configuration section
  - Dependency Analyzer: Extracts dependencies for the Prerequisites & Deployment sections
  - Planner: Decides which sections to include based on project type
  - Mermaid: Generates architecture diagrams with semantic validation
  - QA Validator: Validates documentation against evidence to prevent hallucinations
- LLM Service - Universal adapter supporting multiple LLM providers (OpenAI, Groq, Ollama, OpenRouter, custom APIs, enterprise inference) with retry logic and SSL verification
- Git Service - Handles repository cloning, branch detection, monorepo analysis, and cleanup
- Evidence Aggregator - Collects concrete evidence from the filesystem (dependencies, Docker files, config files, languages) to ensure factual accuracy
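To make the evidence-collection idea concrete, here is a minimal sketch of how an aggregator might map marker files to evidence categories (an illustrative subset, not the project's actual implementation):

```python
from pathlib import Path

# Marker files that serve as concrete evidence of languages and tooling.
# (Illustrative subset -- a real aggregator inspects many more sources.)
EVIDENCE_MARKERS = {
    "requirements.txt": "python-dependencies",
    "package.json": "node-dependencies",
    "Dockerfile": "docker",
    "docker-compose.yml": "docker-compose",
}

def collect_evidence(repo_root: str) -> dict[str, list[str]]:
    """Map each evidence category to the files that prove it exists."""
    found: dict[str, list[str]] = {}
    root = Path(repo_root)
    for path in root.rglob("*"):
        category = EVIDENCE_MARKERS.get(path.name)
        if category and path.is_file():
            found.setdefault(category, []).append(str(path.relative_to(root)))
    return found
```

Downstream agents can then cite these file paths, which is what lets the QA validator reject claims that have no supporting evidence.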
Typical Flow:
1. User submits a GitHub repository URL through the web UI
2. Backend initializes the workflow and clones the repository
3. System detects whether the repository is a monorepo (multiple projects)
4. If it is, the user selects which project to document (interrupt point)
5. Six section-writer agents execute in sequence, analyzing code and generating sections
6. Evidence aggregator collects filesystem evidence (dependencies, Docker, configs)
7. Planner agent decides which sections to include based on project type
8. Mermaid agent generates the architecture diagram
9. QA agent validates all sections against the collected evidence
10. Assembly node combines the sections into the final README
11. User can download the README or create a GitHub PR with one click
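The flow above can be sketched as a plain sequential pipeline passing one state dict between agents (a toy illustration; the real workflow is a LangGraph graph with checkpointing and interrupts):

```python
# Toy sketch of the sequential agent pipeline. Function names and the
# state shape are illustrative, not DocuBot's actual code.

def overview_agent(state: dict) -> dict:
    """Stand-in for a section writer: adds one section to the state."""
    state["sections"]["Overview"] = f"{state['repo']} is a sample project."
    return state

def qa_agent(state: dict) -> dict:
    """Stand-in for the QA validator: checks every section is non-empty."""
    state["qa_passed"] = all(text.strip() for text in state["sections"].values())
    return state

def run_pipeline(repo: str) -> dict:
    """Thread one mutable state dict through the agents in order."""
    state = {"repo": repo, "sections": {}, "qa_passed": False}
    for agent in (overview_agent, qa_agent):  # the real pipeline chains 9 agents
        state = agent(state)
    return state
```

The shared-state pattern is what lets later agents (Planner, QA Validator) read everything earlier agents produced.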
Before you begin, ensure you have the following installed and configured:
- Docker and Docker Compose (v20.10+)
- LLM Provider Access (choose one):
- OpenAI API Key (Recommended)
- Groq API Key (Fast & Free Tier)
- Ollama Local Installation (Private/Local)
- OpenRouter API Key (Multi-Model)
- Any OpenAI-compatible API endpoint
```bash
# Check Docker
docker --version
docker compose version

# Verify Docker is running
docker ps
```

Recommended for most users: runs everything in containers.
```bash
# If cloning:
git clone https://github.com/cld2labs/DocuBot.git
cd DocuBot
```

Copy the example configuration and add your API key:
```bash
# Copy backend environment template (run from the repository root)
cp api/.env.example api/.env

# Edit the file and add your API key
nano api/.env
```

Update api/.env with your LLM provider credentials:
```
LLM_PROVIDER=openai
LLM_API_KEY=your_actual_api_key_here
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-4o
```

For other providers, see the LLM Provider Configuration section.
```bash
# Build and start all services
docker compose up -d --build

# View logs (optional)
docker compose logs -f
```

Once the containers are running:
- Frontend UI: http://localhost:3000
- Backend API: http://localhost:5001
- API Documentation: http://localhost:5001/docs
```bash
# Check health status
curl http://localhost:5001/health

# Check all services are running
docker compose ps
```

To stop all services:

```bash
docker compose down
```

For developers who want to run services locally without Docker:
- Python 3.11+
- Node.js 20+
- Your chosen LLM provider API key
```bash
cd api

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
nano .env  # Add your API key

# Start backend
uvicorn server:app --reload --port 5001
```

The backend will run on http://localhost:5001
Open a new terminal:
```bash
cd ui

# Install dependencies
npm install

# Configure environment for local development
cp .env.example .env

# Edit .env and set:
# VITE_API_URL=http://localhost:5001
nano .env

# Start frontend
npm run dev
```

The frontend will run on http://localhost:3000
- Frontend: http://localhost:3000
- Backend API: http://localhost:5001
- API Docs: http://localhost:5001/docs
Note: For local development, the frontend .env file must contain:

```
VITE_API_URL=http://localhost:5001
```

This tells the frontend where to find the backend API.
```text
DocuBot/
├── api/
│   ├── agents/
│   │   ├── code_explorer_agent.py        # Overview & Features writer
│   │   ├── api_reference_agent.py        # API endpoint extractor
│   │   ├── call_graph_agent.py           # Architecture writer
│   │   ├── error_analysis_agent.py       # Troubleshooting writer
│   │   ├── env_config_agent.py           # Configuration writer
│   │   ├── dependency_analyzer_agent.py  # Prerequisites & Deployment writer
│   │   ├── planner_agent.py              # Section planner
│   │   ├── mermaid_agent.py              # Diagram generator
│   │   ├── qa_validator_agent.py         # Quality validator
│   │   └── pr_agent_mcp.py               # PR creation via MCP
│   ├── services/
│   │   ├── llm_service.py                # Universal LLM provider client
│   │   └── git_service.py                # Git operations
│   ├── models/
│   │   ├── schemas.py                    # Pydantic data models
│   │   ├── state.py                      # Workflow state
│   │   ├── evidence.py                   # Evidence structures
│   │   └── log_manager.py                # SSE logging
│   ├── tools/
│   │   ├── repo_tools.py                 # Repository analysis tools
│   │   └── new_analysis_tools.py         # Code analysis utilities
│   ├── utils/
│   │   ├── project_detector.py           # Monorepo detection
│   │   └── metrics_extractor.py          # Token usage metrics
│   ├── core/
│   │   ├── metrics_collector.py          # Performance tracking
│   │   └── agent_event_logger.py         # ReAct event logging
│   ├── mcp_client/
│   │   └── github_mcp_client.py          # GitHub MCP integration
│   ├── workflow.py                       # LangGraph workflow definition
│   ├── server.py                         # FastAPI application entry point
│   ├── config.py                         # Environment configuration
│   ├── requirements.txt                  # Python dependencies
│   └── Dockerfile                        # Backend container
├── ui/
│   ├── src/
│   │   ├── pages/
│   │   │   └── HomePage.tsx              # Main documentation generation page
│   │   ├── components/
│   │   │   └── ui/                       # Reusable UI components
│   │   ├── services/
│   │   │   └── api.ts                    # API client utilities
│   │   └── types/                        # TypeScript type definitions
│   ├── package.json                      # npm dependencies
│   ├── vite.config.ts                    # Vite configuration
│   └── Dockerfile                        # Frontend container
├── docs/
│   └── assets/                           # Documentation assets
├── docker-compose.yml                    # Service orchestration
├── .env.example                          # Environment variable template
├── README.md                             # Project documentation
├── TROUBLESHOOTING.md                    # Troubleshooting guide
├── CONTRIBUTING.md                       # Contribution guidelines
├── SECURITY.md                           # Security policy
├── DISCLAIMER.md                         # Usage disclaimer
├── LICENSE.md                            # MIT License
└── TERMS_AND_CONDITIONS.md               # Terms of use
```
1. Open the Application
   - Navigate to http://localhost:3000
2. Enter Repository URL
   - Paste a GitHub repository URL (e.g., https://github.com/owner/repo)
   - Supports branch-specific URLs (e.g., https://github.com/owner/repo/tree/dev)
   - Supports subfolder URLs (e.g., https://github.com/owner/repo/tree/main/backend)
3. Start Documentation Generation
   - Click the "Generate Documentation" button
   - Watch real-time agent progress in the activity panel
   - See which agent is currently running and what it's doing
4. Handle Monorepo Selection (if needed)
   - If the repository contains multiple projects, you'll be prompted to select one
   - Choose the project you want to document
   - The system will focus its analysis on that specific project
5. Review the Generated README
   - Once complete, the README preview appears with syntax highlighting
   - Review all sections: Overview, Features, Architecture, Prerequisites, Deployment, etc.
   - Check the architecture diagram generated by the Mermaid agent
6. Download or Create a PR
   - Download: Click "Download README.md" to save locally
   - Create PR: Click "Create Pull Request" to automatically:
     - Create a new branch (docs/update-readme-{timestamp})
     - Commit the README
     - Open a PR against the repository's default branch
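The accepted URL shapes can be split into owner, repo, branch, and subfolder with a few lines of parsing. This is an illustrative helper, not DocuBot's actual parser, and branch names containing slashes would need extra handling:

```python
from urllib.parse import urlparse

def parse_repo_url(url: str) -> dict:
    """Split a GitHub URL into owner, repo, branch, and optional subfolder.

    Handles the three accepted shapes:
      https://github.com/owner/repo
      https://github.com/owner/repo/tree/<branch>
      https://github.com/owner/repo/tree/<branch>/<subfolder>
    """
    parts = urlparse(url).path.strip("/").split("/")
    owner, repo = parts[0], parts[1]
    branch, subdir = None, None
    if len(parts) >= 4 and parts[2] == "tree":
        branch = parts[3]
        subdir = "/".join(parts[4:]) or None
    return {"owner": owner, "repo": repo, "branch": branch, "subdir": subdir}
```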
- Use the largest model your hardware can sustain. `qwen3:14b` produces the best documentation quality; `qwen3:4b` is faster and good for benchmarking.
- Lower `LLM_TEMPERATURE` (e.g., `0.1`) for more factual, evidence-grounded documentation. Raise it slightly (e.g., `0.3`–`0.5`) for more descriptive, narrative-style README prose.
- Keep repositories focused. The agents analyze up to `MAX_FILES_TO_SCAN` files (default: 500). For large monorepos, use the built-in project selector to target a specific subproject rather than letting the agents scan the entire repo.
- On Apple Silicon, always run Ollama natively, never inside Docker. The Metal GPU backend delivers significantly higher throughput for sequential multi-agent workloads than CPU-only inference.
- On Linux with an NVIDIA GPU, set `CUDA_VISIBLE_DEVICES` before starting Ollama to target a specific GPU.
- For enterprise remote APIs, choose a model with a large context window (≥16k tokens) to avoid truncation on longer inputs.
DocuBot supports multiple LLM providers, all configured via the api/.env file (for example, set LLM_PROVIDER=ollama for local inference).
- Get API Key: https://platform.openai.com/account/api-keys
- Models: `gpt-4o`, `gpt-4-turbo`, `gpt-4o-mini`
- Pricing: Pay-per-use (check OpenAI Pricing)
- Configuration:

```
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL=gpt-4o
```
Groq provides OpenAI-compatible endpoints with extremely fast inference (LPU hardware).
- Get API Key: https://console.groq.com/keys
- Models: `llama-3.2-90b-text-preview`, `llama-3.1-70b-versatile`
- Free Tier: 30 requests/min, 6,000 tokens/min
- Pricing: Very competitive paid tiers
- Configuration:

```
LLM_PROVIDER=groq
LLM_API_KEY=gsk_...
LLM_BASE_URL=https://api.groq.com/openai/v1
LLM_MODEL=llama-3.2-90b-text-preview
```
Runs inference locally on the host machine with full GPU acceleration.
- Install Ollama: https://ollama.com/download
- Pull Model: `ollama pull qwen3:14b`
- Models: `qwen3:4b`, `llama3.1:8b`, `llama3.2:3b`
- Configuration:

```
LLM_PROVIDER=ollama
LLM_API_KEY=          # Leave empty - no API key needed
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL=qwen3:14b
```

- Setup:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull model
ollama pull qwen3:14b

# Verify Ollama is running
curl http://localhost:11434/api/tags
```
OpenRouter provides a unified API across hundreds of models from different providers.
- Get API Key: https://openrouter.ai/keys
- Models: Claude, Gemini, GPT-4, Llama, and 100+ others
- Pricing: Varies by model
- Configuration:

```
LLM_PROVIDER=openrouter
LLM_API_KEY=sk-or-...
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL=anthropic/claude-3-haiku
```
Best for: Custom deployments, internal APIs, alternative providers
Any API that implements the OpenAI chat completions format will work:
```
LLM_PROVIDER=custom
LLM_API_KEY=your_api_key
LLM_BASE_URL=https://your-custom-endpoint.com/v1
LLM_MODEL=your-model-name
```

If the endpoint uses a private domain mapped in /etc/hosts, also set:

```
LOCAL_URL_ENDPOINT=your-private-domain.internal
```

To switch providers, simply update api/.env and restart:
```bash
# Edit configuration
nano api/.env

# Restart backend only
docker compose restart api

# Or restart all services
docker compose down
docker compose up -d
```

The table below compares inference performance across providers, deployment modes, and hardware profiles, using DocuBot's full 9-agent documentation pipeline as a standardized workload.
| Provider | Model | Deployment | Context Window | Avg Input Tokens | Avg Output Tokens | Avg Total Tokens / Request | P50 Latency (ms) | P95 Latency (ms) | Throughput (req/sec) | Hardware |
|---|---|---|---|---|---|---|---|---|---|---|
| vLLM | Qwen3-4B-Instruct-2507 | Local | 262.1K | 3,040 | 307.7 | 5,809 | 15,864 | 40,809 | 0.058 | Apple Silicon, Metal (MacBook Pro M4) |
| Intel OPEA EI | Qwen3-4B-Instruct-2507 | CPU (Xeon) | 8.1K | 4,211.9 | 270 | 4,481 | 10,540 | 32,205 | 0.076 | CPU-only |
| OpenAI (Cloud) | gpt-4o-mini | API (Cloud) | 128K | 3,820.11 | 316.41 | 4,136.52 | 7,760 | 23,535 | 0.108 | N/A |
Notes:
- All benchmarks use the same documentation-generation workflow. Token counts may vary slightly per run due to non-deterministic model output.
- vLLM on Apple Silicon uses Metal (MPS) GPU acceleration.
- Intel OPEA Enterprise Inference runs on Intel Xeon CPUs without GPU acceleration.
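A rough way to compare rows is to convert average output tokens and P50 latency into an effective decode rate. This ignores prompt-processing time, so it understates true generation speed, but it is computed the same way for every row:

```python
def decode_rate(avg_output_tokens: float, p50_latency_ms: float) -> float:
    """Rough effective decode rate (tokens/sec) from the benchmark table.

    Ignores prompt-processing time, so it understates the true generation
    speed; useful only for comparing rows against each other.
    """
    return avg_output_tokens / (p50_latency_ms / 1000)

vllm_rate = decode_rate(307.7, 15_864)    # ~19.4 tok/s on Apple Silicon
openai_rate = decode_rate(316.41, 7_760)  # ~40.8 tok/s via cloud API
```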
A 4-billion-parameter open-weight instruction-tuned model from Alibaba's Qwen team (July 2025 release), designed for on-prem and edge deployment.
| Attribute | Details |
|---|---|
| Parameters | 4.0B total (3.6B non-embedding) |
| Architecture | Transformer with Grouped Query Attention (GQA) — 36 layers, 32 Q-heads / 8 KV-heads |
| Context Window | 262,144 tokens (256K) native |
| Reasoning Mode | Non-thinking only (Instruct-2507 variant). Separate Thinking-2507 variant available with always-on chain-of-thought |
| Tool / Function Calling | Supported; MCP (Model Context Protocol) compatible |
| Structured Output | JSON-structured responses supported |
| Multilingual | 100+ languages and dialects |
| Code Benchmarks | MultiPL-E: 76.8%, LiveCodeBench v6: 35.1%, BFCL-v3 (tool use): 61.9 |
| Quantization Formats | GGUF (Q4_K_M ~2.5 GB, Q8_0 ~4.3 GB), AWQ (int4), GPTQ (int4), MLX (4-bit ~2.3 GB) |
| Inference Runtimes | Ollama, vLLM, llama.cpp, LMStudio, SGLang, KTransformers |
| Fine-Tuning | Full fine-tuning and adapter-based (LoRA); 5,000+ community adapters on HuggingFace |
| License | Apache 2.0 |
| Deployment | Local, on-prem, air-gapped, cloud — full data sovereignty |
OpenAI's cost-efficient multimodal model, accessible exclusively via cloud API.
| Attribute | Details |
|---|---|
| Parameters | Not publicly disclosed |
| Architecture | Multimodal Transformer (text + image input, text output) |
| Context Window | 128,000 tokens input / 16,384 tokens max output |
| Reasoning Mode | Standard inference (no explicit chain-of-thought toggle) |
| Tool / Function Calling | Supported; parallel function calling |
| Structured Output | JSON mode and strict JSON schema adherence supported |
| Multilingual | Broad multilingual support |
| Code Benchmarks | MMMLU: ~87%, strong HumanEval and MBPP scores |
| Pricing | $0.15 / 1M input tokens, $0.60 / 1M output tokens (Batch API: 50% discount) |
| Fine-Tuning | Supervised fine-tuning via OpenAI API |
| License | Proprietary (OpenAI Terms of Use) |
| Deployment | Cloud-only — OpenAI API or Azure OpenAI Service. No self-hosted or on-prem option |
| Knowledge Cutoff | October 2023 |
| Capability | Qwen3-4B-Instruct-2507 | GPT-4o-mini |
|---|---|---|
| Code Analysis & Documentation Generation | Yes | Yes |
| Multi-agent / agentic task execution | Yes | Yes |
| Mermaid / architecture diagram generation | Yes | Yes |
| Function / tool calling | Yes | Yes |
| JSON structured output | Yes | Yes |
| On-prem / air-gapped deployment | Yes | No |
| Data sovereignty | Full (weights run locally) | No (data sent to cloud API) |
| Open weights | Yes (Apache 2.0) | No (proprietary) |
| Custom fine-tuning | Full fine-tuning + LoRA adapters | Supervised fine-tuning (API only) |
| Quantization for edge devices | GGUF / AWQ / GPTQ / MLX | N/A |
| Multimodal (image input) | No | Yes |
| Native context window | 256K | 128K |
Both models support Code Analysis & Documentation Generation, Multi-agent / agentic task execution, Mermaid diagram generation, function calling, and JSON-structured output. However, only Qwen3-4B offers open weights, data sovereignty, and local deployment flexibility — making it suitable for air-gapped, regulated, or cost-sensitive environments. GPT-4o-mini offers lower latency and higher throughput via OpenAI's cloud infrastructure, with added multimodal capabilities.
Configure the application behavior using environment variables in api/.env:
| Variable | Description | Default | Type |
|---|---|---|---|
| `LLM_PROVIDER` | LLM provider name (openai, groq, ollama, openrouter, custom) | `openai` | string |
| `LLM_API_KEY` | API key for the provider (empty for Ollama) | - | string |
| `LLM_BASE_URL` | Base URL for the LLM API | `https://api.openai.com/v1` | string |
| `LLM_MODEL` | Model name to use | `gpt-4o` | string |
| Variable | Description | Default | Type |
|---|---|---|---|
| `TEMPERATURE` | Model creativity level (0.0–1.0, lower = deterministic) | `0.7` | float |
| `MAX_TOKENS` | Maximum tokens per response | `1000` | integer |
| `MAX_RETRIES` | Number of retry attempts for API failures | `3` | integer |
| `REQUEST_TIMEOUT` | Request timeout in seconds | `300` | integer |
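The retry behaviour controlled by MAX_RETRIES is typically exponential backoff around the API call. A sketch of the idea (illustrative only; the actual service also distinguishes retryable from fatal errors):

```python
import time

def with_retries(call, max_retries: int = 3, base_delay: float = 0.5):
    """Retry a flaky API call with exponential backoff.

    Waits base_delay * 2**attempt between attempts; re-raises the last
    error once max_retries attempts are exhausted.
    """
    last_error = None
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:  # real code narrows this to transient API errors
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_error
```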
| Variable | Description | Default | Type |
|---|---|---|---|
| `TEMP_REPO_DIR` | Temporary directory for cloned repositories | `./tmp/repos` | string |
| `MAX_REPO_SIZE` | Maximum repository size in bytes | `10737418240` (10GB) | integer |
| `MAX_FILE_SIZE` | Maximum file size to analyze, in bytes | `1000000` (1MB) | integer |
| `MAX_FILES_TO_SCAN` | Maximum number of files to analyze | `500` | integer |
| Variable | Description | Default | Type |
|---|---|---|---|
| `GITHUB_TOKEN` | Personal access token for PR creation | - | string |
| Variable | Description | Default | Type |
|---|---|---|---|
| `API_PORT` | Backend service port | `5001` | integer |
| `HOST` | Server host binding | `0.0.0.0` | string |
| `CORS_ORIGINS` | Allowed CORS origins | `["http://localhost:3000"]` | list |
An example .env file is available at api/.env.example in the repository.
- Framework: FastAPI (Python web framework with async support)
- Workflow Orchestration: LangGraph with memory checkpointing
- AI Framework: LangChain for agent tools and abstractions
- LLM Providers:
- OpenAI GPT-4o (text generation)
- Groq Llama (fast inference)
- Ollama (local deployment)
- OpenRouter (multi-model access)
- Custom OpenAI-compatible APIs
- Multi-Agent System:
- 9 specialized micro-agents
- Evidence-based generation
- Quality validation with guardrails
- Semantic Mermaid diagram validation
- Git Operations: GitPython for repository management
- GitHub Integration: MCP (Model Context Protocol) for PR creation
- Code Analysis: AST parsing with astroid
- Async Server: Uvicorn (ASGI)
- Config Management: Pydantic Settings with python-dotenv
- Framework: React 18 with TypeScript
- Build Tool: Vite (fast bundler)
- Styling: Tailwind CSS + PostCSS
- UI Components: Custom design system with Lucide React icons
- State Management: React hooks (useState, useEffect)
- API Communication:
- Axios for REST calls
- Fetch API for Server-Sent Events (SSE)
- Markdown Rendering: react-markdown with syntax highlighting
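The SSE stream the frontend consumes has a simple line-based wire format: `event:`/`data:` lines separated by blank lines. Browsers parse it natively via EventSource or fetch; a minimal parser sketch for reference (illustrative, not the app's code):

```python
def parse_sse_events(raw: str) -> list[dict]:
    """Parse the Server-Sent Events wire format into event/data records.

    Events are blank-line separated; an event with no 'event:' field
    defaults to type 'message' per the SSE specification.
    """
    events = []
    for block in raw.split("\n\n"):
        event = {"event": "message", "data": ""}
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"] += line[len("data:"):].strip()
        if event["data"]:
            events.append(event)
    return events
```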
- Containerization: Docker + Docker Compose
- Frontend Server: Nginx (unprivileged)
- Health Checks: Docker health monitoring
- Networking: Docker bridge network
For comprehensive troubleshooting guidance, common issues, and solutions, refer to:
Troubleshooting Guide - TROUBLESHOOTING.md
Check service health:

```bash
curl http://localhost:5001/health
docker compose ps
```

View logs:

```bash
docker compose logs api --tail 50
docker compose logs ui --tail 50
```

Enable debug mode:

```bash
# Update api/.env
LOG_LEVEL=DEBUG

# Restart backend
docker compose restart api
```

This project is licensed under the terms specified in the LICENSE.md file.
DocuBot is provided as-is for documentation generation purposes. While we strive for accuracy:
- Always review AI-generated documentation before publication
- Verify technical details and implementation specifics
- Do not rely solely on AI for critical documentation
- Test thoroughly before using in production environments
- Consult subject matter experts for domain-specific accuracy
For full disclaimer details, see DISCLAIMER.md
