Watch your AI agent think in real-time.
A plug-and-play visualization tool that transforms LangChain agent execution into an animated mind map. Built for the new create_agent API (LangChain v1.0+). Perfect for demos, debugging, presentations, and social media content.
When you run a LangChain agent through this visualizer, you see:
- 🧠 Thought nodes bloom when the LLM starts reasoning
- 🔧 Tool nodes appear when tools are invoked
- ✅ Result nodes show tool outputs
- 🏁 Final answer displays prominently when complete
- Animated connections draw between related steps
- Real-time updates via WebSocket streaming
Instead of staring at terminal logs, you get a beautiful, animated graph that shows exactly how your agent thinks: perfect for LinkedIn/Twitter demos, client presentations, or understanding complex agent behavior.
- Python 3.9+
- Node.js 18+
- An OpenAI API key (or Anthropic key)
```bash
# Clone or unzip the project
cd agent-mind-map-visualizer

# Copy environment template
cp .env.example .env

# Add your API key to .env
echo "OPENAI_API_KEY=sk-your-key-here" >> .env
```

**Option 1: Using the start script (Recommended)**

```bash
chmod +x start.sh
./start.sh
```

This automatically:
- Creates a Python virtual environment
- Installs all dependencies
- Starts both backend and frontend servers
**Option 2: Manual startup**
```bash
# Terminal 1: Backend
cd backend
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

```bash
# Terminal 2: Frontend
cd frontend
npm install
npm run dev
```

Navigate to http://localhost:5173 and start chatting!
```
agent-mind-map-visualizer/
│
├── config.yaml                  # 👈 Configure your agent here
├── .env                         # API keys (create from .env.example)
├── start.sh                     # One-command startup script
│
├── backend/
│   ├── main.py                  # FastAPI WebSocket server (uses stream_mode="updates")
│   ├── agent_loader.py          # Dynamic agent loading
│   └── requirements.txt         # Python dependencies
│
├── frontend/
│   ├── src/
│   │   ├── App.tsx              # Main application
│   │   ├── components/
│   │   │   ├── MindMap.tsx      # Graph visualization
│   │   │   ├── Node.tsx         # Animated node component
│   │   │   └── InputPanel.tsx   # Chat input UI
│   │   ├── hooks/
│   │   │   └── useAgentSocket.ts  # WebSocket connection
│   │   └── types.ts             # TypeScript definitions
│   ├── package.json
│   └── vite.config.ts
│
├── agents/                      # 👈 Drop your agents here
│   └── rag_agent/               # Example agent included
│       ├── __init__.py
│       └── agent.py             # Uses new create_agent API
│
└── docs/
    └── BRING_YOUR_OWN_AGENT.md  # Detailed integration guide
```
The visualizer works with any LangChain agent built using the new create_agent API.
Copy your agent folder into the agents/ directory:
```
agents/
├── rag_agent/           # Example agent
└── my_custom_agent/     # 👈 Your agent here
    ├── __init__.py
    └── agent.py
```
Your agent should use LangChain's new create_agent function:
```python
# agents/my_custom_agent/agent.py
from langchain.agents import create_agent
from langchain.tools import tool


@tool
def my_tool(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"


def create_my_agent(model: str = "openai:gpt-4o-mini"):
    """
    Factory function that creates the agent.

    Args:
        model: Model identifier (e.g., "openai:gpt-4o", "anthropic:claude-sonnet-4-6")

    Returns:
        A compiled LangGraph agent
    """
    return create_agent(
        model=model,
        tools=[my_tool],
        system_prompt="You are a helpful assistant.",
        name="my_agent",
    )
```

Edit `config.yaml`:

```yaml
agent:
  module_path: "agents/my_custom_agent"
  entry_point: "create_my_agent"
```

Then restart:

```bash
./start.sh
```

That's it! The visualizer will now use your agent.
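Under the hood, `agent_loader.py` resolves `module_path` and `entry_point` from `config.yaml` and imports your factory dynamically. A minimal sketch of how such loading can work (the function name `load_entry_point` and the single-file layout are assumptions for illustration, not the project's actual API):

```python
import importlib.util
from pathlib import Path


def load_entry_point(module_path: str, entry_point: str):
    """Load a factory function from <module_path>/agent.py by name.

    Hypothetical sketch of dynamic agent loading; the real
    agent_loader.py may differ.
    """
    agent_file = Path(module_path) / "agent.py"
    spec = importlib.util.spec_from_file_location("user_agent", agent_file)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)          # execute the agent module
    return getattr(module, entry_point)      # e.g. create_my_agent
```

This is why the troubleshooting section below stresses correct paths and an existing entry-point function: both are looked up at startup.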
```yaml
# Agent Configuration (New create_agent API)
agent:
  module_path: "agents/rag_agent"    # Path to agent (relative or absolute)
  entry_point: "create_rag_agent"    # Function that returns the agent
  env_vars:                          # Optional: inject environment variables
    # OPENAI_API_KEY: "sk-..."

# Server Settings
server:
  host: "0.0.0.0"
  port: 8000
  cors_origins:
    - "http://localhost:5173"
    - "http://localhost:3000"

# Visualization Theme
visualization:
  nodes:
    thought_color: "#8B5CF6"   # Purple - LLM reasoning
    tool_color: "#F59E0B"      # Amber - Tool calls
    result_color: "#10B981"    # Green - Results
    error_color: "#EF4444"     # Red - Errors
    input_color: "#3B82F6"     # Blue - User input
    output_color: "#EC4899"    # Pink - Final output
  animation:
    node_appear: 300       # ms
    connection_draw: 200   # ms
    pulse_interval: 1500   # ms

# Logging
logging:
  level: "INFO"   # DEBUG, INFO, WARNING, ERROR
```

```bash
# Required: at least one LLM API key
OPENAI_API_KEY=sk-your-openai-key

# Optional: for Anthropic-based agents
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

# Optional: LangSmith tracing
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-key
LANGCHAIN_PROJECT=agent-mind-map
```

The new `create_agent` uses string identifiers in the format `provider:model`:
```python
# OpenAI models
create_agent(model="openai:gpt-4o", ...)
create_agent(model="openai:gpt-4o-mini", ...)

# Anthropic models
create_agent(model="anthropic:claude-sonnet-4-6", ...)
create_agent(model="anthropic:claude-opus-4-5", ...)

# Auto-inference (provider detected from model name)
create_agent(model="gpt-4o", ...)  # Infers openai:
```

| Node Type | Color | Icon | When It Appears |
|---|---|---|---|
| Input | Blue | 📥 | User sends a message |
| Thought | Purple | 🧠 | LLM starts processing |
| Tool | Amber | 🔧 | Tool is invoked |
| Result | Green | ✅ | Tool returns a result |
| Action | Cyan | 🎯 | Agent decides on an action |
| Output | Pink | 🏁 | Final answer ready |
| Error | Red | ❌ | Something went wrong |
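The `provider:model` identifiers shown earlier are simple to work with in your own code; here is a sketch of splitting one and falling back to provider inference (the prefix heuristics are assumptions for illustration, not LangChain's actual inference logic):

```python
def parse_model_id(model_id: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split "provider:model" into (provider, model).

    If no provider prefix is present, guess one from the model name.
    The prefix rules below are illustrative assumptions only.
    """
    if ":" in model_id:
        provider, model = model_id.split(":", 1)
        return provider, model
    if model_id.startswith("gpt-"):
        return "openai", model_id
    if model_id.startswith("claude"):
        return "anthropic", model_id
    return default_provider, model_id
```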
```
┌──────────────┐    WebSocket    ┌──────────────┐
│   Frontend   │◄───────────────►│   Backend    │
│   (React)    │    Real-time    │  (FastAPI)   │
└──────────────┘     Events      └───────┬──────┘
                                         │
                                  ┌──────▼──────┐
                                  │   Stream    │
                                  │  Processor  │
                                  └──────┬──────┘
                                         │ stream_mode="updates"
                                  ┌──────▼──────┐
                                  │  LangChain  │
                                  │ create_agent│
                                  └─────────────┘
```
- User sends message → Frontend sends it via WebSocket
- Backend receives → Creates agent, calls `agent.stream()` with `stream_mode="updates"`
- Agent executes → LangGraph emits updates after each step (`model`, `tools`)
- Events stream → Backend processes chunks and sends them to the frontend
- Frontend renders → Nodes animate into the mind map
- Agent completes → Final answer displays prominently
```python
# No callbacks needed! Uses built-in streaming
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": query}]},
    stream_mode="updates",
):
    # With stream_mode="updates", each chunk maps a step name to its output
    for step_name, step_data in chunk.items():
        # step_name: "model" or "tools"
        messages = step_data["messages"]
        # Process and send to WebSocket...
```

Every step of agent execution appears instantly, with no waiting for completion. Uses LangChain's native `stream_mode="updates"`.
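The "process and send" step can be as simple as turning each update into a JSON event for the frontend. A minimal sketch of such a mapping (the helper name, event types, and field shapes here are assumptions for illustration; the real backend may shape events differently):

```python
import json
from datetime import datetime, timezone


def chunk_to_events(step_name: str, step_data: dict, run_id: str) -> list[str]:
    """Map one stream_mode="updates" step to JSON event strings.

    Illustrative sketch; field names loosely follow the WebSocket
    event example in the API section.
    """
    event_type = "llm_step" if step_name == "model" else "tool_step"
    events = []
    for i, message in enumerate(step_data.get("messages", [])):
        events.append(json.dumps({
            "type": event_type,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "run_id": run_id,
            "data": {
                "node_id": f"{step_name}_{i}",
                "preview": str(message)[:80],  # short preview for the UI
            },
        }))
    return events
```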
- Drag the canvas to pan
- Scroll to zoom in/out
- Reset button returns to default view
Visual indicator shows WebSocket connection state with auto-reconnect.
Clickable example prompts to quickly test the agent.
The agent's response displays in a highlighted card below the graph.
Tips for creating social media content:
- Use a clean query that demonstrates multiple tool calls
- Record in 1080p or higher for clarity
- Slow down: the animations are the star
- Add captions explaining what's happening
- Keep it under 60 seconds for Twitter/X
- Dark theme looks great on social feeds
Recommended screen recording tools:
- macOS: Built-in (Cmd+Shift+5) or OBS
- Windows: OBS or ShareX
- Linux: OBS or SimpleScreenRecorder
**Agent fails to load**

- Check your `config.yaml` paths are correct
- Verify the entry point function exists and returns a `create_agent` result
- Run the loader test: `cd backend && python agent_loader.py`

**Frontend can't connect**

- Ensure the backend is running on port 8000
- Check that the CORS origins in `config.yaml` include your frontend URL
- Look for errors in the backend terminal

**Nodes don't appear during streaming**

- Ensure your agent is created with `create_agent()` (not the old `AgentExecutor`)
- Check the backend logs for streaming events
- Verify your tools are decorated with `@tool`

**Import errors**

- Use relative imports: `from .tools import my_tool`
- Ensure `__init__.py` exists in your agent folder
- Check all dependencies are in `requirements.txt`
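The import checklist above can be automated with a few lines; a quick sanity-check sketch (a hypothetical helper, not part of the project):

```python
from pathlib import Path


def check_agent_folder(folder: str) -> list[str]:
    """Return a list of problems found in an agent folder, empty if OK.

    Hypothetical helper mirroring the troubleshooting checklist above.
    """
    problems = []
    path = Path(folder)
    if not (path / "__init__.py").exists():
        problems.append("missing __init__.py")
    if not (path / "agent.py").exists():
        problems.append("missing agent.py")
    return problems
```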
Real-time agent interaction.
Send:

```json
{
  "message": "What is the vacation policy?",
  "session_id": "optional-session-id"
}
```

Receive (stream of events):

```json
{
  "type": "llm_start",
  "timestamp": "2024-01-15T10:30:00Z",
  "run_id": "session-123",
  "data": {
    "node_id": "thought_1",
    "parent_id": "input_0",
    "label": "🧠 Reasoning",
    "preview": "Calling: search_documents"
  }
}
```

Health check endpoint.
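On the client side, events like the one above are folded into a nodes-and-edges graph. A simplified sketch of that accumulation, in Python for clarity (the frontend does this in TypeScript; field names follow the event example):

```python
def apply_event(graph: dict, event: dict) -> dict:
    """Fold one streamed event into a {nodes, edges} graph dict.

    Simplified sketch of the accumulation the frontend performs.
    """
    data = event["data"]
    graph.setdefault("nodes", {})[data["node_id"]] = {
        "label": data.get("label", ""),
        "preview": data.get("preview", ""),
    }
    parent = data.get("parent_id")
    if parent:
        # draw a connection from the parent step to this one
        graph.setdefault("edges", []).append((parent, data["node_id"]))
    return graph
```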
Returns visualization configuration.
Non-streaming query (for simple integrations).

```json
{
  "message": "Hello",
  "session_id": "optional"
}
```

```dockerfile
# Dockerfile example structure
FROM python:3.11-slim
WORKDIR /app
COPY backend/ ./backend/
RUN pip install -r backend/requirements.txt
# ...
```

```bash
# Required
OPENAI_API_KEY=sk-...

# Server config
HOST=0.0.0.0
PORT=8000

# Frontend URL for CORS
CORS_ORIGINS=https://your-app.com
```

This project implements the Mind Map style. Other styles that can be built:
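A backend typically needs `CORS_ORIGINS` as a list rather than a string; a sketch of turning a comma-separated env value into one (an assumed convention for illustration; check the actual backend code):

```python
import os


def cors_origins(default: str = "http://localhost:5173") -> list[str]:
    """Parse a comma-separated CORS_ORIGINS env var into a list of origins.

    Assumed convention for illustration; verify against main.py.
    """
    raw = os.environ.get("CORS_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```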
| Style | Best For | Description |
|---|---|---|
| Mind Map ✅ | Reasoning chains | Nodes bloom outward showing thought process |
| Mission Control | Multi-agent systems | NASA-style dashboard with status panels |
| Pipeline | ETL/RAG workflows | Conveyor belt showing data flow |
| Code Theater | Code generation | Split-screen with live code + output |
| Workspace Canvas | Computer-use agents | Virtual desktop with windows |
Want another style? Check the docs/ folder or open an issue!
If you have agents using the old AgentExecutor pattern:
```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", callbacks=callbacks)
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, callbacks=callbacks)
result = executor.invoke({"input": query})
```

```python
from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    system_prompt="Your prompt here",
)

# Streaming is built-in, no callbacks needed
for chunk in agent.stream({"messages": [...]}, stream_mode="updates"):
    ...
```

Key Differences:

- No separate `AgentExecutor` → `create_agent` returns a ready-to-use graph
- No callbacks → use `stream_mode="updates"` for real-time events
- Model as string → `"openai:gpt-4o"` instead of `ChatOpenAI(...)`
- Messages format → `{"messages": [...]}` instead of `{"input": "..."}`
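The payload-format difference is mechanical to bridge; a sketch of converting an old-style input and pulling the final answer out of a new-style result (the helper names are hypothetical, and real results hold LangChain message objects with a `.content` attribute rather than plain dicts):

```python
def to_messages_payload(old_payload: dict) -> dict:
    """Convert an old {"input": ...} payload to the new messages format."""
    return {"messages": [{"role": "user", "content": old_payload["input"]}]}


def final_answer(result: dict) -> str:
    """Pull the last message's content from a create_agent-style result.

    Assumes dict messages with a "content" key for illustration.
    """
    return result["messages"][-1]["content"]
```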
Contributions welcome! Areas of interest:
- Additional visualization styles (Dashboard, Pipeline, etc.)
- LangGraph-specific visualization with state machines
- Docker deployment configuration
- Recording/replay functionality
- Custom theming UI
- Multi-agent visualization
- Mobile-optimized view
MIT License. Use freely for personal and commercial projects.
Built with:
- LangChain → Agent framework (v1.0+ with `create_agent`)
- LangGraph → Agent runtime
- FastAPI → Backend API
- React → Frontend UI
- Framer Motion → Animations
- Tailwind CSS → Styling
- Vite → Build tool
Built for AI Engineers who want their demos to shine. ✨
Questions? Issues? Open an issue or reach out!