coderonfleek/agent-visualizer-langchain

🧠 Agent Mind Map Visualizer

Watch your AI agent think in real-time.

A plug-and-play visualization tool that transforms LangChain agent execution into an animated mind map. Built for the new create_agent API (LangChain v1.0+). Perfect for demos, debugging, presentations, and social media content.



🎬 What It Does

When you run a LangChain agent through this visualizer, you see:

  • 🧠 Thought nodes bloom when the LLM starts reasoning
  • 🔧 Tool nodes appear when tools are invoked
  • ✅ Result nodes show tool outputs
  • 🎉 Final answer displays prominently when complete
  • Animated connections draw between related steps
  • Real-time updates via WebSocket streaming

Instead of staring at terminal logs, you get a beautiful, animated graph that shows exactly how your agent thinks: perfect for LinkedIn/Twitter demos, client presentations, or understanding complex agent behavior.


🚀 Quick Start

Prerequisites

  • Python 3.9+
  • Node.js 18+
  • An OpenAI API key (or Anthropic key)

Installation

# Clone or unzip the project
cd agent-mind-map-visualizer

# Copy environment template
cp .env.example .env

# Add your API key to .env
echo "OPENAI_API_KEY=sk-your-key-here" >> .env

Running

Option 1: Using the start script (Recommended)

chmod +x start.sh
./start.sh

This automatically:

  • Creates a Python virtual environment
  • Installs all dependencies
  • Starts both backend and frontend servers

Option 2: Manual startup

# Terminal 1: Backend
cd backend
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
python main.py

# Terminal 2: Frontend
cd frontend
npm install
npm run dev

Open the App

Navigate to http://localhost:5173 and start chatting!


๐Ÿ“ Project Structure

agent-mind-map-visualizer/
โ”‚
โ”œโ”€โ”€ config.yaml                 # ๐Ÿ‘ˆ Configure your agent here
โ”œโ”€โ”€ .env                        # API keys (create from .env.example)
โ”œโ”€โ”€ start.sh                    # One-command startup script
โ”‚
โ”œโ”€โ”€ backend/
โ”‚   โ”œโ”€โ”€ main.py                 # FastAPI WebSocket server (uses stream_mode="updates")
โ”‚   โ”œโ”€โ”€ agent_loader.py         # Dynamic agent loading
โ”‚   โ””โ”€โ”€ requirements.txt        # Python dependencies
โ”‚
โ”œโ”€โ”€ frontend/
โ”‚   โ”œโ”€โ”€ src/
โ”‚   โ”‚   โ”œโ”€โ”€ App.tsx             # Main application
โ”‚   โ”‚   โ”œโ”€โ”€ components/
โ”‚   โ”‚   โ”‚   โ”œโ”€โ”€ MindMap.tsx     # Graph visualization
โ”‚   โ”‚   โ”‚   โ”œโ”€โ”€ Node.tsx        # Animated node component
โ”‚   โ”‚   โ”‚   โ””โ”€โ”€ InputPanel.tsx  # Chat input UI
โ”‚   โ”‚   โ”œโ”€โ”€ hooks/
โ”‚   โ”‚   โ”‚   โ””โ”€โ”€ useAgentSocket.ts  # WebSocket connection
โ”‚   โ”‚   โ””โ”€โ”€ types.ts            # TypeScript definitions
โ”‚   โ”œโ”€โ”€ package.json
โ”‚   โ””โ”€โ”€ vite.config.ts
โ”‚
โ”œโ”€โ”€ agents/                     # ๐Ÿ‘ˆ Drop your agents here
โ”‚   โ””โ”€โ”€ rag_agent/              # Example agent included
โ”‚       โ”œโ”€โ”€ __init__.py
โ”‚       โ””โ”€โ”€ agent.py            # Uses new create_agent API
โ”‚
โ””โ”€โ”€ docs/
    โ””โ”€โ”€ BRING_YOUR_OWN_AGENT.md # Detailed integration guide

🔌 Using Your Own Agent

The visualizer works with any LangChain agent built using the new create_agent API.

Step 1: Add Your Agent

Copy your agent folder into the agents/ directory:

agents/
├── rag_agent/          # Example agent
└── my_custom_agent/    # 👈 Your agent here
    ├── __init__.py
    └── agent.py

Step 2: Create Your Agent with create_agent

Your agent should use LangChain's new create_agent function:

# agents/my_custom_agent/agent.py

from langchain.agents import create_agent
from langchain.tools import tool


@tool
def my_tool(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"


def create_my_agent(model: str = "openai:gpt-4o-mini"):
    """
    Factory function that creates the agent.
    
    Args:
        model: Model identifier (e.g., "openai:gpt-4o", "anthropic:claude-sonnet-4-6")
    
    Returns:
        A compiled LangGraph agent
    """
    return create_agent(
        model=model,
        tools=[my_tool],
        system_prompt="You are a helpful assistant.",
        name="my_agent",
    )

Step 3: Update Config

Edit config.yaml:

agent:
  module_path: "agents/my_custom_agent"
  entry_point: "create_my_agent"
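
Under the hood, a loader like agent_loader.py can resolve these two values by importing the module and looking up the factory by name. A minimal sketch of that idea, assuming the agent code lives in agent.py inside the configured folder (the function name load_agent_factory is illustrative, not the project's exact implementation):

```python
import importlib.util
from pathlib import Path


def load_agent_factory(module_path: str, entry_point: str):
    """Import <module_path>/agent.py and return the factory named entry_point."""
    agent_file = Path(module_path) / "agent.py"
    spec = importlib.util.spec_from_file_location("user_agent", agent_file)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # run the agent module
    return getattr(module, entry_point)


# Usage: factory = load_agent_factory("agents/my_custom_agent", "create_my_agent")
#        agent = factory()
```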

Step 4: Run

./start.sh

That's it! The visualizer will now use your agent.


โš™๏ธ Configuration

config.yaml

# Agent Configuration (New create_agent API)
agent:
  module_path: "agents/rag_agent"     # Path to agent (relative or absolute)
  entry_point: "create_rag_agent"     # Function that returns the agent
  env_vars:                           # Optional: inject environment variables
    # OPENAI_API_KEY: "sk-..."

# Server Settings
server:
  host: "0.0.0.0"
  port: 8000
  cors_origins:
    - "http://localhost:5173"
    - "http://localhost:3000"

# Visualization Theme
visualization:
  nodes:
    thought_color: "#8B5CF6"    # Purple - LLM reasoning
    tool_color: "#F59E0B"       # Amber - Tool calls
    result_color: "#10B981"     # Green - Results
    error_color: "#EF4444"      # Red - Errors
    input_color: "#3B82F6"      # Blue - User input
    output_color: "#EC4899"     # Pink - Final output
  
  animation:
    node_appear: 300            # ms
    connection_draw: 200        # ms
    pulse_interval: 1500        # ms

# Logging
logging:
  level: "INFO"                 # DEBUG, INFO, WARNING, ERROR

Environment Variables (.env)

# Required: At least one LLM API key
OPENAI_API_KEY=sk-your-openai-key

# Optional: For Anthropic-based agents
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

# Optional: LangSmith tracing
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-key
LANGCHAIN_PROJECT=agent-mind-map

Model Identifiers

The new create_agent uses string identifiers in the format provider:model:

# OpenAI models
create_agent(model="openai:gpt-4o", ...)
create_agent(model="openai:gpt-4o-mini", ...)

# Anthropic models
create_agent(model="anthropic:claude-sonnet-4-6", ...)
create_agent(model="anthropic:claude-opus-4-5", ...)

# Auto-inference (provider detected from model name)
create_agent(model="gpt-4o", ...)  # Infers openai:
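
The provider prefix can be peeled off with a simple split on the first colon. A sketch of that parsing rule (split_model_id is an illustrative helper, and the default-provider fallback is an assumption; LangChain's real inference covers many more model families):

```python
def split_model_id(model_id: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split a "provider:model" identifier; fall back to a default provider."""
    if ":" in model_id:
        provider, model = model_id.split(":", 1)  # split only on the first colon
        return provider, model
    return default_provider, model_id  # bare model name: infer the provider
```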

🎨 Node Types & Colors

Node Type   Color    Icon   When It Appears
Input       Blue     📝     User sends a message
Thought     Purple   🧠     LLM starts processing
Tool        Amber    🔧     Tool is invoked
Result      Green    ✅     Tool returns result
Action      Cyan     🎯     Agent decides on action
Output      Pink     🎉     Final answer ready
Error       Red      ❌     Something went wrong

🔄 How It Works

┌─────────────┐     WebSocket      ┌─────────────┐
│   Frontend  │◄──────────────────►│   Backend   │
│  (React)    │    Real-time       │  (FastAPI)  │
└─────────────┘    Events          └──────┬──────┘
                                          │
                                   ┌──────▼──────┐
                                   │   Stream    │
                                   │  Processor  │
                                   └──────┬──────┘
                                          │ stream_mode="updates"
                                   ┌──────▼──────┐
                                   │ LangChain   │
                                   │ create_agent│
                                   └─────────────┘

New Streaming Architecture (LangChain v1.0+)

  1. User sends message → Frontend sends it via WebSocket
  2. Backend receives → Creates the agent and calls agent.stream() with stream_mode="updates"
  3. Agent executes → LangGraph emits updates after each step (model, tools)
  4. Events stream → Backend processes chunks and sends them to the frontend
  5. Frontend renders → Nodes animate into the mind map
  6. Agent completes → Final answer displays prominently

Streaming Code Pattern

# No callbacks needed! Uses built-in streaming
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": query}]},
    stream_mode="updates",
):
    # With stream_mode="updates", each chunk maps the step that just ran
    # to its state update: {"model": {...}} or {"tools": {...}}
    for step_name, step_data in chunk.items():
        messages = step_data["messages"]
        # Process and send to WebSocket...
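
One way the backend might map those updates onto the node types above is a small dispatch on the step name and the last message. This is a heuristic sketch (classify_step is an illustrative helper, not the project's actual backend code):

```python
def classify_step(step_name: str, last_message: dict) -> str:
    """Heuristically map one streamed step to a mind-map node type.

    Assumes stream_mode="updates", where step_name is the graph node
    that just ran: "model" or "tools".
    """
    if step_name == "tools":
        return "result"   # a tool just returned its output
    if last_message.get("tool_calls"):
        return "tool"     # the model decided to invoke a tool
    return "output"       # plain assistant text: the final answer
```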

📱 Features

Real-Time Streaming

Every step of agent execution appears instantly; no waiting for completion. Uses LangChain's native stream_mode="updates".

Pan & Zoom

  • Drag the canvas to pan
  • Scroll to zoom in/out
  • Reset button returns to default view

Connection Status

Visual indicator shows WebSocket connection state with auto-reconnect.

Example Queries

Clickable example prompts to quickly test the agent.

Final Answer Card

The agent's response displays in a highlighted card below the graph.


🎥 Recording Demos for Social Media

Tips for creating social media content:

  1. Use a clean query that demonstrates multiple tool calls
  2. Record in 1080p or higher for clarity
  3. Slow down: the animations are the star
  4. Add captions explaining what's happening
  5. Keep it under 60 seconds for Twitter/X
  6. Dark theme looks great on social feeds

Recommended screen recording tools:

  • macOS: Built-in (Cmd+Shift+5) or OBS
  • Windows: OBS or ShareX
  • Linux: OBS or SimpleScreenRecorder

๐Ÿ› ๏ธ Troubleshooting

"Agent not loaded" error

  1. Check your config.yaml paths are correct
  2. Verify the entry point function exists and returns a create_agent result
  3. Run the loader test:
    cd backend
    python agent_loader.py

WebSocket won't connect

  1. Ensure backend is running on port 8000
  2. Check CORS origins in config.yaml include your frontend URL
  3. Look for errors in the backend terminal

Nodes not appearing

  1. Ensure your agent is created with create_agent() (not old AgentExecutor)
  2. Check backend logs for streaming events
  3. Verify your tools are decorated with @tool

Import errors in your agent

  1. Use relative imports: from .tools import my_tool
  2. Ensure __init__.py exists in your agent folder
  3. Check all dependencies are in requirements.txt

🔧 API Endpoints

WebSocket: /ws/agent

Real-time agent interaction.

Send:

{
  "message": "What is the vacation policy?",
  "session_id": "optional-session-id"
}

Receive (stream of events):

{
  "type": "llm_start",
  "timestamp": "2024-01-15T10:30:00Z",
  "run_id": "session-123",
  "data": {
    "node_id": "thought_1",
    "parent_id": "input_0",
    "label": "🧠 Reasoning",
    "preview": "Calling: search_documents"
  }
}
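
On the client side, each received event can be folded into a node/edge store that drives the animation. A minimal Python sketch of that bookkeeping (apply_event is illustrative; the actual frontend does this in useAgentSocket.ts in TypeScript):

```python
def apply_event(nodes: dict, edges: list, event: dict) -> None:
    """Fold one streamed event into an in-memory graph of nodes and edges."""
    data = event["data"]
    nodes[data["node_id"]] = {
        "type": event["type"],
        "label": data["label"],
        "preview": data.get("preview", ""),
    }
    parent = data.get("parent_id")
    if parent:  # draw an animated connection back to the originating step
        edges.append((parent, data["node_id"]))
```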

REST: GET /health

Health check endpoint.

REST: GET /config

Returns visualization configuration.

REST: POST /api/query

Non-streaming query (for simple integrations).

{
  "message": "Hello",
  "session_id": "optional"
}

🚢 Deployment

Docker (Coming Soon)

# Dockerfile example structure
FROM python:3.11-slim
WORKDIR /app
COPY backend/ ./backend/
RUN pip install -r backend/requirements.txt
# ...

Environment Variables for Production

# Required
OPENAI_API_KEY=sk-...

# Server config
HOST=0.0.0.0
PORT=8000

# Frontend URL for CORS
CORS_ORIGINS=https://your-app.com

🔮 Other Visualization Styles

This project implements the Mind Map style. Other styles that can be built:

Style             Best For             Description
Mind Map ✅       Reasoning chains     Nodes bloom outward showing the thought process
Mission Control   Multi-agent systems  NASA-style dashboard with status panels
Pipeline          ETL/RAG workflows    Conveyor belt showing data flow
Code Theater      Code generation      Split-screen with live code + output
Workspace Canvas  Computer-use agents  Virtual desktop with windows

Want another style? Check the docs/ folder or open an issue!


📊 Migration from Old API

If you have agents using the old AgentExecutor pattern:

Before (Old API)

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", callbacks=callbacks)
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, callbacks=callbacks)
result = executor.invoke({"input": query})

After (New API)

from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    system_prompt="Your prompt here",
)
# Streaming is built-in, no callbacks needed
for chunk in agent.stream({"messages": [...]}, stream_mode="updates"):
    ...

Key Differences:

  • No separate AgentExecutor: create_agent returns a ready-to-use graph
  • No callbacks: use stream_mode="updates" for real-time events
  • Model as string: "openai:gpt-4o" instead of ChatOpenAI(...)
  • Messages format: {"messages": [...]} instead of {"input": "..."}

๐Ÿค Contributing

Contributions welcome! Areas of interest:

  • Additional visualization styles (Dashboard, Pipeline, etc.)
  • LangGraph-specific visualization with state machines
  • Docker deployment configuration
  • Recording/replay functionality
  • Custom theming UI
  • Multi-agent visualization
  • Mobile-optimized view

📄 License

MIT License: use freely for personal and commercial projects.


๐Ÿ™ Acknowledgments

Built with:


📬 Contact

Built for AI Engineers who want their demos to shine. ✨

Questions? Issues? Open an issue or reach out!
