Version: 0.1.29
# LeanIX Agent MCP Server + A2A Agent
Agent package for communicating with LeanIX Enterprise Architecture Management via REST APIs and GraphQL.
This repository is actively maintained; contributions are welcome!
The MCP Server can be run in two modes: stdio (for local testing) or http (for networked access).
- `LEANIX_WORKSPACE`: The URL of the target service.
- `LEANIX_API_TOKEN`: The API token or access token.
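Both variables must be set before launching either transport. A minimal fail-fast sketch (the `load_leanix_config` helper is illustrative and not part of the package's API; only the variable names come from this README):

```python
import os

def load_leanix_config() -> dict:
    """Read the two required LeanIX settings from the environment."""
    workspace = os.environ.get("LEANIX_WORKSPACE")
    token = os.environ.get("LEANIX_API_TOKEN")
    missing = [name for name, value in
               [("LEANIX_WORKSPACE", workspace), ("LEANIX_API_TOKEN", token)]
               if not value]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {"workspace": workspace, "token": token}

# Example values, matching the snippets below.
os.environ["LEANIX_WORKSPACE"] = "http://localhost:8080"
os.environ["LEANIX_API_TOKEN"] = "your_token"
print(load_leanix_config()["workspace"])  # → http://localhost:8080
```
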
Run in stdio mode:

```bash
export LEANIX_WORKSPACE="http://localhost:8080"
export LEANIX_API_TOKEN="your_token"
leanix-mcp --transport "stdio"
```

Run in http mode:

```bash
export LEANIX_WORKSPACE="http://localhost:8080"
export LEANIX_API_TOKEN="your_token"
leanix-mcp --transport "http" --host "0.0.0.0" --port "8000"
```

Run the A2A agent:

```bash
export LEANIX_WORKSPACE="http://localhost:8080"
export LEANIX_API_TOKEN="your_token"
leanix-agent --provider openai --model-id gpt-4o --api-key sk-...
```

Build and run with Docker:

```bash
docker build -t leanix-agent .
docker run -d \
  --name leanix-agent \
  -p 8000:8000 \
  -e TRANSPORT=http \
  -e LEANIX_WORKSPACE="http://your-service:8080" \
  -e LEANIX_API_TOKEN="your_token" \
  knucklessg1/leanix-agent:latest
```

Or with Docker Compose:

```yaml
services:
  leanix-agent:
    image: knucklessg1/leanix-agent:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8000
      - TRANSPORT=http
      - LEANIX_WORKSPACE=http://your-service:8080
      - LEANIX_API_TOKEN=your_token
    ports:
      - 8000:8000
```

MCP client configuration:

```json
{
  "mcpServers": {
    "leanix": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "leanix-agent",
        "leanix-mcp"
      ],
      "env": {
        "LEANIX_WORKSPACE": "http://your-service:8080",
        "LEANIX_API_TOKEN": "your_token"
      }
    }
  }
}
```

Install with pip:

```bash
python -m pip install leanix-agent
```

Or with uv:

```bash
uv pip install leanix-agent
```

This agent uses pydantic-graph orchestration for intelligent routing and optimal context management.
```mermaid
---
title: LeanIX Agent Graph Agent
---
stateDiagram-v2
    [*] --> RouterNode: User Query
    RouterNode --> DomainNode: Classified Domain
    RouterNode --> [*]: Low confidence / Error
    DomainNode --> [*]: Domain Result
```
- **RouterNode**: A fast, lightweight LLM (e.g., `nvidia/nemotron-3-super`) that classifies the user's query into one of the specialized domains.
- **DomainNode**: The executor node. For the selected domain, it dynamically sets environment variables to temporarily enable only the tools relevant to that domain, creating a highly focused sub-agent (e.g., `gpt-4o`) to complete the request. This preserves LLM context and prevents tool hallucination.