Agiwo

Open-source streaming-first Python AI agent framework and control plane

Build, orchestrate, trace, and operate tool-using LLM agents with streaming execution, scheduler-based orchestration, persistence, and observability.

Python 3.10+

Quick Links: GitHub Repo · Website · Docs · Getting Started · Comparison · Repository Overview

Public Docs

  • Website: https://docs.agiwo.o-ai.tech
  • Getting started: https://docs.agiwo.o-ai.tech/docs/getting-started/
  • Comparison: https://docs.agiwo.o-ai.tech/docs/compare/agiwo-vs-langgraph-openai-agents-autogen/
  • Repository overview: https://docs.agiwo.o-ai.tech/docs/repo-overview/

Repository Structure

Agiwo has three main areas:

  • agiwo/ — the SDK runtime, including agent execution, tools, scheduler orchestration, model abstraction, memory, workspace, and observability
  • console/ — the FastAPI control plane and internal web UI
  • docs/ — design notes, concepts, and repository-native documentation

What Is Agiwo?

Agiwo has two parts:

  • SDK: an async, streaming-first Python framework for building LLM agents with tools, hooks, storage, observability, skills, and scheduler-based orchestration.
  • Console: an optional self-hosted FastAPI + Next.js control plane for managing agent configs, chatting over SSE, inspecting scheduler state, and integrating channels. It is currently best suited for internal deployments, supports Feishu as its only built-in channel integration, and is not yet production-ready.

The project favors explicit runtime wiring over hidden global state. Agent execution, tool execution, scheduler orchestration, and persistence are all separate layers.

Current Capabilities

  • Streaming-first agent execution through one runtime pipeline surfaced as start()
  • Tool calling with builtin tools, custom BaseTool implementations, and agent-as-tool composition via Agent.as_tool()
  • Scheduler orchestration for roots and child agents, including route_root_input (unified entry point), enqueue_input, wait_for, steer, cancel, and shutdown
  • Run and step persistence plus trace collection, with in-memory, SQLite, and MongoDB-backed storage options
  • Global skill discovery with per-agent allowlisting through explicit allowed_skills
  • Optional Console package for control-plane operations, trace inspection, session chat, and Feishu channel integration

Quick Start

Install

# SDK
pip install agiwo

For development from source:

git clone https://github.com/xhwSkhizein/agiwo.git
cd agiwo
uv sync
uv run python scripts/install_git_hooks.py

For SDK usage, export provider credentials in your shell or place them in a local .env. Set only the credentials for the providers you actually use.

Example:

export OPENAI_API_KEY=...
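If you prefer a local .env over shell exports, a tiny loader is enough for simple KEY=VALUE files. This is a minimal sketch, not part of the SDK; the helper name `load_env_file` is illustrative, and python-dotenv is a more complete alternative:

```python
import os


def load_env_file(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values: dict[str, str] = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip("'\"")
    return values


if os.path.exists(".env"):
    # setdefault means shell-exported variables win over .env values
    for key, value in load_env_file(".env").items():
        os.environ.setdefault(key, value)
```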

Minimal SDK Example

import asyncio

from agiwo.agent import Agent, AgentConfig
from agiwo.llm import OpenAIModel


async def main() -> None:
    agent = Agent(
        AgentConfig(
            name="assistant",
            description="A helpful assistant",
            system_prompt="You are a concise assistant.",
        ),
        model=OpenAIModel(name="gpt-5.4"),
    )

    # Start the agent and get the execution handle
    handle = await agent.start("What is 2 + 2?")
    result = await handle.wait_for_completion()
    print(result.response)

    # For streaming output, use the handle's stream
    handle = await agent.start("Give me a one-line summary of recursion.")
    async for event in handle.stream():
        if event.type == "step_delta" and event.delta.content:
            print(event.delta.content, end="", flush=True)

    await agent.close()


asyncio.run(main())

Custom Tool Example

from agiwo.tool import BaseTool, ToolResult, ToolContext


class WeatherTool(BaseTool):
    name = "get_weather"
    description = "Get the current weather for a city"

    def get_parameters(self) -> dict:
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
            },
            "required": ["city"],
        }

    async def execute(
        self,
        parameters: dict,
        context: ToolContext,
        abort_signal=None,
    ) -> ToolResult:
        city = parameters["city"]
        return ToolResult.success(
            tool_name=self.name,
            content=f"Weather in {city}: sunny, 25C",
            content_for_user=f"{city}: sunny, 25C",
            output={"city": city, "condition": "sunny", "temp_c": 25},
        )
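Before `execute` runs, a tool runtime typically checks the call against the declared JSON-schema-style parameters. The standalone sketch below shows the shape of that required-field check; it is illustrative only, not agiwo's actual validation path:

```python
def check_required(schema: dict, parameters: dict) -> list[str]:
    """Return the names of required schema fields missing from the call."""
    return [name for name in schema.get("required", []) if name not in parameters]


# Same shape as WeatherTool.get_parameters() above
schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

print(check_required(schema, {"city": "Tokyo"}))  # []
print(check_required(schema, {}))                 # ['city']
```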

Agent As Tool

from agiwo.agent import Agent, AgentConfig
from agiwo.llm import DeepseekModel

researcher = Agent(
    AgentConfig(
        name="researcher",
        description="Research specialist",
        system_prompt="You are strong at collecting and summarizing evidence.",
    ),
    model=DeepseekModel(id="deepseek-chat"),
)

orchestrator = Agent(
    AgentConfig(
        name="orchestrator",
        description="Delegates focused research tasks",
        system_prompt="Delegate independent research tasks when useful.",
    ),
    model=DeepseekModel(id="deepseek-chat"),
    tools=[researcher.as_tool()],
)
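Conceptually, as_tool() adapts a child agent so the parent can invoke it through the same interface as any other tool. The toy adapter below sketches that pattern; EchoAgent and this as_tool are illustrative stand-ins, not the real AgentTool internals:

```python
import asyncio


class EchoAgent:
    """Stand-in for a child agent; a real one would run an LLM loop."""

    def __init__(self, name: str) -> None:
        self.name = name

    async def run(self, prompt: str) -> str:
        return f"[{self.name}] handled: {prompt}"


def as_tool(agent: EchoAgent):
    """Wrap the agent behind an async callable the parent invokes like a tool."""

    async def tool(parameters: dict) -> str:
        return await agent.run(parameters["task"])

    return tool


researcher_tool = as_tool(EchoAgent("researcher"))
print(asyncio.run(researcher_tool({"task": "compare two approaches"})))
# → [researcher] handled: compare two approaches
```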

Scheduler Example

import asyncio

from agiwo.agent import Agent, AgentConfig
from agiwo.scheduler import Scheduler
from agiwo.llm import DeepseekModel


async def main() -> None:
    agent = Agent(
        AgentConfig(
            name="orchestrator",
            description="Can delegate and wait",
            system_prompt="Use spawned agents only for truly independent sub-tasks.",
        ),
        model=DeepseekModel(id="deepseek-chat"),
    )

    async with Scheduler() as scheduler:
        from agiwo.scheduler.commands import RouteStreamMode

        route_result = await scheduler.route_root_input(
            "Research two competing approaches and summarize them.",
            agent=agent,
            stream_mode=RouteStreamMode.RUN_END,
        )
        # Consume stream to get result
        async for item in route_result.stream:
            if item.type == "run_completed":
                print(item.response)
                break


asyncio.run(main())

For long-running roots, the scheduler API also supports enqueue_input, wait_for, steer, cancel, and shutdown.
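The enqueue-then-wait pattern can be modeled with plain asyncio primitives. This is a conceptual sketch of the semantics, not the real Scheduler implementation; all names here are made up for illustration:

```python
import asyncio


class MiniScheduler:
    """Toy model of enqueue_input/wait_for semantics using an asyncio queue."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()
        self._worker: asyncio.Task | None = None

    async def __aenter__(self):
        self._worker = asyncio.create_task(self._run())
        return self

    async def __aexit__(self, *exc):
        self._worker.cancel()
        try:
            await self._worker
        except asyncio.CancelledError:
            pass

    async def _run(self) -> None:
        while True:
            text, future = await self._queue.get()
            # A real scheduler would run an agent here instead
            future.set_result(f"handled: {text}")

    async def enqueue_input(self, text: str) -> asyncio.Future:
        future = asyncio.get_running_loop().create_future()
        await self._queue.put((text, future))
        return future

    async def wait_for(self, future: asyncio.Future) -> str:
        return await future


async def demo() -> str:
    async with MiniScheduler() as sched:
        handle = await sched.enqueue_input("hello")
        return await sched.wait_for(handle)


print(asyncio.run(demo()))  # → handled: hello
```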

Console

The Console is a separately published control-plane package and is intentionally positioned below the SDK in scope and readiness.

  • Package: pip install agiwo-console
  • Recommended deployment model: internal/self-hosted use
  • Built-in channel integrations today: Feishu only
  • Readiness: useful for operators, not yet production-ready

Start The API Server (Host Mode)

pip install agiwo-console
cat > .env <<'EOF'
OPENAI_API_KEY=...
EOF
agiwo-console serve --env-file .env

If you are running from source instead of the published package, the full template lives at console/.env.example.full.

The API server defaults to http://localhost:8422.

Useful routes:

  • GET /api/health
  • GET /api/overview
  • GET /api/agents
  • POST /api/agents/{agent_id}/sessions
  • POST /api/sessions/{session_id}/input
  • GET /api/scheduler/states
  • GET /api/traces

Start The Complete Console In Docker

The Docker path starts the backend, Web UI, Agent runtime, and Bash execution inside one managed container.

pip install agiwo-console
cat > .env <<'EOF'
OPENAI_API_KEY=...
EOF
agiwo-console container up \
  --data-dir "$HOME/agiwo-data" \
  --env-file .env

Optional agent-visible host directories must be declared explicitly:

agiwo-console container up \
  --data-dir "$HOME/agiwo-data" \
  --env-file .env \
  --mount "$HOME/projects:projects" \
  --mount "$HOME/media:media"

The managed container exposes one default public entrypoint at http://localhost:8422, and all default persistence is rooted under the mounted data directory.

  • Host directories are not visible to the Agent runtime unless they are passed with --mount.
  • Each --mount <host-path>:<alias> appears inside the container as /mnt/host/<alias>, so the examples above are available to agents as /mnt/host/projects and /mnt/host/media.
  • The managed CLI path starts the container with the invoking host UID/GID, so /data and explicit mounts remain writable without falling back to root inside the container.
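The alias-to-path rule can be expressed as a one-line mapping. This helper is purely illustrative and is not part of the CLI:

```python
def mount_to_container_path(mount_spec: str) -> tuple[str, str]:
    """Split '<host-path>:<alias>' into (host path, in-container path)."""
    host_path, _, alias = mount_spec.rpartition(":")
    return host_path, f"/mnt/host/{alias}"


print(mount_to_container_path("/home/me/projects:projects"))
# → ('/home/me/projects', '/mnt/host/projects')
```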

Start The Web UI

cd console/web
npm install
npm run dev

The frontend reads NEXT_PUBLIC_API_URL from console/web/.env.local.

Example:

echo 'NEXT_PUBLIC_API_URL=http://localhost:8422' > console/web/.env.local

Configuration Model

There are two configuration layers:

  • SDK config in agiwo/config/settings.py
  • Console config in console/server/config.py

Environment variable namespaces:

  • SDK-owned keys: AGIWO_*
  • Console-owned keys: AGIWO_CONSOLE_*
  • Provider credentials: canonical external names such as OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY, AWS_REGION
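To illustrate how such namespacing separates SDK, Console, and provider keys, here is a generic prefix filter. It is a sketch only; the actual Console settings loader uses pydantic-settings, and the example key names after each prefix are made up:

```python
def collect_prefixed(env: dict[str, str], prefix: str) -> dict[str, str]:
    """Return {lowercased suffix: value} for env keys under the given prefix."""
    return {
        key[len(prefix):].lower(): value
        for key, value in env.items()
        if key.startswith(prefix)
    }


# Example environment; suffixes like PORT are illustrative, not real settings
env = {
    "AGIWO_CONSOLE_PORT": "8422",
    "AGIWO_LOG_LEVEL": "info",
    "OPENAI_API_KEY": "sk-...",
}
print(collect_prefixed(env, "AGIWO_CONSOLE_"))  # → {'port': '8422'}
```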

Important current rules:

  • Compatible providers (openai-compatible, anthropic-compatible) must be configured with explicit base_url and api_key_env_name
  • Builtin tools that create their own models read shared defaults from AGIWO_TOOL_DEFAULT_MODEL_*

  • Agent config includes allowed_skills: list[str] | None to filter available skills (global skill discovery is configured via AGIWO_SKILLS_DIRS)
  • Console agent config writes are full replace, not patch merge
  • Scheduler state storage is owned by the Scheduler; Console storage wiring separately assembles run-step, trace, session, and citation-related resources

Repository Discoverability Notes

Recommended GitHub repository description:

Open-source Python AI agent framework and control plane for streaming, tool use, orchestration, tracing, and persistence.

Recommended GitHub topics:

ai-agents, python, llm, agent-framework, multi-agent, tool-calling, observability, agent-orchestration, fastapi

Architecture At A Glance

SDK

  • agiwo/agent/: canonical agent runtime
    • models/: data models (config, run, stream, input, step)
    • runtime/: session runtime, run context, state helpers
    • nested/: child-agent adapter (AgentTool, as_tool())
    • retrospect/: tool result retrospect optimization
    • storage/: run/step persistence (memory, SQLite, MongoDB)
  • agiwo/llm/: model abstractions, providers, config policy, factory (create_model), token usage estimation
  • agiwo/tool/: tool abstractions, ToolContext, builtin tools, authz domain types, process registry, citation storage
  • agiwo/scheduler/: orchestration facade, engine, runner, commands, runtime state, tool control, store, runtime tools
  • agiwo/workspace/: workspace layout, bootstrap, and workspace document loading
  • agiwo/memory/: shared MEMORY indexing/search plus WorkspaceMemoryService
  • agiwo/observability/: trace/span models and storage backends; agiwo.agent.trace_writer.AgentTraceCollector bridges agent runs into traces
  • agiwo/skill/: skill discovery, loading, registry, SkillTool, allowlist handling
  • agiwo/embedding/: embedding abstractions and factory (local/OpenAI-style)
  • agiwo/config/: SDK global settings, provider enums, termination config
  • agiwo/utils/: cross-module runtime tools, abort signals, logging, storage support

Console

  • console/server/routers/: HTTP and SSE API boundary
  • console/server/services/: runtime services (runtime/), registry (agent_registry/), session store (session_store/), storage wiring (storage_wiring.py), tool catalog, metrics, runtime config
  • console/server/models/: shared Console data models (views, session, config)
  • console/server/channels/: channel runtime adapters and Feishu integration
  • console/server/config.py: ConsoleConfig (pydantic-settings, env prefix: AGIWO_CONSOLE_)
  • console/web/: Next.js frontend

Development Workflow

Install dependencies once:

uv sync
uv run python scripts/install_git_hooks.py

Low-noise lint is the default workflow while you are iterating:

uv run python scripts/lint.py changed

The commit gate uses the CI-equivalent lightweight lint command:

uv run python scripts/lint.py ci

The push gate runs the full local verification workflow:

uv run python scripts/check.py pre-push

If the worktree is already dirty, lint only the files you touched:

uv run python scripts/lint.py files path/to/file.py path/to/other.py

Run tests:

uv run pytest tests/ -v
uv run python scripts/check.py console-tests

Current Status

The project is usable for experimentation and development. Core SDK APIs (Agent, Tool, Scheduler) are stabilizing. The Console remains an internal-use control plane that is still evolving and should not yet be treated as production-ready.

Areas that still change:

  • Console channel/session wiring and Feishu integration details
  • Scheduler orchestration edge cases and operator-facing controls
  • Trace query APIs and visualization

If you are changing the architecture or developer workflow, update both AGENTS.md and this README so the repo-level guidance stays aligned with the code.
