4shClaw

A personal 24/7 AI assistant that leads a team of specialized agents. Built for a single user, running on your own hardware.

4shClaw runs a lead agent that handles all user communication (via Telegram) and delegates tasks to sub-agents with scoped capabilities. Each agent runs in an isolated Docker container with its own secrets, MCP tools, and memory.

Quick Start

cd 4shClaw/
claude   # or: opencode

Then ask: "set up 4shClaw" (the /setup skill walks you through everything).

See OPENCODE.md for OpenCode-specific setup.

Architecture

User <-> Telegram <-> Host Orchestrator <-> Docker Containers (one per agent invocation)
                           |
                      SQLite + IPC (MCP over filesystem)
  • Host orchestrator: Always-on Node.js process. Handles Telegram relay, scheduling, container spawning, IPC, and persistence.
  • Agent containers: Ephemeral. Spin up on trigger (message, schedule, delegation), execute, exit. Agents communicate with the host through MCP tools backed by filesystem JSON.
  • Lead agent: Routes requests, synthesizes information, manages the user relationship. Only agent that talks to the user.
  • Sub-agents: Specialized workers (developer, reviewer, PM, etc.) with scoped capabilities. Write results to the shared ledger or return via delegation.

Agent Runtimes

4shClaw supports two agent runtimes. Each agent declares its runtime in agent.yaml.

Claude Agent SDK (default)

Uses the Claude Agent SDK inside Docker containers. This is the default and recommended runtime.

# agents/developer/agent.yaml
name: developer
model: sonnet        # opus | sonnet | haiku
role: sub-agent

The Claude SDK supports multiple model providers out of the box via environment variables. See Model Providers below.

OpenCode

Uses OpenCode, a provider-agnostic agent runtime. Choose this if you want to run non-Claude models that aren't available through Anthropic-compatible APIs, or if you want to avoid any dependency on Anthropic's SDK.

# agents/summarizer/agent.yaml
name: summarizer
runtime: opencode
model: amazon-bedrock/anthropic.claude-sonnet-4-20250514-v1:0
role: sub-agent
provider:
  id: amazon-bedrock
  options:
    region: us-east-1

OpenCode agents use a separate container image (4shclaw-agent-opencode:latest) and require the provider/modelID format for the model field.

Known limitation: OpenCode injects its own system prompt (~50K tokens, including tool schemas and coding conventions) that starts with "You are opencode". Custom agent identity instructions from AGENT.md are appended via the system field in the prompt API, but they come after this built-in prompt. Small models (8B and under) tend to latch onto the first identity they see and may ignore or only partially follow the agent persona; larger models handle the override better.

This project is Claude SDK first; OpenCode support is best-effort. The Claude SDK runtime already supports multiple providers (Anthropic, Bedrock, Vertex AI, Ollama via base URL, LiteLLM), so OpenCode is only needed for providers not reachable through an Anthropic-compatible API. See OPENCODE.md for details.

Model Providers

The Claude Agent SDK supports several providers natively. You configure them per-agent via environment variables in the agent's .env file.

Anthropic API (default)

No configuration needed. The host passes ANTHROPIC_API_KEY (or CLAUDE_CODE_OAUTH_TOKEN) to all containers automatically.

# agents/my-agent/agent.yaml
model: sonnet

Amazon Bedrock

Access Claude models through AWS without an Anthropic API key.

# agents/my-agent/.env
CLAUDE_CODE_USE_BEDROCK=1
AWS_REGION=us-east-1

# Option 1: IAM credentials
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...

# Option 2: Named profile (requires ~/.aws/ accessible)
# AWS_PROFILE=my-profile

# agents/my-agent/agent.yaml
model: sonnet   # same model aliases work

Google Vertex AI

Access Claude models through Google Cloud.

# agents/my-agent/.env
CLAUDE_CODE_USE_VERTEX=1
CLOUD_ML_REGION=us-east5
ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project

# agents/my-agent/agent.yaml
model: sonnet

Ollama (local models)

Run open-weight models locally. Requires Ollama running on the host.

# agents/my-agent/.env
ANTHROPIC_BASE_URL=http://host.docker.internal:11434/v1

# agents/my-agent/agent.yaml
model: sonnet   # Ollama maps this to whatever model you've configured

Note: host.docker.internal resolves to the host machine from inside Docker containers. Agents using Ollama automatically get bridge networking (the host routes this).

LiteLLM Proxy

Use any of 100+ model providers through a LiteLLM proxy.

# agents/my-agent/.env
ANTHROPIC_BASE_URL=http://host.docker.internal:4000
# ANTHROPIC_API_KEY=your-litellm-key   # if your proxy requires auth

# agents/my-agent/agent.yaml
model: sonnet

Azure AI Foundry

Access Claude models through Microsoft Azure.

# agents/my-agent/.env
CLAUDE_CODE_USE_FOUNDRY=1

How It Works

All environment variables from an agent's .env file are passed directly to the Docker container with no filtering. The Claude Agent SDK inside the container reads these variables and routes API calls accordingly. No code changes or special configuration are needed on the host side.

The provider priority is:

  1. If CLAUDE_CODE_USE_BEDROCK=1 is set, use Bedrock
  2. If CLAUDE_CODE_USE_VERTEX=1 is set, use Vertex AI
  3. If CLAUDE_CODE_USE_FOUNDRY=1 is set, use Azure Foundry
  4. If ANTHROPIC_BASE_URL is set, use that endpoint
  5. Otherwise, use the Anthropic API directly

Creating Agents

Use the /add-agent skill interactively:

claude
> /add-agent

Or create files manually:

agents/my-agent/
  agent.yaml    # Manifest: name, model, role, capabilities, schedule
  AGENT.md      # Agent personality and instructions
  .env          # Secrets (gitignored)
  .mcp.json     # External MCP servers (optional)

See agents/lead/agent.yaml for a working example.

Capabilities

Agents declare capabilities in agent.yaml. The host resolves these to container mounts, env vars, and MCP servers at spawn time.

Capability             Description
github                 Read/write GitHub access via MCP (requires GITHUB_TOKEN)
github-readonly        Read-only GitHub access
docker                 Container build/run via host-proxied MCP tools
web                    Internet access for web fetching
filesystem:/path       Mount a host directory read-only
scratchpad             Shared filesystem for inter-agent file handoff
memory_consolidation   Cross-scope memory access (lorekeeper only)

Full definitions in config/capabilities.yaml.

Configuration

Copy and edit the host config:

cp config/host.yaml.example config/host.yaml

Key settings: Telegram bot token, allowed user IDs, heartbeat interval, quiet hours, concurrency limits, container runtime. See the example file for all options.
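As a rough sketch, a minimal host.yaml covering those settings might look like the fragment below. The key names here are guesses for illustration; config/host.yaml.example is the source of truth.

```yaml
# Illustrative fragment only — key names may differ from
# config/host.yaml.example, which is authoritative.
telegram:
  bot_token: "123456:ABC..."
  allowed_user_ids: [123456789]
heartbeat_interval: 60          # seconds
quiet_hours: "23:00-07:00"      # no proactive messages in this window
concurrency_limit: 3            # max simultaneous agent containers
container_runtime: docker
```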

Development

npm install
npm run build
npm run dev      # start with hot-reload
npm test         # run tests
npm run lint     # lint + format check

Project Structure

4shClaw/
  src/              # Host orchestrator source
  container/        # Agent container images and runners
  agents/           # Agent definitions (one dir per agent)
  config/           # Host config and capability definitions
  data/             # SQLite DB, IPC dirs, agent home dirs
  .claude/skills/   # Interactive skills (/setup, /add-agent, etc.)

Full structure documented in the project CLAUDE.md.

License

This project is licensed under the GNU Affero General Public License v3.0.

For commercial licensing options, contact @AshDevFr.
