Anima is a command-line AI agent interface designed to evolve with you. It features persistent memory, tool execution capabilities, a unique "parturition" (birth) process, and a flexible Plugin-based LLM Provider architecture with manifest-level security.
🚀 Get Started in 5 Minutes | 🛠️ Developer Guide: Adding Functionality
- Persistent Memory:
  - Short-term: Session history is saved automatically to JSON files in `Memory/`.
  - Long-term: Important facts and insights are consolidated into `Memory/memory.json` after a User Review step, where you can accept or reject individual items to prevent prompt injection persistence.
- Plugin-based LLM Providers: Supports multiple AI providers (OpenAI, Anthropic, Gemini, DeepSeek, OpenRouter, Ollama) and AI agent platforms (OpenClaw) through a modular plugin system. Providers run in isolated separate processes with restricted environment access for maximum security.
- A2A (Agent-to-Agent) Collaboration: Anima instances can discover and collaborate with each other and with OpenClaw agents across the network.
- UDP Auto-Discovery: Instances automatically announce themselves via UDP broadcast for fast, zero-config pairing.
- Auth & Consent: Peer connections require a secure token exchange and explicit user approval (Pairing Flow).
- Tiered Disclosure: Selectively share "Public" vs "Full" Identity and Soul information based on the peer's trust level.
- Task Delegation: Offload sub-tasks to remote agents using a token-efficient protocol or the OpenClaw Bridge.
- Shadow Testing & Rollback: When the agent evolves its own Identity, changes are automatically validated against the full regression test suite. If tests fail, the system performs an instant rollback to the last known stable state.
- Manifest-level Security: Tools and filesystem access are governed by provider-specific manifests, ensuring safe execution environments.
- Core Directory Protection: The agent's "spinal cord" (`Plugins/`, `Memory/`, `Personality/`) is read-only by default. Any attempt to modify these files requires an explicit justification and user confirmation, regardless of manifest settings.
- Explainable Confirmations: All dangerous operations require the agent to provide a Justification, show exactly what will be Touched, and provide a Diff Preview for file changes before user approval.
- Tool Execution: The agent can interact with your system (read/write/replace files, run shell commands, execute code, search the web) with user confirmation and dry-run support.
- Parturition Service: On the first run, the agent generates its own Identity (`Identity.md`) and Soul (`Soul.md`) based on user input.
- Flexible Configuration: Select providers and models via config files in the `Settings/` directory or CLI arguments.
Anima is designed with a Deny-by-Default security posture to prevent common AI agent pitfalls such as unintended system destruction or persistent prompt injection.
Key protections include:
- Process Isolation: LLM Providers run in separate, isolated processes.
- Workspace Root: All filesystem operations are locked to a configurable `workspaceDir` (defaulting to the project root), preventing access to the host system even if the CLI is launched from a sensitive directory.
- Taint Mode: If the agent performs a `web_search`, the current turn is marked as "tainted." In this state, command execution and code execution are strictly limited to explicit manifest allowlists to prevent remote prompt injection attacks. Users can explicitly clear this state using the `decontaminate` tool after reviewing search results.
- Hardened Code Execution: `execute_code` runs in a dedicated `.temp` directory within the workspace and features a 10s timeout and automatic cleanup.
- No Shell: Commands run directly (`spawn`), avoiding shell injection attacks.
- Spinal Cord Protection: Core files (`Plugins/`, `Memory/`, `Personality/`) are read-only by default.
- Evolution Safety: Proposed updates to the agent's identity are shadow-tested against regression suites before being committed.
- Human-in-the-loop: Structured memory and tool justifications require explicit approval.
For full details on our security architecture, reporting instructions, and sandboxing recommendations, see SECURITY.md.
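The taint-mode gating described above can be pictured with a minimal sketch (illustrative only; the function and set names are assumptions, not Anima's actual implementation):

```javascript
// Minimal sketch of taint-mode gating. In reality this logic lives
// inside Anima's tool dispatcher; names here are hypothetical.
const RESTRICTED_WHEN_TAINTED = new Set(['run_command', 'execute_code']);

function isToolPermitted(toolName, session, manifestAllowlist = []) {
  // A web_search marks the current turn as tainted.
  if (!session.tainted) return true;
  // Read-only inspection tools remain available while tainted.
  if (!RESTRICTED_WHEN_TAINTED.has(toolName)) return true;
  // Dangerous tools must appear on an explicit manifest allowlist.
  return manifestAllowlist.includes(toolName);
}

const session = { tainted: false };
session.tainted = true;                    // after a web_search
isToolPermitted('read_file', session);     // reads stay allowed
isToolPermitted('run_command', session);   // blocked: not allowlisted
session.tainted = false;                   // after the decontaminate tool runs
```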
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd Anima
  ```
- Install dependencies:

  ```bash
  npm install
  ```

  Note: Node.js 18+ is required (tested on v22) for native fetch and module support.
On the first run, Anima will guide you through an automated Setup Wizard to configure your preferred LLM provider and API details.
If you prefer to configure manually:
- Primary Config: Create `Settings/Anima.config.json`:

  ```json
  {
    "LLMProvider": "openrouter",
    "heartbeatInterval": 300,
    "workspaceDir": "./my-workspace",
    "memoryMode": "session",
    "advisoryCouncil": {
      "enabled": true,
      "mode": "on_demand",
      "advisers": [
        {
          "name": "Architect",
          "role": "System Architect",
          "promptFile": "Personality/Advisers/Architect.md"
        }
      ]
    }
  }
  ```

- Provider Settings: Create a settings file named after your provider (e.g., `openai.json`, `anthropic.json`, `gemini.json`, `deepseek.json`, `openrouter.json`, `ollama.json`, `openclaw.json`) in the `Settings/` directory. Note: Standard endpoints are automatically provided for major services.

  ```json
  {
    "apiKey": "YOUR_API_KEY_HERE",
    "model": "gpt-4-turbo-preview",
    "endpoint": "https://api.openai.com/v1/chat/completions"
  }
  ```
Start the CLI:

```bash
node cli.js
```

- `--model <name>`: Override the model defined in the provider settings for this session.
- `--add-plugin <path|url>`: Install a plugin from a local JS file or a URL to a `.zip` archive.
- `--hash <sha256>`: (Optional) Verify the SHA-256 hash of a remote plugin archive before installation. Highly recommended for production stability.
- `--safe`: Disable all dangerous tools (`run_command`, `write_file`, etc.) for this session.
- `--read-only`: Restrict the agent to only use read-only inspection tools.
- `--council <mode>`: Set the Advisory Council mode (`off`, `always`, `on_demand`, `risk_based`) for this session.
- `--council-advisers <list>`: Comma-separated list of names of advisers to use.
- `--no-council`: Completely disable the Advisory Council for this session.
- `--help`, `-h`: Display help information.
- `/save`: Force a memory consolidation update immediately.
- `/new`: Reset the current conversation context (starts a fresh session).
- `Ctrl+C`: Press once to see exit options, press again to save memory and exit.
The agent has access to a variety of tools. Dangerous operations require user confirmation (`y` to allow, `N` to deny, `d` for a simulated dry-run).
- `write_file`: Create or overwrite files.
- `read_file`: Read file contents.
- `replace_in_file`: Perform regex-based text replacements within a file.
- `run_command`: Execute a system command without a shell (e.g., `git`, `ls`). Features a 30s timeout, 100KB output limit, and strict denylist/allowlist enforcement.
- `list_files`: List directory contents.
- `search_files`: Grep-style search within files.
- `execute_code`: Run Python, JavaScript, or Bash code in a temporary environment.
- `new_session`: Request to start a fresh session with optional context carry-over. Useful for task completion or major topic switches.
- `web_search`: Search the web using DuckDuckGo.
- `decontaminate`: Explicitly clear the "tainted" security state after manual review of search results.
- `file_info`: Get metadata about a file.
- `delete_file`: Remove a file.
- `add_plugin`: Install new plugins (agent-initiated).
- `advisory_council`: Request on-demand structured feedback from the advisory council on a specific question or plan.
- `discover_agents`: Scan local network interfaces and major private IP ranges (`192.168.0.0/16`, `172.16.0.0/12`, `10.0.0.0/8`) for active Anima or OpenClaw agents. Uses fast UDP broadcast auto-discovery with port-scan fallback.
- `manage_peers`: List, approve, or deny agent-to-agent pairing requests and manage trusted peers and disclosure levels.
- `get_local_endpoint`: Get your own A2A endpoint and Agent ID to share with other agents for pairing.
- `learn_from_agent`: Fetch the Identity and Soul of another agent to assist in local evolution and knowledge sharing. Supports tiered disclosure (Public vs Full).
- `delegate_task`: Send a specific sub-task or question to another Anima agent. Uses a token-efficient "sub-agent" protocol.
- `openclaw_delegate`: Delegate high-level or long-running tasks to a remote OpenClaw agent (e.g., Jennifer). Supports sync/async modes and context snippets.
To ensure system integrity, Anima provides multiple layers of plugin security:
- Isolated Execution: Providers run in separate processes with restricted environment variables.
- Provenance Tracking: Every installed plugin stores its origin (source URL/path, date, and content hash) in a `.provenance.json` file.
- Audit Logging: An append-only log (`Memory/audit.log`) records every tool execution, including redacted arguments, user confirmation results, and cryptographic hashes of tool outputs for forensics.
- XML Input Delimiters: All untrusted user input is wrapped in `<user_input>` tags. The system prompt instructs the agent to treat content within these tags strictly as data, providing a robust defense against prompt injection and instruction hijacking.
- Verification: Remote plugins can be verified against a known SHA-256 hash using the `--hash` argument.
- A2A Security:
  - Auth & Consent: Peer connections require a cryptographically secure token exchange and explicit user approval via the `manage_peers` tool.
  - Tiered Disclosure: Personality sharing is tiered (Public vs Full) based on peer trust levels.
- Evolution Safety: Proposed updates to the agent's identity are Shadow Tested against regression suites before being committed, with automatic rollback on test failure.
- Security-First Development: We follow a strict policy of keeping all documentation and tests up to date with every change, with a continuous focus on system hardening.
Anima implements a Reasoning and Action (ReAct) loop. When a user provides input, the agent enters a cycle of thought and execution:
- Thought: The agent analyzes the current context and decides if a tool call is necessary.
- Action: If a tool is needed, the agent requests execution.
- Observation: The result of the tool execution is fed back into the context.

This loop continues until a final answer is produced or a hard limit of 10 iterations is reached to prevent runaway processes.
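The loop above can be sketched as follows (a minimal illustration; `callLLM` and `runTool` are hypothetical stand-ins for Anima's provider plugin and tool dispatcher):

```javascript
// Sketch of the ReAct loop: Thought -> Action -> Observation, with a
// hard iteration cap to prevent runaway processes. Illustrative only.
async function reactLoop(userInput, callLLM, runTool, maxIterations = 10) {
  const context = [{ role: 'user', content: userInput }];
  for (let i = 0; i < maxIterations; i++) {
    const step = await callLLM(context);                  // Thought
    if (!step.toolCall) return step.content;              // final answer
    const observation = await runTool(step.toolCall);     // Action
    context.push({ role: 'tool', content: observation }); // Observation
  }
  return `Stopped: iteration limit (${maxIterations}) reached.`;
}
```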
Anima can consult a council of specialist "Advisers" to review its proposed actions.
- Always Mode: Every response is reviewed by the council before the agent "acts" (Draft -> Review -> Act).
- Risk-Based Mode: The council is automatically triggered when a turn is deemed risky (e.g., destructive keywords like `rm` or `sudo`, tool-heavy turns, or "tainted" sessions after a web search).
- On-Demand Mode: The agent can explicitly call the council using the `advisory_council` tool.
Advisory council discussions and memos are excluded from long-term memory by default to keep the agent's identity clean. This can be changed by setting `storeCouncilMemos: true` in the configuration.
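For example (assuming the flag sits at the top level of `Settings/Anima.config.json`, alongside the `advisoryCouncil` block — check your config schema if unsure):

```json
{
  "advisoryCouncil": { "enabled": true, "mode": "risk_based" },
  "storeCouncilMemos": true
}
```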
To manage the LLM's token limit and maintain performance during long-running tasks, Anima uses a Sliding Window strategy for conversation history:
- Fixed Context: The initial System Prompt and the Original User Prompt that started the current task are always preserved.
- Intermediary History: Only the most recent 5 conversational turns (approx. 10 messages) of tool calls and observations are retained in the active context window.
- Auditability: While the active window is pruned, the full session history is always preserved in `Memory/*.json` files.
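The sliding-window strategy can be sketched as follows (a simplified illustration; `pruneContext`, the message shapes, and the default of 10 retained messages are assumptions based on the description above):

```javascript
// Sketch of sliding-window pruning: the system prompt and the original
// user prompt are always kept; only the most recent intermediary
// messages (tool calls and observations) survive. Illustrative only.
function pruneContext(messages, keepMessages = 10) {
  const [systemPrompt, originalPrompt, ...rest] = messages;
  return [systemPrompt, originalPrompt, ...rest.slice(-keepMessages)];
}
```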
Plugins are accompanied by a `.manifest.json` file which defines:
- Capabilities: Which tools the provider is allowed to use.
- Permissions: Filesystem access restrictions (read/write paths).
- Security: The CLI enforces these constraints at runtime. If a manifest is missing, Anima defaults to a "Read-Only" mode, allowing only safe inspection tools.
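A manifest along these lines might look like the following (the field names here are illustrative assumptions; consult the bundled provider manifests in `Plugins/` for the real schema):

```json
{
  "name": "example-provider",
  "capabilities": ["read_file", "list_files", "web_search"],
  "permissions": {
    "read": ["Settings/", "Memory/"],
    "write": []
  }
}
```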
For a deep dive into extending Anima, see the Adding Functionality Guide.
To add a new capability to Anima, follow these steps in app/Tools.js:
- Define the Schema: Add a new tool definition to the `tools` array. Follow the OpenAI function calling format, including `name`, `description`, and `parameters`.

  ```javascript
  {
    type: 'function',
    function: {
      name: 'my_new_tool',
      description: 'Does something useful',
      parameters: {
        type: 'object',
        properties: {
          arg1: { type: 'string' }
        },
        required: ['arg1']
      }
    }
  }
  ```

- Implement the Logic: Add a corresponding async function to the `availableTools` object.

  ```javascript
  my_new_tool: async ({ arg1 }, permissions) => {
    // Your implementation here
    return `Result of my_new_tool with ${arg1}`;
  }
  ```

- Validation: The `ToolDispatcher` automatically validates LLM input against your schema before execution. If validation fails, an error is returned to the agent for self-correction.
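To picture the validation step, here is a deliberately simplified check (the real `ToolDispatcher` handles the full JSON Schema; `validateArgs` is a hypothetical helper, not Anima's code):

```javascript
// Simplified sketch of argument validation against a tool schema.
// Only covers required fields, unknown arguments, and primitive types.
function validateArgs(schema, args) {
  const errors = [];
  for (const name of schema.required || []) {
    if (!(name in args)) errors.push(`Missing required argument: ${name}`);
  }
  for (const [name, value] of Object.entries(args)) {
    const spec = schema.properties[name];
    if (!spec) errors.push(`Unknown argument: ${name}`);
    else if (typeof value !== spec.type) {
      errors.push(`Argument ${name} should be of type ${spec.type}`);
    }
  }
  return errors; // empty array → valid; otherwise fed back to the agent
}
```

An empty error list lets the call proceed; a non-empty list would be returned to the agent for self-correction, as described above.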
The system automatically manages context:
- Loading: On startup, it loads personality files from `Personality/*.md` and long-term memory from `Memory/memory.md`.
- Consolidation: When the session ends, the AI analyzes the conversation to extract important facts, appending them to `Memory/memory.md`.
If `Personality/Soul.md` or `Personality/Identity.md` is missing, the system enters "Parturition Mode":
- It asks "Who am I?".
- Based on your answer and the genetic configuration in `Personality/Parturition.md`, it generates its own name, role, and core directives.
- It saves these to the `Personality/` directory and removes the bootstrap file.
- `cli.js`: Main entry point and CLI loop.
- `app/`: Core services (`Config.js`, `Tools.js`, `ParturitionService.js`, `Utils.js`).
- `Plugins/`: LLM provider implementations and manifests.
- `Skills/`: Tool-based plugins (e.g., Google Calendar) that extend the agent's capabilities.
- `Settings/`: Configuration and provider settings.
- `Memory/`: Stores session logs and consolidated memory.
- `Personality/`: Stores system prompts, identity files, and birth configuration.
  - `Advisers/`: Markdown files containing prompts for the Advisory Council.
- Corpus — Android-native port of Anima via React Native (Expo). Shares the same memory file format, provider plugin interface, and security model. Memory and Personality files are compatible between desktop and mobile instances, enabling future sync.