Hachimi is an Agent-First Roleplay Chatbot that goes beyond traditional chat interfaces. Unlike conventional ChatGPT-like experiences, Hachimi treats AI agents as first-class concepts, enabling sophisticated multi-agent storytelling and roleplay experiences.
Hachimi defines three distinct types of agents, each with specialized responsibilities:

**Character agents**

- Role: Embody individual character personas and states
- Tool: `talk` — generates in-character dialogue based on the character's identity, relationships, and current story context
- Context: Receives character-specific information including appearance, personality, abilities, background, and relationship rules

**Plot agent**

- Role: Manage and update story progression
- Tool: `update_story` — executes story state modifications as command objects
- Operations: Add future plots; add, replace, or delete current plots; update plot settings
- Triggers: Called periodically or when the Oracle determines a plot update is needed

**Oracle agent**

- Role: High-level story direction and content generation
- Tools:
  - `create_character_from_description` — generates fully detailed character profiles from text descriptions
  - `create_world_setting_from_description` — generates complete world settings from descriptions
  - `decide_plot_update` — analyzes conversations and determines if plot updates are warranted
  - `give_title` — generates session titles based on conversation content
All agent tool calls yield Pydantic model command objects, enabling type-safe execution of story modifications.
```
Hachimi/
├── hachimi-server/              # Python/FastAPI backend
│   └── src/
│       ├── domain/              # Core domain models (StoryState, ChatSession, Message)
│       ├── application/         # Business logic (Agents, Services, Commands)
│       ├── infrastructure/      # External integrations (Google AI, SQLite, Caching)
│       └── interface_adapters/  # HTTP controllers
│
└── hachimi-client/              # React/TypeScript frontend
    └── src/
        ├── features/            # Feature modules (dashboard, session, create-world)
        ├── routes/              # TanStack Router file-based routes
        └── client/              # Auto-generated API client
```
Backend:
- Python 3.12+
- FastAPI for REST API
- Google Gemini AI (2.5 Pro & Flash models)
- Pydantic for data validation and structured outputs
- SQLAlchemy with SQLite for persistence
- Clean Architecture pattern
Frontend:
- React 19 with TypeScript
- TanStack Router (file-based routing)
- TanStack Query for server state management
- Tailwind CSS for styling
- Auto-generated API client from OpenAPI spec
- Multi-Agent Roleplay: Characters maintain individual personalities, memories, and interaction rules
- Dynamic Plot Management: Three-tier plot system (past events, current situation, future hooks)
- World Building: Detailed world settings with locations, rules, and atmosphere
- Branching Conversations: Tree-structured message history supports story branching
- Intelligent Plot Updates: Oracle analyzes conversations for significant developments
- AI-Powered Content Generation: Create characters and worlds from natural language descriptions
- Session Management: Multiple concurrent roleplay sessions with persistence
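The tree-structured message history behind the branching-conversations feature can be sketched as parent-linked nodes, where any leaf defines one story branch. This is a minimal illustration; the field names (`id`, `parent_id`) are assumptions, not the project's actual model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    """One node in a tree-structured message history (illustrative sketch)."""
    id: int
    content: str
    parent_id: Optional[int] = None  # None marks the session root

def branch_path(messages: dict[int, Message], leaf_id: int) -> list[int]:
    """Walk parent links from a leaf back to the root, yielding one branch."""
    path: list[int] = []
    current: Optional[int] = leaf_id
    while current is not None:
        path.append(current)
        current = messages[current].parent_id
    return list(reversed(path))

# Two branches sharing a common prefix (ids are arbitrary):
msgs = {
    1: Message(1, "Once upon a time..."),
    2: Message(2, "The knight went north.", parent_id=1),
    3: Message(3, "The knight went south.", parent_id=1),  # alternate branch
}
assert branch_path(msgs, 2) == [1, 2]
assert branch_path(msgs, 3) == [1, 3]
```

Storing only a parent link per message keeps branching cheap: forking a story is just appending a new child to an earlier node.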
The fastest way to get Hachimi running:

1. Clone and configure:

   ```bash
   git clone <repository-url>
   cd Hachimi
   cp docker.env.example .env
   ```

2. Set your API key in `.env`:

   ```
   GOOGLE_API_KEY=your_google_ai_api_key_here
   ```

3. Build and run:

   ```bash
   docker compose up --build
   ```

4. Access the application:
   - Frontend: http://localhost:80
   - Backend API: http://localhost:8000
   - API Docs: http://localhost:8000/docs
For production deployment with a public URL:

```bash
VITE_API_BASE_URL=https://api.yourdomain.com docker compose up --build -d
```

Prerequisites for local development:

- Python 3.12+
- Node.js 18+
- uv (Python package manager)
- A Google AI API key
1. Navigate to the server directory:

   ```bash
   cd hachimi-server
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Create a `.env` file with your API key:

   ```
   GOOGLE_API_KEY=your_google_ai_api_key_here
   ```

4. Start the development server:

   ```bash
   uv run fastapi dev src/main.py
   ```

The API will be available at http://localhost:8000 with:

- Swagger docs: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
1. Navigate to the client directory:

   ```bash
   cd hachimi-client
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a `.env` file:

   ```
   VITE_API_BASE_URL=http://localhost:8000
   ```

4. Start the development server:

   ```bash
   npm run dev
   ```

The frontend will be available at http://localhost:5173.
When the server API changes, regenerate the TypeScript client:

```bash
cd hachimi-client
npm run generate-client
```

This script will:

- Start the FastAPI server
- Fetch the OpenAPI specification
- Generate TypeScript types and SDK
Stories are configured with a rich state model:

```
StoryState {
  roleplay_config: { main_characters, user_character }
  world_setting:   { basic_scenario, main_locations[] }
  characters[]:    { name, gender, age, identity, appearance,
                     personality, abilities, background, address_rules }
  core_story:      string
  plot_settings:   string[]  // Long-term facts about the world
  past_plot:       Event[]   // Completed story events
  future_plots:    string[]  // Planned future events
  current_plots:   string[]  // Currently active situations
  special_notes:   string[]  // Writer's notes and rules
}
```

See `hachimi-server/src/story_config.toml` for a complete example configuration.
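The three-tier plot system (past events, current situations, planned future events) implies plots flowing between tiers as the story advances. The sketch below illustrates that flow; the function names are hypothetical, and past events are simplified to strings here rather than the `Event` objects the schema uses.

```python
# Hypothetical helpers showing how plots might move between the three tiers
# of StoryState; tier keys match the schema above.

def promote_plot(state: dict, future_index: int) -> None:
    """Move a planned future plot into the active current situations."""
    state["current_plots"].append(state["future_plots"].pop(future_index))

def resolve_plot(state: dict, current_index: int) -> None:
    """Archive a finished current plot as a completed past event."""
    state["past_plot"].append(state["current_plots"].pop(current_index))

state = {
    "future_plots": ["festival begins"],
    "current_plots": ["storm at sea"],
    "past_plot": [],
}
promote_plot(state, 0)   # "festival begins" becomes an active situation
resolve_plot(state, 0)   # "storm at sea" is archived as a past event
```

In Hachimi itself these transitions happen through `update_story` command objects rather than direct mutation, but the tier-to-tier movement is the same.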
Server Domain Models:

- `StoryState` — complete story configuration and state
- `ChatSession` — session with tree-structured messages
- `Character` — detailed character profiles
- `WorldSetting` — world configuration with locations

Server Services:

- `StoryProgressionService` — manages chat sessions and character interactions
- `StoryCreationService` — AI-powered character and world generation

Client Routes:

- `/` — dashboard with session list
- `/sessions/$sessionId` — active roleplay session
- `/create-world` — world and character creation wizard
Server: Python with Ruff formatting (line length 120)
Client: TypeScript with Prettier (tabs, width 4)

```bash
npm run format  # Format code
npm run lint    # Run ESLint
```

```
Hachimi/
├── docker-compose.yml       # Orchestration for both services
├── docker.env.example       # Environment template
├── hachimi-server/
│   ├── Dockerfile           # Multi-stage Python build with uv
│   └── .dockerignore
└── hachimi-client/
    ├── Dockerfile           # Multi-stage Node.js build → Nginx
    ├── nginx.conf           # SPA routing & compression
    └── .dockerignore
```
Services:

- `hachimi-server` — FastAPI backend on port 8000
- `hachimi-client` — Nginx serving React SPA on port 80

Volumes:

- `hachimi-data` — Persists SQLite database
Useful commands:

```bash
docker compose up --build      # Build and start
docker compose up -d           # Run in background
docker compose down            # Stop services
docker compose logs -f         # Follow logs
docker volume rm hachimi-data  # Reset database
```