This guide explains how Mehaisi CodeSwarm handles model selection and provider routing.
Mehaisi CodeSwarm uses a ModelResolver system that intelligently selects the appropriate model and provider for each agent execution. This ensures consistency while allowing flexibility when needed.
The system follows a clear priority hierarchy (highest to lowest):
1. Runtime Override → `codeswarm run agent --model <model>`
2. Global Config → `codeswarm init --model <model>`
3. Agent Default → `model: ...` in the agent YAML
4. Provider Default → from the provider configuration
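The priority chain above can be sketched as a simple first-match lookup. This is an illustrative sketch, not the actual CodeSwarm code; `resolve_model` and its parameter names are hypothetical.

```python
# Hypothetical sketch of the ModelResolver priority chain: the first
# level that has a model set wins (names are illustrative).
def resolve_model(runtime_override=None, global_config=None,
                  agent_default=None, provider_default=None):
    """Return the first model configured in priority order."""
    for candidate in (runtime_override, global_config,
                      agent_default, provider_default):
        if candidate is not None:
            return candidate
    raise ValueError("no model configured at any level")

# A --model flag beats the global config ...
print(resolve_model(runtime_override="qwen3-coder",
                    global_config="kimi-k2.5:cloud"))  # qwen3-coder
# ... and the global config beats an agent's YAML default.
print(resolve_model(global_config="kimi-k2.5:cloud",
                    agent_default="qwen3-coder"))      # kimi-k2.5:cloud
```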
```bash
# Set global model during initialization
codeswarm init --model kimi-k2.5:cloud
# All agents will use kimi-k2.5:cloud by default

# Override for a specific run
codeswarm run api-detective --model qwen3-coder
# This agent will use qwen3-coder, others still use kimi-k2.5:cloud
```

Providers are automatically selected based on your model choice:
| Model Pattern | Provider | Authentication Required |
|---|---|---|
| `*:cloud` | `ollama-cloud` | `OLLAMA_CLOUD_API_KEY` |
| `*:local` | `ollama-local` | None (local Ollama) |
| `claude*` | `claude-code` | `CLAUDE_CODE_SESSION_ACCESS_TOKEN` |
| `gpt-*` | `openai` | `OPENAI_API_KEY` |
| Other | `ollama-cloud` | `OLLAMA_CLOUD_API_KEY` |
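The routing in the table amounts to first-match glob matching with a cloud fallback. A minimal sketch, assuming the patterns behave like shell globs; `route_provider` and the `ROUTES` table are illustrative, not part of the tool:

```python
import fnmatch

# Illustrative routing table mirroring the patterns above.
# Order matters: the first matching pattern wins.
ROUTES = [
    ("*:cloud", "ollama-cloud"),
    ("*:local", "ollama-local"),
    ("claude*", "claude-code"),
    ("gpt-*", "openai"),
]

def route_provider(model):
    for pattern, provider in ROUTES:
        if fnmatch.fnmatch(model, pattern):
            return provider
    return "ollama-cloud"  # "Other" falls back to the cloud provider

print(route_provider("kimi-k2.5:cloud"))  # ollama-cloud
print(route_provider("gpt-4o"))           # openai
print(route_provider("mystery-model"))    # ollama-cloud
```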
Edit .mehaisi/config.json to configure providers:
```json
{
  "model": "kimi-k2.5:cloud",
  "llm": {
    "default_provider": "ollama-cloud",
    "providers": {
      "ollama-cloud": {
        "type": "ollama",
        "url": "https://api.ollama.com",
        "model": "kimi-k2.5:cloud",
        "api_key": "your-key-here",
        "priority": 1
      },
      "ollama-local": {
        "type": "ollama",
        "url": "http://localhost:11434",
        "model": "kimi-k2.5",
        "priority": 2,
        "fallback": true
      }
    }
  }
}
```

Individual agents can specify their preferred model in their YAML file:
```yaml
name: API Detective
type: investigator
risk_level: low
model: qwen3-coder  # Optional: override the global model for this agent
priority: 1
```

Note: Agent-specific models are optional and have lower priority than your global config. Leave them commented out to use your configured model.
Mehaisi CodeSwarm provides an interactive credentials setup command that prompts you for API keys when needed, instead of requiring manual environment variable exports.
```bash
# After initialization, run the credentials command
codeswarm credentials
# This will interactively prompt for any missing API keys
# and optionally save them to your config file
```

Option 1: Interactive Setup (Recommended)
```bash
codeswarm credentials
# When prompted, enter your API key
# Choose whether to save it to config
```

Option 2: Environment Variable

```bash
export OLLAMA_CLOUD_API_KEY="your-api-key-here"
```

Option 3: Config File
Edit .mehaisi/config.json:
```json
{
  "llm": {
    "providers": {
      "ollama-cloud": {
        "api_key": "your-api-key-here"
      }
    }
  }
}
```

No manual exports needed! When you run any command that requires an API key, Mehaisi CodeSwarm will automatically prompt you if it's missing.
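The three options boil down to a lookup chain that ends in an interactive prompt. A minimal sketch of one plausible lookup order (config file, then environment); `find_api_key` is a hypothetical helper, not the tool's API:

```python
import os

def find_api_key(provider_cfg, env_var):
    """Return the API key from the provider config if present,
    otherwise from the environment; a real run would fall through
    to an interactive prompt if both are missing."""
    return provider_cfg.get("api_key") or os.environ.get(env_var)

# Key stored in the config file wins here ...
print(find_api_key({"api_key": "from-config"}, "OLLAMA_CLOUD_API_KEY"))

# ... otherwise the environment variable is used.
os.environ["OLLAMA_CLOUD_API_KEY"] = "key-from-env"
print(find_api_key({"type": "ollama"}, "OLLAMA_CLOUD_API_KEY"))
```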
To use local models, start Ollama and pull your model first:

```bash
# Start Ollama server
ollama serve
# Pull your model
ollama pull kimi-k2.5
```

For Claude models, set your session token:

```bash
export CLAUDE_CODE_SESSION_ACCESS_TOKEN="your-session-token"
```

The ModelResolver performs compatibility checks:
- ✅ Cloud models with cloud provider
- ✅ Local models with local provider
- ⚠️ Warns about mismatches (e.g., a `:cloud` suffix with a local provider)
- ⚠️ Warns about missing authentication
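A sketch of what such a mismatch check could look like, based only on the model-name suffixes and provider names described above; `check_compatibility` is illustrative, not the actual resolver code:

```python
def check_compatibility(model, provider):
    """Return warning strings for model/provider mismatches
    (hypothetical version of the checks described above)."""
    warnings = []
    if model.endswith(":cloud") and "local" in provider:
        warnings.append(
            f"model {model!r} has a :cloud suffix but provider "
            f"{provider!r} is local")
    if model.endswith(":local") and "cloud" in provider:
        warnings.append(
            f"model {model!r} has a :local suffix but provider "
            f"{provider!r} is a cloud endpoint")
    return warnings

print(check_compatibility("kimi-k2.5:cloud", "ollama-local"))  # one warning
print(check_compatibility("kimi-k2.5:cloud", "ollama-cloud")) # []
```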
Enable verbose mode to see resolution details:
```bash
export MEHAISI_VERBOSE=1
codeswarm run api-detective
```

Output:

```
🎯 Model Resolution:
   Model: kimi-k2.5:cloud
   Provider: ollama-cloud
   Source: global-config
   Agent: API Detective
```
For an all-cloud setup:

```bash
codeswarm init --model kimi-k2.5:cloud
export OLLAMA_CLOUD_API_KEY="your-key"
codeswarm pipeline cautious
```

All agents use kimi-k2.5:cloud via Ollama Cloud.
For a fully local setup:

```bash
# Start Ollama
ollama serve
# Initialize with local model
codeswarm init --model kimi-k2.5:local
# Run without API keys
codeswarm pipeline balanced
```

To mix cloud and local models:

```bash
# Use cloud by default
codeswarm init --model kimi-k2.5:cloud
export OLLAMA_CLOUD_API_KEY="your-key"
# Override for specific agents
codeswarm run code-janitor --model qwen3-coder:local
```

To pin a model for a single agent, edit .mehaisi/agents/api-detective.yml:
```yaml
name: API Detective
model: specialized-api-model:cloud  # This agent uses a special model
```

All other agents use your global config model.
If a provider isn't found, check that your config has it defined:

```bash
grep -A 5 providers .mehaisi/config.json
```

If authentication fails, set the appropriate API key:

```bash
export OLLAMA_CLOUD_API_KEY="your-key"
```

If a model isn't available on your provider, either:

- Use a different model
- Check the model name spelling
- Pull the model locally (`ollama pull model-name`)
Check resolution with verbose mode:
```bash
export MEHAISI_VERBOSE=1
codeswarm run agent-name
```

Best practices:

- Set the global model at init - ensures consistency
- Use model naming conventions - the `:cloud` and `:local` suffixes help routing
- Avoid agent-specific models - unless there's a specific reason
- Prefer environment variables for API keys - more secure than the config file
- Check compatibility - use `MEHAISI_VERBOSE=1` to verify settings
Add custom providers to config:
```json
{
  "llm": {
    "providers": {
      "my-custom-provider": {
        "type": "ollama",
        "url": "https://my-ollama-instance.com",
        "model": "my-custom-model",
        "api_key": "my-key",
        "priority": 1
      }
    }
  }
}
```

The ModelResolver provides:
- ✅ Consistent - one model choice applies to all agents
- ✅ Flexible - override when needed
- ✅ Validated - warns about misconfigurations
- ✅ Transparent - verbose mode shows decisions
- ✅ Secure - supports multiple authentication methods
For questions, check the main documentation or open an issue.