A unified Python SDK for calling LLM, vision, image, audio, speech, video, and model aggregation APIs.
Model API Hub helps Python developers call many AI model providers through a smaller, more predictable surface. Instead of wiring a different SDK, request shape, environment variable, and command-line test for every provider, you can start with a common set of provider modules, public exports, CLI commands, and configuration helpers.
The project is useful when you want to evaluate models quickly, build demos that can switch providers, keep provider keys outside your application code, or add new model endpoints without changing the rest of your app.
- What It Solves
- Quick Start
- Demo Gallery
- Installation
- Configuration
- CLI Usage
- Python Recipes
- Supported Capabilities
- Project Layout
- Testing
- Contributing
## What It Solves
| Problem | How Model API Hub helps |
|---|---|
| Every provider has a different SDK style | Provider wrappers expose similar `chat`, `chat_stream`, `analyze_image`, `text_to_image`, and media generation patterns |
| Demos need quick provider checks | The CLI lets you list providers, send prompts, analyze images, and save generated files |
| Keys should not live in code | API keys can be loaded from `.env`, environment variables, or passed explicitly for one-off calls |
| AI apps are no longer only chat apps | The repository covers LLM, VLM, image generation, TTS, STT, video generation, and aggregators |
| Provider coverage changes fast | Providers are isolated in small modules, so support can grow incrementally |
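Because the wrappers share a call shape, switching providers can be reduced to a dictionary lookup. A minimal sketch of that idea; the stand-in functions below only mimic the signatures of the real `deepseek_chat` and `kimi_chat` wrappers so the example runs offline:

```python
# Stand-ins that mimic the real wrappers' signatures, so this sketch
# runs without network access or API keys.
def deepseek_chat(prompt: str, **kwargs) -> str:
    return f"[deepseek] {prompt}"

def kimi_chat(prompt: str, **kwargs) -> str:
    return f"[kimi] {prompt}"

PROVIDERS = {"deepseek": deepseek_chat, "kimi": kimi_chat}

def ask(provider: str, prompt: str, **kwargs) -> str:
    # Same prompt, different backend: swapping providers is a key change.
    return PROVIDERS[provider](prompt, **kwargs)

print(ask("deepseek", "Hello"))  # prints "[deepseek] Hello"
```

With the real imports in place of the stand-ins, the same dispatch pattern applies unchanged.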
## Quick Start

Install the package and set at least one provider key:

```bash
pip install model-api-hub
echo 'DEEPSEEK_API_KEY=your_key_here' > .env
```

Call a model from Python:
```python
from model_api_hub import deepseek_chat, kimi_chat, siliconflow_chat

prompt = "Explain vector databases in one paragraph."

answer = deepseek_chat(prompt)
# answer = kimi_chat(prompt)
# answer = siliconflow_chat(prompt, model="deepseek-ai/DeepSeek-V3")

print(answer)
```

Or test from the terminal before writing application code:
```bash
model-api-hub ls
model-api-hub deepseek "Give me a three-bullet intro to RAG."
```

## Demo Gallery

- List providers from the terminal
- Call an LLM provider
- Analyze an image with a VLM
- Generate image, audio, and video files

The provider-list screenshot is based on the local CLI command. Provider response screenshots are credential-safe examples that show the runtime shape without exposing private keys. The capture plan for replacing these examples with recorded GIFs and real provider outputs lives in `docs/readme-media-plan.md`.
## Installation

Install from PyPI:

```bash
pip install model-api-hub
```

Install from source for development:

```bash
git clone https://github.com/sanbuphy/model-api-hub.git
cd model-api-hub
pip install -e .
```

Install development tools when contributing:

```bash
pip install -e ".[dev]"
```

## Configuration

Create a `.env` file in your project root. Each provider reads the key it needs when a call is made.
```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
DEEPSEEK_API_KEY=sk-...
KIMI_API_KEY=sk-...
ZHIPUAI_API_KEY=...
SILICONFLOW_API_KEY=sk-...
MINIMAX_API_KEY=...
YIYAN_API_KEY=...
DASHSCOPE_API_KEY=sk-...
MODELSCOPE_API_KEY=ms-...
XUNFEI_SPARK_API_KEY=...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...
MISTRAL_API_KEY=...
COHERE_API_KEY=...
PERPLEXITY_API_KEY=pplx-...
AZURE_OPENAI_API_KEY=...
STEP_API_KEY=...
ELEVENLABS_API_KEY=...
AZURE_SPEECH_KEY=...
STABILITY_API_KEY=...
RECRAFT_API_KEY=...
RUNWAY_API_KEY=...
LUMA_API_KEY=...
```

For one-off tests, pass `api_key` directly:

```python
from model_api_hub import deepseek_chat

response = deepseek_chat("Hello!", api_key="sk-...")
```

Use YAML when your application wants named defaults for models and parameters. Load those values with `ConfigManager` and pass them to provider calls.
```yaml
llm:
  deepseek:
    model: deepseek-chat
    temperature: 0.7
    max_tokens: 4096
vlm:
  openai:
    model: gpt-4o
image:
  siliconflow:
    model: Kwai-Kolors/Kolors
    size: 1024x1024
```

```python
from model_api_hub import ConfigManager, deepseek_chat

config = ConfigManager("config.yaml")
llm_config = config.get_llm_config("deepseek")

response = deepseek_chat(
    "Summarize the project in one sentence.",
    model=llm_config.get("model", "deepseek-chat"),
    temperature=llm_config.get("temperature", 0.7),
)
```

## CLI Usage

The CLI is designed for quick provider checks and demo workflows.
```bash
# List available providers
model-api-hub ls

# Call an LLM provider
model-api-hub deepseek "Give me a three-bullet intro to RAG."

# Pass a key directly for a one-off call
model-api-hub deepseek "Hello!" --api-key sk-...

# Analyze an image
model-api-hub siliconflow-vlm "Describe this image" --image ./photo.jpg

# Generate an image
model-api-hub siliconflow-image "A clean app icon for an AI gateway" --output icon.png

# Generate speech
model-api-hub openai-tts "Model switching is simple." --output intro.mp3

# Generate video
model-api-hub luma-video "Camera flies over a data hub" --output demo.mp4
```

## Python Recipes

### Chat with multiple providers

```python
from model_api_hub import deepseek_chat, kimi_chat, siliconflow_chat

prompt = "Write a Python function that chunks a long string."

print(deepseek_chat(prompt, system_prompt="You are a concise coding assistant."))
print(kimi_chat(prompt, temperature=0.4))
print(siliconflow_chat(prompt, model="deepseek-ai/DeepSeek-V3"))
```

### Stream a response

```python
from model_api_hub import deepseek_chat_stream

for chunk in deepseek_chat_stream("Write a short story about a tiny database."):
    print(chunk, end="", flush=True)
```

### Multi-turn conversation

```python
from model_api_hub.api.llm.deepseek_llm import create_client, get_completion

client = create_client()
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a high-level programming language."},
    {"role": "user", "content": "What is it best at?"},
]
print(get_completion(client, messages))
```

### Analyze an image

```python
from model_api_hub import siliconflow_analyze_image

result = siliconflow_analyze_image(
    image_path="photo.jpg",
    prompt="Extract the visible text and summarize the image.",
)
print(result)
```

### Generate an image

```python
from model_api_hub import siliconflow_text_to_image

success = siliconflow_text_to_image(
    prompt="A minimal logo for a unified AI model gateway",
    output_path="logo.png",
)
print("saved" if success else "failed")
```

### Text to speech

```python
from model_api_hub import openai_tts

openai_tts(
    text="Model API Hub makes provider switching simple.",
    output_path="intro.mp3",
)
```

### Speech to text

```python
from model_api_hub import whisper_stt

transcript = whisper_stt("meeting.wav", language="en")
print(transcript)
```

### Generate a video

```python
from model_api_hub import luma_generate_video

luma_generate_video(
    prompt="A camera flies over a glowing data hub.",
    output_path="demo.mp4",
)
```

## Supported Capabilities

| Capability | Included examples |
|---|---|
| LLM | OpenAI, Anthropic, DeepSeek, Kimi, ZhipuAI, Gemini, Groq, Mistral, Cohere, StepFun |
| VLM | OpenAI, Gemini, DashScope, ModelScope, SiliconFlow, Yiyan |
| Image | OpenAI, Stability, DashScope, Recraft, Dreamina, SiliconFlow |
| Audio and STT | OpenAI TTS, ElevenLabs, Azure TTS, Baidu TTS, Whisper |
| Video | Runway, Luma, Dreamina |
| Aggregators | OpenRouter, SiliconFlow, ModelScope, Together, Perplexity, Groq, Volcengine Ark, and more |
See `support_model.md` for the full model list.
## Project Layout

```text
model_api_hub/
├── api/
│   ├── llm/          # Language model providers
│   ├── vlm/          # Vision-language providers
│   ├── image/        # Image generation providers
│   ├── audio/        # Text-to-speech providers
│   ├── stt/          # Speech-to-text providers
│   ├── video/        # Video generation providers
│   └── aggregators/  # Model aggregation platforms
├── utils/
│   └── config.py     # Configuration and API key loading
├── cli.py            # Command-line interface
└── __init__.py       # Public exports
```

Provider modules are intentionally small. A typical provider file contains client creation, one or more high-level helper functions, and provider-specific request handling.
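To illustrate that shape, here is a hedged sketch of what a provider module might look like. Everything in it (`ExampleClient`, `build_chat_request`, the base URL, the model name) is hypothetical and not part of the package; real modules wrap the provider's actual SDK or HTTP API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical client object; a real module would wrap the provider's SDK.
@dataclass
class ExampleClient:
    api_key: str
    base_url: str = "https://api.example-provider.invalid/v1"

def create_client(api_key: str) -> ExampleClient:
    """Client creation: one place to hold the key and endpoint."""
    return ExampleClient(api_key=api_key)

def build_chat_request(
    prompt: str,
    system_prompt: Optional[str] = None,
    model: str = "example-chat",
    temperature: float = 0.7,
) -> dict:
    """Provider-specific request handling: translate the common call shape
    (prompt, system_prompt, model, temperature) into this provider's body."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages, "temperature": temperature}
```

A high-level helper such as a `chat()` function would then send this body with the client and return the response text, keeping the public surface consistent across providers.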
## Testing

Run focused tests while developing:

```bash
python tests/test_llm.py
python tests/test_llm_streaming.py
python tests/test_vlm.py
python tests/test_image.py
python tests/test_audio.py
python tests/test_video.py
```

Some tests require real provider keys. Keep secrets in `.env` and avoid committing generated media outputs.
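When a key is missing, it is nicer for a test to skip than to fail. A sketch of that guard, assuming keys are exposed as environment variables as described above; `have_key` and `smoke_test_deepseek` are hypothetical helpers, not part of the test suite:

```python
import os

def have_key(env_var: str) -> bool:
    """True when a real provider key is configured in this environment."""
    return bool(os.environ.get(env_var))

def smoke_test_deepseek() -> str:
    # Skip cleanly when no credentials are available (e.g. in CI).
    if not have_key("DEEPSEEK_API_KEY"):
        return "skipped: DEEPSEEK_API_KEY not set"
    # Imported lazily so the skip path has no dependency on the package.
    from model_api_hub import deepseek_chat
    return deepseek_chat("Reply with the single word: pong")
```

Run it directly or wire it into the test scripts above; without a key it reports a skip instead of raising.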
## Contributing

New providers, documentation fixes, examples, and tests are welcome.

- Add a provider module under `model_api_hub/api/<capability>/`.
- Follow the closest existing provider wrapper for function names and parameters.
- Export the helper from `model_api_hub/__init__.py` or the relevant package when it is part of the public surface.
- Add tests or a small smoke-test script.
- Update `support_model.md`, `docs/api_reference.md`, and README examples when behavior changes.

Before opening a pull request, check that examples do not expose API keys and that provider-specific behavior is clearly named.
## License

Apache License 2.0. See `LICENSE`.