
TrueFoundry Models

License: MIT · PRs Welcome

A comprehensive, community-maintained registry of AI/LLM model configurations. This repository provides standardized model metadata including pricing, features, and token limits across all major AI providers.

Why Use This?

LLM model configs change often — prices drop, features expand, limits shift. This repository provides up-to-date information across providers and makes updating stale data easy.

  • Unified Schema — Consistent model configuration format across 19 providers
  • Up-to-Date Pricing — Current cost information for input/output tokens, batch processing, and caching
  • Feature Tracking — Know exactly what each model supports (vision, tools, structured output, etc.)
  • Open Source — Community-driven updates ensure accuracy and coverage

Supported Providers

| Provider | Models | Description |
| --- | ---: | --- |
| OpenRouter | 411 | Unified API for open source models |
| AWS Bedrock | 211 | Claude, Llama, Titan, Mistral on AWS |
| Google Vertex AI | 136 | Gemini, PaLM on GCP |
| OpenAI | 111 | GPT-4, GPT-4o, GPT-5, o1, o3, DALL-E, Whisper, TTS |
| DeepInfra | 87 | Open source model hosting |
| Azure OpenAI | 78 | OpenAI models on Azure |
| Azure AI Foundry | 68 | Azure AI models |
| Together AI | 56 | Open source model hosting |
| Mistral AI | 55 | Mistral, Mixtral, Codestral |
| xAI | 42 | Grok models |
| Google Gemini | 41 | Gemini Pro, Ultra, Flash |
| Perplexity | 37 | Search-augmented models |
| Databricks | 28 | Databricks-hosted models |
| Anthropic | 23 | Claude 3, Claude 3.5, Claude 4 |
| Cohere | 22 | Command, Embed models |
| SambaNova | 22 | Enterprise AI models |
| Groq | 15 | Fast inference models |
| AI21 | 12 | Jamba models |
| Cerebras | 7 | Fast inference models |

Installation

Direct Clone

git clone https://github.com/truefoundry/models.git

Model Configuration Schema

Each model YAML file follows this schema:

# Required
model: gpt-4o                          # Model identifier

# Pricing
costs:
  input_cost_per_token: 0.0000025
  output_cost_per_token: 0.00001
  cache_read_input_token_cost: 0.00000125

# Token limits
limits:
  max_input_tokens: 128000
  max_output_tokens: 16384

# Features (array of strings)
features: [chat, vision, function_calling, tools]

# Metadata
mode: chat
original_provider: openai
is_deprecated: false
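The per-token cost fields above can be turned into a per-request dollar estimate. A minimal sketch, assuming a config dict that mirrors the example values (the `estimate_cost` helper is illustrative, not part of this repository):

```python
# Illustrative cost estimate from a config following the schema above.
# The config dict mirrors the example values; it is not read from the repo.
config = {
    "model": "gpt-4o",
    "costs": {
        "input_cost_per_token": 0.0000025,
        "output_cost_per_token": 0.00001,
        "cache_read_input_token_cost": 0.00000125,
    },
}

def estimate_cost(config, input_tokens, output_tokens, cached_input_tokens=0):
    """Estimated USD cost of one request against this model."""
    costs = config["costs"]
    fresh = input_tokens - cached_input_tokens  # tokens billed at full input rate
    return (
        fresh * costs["input_cost_per_token"]
        + cached_input_tokens * costs.get(
            "cache_read_input_token_cost", costs["input_cost_per_token"]
        )
        + output_tokens * costs["output_cost_per_token"]
    )

print(f"{estimate_cost(config, 1000, 500):.6f}")  # 0.007500
```

Cache-read pricing falls back to the full input rate when a model file omits `cache_read_input_token_cost`.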

Directory Structure

providers/
├── <provider>/
│   ├── default.yaml        # Default params for all models under this provider
│   ├── <model>.yaml
│   └── ...

Example:

providers/
├── openai/
│   ├── default.yaml
│   ├── gpt-4o.yaml
│   ├── gpt-4o-mini.yaml
│   └── ...
├── anthropic/
│   ├── default.yaml
│   ├── claude-3-5-sonnet.yaml
│   └── ...
└── ...
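One way to consume this layout is to overlay each model file on the provider's `default.yaml`. This is a sketch under the assumption that `default.yaml` holds provider-wide defaults that model files override (the repository does not ship a loader; `load_model_config` is a hypothetical helper):

```python
import yaml  # PyYAML, as used in the Validation section
from pathlib import Path

def load_model_config(providers_dir, provider, model):
    """Overlay providers/<provider>/<model>.yaml on top of default.yaml.

    Model-specific keys win; nested mappings (e.g. costs, limits)
    are merged one level deep.
    """
    base = Path(providers_dir) / provider
    defaults = yaml.safe_load((base / "default.yaml").read_text()) or {}
    specific = yaml.safe_load((base / f"{model}.yaml").read_text()) or {}
    merged = dict(defaults)
    for key, value in specific.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged
```

The shallow merge keeps a provider-wide `costs` default while letting a model override a single cost field.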

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Quick Start

  1. Clone the repository
  2. Create a new branch (git checkout -b add-new-model)
  3. Add or update model configurations
  4. Validate your YAML files
  5. Submit a pull request

Adding a New Model

# Copy an existing model as a template
cp providers/openai/gpt-4o.yaml providers/openai/new-model.yaml

# Edit with your model's configuration
# Submit a PR!

Updating Pricing

Model pricing changes frequently. If you notice outdated pricing:

  1. Check the provider's official pricing page
  2. Update the relevant YAML file
  3. Submit a PR with a link to the source

Validation

Validate your YAML files before submitting:

# Using Python
python -c "import yaml; yaml.safe_load(open('providers/openai/gpt-4o.yaml'))"

# Using yq
yq eval '.' providers/openai/gpt-4o.yaml
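Beyond syntax, a quick structural check against the schema above can catch missing or malformed fields before review. A sketch (the `validate_config` helper is hypothetical, not part of the repository's tooling; per the schema, only `model` is required):

```python
import yaml  # PyYAML

REQUIRED_KEYS = {"model"}  # per the schema above, only `model` is required

def validate_config(path):
    """Return a list of problems found in one model YAML file."""
    with open(path) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict):
        return [f"{path}: top level is not a mapping"]
    problems = []
    for key in REQUIRED_KEYS - data.keys():
        problems.append(f"{path}: missing required key '{key}'")
    # Cost fields, when present, should be non-negative numbers.
    for field, value in (data.get("costs") or {}).items():
        if not isinstance(value, (int, float)) or value < 0:
            problems.append(f"{path}: costs.{field} should be a non-negative number")
    return problems
```

An empty return list means the file passed; anything else is a human-readable problem description you can fix before opening the PR.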

License

This project is licensed under the MIT License - see the LICENSE file for details.
