diff --git a/docs.json b/docs.json
index d7af84da..e63b1da6 100644
--- a/docs.json
+++ b/docs.json
@@ -168,6 +168,7 @@
"server/services/llm/grok",
"server/services/llm/groq",
"server/services/llm/nvidia",
+ "server/services/llm/novita",
"server/services/llm/ollama",
"server/services/llm/openai",
"server/services/llm/openpipe",
diff --git a/server/services/llm/novita.mdx b/server/services/llm/novita.mdx
new file mode 100644
index 00000000..2ebfae8b
--- /dev/null
+++ b/server/services/llm/novita.mdx
@@ -0,0 +1,150 @@
+---
+title: "Novita AI"
+description: "LLM service implementation using Novita AI's OpenAI-compatible API"
+---
+
+## Overview
+
+`NovitaLLMService` provides access to Novita AI's language models through an OpenAI-compatible interface. It inherits from `OpenAILLMService` and supports streaming responses, function calling, and context management, with access to a wide selection of open-source models at competitive pricing.
+
+Related resources:
+
+- Pipecat's API methods for Novita AI integration
+- A complete example with function calling
+- Official Novita AI API documentation and features
+- Access models and manage API keys
+
+## Installation
+
+To use Novita AI services, install the required dependency:
+
+```bash
+pip install "pipecat-ai[novita]"
+```
+
+## Prerequisites
+
+### Novita AI Account Setup
+
+Before using Novita AI LLM services, you need:
+
+1. **Novita AI Account**: Sign up at [Novita AI](https://novita.ai)
+2. **API Key**: Generate an API key from your account dashboard
+3. **Model Selection**: Choose from a wide selection of open-source models
+
+### Required Environment Variables
+
+- `NOVITA_API_KEY`: Your Novita AI API key for authentication
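
Since the service takes the key explicitly (see Usage below), it can help to fail fast when the variable is unset. A minimal sketch (the helper name is illustrative, not part of Pipecat):

```python
import os

def require_novita_api_key() -> str:
    """Return the Novita AI API key from the environment, or raise early."""
    api_key = os.getenv("NOVITA_API_KEY")
    if not api_key:
        raise RuntimeError(
            "NOVITA_API_KEY is not set; create a key in the Novita AI dashboard"
        )
    return api_key
```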
+
+## Configuration
+
+`NovitaLLMService` accepts the following constructor parameters:
+
+- `api_key`: Novita AI API key for authentication.
+- `base_url`: Base URL for the Novita AI API endpoint.
+- `settings`: Runtime-configurable settings. See [Settings](#settings) below.
+
+### Settings
+
+Runtime-configurable settings passed via the `settings` constructor argument using `NovitaLLMService.Settings(...)`. These can be updated mid-conversation with `LLMUpdateSettingsFrame`. See [Service Settings](/guides/fundamentals/service-settings) for details.
+
+This service uses the same settings as `OpenAILLMService`. See [OpenAI LLM Settings](/server/services/llm/openai#settings) for the full parameter reference.
+
+The default model is `"moonshotai/kimi-k2.5"`.
+
+## Usage
+
+### Basic Setup
+
+```python
+import os
+from pipecat.services.novita import NovitaLLMService
+
+llm = NovitaLLMService(
+ api_key=os.getenv("NOVITA_API_KEY"),
+ settings=NovitaLLMService.Settings(
+ model="openai/gpt-oss-120b",
+ ),
+)
+```
+
+### With Custom Settings
+
+```python
+llm = NovitaLLMService(
+ api_key=os.getenv("NOVITA_API_KEY"),
+ settings=NovitaLLMService.Settings(
+ model="moonshotai/kimi-k2.5",
+ temperature=0.7,
+ max_tokens=500,
+ ),
+)
+```
+
+### With Function Calling
+
+```python
+from pipecat.adapters.schemas.function_schema import FunctionSchema
+from pipecat.adapters.schemas.tools_schema import ToolsSchema
+from pipecat.processors.aggregators.llm_context import LLMContext
+from pipecat.services.llm_service import FunctionCallParams
+
+async def get_weather(params: FunctionCallParams):
+ await params.result_callback({"temperature": "75", "conditions": "sunny"})
+
+llm = NovitaLLMService(
+ api_key=os.getenv("NOVITA_API_KEY"),
+ settings=NovitaLLMService.Settings(
+ model="openai/gpt-oss-120b",
+ ),
+)
+
+llm.register_function("get_weather", get_weather)
+
+weather_function = FunctionSchema(
+ name="get_weather",
+ description="Get the current weather",
+ properties={
+ "location": {
+ "type": "string",
+ "description": "City and state, e.g. San Francisco, CA",
+ },
+ },
+ required=["location"],
+)
+
+tools = ToolsSchema(standard_tools=[weather_function])
+context = LLMContext(tools=tools)
+```
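
Under the hood, an OpenAI-compatible API like Novita AI's receives tools in the standard OpenAI JSON format. The `FunctionSchema` above corresponds to a definition like the following (shown for orientation only; Pipecat performs this serialization for you):

```python
# Hand-written equivalent of the FunctionSchema above, in the standard
# OpenAI tool format that OpenAI-compatible APIs accept.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and state, e.g. San Francisco, CA",
                },
            },
            "required": ["location"],
        },
    },
}
```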
+
+## Notes
+
+- Novita AI exposes an OpenAI-compatible API, so standard OpenAI usage patterns, including streaming responses and function calling, work with this service
+- Model availability depends on your Novita AI account access and pricing tier
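
For debugging, OpenAI compatibility means a chat completion request body has the familiar shape. A sketch of building such a payload with the standard library (the endpoint URL is an assumption; confirm it in Novita AI's documentation):

```python
import json
import os

# Assumed endpoint for Novita AI's OpenAI-compatible API; verify against
# the official Novita AI documentation before use.
NOVITA_CHAT_URL = "https://api.novita.ai/v3/openai/chat/completions"

def build_chat_request(model: str, user_message: str) -> tuple[dict, bytes]:
    """Build headers and a JSON body for an OpenAI-style chat completion call."""
    headers = {
        "Authorization": f"Bearer {os.getenv('NOVITA_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": True,
    }).encode("utf-8")
    return headers, body
```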
diff --git a/server/services/supported-services.mdx b/server/services/supported-services.mdx
index ee0b08e4..53a1ac71 100644
--- a/server/services/supported-services.mdx
+++ b/server/services/supported-services.mdx
@@ -73,6 +73,7 @@ LLMs receive text or audio based input and output a streaming text response.
| [Grok](/server/services/llm/grok) | `pip install "pipecat-ai[grok]"` |
| [Groq](/server/services/llm/groq) | `pip install "pipecat-ai[groq]"` |
| [NVIDIA](/server/services/llm/nvidia) | `pip install "pipecat-ai[nvidia]"` |
+| [Novita AI](/server/services/llm/novita) | `pip install "pipecat-ai[novita]"` |
| [Ollama](/server/services/llm/ollama) | `pip install "pipecat-ai[ollama]"` |
| [OpenAI](/server/services/llm/openai) | `pip install "pipecat-ai[openai]"` |
| [OpenPipe](/server/services/llm/openpipe) | `pip install "pipecat-ai[openpipe]"` |