feat: multi-provider LLM support (#545–549) #552
Changes from all commits: 8749493, 5a75972, eb457c8, a7a9743, 7facf6f, b6e0aff
codeframe/adapters/llm/openai.py:

```diff
@@ -78,6 +78,7 @@ def __init__(
         )
         self._client = None
+        self._async_client = None

     def get_model(self, purpose: Purpose) -> str:
         """Return the model for a given purpose.
@@ -146,6 +147,55 @@ def complete(
         return self._parse_response(response)

+    async def async_complete(
+        self,
+        messages: list[dict],
+        purpose: Purpose = Purpose.EXECUTION,
+        tools: Optional[list[Tool]] = None,
+        max_tokens: int = 4096,
+        temperature: float = 0.0,
+        system: Optional[str] = None,
+    ) -> LLMResponse:
+        """True async completion via openai.AsyncOpenAI.
+
+        Raises LLMAuthError / LLMRateLimitError / LLMConnectionError on failure.
+        """
+        import openai as _openai
+        from codeframe.adapters.llm.base import (
+            LLMAuthError,
+            LLMRateLimitError,
+            LLMConnectionError,
+        )
+
+        if self._async_client is None:
+            self._async_client = _openai.AsyncOpenAI(
+                api_key=self.api_key, base_url=self.base_url
+            )
+
+        converted = self._convert_messages(messages)
+        if system:
+            converted = [{"role": "system", "content": system}] + converted
+
+        kwargs: dict = {
+            "model": self.get_model(purpose),
+            "max_tokens": max_tokens,
+            "messages": converted,
+            "temperature": temperature,
+        }
+        if tools:
+            kwargs["tools"] = self._convert_tools(tools)
+            kwargs["tool_choice"] = "auto"
+
+        try:
+            response = await self._async_client.chat.completions.create(**kwargs)
+            return self._parse_response(response)
+        except _openai.AuthenticationError as exc:
+            raise LLMAuthError(str(exc)) from exc
+        except _openai.RateLimitError as exc:
+            raise LLMRateLimitError(str(exc)) from exc
+        except _openai.APIConnectionError as exc:
+            raise LLMConnectionError(str(exc)) from exc
```
Comment on lines +150 to +198 (Contributor)

🧩 Analysis chain. Scripts executed:

```shell
# First, examine the complete() method in openai.py
rg -A 50 "def complete\(" codeframe/adapters/llm/openai.py

# Check the base adapter interface to understand the contract
cat -n codeframe/adapters/llm/base.py | head -100

# Get the full size of the openai.py file
wc -l codeframe/adapters/llm/openai.py

# Check the abstract method definitions in the base class
rg -B 5 -A 20 "def complete|def async_complete" codeframe/adapters/llm/base.py

# Verify that the OpenAIProvider class implements the interface
rg -B 2 "class OpenAIProvider" codeframe/adapters/llm/openai.py

# Confirm the LLMProvider contract
rg -B 5 -A 30 "class LLMProvider" codeframe/adapters/llm/base.py

# List the other adapter implementations
fd "\.py$" codeframe/adapters/llm/ | grep -v base.py | grep -v openai.py

# Check the Anthropic adapter's exception handling for both sync and async
rg -B 5 -A 30 "def complete\(|def async_complete\(" codeframe/adapters/llm/anthropic.py | head -150

# Check the mock adapter as well to understand the pattern
rg -B 5 -A 30 "def complete\(|def async_complete\(" codeframe/adapters/llm/mock.py

# Find callers of the sync complete() method to understand the impact
rg "\.complete\(" codeframe/ --type py | grep -v "async_complete" | head -20
```

Sync and async paths have divergent exception contracts.
On the following lines:

```python
    def stream(
        self,
        messages: list[dict],
```
🧩 Analysis chain. Script executed:

```shell
# Check the base adapter interface to understand the contract
cat codeframe/adapters/llm/base.py
```
Apply the same exception mapping to `complete()`.

The sync `complete()` method (lines 77–121) does not normalize Anthropic SDK exceptions to the base adapter interface, while `async_complete()` implements the full exception mapping (`AuthenticationError` → `LLMAuthError`, `RateLimitError` → `LLMRateLimitError`, `APIConnectionError` → `LLMConnectionError`). This creates divergent error contracts between the sync and async paths. `SupervisorResolver._classify_with_supervision()` calls `complete()` and catches all exceptions broadly, falling back to a heuristic; that behavior masks authentication and rate-limit failures, which should fail fast rather than silently degrade. Per the coding guidelines, adapters must implement the base interface uniformly for integration consistency.
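One way to keep the sync and async contracts aligned without duplicating the try/except blocks is a shared mapping helper. The sketch below uses stand-in exception classes (they are assumptions, not the real `openai` or `codeframe.adapters.llm.base` types), and `map_llm_errors` is a hypothetical name, not code from this PR:

```python
from contextlib import contextmanager

# Stand-ins for the SDK and adapter exception types; in the real adapter
# these would come from openai and codeframe.adapters.llm.base.
class AuthenticationError(Exception): pass
class RateLimitError(Exception): pass
class APIConnectionError(Exception): pass

class LLMAuthError(Exception): pass
class LLMRateLimitError(Exception): pass
class LLMConnectionError(Exception): pass

# Ordered mapping from SDK exception type to adapter exception type
_EXC_MAP = (
    (AuthenticationError, LLMAuthError),
    (RateLimitError, LLMRateLimitError),
    (APIConnectionError, LLMConnectionError),
)

@contextmanager
def map_llm_errors():
    """Translate SDK exceptions into the adapter's error types."""
    try:
        yield
    except tuple(sdk for sdk, _ in _EXC_MAP) as exc:
        for sdk_exc, adapter_exc in _EXC_MAP:
            if isinstance(exc, sdk_exc):
                raise adapter_exc(str(exc)) from exc
        raise  # unreachable in practice, but never swallow silently
```

Both `complete()` and `async_complete()` could then wrap their API call in `with map_llm_errors():`; a synchronous context manager composes fine around an awaited call inside an async function, since the translation itself involves no I/O.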