[4.D.2] Provider factory + env var wiring (get_provider, ModelSelector)#551
Conversation
…nd runtime (#544)

- get_provider() now accepts openai/ollama/vllm/compatible → OpenAIProvider
- Reads CODEFRAME_LLM_MODEL and OPENAI_BASE_URL from environment
- runtime.py reads CODEFRAME_LLM_PROVIDER (default: anthropic) instead of hardcoding
- ANTHROPIC_API_KEY check is skipped when provider != anthropic
- Tests cover all three paths: anthropic, openai, mock
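The routing described in the summary can be sketched as follows. This is a minimal, hypothetical sketch, not the actual codeframe implementation: the provider classes are stubs, and the `api_key` forwarding reflects the kwargs support discussed later in this thread.

```python
import os

# Provider types that are all served by the OpenAI-compatible SDK path
_OPENAI_COMPATIBLE = {"openai", "ollama", "vllm", "compatible"}


class MockProvider:
    """Stand-in provider used in tests."""


class AnthropicProvider:
    """Stub for the Anthropic-backed provider."""


class OpenAIProvider:
    """Stub for the OpenAI-SDK-backed provider."""

    def __init__(self, api_key=None, model="gpt-4o", base_url=None):
        self.api_key = api_key
        self.model = model
        self.base_url = base_url


def get_provider(provider_type: str = "anthropic", **kwargs):
    """Route provider_type to a configured provider, mirroring the PR summary."""
    if provider_type in _OPENAI_COMPATIBLE:
        return OpenAIProvider(
            api_key=kwargs.get("api_key", os.environ.get("OPENAI_API_KEY")),
            model=kwargs.get("model", os.environ.get("CODEFRAME_LLM_MODEL", "gpt-4o")),
            base_url=kwargs.get("base_url", os.environ.get("OPENAI_BASE_URL")),
        )
    if provider_type == "anthropic":
        return AnthropicProvider()
    if provider_type == "mock":
        return MockProvider()
    raise ValueError(f"Unknown provider type: {provider_type}")
```

Explicit kwargs override the environment, so a caller can point at a local Ollama endpoint without touching env vars.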
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review; it can be resumed with CodeRabbit's review-management commands.
**Walkthrough:** `get_provider()` now accepts kwargs and routes OpenAI-compatible types (`openai`, `ollama`, `vllm`, `compatible`) to `OpenAIProvider`.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant User
    participant Runtime
    participant Env
    participant ProviderFactory
    participant LLMProvider
    User->>Runtime: request execute_agent()
    Runtime->>Env: read CODEFRAME_LLM_PROVIDER
    Env-->>Runtime: provider_type
    Runtime->>ProviderFactory: get_provider(provider_type, **kwargs)
    ProviderFactory->>Env: read OPENAI_API_KEY / CODEFRAME_LLM_MODEL / OPENAI_BASE_URL
    Env-->>ProviderFactory: api_key, model, base_url
    ProviderFactory->>LLMProvider: instantiate (OpenAIProvider or AnthropicProvider)
    LLMProvider-->>Runtime: provider instance
    Runtime->>LLMProvider: execute calls
    LLMProvider-->>Runtime: responses
    Runtime-->>User: results
```
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 5 passed
**Review: [4.D.2] Provider factory + env var wiring**

The core implementation is clean and correct. Concerns:

- **Test quality:** conditional assertion:

  ```python
  if mock_get_provider.called:
      assert mock_get_provider.call_args[0][0] == "openai"
  ```

  If the mock is never called, the assertion never runs and the test passes silently.
- **Test quality:** unused variable (`patchers`).
- **Missing OPENAI_API_KEY validation in runtime:** the anthropic path validates its key, but the openai path does not.
- **Env var documentation.**

The routing logic and the auth-check refactor are correct. The main ask before merge is removing the conditional guard.
Actionable comments posted: 2
🧹 Nitpick comments (3)
codeframe/adapters/llm/__init__.py (1)
55-75: Consider passing `api_key` via kwargs to support local providers. The current implementation doesn't forward `api_key` from `kwargs` to `OpenAIProvider`; forwarding it would allow callers to explicitly pass a dummy key for local providers that don't need authentication.

Support api_key in kwargs
```diff
 def get_provider(provider_type: str = "anthropic", **kwargs) -> LLMProvider:
     """Get a configured LLM provider.

     Args:
         provider_type: Provider type ("anthropic", "openai", "ollama", "vllm",
             "compatible", or "mock"). OpenAI-compatible types are all routed
             to OpenAIProvider.
         **kwargs: Optional overrides passed to the provider constructor.
-            Supported keys: model, base_url.
+            Supported keys: api_key, model, base_url.

     Returns:
         Configured LLMProvider instance

     Raises:
         ValueError: If provider type is unknown
     """
     if provider_type in _OPENAI_COMPATIBLE:
         return OpenAIProvider(
+            api_key=kwargs.get("api_key"),
             model=kwargs.get("model", os.environ.get("CODEFRAME_LLM_MODEL", "gpt-4o")),
             base_url=kwargs.get("base_url", os.environ.get("OPENAI_BASE_URL")),
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@codeframe/adapters/llm/__init__.py` around lines 55 - 75, get_provider currently routes openai-compatible types to OpenAIProvider but does not forward an api_key from kwargs, preventing callers from supplying a local/dummy key; update the OpenAIProvider call inside get_provider (the branch that checks _OPENAI_COMPATIBLE) to pass api_key=kwargs.get("api_key", os.environ.get("OPENAI_API_KEY")) (or your project env var if different), keeping the existing model and base_url logic so callers can override api_key via kwargs when needed.

tests/core/test_runtime_adapters.py (1)
141-142: Missing `@pytest.mark.v2` marker on new test class. Per coding guidelines, new v2 tests should be marked appropriately.
Add v2 marker
```diff
+@pytest.mark.v2
 class TestRuntimeProviderSelection:
     """Tests for CODEFRAME_LLM_PROVIDER env var routing in execute_agent."""
```

As per coding guidelines: "New v2 Python tests must be marked with `@pytest.mark.v2` decorator or `pytestmark = pytest.mark.v2`"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` around lines 141 - 142, The new test class TestRuntimeProviderSelection in tests/core/test_runtime_adapters.py is missing the v2 marker; add the pytest marker by either decorating the class with `@pytest.mark.v2` above the class definition or adding pytestmark = pytest.mark.v2 at module level so the TestRuntimeProviderSelection class is marked as a v2 test per guidelines.

tests/adapters/test_llm.py (1)
206-249: New tests are comprehensive but missing `@pytest.mark.v2` marker. The test coverage is thorough, validating:
- Provider routing for all OpenAI-compatible types (`openai`, `ollama`, `vllm`, `compatible`)
- Environment variable overrides (`CODEFRAME_LLM_MODEL`, `OPENAI_BASE_URL`)
- Default model fallback (`gpt-4o`)

However, per coding guidelines, new tests should be marked with `@pytest.mark.v2`. Consider adding the marker at class or module level.

Add pytest marker for v2 tests
```diff
 class TestGetProvider:
     """Tests for get_provider factory function."""
+
+    pytestmark = pytest.mark.v2

     def test_get_mock_provider(self):
```

Or at module level if all tests in this file are v2:
`pytestmark = pytest.mark.v2`

As per coding guidelines: "New v2 Python tests must be marked with `@pytest.mark.v2` decorator or `pytestmark = pytest.mark.v2`"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/adapters/test_llm.py` around lines 206 - 249, The new tests (e.g., test_get_openai_provider, test_get_ollama_provider, test_get_vllm_provider, test_get_compatible_provider, test_codeframe_llm_model_sets_model, test_openai_base_url_env_var, test_openai_default_model_is_gpt4o) need the pytest v2 marker; add pytest.mark.v2 to them either by decorating the test functions or the containing test class, or add a module-level marker by defining pytestmark = pytest.mark.v2 near the top of the file so all tests in this file are marked v2.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@codeframe/adapters/llm/__init__.py`:
- Around line 71-75: get_provider currently builds an OpenAIProvider for any
provider_type in _OPENAI_COMPATIBLE but never passes api_key from kwargs,
causing OpenAIProvider (and its validation) to require OPENAI_API_KEY even for
local providers like "ollama" and "vllm"; fix by accepting an "api_key" kwarg
(and documenting it in the get_provider docstring) and forward it into the
OpenAIProvider constructor (e.g., pass api_key=kwargs.get("api_key",
os.environ.get("OPENAI_API_KEY"))) so callers can supply a key or leave it blank
for local backends, or alternatively add provider-specific bypass logic in
get_provider for "ollama" and "vllm" to skip API key validation inside
OpenAIProvider.
In `@tests/core/test_runtime_adapters.py`:
- Around line 186-202: The test currently patches
codeframe.adapters.llm.get_provider which won't intercept the name imported into
runtime; update the patch target to patch "codeframe.core.runtime.get_provider"
(the symbol looked up by execute_agent) and keep the existing patch for
codeframe.core.engine_registry.get_builtin_adapter; also remove the conditional
silent check and instead assert that mock_get_provider.was_called (or
mock_get_provider.called) is True and that mock_get_provider.call_args[0][0] ==
"openai" after calling execute_agent to ensure the patch actually intercepted
the call.
---
Nitpick comments:
In `@codeframe/adapters/llm/__init__.py`:
- Around line 55-75: get_provider currently routes openai-compatible types to
OpenAIProvider but does not forward an api_key from kwargs, preventing callers
from supplying a local/dummy key; update the OpenAIProvider call inside
get_provider (the branch that checks _OPENAI_COMPATIBLE) to pass
api_key=kwargs.get("api_key", os.environ.get("OPENAI_API_KEY")) (or your project
env var if different), keeping the existing model and base_url logic so callers
can override api_key via kwargs when needed.
In `@tests/adapters/test_llm.py`:
- Around line 206-249: The new tests (e.g., test_get_openai_provider,
test_get_ollama_provider, test_get_vllm_provider, test_get_compatible_provider,
test_codeframe_llm_model_sets_model, test_openai_base_url_env_var,
test_openai_default_model_is_gpt4o) need the pytest v2 marker; add
pytest.mark.v2 to them either by decorating the test functions or the containing
test class, or add a module-level marker by defining pytestmark = pytest.mark.v2
near the top of the file so all tests in this file are marked v2.
In `@tests/core/test_runtime_adapters.py`:
- Around line 141-142: The new test class TestRuntimeProviderSelection in
tests/core/test_runtime_adapters.py is missing the v2 marker; add the pytest
marker by either decorating the class with `@pytest.mark.v2` above the class
definition or adding pytestmark = pytest.mark.v2 at module level so the
TestRuntimeProviderSelection class is marked as a v2 test per guidelines.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f986c040-8bf2-4881-9e95-0a0c5c6f0fbf
📒 Files selected for processing (4)
- codeframe/adapters/llm/__init__.py
- codeframe/core/runtime.py
- tests/adapters/test_llm.py
- tests/core/test_runtime_adapters.py
```python
try:
    # Patch at the source module level since runtime imports locally
    with patch("codeframe.adapters.llm.get_provider") as mock_get_provider:
        mock_get_provider.return_value = MagicMock()
        with patch(
            "codeframe.core.engine_registry.get_builtin_adapter",
            create=True,
        ) as mock_adapter:
            mock_adapter.return_value = MagicMock()
            mock_adapter.return_value.run.return_value = iter([])
            with patch.dict("os.environ", env, clear=True):
                try:
                    execute_agent(mock_workspace, mock_run, engine="react")
                except Exception:
                    pass
                if mock_get_provider.called:
                    assert mock_get_provider.call_args[0][0] == "openai"
```
Patch target may not intercept the actual import; assertion is weak.
Two concerns with this test:
1. **Patch location:** The patch at `codeframe.adapters.llm.get_provider` may not intercept the call since `runtime.py` uses `from codeframe.adapters.llm import get_provider`. The function is looked up in `runtime`'s namespace, so the patch should target `codeframe.core.runtime.get_provider`.

2. **Silent pass on uncalled mock:** Lines 201-202 only assert if `mock_get_provider.called` is `True`. If the patch fails to intercept (due to issue #1), the test passes silently without verifying anything.
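The patch-location concern can be demonstrated with two throwaway modules. This is a standalone sketch: `demo_lib` and `demo_runtime` are invented names standing in for the adapters module and `runtime.py`.

```python
import sys
import types
from unittest.mock import patch

# demo_lib defines get_provider; demo_runtime copies the name at import time
# with `from demo_lib import get_provider`, mirroring a top-level import.
demo_lib = types.ModuleType("demo_lib")
exec("def get_provider(t):\n    return 'real'", demo_lib.__dict__)
sys.modules["demo_lib"] = demo_lib

demo_runtime = types.ModuleType("demo_runtime")
exec(
    "from demo_lib import get_provider\n"
    "def execute():\n"
    "    return get_provider('openai')\n",
    demo_runtime.__dict__,
)
sys.modules["demo_runtime"] = demo_runtime

# Patching the defining module does NOT intercept the copied name:
with patch("demo_lib.get_provider", return_value="mocked"):
    print(demo_runtime.execute())  # -> 'real'

# Patching the name in the consumer's namespace does:
with patch("demo_runtime.get_provider", return_value="mocked"):
    print(demo_runtime.execute())  # -> 'mocked'
```

This is the standard "patch where the name is looked up, not where it is defined" rule; whether it applies here depends on how `runtime.py` actually imports `get_provider`, which later rounds of this review resolve.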
Suggested fix
```diff
-            with patch("codeframe.adapters.llm.get_provider") as mock_get_provider:
+            with patch("codeframe.core.runtime.get_provider") as mock_get_provider:
                 mock_get_provider.return_value = MagicMock()
                 with patch(
                     "codeframe.core.engine_registry.get_builtin_adapter",
                     create=True,
                 ) as mock_adapter:
                     mock_adapter.return_value = MagicMock()
                     mock_adapter.return_value.run.return_value = iter([])
                     with patch.dict("os.environ", env, clear=True):
                         try:
                             execute_agent(mock_workspace, mock_run, engine="react")
                         except Exception:
                             pass
-                        if mock_get_provider.called:
-                            assert mock_get_provider.call_args[0][0] == "openai"
+                        assert mock_get_provider.called, "get_provider was not called"
+                        assert mock_get_provider.call_args[0][0] == "openai"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/core/test_runtime_adapters.py` around lines 186 - 202, The test
currently patches codeframe.adapters.llm.get_provider which won't intercept the
name imported into runtime; update the patch target to patch
"codeframe.core.runtime.get_provider" (the symbol looked up by execute_agent)
and keep the existing patch for
codeframe.core.engine_registry.get_builtin_adapter; also remove the conditional
silent check and instead assert that mock_get_provider.was_called (or
mock_get_provider.called) is True and that mock_get_provider.call_args[0][0] ==
"openai" after calling execute_agent to ensure the patch actually intercepted
the call.
- get_provider() for local providers (ollama, vllm, compatible) no longer requires OPENAI_API_KEY; uses 'not-required' placeholder for openai SDK
- Pass api_key through kwargs for caller override support
- Strengthen test_get_provider_called_with_env_provider_type assertion: unconditionally assert mock was called rather than conditionally
- Add tests: ollama/vllm work without OPENAI_API_KEY, openai still requires it
**Follow-up Review**

Good progress since the last round. Two of the main prior concerns are resolved:
Three items from previous reviews still open:

1. Patch target is still wrong:

   ```python
   with patch("codeframe.adapters.llm.get_provider") as mock_get_provider:
   ```

   This patches the function in the defining module, not in the consumer's namespace.

2. Unused variable:

   ```python
   patchers = [p.start() for p in patches]  # patchers is never referenced
   ```

3. Broad exception swallowing:

   ```python
   except Exception:
       pass  # Other errors (db, OPENAI_API_KEY, etc.) are fine
   ```

Still missing:
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/core/test_runtime_adapters.py (1)
1-6: ⚠️ Potential issue | 🟡 Minor

Missing `pytestmark = pytest.mark.v2` for v2 test module. This test file is in `tests/core/` and adds new tests, but lacks the required v2 marker at module level.

Proposed fix
```diff
 """Tests for runtime execute_agent with external engine adapters."""

 import pytest
 from unittest.mock import MagicMock, patch

+pytestmark = pytest.mark.v2
+
 from codeframe.core.adapters.agent_adapter import AgentResult
```

As per coding guidelines:
`tests/core/**/*.py`: v2 tests in core should be marked with `@pytest.mark.v2` or `pytestmark = pytest.mark.v2` at module level.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` around lines 1 - 6, This test module is missing the v2 marker; add a module-level marker by defining pytestmark = pytest.mark.v2 at the top of the file (after imports or at module scope) so pytest treats these tests as v2; reference the pytestmark symbol to locate where to add it in tests/core/test_runtime_adapters.py and ensure pytest is imported if not already.
🧹 Nitpick comments (2)
tests/core/test_runtime_adapters.py (2)
150-161: Redundant `os.environ.pop` after `patch.dict(..., clear=True)`. Using `clear=True` in `patch.dict` already removes all environment variables within the context. The subsequent `os.environ.pop("ANTHROPIC_API_KEY", None)` on line 153 is unnecessary.

Suggested simplification
```diff
         env = {"CODEFRAME_LLM_PROVIDER": "openai"}
         with patch.dict("os.environ", env, clear=True):
-            import os
-            os.environ.pop("ANTHROPIC_API_KEY", None)
             try:
                 execute_agent(mock_workspace, mock_run, engine="react")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` around lines 150 - 161, The os.environ.pop("ANTHROPIC_API_KEY", None) call is redundant because patch.dict("os.environ", env, clear=True) already clears env vars; remove that pop call in the test block that calls execute_agent(mock_workspace, mock_run, engine="react") so the environment setup relies solely on patch.dict and the test remains correct and simpler.
169-175: Same redundant `os.environ.pop` pattern. Lines 170-172 have the same redundancy: `patch.dict({}, clear=True)` already clears the environment.

Suggested simplification
```diff
         with patch.dict("os.environ", {}, clear=True):
-            import os
-            os.environ.pop("ANTHROPIC_API_KEY", None)
-            os.environ.pop("CODEFRAME_LLM_PROVIDER", None)
-
             with pytest.raises(ValueError, match="ANTHROPIC_API_KEY"):
                 execute_agent(mock_workspace, mock_run, engine="react")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` around lines 169 - 175, The test uses patch.dict("os.environ", {}, clear=True) which already clears env vars, so remove the redundant os.environ.pop("ANTHROPIC_API_KEY", None) and os.environ.pop("CODEFRAME_LLM_PROVIDER", None) calls in the test block; keep the patch.dict and the call to execute_agent(mock_workspace, mock_run, engine="react") (and retain the local import os only if other statements in the block still reference os), ensuring the ValueError assertion for "ANTHROPIC_API_KEY" still runs.
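The `clear=True` behavior underpinning both suggestions can be verified with a small standalone check (`sk-ant-placeholder` is a dummy value, not a real key):

```python
import os
from unittest.mock import patch

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-placeholder"  # simulate a pre-existing var

env = {"CODEFRAME_LLM_PROVIDER": "openai"}
with patch.dict("os.environ", env, clear=True):
    # clear=True empties os.environ before applying `env`, so a subsequent
    # os.environ.pop("ANTHROPIC_API_KEY", None) inside this block is a no-op
    assert "ANTHROPIC_API_KEY" not in os.environ
    assert os.environ["CODEFRAME_LLM_PROVIDER"] == "openai"

# patch.dict restores the original environment on exit
assert os.environ["ANTHROPIC_API_KEY"] == "sk-ant-placeholder"
del os.environ["ANTHROPIC_API_KEY"]  # clean up the placeholder
```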
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@tests/core/test_runtime_adapters.py`:
- Around line 1-6: This test module is missing the v2 marker; add a module-level
marker by defining pytestmark = pytest.mark.v2 at the top of the file (after
imports or at module scope) so pytest treats these tests as v2; reference the
pytestmark symbol to locate where to add it in
tests/core/test_runtime_adapters.py and ensure pytest is imported if not
already.
---
Nitpick comments:
In `@tests/core/test_runtime_adapters.py`:
- Around line 150-161: The os.environ.pop("ANTHROPIC_API_KEY", None) call is
redundant because patch.dict("os.environ", env, clear=True) already clears env
vars; remove that pop call in the test block that calls
execute_agent(mock_workspace, mock_run, engine="react") so the environment setup
relies solely on patch.dict and the test remains correct and simpler.
- Around line 169-175: The test uses patch.dict("os.environ", {}, clear=True)
which already clears env vars, so remove the redundant
os.environ.pop("ANTHROPIC_API_KEY", None) and
os.environ.pop("CODEFRAME_LLM_PROVIDER", None) calls in the test block; keep the
patch.dict and the call to execute_agent(mock_workspace, mock_run,
engine="react") (and retain the local import os only if other statements in the
block still reference os), ensuring the ValueError assertion for
"ANTHROPIC_API_KEY" still runs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3bfdbac5-751e-4e3a-bcfd-692bb3031d02
📒 Files selected for processing (3)
- codeframe/adapters/llm/__init__.py
- tests/adapters/test_llm.py
- tests/core/test_runtime_adapters.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/adapters/test_llm.py
- codeframe/adapters/llm/__init__.py
- Add @pytest.mark.v2 to TestRuntimeProviderSelection class and new TestGetProvider methods per CLAUDE.md convention
- Remove unused `patchers` variable in test_get_provider_called_with_env_provider_type
- Add comment explaining why codeframe.adapters.llm.get_provider is the correct patch target (local function import reads module.__dict__ at call time)
**Follow-up review**

Good progress: the main issues from my first pass were addressed.
Two remaining concerns: the patch target, and the broad `except Exception: pass` handling. Neither is blocking if CI is green and the patch-target question is confirmed, but the test reliability concern is real.
🧹 Nitpick comments (1)
tests/core/test_runtime_adapters.py (1)
186-186: Prefer a for-loop over list comprehension for side effects. Using a list comprehension to call `p.start()` discards the return values and allocates an unused list. This is inconsistent with the pattern used earlier in this file (lines 70-72), which properly iterates with a for-loop.

♻️ Suggested refactor
```diff
-        [p.start() for p in patches]
+        for p in patches:
+            p.start()
```
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` at line 186, Replace the list comprehension that calls p.start() for side effects with an explicit for-loop to avoid allocating an unused list and match the earlier pattern; locate the expression "[p.start() for p in patches]" in tests/core/test_runtime_adapters.py and change it to iterate with "for p in patches: p.start()" so the calls are made without creating a temporary list.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/core/test_runtime_adapters.py`:
- Line 186: Replace the list comprehension that calls p.start() for side effects
with an explicit for-loop to avoid allocating an unused list and match the
earlier pattern; locate the expression "[p.start() for p in patches]" in
tests/core/test_runtime_adapters.py and change it to iterate with "for p in
patches: p.start()" so the calls are made without creating a temporary list.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 072073a9-f8eb-4d51-b6d6-f90c05c5b726
📒 Files selected for processing (2)
- tests/adapters/test_llm.py
- tests/core/test_runtime_adapters.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/adapters/test_llm.py
Mock get_provider in test_openai_provider_skips_anthropic_key_check so the test is isolated to just the ANTHROPIC_API_KEY guard logic and does not depend on OPENAI_API_KEY availability. Addresses claude-review concern about broad exception swallowing obscuring test intent.
🧹 Nitpick comments (2)
tests/core/test_runtime_adapters.py (2)
157-157: Avoid list comprehension for side effects. Using a list comprehension purely for side effects (`[p.start() for p in patches]`) allocates an unnecessary list that's immediately discarded. A simple `for` loop is more idiomatic and clearer:

Proposed fix
```diff
-        [p.start() for p in patches]
+        for p in patches:
+            p.start()
```

Same applies to line 199.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` at line 157, Replace the list comprehension used for side effects with an explicit loop: locate the use of "[p.start() for p in patches]" (variable "patches" and the call "p.start()") in tests/core/test_runtime_adapters.py and change it to a standard for loop (for p in patches: p.start()) to avoid allocating an unnecessary list; do the same for the analogous usage at the later location (the call sequence around line with "p.stop()" or similar) so both start/stop invocations use explicit for loops.
159-172: Missing assertion that mock was actually called. The test mocks `get_provider` to isolate the auth guard logic, but doesn't verify the mock was invoked. If the patch location is incorrect (the same concern raised in a prior review), the test could pass without the mock ever being used.

Consider adding an assertion to confirm the mock was called, similar to what's done in `test_get_provider_called_with_env_provider_type`:

Proposed fix
```diff
             try:
                 execute_agent(mock_workspace, mock_run, engine="react")
             except ValueError as exc:
                 assert "ANTHROPIC_API_KEY" not in str(exc), (
                     f"Should not require ANTHROPIC_API_KEY when provider=openai: {exc}"
                 )
             except Exception:
                 pass  # Non-ValueError errors are outside the scope of this test
+
+            # Verify mock was actually used to confirm test isolation
+            assert mock_get_provider.called, "get_provider mock was not called"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_runtime_adapters.py` around lines 159 - 172, Add an assertion that the patched get_provider was actually invoked to ensure the patch location is correct: after calling execute_agent(mock_workspace, mock_run, engine="react") (still inside the with patch("codeframe.adapters.llm.get_provider") as mock_get_provider: block), assert mock_get_provider.assert_called() or mock_get_provider.assert_called_once() so the test fails if the mock was never used; reference the mocked symbol get_provider and the call site execute_agent to locate where to insert the assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/core/test_runtime_adapters.py`:
- Line 157: Replace the list comprehension used for side effects with an
explicit loop: locate the use of "[p.start() for p in patches]" (variable
"patches" and the call "p.start()") in tests/core/test_runtime_adapters.py and
change it to a standard for loop (for p in patches: p.start()) to avoid
allocating an unnecessary list; do the same for the analogous usage at the later
location (the call sequence around line with "p.stop()" or similar) so both
start/stop invocations use explicit for loops.
- Around line 159-172: Add an assertion that the patched get_provider was
actually invoked to ensure the patch location is correct: after calling
execute_agent(mock_workspace, mock_run, engine="react") (still inside the with
patch("codeframe.adapters.llm.get_provider") as mock_get_provider: block),
assert mock_get_provider.assert_called() or
mock_get_provider.assert_called_once() so the test fails if the mock was never
used; reference the mocked symbol get_provider and the call site execute_agent
to locate where to insert the assertion.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 62a4687e-cbec-407e-90dd-9232cff0a768
📒 Files selected for processing (1)
tests/core/test_runtime_adapters.py
**Review - Round 4**

Resolving my two remaining open items from round 3.

**Patch target - resolved.** Verified codeframe/core/runtime.py:631 does a lazy import inside execute_agent(). A lazy import reads from the module's dict at call time, so patching `codeframe.adapters.llm.get_provider` is the correct target. The test comment is accurate. No change needed.

**`except Exception: pass` - resolved.**

- `test_openai_provider_skips_anthropic_key_check`: the guard intentionally narrows scope to the ANTHROPIC_API_KEY check. The comment clarifies this, and `get_provider` is mocked so unrelated runtime paths are isolated. Acceptable.
- `test_get_provider_called_with_env_provider_type`: the `except Exception: pass` block on lines 215-216 is followed by unconditional assertions on lines 217-218, so those assertions still run regardless of whether the call raises. No silent pass possible.

All prior concerns are now addressed. The implementation is clean: provider routing is correct, env var wiring is consistent, and the test suite covers the key cases. LGTM - ready to merge.
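The lazy-import behavior described above can be reproduced standalone. The module names `lazy_lib` and `lazy_runtime` are invented stand-ins for the adapters module and `runtime.py`:

```python
import sys
import types
from unittest.mock import patch

lazy_lib = types.ModuleType("lazy_lib")
exec("def get_provider(t):\n    return 'real'", lazy_lib.__dict__)
sys.modules["lazy_lib"] = lazy_lib

lazy_runtime = types.ModuleType("lazy_runtime")
exec(
    "def execute():\n"
    "    from lazy_lib import get_provider  # lazy import, resolved at call time\n"
    "    return get_provider('openai')\n",
    lazy_runtime.__dict__,
)
sys.modules["lazy_runtime"] = lazy_runtime

# Because the import happens inside execute(), each call re-reads the current
# lazy_lib.get_provider, so patching the defining module IS intercepted:
with patch("lazy_lib.get_provider", return_value="mocked"):
    print(lazy_runtime.execute())  # -> 'mocked'
print(lazy_runtime.execute())  # -> 'real'
```

This is why the test's patch target is correct here, in contrast to the usual "patch where the name is looked up" concern raised earlier for top-level `from ... import ...`.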
**Follow-up Review: [4.D.2] Provider factory + env var wiring**

All items raised in my previous review have been addressed. Here's the status:

- ✅ Conditional assertion removed
- ✅ Unused variable removed
- ✅ Env var documentation

One minor observation (not a blocker). The implementation is correct, tests are solid, and documentation is current. Good to go.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Around line 243-248: Update the documented CODEFRAME_LLM_PROVIDER options to
include the missing "mock" provider so the README matches supported runtime
values; specifically edit the block that lists provider values (the line
containing CODEFRAME_LLM_PROVIDER=anthropic and the inline comment listing
"Provider: anthropic (default), openai, ollama, vllm, compatible") to add "mock"
to that comma-separated list so it reads "...anthropic (default), openai,
ollama, vllm, compatible, mock" and ensure the example or surrounding text
references "mock" where relevant.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3e4787dc-fa15-41e9-8823-e2b706f62054
📒 Files selected for processing (2)
- CLAUDE.md
- docs/QUICKSTART.md
✅ Files skipped from review due to trivial changes (1)
- docs/QUICKSTART.md
```bash
# LLM Provider selection (multi-provider support)
CODEFRAME_LLM_PROVIDER=anthropic           # Provider: anthropic (default), openai, ollama, vllm, compatible
CODEFRAME_LLM_MODEL=gpt-4o                 # Model override (used with openai/ollama/vllm/compatible)
OPENAI_API_KEY=sk-...                      # Required for openai provider; not needed for local providers
OPENAI_BASE_URL=http://localhost:11434/v1  # Base URL override (for ollama, vllm, or custom endpoints)
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify all valid provider types in get_provider function
# Search for provider type handling in get_provider
rg -nP -A2 'def get_provider|provider_type ==|provider_type in' codeframe/adapters/llm/__init__.py
```

Repository: frankbria/codeframe

Length of output: 539
🏁 Script executed:

```bash
# Verify exact content and line numbers in CLAUDE.md
sed -n '240,250p' CLAUDE.md | cat -n
```

Repository: frankbria/codeframe

Length of output: 764
🏁 Script executed:

```bash
# Search for other mentions of provider types in documentation files
rg -n 'anthropic.*openai.*ollama|Provider.*anthropic|CODEFRAME_LLM_PROVIDER' CLAUDE.md README.md 2>/dev/null | head -20
```

Repository: frankbria/codeframe

Length of output: 180
Add "mock" to the list of supported LLM providers.
Line 244 documents five provider values but the code accepts a sixth valid type: "mock". Since this is developer documentation, the supported provider list should be complete.
Suggested fix
```diff
-CODEFRAME_LLM_PROVIDER=anthropic  # Provider: anthropic (default), openai, ollama, vllm, compatible
+CODEFRAME_LLM_PROVIDER=anthropic  # Provider: anthropic (default), openai, ollama, vllm, compatible, mock
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CLAUDE.md` around lines 243 - 248, Update the documented
CODEFRAME_LLM_PROVIDER options to include the missing "mock" provider so the
README matches supported runtime values; specifically edit the block that lists
provider values (the line containing CODEFRAME_LLM_PROVIDER=anthropic and the
inline comment listing "Provider: anthropic (default), openai, ollama, vllm,
compatible") to add "mock" to that comma-separated list so it reads
"...anthropic (default), openai, ollama, vllm, compatible, mock" and ensure the
example or surrounding text references "mock" where relevant.
Closes #544
Summary
- `get_provider()` in `codeframe/adapters/llm/__init__.py` now routes `openai`, `ollama`, `vllm`, `compatible` → `OpenAIProvider`, reading `CODEFRAME_LLM_MODEL` and `OPENAI_BASE_URL` from env
- `codeframe/core/runtime.py` reads `CODEFRAME_LLM_PROVIDER` (default: `anthropic`) instead of hardcoding `"anthropic"`; the `ANTHROPIC_API_KEY` check is skipped when provider is not anthropic

Test plan

- `get_provider("openai")` → `OpenAIProvider` with `OPENAI_API_KEY` set
- `get_provider("ollama")` / `"vllm"` / `"compatible"` → `OpenAIProvider`
- `CODEFRAME_LLM_MODEL=qwen2.5-coder` sets model on provider
- `OPENAI_BASE_URL=http://localhost:11434/v1` sets base_url
- Default model is `gpt-4o` when `CODEFRAME_LLM_MODEL` not set
- `CODEFRAME_LLM_PROVIDER=openai` with no `ANTHROPIC_API_KEY` → no error about ANTHROPIC
- For the anthropic provider, `ANTHROPIC_API_KEY` is still required
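The runtime-side selection behavior exercised by the test plan can be sketched as a small helper. The helper name `select_provider_type` is hypothetical; the real logic lives inside `execute_agent()` in `runtime.py`.

```python
import os


def select_provider_type(env=None):
    """Return the configured provider type, enforcing the anthropic key guard.

    `env` defaults to os.environ; passing a plain dict makes this testable
    without touching the process environment.
    """
    env = os.environ if env is None else env
    provider_type = env.get("CODEFRAME_LLM_PROVIDER", "anthropic")
    # The ANTHROPIC_API_KEY check only applies to the anthropic provider;
    # openai/ollama/vllm/compatible skip it entirely.
    if provider_type == "anthropic" and not env.get("ANTHROPIC_API_KEY"):
        raise ValueError("ANTHROPIC_API_KEY is required for the anthropic provider")
    return provider_type
```

With this shape, `CODEFRAME_LLM_PROVIDER=openai` and no `ANTHROPIC_API_KEY` selects openai without error, while an unset provider still defaults to anthropic and requires its key.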