
[4.D.2] Provider factory + env var wiring (get_provider, ModelSelector)#551

Merged
frankbria merged 5 commits into main from feature/544-provider-factory-env-wiring
Apr 4, 2026

Conversation

@frankbria
Owner

@frankbria frankbria commented Apr 4, 2026

Closes #544

Summary

  • get_provider() in codeframe/adapters/llm/__init__.py now routes openai, ollama, vllm, compatible → OpenAIProvider, reading CODEFRAME_LLM_MODEL and OPENAI_BASE_URL from env
  • codeframe/core/runtime.py reads CODEFRAME_LLM_PROVIDER (default: anthropic) instead of hardcoding "anthropic"; ANTHROPIC_API_KEY check is skipped when provider is not anthropic
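
A minimal sketch of the factory routing summarized above, assembled from the details in this PR (the provider class here is a toy stand-in; the real constructors, plus the anthropic and mock branches, live in the repo):

```python
import os

# Provider types that all route to the OpenAI-compatible client
_OPENAI_COMPATIBLE = {"openai", "ollama", "vllm", "compatible"}


class OpenAIProvider:
    """Toy stand-in for the real OpenAI-compatible provider."""

    def __init__(self, api_key=None, model="gpt-4o", base_url=None):
        self.api_key = api_key
        self.model = model
        self.base_url = base_url


def get_provider(provider_type: str = "anthropic", **kwargs):
    """Route a provider type to a configured provider instance."""
    if provider_type in _OPENAI_COMPATIBLE:
        return OpenAIProvider(
            # kwargs win; env vars are the fallback
            api_key=kwargs.get("api_key", os.environ.get("OPENAI_API_KEY")),
            model=kwargs.get(
                "model", os.environ.get("CODEFRAME_LLM_MODEL", "gpt-4o")
            ),
            base_url=kwargs.get("base_url", os.environ.get("OPENAI_BASE_URL")),
        )
    raise ValueError(f"Unknown provider type: {provider_type}")  # other branches elided
```

With CODEFRAME_LLM_MODEL=qwen2.5-coder and OPENAI_BASE_URL=http://localhost:11434/v1 exported, get_provider("ollama") would yield a provider pointed at a local Ollama server.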

Test plan

  • get_provider("openai") → OpenAIProvider with OPENAI_API_KEY set
  • get_provider("ollama") / "vllm" / "compatible" → OpenAIProvider
  • CODEFRAME_LLM_MODEL=qwen2.5-coder sets model on provider
  • OPENAI_BASE_URL=http://localhost:11434/v1 sets base_url
  • Default model is gpt-4o when CODEFRAME_LLM_MODEL not set
  • CODEFRAME_LLM_PROVIDER=openai with no ANTHROPIC_API_KEY → no error about ANTHROPIC
  • No env vars set → Anthropic path unchanged, ANTHROPIC_API_KEY still required
  • All 99 adapter + runtime tests pass
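
The env-var cases in this plan can be exercised hermetically with patch.dict so no ambient variables leak in; a sketch of the pattern (the helper mirrors the model-resolution order described in this PR, not the repo's actual code):

```python
import os
from unittest.mock import patch


def resolve_model() -> str:
    # Mirrors the factory's resolution order: env var first, then the gpt-4o default
    return os.environ.get("CODEFRAME_LLM_MODEL", "gpt-4o")


# Override case: the env var wins
with patch.dict(os.environ, {"CODEFRAME_LLM_MODEL": "qwen2.5-coder"}, clear=True):
    assert resolve_model() == "qwen2.5-coder"

# Default case: nothing set, fall back to gpt-4o
with patch.dict(os.environ, {}, clear=True):
    assert resolve_model() == "gpt-4o"
```

Because patch.dict restores the environment on exit, each case is isolated from the developer's shell and from the other cases.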

Summary by CodeRabbit

  • New Features
    • Support for OpenAI-compatible LLMs (OpenAI, Ollama, vLLM, compatible); model, API key, and base URL can be set via environment or call-time options.
  • Bug Fixes
    • Anthropic API key validation now runs only when Anthropic is selected, avoiding needless errors for other providers.
  • Tests
    • Added tests covering provider routing, configuration precedence, and API-key behaviors.
  • Documentation
    • Quickstart and docs updated to document multi-provider LLM selection and related env vars.

…nd runtime (#544)

- get_provider() now accepts openai/ollama/vllm/compatible → OpenAIProvider
- Reads CODEFRAME_LLM_MODEL and OPENAI_BASE_URL from environment
- runtime.py reads CODEFRAME_LLM_PROVIDER (default: anthropic) instead of hardcoding
- ANTHROPIC_API_KEY check is skipped when provider != anthropic
- Tests cover all three paths: anthropic, openai, mock
@coderabbitai
Contributor

coderabbitai Bot commented Apr 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

get_provider() now accepts kwargs and routes OpenAI-compatible types ("openai", "ollama", "vllm", "compatible") to OpenAIProvider with env/kwarg-configured api_key, model, and base_url. The runtime reads CODEFRAME_LLM_PROVIDER to choose the provider for execute_agent(), and Anthropic key validation runs only when Anthropic is selected.

Changes

Cohort / File(s) Summary
Provider factory & tests
codeframe/adapters/llm/__init__.py, tests/adapters/test_llm.py
get_provider(provider_type: str = "anthropic", **kwargs) added; _OPENAI_COMPATIBLE set; OpenAI-compatible types route to OpenAIProvider with api_key, model, base_url sourced from kwargs or env (OPENAI_API_KEY, CODEFRAME_LLM_MODEL default gpt-4o, OPENAI_BASE_URL). Tests added for routing, model/base_url overrides, default model, and API-key behavior.
Runtime selection & tests
codeframe/core/runtime.py, tests/core/test_runtime_adapters.py
execute_agent() reads CODEFRAME_LLM_PROVIDER (default anthropic) and calls get_provider(provider_type). Anthropic ANTHROPIC_API_KEY validation is conditional on selected provider. Tests added to verify provider-driven behavior and that get_provider() receives the chosen provider type.
Docs / Quickstart
CLAUDE.md, docs/QUICKSTART.md
Documentation updated to explain CODEFRAME_LLM_PROVIDER, CODEFRAME_LLM_MODEL, OPENAI_API_KEY, and OPENAI_BASE_URL, and to reframe ANTHROPIC_API_KEY as required only for the Anthropic provider. Tests-only files changed do not alter public APIs.

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Runtime
  participant Env
  participant ProviderFactory
  participant LLMProvider

  User->>Runtime: request execute_agent()
  Runtime->>Env: read CODEFRAME_LLM_PROVIDER
  Env-->>Runtime: provider_type
  Runtime->>ProviderFactory: get_provider(provider_type, **kwargs)
  ProviderFactory->>Env: read OPENAI_API_KEY / CODEFRAME_LLM_MODEL / OPENAI_BASE_URL
  Env-->>ProviderFactory: api_key, model, base_url
  ProviderFactory->>LLMProvider: instantiate (OpenAIProvider or AnthropicProvider)
  LLMProvider-->>Runtime: provider instance
  Runtime->>LLMProvider: execute calls
  LLMProvider-->>Runtime: responses
  Runtime-->>User: results

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes


Poem

🐰 Hop-hop, I scoped the keys and tides,
I routed providers where choice resides,
Env vars hum, models pick their tune,
No hardcoded hops beneath the moon,
A tiny rabbit cheers the provider rune.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title '[4.D.2] Provider factory + env var wiring (get_provider, ModelSelector)' accurately reflects the main changes: factory pattern implementation for provider selection and environment variable integration for LLM provider configuration.
Linked Issues check ✅ Passed The PR implements all core coding requirements from issue #544: extends get_provider() to route OpenAI-compatible providers, configures them via environment variables (CODEFRAME_LLM_PROVIDER, CODEFRAME_LLM_MODEL, OPENAI_API_KEY, OPENAI_BASE_URL), makes ANTHROPIC_API_KEY checks conditional in runtime.py, and adds comprehensive test coverage for all three provider paths.
Out of Scope Changes check ✅ Passed All changes are directly aligned with issue #544 requirements: get_provider() routing logic, OpenAIProvider instantiation, environment variable integration, conditional API key validation, and test coverage. Documentation updates support the implementation. No extraneous changes detected.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


@claude

claude Bot commented Apr 4, 2026

Review: [4.D.2] Provider factory + env var wiring

The core implementation is clean and correct — routing via _OPENAI_COMPATIBLE, env var wiring, and the runtime key-check guard all read well. A few items worth addressing:

Test quality: conditional assertion in test_get_provider_called_with_env_provider_type (tests/core/test_runtime_adapters.py:201):

if mock_get_provider.called:
    assert mock_get_provider.call_args[0][0] == "openai"

If mock_get_provider is never called (e.g., the patched import path does not match where runtime actually imports from), the test passes silently with zero validation. This should be an unconditional assert mock_get_provider.called followed by the call args check. The patch target may also need to be codeframe.core.runtime.get_provider rather than codeframe.adapters.llm.get_provider.

Test quality: test_openai_provider_skips_anthropic_key_check (tests/core/test_runtime_adapters.py:144) catches all ValueErrors that do not mention ANTHROPIC and passes silently. This swallows the OPENAI_API_KEY error from OpenAIProvider.__init__, so the test can pass regardless of whether the auth-skip logic actually works. Mocking get_provider in this test would isolate it to just the auth-check conditional.

Unused variable: patchers = [p.start() for p in patches] at line 185 — patchers is never referenced; the finally block iterates patches directly.

Missing OPENAI_API_KEY validation in runtime: The anthropic path in runtime.py has an early check with an actionable error message. For _OPENAI_COMPATIBLE providers, the error currently surfaces from OpenAIProvider.__init__ with a different message format. A symmetrical check in runtime.py would be consistent — not a blocker, but worth noting.
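
A sketch of that symmetrical check, with local backends exempted as the later commits in this PR do (function name and messages are illustrative, not the repo's actual code):

```python
import os

_OPENAI_COMPATIBLE = {"openai", "ollama", "vllm", "compatible"}
_NO_KEY_REQUIRED = {"ollama", "vllm", "compatible"}  # local/self-hosted backends


def check_provider_credentials(provider_type: str) -> None:
    """Fail fast with an actionable message before constructing a provider."""
    if provider_type == "anthropic" and not os.environ.get("ANTHROPIC_API_KEY"):
        raise ValueError("ANTHROPIC_API_KEY is required for the anthropic provider")
    # Only the hosted OpenAI path truly needs a key up front
    if (provider_type in _OPENAI_COMPATIBLE - _NO_KEY_REQUIRED
            and not os.environ.get("OPENAI_API_KEY")):
        raise ValueError("OPENAI_API_KEY is required for the openai provider")
```

This keeps both error messages in one place in runtime.py, with the same shape, instead of one surfacing from runtime.py and the other from OpenAIProvider.__init__.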

Env var documentation: CODEFRAME_LLM_PROVIDER, CODEFRAME_LLM_MODEL, and OPENAI_BASE_URL are not in the CLAUDE.md env vars table. These should be added here or in a follow-up PR.


The routing logic and the auth-check refactor are correct. The main ask before merge is removing the conditional guard in test_get_provider_called_with_env_provider_type so the test actually fails when the mock is not called.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
codeframe/adapters/llm/__init__.py (1)

55-75: Consider passing api_key via kwargs to support local providers.

The current implementation doesn't forward api_key from kwargs to OpenAIProvider, which would allow callers to explicitly pass a dummy key for local providers that don't need authentication.

Support api_key in kwargs
 def get_provider(provider_type: str = "anthropic", **kwargs) -> LLMProvider:
     """Get a configured LLM provider.

     Args:
         provider_type: Provider type ("anthropic", "openai", "ollama",
             "vllm", "compatible", or "mock"). OpenAI-compatible types are
             all routed to OpenAIProvider.
         **kwargs: Optional overrides passed to the provider constructor.
-            Supported keys: model, base_url.
+            Supported keys: api_key, model, base_url.

     Returns:
         Configured LLMProvider instance

     Raises:
         ValueError: If provider type is unknown
     """
     if provider_type in _OPENAI_COMPATIBLE:
         return OpenAIProvider(
+            api_key=kwargs.get("api_key"),
             model=kwargs.get("model", os.environ.get("CODEFRAME_LLM_MODEL", "gpt-4o")),
             base_url=kwargs.get("base_url", os.environ.get("OPENAI_BASE_URL")),
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@codeframe/adapters/llm/__init__.py` around lines 55 - 75, get_provider
currently routes openai-compatible types to OpenAIProvider but does not forward
an api_key from kwargs, preventing callers from supplying a local/dummy key;
update the OpenAIProvider call inside get_provider (the branch that checks
_OPENAI_COMPATIBLE) to pass api_key=kwargs.get("api_key",
os.environ.get("OPENAI_API_KEY")) (or your project env var if different),
keeping the existing model and base_url logic so callers can override api_key
via kwargs when needed.
tests/core/test_runtime_adapters.py (1)

141-142: Missing @pytest.mark.v2 marker on new test class.

Per coding guidelines, new v2 tests should be marked appropriately.

Add v2 marker
+@pytest.mark.v2
 class TestRuntimeProviderSelection:
     """Tests for CODEFRAME_LLM_PROVIDER env var routing in execute_agent."""

As per coding guidelines: "New v2 Python tests must be marked with @pytest.mark.v2 decorator or pytestmark = pytest.mark.v2"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` around lines 141 - 142, The new test
class TestRuntimeProviderSelection in tests/core/test_runtime_adapters.py is
missing the v2 marker; add the pytest marker by either decorating the class with
`@pytest.mark.v2` above the class definition or adding pytestmark = pytest.mark.v2
at module level so the TestRuntimeProviderSelection class is marked as a v2 test
per guidelines.
tests/adapters/test_llm.py (1)

206-249: New tests are comprehensive but missing @pytest.mark.v2 marker.

The test coverage is thorough, validating:

  • Provider routing for all OpenAI-compatible types (openai, ollama, vllm, compatible)
  • Environment variable overrides (CODEFRAME_LLM_MODEL, OPENAI_BASE_URL)
  • Default model fallback (gpt-4o)

However, per coding guidelines, new tests should be marked with @pytest.mark.v2. Consider adding the marker at class or module level.

Add pytest marker for v2 tests
 class TestGetProvider:
     """Tests for get_provider factory function."""
+
+    pytestmark = pytest.mark.v2

     def test_get_mock_provider(self):

Or at module level if all tests in this file are v2:

pytestmark = pytest.mark.v2

As per coding guidelines: "New v2 Python tests must be marked with @pytest.mark.v2 decorator or pytestmark = pytest.mark.v2"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/adapters/test_llm.py` around lines 206 - 249, The new tests (e.g.,
test_get_openai_provider, test_get_ollama_provider, test_get_vllm_provider,
test_get_compatible_provider, test_codeframe_llm_model_sets_model,
test_openai_base_url_env_var, test_openai_default_model_is_gpt4o) need the
pytest v2 marker; add pytest.mark.v2 to them either by decorating the test
functions or the containing test class, or add a module-level marker by defining
pytestmark = pytest.mark.v2 near the top of the file so all tests in this file
are marked v2.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@codeframe/adapters/llm/__init__.py`:
- Around line 71-75: get_provider currently builds an OpenAIProvider for any
provider_type in _OPENAI_COMPATIBLE but never passes api_key from kwargs,
causing OpenAIProvider (and its validation) to require OPENAI_API_KEY even for
local providers like "ollama" and "vllm"; fix by accepting an "api_key" kwarg
(and documenting it in the get_provider docstring) and forward it into the
OpenAIProvider constructor (e.g., pass api_key=kwargs.get("api_key",
os.environ.get("OPENAI_API_KEY"))) so callers can supply a key or leave it blank
for local backends, or alternatively add provider-specific bypass logic in
get_provider for "ollama" and "vllm" to skip API key validation inside
OpenAIProvider.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f986c040-8bf2-4881-9e95-0a0c5c6f0fbf

📥 Commits

Reviewing files that changed from the base of the PR and between c04458b and b1ca37d.

📒 Files selected for processing (4)
  • codeframe/adapters/llm/__init__.py
  • codeframe/core/runtime.py
  • tests/adapters/test_llm.py
  • tests/core/test_runtime_adapters.py

Comment thread codeframe/adapters/llm/__init__.py
Comment thread tests/core/test_runtime_adapters.py Outdated
Comment on lines +186 to +202
try:
    # Patch at the source module level since runtime imports locally
    with patch("codeframe.adapters.llm.get_provider") as mock_get_provider:
        mock_get_provider.return_value = MagicMock()
        with patch(
            "codeframe.core.engine_registry.get_builtin_adapter",
            create=True,
        ) as mock_adapter:
            mock_adapter.return_value = MagicMock()
            mock_adapter.return_value.run.return_value = iter([])
            with patch.dict("os.environ", env, clear=True):
                try:
                    execute_agent(mock_workspace, mock_run, engine="react")
                except Exception:
                    pass
                if mock_get_provider.called:
                    assert mock_get_provider.call_args[0][0] == "openai"
Contributor


⚠️ Potential issue | 🟡 Minor

Patch target may not intercept the actual import; assertion is weak.

Two concerns with this test:

  1. Patch location: The patch at codeframe.adapters.llm.get_provider may not intercept the call since runtime.py uses from codeframe.adapters.llm import get_provider. The function is looked up in runtime's namespace, so the patch should target codeframe.core.runtime.get_provider.

  2. Silent pass on uncalled mock: Lines 201-202 only assert if mock_get_provider.called is True. If the patch fails to intercept (due to issue #1), the test passes silently without verifying anything.

Suggested fix
-            with patch("codeframe.adapters.llm.get_provider") as mock_get_provider:
+            with patch("codeframe.core.runtime.get_provider") as mock_get_provider:
                 mock_get_provider.return_value = MagicMock()
                 with patch(
                     "codeframe.core.engine_registry.get_builtin_adapter",
                     create=True,
                 ) as mock_adapter:
                     mock_adapter.return_value = MagicMock()
                     mock_adapter.return_value.run.return_value = iter([])
                     with patch.dict("os.environ", env, clear=True):
                         try:
                             execute_agent(mock_workspace, mock_run, engine="react")
                         except Exception:
                             pass
-                        if mock_get_provider.called:
-                            assert mock_get_provider.call_args[0][0] == "openai"
+                        assert mock_get_provider.called, "get_provider was not called"
+                        assert mock_get_provider.call_args[0][0] == "openai"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` around lines 186 - 202, The test
currently patches codeframe.adapters.llm.get_provider which won't intercept the
name imported into runtime; update the patch target to patch
"codeframe.core.runtime.get_provider" (the symbol looked up by execute_agent)
and keep the existing patch for
codeframe.core.engine_registry.get_builtin_adapter; also remove the conditional
silent check and instead assert that mock_get_provider.was_called (or
mock_get_provider.called) is True and that mock_get_provider.call_args[0][0] ==
"openai" after calling execute_agent to ensure the patch actually intercepted
the call.

- get_provider() for local providers (ollama, vllm, compatible) no longer
  requires OPENAI_API_KEY; uses 'not-required' placeholder for openai SDK
- Pass api_key through kwargs for caller override support
- Strengthen test_get_provider_called_with_env_provider_type assertion:
  unconditionally assert mock was called rather than conditionally
- Add tests: ollama/vllm work without OPENAI_API_KEY, openai still requires it
@claude

claude Bot commented Apr 4, 2026

Follow-up Review

Good progress since the last round. Two of the main prior concerns are resolved:

  • api_key kwarg forwarded: OpenAIProvider now receives api_key from kwargs or env, with the "not-required" fallback for local providers. Clean.
  • Conditional guard removed: test_get_provider_called_with_env_provider_type now has an unconditional assert mock_get_provider.called. Good.

Three items from previous reviews still open:

1. Patch target is still wrong (tests/core/test_runtime_adapters.py ~line 188)

with patch("codeframe.adapters.llm.get_provider") as mock_get_provider:

This patches the function in the adapters.llm namespace, but runtime.py imports it with from codeframe.adapters.llm import get_provider (or similar), so the name in the runtime module's namespace is a separate binding. The mock will never intercept the actual call. The target must be codeframe.core.runtime.get_provider. CodeRabbit also flagged this.

2. Unused patchers variable (line 185)

patchers = [p.start() for p in patches]  # patchers is never referenced

The finally block iterates patches directly. This is dead code — either use patchers in the finally block, or drop the assignment.

3. test_openai_provider_skips_anthropic_key_check still swallows errors silently

except Exception:
    pass  # Other errors (db, OPENAI_API_KEY, etc.) are fine

If execute_agent raises ValueError: OPENAI_API_KEY (which it will, given the current provider logic), the bare except Exception: pass swallows it and the test passes even though the assertion intent failed. Mocking get_provider (at the correct path — see #1) would sidestep the OPENAI key error and let this test be deterministic.

Still missing: @pytest.mark.v2 on TestRuntimeProviderSelection and the new TestGetProvider methods — per CLAUDE.md convention. CodeRabbit flagged this too.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/core/test_runtime_adapters.py (1)

1-6: ⚠️ Potential issue | 🟡 Minor

Missing pytestmark = pytest.mark.v2 for v2 test module.

This test file is in tests/core/ and adds new tests, but lacks the required v2 marker at module level.

Proposed fix
 """Tests for runtime execute_agent with external engine adapters."""
 
 import pytest
 from unittest.mock import MagicMock, patch
 
+pytestmark = pytest.mark.v2
+
 from codeframe.core.adapters.agent_adapter import AgentResult

As per coding guidelines: tests/core/**/*.py: v2 tests in core should be marked with @pytest.mark.v2 or pytestmark = pytest.mark.v2 at module level.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` around lines 1 - 6, This test module is
missing the v2 marker; add a module-level marker by defining pytestmark =
pytest.mark.v2 at the top of the file (after imports or at module scope) so
pytest treats these tests as v2; reference the pytestmark symbol to locate where
to add it in tests/core/test_runtime_adapters.py and ensure pytest is imported
if not already.
🧹 Nitpick comments (2)
tests/core/test_runtime_adapters.py (2)

150-161: Redundant os.environ.pop after patch.dict(..., clear=True).

Using clear=True in patch.dict already removes all environment variables within the context. The subsequent os.environ.pop("ANTHROPIC_API_KEY", None) on line 153 is unnecessary.

Suggested simplification
         env = {"CODEFRAME_LLM_PROVIDER": "openai"}
         with patch.dict("os.environ", env, clear=True):
-            import os
-            os.environ.pop("ANTHROPIC_API_KEY", None)
             try:
                 execute_agent(mock_workspace, mock_run, engine="react")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` around lines 150 - 161, The
os.environ.pop("ANTHROPIC_API_KEY", None) call is redundant because
patch.dict("os.environ", env, clear=True) already clears env vars; remove that
pop call in the test block that calls execute_agent(mock_workspace, mock_run,
engine="react") so the environment setup relies solely on patch.dict and the
test remains correct and simpler.

169-175: Same redundant os.environ.pop pattern.

Lines 170-172 have the same redundancy—patch.dict({}, clear=True) already clears the environment.

Suggested simplification
         with patch.dict("os.environ", {}, clear=True):
-            import os
-            os.environ.pop("ANTHROPIC_API_KEY", None)
-            os.environ.pop("CODEFRAME_LLM_PROVIDER", None)
-
             with pytest.raises(ValueError, match="ANTHROPIC_API_KEY"):
                 execute_agent(mock_workspace, mock_run, engine="react")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` around lines 169 - 175, The test uses
patch.dict("os.environ", {}, clear=True) which already clears env vars, so
remove the redundant os.environ.pop("ANTHROPIC_API_KEY", None) and
os.environ.pop("CODEFRAME_LLM_PROVIDER", None) calls in the test block; keep the
patch.dict and the call to execute_agent(mock_workspace, mock_run,
engine="react") (and retain the local import os only if other statements in the
block still reference os), ensuring the ValueError assertion for
"ANTHROPIC_API_KEY" still runs.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3bfdbac5-751e-4e3a-bcfd-692bb3031d02

📥 Commits

Reviewing files that changed from the base of the PR and between b1ca37d and d1d2736.

📒 Files selected for processing (3)
  • codeframe/adapters/llm/__init__.py
  • tests/adapters/test_llm.py
  • tests/core/test_runtime_adapters.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/adapters/test_llm.py
  • codeframe/adapters/llm/__init__.py

- Add @pytest.mark.v2 to TestRuntimeProviderSelection class and new
  TestGetProvider methods per CLAUDE.md convention
- Remove unused `patchers` variable in test_get_provider_called_with_env_provider_type
- Add comment explaining why codeframe.adapters.llm.get_provider is the
  correct patch target (local function import reads module.__dict__ at call time)
@claude

claude Bot commented Apr 4, 2026

Follow-up review

Good progress — the main issues from my first pass were addressed:

  • Conditional guard on mock_get_provider.called replaced with an unconditional assert
  • Unused patchers assignment (the list-comprehension result was never referenced) removed
  • @pytest.mark.v2 added to TestRuntimeProviderSelection
  • api_key forwarding via kwargs implemented in get_provider

Two remaining concerns:

Patch target in test_get_provider_called_with_env_provider_type — the test patches codeframe.adapters.llm.get_provider. If runtime.py does a top-level from codeframe.adapters.llm import get_provider, that binding will not be intercepted and the mock will never be called, causing the unconditional assert to fail (better than silent pass before, but still broken). The standard fix is to patch at the use site: codeframe.core.runtime.get_provider. The comment in the test argues source-level patching works due to lazy imports inside the function body — if that is actually how runtime.py is structured, the comment is correct and this is fine. Worth a quick verify: check whether runtime.py imports get_provider at module top-level or inside execute_agent.
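
The distinction can be demonstrated with two throwaway modules; only the use-site patch intercepts a name bound by a top-level from-import (module names here are synthetic, not the repo's):

```python
import sys
import types
from unittest.mock import patch

# A "source" module exposing get_provider...
src = types.ModuleType("llm_src")
src.get_provider = lambda name: f"real:{name}"
sys.modules["llm_src"] = src

# ...and a "runtime" module that copies the binding at import time
runtime = types.ModuleType("rt")
sys.modules["rt"] = runtime
exec(
    "from llm_src import get_provider\n"
    "def execute():\n"
    "    return get_provider('openai')\n",
    runtime.__dict__,
)

# Patching the source module does NOT intercept runtime's copied binding:
with patch("llm_src.get_provider", return_value="mocked"):
    assert runtime.execute() == "real:openai"

# Patching at the use site does:
with patch("rt.get_provider", return_value="mocked"):
    assert runtime.execute() == "mocked"
```

If instead the import happened inside execute()'s body (the function-local import the test's comment claims for runtime.py), the source-level patch would work, because the name is looked up in the source module at call time. That is exactly the question to verify against runtime.py.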

test_openai_provider_skips_anthropic_key_check — the broad except Exception: pass still swallows all non-ANTHROPIC exceptions silently. This structural issue was noted in my first review. The test could pass even if the auth-guard conditional is entirely absent, as long as a different exception fires first. Mocking get_provider in this test would isolate it to just the guard logic.

Neither is blocking if CI is green and the patch-target question is confirmed, but the test reliability concern is real.

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
tests/core/test_runtime_adapters.py (1)

186-186: Prefer a for-loop over list comprehension for side effects.

Using a list comprehension to call p.start() discards the return values and allocates an unused list. This is inconsistent with the pattern used earlier in this file (lines 70-72) which properly iterates with a for-loop.

♻️ Suggested refactor
-        [p.start() for p in patches]
+        for p in patches:
+            p.start()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` at line 186, Replace the list
comprehension that calls p.start() for side effects with an explicit for-loop to
avoid allocating an unused list and match the earlier pattern; locate the
expression "[p.start() for p in patches]" in tests/core/test_runtime_adapters.py
and change it to iterate with "for p in patches: p.start()" so the calls are
made without creating a temporary list.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 072073a9-f8eb-4d51-b6d6-f90c05c5b726

📥 Commits

Reviewing files that changed from the base of the PR and between d1d2736 and b86726b.

📒 Files selected for processing (2)
  • tests/adapters/test_llm.py
  • tests/core/test_runtime_adapters.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/adapters/test_llm.py

Mock get_provider in test_openai_provider_skips_anthropic_key_check so
the test is isolated to just the ANTHROPIC_API_KEY guard logic and does
not depend on OPENAI_API_KEY availability. Addresses claude-review concern
about broad exception swallowing obscuring test intent.
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
tests/core/test_runtime_adapters.py (2)

157-157: Avoid list comprehension for side effects.

Using a list comprehension purely for side effects ([p.start() for p in patches]) allocates an unnecessary list that's immediately discarded. A simple for loop is more idiomatic and clearer:

Proposed fix
-        [p.start() for p in patches]
+        for p in patches:
+            p.start()

Same applies to line 199.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` at line 157, Replace the list
comprehension used for side effects with an explicit loop: locate the use of
"[p.start() for p in patches]" (variable "patches" and the call "p.start()") in
tests/core/test_runtime_adapters.py and change it to a standard for loop (for p
in patches: p.start()) to avoid allocating an unnecessary list; do the same for
the analogous usage at the later location (the call sequence around line with
"p.stop()" or similar) so both start/stop invocations use explicit for loops.

159-172: Missing assertion that mock was actually called.

The test mocks get_provider to isolate the auth guard logic, but doesn't verify the mock was invoked. If the patch location is incorrect (the same concern raised in a prior review), the test could pass without the mock ever being used.

Consider adding an assertion to confirm the mock was called, similar to what's done in test_get_provider_called_with_env_provider_type:

Proposed fix
                     try:
                         execute_agent(mock_workspace, mock_run, engine="react")
                     except ValueError as exc:
                         assert "ANTHROPIC_API_KEY" not in str(exc), (
                             f"Should not require ANTHROPIC_API_KEY when provider=openai: {exc}"
                         )
                     except Exception:
                         pass  # Non-ValueError errors are outside the scope of this test
+                    # Verify mock was actually used to confirm test isolation
+                    assert mock_get_provider.called, "get_provider mock was not called"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/test_runtime_adapters.py` around lines 159 - 172, Add an assertion
that the patched get_provider was actually invoked to ensure the patch location
is correct: after calling execute_agent(mock_workspace, mock_run,
engine="react") (still inside the with
patch("codeframe.adapters.llm.get_provider") as mock_get_provider: block),
assert mock_get_provider.assert_called() or
mock_get_provider.assert_called_once() so the test fails if the mock was never
used; reference the mocked symbol get_provider and the call site execute_agent
to locate where to insert the assertion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 62a4687e-cbec-407e-90dd-9232cff0a768

📥 Commits

Reviewing files that changed from the base of the PR and between b86726b and 82026e0.

📒 Files selected for processing (1)
  • tests/core/test_runtime_adapters.py

@claude

claude Bot commented Apr 4, 2026

Review - Round 4

Resolving my two remaining open items from round 3.

Patch target - resolved

Verified codeframe/core/runtime.py:631 does a lazy import inside execute_agent():

from codeframe.adapters.llm import get_provider

A lazy import reads from the module's dict at call time, so patching codeframe.adapters.llm.get_provider is the correct target. The test comment is accurate. No change needed.

except Exception: pass - resolved

test_openai_provider_skips_anthropic_key_check - The guard intentionally narrows scope to the ANTHROPIC_API_KEY check. The comment clarifies this, and get_provider is mocked so unrelated runtime paths are isolated. Acceptable.

test_get_provider_called_with_env_provider_type - The except Exception: pass block on lines 215-216 is followed by unconditional assertions on lines 217-218, so those assertions still run regardless of whether the call raises. No silent pass possible.


All prior concerns are now addressed. The implementation is clean: provider routing is correct, env var wiring is consistent, and the test suite covers the key cases. LGTM - ready to merge.
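The no-silent-pass property described above can be sketched with a hypothetical helper (not the actual test code): the broad except cannot hide a missing call, because the mock assertions run unconditionally after the try/except block.

```python
from unittest.mock import MagicMock

def run_pattern(call):
    """Run `call` with a mocked provider factory; tolerate downstream
    errors, but unconditionally assert the factory was invoked."""
    mock_get_provider = MagicMock(return_value=object())
    try:
        call(mock_get_provider)
    except Exception:
        pass  # non-fatal errors are outside the scope of this check
    # These run whether or not the call raised:
    assert mock_get_provider.called
    return mock_get_provider.call_args

def flaky(mock):
    mock("openai")  # provider factory invoked before the failure
    raise RuntimeError("unrelated downstream failure")

args = run_pattern(flaky)
assert args.args == ("openai",)
```

Even though `flaky` raises, the assertion on `mock_get_provider.called` still executes and the test reports the call args correctly.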

@claude

claude Bot commented Apr 4, 2026

Follow-up Review: [4.D.2] Provider factory + env var wiring

All items raised in my previous review have been addressed. Here's the status:

✅ Conditional assertion removed: test_get_provider_called_with_env_provider_type now uses an unconditional assert mock_get_provider.called followed by the call-args check. The patch target (codeframe.adapters.llm.get_provider) is also correct here because runtime.py does a local import inside the function body (from codeframe.adapters.llm import get_provider at line 631), so the binding resolves from the module __dict__ at call time — the active patch is picked up correctly.

✅ Unused variable removed: [p.start() for p in patches] is no longer assigned to patchers.

✅ Env var documentation: CODEFRAME_LLM_PROVIDER, CODEFRAME_LLM_MODEL, OPENAI_API_KEY, and OPENAI_BASE_URL are now documented in both CLAUDE.md and docs/QUICKSTART.md.

✅ api_key kwarg forwarding: get_provider now accepts and forwards api_key from kwargs, with a sensible "not-required" fallback for local providers (ollama, vllm, compatible). Tests for the no-key path are included.

✅ @pytest.mark.v2 markers: applied to all new tests in both tests/adapters/test_llm.py and tests/core/test_runtime_adapters.py.

One minor observation (not a blocker): test_openai_provider_skips_anthropic_key_check still has a bare except Exception: pass for non-ValueError exceptions. With get_provider now mocked, the test is well-isolated from authentication side-effects, so this is acceptable. If the test is revisited later, replacing the broad catch with targeted assertions on execution path would make intent clearer.

The implementation is correct, tests are solid, and documentation is current. Good to go.
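The routing and fallback behavior summarized above can be sketched as follows. This is a hypothetical reconstruction with illustrative signatures; the real get_provider in codeframe/adapters/llm/__init__.py may differ.

```python
import os

# Local providers don't validate API keys, so a placeholder suffices
LOCAL_PROVIDERS = {"ollama", "vllm", "compatible"}

class OpenAIProvider:
    """Illustrative stand-in for the OpenAI-compatible provider class."""
    def __init__(self, model, api_key, base_url=None):
        self.model, self.api_key, self.base_url = model, api_key, base_url

def get_provider(provider_type, **kwargs):
    # Call-time kwargs take precedence over environment variables
    model = kwargs.get("model") or os.getenv("CODEFRAME_LLM_MODEL", "gpt-4o")
    api_key = kwargs.get("api_key") or os.getenv("OPENAI_API_KEY")
    if api_key is None and provider_type in LOCAL_PROVIDERS:
        api_key = "not-required"  # local endpoints ignore the key
    base_url = kwargs.get("base_url") or os.getenv("OPENAI_BASE_URL")
    if provider_type in {"openai"} | LOCAL_PROVIDERS:
        return OpenAIProvider(model=model, api_key=api_key, base_url=base_url)
    raise ValueError(f"unknown provider: {provider_type}")

p = get_provider("ollama", base_url="http://localhost:11434/v1")
```

The key design point the review highlights: the "not-required" fallback is applied only for local providers, so a missing OPENAI_API_KEY still fails for the hosted openai path while local Ollama/vLLM setups work with no key at all.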

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3e4787dc-fa15-41e9-8823-e2b706f62054

📥 Commits

Reviewing files that changed from the base of the PR and between 82026e0 and 468707b.

📒 Files selected for processing (2)
  • CLAUDE.md
  • docs/QUICKSTART.md
✅ Files skipped from review due to trivial changes (1)
  • docs/QUICKSTART.md

Comment thread CLAUDE.md
Comment on lines +243 to +248
# LLM Provider selection (multi-provider support)
CODEFRAME_LLM_PROVIDER=anthropic # Provider: anthropic (default), openai, ollama, vllm, compatible
CODEFRAME_LLM_MODEL=gpt-4o # Model override (used with openai/ollama/vllm/compatible)
OPENAI_API_KEY=sk-... # Required for openai provider; not needed for local providers
OPENAI_BASE_URL=http://localhost:11434/v1 # Base URL override (for ollama, vllm, or custom endpoints)

Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify all valid provider types in get_provider function

# Search for provider type handling in get_provider
rg -nP -A2 'def get_provider|provider_type ==|provider_type in' codeframe/adapters/llm/__init__.py

Repository: frankbria/codeframe

Length of output: 539


🏁 Script executed:

# Verify exact content and line numbers in CLAUDE.md
sed -n '240,250p' CLAUDE.md | cat -n

Repository: frankbria/codeframe

Length of output: 764


🏁 Script executed:

# Search for other mentions of provider types in documentation files
rg -n 'anthropic.*openai.*ollama|Provider.*anthropic|CODEFRAME_LLM_PROVIDER' CLAUDE.md README.md 2>/dev/null | head -20

Repository: frankbria/codeframe

Length of output: 180


Add "mock" to the list of supported LLM providers.

Line 244 documents five provider values but the code accepts a sixth valid type: "mock". Since this is developer documentation, the supported provider list should be complete.

Suggested fix
-CODEFRAME_LLM_PROVIDER=anthropic      # Provider: anthropic (default), openai, ollama, vllm, compatible
+CODEFRAME_LLM_PROVIDER=anthropic      # Provider: anthropic (default), openai, ollama, vllm, compatible, mock
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CLAUDE.md` around lines 243 - 248, Update the documented
CODEFRAME_LLM_PROVIDER options to include the missing "mock" provider so the
README matches supported runtime values; specifically edit the block that lists
provider values (the line containing CODEFRAME_LLM_PROVIDER=anthropic and the
inline comment listing "Provider: anthropic (default), openai, ollama, vllm,
compatible") to add "mock" to that comma-separated list so it reads
"...anthropic (default), openai, ollama, vllm, compatible, mock" and ensure the
example or surrounding text references "mock" where relevant.
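Taken together, the env vars quoted from CLAUDE.md above would be wired like this for a local Ollama endpoint (values illustrative, per the documented defaults):

```shell
# Route LLM calls through a local OpenAI-compatible server
export CODEFRAME_LLM_PROVIDER=ollama
export CODEFRAME_LLM_MODEL=qwen2.5-coder
export OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY intentionally unset: not needed for local providers
```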

@frankbria frankbria merged commit 8cc7b4f into main Apr 4, 2026
14 checks passed