feat(mtls): add certificate lifecycle API and DB schema (#146)#387
Subhajitdas99 wants to merge 24 commits into GetBindu:main
Conversation
…safe task handling
- Add agent_certificates and certificate_audit_log tables with Alembic migration
- Add CertificateData, CertificateIssueParams, CertificateRenewParams, CertificateRevokeParams to protocol/types.py
- Add /api/v1/certificates/issue, /renew, /revoke endpoints with local CA signing
- Register certificate routes in BinduApplication (opt-in via mtls_enabled setting)

Part of GetBindu#146 - mTLS Transport Layer Security
d54f2a6 to c8bb25b
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds mTLS certificate lifecycle (issue/renew/revoke) with CA handling, DB persistence and audit logs, and Hydra JWKS binding; a semantic-memory subsystem (embeddings, in-memory store, retriever, example, and tests); an Alembic migration and schema; an AgentMesh README and Docker Compose file; protocol TypedDicts; and a .gitignore edit containing unresolved Git conflict markers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client/Endpoint
    participant CertSvc as Certificate Service
    participant CA as Local CA
    participant DB as Database
    participant Hydra as HydraClient
    Client->>CertSvc: POST /certificates (issue/renew/revoke) with agent_did, CSR/fingerprint
    CertSvc->>CA: load_or_create_local_ca()
    CA-->>CertSvc: CA keypair & cert
    CertSvc->>CA: sign_csr / compute fingerprint
    CA-->>CertSvc: signed cert PEM / fingerprint
    CertSvc->>DB: persist agent_certificates state (insert/update)
    DB-->>CertSvc: confirmation
    CertSvc->>Hydra: bind/unbind fingerprint (update JWKS)
    Hydra-->>CertSvc: success/failure
    CertSvc->>DB: insert certificate_audit_log event
    DB-->>CertSvc: audit confirmed
    CertSvc-->>Client: return JSON response (cert data / status)
```

```mermaid
sequenceDiagram
    participant App as Application
    participant EmbSvc as Embeddings Service
    participant MemStore as Memory Store
    participant Retriever as Retriever
    App->>EmbSvc: get_embedding(text)
    EmbSvc->>EmbSvc: call OpenRouter or return 1536-zero fallback
    EmbSvc-->>App: embedding vector
    App->>MemStore: add_memory(text, embedding, agent_id)
    MemStore-->>App: stored
    App->>Retriever: query_memory(query, top_k)
    Retriever->>EmbSvc: get_embedding(query)
    EmbSvc-->>Retriever: query embedding
    Retriever->>MemStore: get_memories()
    MemStore-->>Retriever: memory list
    Retriever->>Retriever: compute cosine similarities, rank
    Retriever-->>App: top-k results
```
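The retriever flow in the second diagram (embed the query, fetch memories, rank by cosine similarity) can be sketched in a few lines. The function and record shapes below are illustrative stand-ins, not the actual API of bindu/extensions/semantic_memory/:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity with a zero-norm guard."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_memories(
    query_embedding: list[float],
    memories: list[dict],
    top_k: int,
) -> list[tuple[float, dict]]:
    """Score every stored memory against the query and keep the top_k."""
    scored = [
        (cosine_similarity(query_embedding, m["embedding"]), m) for m in memories
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

memories = [
    {"text": "likes tea", "embedding": [1.0, 0.0]},
    {"text": "likes coffee", "embedding": [0.0, 1.0]},
    {"text": "drinks tea daily", "embedding": [0.9, 0.1]},
]
top = rank_memories([1.0, 0.0], memories, top_k=2)
print([m["text"] for _, m in top])  # the two tea-related memories rank first
```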
Estimated Code Review Effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 5 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
.gitignore (1)
211-229: ⚠️ Potential issue | 🔴 Critical

Unresolved git conflict markers must be removed before merge.

The .gitignore file contains unresolved git conflict markers (`<<<<<<< HEAD`, `=======`, `>>>>>>> upstream/main`). These will be treated as literal text patterns, breaking git's ignore functionality and indicating an incomplete merge resolution.

🔧 Proposed fix to resolve the conflict

```diff
 postman/*
-<<<<<<< HEAD
-AI-Agents/
-=======
 node_modules/
 **/node_modules/
 # TypeScript SDK
 sdks/typescript/dist/
 sdks/typescript/node_modules/
 # Kotlin SDK
 sdks/kotlin/build/
 sdks/kotlin/.gradle/
 sdks/kotlin/bin/
 examples/gateway_test_fleet/pids/
 examples/gateway_test_fleet/logs/
 examples/gateway_test_fleet/.fleet.env
->>>>>>> upstream/main
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 211-229, remove the unresolved Git conflict markers (<<<<<<< HEAD, =======, >>>>>>> upstream/main) and merge the two sections into a single clean .gitignore entry set: keep the intended patterns such as AI-Agents/, node_modules/, **/node_modules/, sdks/typescript/dist/, sdks/typescript/node_modules/, sdks/kotlin/build/, sdks/kotlin/.gradle/, sdks/kotlin/bin/, and the examples/... pids/logs/.fleet.env lines without duplication or conflict markers so the file is valid and functions as intended.

bindu/server/applications.py (3)
106-150: ⚠️ Potential issue | 🟠 Major

Narrow `manifest` before X402 setup.

CI is right: `manifest` is optional, but `_create_payment_requirements()`, `_setup_middleware()`, and `_setup_payment_session_manager()` require an `AgentManifest`. Guard X402 setup on `manifest is not None` so manifest-less apps still type-check and start.

🛠️ Proposed fix

```diff
-        x402_ext = get_x402_extension_from_capabilities(manifest)
+        x402_ext = (
+            get_x402_extension_from_capabilities(manifest)
+            if manifest is not None
+            else None
+        )
         payment_requirements_for_middleware = (
             self._create_payment_requirements(x402_ext, manifest, resource_suffix="/")
-            if x402_ext
+            if x402_ext and manifest is not None
             else None
         )
@@
-        if x402_ext and payment_requirements_for_middleware:
+        if x402_ext and payment_requirements_for_middleware and manifest is not None:
             self._setup_payment_session_manager(
                 manifest, payment_requirements_for_middleware
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/applications.py` around lines 106 - 150, The code currently uses manifest without guaranteeing it's present when setting up X402; narrow the guard so _create_payment_requirements, _setup_middleware, and _setup_payment_session_manager are only called when manifest is not None and x402_ext is truthy. Concretely, change the x402_ext checks to also require manifest (e.g., if x402_ext and manifest:), compute payment_requirements_for_middleware only when manifest is present, pass the concrete AgentManifest into _create_payment_requirements and _setup_payment_session_manager, and call _setup_middleware with manifest only under the same guard so apps without a manifest still type-check and start.
525-532: ⚠️ Potential issue | 🟠 Major

Validate `amount` before converting it.

`opt.get("amount")` can be `None`, and CI shows `process_price_to_atomic_amount()` does not accept `None`. Return a configuration error before conversion.

🛠️ Proposed fix

```diff
     for opt in options:
         amount = opt.get("amount")
         network = opt.get("network") or app_settings.x402.default_network
         pay_to_address = opt.get("pay_to_address") or x402_ext.pay_to_address
+
+        if amount is None:
+            raise ValueError("X402 payment option is missing required amount")
         max_amount_required, asset_address, eip712_domain = (
             process_price_to_atomic_amount(amount, network)
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/applications.py` around lines 525 - 532, The code calls process_price_to_atomic_amount(amount, network) without validating amount; if opt.get("amount") is None this will raise—add a guard in the loop over options to validate amount (e.g., if amount is None or not a valid numeric/string price) and raise/return a configuration error before calling process_price_to_atomic_amount; update the loop that assigns amount, network, pay_to_address (in applications.py) to check amount and produce a clear error message referencing the failing option index or identifier so callers can fix the config.
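A minimal sketch of the guard described above, extended to flag non-numeric values as well. The option dict shape and the error wording are assumptions, not the project's actual config schema:

```python
def validate_amounts(options: list[dict]) -> None:
    """Raise a clear configuration error for any option with a bad amount."""
    for i, opt in enumerate(options):
        amount = opt.get("amount")
        if amount is None:
            raise ValueError(f"X402 payment option {i} is missing required amount")
        # Reject values that cannot be parsed as a price before conversion.
        try:
            float(amount)
        except (TypeError, ValueError):
            raise ValueError(f"X402 payment option {i} has invalid amount: {amount!r}")

validate_amounts([{"amount": "0.10"}])  # passes silently
try:
    validate_amounts([{"amount": "0.10"}, {"network": "base"}])
except ValueError as err:
    print(err)  # X402 payment option 1 is missing required amount
```

Referencing the failing option index in the message makes the misconfigured entry easy to locate, which is what the review asks for.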
172-223: ⚠️ Potential issue | 🔴 Critical

Add or export `healthz_endpoint` before registering `/healthz`.

The import on line 176 of `applications.py` references `healthz_endpoint`, but this symbol is not defined in `bindu/server/endpoints/health.py`. Only `health_endpoint` exists. Either define `healthz_endpoint` as a separate function in the health module, or alias it to `health_endpoint` if both endpoints should use the same implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/applications.py` around lines 172 - 223, The import in applications.py references healthz_endpoint but only health_endpoint exists; fix by exporting/defining healthz_endpoint in the health module (or alias it to health_endpoint) so both symbols exist: add a function named healthz_endpoint (or set healthz_endpoint = health_endpoint) in the endpoints health module and ensure it is included in that module's exports so the applications.py import succeeds when registering the "/healthz" route.
🧹 Nitpick comments (2)
bindu/server/endpoints/certificates.py (1)
475-477: Use the public storage pattern instead of `app._storage`.

These endpoints reach into the private `_storage` attribute. Use `app.storage` so the configured storage backend remains the single access path.

As per coding guidelines, `bindu/server/**/*.py`: Use Storage pattern via `app.storage` which is auto-configured from settings, accessing `StorageBackend` (MemoryStorage or PostgresStorage).

Also applies to: 500-503, 529-532
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/certificates.py` around lines 475-477, the code is accessing the private storage attribute app._storage; change these to use the public storage pattern app.storage so the configured StorageBackend is used. Replace uses of app._storage.connection() with app.storage.connection() in this block where HydraClient is created and issue_certificate(agent_did, csr_pem, conn, hydra_client) is called, and make the same substitution for the other occurrences noted (around the blocks that create HydraClient and call issue_certificate or otherwise use conn at the later ranges). Ensure you only change the attribute access (from _storage to storage) and keep existing variable names like conn, hydra_client, agent_did, csr_pem and the call to issue_certificate intact.

bindu/common/protocol/types.py (1)
1689-1728: Use `A2A_MODEL_CONFIG` for all certificate TypedDicts to maintain consistent validation behavior.

The new certificate TypedDicts at lines 1689, 1700, 1714, and 1728 use `ConfigDict(alias_generator=to_camel)` without `populate_by_name=True`, which diverges from every other TypedDict in this module and will cause Pydantic to reject snake_case field names like `agent_did` during validation. Reuse the shared `A2A_MODEL_CONFIG` (defined at line 62) to accept both camelCase wire aliases and Python field names, consistent with the rest of the protocol module.

♻️ Proposed fix

```diff
-@pydantic.with_config(ConfigDict(alias_generator=to_camel))
+@pydantic.with_config(A2A_MODEL_CONFIG)
 class CertificateIssueParams(TypedDict):

-@pydantic.with_config(ConfigDict(alias_generator=to_camel))
+@pydantic.with_config(A2A_MODEL_CONFIG)
 class CertificateRenewParams(TypedDict):

-@pydantic.with_config(ConfigDict(alias_generator=to_camel))
+@pydantic.with_config(A2A_MODEL_CONFIG)
 class CertificateRevokeParams(TypedDict):

-@pydantic.with_config(ConfigDict(alias_generator=to_camel))
+@pydantic.with_config(A2A_MODEL_CONFIG)
 class CertificateData(TypedDict):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/common/protocol/types.py` around lines 1689 - 1728, The three certificate TypedDicts CertificateIssueParams, CertificateRenewParams, and CertificateRevokeParams currently use pydantic.with_config(ConfigDict(alias_generator=to_camel)); change them to use the shared A2A_MODEL_CONFIG instead so Pydantic accepts both snake_case fields (e.g., agent_did, cert_fingerprint, current_fingerprint) and camelCase wire aliases, preserving consistent validation behavior with the rest of the module.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@agentmesh/docker/docker-compose.yml`:
- Around line 8-11: Remove the committed DB password by deleting the hard-coded
POSTGRES_PASSWORD value in the environment block and require it to be injected
at runtime (e.g., via an external env var or .env file). Specifically, update
the environment entries that include POSTGRES_PASSWORD (and optionally
POSTGRES_USER/POSTGRES_DB if sensitive) in the docker-compose service definition
so they reference an external environment variable instead of a literal string;
ensure documentation or a .env.example is provided for POSTGRES_PASSWORD so
CI/locals set it securely and detect-secrets will no longer flag the checked-in
credential.
In `@alembic/versions/20260322_0001_add_mtls_certificate_tables.py`:
- Around line 31-34: The CI failure is due to pydocstyle D202: remove the extra
blank line immediately following the docstrings for the public migration
functions upgrade() and downgrade(); update the docstrings so the next line of
code follows directly after the closing triple quotes (no empty line) in both
functions (look for the docstring lines inside upgrade() and downgrade() and
delete the blank line after each).
- Around line 3-5: The module docstring Revises header doesn't match the Alembic
down_revision; update the docstring Revises value to match the declared
down_revision ("20260119_0001") so migration history is consistent. Locate the
top-of-file revision header (Revision ID / Revises / Create Date) and any other
occurrences around the down_revision declaration (e.g., the down_revision
variable) and change the Revises value from "20250614_0001" to "20260119_0001"
(or conversely change down_revision if the intended prior revision is
"20250614_0001") so both the header and the down_revision symbol agree.
- Around line 145-150: The index creation currently passes
postgresql_ops={"created_at": "DESC"} which is invalid; update the
op.create_index call for "idx_cert_audit_created_at" on "certificate_audit_log"
to use a descending expression for created_at by replacing the plain column
reference with sa.literal_column("created_at DESC") in the columns list (keep
the same index name and table), and remove the postgresql_ops argument.
In `@bindu/extensions/semantic_memory/embeddings.py`:
- Around line 1-12: Replace direct os.getenv/_api_key usage and the hardcoded
OpenAI base_url in this module by pulling OpenRouter configuration from
app_settings; specifically read the OpenRouter API key and base_url (and any
relevant timeouts) from app_settings in bindu/settings.py and use those values
to initialize the OpenAI client (the client variable and OpenAI(...) call), and
only create the client when the app_settings value for the API key is present
instead of using _api_key from os.getenv.
- Around line 15-36: Update get_embedding to stop silently returning a
zero-vector: remove the blank line after the docstring to satisfy pydocstyle
D202; import get_logger(__name__) from bindu/utils/logging.py and app_settings
from bindu.settings and use them; pull the model name and fallback dimension
from app_settings (e.g. app_settings.EMBEDDING_MODEL and
app_settings.EMBEDDING_DIM) instead of hardcoding; when client is missing or
embeddings.create raises, log an error with the exception via the module logger
and only return the zero-vector fallback when app_settings.ENV is a test/dev
value (otherwise re-raise or return None/error) so real failures are not masked;
keep references to the existing get_embedding function and client variable to
locate changes.
In `@bindu/extensions/semantic_memory/memory_store.py`:
- Around line 10-19: Add proper docstrings and explicit return type annotations
for add_memory and get_memories, and stop returning the module-level list
directly to prevent external mutation: update add_memory(text: str, embedding:
list[float], agent_id: str) with a one-line docstring describing its behavior
and a return annotation (e.g., None), and update get_memories() to include a
docstring and a return type like list[dict[str, object]] or
Sequence[Mapping[str, object]]; have get_memories return a shallow copy of
MEMORY_STORE (or a deep copy of each dict if immutability is required) instead
of the live MEMORY_STORE reference, and keep references to the unique symbols
MEMORY_STORE, add_memory, and get_memories when making these changes.
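The copy-on-read behavior that prompt asks for can be sketched like this. Docstrings and annotations follow the suggestion; the record shape is an assumption:

```python
MEMORY_STORE: list[dict[str, object]] = []

def add_memory(text: str, embedding: list[float], agent_id: str) -> None:
    """Append one memory record to the module-level store."""
    MEMORY_STORE.append({"text": text, "embedding": embedding, "agent_id": agent_id})

def get_memories() -> list[dict[str, object]]:
    """Return a shallow copy so callers cannot mutate the live store."""
    return list(MEMORY_STORE)

add_memory("hello", [0.1, 0.2], "agent-1")
snapshot = get_memories()
snapshot.clear()              # mutating the copy...
print(len(get_memories()))    # ...leaves the store intact: 1
```

Note a shallow copy still shares the inner dicts; if callers must not mutate individual records either, return deep copies instead, as the prompt mentions.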
In `@bindu/extensions/semantic_memory/retriever.py`:
- Around line 1-14: Add a module-level docstring at the top describing the
purpose of this semantic memory retriever module, replace the relative imports
with absolute imports (e.g. import get_memories and get_embedding from
bindu.extensions.semantic_memory.memory_store and
bindu.extensions.semantic_memory.embeddings), and add public docstrings plus
explicit return type annotations for the functions cosine_similarity (-> float)
and query_memory (e.g. -> list[tuple[float, dict]] or the exact structure
returned) so all functions have type hints and docstrings compliant with Python
3.12+ repository rules; update any parameter type hints on query_memory (query:
str, top_k: int) if needed to match the return annotation.
- Around line 6-11: The cosine_similarity function should validate inputs and
dimensions explicitly: add type hints for parameters (e.g., Sequence[float]) and
check that len(a) == len(b), raising a ValueError if they differ; compute norms
and if either norm is zero return 0 (or the same zero-vector guard behavior used
in bindu/server/negotiation/embedder.py) instead of silently truncating via zip;
ensure you reference the function name cosine_similarity and keep the small
epsilon guard consistent with the embedder implementation when performing the
final division.
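A sketch of the validated `cosine_similarity` described above. The epsilon value is an assumption, since the review only says to match the guard used in bindu/server/negotiation/embedder.py:

```python
import math
from typing import Sequence

_EPSILON = 1e-10  # assumed; align with the embedder implementation

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity that rejects mismatched dimensions."""
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} != {len(b)}")
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a < _EPSILON or norm_b < _EPSILON:
        return 0.0  # zero-vector guard instead of a ZeroDivisionError
    return sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b)
```

The explicit length check matters because `zip` would otherwise silently truncate the longer vector and return a plausible-looking but wrong score.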
In `@bindu/server/endpoints/certificates.py`:
- Around line 279-298: The current flow inserts/updates
agent_certificates_table, calls _bind_certificate(hydra_client, ...), then calls
_write_audit_log without atomicity, so Hydra failures can leave DB and Hydra out
of sync; change the sequence to either (A) perform the Hydra bind first and only
insert/commit the DB row and audit after _bind_certificate succeeds, or (B) wrap
DB mutations in a transaction that records a compensating state column (e.g.,
binding_status: "pending"/"bound"/"failed") on agent_certificates_table, commit
the transaction, then call _bind_certificate and update binding_status + write
audit in a safe retry loop on failure; update the code paths where
agent_certificates_table is modified (the block calling
agent_certificates_table.insert/update, the calls to _bind_certificate, and
_write_audit_log) to implement one of these approaches and add reconciliation
logic that retries failed bindings based on binding_status.
- Around line 37-69: Replace the hardcoded CERT_TTL_HOURS and local certs_dir
path in this module with values from bindu.settings.app_settings: import
app_settings from bindu.settings and read an MTLS_CERT_TTL_HOURS (or similar)
instead of the CERT_TTL_HOURS constant, and read a CA_CERTS_DIR (or similar)
path instead of constructing Path.home() / ".bindu" / "certs" inside
load_or_create_local_ca; update references to CERT_TTL_HOURS and the
certs_dir/ca_key_path/ca_cert_path variables to use those app_settings keys (and
document sensible defaults in app_settings if needed) so environments can
override TTL and CA storage location.
- Around line 102-109: The CA private key is currently written via
open(ca_key_path, "wb") which can inherit permissive umask or overwrite an
existing file; change the write in the ca key creation routine to use an
exclusive create and explicit restrictive mode: open the path with low-level
os.open flags (O_CREAT|O_EXCL) or use pathlib/atomic tempfile then os.chmod to
set 0o600 (POSIX) before/after writing the bytes produced by
ca_key.private_bytes, and check existing file permissions and fail closed if
they are broader than 0o600; ensure any failure to create or to set restrictive
permissions raises an exception so the process does not proceed with an insecure
CA key.
- Around line 462-541: The endpoints issue_certificate_endpoint,
renew_certificate_endpoint, and revoke_certificate_endpoint currently trust
agent_did from the request body and manually parse body keys, bypassing the
project's TypeAdapters and protocol validators; update each endpoint to parse
and validate the incoming payload using the existing TypeAdapters/protocol
validators (so camelCase wire keys are accepted) and to derive/authorize
agent_did from the authenticated principal (e.g., request auth context/session)
rather than trusting body-supplied agent_did; additionally enforce authorization
logic (ensure principal matches the target agent_did or has an admin scope)
before calling issue_certificate, renew_certificate, or revoke_certificate and
only pass validated values into HydraClient calls and storage operations.
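The exclusive-create pattern from the CA-key prompt above can be sketched as follows. Path handling is simplified and the key bytes stand in for the real `ca_key.private_bytes(...)` output:

```python
import os
import tempfile

def write_private_key(path: str, key_bytes: bytes) -> None:
    """Create the key file exclusively, force mode 0o600, and verify it."""
    # O_EXCL makes creation fail if the file already exists, so an existing
    # (possibly attacker-planted) key is never silently overwritten.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, key_bytes)
    finally:
        os.close(fd)
    os.chmod(path, 0o600)  # umask can only narrow the mode; force it exactly
    mode = os.stat(path).st_mode & 0o777
    if mode & ~0o600:
        raise PermissionError(f"{path} created with insecure mode {oct(mode)}")

with tempfile.TemporaryDirectory() as d:
    key_path = os.path.join(d, "ca_key.pem")
    write_private_key(key_path, b"-----BEGIN PRIVATE KEY-----\n...")
    print(oct(os.stat(key_path).st_mode & 0o777))  # 0o600
```

Raising on any failure keeps the process from proceeding with an insecure CA key, which is the fail-closed behavior the prompt requires.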
In `@bindu/server/storage/schema.py`:
- Around line 257-310: The schema drift is caused by using a Python-side default
for the status column and a plain ascending index for the audit created_at;
change the status Column to use a DB server default (replace default="active"
with server_default=text("'active'")) on the agent certificates table's "status"
Column, and make the audit index use a descending index on
certificate_audit_log_table.c.created_at (e.g.,
Index("idx_cert_audit_created_at",
certificate_audit_log_table.c.created_at.desc())); ensure sqlalchemy.text is
imported if not already.
In `@create-bindu-agent`:
- Line 1: Add a .gitmodules file that declares the submodule entry for
"create-bindu-agent" so Git can clone/update the 160000-mode entry; specifically
add a section with path = create-bindu-agent and url = <upstream-repo-url>
(replace with the actual remote URL for that submodule), commit hash
2e11000e3d22405512530ed0155d08a4dcb2fea6 should be verified to exist in that
upstream repository, then commit .gitmodules and run git submodule
sync/init/update to ensure the pointer resolves correctly (update any project
docs or CI that rely on initializing submodules if present).
In `@tests/unit/test_semantic_memory.py`:
- Around line 4-7: This test mutates the module-level semantic memory used by
add_memory and query_memory, risking cross-test pollution; update the test to
use a fixture that isolates and cleans the global memory store before/after the
test (e.g., in tests/conftest.py provide a fixture that clears or
snapshots/restores the memory store used by add_memory/query_memory) and apply
that fixture to test_memory_store so the store is emptied at setup and restored
at teardown.
---
Outside diff comments:
In @.gitignore:
- Around line 211-229: Remove the unresolved Git conflict markers (<<<<<<< HEAD,
=======, >>>>>>> upstream/main) and merge the two sections into a single clean
.gitignore entry set: keep the intended patterns such as AI-Agents/,
node_modules/, **/node_modules/, sdks/typescript/dist/,
sdks/typescript/node_modules/, sdks/kotlin/build/, sdks/kotlin/.gradle/,
sdks/kotlin/bin/, and the examples/... pids/logs/.fleet.env lines without
duplication or conflict markers so the file is valid and functions as intended.
In `@bindu/server/applications.py`:
- Around line 106-150: The code currently uses manifest without guaranteeing
it's present when setting up X402; narrow the guard so
_create_payment_requirements, _setup_middleware, and
_setup_payment_session_manager are only called when manifest is not None and
x402_ext is truthy. Concretely, change the x402_ext checks to also require
manifest (e.g., if x402_ext and manifest:), compute
payment_requirements_for_middleware only when manifest is present, pass the
concrete AgentManifest into _create_payment_requirements and
_setup_payment_session_manager, and call _setup_middleware with manifest only
under the same guard so apps without a manifest still type-check and start.
- Around line 525-532: The code calls process_price_to_atomic_amount(amount,
network) without validating amount; if opt.get("amount") is None this will
raise—add a guard in the loop over options to validate amount (e.g., if amount
is None or not a valid numeric/string price) and raise/return a configuration
error before calling process_price_to_atomic_amount; update the loop that
assigns amount, network, pay_to_address (in applications.py) to check amount and
produce a clear error message referencing the failing option index or identifier
so callers can fix the config.
- Around line 172-223: The import in applications.py references healthz_endpoint
but only health_endpoint exists; fix by exporting/defining healthz_endpoint in
the health module (or alias it to health_endpoint) so both symbols exist: add a
function named healthz_endpoint (or set healthz_endpoint = health_endpoint) in
the endpoints health module and ensure it is included in that module's exports
so the applications.py import succeeds when registering the "/healthz" route.
---
Nitpick comments:
In `@bindu/common/protocol/types.py`:
- Around line 1689-1728: The three certificate TypedDicts
CertificateIssueParams, CertificateRenewParams, and CertificateRevokeParams
currently use pydantic.with_config(ConfigDict(alias_generator=to_camel)); change
them to use the shared A2A_MODEL_CONFIG instead so Pydantic accepts both
snake_case fields (e.g., agent_did, cert_fingerprint, current_fingerprint) and
camelCase wire aliases, preserving consistent validation behavior with the rest
of the module.
In `@bindu/server/endpoints/certificates.py`:
- Around line 475-477: The code is accessing the private storage attribute
app._storage; change these to use the public storage pattern app.storage so the
configured StorageBackend is used. Replace uses of app._storage.connection()
with app.storage.connection() in this block where HydraClient is created and
issue_certificate(agent_did, csr_pem, conn, hydra_client) is called, and make
the same substitution for the other occurrences noted (around the blocks that
create HydraClient and call issue_certificate or otherwise use conn at the later
ranges). Ensure you only change the attribute access (from _storage to storage)
and keep existing variable names like conn, hydra_client, agent_did, csr_pem and
the call to issue_certificate intact.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 31a017a5-c1ab-4e68-8f22-680c0766ac49
📒 Files selected for processing (15)
- .gitignore
- agentmesh/README.md
- agentmesh/docker/docker-compose.yml
- alembic/versions/20260322_0001_add_mtls_certificate_tables.py
- bindu/common/protocol/types.py
- bindu/extensions/semantic_memory/__init__.py
- bindu/extensions/semantic_memory/embeddings.py
- bindu/extensions/semantic_memory/memory_store.py
- bindu/extensions/semantic_memory/retriever.py
- bindu/server/applications.py
- bindu/server/endpoints/certificates.py
- bindu/server/storage/schema.py
- create-bindu-agent
- examples/semantic_memory_demo.py
- tests/unit/test_semantic_memory.py
```python
def upgrade() -> None:
    """Add agent_certificates and certificate_audit_log tables."""

    # ------------------------------------------------------------------
```
Remove the blank line after public function docstrings.
CI is failing with pydocstyle D202 for both upgrade() and downgrade().
🛠️ Proposed fix

```diff
 def upgrade() -> None:
     """Add agent_certificates and certificate_audit_log tables."""
-
     # ------------------------------------------------------------------
@@
 def downgrade() -> None:
     """Remove mTLS certificate tables."""
-
     # Drop certificate_audit_log
```

Also applies to: 153-156
🧰 Tools
🪛 GitHub Actions: CI
[error] 32-32: pydocstyle D202: No blank lines allowed after function docstring (in public function 'upgrade').
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@alembic/versions/20260322_0001_add_mtls_certificate_tables.py` around lines
31 - 34, The CI failure is due to pydocstyle D202: remove the extra blank line
immediately following the docstrings for the public migration functions
upgrade() and downgrade(); update the docstrings so the next line of code
follows directly after the closing triple quotes (no empty line) in both
functions (look for the docstring lines inside upgrade() and downgrade() and
delete the blank line after each).
```python
op.create_index(
    "idx_cert_audit_created_at",
    "certificate_audit_log",
    ["created_at"],
    postgresql_ops={"created_at": "DESC"},
)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Description: Inspect migration code for PostgreSQL index sort-direction usage.
# Expected: No create_index call should use postgresql_ops={"created_at": "DESC"}.
rg -n -C3 'postgresql_ops=\{"created_at": "DESC"\}|created_at DESC'
```

Repository: GetBindu/Bindu

Length of output: 9435
Use a descending expression for the created_at index.
postgresql_ops configures PostgreSQL operator classes, not sort direction, so "DESC" here generates invalid DDL. Use sa.literal_column("created_at DESC") instead, as shown in the pattern used in migration 20251207_0905_ef0d61440935_add_foreign_key_constraint_from_tasks_.py.
🛠️ Proposed fix

```diff
 op.create_index(
     "idx_cert_audit_created_at",
     "certificate_audit_log",
-    ["created_at"],
-    postgresql_ops={"created_at": "DESC"},
+    [sa.literal_column("created_at DESC")],
 )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
op.create_index(
    "idx_cert_audit_created_at",
    "certificate_audit_log",
    [sa.literal_column("created_at DESC")],
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@alembic/versions/20260322_0001_add_mtls_certificate_tables.py` around lines
145 - 150, The index creation currently passes postgresql_ops={"created_at":
"DESC"} which is invalid; update the op.create_index call for
"idx_cert_audit_created_at" on "certificate_audit_log" to use a descending
expression for created_at by replacing the plain column reference with
sa.literal_column("created_at DESC") in the columns list (keep the same index
name and table), and remove the postgresql_ops argument.
```python
import os
from typing import List

from openai import OpenAI

# Initialize client only if API key exists
_api_key = os.getenv("OPENROUTER_API_KEY")

client = OpenAI(
    api_key=_api_key,
    base_url="https://openrouter.ai/api/v1",
) if _api_key else None
```
Move OpenRouter configuration into app_settings.
This bypasses central configuration with os.getenv, a local _api_key, and a hardcoded base URL. Add semantic-memory/OpenRouter settings and initialize from app_settings instead.
Proposed direction

```diff
-import os
-from typing import List
+"""Embedding generation for semantic memory."""
 from openai import OpenAI
+from bindu.settings import app_settings

-# Initialize client only if API key exists
-_api_key = os.getenv("OPENROUTER_API_KEY")
-
 client = OpenAI(
-    api_key=_api_key,
-    base_url="https://openrouter.ai/api/v1",
-) if _api_key else None
+    api_key=app_settings.semantic_memory.openrouter_api_key,
+    base_url=app_settings.semantic_memory.openrouter_base_url,
+) if app_settings.semantic_memory.openrouter_api_key else None
```

As per coding guidelines, "Use app_settings from bindu/settings.py for configuration instead of accessing environment variables directly" and "never hardcode values such as URLs, ports, timeouts, API keys, or feature flags".
🧰 Tools
🪛 GitHub Actions: CI
[error] 1-1: pydocstyle D100: Missing docstring in public module.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/extensions/semantic_memory/embeddings.py` around lines 1 - 12, Replace
direct os.getenv/_api_key usage and the hardcoded OpenAI base_url in this module
by pulling OpenRouter configuration from app_settings; specifically read the
OpenRouter API key and base_url (and any relevant timeouts) from app_settings in
bindu/settings.py and use those values to initialize the OpenAI client (the
client variable and OpenAI(...) call), and only create the client when the
app_settings value for the API key is present instead of using _api_key from
os.getenv.
await conn.execute(
    agent_certificates_table.insert().values(
        id=uuid.uuid4(),
        agent_did=agent_did,
        cert_fingerprint=fingerprint,
        status="active",
        issued_at=now,
        expires_at=expires_at,
    )
)

await _bind_certificate(hydra_client, fingerprint, agent_did)

await _write_audit_log(
    conn,
    "issued",
    agent_did,
    fingerprint,
    event_data={"expires_at": expires_at.isoformat()},
)
Add compensation for DB/Hydra partial failures.
The lifecycle methods update the DB, call Hydra, and then write audit rows without an atomic boundary across systems. A Hydra failure after DB mutation can leave an active DB cert unbound, an expired old cert with no replacement binding, or a revoked DB cert still accepted by Hydra.
Prefer a transaction plus explicit compensation/retry state, e.g. bind first where safe, commit only after success, and record failed binding states for reconciliation.
Also applies to: 356-386, 425-449
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/endpoints/certificates.py` around lines 279 - 298, The current
flow inserts/updates agent_certificates_table, calls
_bind_certificate(hydra_client, ...), then calls _write_audit_log without
atomicity, so Hydra failures can leave DB and Hydra out of sync; change the
sequence to either (A) perform the Hydra bind first and only insert/commit the
DB row and audit after _bind_certificate succeeds, or (B) wrap DB mutations in a
transaction that records a compensating state column (e.g., binding_status:
"pending"/"bound"/"failed") on agent_certificates_table, commit the transaction,
then call _bind_certificate and update binding_status + write audit in a safe
retry loop on failure; update the code paths where agent_certificates_table is
modified (the block calling agent_certificates_table.insert/update, the calls to
_bind_certificate, and _write_audit_log) to implement one of these approaches
and add reconciliation logic that retries failed bindings based on
binding_status.
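One way to make option (B) concrete is to persist a binding state alongside the certificate row and reconcile it with retries. A minimal sketch, assuming a hypothetical `binding_status` column and treating `ConnectionError` as the transient Hydra failure (neither name comes from the PR):

```python
from enum import Enum
from typing import Callable


class BindingStatus(str, Enum):
    """Hypothetical binding_status values persisted on agent_certificates."""

    PENDING = "pending"
    BOUND = "bound"
    FAILED = "failed"


def reconcile_binding(
    status: BindingStatus, bind_fn: Callable[[], None], max_retries: int = 3
) -> BindingStatus:
    """Retry the external Hydra bind; end in FAILED rather than an ambiguous row."""
    if status is BindingStatus.BOUND:
        return status  # already reconciled, nothing to do
    for _ in range(max_retries):
        try:
            bind_fn()
            return BindingStatus.BOUND
        except ConnectionError:
            continue  # treat as transient; a real impl would back off and log
    return BindingStatus.FAILED
```

A background reconciliation job can then scan for rows stuck in "pending" or "failed" and re-run this helper, so the DB commit never has to span the external Hydra call.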
    Column("status", String(50), nullable=False, default="active"),
    Column(
        "issued_at", TIMESTAMP(timezone=True), nullable=False, server_default=func.now()
    ),
    Column("expires_at", TIMESTAMP(timezone=True), nullable=False),
    # Timestamps
    Column(
        "created_at",
        TIMESTAMP(timezone=True),
        nullable=False,
        server_default=func.now(),
    ),
    Column(
        "updated_at",
        TIMESTAMP(timezone=True),
        nullable=False,
        server_default=func.now(),
        onupdate=func.now(),
    ),
    # Indexes for zero-trust freshness checks
    Index("idx_agent_certs_fingerprint", "cert_fingerprint"),
    Index("idx_agent_certs_status", "status"),
    Index("idx_agent_certs_agent_did", "agent_did"),
    # Table comment
    comment="mTLS agent certificates tied to DIDs",
)

# -----------------------------------------------------------------------------
# Certificate Audit Log Table (mTLS)
# -----------------------------------------------------------------------------

certificate_audit_log_table = Table(
    "certificate_audit_log",
    metadata,
    # Primary key
    Column("id", PG_UUID(as_uuid=True), primary_key=True, nullable=False),
    # What happened
    Column("event_type", String(50), nullable=False),  # issued | renewed | revoked
    Column("agent_did", String(255), nullable=False),
    Column("cert_fingerprint", String(255), nullable=False),
    # Who/when
    Column("performed_by", String(255), nullable=True),  # DID or system
    Column("event_data", JSONB, nullable=True, server_default=text("'{}'::jsonb")),
    Column(
        "created_at",
        TIMESTAMP(timezone=True),
        nullable=False,
        server_default=func.now(),
    ),
    # Indexes
    Index("idx_cert_audit_agent_did", "agent_did"),
    Index("idx_cert_audit_fingerprint", "cert_fingerprint"),
    Index("idx_cert_audit_event_type", "event_type"),
    Index("idx_cert_audit_created_at", "created_at"),
Keep the SQLAlchemy schema aligned with the migration.
status uses a Python-side default, while the migration uses a database server_default; the audit created_at index is also plain here while the migration intends descending order. This can create drift between metadata.create_all() and Alembic-managed databases.
🛠️ Proposed fix
- Column("status", String(50), nullable=False, default="active"),
+ Column("status", String(50), nullable=False, server_default=text("'active'")),
@@
- Index("idx_cert_audit_created_at", "created_at"),
+ Index("idx_cert_audit_created_at", text("created_at DESC")),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/storage/schema.py` around lines 257 - 310, The schema drift is
caused by using a Python-side default for the status column and a plain
ascending index for the audit created_at; change the status Column to use a DB
server default (replace default="active" with server_default=text("'active'"))
on the agent certificates table's "status" Column, and make the audit index use
a descending index on certificate_audit_log_table.c.created_at (e.g.,
Index("idx_cert_audit_created_at",
certificate_audit_log_table.c.created_at.desc())); ensure sqlalchemy.text is
imported if not already.
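To check that a descending index actually compiles as intended, a small standalone sketch (hypothetical table and index names; assumes SQLAlchemy is installed):

```python
from sqlalchemy import TIMESTAMP, Column, Index, MetaData, Table
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
audit = Table(
    "certificate_audit_log_demo",
    metadata,
    Column("created_at", TIMESTAMP(timezone=True), nullable=False),
)

# Attach a descending index after the table definition, mirroring the
# proposed created_at.desc() fix; Index() infers the table from the column.
idx = Index("idx_demo_created_at", audit.c.created_at.desc())

# Compile to DDL to confirm the DESC ordering survives into CREATE INDEX.
ddl = str(CreateIndex(idx)).strip()
```

Running the same compilation against the plain-string form (`Index("idx", "created_at")`) shows no DESC, which is exactly the drift the comment describes.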
@@ -0,0 +1 @@
Subproject commit 2e11000e3d22405512530ed0155d08a4dcb2fea6
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Find old/new submodule SHAs from superproject history
old_sha="$(git ls-tree HEAD^ create-bindu-agent | awk '{print $3}')"
new_sha="$(git ls-tree HEAD create-bindu-agent | awk '{print $3}')"
echo "old_sha=$old_sha"
echo "new_sha=$new_sha"
# Resolve submodule URL from .gitmodules
sub_url="$(git config -f .gitmodules --get submodule.create-bindu-agent.url || true)"
echo "submodule_url=$sub_url"
# If GitHub URL, compare commits via API (read-only)
# Supports URLs like:
# https://github.com/<owner>/<repo>.git
# git@github.com:<owner>/<repo>.git
if [[ "$sub_url" =~ github.com[:/](.+/.+)\.git$ ]]; then
repo="${BASH_REMATCH[1]}"
echo "repo=$repo"
gh api "repos/$repo/compare/$old_sha...$new_sha" \
--jq '{status, ahead_by, files: [.files[].filename], commits: [.commits[].sha]}'
else
echo "Non-GitHub submodule URL or unable to parse; manually compare $old_sha...$new_sha in upstream."
fi
Repository: GetBindu/Bindu
Length of output: 96
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Repository State ==="
git log --oneline -5 || echo "No commits yet"
echo -e "\n=== Current HEAD ==="
git rev-parse HEAD || echo "No HEAD"
echo -e "\n=== Check for .gitmodules ==="
if [ -f .gitmodules ]; then
echo ".gitmodules exists:"
cat .gitmodules
else
echo ".gitmodules not found"
fi
echo -e "\n=== Check create-bindu-agent in index ==="
git ls-files create-bindu-agent 2>/dev/null || echo "Not in index"
echo -e "\n=== Check git status ==="
git status --short create-bindu-agent 2>/dev/null || echo "Not tracked"
echo -e "\n=== Check current commit pointer (if submodule) ==="
if [ -d create-bindu-agent ]; then
cd create-bindu-agent
git rev-parse HEAD 2>/dev/null || echo "Not a git repo"
cd ..
else
echo "create-bindu-agent directory does not exist"
fi
echo -e "\n=== Check if this is first commit ==="
commit_count=$(git rev-list --count HEAD 2>/dev/null || echo "0")
echo "Total commits: $commit_count"
Repository: GetBindu/Bindu
Length of output: 494
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== File type check ==="
ls -ld create-bindu-agent
file create-bindu-agent
echo -e "\n=== Contents of create-bindu-agent (first 20 lines) ==="
if [ -f create-bindu-agent ]; then
head -20 create-bindu-agent
elif [ -d create-bindu-agent ]; then
echo "It's a directory. Checking for .git file (submodule indicator):"
ls -la create-bindu-agent/.git 2>/dev/null || echo "No .git file found"
echo "Directory contents:"
ls -la create-bindu-agent | head -10
fi
echo -e "\n=== Git ls-tree mode for create-bindu-agent ==="
git ls-tree HEAD create-bindu-agent
echo -e "\n=== Git cat-file to inspect object ==="
sha=$(git ls-tree HEAD create-bindu-agent | awk '{print $3}')
echo "SHA: $sha"
git cat-file -t "$sha" 2>/dev/null || echo "Cannot determine object type"
Repository: GetBindu/Bindu
Length of output: 209
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Git ls-tree full output for create-bindu-agent ==="
git ls-tree HEAD create-bindu-agent
echo -e "\n=== Directory contents ==="
ls -la create-bindu-agent/
echo -e "\n=== Object details ==="
sha=$(git ls-tree HEAD create-bindu-agent | awk '{print $3}')
echo "SHA: $sha"
echo "Object type:"
git cat-file -t "$sha"
echo "Object content (abbreviated):"
git cat-file -p "$sha" | head -5
Repository: GetBindu/Bindu
Length of output: 599
Add missing .gitmodules configuration for the submodule pointer.
The create-bindu-agent entry is a git submodule (mode 160000, commit 2e11000e3d22405512530ed0155d08a4dcb2fea6), but the repository is missing a .gitmodules file that defines the submodule URL and path. Without this configuration file, the submodule cannot be properly cloned or updated. Please add .gitmodules with the correct submodule URL for create-bindu-agent, and verify that the referenced commit exists in the upstream repository.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@create-bindu-agent` at line 1, Add a .gitmodules file that declares the
submodule entry for "create-bindu-agent" so Git can clone/update the 160000-mode
entry; specifically add a section with path = create-bindu-agent and url =
<upstream-repo-url> (replace with the actual remote URL for that submodule),
commit hash 2e11000e3d22405512530ed0155d08a4dcb2fea6 should be verified to exist
in that upstream repository, then commit .gitmodules and run git submodule
sync/init/update to ensure the pointer resolves correctly (update any project
docs or CI that rely on initializing submodules if present).
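For reference, a `.gitmodules` entry of the required shape; the URL is a placeholder that must be replaced with the real upstream remote:

```
[submodule "create-bindu-agent"]
	path = create-bindu-agent
	url = <upstream-repo-url>
```

After committing it, `git submodule sync && git submodule update --init create-bindu-agent` should resolve the recorded commit, assuming it exists upstream.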
def test_memory_store():
    add_memory("Bindu powers the Internet of Agents.", [0.1]*1536, "agent_a")
    results = query_memory("What powers agents?")
    assert len(results) > 0
\ No newline at end of file
Isolate the global memory store in the test.
add_memory() mutates the module-level store, and query_memory() reads all accumulated memories. Without teardown, this test can leak data into later tests and become order-dependent.
Prefer a shared fixture in tests/conftest.py that clears/restores the semantic memory store around each test using it.
As per coding guidelines, “Always clean up test data in teardown to avoid test pollution and ensure test isolation”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_semantic_memory.py` around lines 4 - 7, This test mutates the
module-level semantic memory used by add_memory and query_memory, risking
cross-test pollution; update the test to use a fixture that isolates and cleans
the global memory store before/after the test (e.g., in tests/conftest.py
provide a fixture that clears or snapshots/restores the memory store used by
add_memory/query_memory) and apply that fixture to test_memory_store so the
store is emptied at setup and restored at teardown.
…tion docstring, test isolation)
- Add _get_caller_did() to extract authenticated DID from request.state - Add _authorize_agent_did() to verify caller matches requested agent_did - Return 401 if unauthenticated, 403 if caller != requested agent_did - Remove internal error details from 500 responses
Actionable comments posted: 1
♻️ Duplicate comments (4)
bindu/server/endpoints/certificates.py (4)
325-337: ⚠️ Potential issue | 🔴 Critical
DB/Hydra operations can still leave partial, inconsistent certificate state.
The flow mutates DB state before Hydra bind/unbind succeeds. If Hydra fails, DB and OAuth client binding diverge (active/expired/revoked statuses may be incorrect operationally). Add a compensating state/transactional strategy.
Also applies to: 401-421, 467-485
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/certificates.py` around lines 325 - 337, The DB insert is happening before the external Hydra bind, causing inconsistent state if _bind_certificate fails; change the flow to perform the Hydra operation first and only insert the active certificate row after _bind_certificate succeeds, or wrap the DB insert and the _bind_certificate call in a single DB transaction/compensating block so on any exception you either roll back the insert or update the inserted row to a terminal error state (e.g., delete the row or set status='failed'/'pending'). Apply the same pattern to the other similar blocks referenced (the blocks around the ranges noted: 401-421 and 467-485) and ensure you reference/handle failures from hydra_client and _bind_certificate consistently, using the agent_certificates_table operations to undo or mark the record appropriately.
102-109: ⚠️ Potential issue | 🟠 Major
CA private key write is still non-restrictive and overwrite-prone.
Writing `ca.key` with `open(..., "wb")` depends on the umask and allows replacement; use exclusive create with restrictive permissions (POSIX `0o600`) and fail closed on insecure perms.
Based on learnings: applies to `bindu/**/*.py`: use `os.open()` on POSIX systems and direct write on Windows for DID private key permissions to ensure cross-platform compatibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/certificates.py` around lines 102 - 109, The current write of the CA private key using open(ca_key_path, "wb") is insecure and overwrite-prone; replace it with a POSIX-safe exclusive create using os.open with flags O_WRONLY|O_CREAT|O_EXCL and mode 0o600 to atomically create the file with restrictive permissions (and raise/fail if the file exists), and on Windows fallback to a normal open("wb") followed immediately by os.chmod(path, 0o600) to enforce permissions; locate the code that calls ca_key.private_bytes(...) and writes to ca_key_path and apply the same pattern to other private-key writes across bindu/**/*.py so all DID/CA key writes use the secure write routine.
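A minimal sketch of the secure-write routine described above, with a hypothetical helper name; it creates the file exclusively with `0o600` on POSIX and falls back to chmod-after-write on Windows:

```python
import os
import sys


def write_private_key(path: str, data: bytes) -> None:
    """Create the key file exclusively with 0o600; fail if it already exists."""
    if sys.platform == "win32":
        # POSIX mode bits are advisory on Windows; write then tighten perms.
        with open(path, "wb") as f:
            f.write(data)
        os.chmod(path, 0o600)
        return
    # O_EXCL makes creation atomic and refuses to overwrite an existing key.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
```

A second call on the same path raises `FileExistsError` on POSIX, which is the fail-closed behavior the review asks for.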
37-39: ⚠️ Potential issue | 🟠 Major
Hardcoded mTLS TTL and CA path still bypass settings.
`CERT_TTL_HOURS` and the `~/.bindu/certs` path are local config constants; these should come from `app_settings` so deployments can control cert lifetime and key storage safely.
Based on learnings: applies to `bindu/**/*.py`: use `bindu.settings.app_settings` for all configuration - never hardcode values such as URLs, ports, timeouts, API keys, or feature flags.
Also applies to: 65-70, 136-137, 320-321
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/certificates.py` around lines 37 - 39, Replace the hardcoded CERT_TTL_HOURS and the literal "~/.bindu/certs" path with values read from bindu.settings.app_settings: import app_settings and use a configurable key (e.g. app_settings.MTLS_CERT_TTL_HOURS with a default of 24 and app_settings.CERTS_DIR with a default of "~/.bindu/certs") wherever CERT_TTL_HOURS or the literal path appear (including the other occurrences noted). Update any functions or modules that reference CERT_TTL_HOURS (symbol CERT_TTL_HOURS) or the string "~/.bindu/certs" to read from app_settings, preserve the original defaults if the settings are absent, and remove the local constant so all configuration comes from app_settings.
507-510: ⚠️ Potential issue | 🟠 Major
Endpoints still bypass protocol validators by manually reading JSON keys.
Manual `body.get(...)` parsing skips the typed protocol adapters and weakens input validation consistency for these external payloads. Parse request bodies through the certificate TypeAdapters and use only validated values downstream.
As per coding guidelines, `bindu/**/*.py`: validate all external input and use type hints for input validation in Python files.
Also applies to: 534-538, 566-570
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/certificates.py` around lines 507 - 510, Replace the manual JSON-parsing pattern (body = await request.json(); agent_did = body.get("agent_did"); csr_pem = body.get("csr")) with the certificate TypeAdapter validation flow: pass request.json() into the appropriate certificate TypeAdapter (the module's certificate TypeAdapter for incoming certificate requests) to parse/validate and extract strongly-typed fields, then use the adapter's validated attributes (e.g., agent_did, csr) downstream; apply the same change for the similar spots that read csr_pem/agent_did at the other occurrences so no external input bypasses the TypeAdapter validation.
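A sketch of the TypeAdapter flow, assuming pydantic v2 is available and using field names that mirror the PR's `CertificateIssueParams` (treat both as assumptions):

```python
from typing_extensions import TypedDict

from pydantic import TypeAdapter, ValidationError


class CertificateIssueParams(TypedDict):
    """Assumed shape of the issue payload; the real protocol type may differ."""

    agent_did: str
    csr: str


_issue_adapter = TypeAdapter(CertificateIssueParams)


def parse_issue_body(body: object) -> CertificateIssueParams:
    """Validate the raw JSON body instead of body.get(...) with silent Nones."""
    return _issue_adapter.validate_python(body)
```

Unlike `body.get(...)`, a missing or mistyped field raises `ValidationError`, which the handler can map to a 400 instead of propagating `None` into the CA path.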
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindu/server/endpoints/certificates.py`:
- Around line 534-538: The renew/revoke handlers in certificates.py currently
catch ValueError and return 404 for any failure; change this to use distinct
exception types and proper status mapping: add CertificateNotFoundError for
missing certs and CertificateBindingError (or IntegrationError) for
Hydra/binding failures, update the code paths that currently raise ValueError
(search uses of ValueError around agent_did/csr/current_fingerprint handling and
the renew/revoke endpoint functions) to raise the new specific exceptions, then
update the exception handling in the renew and revoke endpoint handlers to
return 404 on CertificateNotFoundError and 5xx (e.g., 502/500) for
CertificateBindingError or other integration errors, and leave unexpected
exceptions to bubble as 500; ensure the same changes are applied to the other
handler blocks referenced (lines ~556-560, 566-570, 588-592).
---
Duplicate comments:
In `@bindu/server/endpoints/certificates.py`:
- Around line 325-337: The DB insert is happening before the external Hydra
bind, causing inconsistent state if _bind_certificate fails; change the flow to
perform the Hydra operation first and only insert the active certificate row
after _bind_certificate succeeds, or wrap the DB insert and the
_bind_certificate call in a single DB transaction/compensating block so on any
exception you either roll back the insert or update the inserted row to a
terminal error state (e.g., delete the row or set status='failed'/'pending').
Apply the same pattern to the other similar blocks referenced (the blocks around
the ranges noted: 401-421 and 467-485) and ensure you reference/handle failures
from hydra_client and _bind_certificate consistently, using the
agent_certificates_table operations to undo or mark the record appropriately.
- Around line 102-109: The current write of the CA private key using
open(ca_key_path, "wb") is insecure and overwrite-prone; replace it with a
POSIX-safe exclusive create using os.open with flags O_WRONLY|O_CREAT|O_EXCL and
mode 0o600 to atomically create the file with restrictive permissions (and
raise/fail if the file exists), and on Windows fallback to a normal open("wb")
followed immediately by os.chmod(path, 0o600) to enforce permissions; locate the
code that calls ca_key.private_bytes(...) and writes to ca_key_path and apply
the same pattern to other private-key writes across bindu/**/*.py so all DID/CA
key writes use the secure write routine.
- Around line 37-39: Replace the hardcoded CERT_TTL_HOURS and the literal
"~/.bindu/certs" path with values read from bindu.settings.app_settings: import
app_settings and use a configurable key (e.g. app_settings.MTLS_CERT_TTL_HOURS
with a default of 24 and app_settings.CERTS_DIR with a default of
"~/.bindu/certs") wherever CERT_TTL_HOURS or the literal path appear (including
the other occurrences noted). Update any functions or modules that reference
CERT_TTL_HOURS (symbol CERT_TTL_HOURS) or the string "~/.bindu/certs" to read
from app_settings, preserve the original defaults if the settings are absent,
and remove the local constant so all configuration comes from app_settings.
- Around line 507-510: Replace the manual JSON-parsing pattern (body = await
request.json(); agent_did = body.get("agent_did"); csr_pem = body.get("csr"))
with the certificate TypeAdapter validation flow: pass request.json() into the
appropriate certificate TypeAdapter (the module's certificate TypeAdapter for
incoming certificate requests) to parse/validate and extract strongly-typed
fields, then use the adapter's validated attributes (e.g., agent_did, csr)
downstream; apply the same change for the similar spots that read
csr_pem/agent_did at the other occurrences so no external input bypasses the
TypeAdapter validation.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9753f001-2dbb-4b98-803c-c287c28ff4d2
📒 Files selected for processing (1)
bindu/server/endpoints/certificates.py
Actionable comments posted: 5
♻️ Duplicate comments (1)
bindu/server/endpoints/certificates.py (1)
46-60: ⚠️ Potential issue | 🟠 Major
Drop the hardcoded fallbacks — access `app_settings.mtls` directly.
`Settings` always initializes `mtls: MTLSSettings = MTLSSettings()`, so the `getattr(app_settings, "mtls", None)` fallback is unreachable — but it reintroduces the exact local defaults (`24`, `"~/.bindu/certs"`) that the previous review asked you to move into settings. It also leaves an orphan `# Default certificate TTL — 24 hours as per ADR` comment at line 60 with nothing to attach to.
As per coding guidelines, `bindu/**/*.py`: NEVER create local config constants - use `app_settings` from `bindu.settings`.
🔧 Proposed fix
-def _get_cert_ttl_hours() -> int:
-    """Get certificate TTL from settings with a safe fallback."""
-    return getattr(getattr(app_settings, "mtls", None), "cert_ttl_hours", 24)
-
-
-def _get_certs_dir() -> Path:
-    """Get certificate directory from settings with a safe fallback."""
-    certs_dir = getattr(
-        getattr(app_settings, "mtls", None),
-        "certs_dir",
-        "~/.bindu/certs",
-    )
-    return Path(certs_dir).expanduser()
-
-# Default certificate TTL — 24 hours as per ADR
+def _get_cert_ttl_hours() -> int:
+    """Get certificate TTL from settings."""
+    return app_settings.mtls.cert_ttl_hours
+
+
+def _get_certs_dir() -> Path:
+    """Get certificate directory from settings."""
+    return Path(app_settings.mtls.certs_dir).expanduser()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/certificates.py` around lines 46 - 60, The helpers _get_cert_ttl_hours and _get_certs_dir should stop using getattr fallbacks and read values directly from app_settings.mtls (which is always initialized); replace the getattr chains with direct accesses to app_settings.mtls.cert_ttl_hours and app_settings.mtls.certs_dir, keep converting the certs_dir to a Path and calling expanduser(), and remove the orphan comment about the 24‑hour default; reference the functions _get_cert_ttl_hours and _get_certs_dir and the app_settings.mtls/MTLSSettings attributes when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindu/server/endpoints/certificates.py`:
- Around line 553-555: Replace the logger.error(...) calls in the catch-all
except blocks for the certificate issuance, renew, and revoke handlers with
logger.exception(...) so the full traceback is preserved; locate the occurrences
of the generic "except Exception as e:" blocks in the issuance, renew and revoke
handlers (the three blocks around the current diff and at the other mentioned
locations) and change logger.error(f"Certificate issuance failed: {e}") (and the
analogous renew/revoke messages) to logger.exception("Certificate issuance
failed") (or the matching message text for renew/revoke) to retain stack traces
while keeping the same log message.
- Line 544: Replace direct access to the private storage attribute and use the
public Storage API: change any "async with app._storage.connection()" calls to
"async with app.storage.connection()" so the code uses the configured
StorageBackend (e.g., MemoryStorage or PostgresStorage) via the BinduApplication
contract; update the three occurrences of that pattern in the certificates
endpoint (where the async context manager is created) to use
app.storage.connection() instead.
- Around line 185-205: The PUT call to hydra_client._http_client.replace is
overwriting the entire client because PUT replaces all fields; instead fetch the
existing client with get_oauth_client(agent_did), merge/replace only the jwks
entry into that client dict, then PUT the full merged client back (or use
Hydra's PATCH to only mutate jwks) to avoid wiping grant_types, scope,
redirect_uris, token_endpoint_auth_method, audience, metadata, etc.; also stop
using the private _http_client directly—implement public HydraClient methods
bind_certificate(agent_did, cert_fingerprint) and unbind_certificate(agent_did,
cert_fingerprint) that encapsulate the GET→modify→PUT (or PATCH) logic and raise
CertificateBindingError on failures.
- Around line 197-212: The helper functions _bind_certificate and
_unbind_certificate violate encapsulation by using HydraClient._http_client
directly and they also drop exception chains; add a public method on HydraClient
named update_client_jwks(client_id, keys) that performs the GET→modify→PUT
sequence and handles the aiohttp response lifecycle so callers no longer access
_http_client, then refactor _bind_certificate/_unbind_certificate to call
HydraClient.update_client_jwks; also update both except blocks to re-raise using
"raise CertificateBindingError(...) from error" (preserving original exception)
instead of plain raise.
In `@bindu/settings.py`:
- Around line 1012-1019: The MTLSSettings.env_prefix currently uses
"BINDU_MTLS_" which breaks the project's nested-settings convention; change
MTLSSettings.model_config.env_prefix to the double-underscore form used by other
submodels (e.g., "MTLS__") so keys nest under the top-level Settings parser, and
then update any downstream references (for example in
bindu/server/applications.py) and the PR summary to use the reconciled key name
(e.g., MTLS__ENABLED or security.mtls_enabled as agreed) so configuration access
is consistent across the codebase.
---
Duplicate comments:
In `@bindu/server/endpoints/certificates.py`:
- Around line 46-60: The helpers _get_cert_ttl_hours and _get_certs_dir should
stop using getattr fallbacks and read values directly from app_settings.mtls
(which is always initialized); replace the getattr chains with direct accesses
to app_settings.mtls.cert_ttl_hours and app_settings.mtls.certs_dir, keep
converting the certs_dir to a Path and calling expanduser(), and remove the
orphan comment about the 24‑hour default; reference the functions
_get_cert_ttl_hours and _get_certs_dir and the app_settings.mtls/MTLSSettings
attributes when making the change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0dd03c62-2068-4031-a3e9-e9731b273f89
📒 Files selected for processing (2)
bindu/server/endpoints/certificates.py
bindu/settings.py
payload = {
    "client_id": agent_did,
    "jwks": {
        "keys": [
            {
                "use": "sig",
                "kty": "RSA",
                "x5t#S256": cert_fingerprint,
            }
        ]
    },
}
try:
    response = await hydra_client._http_client.put(
        f"/admin/clients/{encoded_did}", json=payload
    )
    if response.status not in (200, 201):
        error_text = await response.text()
        raise CertificateBindingError(
            f"Failed to bind certificate to Hydra client: {error_text}"
        )
🧩 Analysis chain
🌐 Web query:
Ory Hydra admin API PUT /admin/clients/{id} replace vs merge semantics
💡 Result:
The Ory Hydra Admin API endpoint PUT /admin/clients/{id} uses replace semantics. It fully replaces the existing OAuth 2.0 client with the provided payload. Any fields omitted from the request body will be set to their default values or null/empty, potentially resetting configuration. This is confirmed by:
- Official CLI documentation: "This command replaces an OAuth 2.0 Client by its ID. Please be aware that this command replaces the entire client. If only the name flag is provided, all other fields are updated to their default values."
- Source code comment: "Replaces an existing OAuth 2.0 Client with the payload you send."
- GitHub issue #2505: PUT overwrites with provided data (no merge), recommending PATCH for partial updates.
For merge-like semantics (partial updates), use PATCH /admin/clients/{id} instead, which applies a JSON Patch and only modifies specified fields.
Citations:
- 1: https://www.ory.sh/docs/hydra/cli/hydra-update-client
- 2: Update Client endpoint can not handle simultaneous requests property ory/hydra#2505
- 3: https://ory.sh/docs/hydra/cli/hydra-update-client
- 4: https://github.com/ory/hydra/blob/1cc40581/client/handler.go
- 5: https://www.ory.sh/docs/hydra/guides/oauth2-clients
- 6: https://www.ory.sh/docs/hydra/cli/hydra-update-oauth2-client
🏁 Script executed:
# Find HydraClient definition
fd -t f HydraClient
Repository: GetBindu/Bindu
Length of output: 40
🏁 Script executed:
# Look at the full certificates.py file structure and HydraClient usage
find . -name "*.py" -type f | xargs grep -l "class HydraClient" | head -5
Repository: GetBindu/Bindu
Length of output: 87
🏁 Script executed:
# Check the bindu/server/endpoints/certificates.py file to see the full _bind_certificate and _unbind_certificate methods
cat -n bindu/server/endpoints/certificates.py | head -250
Repository: GetBindu/Bindu
Length of output: 10082
🏁 Script executed:
cat -n bindu/auth/hydra/client.py
Repository: GetBindu/Bindu
Length of output: 12530
🏁 Script executed:
# Check AsyncHTTPClient to see if it properly manages response contexts
find . -name "http.py" -o -name "async_http*.py" | grep -i async
Repository: GetBindu/Bindu
Length of output: 40
🏁 Script executed:
# Search for AsyncHTTPClient definition
rg "class AsyncHTTPClient" -A 30
Repository: GetBindu/Bindu
Length of output: 1950
🏁 Script executed:
# Get the full AsyncHTTPClient class, especially the PUT/GET methods
cat -n bindu/utils/http/client.py | head -200
Repository: GetBindu/Bindu
Length of output: 8194
🏁 Script executed:
# Continue reading the AsyncHTTPClient to see the PUT method
sed -n '200,280p' bindu/utils/http/client.py

Repository: GetBindu/Bindu
Length of output: 2259
Critical: PUT /admin/clients/{id} replaces the entire OAuth2 client, wiping all fields not in the payload.
The payload only sets client_id and jwks. Ory Hydra's admin API treats PUT /admin/clients/{id} as a full replacement—any field omitted is reset to its default value. After the first certificate binding, the target client loses grant_types, scope, redirect_uris, token_endpoint_auth_method, audience, metadata, and all other configuration, breaking every subsequent auth flow for that agent. _unbind_certificate (line 215) has the same failure mode.
Use a GET→modify→PUT pattern: fetch the existing client via the public get_oauth_client(agent_did) method, merge the jwks field only, then PUT the full merged client back. Alternatively, use PATCH (RFC 6902 JSON Patch) which Ory Hydra supports for clients and only mutates specified fields.
Additionally, both functions access the private _http_client attribute directly. Consider adding public methods bind_certificate and unbind_certificate to HydraClient to encapsulate this logic and prevent future maintenance issues.
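The GET→modify→PUT approach the review recommends reduces to a pure merge step: fetch the full client, overwrite only the `jwks` field, and send the complete object back. A minimal sketch of that merge (the `merge_jwks` helper and the abridged client fields are illustrative assumptions, not the repository's actual API):

```python
def merge_jwks(existing_client: dict, keys: list[dict]) -> dict:
    """Return a full client payload with only the jwks field replaced.

    A shallow copy keeps grant_types, scope, redirect_uris, etc. intact,
    so a subsequent PUT no longer wipes them.
    """
    merged = dict(existing_client)
    merged["jwks"] = {"keys": keys}
    return merged


# Existing client roughly as GET /admin/clients/{id} might return it (abridged).
client = {
    "client_id": "did:example:agent",
    "grant_types": ["client_credentials"],
    "scope": "openid",
    "jwks": {"keys": []},
}
new_key = {"kty": "RSA", "x5t#S256": "abc123"}
payload = merge_jwks(client, [new_key])
assert payload["grant_types"] == ["client_credentials"]  # preserved
assert payload["jwks"]["keys"] == [new_key]              # only jwks changed
```

PUT-ing `payload` instead of a two-field body is what prevents Hydra's replace semantics from resetting the rest of the client.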
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/endpoints/certificates.py` around lines 185 - 205, The PUT call
to hydra_client._http_client.replace is overwriting the entire client because
PUT replaces all fields; instead fetch the existing client with
get_oauth_client(agent_did), merge/replace only the jwks entry into that client
dict, then PUT the full merged client back (or use Hydra's PATCH to only mutate
jwks) to avoid wiping grant_types, scope, redirect_uris,
token_endpoint_auth_method, audience, metadata, etc.; also stop using the
private _http_client directly—implement public HydraClient methods
bind_certificate(agent_did, cert_fingerprint) and unbind_certificate(agent_did,
cert_fingerprint) that encapsulate the GET→modify→PUT (or PATCH) logic and raise
CertificateBindingError on failures.
```python
    try:
        response = await hydra_client._http_client.put(
            f"/admin/clients/{encoded_did}", json=payload
        )
        if response.status not in (200, 201):
            error_text = await response.text()
            raise CertificateBindingError(
                f"Failed to bind certificate to Hydra client: {error_text}"
            )
        logger.debug(
            f"Certificate bound to Hydra client: did={agent_did}, "
            f"fingerprint={cert_fingerprint[:16]}..."
        )
    except (aiohttp.ClientError, CertificateBindingError) as error:
        logger.error(f"Failed to bind agent certificate: {error}")
        raise CertificateBindingError(f"Certificate binding failed: {str(error)}")
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm HydraClient's actual public surface so we know which method to add/use.
fd -t f 'client.py' bindu/auth/hydra
ast-grep --pattern $'class HydraClient:
$$$'
rg -nP --type=py -C2 '\b_http_client\b'

Repository: GetBindu/Bindu
Length of output: 32977
🏁 Script executed:
# Check the exact code in certificates.py at the specified line ranges
cd bindu/server/endpoints && wc -l certificates.py

Repository: GetBindu/Bindu
Length of output: 78
🏁 Script executed:
# Read the full context around both line ranges (197-212 and 227-239)
sed -n '180,250p' bindu/server/endpoints/certificates.py | cat -n

Repository: GetBindu/Bindu
Length of output: 3113
🏁 Script executed:
# Check AsyncHTTPClient implementation to understand response lifecycle
fd -t f 'http_client.py' bindu/utils/http
head -150 bindu/utils/http/http_client.py | cat -n

Repository: GetBindu/Bindu
Length of output: 149
🏁 Script executed:
# Find AsyncHTTPClient definition
rg -l 'class AsyncHTTPClient' --type py

Repository: GetBindu/Bindu
Length of output: 85
🏁 Script executed:
# Also check what's in bindu/utils/http directory
fd -type f bindu/utils/http

Repository: GetBindu/Bindu
Length of output: 229
🏁 Script executed:
# Check AsyncHTTPClient implementation for response lifecycle management
sed -n '1,200p' bindu/utils/http/client.py | cat -n

Repository: GetBindu/Bindu
Length of output: 8194
🏁 Script executed:
# Continue reading to see put() method
sed -n '200,300p' bindu/utils/http/client.py | cat -n

Repository: GetBindu/Bindu
Length of output: 3513
Add a public method to HydraClient for updating client JWKS; use exception chaining in both helpers.
Two issues in both _bind_certificate and _unbind_certificate:
- Encapsulation violation: Both functions reach into the private `_http_client` attribute. `HydraClient` should expose a public method (e.g., `update_client_jwks(client_id, keys)`) that atomically performs the GET→modify→PUT sequence and manages the aiohttp response lifecycle properly.
- Ruff B904, missing exception chaining: Both `except` clauses re-raise without preserving the original exception cause. Use `raise ... from error` to maintain the exception chain.
🔧 Minimal fix for exception chaining

```diff
-        except (aiohttp.ClientError, CertificateBindingError) as error:
-            logger.error(f"Failed to bind agent certificate: {error}")
-            raise CertificateBindingError(f"Certificate binding failed: {str(error)}")
+        except (aiohttp.ClientError, CertificateBindingError) as error:
+            logger.error(f"Failed to bind agent certificate: {error}")
+            raise CertificateBindingError(f"Certificate binding failed: {error!s}") from error
```

Apply the same pattern to the corresponding except block in `_unbind_certificate`.
However, the architectural fix remains preferable: add a public method on HydraClient to encapsulate the JWKS update logic and avoid direct access to internal state.
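The `from error` form attaches the original exception as `__cause__`, so a traceback shows both failures in sequence. A self-contained demonstration (the exception class here is a stand-in, not the project's actual type):

```python
class CertificateBindingError(Exception):
    """Stand-in for the project's binding error type."""


def bind_certificate() -> None:
    try:
        raise ConnectionError("hydra unreachable")  # simulated transport failure
    except ConnectionError as error:
        # "from error" preserves the original exception, satisfying Ruff B904.
        raise CertificateBindingError(f"Certificate binding failed: {error!s}") from error


caught = None
try:
    bind_certificate()
except CertificateBindingError as exc:
    caught = exc

# The chained cause stays available for logging and debugging.
assert isinstance(caught.__cause__, ConnectionError)
assert str(caught.__cause__) == "hydra unreachable"
```

Without `from error`, `__cause__` would be `None` and the original `ConnectionError` context would be easy to lose in log output.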
🧰 Tools
🪛 Ruff (0.15.11)
[warning] 212-212: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
[warning] 212-212: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/endpoints/certificates.py` around lines 197 - 212, The helper
functions _bind_certificate and _unbind_certificate violate encapsulation by
using HydraClient._http_client directly and they also drop exception chains; add
a public method on HydraClient named update_client_jwks(client_id, keys) that
performs the GET→modify→PUT sequence and handles the aiohttp response lifecycle
so callers no longer access _http_client, then refactor
_bind_certificate/_unbind_certificate to call HydraClient.update_client_jwks;
also update both except blocks to re-raise using "raise
CertificateBindingError(...) from error" (preserving original exception) instead
of plain raise.
```python
if auth_error := _authorize_agent_did(_get_caller_did(request), agent_did):
    return auth_error

async with app._storage.connection() as conn:
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm the public storage attribute on the application object.
ast-grep --pattern $'class BinduApplication:
$$$'
rg -nP --type=py -C2 '\bself\._storage\b|\bself\.storage\b' bindu/server

Repository: GetBindu/Bindu
Length of output: 19692
Use the public app.storage, not app._storage.
The endpoints at lines 544, 575, and 610 access app._storage.connection(). This bypasses the storage contract—if BinduApplication renames the private attribute, these break. Per the coding guidelines for bindu/server/**/*.py, use the auto-configured Storage pattern via app.storage, accessing StorageBackend (MemoryStorage or PostgresStorage).
🔧 Proposed fix
```diff
-    async with app._storage.connection() as conn:
+    async with app.storage.connection() as conn:
```

Apply at lines 544, 575, and 610.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/endpoints/certificates.py` at line 544, Replace direct access to
the private storage attribute and use the public Storage API: change any "async
with app._storage.connection()" calls to "async with app.storage.connection()"
so the code uses the configured StorageBackend (e.g., MemoryStorage or
PostgresStorage) via the BinduApplication contract; update the three occurrences
of that pattern in the certificates endpoint (where the async context manager is
created) to use app.storage.connection() instead.
```python
    except Exception as e:
        logger.error(f"Certificate issuance failed: {e}")
        return JSONResponse({"error": "Certificate issuance failed"}, status_code=500)
```
Preserve the traceback on the catch-all branches.
The except Exception as e: logger.error(f"...: {e}") pattern drops the stack, which makes these 500s nearly impossible to debug when Hydra/DB/crypto fail unexpectedly. Swap to logger.exception(...) (same message, no {e} needed) — Ruff BLE001 will still warn about the blind catch, but at least you'll have the trace.
🔧 Proposed fix
```diff
-    except Exception as e:
-        logger.error(f"Certificate issuance failed: {e}")
+    except Exception:
+        logger.exception("Certificate issuance failed")
         return JSONResponse({"error": "Certificate issuance failed"}, status_code=500)
```

(Apply the same change to the renew and revoke handlers.)
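The difference between `logger.error` and `logger.exception` is easy to verify: inside an `except` block, `logger.exception` logs at ERROR level and appends the full traceback to the record. A quick stdlib check:

```python
import io
import logging

# Capture log output in memory so we can inspect it.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
logger = logging.getLogger("demo_cert_endpoint")
logger.addHandler(handler)
logger.setLevel(logging.ERROR)


def handle_issue() -> None:
    try:
        raise ValueError("boom")  # simulated crypto/DB failure
    except Exception:
        # logger.exception implicitly includes exc_info, preserving the stack.
        logger.exception("Certificate issuance failed")


handle_issue()
out = buf.getvalue()
assert "Certificate issuance failed" in out
assert "Traceback" in out and "ValueError: boom" in out
```

With plain `logger.error(f"...: {e}")`, only the exception's message would appear and the stack that produced the 500 would be lost.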
Also applies to: 588-590, 623-625
🧰 Tools
🪛 Ruff (0.15.11)
[warning] 553-553: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/endpoints/certificates.py` around lines 553 - 555, Replace the
logger.error(...) calls in the catch-all except blocks for the certificate
issuance, renew, and revoke handlers with logger.exception(...) so the full
traceback is preserved; locate the occurrences of the generic "except Exception
as e:" blocks in the issuance, renew and revoke handlers (the three blocks
around the current diff and at the other mentioned locations) and change
logger.error(f"Certificate issuance failed: {e}") (and the analogous
renew/revoke messages) to logger.exception("Certificate issuance failed") (or
the matching message text for renew/revoke) to retain stack traces while keeping
the same log message.
```python
class MTLSSettings(BaseSettings):
    """mTLS certificate lifecycle configuration settings."""

    model_config = SettingsConfigDict(
        env_prefix="BINDU_MTLS_",
        env_file=".env",
        extra="ignore",
    )
```
Align MTLSSettings.env_prefix with the project convention.
Every other submodel in this file uses the <SECTION>__ (double-underscore) prefix matching Settings.env_nested_delimiter="__" (e.g., HYDRA__, VAULT__, GRPC__, NEGOTIATION__). BINDU_MTLS_ is the only divergence — it won't nest under the top-level Settings parser and creates an inconsistent ops surface (BINDU_MTLS_ENABLED vs. MTLS__ENABLED).
Also, the PR description/summary references security.mtls_enabled, but the implemented key is mtls.enabled. Worth reconciling the summary and any downstream references (e.g., bindu/server/applications.py).
🔧 Proposed fix
```diff
 class MTLSSettings(BaseSettings):
     """mTLS certificate lifecycle configuration settings."""

     model_config = SettingsConfigDict(
-        env_prefix="BINDU_MTLS_",
+        env_prefix="MTLS__",
         env_file=".env",
         extra="ignore",
     )
```

🤖 Prompt for AI Agents
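To illustrate why the delimiter matters: with `env_nested_delimiter="__"`, an env var like `MTLS__ENABLED` splits into a section (`mtls`) and a field (`enabled`), while a flat `BINDU_MTLS_ENABLED` never matches the nested parser at all. A stdlib-only sketch of that splitting (a simplified illustration of the convention, not pydantic-settings' actual implementation):

```python
def parse_nested_env(environ: dict[str, str], delimiter: str = "__") -> dict[str, dict[str, str]]:
    """Group SECTION__FIELD env vars into {section: {field: value}}."""
    nested: dict[str, dict[str, str]] = {}
    for key, value in environ.items():
        if delimiter not in key:
            continue  # flat keys like BINDU_MTLS_ENABLED never nest
        section, _, field = key.partition(delimiter)
        nested.setdefault(section.lower(), {})[field.lower()] = value
    return nested


env = {
    "MTLS__ENABLED": "true",
    "HYDRA__ADMIN_URL": "http://localhost:4445",
    "BINDU_MTLS_ENABLED": "true",  # single underscores only: silently ignored
}
settings = parse_nested_env(env)
assert settings == {
    "mtls": {"enabled": "true"},
    "hydra": {"admin_url": "http://localhost:4445"},
}
```

This is the inconsistency the review flags: operators would set `MTLS__ENABLED` for every other section but need the one-off `BINDU_MTLS_ENABLED` for this submodel.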
Verify each finding against the current code and only fix it if needed.
In `@bindu/settings.py` around lines 1012 - 1019, The MTLSSettings.env_prefix
currently uses "BINDU_MTLS_" which breaks the project's nested-settings
convention; change MTLSSettings.model_config.env_prefix to the double-underscore
form used by other submodels (e.g., "MTLS__") so keys nest under the top-level
Settings parser, and then update any downstream references (for example in
bindu/server/applications.py) and the PR summary to use the reconciled key name
(e.g., MTLS__ENABLED or security.mtls_enabled as agreed) so configuration access
is consistent across the codebase.
🧹 Nitpick comments (1)
bindu/server/__init__.py (1)
31-35: Align the lazy export helper with the repo's import and typing rules.

Keep the lazy load, but this helper should use an absolute import and a typed signature. Adding a `TYPE_CHECKING` import also preserves editor/type-checker support without reintroducing the eager import.

♻️ Suggested adjustment

```diff
+from typing import TYPE_CHECKING, Any
+
+if TYPE_CHECKING:
+    from bindu.server.applications import BinduApplication
+
-
-def __getattr__(name: str):
+def __getattr__(name: str) -> Any:
     """Lazily expose server symbols that would otherwise cause circular imports."""
     if name == "BinduApplication":
-        from .applications import BinduApplication
+        from bindu.server.applications import BinduApplication
         return BinduApplication
```

As per coding guidelines: "Use absolute imports with format `from bindu.server.storage import StorageBackend` instead of relative imports across packages", "Use type-only imports with `from typing import TYPE_CHECKING` and `if TYPE_CHECKING:` pattern for type hints", and "Python 3.12+ is required with type hints mandatory for all code".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/__init__.py` around lines 31 - 35, The __getattr__ lazy-export helper must use an absolute runtime import and a typed signature plus a TYPE_CHECKING type-only import to preserve type-checking; change the signature to def __getattr__(name: str) -> typing.Any:, add from typing import TYPE_CHECKING, Any and under if TYPE_CHECKING: from bindu.server.applications import BinduApplication, and inside __getattr__ perform an absolute import (from bindu.server.applications import BinduApplication) when name == "BinduApplication" and then return the symbol (e.g., globals()[name] or BinduApplication) so the attribute is exposed at runtime without causing circular eager imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@bindu/server/__init__.py`:
- Around line 31-35: The __getattr__ lazy-export helper must use an absolute
runtime import and a typed signature plus a TYPE_CHECKING type-only import to
preserve type-checking; change the signature to def __getattr__(name: str) ->
typing.Any:, add from typing import TYPE_CHECKING, Any and under if
TYPE_CHECKING: from bindu.server.applications import BinduApplication, and
inside __getattr__ perform an absolute import (from bindu.server.applications
import BinduApplication) when name == "BinduApplication" and then return the
symbol (e.g., globals()[name] or BinduApplication) so the attribute is exposed
at runtime without causing circular eager imports.
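The lazy-export pattern discussed above relies on PEP 562 module-level `__getattr__`: attribute lookups that miss the module's namespace fall through to that hook, so the heavy import runs only on first access. A self-contained sketch of the mechanism (using a synthetic module and a hypothetical stand-in class, not the actual bindu package):

```python
import types

# Build a throwaway module object to demonstrate PEP 562 lazy loading.
mod = types.ModuleType("lazy_demo")


def _module_getattr(name: str):
    if name == "BinduApplication":
        # In the real package this would be an absolute import, e.g.
        # `from bindu.server.applications import BinduApplication`.
        class BinduApplication:  # hypothetical stand-in
            pass

        mod.BinduApplication = BinduApplication  # cache for subsequent lookups
        return BinduApplication
    raise AttributeError(name)


# PEP 562: on an attribute miss, modules consult __getattr__ in their namespace.
mod.__getattr__ = _module_getattr

app_cls = mod.BinduApplication  # first access triggers the lazy path
assert mod.BinduApplication is app_cls  # second access hits the cached attribute
```

Caching the resolved symbol back onto the module is what keeps repeated accesses from re-running the import machinery.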
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2169a3c4-595b-439f-bb2f-48c720932348
📒 Files selected for processing (2)
- bindu/extensions/semantic_memory/memory_store.py
- bindu/server/__init__.py
✅ Files skipped from review due to trivial changes (1)
- bindu/extensions/semantic_memory/memory_store.py
Summary
`mtls_enabled` setting. `pip install bindu` + zero-config agent still works with zero TLS configuration. This is the infrastructure layer only — the ASGI/Uvicorn SSL context (Standalone Mode) is being handled separately by @Subhajitdas99 in a companion PR.

Change Type
Scope
Linked Issue/PR
User-Visible / Behavior Changes
None by default. mTLS is disabled (`mtls_enabled = False`) unless explicitly enabled in settings. When enabled, three new endpoints become available:

- POST /api/v1/certificates/issue
- POST /api/v1/certificates/renew
- POST /api/v1/certificates/revoke

On first run with mTLS enabled, a local Root CA is auto-generated at `~/.bindu/certs/` with no external dependencies.
Yes / Yes / Yes / Yes / Yes

Explanation:

- `~/.bindu/certs/ca.key` with no passphrase — acceptable for local dev, operators should replace with Vault PKI or AWS Private CA in production (documented in ADR).
- `x5t#S256` claim — a PUT to the Hydra Admin API is made on every issue/renew/revoke.
- `20260322_0001` adds `agent_certificates` and `certificate_audit_log` tables — no changes to existing tables, fully backward compatible.
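Per RFC 8705, the `x5t#S256` value mentioned above is the unpadded base64url encoding of the SHA-256 hash of the certificate's DER bytes. A stdlib-only sketch of computing it (the input bytes here are a dummy placeholder, not a real certificate):

```python
import base64
import hashlib


def x5t_s256(cert_der: bytes) -> str:
    """Compute an RFC 8705 x5t#S256 thumbprint: unpadded base64url(SHA-256(DER))."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


thumbprint = x5t_s256(b"not-a-real-der-certificate")
assert len(thumbprint) == 43  # a 32-byte digest encodes to 43 base64url chars
assert "=" not in thumbprint  # padding is stripped per the spec
```

Because the thumbprint is a pure function of the DER bytes, renewing a certificate always produces a new value, which is why every issue/renew/revoke must push an updated binding to Hydra.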
Environment
Steps to Test
- `uv run alembic history` — confirm `20260322_0001` appears as the single head
- `python examples/beginner/beginner_zero_config_agent.py` — confirm agent starts normally with no mTLS errors (opt-out default)
- `uv run pytest` — confirm existing tests unaffected
- With `mtls_enabled = True`: run `uv run alembic upgrade head`, then `POST /api/v1/certificates/issue` with a valid CSR — expect 201 + PEM certificate in response
- `POST /api/v1/certificates/revoke` — confirm status `revoked` in `agent_certificates` table and Hydra binding cleared
- `alembic history` shows a single clean migration head
- `/issue` returns a signed PEM certificate with SHA-256 fingerprint
- `/revoke` marks cert as revoked in DB and removes Hydra binding immediately

Actual Behavior
- http://localhost:3773 ✅
- `alembic history` shows single head `20260322_0001` ✅

Evidence
- `alembic history` output confirms clean migration chain

Human Verification
- `mtls_enabled` guard uses `getattr` with fallback — safe if `security` section missing from settings entirely.

Compatibility / Migration
- Yes
- Yes — new optional `security.mtls_enabled` flag, defaults to `False`
- Yes — `uv run alembic upgrade head` to create `agent_certificates` and `certificate_audit_log` tables. No existing tables modified.

Failure Recovery
- `security.mtls_enabled = False` (already the default) — certificate routes simply won't register
- `bindu/server/applications.py` — remove `_register_certificate_endpoints()` call and method
- `/issue` and `/revoke` will return 500 — this is expected and logged

Risks and Mitigations
- `~/.bindu/certs/ca.key`
- `ValueError` on failure — the endpoint returns 500 and logs the error. No partial state is committed.

Checklist
- (`uv run pytest`) — pytest hook fails with `uv: command not found` in pre-commit environment (Windows path issue, unrelated to this PR)

Summary by CodeRabbit
New Features
Documentation
Chores
Tests