FEAT Wire frontend attack view to backend APIs #1371
romanlutz wants to merge 15 commits into Azure:main
Conversation
@@ -91,9 +96,3 @@ def setup_frontend() -> None:

# Set up frontend at module load time (needed when running via uvicorn)
setup_frontend()
maybe this should go into lifespan?
setup_frontend needs to happen at module load time otherwise uvicorn can't serve the routes. From what I understand, lifespan is for startup/shutdown tasks that are primarily async, not FastAPI route setup. That's why the comment is there right above 🙂 I'm happy to add to it to make this clearer if you have suggestions.
Can you add screenshots for the frontend?
Backend:
- Replace private CentralMemory._memory_instance access with try/except
around the public get_memory_instance() API in the lifespan handler.
Initialization:
- Extract run_initializers_async() as a public function in
pyrit.setup.initialization so initializer execution can be invoked
without redundantly re-loading env files, resetting defaults, and
re-creating the memory instance.
- FrontendCore.run_initializers_async() now calls the new function
directly instead of re-invoking initialize_pyrit_async.
- Export run_initializers_async from pyrit.setup.
Frontend:
- Extract TargetTable into its own component (TargetTable.tsx).
- Move makeStyles definitions to co-located .styles.ts files for
TargetConfig, TargetTable, and CreateTargetDialog.
- Remove redundant explicit generic in useState<string>('') calls.
- Use FluentUI Field validationMessage/validationState props for
inline field-level validation in CreateTargetDialog.
Tests:
- Update TestRunScenarioAsync patches to mock run_initializers_async
instead of initialize_pyrit_async.
Force-pushed c799aa4 to 7e30544
Pull request overview
Wires the PyRIT frontend attack experience to live backend APIs, including target management, attack execution, conversation branching, and richer message rendering.
Changes:
- Added/updated backend attack + target endpoints and DTOs (including conversation summaries and target metadata).
- Refactored initialization flow to shift lifespan init into CLI and consolidate initializer execution.
- Expanded frontend UI (config/history/chat) and added extensive unit + E2E coverage.
Reviewed changes
Copilot reviewed 84 out of 88 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| tests/unit/registry/test_converter_registry.py | Adds unit tests for converter registry singleton/metadata behavior. |
| tests/unit/memory/test_sqlite_memory.py | Adds tests for new conversation stats aggregation in SQLite memory. |
| tests/unit/cli/test_pyrit_backend.py | Adds tests for CLI config-file forwarding and server startup flow. |
| tests/unit/cli/test_frontend_core.py | Updates patch targets and initializer execution expectations after refactor. |
| tests/unit/backend/test_target_service.py | Updates tests for target registry naming changes. |
| tests/unit/backend/test_main.py | Updates lifespan expectations to “warn only” behavior. |
| pyrit/setup/initializers/airt_targets.py | Adds extra kwargs support and new/renamed target presets. |
| pyrit/setup/initializers/airt.py | Switches auth approach to Entra token providers and updates required vars. |
| pyrit/setup/initialization.py | Extracts run_initializers_async to separate initializer execution from memory init. |
| pyrit/setup/__init__.py | Exposes run_initializers_async as part of setup public API. |
| pyrit/prompt_target/openai/openai_video_target.py | Enforces single-turn conversation constraint for video target. |
| pyrit/prompt_target/openai/openai_target.py | Refactors OpenAI base target inheritance/init to align with PromptTarget. |
| pyrit/prompt_target/openai/openai_response_target.py | Includes target-specific params in identifiers (e.g., extra body params). |
| pyrit/prompt_target/openai/openai_realtime_target.py | Adjusts inheritance to keep chat-target semantics for realtime target. |
| pyrit/prompt_target/openai/openai_image_target.py | Enforces single-turn conversation constraint for image target. |
| pyrit/models/conversation_stats.py | Introduces ConversationStats aggregate model. |
| pyrit/models/attack_result.py | Adds attack_result_id onto domain AttackResult. |
| pyrit/models/__init__.py | Exports ConversationStats from models package. |
| pyrit/memory/sqlite_memory.py | Adds conversation stats query and refactors some filtering helpers/update behavior. |
| pyrit/memory/memory_models.py | Maps DB primary key into AttackResult.attack_result_id. |
| pyrit/memory/memory_interface.py | Adds conversation stats API and updates attack result insert/update semantics. |
| pyrit/memory/azure_sql_memory.py | Adds Azure SQL implementation for conversation stats and safer update behavior. |
| pyrit/cli/pyrit_backend.py | Refactors CLI to use FrontendCore two-step init and adds --config-file. |
| pyrit/cli/frontend_core.py | Moves deferred imports to module-level and adds run_initializers_async method. |
| pyrit/backend/services/target_service.py | Renames target id field to registry name and updates pagination cursor. |
| pyrit/backend/routes/version.py | Adds database backend info to version response payload. |
| pyrit/backend/routes/targets.py | Renames path params and docs to registry naming scheme. |
| pyrit/backend/routes/attacks.py | Expands attack routes to support conversations and changes identifiers to attack_result_id. |
| pyrit/backend/models/targets.py | DTO rename + adds supports_multiturn_chat. |
| pyrit/backend/models/attacks.py | Adds target metadata nesting, conversation endpoints, and new message request fields. |
| pyrit/backend/models/__init__.py | Updates exports for renamed message response DTO. |
| pyrit/backend/mappers/target_mappers.py | Maps multiturn capability and renames target id field in DTO mapping. |
| pyrit/backend/mappers/__init__.py | Renames exported async mapper function. |
| pyrit/backend/main.py | Removes standalone uvicorn runner and changes lifespan to warn when uninitialized. |
| frontend/src/utils/messageMapper.ts | Adds backend DTO ↔ UI Message mapping (attachments, reasoning, errors). |
| frontend/src/types/index.ts | Adds backend DTO type mirrors and expands UI message model. |
| frontend/src/services/api.ts | Adds targets/attacks/labels API clients and query serialization. |
| frontend/src/services/api.test.ts | Expands mocked API service tests for new endpoints. |
| frontend/src/components/Sidebar/Navigation.tsx | Adds navigation views (chat/history/config) and active styling. |
| frontend/src/components/Sidebar/Navigation.test.tsx | Updates navigation tests for new view switching behavior. |
| frontend/src/components/Layout/MainLayout.tsx | Shows DB info in version tooltip and wires navigation callbacks. |
| frontend/src/components/Layout/MainLayout.test.tsx | Updates layout tests for new navigation props and DB tooltip behavior. |
| frontend/src/components/Labels/LabelsBar.test.tsx | Adds unit tests for labels UI behavior and label fetching. |
| frontend/src/components/Config/TargetTable.tsx | Adds target list table UI with active-target selection controls. |
| frontend/src/components/Config/TargetTable.styles.ts | Adds styling for target table and active row highlighting. |
| frontend/src/components/Config/TargetConfig.tsx | Implements target config page with fetch/retry, refresh, and create dialog. |
| frontend/src/components/Config/TargetConfig.test.tsx | Adds tests for config page states and interactions. |
| frontend/src/components/Config/TargetConfig.styles.ts | Adds styling for config page layout and states. |
| frontend/src/components/Config/CreateTargetDialog.tsx | Adds create-target dialog and validation + submit flow. |
| frontend/src/components/Config/CreateTargetDialog.test.tsx | Adds tests for create-target dialog validation and submission. |
| frontend/src/components/Config/CreateTargetDialog.styles.ts | Adds styling for create-target dialog layout. |
| frontend/src/components/Chat/InputBox.tsx | Adds banners/locking states, ref API, and multiturn warnings for active target. |
| frontend/src/components/Chat/InputBox.test.tsx | Adds tests for new input-box behaviors (single-turn, ref attachments, banners). |
| frontend/src/components/Chat/ConversationPanel.tsx | Adds conversation list panel for attacks with promote-to-main and new conversation actions. |
| frontend/src/components/Chat/ConversationPanel.test.tsx | Adds tests for conversation panel rendering and interactions. |
| frontend/src/App.tsx | Introduces multi-view app shell, target selection, attack loading, and global labels. |
| frontend/src/App.test.tsx | Expands app tests for navigation, target selection, and opening historical attacks. |
| frontend/playwright.config.ts | Splits Playwright projects into seeded vs live modes. |
| frontend/package.json | Adds e2e scripts for seeded and live test projects. |
| frontend/eslint.config.js | Adds Node globals for Playwright e2e files. |
| frontend/e2e/config.spec.ts | Adds e2e coverage for config page and config↔chat flow. |
| frontend/e2e/api.spec.ts | Adds e2e API smoke tests (targets/attacks) and improves slow-backend handling. |
| frontend/e2e/accessibility.spec.ts | Updates a11y coverage for new navigation and config table; adjusts expected header text. |
| frontend/dev.py | Improves dev runner process management, adds detach/logs/config-file support. |
| frontend/README.md | Documents seeded vs live e2e modes. |
| .github/workflows/frontend_tests.yml | Runs seeded-only e2e in GitHub Actions. |
| .gitattributes | Adds union merge strategy for squad log/state files. |
| .devcontainer/devcontainer_setup.sh | Makes Playwright install failures non-blocking with clearer messaging. |
from pyrit.memory.memory_models import AttackResultEntry, PromptMemoryEntry

- return exists().where(
+ targeted_harm_categories_subquery = exists().where(
Both _get_attack_result_harm_category_condition and _get_attack_result_label_condition used to return an exists().where(...) condition, but now only assign it to a local variable (*_subquery) without returning it. That will make these helpers return None, breaking filtering logic that expects a SQLAlchemy condition. Return the constructed subquery (or revert to the inline return exists().where(...)).
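The failure mode is the classic Python missing-return pitfall. Reduced to a dependency-free sketch (helper names hypothetical), a function that binds the condition to a local and then falls off the end hands `None` to its caller:

```python
def broken_condition_helper(values: list[int]):
    # Bug: the condition is built and bound to a local, but the function
    # ends without a return, so Python implicitly returns None.
    condition = any(v > 0 for v in values)  # stands in for exists().where(...)


def fixed_condition_helper(values: list[int]) -> bool:
    condition = any(v > 0 for v in values)  # same construction
    return condition  # the caller gets a usable condition again
```

In the SQLAlchemy case that `None` is then appended to the filter conditions, silently disabling the harm-category and label filters.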
from pyrit.memory.memory_models import AttackResultEntry, PromptMemoryEntry

- return exists().where(
+ labels_subquery = exists().where(
Both _get_attack_result_harm_category_condition and _get_attack_result_label_condition used to return an exists().where(...) condition, but now only assign it to a local variable (*_subquery) without returning it. That will make these helpers return None, breaking filtering logic that expects a SQLAlchemy condition. Return the constructed subquery (or revert to the inline return exists().where(...)).
tb = traceback.format_exception(type(e), e, e.__traceback__)
# Include the root cause if chained
cause = e.__cause__
if cause:
    tb += traceback.format_exception(type(cause), cause, cause.__traceback__)
detail = "".join(tb)
raise HTTPException(
    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-   detail=f"Failed to add message: {str(e)}",
+   detail=detail,
) from e
Returning full stack traces in the HTTP 500 detail leaks internal code structure and potentially sensitive runtime data to clients. Log the traceback server-side (optionally include request IDs), and return a generic error message to the client; if you want trace exposure for development, gate it behind a dev/debug flag.
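One way to keep the trace server-side, sketched with stdlib logging only (the `DEBUG_ERRORS` flag and helper name are hypothetical, not part of the PR):

```python
import logging
import traceback

logger = logging.getLogger("pyrit.backend")
DEBUG_ERRORS = False  # hypothetical dev/debug flag gating trace exposure


def client_error_detail(e: Exception) -> str:
    """Log the full traceback (including a chained cause) server-side;
    return only a generic message for the HTTP response body."""
    tb = "".join(traceback.format_exception(type(e), e, e.__traceback__))
    if e.__cause__ is not None:
        cause = e.__cause__
        tb += "".join(traceback.format_exception(type(cause), cause, cause.__traceback__))
    logger.error("Failed to add message:\n%s", tb)
    # Clients only see the trace when the debug flag is explicitly enabled.
    return tb if DEBUG_ERRORS else "Failed to add message"
```

The route handler would then pass `client_error_detail(e)` as the HTTPException `detail`, optionally tagging the log record with a request ID for correlation.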
- class OpenAITarget(PromptChatTarget):
+ class OpenAITarget(PromptTarget):
Now that OpenAI targets use multiple inheritance elsewhere (e.g., RealtimeTarget(OpenAITarget, PromptChatTarget)), calling PromptTarget.__init__ directly bypasses cooperative initialization and can break MRO-based init chaining. Prefer super().__init__(...) in OpenAITarget.__init__ so all bases in the MRO can initialize correctly.
PromptTarget.__init__(
    self,
Now that OpenAI targets use multiple inheritance elsewhere (e.g., RealtimeTarget(OpenAITarget, PromptChatTarget)), calling PromptTarget.__init__ directly bypasses cooperative initialization and can break MRO-based init chaining. Prefer super().__init__(...) in OpenAITarget.__init__ so all bases in the MRO can initialize correctly.
- PromptTarget.__init__(
-     self,
+ super().__init__(
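A minimal sketch of why this matters (class bodies simplified; the real signatures differ): with `super()`, the MRO of the concrete class decides which `__init__` runs next, so `PromptChatTarget` is not skipped in the multiple-inheritance case.

```python
class PromptTarget:
    def __init__(self, *, endpoint: str, **kwargs) -> None:
        super().__init__(**kwargs)
        self.endpoint = endpoint


class PromptChatTarget(PromptTarget):
    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        self.is_chat = True


class OpenAITarget(PromptTarget):
    def __init__(self, **kwargs) -> None:
        # super() dispatches along the *instance's* MRO. For RealtimeTarget
        # that is OpenAITarget -> PromptChatTarget -> PromptTarget, so the
        # chat base still initializes. Calling PromptTarget.__init__(self, ...)
        # directly would jump straight past PromptChatTarget.
        super().__init__(**kwargs)


class RealtimeTarget(OpenAITarget, PromptChatTarget):
    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
```

Instantiating `RealtimeTarget(endpoint=...)` sets both `endpoint` (from `PromptTarget`) and `is_chat` (from `PromptChatTarget`), which the direct base-class call would miss.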
useEffect(() => {
  fetchConversations()
}, [fetchConversations, activeConversationId])
Including activeConversationId in this effect dependency will refetch the full conversations list on every conversation selection, creating unnecessary API traffic and UI churn. Consider fetching on attackResultId changes (and after mutating actions like create/promote), but not on local selection changes.
- }, [fetchConversations, activeConversationId])
+ }, [fetchConversations])
from pyrit.models import Message
from unit.mocks import get_sample_conversations
Several new imports are placed inside test functions. For consistency and readability, prefer moving these imports to the module level (unless there’s a specific reason to defer import side effects).
import uuid

from pyrit.models import MessagePiece
Several new imports are placed inside test functions. For consistency and readability, prefer moving these imports to the module level (unless there’s a specific reason to defer import side effects).
import uuid

from pyrit.models import MessagePiece
Several new imports are placed inside test functions. For consistency and readability, prefer moving these imports to the module level (unless there’s a specific reason to defer import side effects).
import uuid

from pyrit.models import MessagePiece
Several new imports are placed inside test functions. For consistency and readability, prefer moving these imports to the module level (unless there’s a specific reason to defer import side effects).
Pull request overview
Copilot reviewed 84 out of 88 changed files in this pull request and generated 4 comments.
Comments suppressed due to low confidence (1)
pyrit/memory/sqlite_memory.py:1
- This function used to return the exists().where(...) condition, but now assigns it to labels_subquery and never returns it. That will cause callers to append None to SQLAlchemy conditions and break attack filtering by labels. Return labels_subquery (or inline it back into a return statement).
# Copyright (c) Microsoft Corporation.
pyrit/memory/memory_interface.py (outdated)
from contextlib import closing

with closing(self.get_session()) as session:
    from sqlalchemy.exc import SQLAlchemyError

    try:
        session.add_all(entries)
        session.commit()
        # Populate the attack_result_id back onto the domain objects so callers
        # can reference the DB-assigned ID immediately after insert.
        for ar, entry in zip(attack_results, entries):
            ar.attack_result_id = str(entry.id)
    except SQLAlchemyError:
        session.rollback()
        raise
The new inline insert logic introduces deferred/local imports (closing, SQLAlchemyError). This makes the method harder to reason about and diverges from the project’s import organization (imports at module top). Move these imports to the top of the file (or reuse already-imported equivalents) and keep the method focused on DB operations.
test("should list targets", async ({ request }) => {
  const response = await request.get("/api/targets?count=50");

  expect(response.ok()).toBe(true);
  const data = await response.json();
  expect(data).toHaveProperty("items");
  expect(Array.isArray(data.items)).toBe(true);
});

test("should create and retrieve a target", async ({ request }) => {
  const createPayload = {
    target_type: "OpenAIChatTarget",
    params: {
      endpoint: "https://e2e-test.openai.azure.com",
      model_name: "gpt-4o-e2e-test",
      api_key: "e2e-test-key",
    },
  };

  const createResp = await request.post("/api/targets", { data: createPayload });
  // The endpoint may not be implemented, may require different schema, or may
  // return a validation error. Skip when the backend cannot handle the request.
  if (!createResp.ok()) {
    test.skip(true, `POST /api/targets returned ${createResp.status()} — skipping`);
    return;
  }
The E2E API tests don’t align with the backend contract: targets list uses limit (not count), and create uses type (not target_type). As written, the create test will likely always skip, reducing the value of the suite. Update these requests to match the API schema and avoid conditional skipping for expected/steady-state flows (use deterministic mocks or dedicated seeded endpoints if needed).
const mime = mimeField || defaultMimeForDataType(dataType)
const isBase64 = !value.startsWith('data:') && !value.startsWith('http')
const url = isBase64 ? buildDataUri(value, mime) : value
const prefix = isOriginal ? 'original_' : ''
The base64 detection treats any non-data: and non-http string as base64 and wraps it into a data URI. Backend values can be file paths (e.g., /tmp/x.png, C:\\...) or non-http URLs (blob:, file:, azblob:), which will become invalid data URIs. Consider a stricter base64 check (e.g., regex/length validation) and/or explicit handling for known path/scheme prefixes before deciding to build a data URI.
<Button
-  className={styles.iconButton}
+  className={currentView === 'chat' ? styles.activeButton : styles.iconButton}
   appearance="subtle"
   icon={<ChatRegular />}
   title="Chat"
   disabled
   onClick={() => onNavigate('chat')}
/>

<Button
   className={currentView === 'history' ? styles.activeButton : styles.iconButton}
   appearance="subtle"
   icon={<HistoryRegular />}
   title="Attack History"
   onClick={() => onNavigate('history')}
/>

<Button
   className={currentView === 'config' ? styles.activeButton : styles.iconButton}
   appearance="subtle"
   icon={<SettingsRegular />}
   title="Configuration"
   onClick={() => onNavigate('config')}
/>
These icon-only buttons rely on the title attribute for labeling. title is not consistently announced by screen readers and isn’t a reliable accessible name. Provide an explicit accessible label (e.g., aria-label) so the buttons have stable names for assistive tech and testing via role/name queries.
- Add run_initializers_async to pyrit.setup for programmatic initialization
- Switch AIRTInitializer to Entra (Azure AD) auth, removing API key requirements
- Add --config-file flag to pyrit_backend CLI
- Use PyRIT configuration loader in FrontendCore and pyrit_backend
- Update AIRTTargetInitializer with new target types

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

- Add conversation_stats model and attack_result extensions
- Add get_attack_results with filtering by harm categories, labels, attack type, and converter types to memory interface
- Implement SQLite-specific JSON filtering for attack results
- Add memory_models field for targeted_harm_categories
- Add prompt_metadata support to openai image/video/response targets
- Fix missing return statements in SQLite harm_category and label filters

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

- Add attack CRUD routes with conversation management
- Add message sending with target dispatch and response handling
- Add attack mappers for domain-to-DTO conversion with signed blob URLs
- Add attack service with video remix support and piece persistence
- Expand target service and routes with registry-based target management
- Add version endpoint with database info

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

- Add attack-centric chat UI with multi-conversation support
- Add conversation panel with branching and message actions
- Add attack history view with filtering
- Add labels bar for attack metadata
- Add target configuration with create dialog
- Add message mapper utilities for backend/frontend translation
- Add video playback support with signed blob URLs
- Add InputBox with attachment support and auto-expand
- Update dev.py with --detach, logs, and process management
- Add e2e tests for chat, config, and flow scenarios

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Force-pushed 8f1a532 to 65a4182
pyrit/backend/routes/version.py (outdated)
db_name = None
if memory.engine.url.database:
    db_name = memory.engine.url.database.split("?")[0]
database_info = f"{db_type} ({db_name})" if db_name else db_type
Exposing memory.engine.url.database in a public /version response can leak sensitive deployment details (e.g., server/database names, file paths). Consider returning only the backend type (or a redacted form), and/or only including database identifiers when running in a trusted/development mode.
- db_name = None
- if memory.engine.url.database:
-     db_name = memory.engine.url.database.split("?")[0]
- database_info = f"{db_type} ({db_name})" if db_name else db_type
+ database_info = db_type
pyrit/backend/models/attacks.py (outdated)
conversation_id: str = Field(..., description="Unique conversation identifier")
message_count: int = Field(0, description="Number of messages in this conversation")
last_message_preview: Optional[str] = Field(None, description="Preview of the last message")
created_at: Optional[str] = Field(None, description="ISO timestamp of the first message")
created_at is modeled as Optional[str] here while other timestamps in the same API use datetime (e.g., AttackSummary.created_at/updated_at, CreateConversationResponse.created_at). Using datetime consistently improves schema clarity and avoids clients guessing formats; Pydantic will serialize it as ISO 8601 anyway.
- created_at: Optional[str] = Field(None, description="ISO timestamp of the first message")
+ created_at: Optional[datetime] = Field(None, description="ISO timestamp of the first message")
frontend/src/services/api.test.ts (outdated)
  }),
},
targetsApi: {
  listTargets: jest.fn(async (limit = 50, cursor?: string) => {
    const params: Record<string, string | number> = { limit };
    if (cursor) params.cursor = cursor;
    const response = await mockApiClient.get("/targets", { params });
    return response.data;
  }),
This test suite mocks the module under test (./api) and re-implements the production logic inside the mock, which means it won’t catch regressions in frontend/src/services/api.ts. Prefer importing the real targetsApi/attacksApi and mocking only apiClient (e.g., via jest spies or an axios mock adapter) so the tests validate the actual implementation.
request = text_pieces[0]
messages = self._memory.get_conversation(conversation_id=request.conversation_id)

n_messages = len(messages)
if n_messages > 0:
    raise ValueError(
        "This target only supports a single turn conversation. "
        f"Received: {n_messages} messages which indicates a prior turn."
    )
This loads the entire conversation just to check whether any prior turn exists. A cheaper approach is to query only for existence/first row (or count with a limit), which avoids pulling full histories into memory when a user accidentally reuses a conversation ID.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
async def run_initializers_async(
    *,
    initializers: Optional[Sequence["PyRITInitializer"]] = None,
    initialization_scripts: Optional[Sequence[Union[str, pathlib.Path]]] = None,
) -> None:
    """
    Run initializers and initialization scripts without re-initializing memory or environment.

    This is useful when memory and environment are already set up (e.g. via
    :func:`initialize_pyrit_async`) and only the initializer step needs to run.

    Args:
        initializers: Optional sequence of PyRITInitializer instances to execute directly.
        initialization_scripts: Optional sequence of Python script paths containing
            PyRITInitializer classes.

    Raises:
        ValueError: If initializers are invalid or scripts cannot be loaded.
    """
    all_initializers = list(initializers) if initializers else []

    # Load additional initializers from scripts
    if initialization_scripts:
        script_initializers = _load_initializers_from_scripts(script_paths=initialization_scripts)
        all_initializers.extend(script_initializers)

    # Execute all initializers (sorted by execution_order)
    if all_initializers:
        await _execute_initializers_async(initializers=all_initializers)
run_initializers_async is intended to be callable independently of initialize_pyrit_async, but it currently doesn't (1) assert CentralMemory is initialized or (2) reset_default_values() before executing initializers. This can lead to initializers running against a partially-initialized environment or inheriting mutated global defaults when multiple scenarios/initializations run in the same process. Consider validating CentralMemory via get_memory_instance() and resetting defaults here (without reloading env files or recreating memory).
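A sketch of the proposed guard, with simplified stand-ins for CentralMemory and the defaults reset (the real equivalents live in pyrit.memory and pyrit.setup):

```python
import asyncio


class CentralMemory:
    """Simplified stand-in for pyrit.memory.CentralMemory."""

    _instance = None

    @classmethod
    def get_memory_instance(cls):
        if cls._instance is None:
            raise ValueError("CentralMemory is not initialized")
        return cls._instance


def reset_default_values() -> None:
    # Placeholder: clear mutated global defaults between runs.
    pass


async def run_initializers_async(*, initializers=()) -> None:
    # Fail fast if memory was never set up, and start from clean defaults,
    # without reloading env files or recreating the memory instance.
    CentralMemory.get_memory_instance()
    reset_default_values()
    for initializer in sorted(initializers, key=lambda i: i.execution_order):
        await initializer.initialize_async()
```

With this shape, calling the function before memory init raises immediately instead of letting initializers run against a partially-initialized environment.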
**OpenAI Responses Targets (OpenAIResponseTarget):**
- AZURE_OPENAI_GPT5_RESPONSES_* - Azure OpenAI GPT-5 Responses
- AZURE_OPENAI_GPT5_RESPONSES_* (high reasoning) - Azure OpenAI GPT-5 Responses with high reasoning effort
- PLATFORM_OPENAI_RESPONSES_* - Platform OpenAI Responses
- AZURE_OPENAI_RESPONSES_* - Azure OpenAI Responses
This docstring section enumerating supported env var prefixes is now inconsistent with the actual TARGET_CONFIGS: the video target config uses OPENAI_VIDEO_* vars (and registry_name was changed to openai_video), but the docstring later still refers to AZURE_OPENAI_VIDEO_*. Please update the docstring list to match the current env var names so users don’t misconfigure video targets.
def show_logs(*, follow: bool = False, lines: int = 50):
    """Show dev.py logs."""
    if not DEVPY_LOG_FILE.exists():
        print(f"No log file found at {DEVPY_LOG_FILE}")
        return
    if follow:
        subprocess.run(["tail", "-f", "-n", str(lines), str(DEVPY_LOG_FILE)])
    else:
        subprocess.run(["tail", "-n", str(lines), str(DEVPY_LOG_FILE)])
show_logs uses the external tail command, which isn’t available on Windows by default (this script claims to be cross-platform). Consider implementing log tailing in Python (read last N lines; optionally poll for follow) or add a Windows-specific branch (e.g., PowerShell Get-Content -Tail -Wait).
Suggested replacement:

def _print_last_log_lines(*, lines: int) -> None:
    """Print the last N lines from the dev.py log file."""
    with DEVPY_LOG_FILE.open("r") as log_file:
        all_lines = log_file.readlines()
        for line in all_lines[-lines:]:
            print(line, end="")


def _follow_logs(*, lines: int) -> None:
    """Print the last N lines and then follow the log file for new entries."""
    _print_last_log_lines(lines=lines)
    with DEVPY_LOG_FILE.open("r") as log_file:
        log_file.seek(0, os.SEEK_END)
        try:
            while True:
                line = log_file.readline()
                if line:
                    print(line, end="")
                else:
                    time.sleep(0.5)
        except KeyboardInterrupt:
            print()


def show_logs(*, follow: bool = False, lines: int = 50) -> None:
    """Show dev.py logs."""
    if not DEVPY_LOG_FILE.exists():
        print(f"No log file found at {DEVPY_LOG_FILE}")
        return
    if follow:
        _follow_logs(lines=lines)
    else:
        _print_last_log_lines(lines=lines)
const mockResponse = {
  data: {
    conversation_id: "conv-123",
    created_at: "2026-02-15T00:00:00Z",
  },
};
This test’s mocked CreateAttackResponse is missing the new required attack_result_id field. Keeping the mock aligned with the real API shape helps catch integration issues and avoids type drift (and potential TS compile errors if type-checking is enabled).
const result = await attacksApi.addMessage("ar-conv-123", {
  role: "user",
  pieces: [{ data_type: "text", original_value: "Hello" }],
  send: true,
});
attacksApi.addMessage now requires additional fields that are mandatory for the backend contract (notably target_conversation_id, and target_registry_name when send: true). These tests are currently constructing requests that the backend will reject and will also drift from the TS AddMessageRequest type. Update the request objects (and the expected apiClient.post payload) to include the required fields.
describe("attacksApi", () => {
  it("should create an attack", async () => {
    const mockResponse = {
      data: {
        conversation_id: "conv-123",
        created_at: "2026-02-15T00:00:00Z",
      },
    };
    (apiClient.post as jest.Mock).mockResolvedValueOnce(mockResponse);

    const result = await attacksApi.createAttack({
      target_registry_name: "test-target",
    });

    expect(apiClient.post).toHaveBeenCalledWith("/attacks", {
      target_registry_name: "test-target",
    });
    expect(result.conversation_id).toBe("conv-123");
  });
This test mock for attacksApi.createAttack doesn't include the now-required attack_result_id in the response shape, so the test no longer matches the real API contract. Update the mocked response and assertions to include attack_result_id (and ideally validate it is forwarded/returned).
it("should add a text message to an attack", async () => {
  const mockResponse = {
    data: {
      attack: { conversation_id: "conv-123", message_count: 2 },
      messages: { conversation_id: "conv-123", messages: [] },
    },
  };
  (apiClient.post as jest.Mock).mockResolvedValueOnce(mockResponse);

  const result = await attacksApi.addMessage("ar-conv-123", {
    role: "user",
    pieces: [{ data_type: "text", original_value: "Hello" }],
    send: true,
  });

  expect(apiClient.post).toHaveBeenCalledWith(
    "/attacks/ar-conv-123/messages",
    {
      role: "user",
      pieces: [{ data_type: "text", original_value: "Hello" }],
      send: true,
    }
  );
  expect(result.attack.conversation_id).toBe("conv-123");
AddMessageRequest now requires target_conversation_id (and typically target_registry_name when send: true), but these tests/mocks call attacksApi.addMessage without those fields. This makes the tests inconsistent with the real client wrapper + backend validation. Update the request objects and expected apiClient.post calls to include target_conversation_id (and target_registry_name when appropriate).
```python
from pyrit.backend.models.attacks import (
    AddMessageRequest,
    AddMessageResponse,
    AttackListResponse,
    AttackMessagesResponse,
    AttackOptionsResponse,
    AttackSummary,
    ConversationMessagesResponse,
    ConverterOptionsResponse,
    CreateAttackRequest,
    CreateAttackResponse,
    Message,
    MessagePiece,
    MessagePieceRequest,
    PrependedMessageRequest,
    Score,
    UpdateAttackRequest,
)
```
pyrit.backend.models.__init__ no longer re-exports PrependedMessageRequest (still part of CreateAttackRequest) and also doesn't export the new conversation-related DTOs. If external callers import DTOs from pyrit.backend.models, this is a breaking API change and makes the export surface inconsistent. Consider re-exporting these models (or keeping backwards-compatible aliases) in __all__.
| test("should list targets", async ({ request }) => { | ||
| const response = await request.get("/api/targets?count=50"); | ||
|
|
||
| expect(response.ok()).toBe(true); | ||
| const data = await response.json(); | ||
| expect(data).toHaveProperty("items"); | ||
| expect(Array.isArray(data.items)).toBe(true); | ||
| }); | ||
|
|
||
| test("should create and retrieve a target", async ({ request }) => { | ||
| const createPayload = { | ||
| target_type: "OpenAIChatTarget", | ||
| params: { | ||
| endpoint: "https://e2e-test.openai.azure.com", | ||
| model_name: "gpt-4o-e2e-test", | ||
| api_key: "e2e-test-key", | ||
| }, | ||
| }; | ||
|
|
||
| const createResp = await request.post("/api/targets", { data: createPayload }); | ||
| // The endpoint may not be implemented, may require different schema, or may |
This E2E test uses a request schema that doesn't match the backend routes/models: it passes count instead of limit for listing targets, and target_type instead of type in the create-target payload. As written, the create test will likely always skip due to a 422, reducing its value. Align the query params and payload keys with /api/targets (e.g., ?limit=... and { type, params }).
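A sketch of the aligned query string and payload; the `limit` param and `{ type, params }` key names follow this review comment and are not independently verified against the routes:

```typescript
// Query and payload aligned with the backend contract described in the review:
// listing uses `limit`, and creation uses `type` rather than `target_type`.
const listTargetsUrl = "/api/targets?limit=50";

const createTargetPayload = {
  type: "OpenAIChatTarget",
  params: {
    endpoint: "https://e2e-test.openai.azure.com",
    model_name: "gpt-4o-e2e-test",
    api_key: "e2e-test-key",
  },
};
```

With these shapes the create request should reach validation rather than being rejected up front, so the skip path only triggers for genuinely unimplemented backends.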
…ssibility Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
| it("should add a text message to an attack", async () => { | ||
| const mockResponse = { | ||
| data: { | ||
| attack: { conversation_id: "conv-123", message_count: 2 }, | ||
| messages: { conversation_id: "conv-123", messages: [] }, | ||
| }, | ||
| }; | ||
| (apiClient.post as jest.Mock).mockResolvedValueOnce(mockResponse); | ||
|
|
||
| const result = await attacksApi.addMessage("ar-conv-123", { | ||
| role: "user", | ||
| pieces: [{ data_type: "text", original_value: "Hello" }], | ||
| send: true, | ||
| }); | ||
|
|
||
| expect(apiClient.post).toHaveBeenCalledWith( | ||
| "/attacks/ar-conv-123/messages", | ||
| { | ||
| role: "user", | ||
| pieces: [{ data_type: "text", original_value: "Hello" }], | ||
| send: true, | ||
| } | ||
| ); | ||
| expect(result.attack.conversation_id).toBe("conv-123"); | ||
| }); | ||
|
|
||
| it("should add a message with image attachment", async () => { | ||
| const mockResponse = { | ||
| data: { | ||
| attack: { conversation_id: "conv-123", message_count: 2 }, | ||
| messages: { conversation_id: "conv-123", messages: [] }, | ||
| }, | ||
| }; | ||
| (apiClient.post as jest.Mock).mockResolvedValueOnce(mockResponse); | ||
|
|
||
| await attacksApi.addMessage("ar-conv-123", { | ||
| role: "user", | ||
| pieces: [ | ||
| { data_type: "text", original_value: "What is in this image?" }, | ||
| { | ||
| data_type: "image_path", | ||
| original_value: "base64encodeddata", | ||
| mime_type: "image/png", | ||
| }, | ||
| ], | ||
| send: true, | ||
| }); | ||
|
|
||
| expect(apiClient.post).toHaveBeenCalledWith( | ||
| "/attacks/ar-conv-123/messages", | ||
| expect.objectContaining({ | ||
| pieces: expect.arrayContaining([ | ||
| expect.objectContaining({ data_type: "image_path" }), | ||
| ]), | ||
| }) | ||
| ); | ||
| }); | ||
|
|
||
| it("should add a message with audio attachment", async () => { | ||
| const mockResponse = { | ||
| data: { | ||
| attack: { conversation_id: "conv-123", message_count: 2 }, | ||
| messages: { conversation_id: "conv-123", messages: [] }, | ||
| }, | ||
| }; | ||
| (apiClient.post as jest.Mock).mockResolvedValueOnce(mockResponse); | ||
|
|
||
| await attacksApi.addMessage("ar-conv-123", { | ||
| role: "user", | ||
| pieces: [ | ||
| { | ||
| data_type: "audio_path", | ||
| original_value: "base64audiodata", | ||
| mime_type: "audio/wav", | ||
| }, | ||
| ], | ||
| send: true, | ||
| }); | ||
|
|
||
| expect(apiClient.post).toHaveBeenCalledWith( | ||
| "/attacks/ar-conv-123/messages", | ||
| expect.objectContaining({ | ||
| pieces: [ | ||
| expect.objectContaining({ | ||
| data_type: "audio_path", | ||
| mime_type: "audio/wav", | ||
| }), | ||
| ], | ||
| }) | ||
| ); | ||
| }); | ||
|
|
||
| it("should add a message with video attachment", async () => { | ||
| const mockResponse = { | ||
| data: { | ||
| attack: { conversation_id: "conv-123", message_count: 2 }, | ||
| messages: { conversation_id: "conv-123", messages: [] }, | ||
| }, | ||
| }; | ||
| (apiClient.post as jest.Mock).mockResolvedValueOnce(mockResponse); | ||
|
|
||
| await attacksApi.addMessage("ar-conv-123", { | ||
| role: "user", | ||
| pieces: [ | ||
| { | ||
| data_type: "video_path", | ||
| original_value: "base64videodata", | ||
| mime_type: "video/mp4", | ||
| }, | ||
| ], | ||
| send: true, | ||
| }); | ||
|
|
||
| expect(apiClient.post).toHaveBeenCalledWith( | ||
| "/attacks/ar-conv-123/messages", | ||
| expect.objectContaining({ | ||
| pieces: [ | ||
| expect.objectContaining({ | ||
| data_type: "video_path", | ||
| mime_type: "video/mp4", | ||
| }), | ||
| ], | ||
| }) | ||
| ); | ||
| }); | ||
|
|
||
| it("should list attacks with filters", async () => { | ||
| const mockResponse = { | ||
| data: { | ||
| items: [], | ||
| pagination: { limit: 20, has_more: false }, | ||
| }, | ||
| }; | ||
| (apiClient.get as jest.Mock).mockResolvedValueOnce(mockResponse); | ||
|
|
||
| await attacksApi.listAttacks({ limit: 10, outcome: "success" }); | ||
|
|
||
| expect(apiClient.get).toHaveBeenCalledWith("/attacks", { | ||
| params: { limit: 10, outcome: "success" }, | ||
| paramsSerializer: { | ||
| indexes: null, | ||
| }, | ||
| }); | ||
| }); | ||
|
|
||
| it("should handle add message error", async () => { | ||
| const error = new Error("Target not found"); | ||
| (apiClient.post as jest.Mock).mockRejectedValueOnce(error); | ||
|
|
||
| await expect( | ||
| attacksApi.addMessage("conv-123", { | ||
| role: "user", | ||
| pieces: [{ data_type: "text", original_value: "test" }], | ||
| send: true, | ||
| }) | ||
| ).rejects.toThrow("Target not found"); | ||
| }); | ||
| }); |
attacksApi.addMessage now requires target_conversation_id (and typically target_registry_name when send: true). Several calls in this test file omit the required target_conversation_id, which will fail TypeScript compilation (and also doesn’t match the backend contract). Update the test requests to include target_conversation_id (and target_registry_name where applicable).
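One way to keep the whole suite aligned is a small test helper that decorates a base request with the required fields. This helper is hypothetical (it does not exist in the PR), and the field names follow this review comment:

```typescript
// Hypothetical helper: decorate a base request with the now-required fields so
// every addMessage call in the tests satisfies the stricter type.
interface BaseMessageRequest {
  role: string;
  pieces: { data_type: string; original_value: string }[];
  send: boolean;
}

function withTargetFields<T extends BaseMessageRequest>(
  req: T,
  targetConversationId: string,
  targetRegistryName?: string,
) {
  return {
    ...req,
    target_conversation_id: targetConversationId,
    // Only attach the registry name when the message will actually be sent.
    ...(req.send && targetRegistryName
      ? { target_registry_name: targetRegistryName }
      : {}),
  };
}
```

Each test would then pass `withTargetFields(baseRequest, "tc-1", "reg-1")` both to `attacksApi.addMessage` and to the expected `apiClient.post` payload.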
| test("should create and retrieve a target", async ({ request }) => { | ||
| const createPayload = { | ||
| type: "OpenAIChatTarget", | ||
| params: { | ||
| endpoint: "https://e2e-test.openai.azure.com", | ||
| model_name: "gpt-4o-e2e-test", | ||
| api_key: "e2e-test-key", | ||
| }, | ||
| }; | ||
|
|
||
| const createResp = await request.post("/api/targets", { data: createPayload }); | ||
| // The endpoint may not be implemented, may require different schema, or may | ||
| // return a validation error. Skip when the backend cannot handle the request. | ||
| if (!createResp.ok()) { | ||
| test.skip(true, `POST /api/targets returned ${createResp.status()} — skipping`); | ||
| return; | ||
| } | ||
|
|
||
| const created = await createResp.json(); | ||
| expect(created).toHaveProperty("target_registry_name"); | ||
| expect(created.type).toBe("OpenAIChatTarget"); | ||
|
|
||
| // Retrieve via list and check it's there | ||
| const listResp = await request.get("/api/targets?limit=200"); | ||
| expect(listResp.ok()).toBe(true); | ||
| const list = await listResp.json(); | ||
| const found = list.items.find( | ||
| (t: { target_registry_name: string }) => | ||
| t.target_registry_name === created.target_registry_name, | ||
| ); | ||
| expect(found).toBeDefined(); | ||
| }); |
The create-target response DTO uses target_type, not type. This assertion (expect(created.type).toBe(...)) will fail whenever the endpoint actually returns 200/201. Assert on created.target_type instead (and keep the rest of the checks consistent with TargetInstance).
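A sketch of the corrected assertion target; the `TargetInstance` interface here is a simplified stand-in for the real DTO, with the `target_type` field name taken from this comment:

```typescript
// Assumed response shape per the review: the DTO exposes target_type, not type.
interface TargetInstance {
  target_registry_name: string;
  target_type: string;
}

const created: TargetInstance = {
  target_registry_name: "openai-chat-1",
  target_type: "OpenAIChatTarget",
};

// The test assertion would then read:
//   expect(created.target_type).toBe("OpenAIChatTarget");
```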
```diff
 def test_get_attack_results_by_attack_type(sqlite_instance: MemoryInterface):
     """Test filtering attack results by attack_type matches class_name in JSON."""
     ar1 = _make_attack_result_with_identifier("conv_1", "CrescendoAttack")
     ar2 = _make_attack_result_with_identifier("conv_2", "ManualAttack")
     ar3 = _make_attack_result_with_identifier("conv_3", "CrescendoAttack")
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1, ar2, ar3])

-    results = sqlite_instance.get_attack_results(attack_class="CrescendoAttack")
+    results = sqlite_instance.get_attack_results(attack_type="CrescendoAttack")
     assert len(results) == 2
     assert {r.conversation_id for r in results} == {"conv_1", "conv_3"}


-def test_get_attack_results_by_attack_class_no_match(sqlite_instance: MemoryInterface):
-    """Test that attack_class filter returns empty when nothing matches."""
+def test_get_attack_results_by_attack_type_no_match(sqlite_instance: MemoryInterface):
+    """Test that attack_type filter returns empty when nothing matches."""
     ar1 = _make_attack_result_with_identifier("conv_1", "CrescendoAttack")
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1])

-    results = sqlite_instance.get_attack_results(attack_class="NonExistentAttack")
+    results = sqlite_instance.get_attack_results(attack_type="NonExistentAttack")
     assert len(results) == 0


-def test_get_attack_results_by_attack_class_case_sensitive(sqlite_instance: MemoryInterface):
-    """Test that attack_class filter is case-sensitive (exact match)."""
+def test_get_attack_results_by_attack_type_case_sensitive(sqlite_instance: MemoryInterface):
+    """Test that attack_type filter is case-sensitive (exact match)."""
     ar1 = _make_attack_result_with_identifier("conv_1", "CrescendoAttack")
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1])

-    results = sqlite_instance.get_attack_results(attack_class="crescendoattack")
+    results = sqlite_instance.get_attack_results(attack_type="crescendoattack")
     assert len(results) == 0


-def test_get_attack_results_by_attack_class_no_identifier(sqlite_instance: MemoryInterface):
-    """Test that attacks with no attack_identifier (empty JSON) are excluded by attack_class filter."""
+def test_get_attack_results_by_attack_type_no_identifier(sqlite_instance: MemoryInterface):
+    """Test that attacks with no attack_identifier (empty JSON) are excluded by attack_type filter."""
     ar1 = create_attack_result("conv_1", 1)  # No attack_identifier → stored as {}
     ar2 = _make_attack_result_with_identifier("conv_2", "CrescendoAttack")
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1, ar2])

-    results = sqlite_instance.get_attack_results(attack_class="CrescendoAttack")
+    results = sqlite_instance.get_attack_results(attack_type="CrescendoAttack")
     assert len(results) == 1
     assert results[0].conversation_id == "conv_2"


-def test_get_attack_results_converter_classes_none_returns_all(sqlite_instance: MemoryInterface):
-    """Test that converter_classes=None (omitted) returns all attacks unfiltered."""
+def test_get_attack_results_converter_types_none_returns_all(sqlite_instance: MemoryInterface):
+    """Test that converter_types=None (omitted) returns all attacks unfiltered."""
     ar1 = _make_attack_result_with_identifier("conv_1", "Attack", ["Base64Converter"])
     ar2 = _make_attack_result_with_identifier("conv_2", "Attack")  # No converters (None)
     ar3 = create_attack_result("conv_3", 3)  # No identifier at all
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1, ar2, ar3])

-    results = sqlite_instance.get_attack_results(converter_classes=None)
+    results = sqlite_instance.get_attack_results(converter_types=None)
     assert len(results) == 3
```
These tests were updated to call MemoryInterface.get_attack_results(attack_type=..., converter_types=...), but the actual MemoryInterface.get_attack_results signature still uses attack_class and converter_classes (see pyrit/memory/memory_interface.py around the method definition). As written, this will raise TypeError and fail the suite; update the tests to use the correct parameter names (or rename the production API consistently if that’s the intent).
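The failure mode can be illustrated with a small stand-in class that mirrors the current keyword names; `FakeMemory` is purely illustrative and not the real memory class:

```python
# Stand-in mirroring the current MemoryInterface keyword names
# (attack_class / converter_classes). Calling with the renamed keywords the
# tests use (attack_type / converter_types) raises TypeError, which is the
# failure the review describes.
class FakeMemory:
    def get_attack_results(self, *, attack_class=None, converter_classes=None):
        # The real implementation filters stored AttackResult rows; here we
        # just echo which filters were accepted.
        return {"attack_class": attack_class, "converter_classes": converter_classes}


mem = FakeMemory()
ok = mem.get_attack_results(attack_class="CrescendoAttack")  # matches the signature

try:
    mem.get_attack_results(attack_type="CrescendoAttack")  # unexpected keyword
    renamed_call_failed = False
except TypeError:
    renamed_call_failed = True
```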
```diff
 def test_get_unique_attack_type_names_empty(sqlite_instance: MemoryInterface):
     """Test that no attacks returns empty list."""
-    result = sqlite_instance.get_unique_attack_class_names()
+    result = sqlite_instance.get_unique_attack_type_names()
     assert result == []


-def test_get_unique_attack_class_names_sorted_unique(sqlite_instance: MemoryInterface):
-    """Test that unique class names are returned sorted, with duplicates removed."""
+def test_get_unique_attack_type_names_sorted_unique(sqlite_instance: MemoryInterface):
+    """Test that unique type names are returned sorted, with duplicates removed."""
     ar1 = _make_attack_result_with_identifier("conv_1", "CrescendoAttack")
     ar2 = _make_attack_result_with_identifier("conv_2", "ManualAttack")
     ar3 = _make_attack_result_with_identifier("conv_3", "CrescendoAttack")
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1, ar2, ar3])

-    result = sqlite_instance.get_unique_attack_class_names()
+    result = sqlite_instance.get_unique_attack_type_names()
     assert result == ["CrescendoAttack", "ManualAttack"]


-def test_get_unique_attack_class_names_skips_empty_identifier(sqlite_instance: MemoryInterface):
+def test_get_unique_attack_type_names_skips_empty_identifier(sqlite_instance: MemoryInterface):
     """Test that attacks with empty attack_identifier (no class_name) are excluded."""
     ar_no_id = create_attack_result("conv_1", 1)  # No attack_identifier → stored as {}
     ar_with_id = _make_attack_result_with_identifier("conv_2", "CrescendoAttack")
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar_no_id, ar_with_id])

-    result = sqlite_instance.get_unique_attack_class_names()
+    result = sqlite_instance.get_unique_attack_type_names()
     assert result == ["CrescendoAttack"]


-def test_get_unique_converter_class_names_empty(sqlite_instance: MemoryInterface):
+def test_get_unique_converter_type_names_empty(sqlite_instance: MemoryInterface):
     """Test that no attacks returns empty list."""
-    result = sqlite_instance.get_unique_converter_class_names()
+    result = sqlite_instance.get_unique_converter_type_names()
     assert result == []


-def test_get_unique_converter_class_names_sorted_unique(sqlite_instance: MemoryInterface):
-    """Test that unique converter class names are returned sorted, with duplicates removed."""
+def test_get_unique_converter_type_names_sorted_unique(sqlite_instance: MemoryInterface):
+    """Test that unique converter type names are returned sorted, with duplicates removed."""
     ar1 = _make_attack_result_with_identifier("conv_1", "Attack", ["Base64Converter", "ROT13Converter"])
     ar2 = _make_attack_result_with_identifier("conv_2", "Attack", ["Base64Converter"])
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar1, ar2])

-    result = sqlite_instance.get_unique_converter_class_names()
+    result = sqlite_instance.get_unique_converter_type_names()
     assert result == ["Base64Converter", "ROT13Converter"]


-def test_get_unique_converter_class_names_skips_no_converters(sqlite_instance: MemoryInterface):
+def test_get_unique_converter_type_names_skips_no_converters(sqlite_instance: MemoryInterface):
     """Test that attacks with no converters don't contribute names."""
     ar_no_conv = _make_attack_result_with_identifier("conv_1", "Attack")  # No converters
     ar_with_conv = _make_attack_result_with_identifier("conv_2", "Attack", ["Base64Converter"])
     ar_empty_id = create_attack_result("conv_3", 3)  # Empty attack_identifier
     sqlite_instance.add_attack_results_to_memory(attack_results=[ar_no_conv, ar_with_conv, ar_empty_id])

-    result = sqlite_instance.get_unique_converter_class_names()
+    result = sqlite_instance.get_unique_converter_type_names()
```
get_unique_attack_type_names() / get_unique_converter_type_names() are used here, but the memory layer still exposes get_unique_attack_class_names() / get_unique_converter_class_names() (e.g., on SQLite/AzureSQL memory). Unless the underlying API was renamed everywhere, these tests will error with AttributeError. Update the tests to use the existing method names or update the production interface consistently.
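A quick illustration of the AttributeError risk with a stub exposing only the current method names; the stub mirrors this comment and is not the real memory class:

```python
# Stub exposing only the existing method names on the memory layer.
class FakeMemory:
    def get_unique_attack_class_names(self):
        return ["CrescendoAttack"]

    def get_unique_converter_class_names(self):
        return ["Base64Converter"]


mem = FakeMemory()
attack_names = mem.get_unique_attack_class_names()  # works today

# The renamed methods the tests call do not exist, so those calls would raise
# AttributeError at runtime.
renamed_method_missing = not hasattr(mem, "get_unique_attack_type_names")
```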
```diff
     messages = await service.get_conversation_messages_async(
         attack_result_id=attack_result_id,
         conversation_id=conversation_id,
     )
     if not messages:
         raise HTTPException(
             status_code=status.HTTP_404_NOT_FOUND,
-            detail=f"Attack '{conversation_id}' not found",
+            detail=f"Attack '{attack_result_id}' not found",
         )

     return messages
```
get_conversation_messages_async can raise ValueError when the provided conversation_id is not part of the attack. This route doesn’t catch that exception, so an invalid conversation_id will currently bubble up as a 500 instead of a 400/404 ProblemDetail. Catch ValueError from the service and translate it to an HTTP 400 (or 404) with a clear message.
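A minimal sketch of the suggested translation. `HTTPException` is a local stand-in so the sketch runs without FastAPI installed, and the route/service names are illustrative, not the real handlers:

```python
import asyncio


# Local stand-in for fastapi.HTTPException so the sketch is self-contained.
class HTTPException(Exception):
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail


async def get_conversation_messages_route(service, attack_result_id: str, conversation_id: str):
    try:
        messages = await service.get_conversation_messages_async(
            attack_result_id=attack_result_id,
            conversation_id=conversation_id,
        )
    except ValueError as exc:
        # Invalid conversation_id for this attack: a client error, not a 500.
        raise HTTPException(status_code=400, detail=str(exc))
    if not messages:
        raise HTTPException(status_code=404, detail=f"Attack '{attack_result_id}' not found")
    return messages


# Fake service that rejects a conversation_id not belonging to the attack.
class FakeService:
    async def get_conversation_messages_async(self, *, attack_result_id, conversation_id):
        raise ValueError(
            f"Conversation '{conversation_id}' is not part of attack '{attack_result_id}'"
        )


async def _demo():
    try:
        await get_conversation_messages_route(FakeService(), "ar-1", "bad-conv")
    except HTTPException as exc:
        return exc.status_code
    return None


status_code = asyncio.run(_demo())
```

In the real route the `except ValueError` branch would raise the FastAPI `HTTPException` with `status.HTTP_400_BAD_REQUEST` so the error surfaces as a ProblemDetail rather than a 500.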
Add target management and real attack execution to the PyRIT frontend, replacing the echo stub with live backend communication.
Backend:
CLI:
Frontend:
Tests: