Massive UX overhaul, BYOK support, and interactive fact-checking chat #148
khushal1512 wants to merge 24 commits into AOSSIE-Org:main from
Conversation
…ython >= 3.12 (fix): removed incorrect Pinecone client init in get_rag.py (fix): centralized the Groq model setting in llm_config.py
…arch Tool (fix): implemented new fact-check subgraph (enhancement): sentiment analysis and fact check run in parallel: clean_text → extract_claims → plan_searches → execute_searches → verify_facts
…odes, i.e. the sentiment node and the fact-check node (chore): /process route awaits the LangGraph build/compile
- Updated chunk_rag_data.py to support both the new (claim, status) and old (original_claim, verdict) key formats from the fact-checker.
- Added logic to correctly parse a perspective whether it is a Pydantic model or a dict.
- Implemented skipping of malformed facts instead of raising ValueError.
- Ensured compatibility with the new parallel DuckDuckGo fact-checking workflow.
- Updated the 'PerspectiveOutput' Pydantic model to handle reasoning as a list for claims
No actionable comments were generated in the recent review.
📝 Walkthrough
Adds provider-driven LLM configuration, a LangGraph-backed stateful chat with persistent memory, and a modular async fact-check pipeline; refactors the LangGraph workflow and pipeline orchestration, updates backend env/config and the dependency set, rewrites chunking/vectorization, and overhauls the frontend into a simplified landing + unified perspective page while removing many UI primitives.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant API as Backend API
    participant LangGraph
    participant LLM
    participant Search as DDGS
    participant Storage as Pinecone
    participant Memory as MemorySaver
    User->>Frontend: submit URL / chat message (provider)
    Frontend->>API: POST /process or /api/chat (url/message, provider, thread_id?)
    API->>LangGraph: ainvoke(workflow, config{thread_id, provider})
    LangGraph->>LLM: get_llm(provider) -> model call (perspective/judge/extract)
    LangGraph->>Search: execute web searches (parallel) -> results
    LangGraph->>LLM: verify facts using evidence
    LangGraph->>Storage: chunk_rag_data -> embed -> upsert vectors (Pinecone)
    LangGraph->>Memory: MemorySaver.checkpoint(save state)
    LangGraph-->>API: return workflow result (perspective, facts, score, thread_id)
    API-->>Frontend: JSON response
    Frontend-->>User: render perspective and enable chat
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 4
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backend/app/utils/fact_check_utils.py (1)
14-17: ⚠️ Potential issue | 🟡 Minor
Stale docstring: Still references "Google search" but implementation uses DuckDuckGo.
The PR summary indicates Google Search was replaced with DuckDuckGo. Update the docstring to reflect the current implementation.
Proposed fix
```diff
 2. Web Search:
-    - For each extracted claim, executes a Google search via `search_google` to find
+    - For each extracted claim, executes a web search via `search_google` to find
      relevant supporting or refuting sources.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/utils/fact_check_utils.py` around lines 14 - 17, The docstring in fact_check_utils.py still says "Google search" while the code uses DuckDuckGo; update the docstring (the Web Search step) to reference DuckDuckGo instead of Google and, if it names a helper, rename or reference the actual function used (e.g., search_duckduckgo) and clarify that it stores the top DuckDuckGo result with the claim so implementation and docs match.
README.md (1)
166-170: ⚠️ Potential issue | 🟠 Major
Document the current BYOK env contract in Backend Setup.
Line 169 still documents a Google `SEARCH_KEY`, while provider routing now relies on Groq/Gemini env variables. Missing Gemini/model env keys in docs can cause runtime failures when users pick Gemini.
📄 Suggested README env block update
```diff
 GROQ_API_KEY= <groq_api_key>
+GROQ_MODEL_NAME= <groq_model_name>      # optional
+GEMINI_API_KEY= <gemini_api_key>        # required for Gemini
+GEMINI_MODEL_NAME= <gemini_model_name>  # optional
 PINECONE_API_KEY = <your_pinecone_API_KEY>
 PORT = 8000
-SEARCH_KEY = <your_Google_custom_search_engine_API_key>
 HF_TOKEN = <your_huggingface_access_token>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 166 - 170, Update the README Backend Setup env block to reflect the current BYOK contract: remove the deprecated SEARCH_KEY entry and instead document the Groq/Gemini keys required at runtime by adding GEMINI_API_KEY and GEMINI_MODEL (or equivalent provider/model variables) alongside the existing GROQ_API_KEY, PINECONE_API_KEY, HF_TOKEN and PORT entries; ensure variable names match the code's provider routing (e.g., references to SEARCH_KEY are replaced with GEMINI_API_KEY/GEMINI_MODEL) and show example values or placeholders so users don't miss required keys when selecting Gemini.
backend/app/modules/scraper/extractor.py (1)
29-43: ⚠️ Potential issue | 🟠 Major
Guard Trafilatura JSON parsing so fallback extraction still runs.
If `json.loads(result)` fails, `extract()` exits before trying `extract_with_newspaper` and `extract_with_bs4`.
Proposed fix
```diff
 def extract_with_trafilatura(self) -> dict:
-    downloaded = trafilatura.fetch_url(self.url)
-    if not downloaded:
-        return {}
-    result = trafilatura.extract(
-        downloaded,
-        no_fallback=True,
-        include_comments=False,
-        include_tables=False,
-        favor_recall=True,
-        output_format="json",
-    )
-    if result:
-        return json.loads(result)
-    return {}
+    try:
+        downloaded = trafilatura.fetch_url(self.url)
+        if not downloaded:
+            return {}
+        result = trafilatura.extract(
+            downloaded,
+            no_fallback=True,
+            include_comments=False,
+            include_tables=False,
+            favor_recall=True,
+            output_format="json",
+        )
+        if not result:
+            return {}
+        return json.loads(result)
+    except (json.JSONDecodeError, TypeError):
+        logging.exception("Trafilatura returned malformed JSON")
+        return {}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/scraper/extractor.py` around lines 29 - 43, The Trafilatura extraction in extract_with_trafilatura currently calls json.loads(result) without guarding parse errors, which prevents later fallbacks (extract_with_newspaper and extract_with_bs4) from running; wrap the json.loads(result) call in a try/except catching json.JSONDecodeError (and a generic Exception as a fallback), log or ignore the parse error, and return an empty dict on failure so the caller can proceed to extract_with_newspaper and extract_with_bs4 instead of aborting.
🟠 Major comments (19)
frontend/lib/config.ts-5-6 (1)
5-6: ⚠️ Potential issue | 🟠 Major
Fail fast when `NEXT_PUBLIC_API_URL` is missing outside development.
Using a hard localhost fallback here can silently misroute requests in staging/production instead of surfacing a config error early.
Proposed fix
```diff
+const LOCAL_DEV_API_URL = "http://127.0.0.1:5555";
+const configuredApiUrl = process.env.NEXT_PUBLIC_API_URL?.trim();
+
+if (process.env.NODE_ENV !== "development" && !configuredApiUrl) {
+  throw new Error("NEXT_PUBLIC_API_URL must be set outside development.");
+}
+
 export const API_BASE_URL =
-  process.env.NEXT_PUBLIC_API_URL || "http://127.0.0.1:5555";
+  configuredApiUrl || LOCAL_DEV_API_URL;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/config.ts` around lines 5 - 6, The current API_BASE_URL constant falls back to "http://127.0.0.1:5555" which can silently misroute production/staging traffic; change API_BASE_URL initialization to fail fast when NEXT_PUBLIC_API_URL is not provided outside development: check process.env.NEXT_PUBLIC_API_URL and process.env.NODE_ENV (or a similar runtime env check) inside the module where API_BASE_URL is defined, and if NEXT_PUBLIC_API_URL is missing and NODE_ENV !== 'development' throw a clear error (or throw/console.error + process.exit) so the app won't start with a localhost fallback; keep the same exported const name API_BASE_URL.
backend/pyproject.toml-9-9 (1)
9-9: ⚠️ Potential issue | 🟠 Major
Remove duplicate deprecated `dotenv` package; only `python-dotenv` is needed.
The `dotenv>=0.9.9` and `python-dotenv>=1.1.0` dependencies conflict. `dotenv` is a deprecated package on PyPI, while `python-dotenv` is the actively maintained standard. Both packages provide a `dotenv` module, causing potential package-resolution confusion. The codebase imports only from `python-dotenv` (`from dotenv import load_dotenv`). Remove the deprecated `dotenv` dependency and keep only `python-dotenv`.
Proposed fix
```diff
 dependencies = [
     "bs4>=0.0.2",
-    "dotenv>=0.9.9",
     "duckduckgo-search>=8.0.4",
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/pyproject.toml` at line 9, Remove the deprecated duplicate dependency "dotenv>=0.9.9" from pyproject.toml so only "python-dotenv>=1.1.0" remains; the codebase imports from python-dotenv via "from dotenv import load_dotenv", so keep the "python-dotenv" entry and delete the "dotenv" entry to avoid package resolution conflicts.
backend/main.py-54-54 (1)
54-54: ⚠️ Potential issue | 🟠 Major
Default binding to `0.0.0.0` is too permissive for local/dev execution.
At Line 54, this exposes the service on all interfaces by default. Prefer an env-configured host with a safer default (`127.0.0.1`).
🔒 Suggested hardening
```diff
-    uvicorn.run(app, host="0.0.0.0", port=port)
+    host = os.environ.get("HOST", "127.0.0.1")
+    uvicorn.run(app, host=host, port=port)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/main.py` at line 54, The app is currently bound to 0.0.0.0 via the uvicorn.run call (uvicorn.run(app, host="0.0.0.0", port=port)); change this to read a host from environment/config (e.g., BIND_HOST or HOST) and default that value to "127.0.0.1" for local/dev, then pass that variable into uvicorn.run instead of the hardcoded "0.0.0.0" so the default is loopback but can be overridden via env for other environments.
backend/app/llm_config.py-17-33 (1)
17-33: ⚠️ Potential issue | 🟠 Major
Unsupported provider values silently default to Groq.
Any typo/case mismatch currently falls through to the Groq path. Reject unknown providers explicitly to avoid hidden misrouting.
🔧 Suggested provider validation
```diff
 def get_llm(provider: str = "groq", temperature: float = 0.7):
 @@
-    if provider == "gemini":
+    provider_normalized = (provider or "").strip().lower()
+
+    if provider_normalized == "gemini":
 @@
-    # Default → Groq
-    from langchain_groq import ChatGroq
+    elif provider_normalized == "groq":
+        from langchain_groq import ChatGroq
 @@
-    api_key = os.getenv("GROQ_API_KEY")
-    model_name = os.getenv("GROQ_MODEL_NAME", "llama-3.3-70b-versatile")
-    if not api_key:
-        raise ValueError("GROQ_API_KEY environment variable is required for Groq")
-    return ChatGroq(
-        model=model_name,
-        api_key=api_key,
-        temperature=temperature,
-    )
+        api_key = os.getenv("GROQ_API_KEY")
+        model_name = os.getenv("GROQ_MODEL_NAME", "llama-3.3-70b-versatile")
+        if not api_key:
+            raise ValueError("GROQ_API_KEY environment variable is required for Groq")
+        return ChatGroq(
+            model=model_name,
+            api_key=api_key,
+            temperature=temperature,
+        )
+    else:
+        raise ValueError(f"Unsupported provider '{provider}'. Expected 'groq' or 'gemini'.")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/llm_config.py` around lines 17 - 33, The code silently falls through to ChatGroq when provider is misspelled or has wrong casing; update the provider validation in llm_config.py to explicitly check allowed providers (e.g., "gemini" and "groq"), normalize casing (lower()) and raise a clear ValueError for unknown providers instead of defaulting to ChatGroq; locate the logic around the provider variable and the branches that construct ChatGoogleGenerativeAI and ChatGroq and replace the implicit default with an explicit conditional or a provider->factory mapping that throws on unrecognized keys.
frontend/components/perspective/RightSidebar.tsx-80-85 (1)
80-85: ⚠️ Potential issue | 🟠 Major
Add ARIA labels/state for sidebar and accordion controls.
Icon-only and disclosure buttons are missing explicit assistive semantics (`aria-label`/`aria-expanded`), reducing accessibility for screen-reader users.
♿ Suggested accessibility attributes
```diff
 <button
   onClick={onToggle}
+  aria-label={isOpen ? "Collapse right sidebar" : "Expand right sidebar"}
   className="p-2 text-gray-400 hover:text-white transition-colors rounded-lg hover:bg-white/5 mb-4 self-start"
 >
 ...
 <button
   onClick={() => toggleSection("bias")}
+  aria-expanded={sections.bias}
   className="w-full flex items-center justify-between mb-4 hover:text-gray-200 transition-colors"
 >
 ...
 <button
   onClick={onToggle}
+  aria-expanded={isOpen}
   className="w-full flex items-center justify-between p-3 hover:bg-white/5 transition-colors text-left"
 >
```

Also applies to: 91-99, 209-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/perspective/RightSidebar.tsx` around lines 80 - 85, The sidebar toggle button (onToggle / isOpen in RightSidebar.tsx) and the other icon-only/disclosure buttons (the accordion header buttons around the ranges you noted) lack ARIA semantics; add explicit attributes: give the sidebar toggle an aria-label (e.g., "Close sidebar" / "Open sidebar" or a neutral "Toggle sidebar") and set aria-expanded={isOpen} and aria-controls pointing to the sidebar region id; for each accordion header button set aria-expanded={expandedState}, provide an aria-controls attribute that references the corresponding panel id, and ensure the collapsible panel has an id and role="region" (or aria-hidden toggled) so screen readers can associate the controls and panels.
frontend/hooks/use-chat.ts-9-12 (1)
9-12: ⚠️ Potential issue | 🟠 Major
Reset local messages when `threadId` changes to avoid cross-thread mixing.
Current state persists messages across thread switches, so users can see old conversation content under a new backend thread.
🧼 Suggested thread-bound message reset
```diff
-import { useState, useCallback } from "react";
+import { useState, useCallback, useEffect } from "react";
 ...
 export function useChat(threadId: string | undefined) {
   const [messages, setMessages] = useState<ChatMessage[]>([]);
   const [sending, setSending] = useState(false);
+
+  useEffect(() => {
+    setMessages([]);
+  }, [threadId]);
```

Also applies to: 64-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/hooks/use-chat.ts` around lines 9 - 12, The hook useChat currently preserves the messages state across different threadId values, causing conversations to bleed between threads; add a useEffect that watches the threadId parameter and calls setMessages([]) (and optionally setSending(false)) whenever threadId changes to reset local state for the new thread; locate the messages/setMessages state in useChat and add this effect near the top of the hook (also apply the same reset logic where similar state is managed around the code referenced at lines 64-67).
backend/app/modules/langgraph_nodes/store_and_send.py-30-35 (1)
30-35: ⚠️ Potential issue | 🟠 Major
Chunking errors are logged but still returned as success.
Lines 30–35 record `chunk_error`, then the `not chunks` path returns `"status": "success"`. This masks data-loss failures.
🧭 Suggested explicit error propagation
```diff
 chunks, chunk_error = chunk_rag_data(state)
 if chunk_error:
     logger.error(f"Chunking returned error: {chunk_error}")
+    return {
+        **state,
+        "status": "error",
+        "error_from": "store_and_send",
+        "message": chunk_error,
+    }
 if not chunks:
     logger.warning("No chunks generated. Skipping vector storage.")
     return {**state, "status": "success"}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/langgraph_nodes/store_and_send.py` around lines 30 - 35, The code logs chunk_error but still treats empty chunks as success; update the handler in the store_and_send function (the block using variables chunk_error and chunks and logger) to propagate failures: if chunk_error is truthy, return state with "status": "error" (or set an appropriate failure status) and include chunk_error details in the returned payload (and keep the logger.error call); only return "status": "success" when chunks exist and the subsequent storage/send operations complete successfully.
backend/app/modules/langgraph_nodes/judge.py-43-45 (1)
43-45: ⚠️ Potential issue | 🟠 Major
Score extraction can select the wrong number from the model output.
Line 44 uses the first matched integer. Responses like "rate 0-100… score: 67" can be parsed as `0` instead of `67`.
🔢 Suggested robust score parsing
```diff
-    numbers = re.findall(r"\d+", content)
-    score = int(numbers[0]) if numbers else 50
+    numbers = [int(n) for n in re.findall(r"\b\d{1,3}\b", content)]
+    candidates = [n for n in numbers if 0 <= n <= 100]
+    score = candidates[-1] if candidates else 50
     score = max(0, min(100, score))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/langgraph_nodes/judge.py` around lines 43 - 45, The current extraction uses the first regex match and can pick the wrong integer; in the block that defines numbers = re.findall(r"\d+", content) and score = int(numbers[0]) if numbers else 50, change the selection logic to: first try a targeted regex that captures a number following keywords like "score", "rating", "rate" (e.g. r'(?:score|rating|rate)\D*(\d{1,3})'), if that yields a valid integer use it, otherwise convert all matches in numbers to ints, filter to 0-100, prefer the last valid match (or the only valid match), and then clamp to 0-100 with the same max/min fallback to default 50; update references to content, numbers, and score accordingly.
backend/app/modules/langgraph_nodes/store_and_send.py-47-50 (1)
47-50: ⚠️ Potential issue | 🟠 Major
Preserve prior pipeline state in the exception response.
Lines 47–50 currently drop the existing state. If storage fails late, frontend can lose already-computed analysis output.
🧩 Suggested exception return shape
```diff
 return {
+    **state,
     "status": "error",
     "error_from": "store_and_send",
     "message": str(e),
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/langgraph_nodes/store_and_send.py` around lines 47 - 50, The exception response in store_and_send currently returns only error fields and drops the prior pipeline state; update the error return dict inside the exception handler in the store_and_send function to include the previously computed pipeline state (e.g., add a "pipeline_state" or "state" key) alongside "status", "error_from": "store_and_send", and "message": str(e); ensure the added key references the actual variable that holds the computed analysis/output (for example result, state, or pipeline_state) so the frontend receives already-computed data when storage fails.
backend/app/modules/vector_store/chunk_rag_data.py-61-64 (1)
61-64: ⚠️ Potential issue | 🟠 Major
Guard fact entry shape before calling `.get()` to prevent full chunk loss.
Line 62 assumes each `fact` is a dict. If an entry is a string/object, this throws and the catch block returns no chunks at all.
🛡️ Suggested defensive parsing for fact entries
```diff
-    for idx, fact in enumerate(state.get("facts", [])):
-        claim = fact.get("claim", "")
-        reason = fact.get("reason", "")
-        status = fact.get("status", "Unknown")
+    raw_facts = state.get("facts", [])
+    if not isinstance(raw_facts, list):
+        raw_facts = []
+
+    for idx, fact in enumerate(raw_facts):
+        if hasattr(fact, "model_dump"):
+            fact = fact.model_dump()
+        elif hasattr(fact, "dict"):
+            fact = fact.dict()
+        if not isinstance(fact, dict):
+            logger.debug(f"Skipping invalid fact at index {idx}: {type(fact)}")
+            continue
+
+        claim = fact.get("claim", "")
+        reason = fact.get("reason", "")
+        status = fact.get("status", "Unknown")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/vector_store/chunk_rag_data.py` around lines 61 - 64, The loop over state.get("facts", []) assumes each fact is a dict and calls fact.get(...), which throws if a fact is a non-dict and causes all chunks to be lost; update the for idx, fact in enumerate(state.get("facts", [])) block to defensively handle non-dict entries by checking isinstance(fact, dict) before calling fact.get, and for non-dict values either skip the entry or normalize it (e.g., treat the whole value as claim or convert to {'claim': str(fact)}) so that claim = fact.get("claim", ""), reason = fact.get("reason", ""), status = fact.get("status", "Unknown") are only invoked on a dict and you avoid raising exceptions that abort chunk creation.
backend/app/modules/langgraph_nodes/judge.py-22-24 (1)
22-24: ⚠️ Potential issue | 🟠 Major
Don't mark missing perspective input as a successful judgment.
Line 24 returns success with score `0` when text is absent. That hides upstream failures and contaminates downstream scoring.
✅ Suggested failure-path response
```diff
 if not text:
     logger.warning("No perspective text found to judge.")
-    return {**state, "score": 0, "status": "success"}
+    return {
+        **state,
+        "score": 50,
+        "status": "error",
+        "error_from": "judge",
+        "message": "Missing perspective text",
+    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/langgraph_nodes/judge.py` around lines 22 - 24, The current early-return marks a missing perspective text as a successful judgment; instead return a clear failure state so upstream can detect it: inside the same block that checks the variable text (and uses logger.warning), return an updated state that sets "status" to "failed" (or "error"), set "score" to None or omit it, and include an error field such as "error": "missing_perspective_text" (keep using the existing state dict merging pattern {**state, ...}); update any callers if they expect numeric scores to handle None/error.
frontend/hooks/use-perspective.ts-60-61 (1)
60-61: ⚠️ Potential issue | 🟠 Major
Protect sessionStorage parsing from malformed JSON.
Lines 60–61 call `JSON.parse` directly. A single invalid value throws and aborts the hook's data-loading path.
🧯 Suggested safe parse helper
```diff
+  const safeParse = <T,>(raw: string | null): T | null => {
+    if (!raw) return null;
+    try {
+      return JSON.parse(raw) as T;
+    } catch {
+      return null;
+    }
+  };
 ...
-  if (storedAnalysis) setAnalysisData(JSON.parse(storedAnalysis));
-  if (storedBias) setBiasData(JSON.parse(storedBias));
+  const parsedAnalysis = safeParse<AnalysisData>(storedAnalysis);
+  const parsedBias = safeParse<BiasData>(storedBias);
+  if (parsedAnalysis) setAnalysisData(parsedAnalysis);
+  if (parsedBias) setBiasData(parsedBias);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/hooks/use-perspective.ts` around lines 60 - 61, The JSON.parse calls for storedAnalysis and storedBias in the use-perspective hook are unsafe and will throw on malformed sessionStorage; update the data-loading path (inside the usePerspective hook where setAnalysisData and setBiasData are called) to safely parse: create or use a safeParse helper that wraps JSON.parse in try/catch and returns null/undefined or a fallback on error, then call safeParse(storedAnalysis) and safeParse(storedBias) and only call setAnalysisData/setBiasData when parsing succeeds; ensure the helper is referenced where storedAnalysis/storedBias are handled so malformed JSON no longer aborts the hook.
frontend/app/perspective/page.tsx-33-33 (1)
33-33: ⚠️ Potential issue | 🟠 Major
Mobile menu toggle has no functional panel.
`mobileMenuOpen` is toggled (Line 85) but never used to render mobile navigation. On small screens, sidebar actions are effectively unreachable.
Proposed fix
```diff
 <div className="lg:hidden fixed top-0 left-0 right-0 z-50 bg-background-dark border-b border-white/10 px-4 py-3 flex items-center justify-between">
   <Link href="/" className="font-semibold text-xl tracking-tight text-white">
     perspective
   </Link>
   <button onClick={() => setMobileMenuOpen(!mobileMenuOpen)} className="p-2 text-white">
     {mobileMenuOpen ? <X className="w-6 h-6" /> : <Menu className="w-6 h-6" />}
   </button>
 </div>
+
+{mobileMenuOpen && (
+  <div className="lg:hidden fixed top-14 left-0 right-0 z-40 bg-[#15191E] border-b border-white/10 p-4 space-y-3">
+    <Link href="/" className="block text-white/90 hover:text-white">
+      New Article
+    </Link>
+    <button type="button" className="block text-white/70 hover:text-white">
+      Settings
+    </button>
+  </div>
+)}
```

Also applies to: 81-88
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/perspective/page.tsx` at line 33, The mobileMenuOpen state is toggled but never used to render a mobile navigation panel; update the page component to conditionally render a mobile nav panel (e.g., a Drawer or a responsive div) when mobileMenuOpen is true, move or duplicate the sidebar action buttons into that panel so they are reachable on small screens, and wire the close control to setMobileMenuOpen(false) (ensure the toggle control that calls setMobileMenuOpen is the same one that opens the panel). Target the mobileMenuOpen and setMobileMenuOpen usage in the page component, and ensure the mobile panel includes an accessible close button and appropriate styling/overlay for small screens.
backend/app/modules/langgraph_nodes/sentiment.py-24-33 (1)
24-33: ⚠️ Potential issue | 🟠 Major
Validate `cleaned_text` before spawning parallel tasks.
Right now, the pipeline can still launch fact-check/summary work even when the input text is invalid and the final status is guaranteed to be error.
Proposed fix
```diff
 async def run_parallel_analysis(state):
     provider = state.get("provider", "groq")
+    if not state.get("cleaned_text"):
+        return {
+            "status": "error",
+            "error_from": "parallel_analysis",
+            "message": "Missing or empty 'cleaned_text' in state",
+        }
     sentiment_task = asyncio.to_thread(run_sentiment, state, provider)
```

Also applies to: 99-104
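Outside LangGraph, the early-return guard above reduces to a few lines. A sketch with a stubbed sentiment node (`run_sentiment` below is a placeholder, not the real implementation):

```python
import asyncio

def run_sentiment(state: dict, provider: str) -> dict:
    # Placeholder for the real sentiment node.
    return {"sentiment": "neutral"}

async def run_parallel_analysis(state: dict) -> dict:
    provider = state.get("provider", "groq")
    # Fail fast: never spawn parallel work when the input text is missing.
    if not state.get("cleaned_text"):
        return {
            "status": "error",
            "error_from": "parallel_analysis",
            "message": "Missing or empty 'cleaned_text' in state",
        }
    result = await asyncio.to_thread(run_sentiment, state, provider)
    return {"status": "success", **result}
```

The guard runs before any `asyncio.to_thread` call, so fact-check and summary tasks are never scheduled for invalid input.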
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/langgraph_nodes/sentiment.py` around lines 24 - 33, run_parallel_analysis currently spawns sentiment, fact-check and summary tasks unconditionally; first validate the input text by checking state.get("cleaned_text") (or the same validation used elsewhere) and if it's missing/invalid, set the error status on state and return early so no asyncio.to_thread or _run_fact_check_pipeline tasks are started; move the cleaned_text validation before creating sentiment_task/fact_check_task/summary_task and mirror the same fix for the analogous block around functions run_sentiment, _run_fact_check_pipeline, and generate_summary in the other occurrence (lines ~99-104).
backend/app/modules/fact_check_tool.py-20-34 (1)
20-34: ⚠️ Potential issue | 🟠 Major
Add timeout guards around external calls.
A hanging Groq/DDGS call can block the whole graph request indefinitely.
Proposed fix pattern
```diff
-    response = await asyncio.to_thread(
-        client.chat.completions.create,
-        ...
-    )
+    response = await asyncio.wait_for(
+        asyncio.to_thread(
+            client.chat.completions.create,
+            ...
+        ),
+        timeout=30,
+    )
```

Apply the same timeout wrapper pattern to search execution and verification calls.
Also applies to: 64-70, 98-99, 174-193
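The wrapper pattern above can live as one reusable helper. A sketch; the 30-second default and the None-on-timeout convention are assumptions, not the project's current contract:

```python
import asyncio

EXTERNAL_CALL_TIMEOUT = 30  # seconds; tune per provider

async def call_with_timeout(sync_fn, *args, timeout=EXTERNAL_CALL_TIMEOUT, **kwargs):
    """Run a blocking client call in a worker thread, giving up after
    `timeout` seconds instead of letting one hung request stall the graph."""
    try:
        return await asyncio.wait_for(
            asyncio.to_thread(sync_fn, *args, **kwargs), timeout=timeout
        )
    except asyncio.TimeoutError:
        return None  # callers treat None as "no evidence" and move on
```

Each Groq/DDGS call site then becomes `await call_with_timeout(client.chat.completions.create, ...)`, with a per-call `timeout=` override where needed.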
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/fact_check_tool.py` around lines 20 - 34, The external LLM/search/verification calls in this module (e.g., client.chat.completions.create invoked via asyncio.to_thread, plus the search execution and verification calls referenced around lines 64-70, 98-99, and 174-193) need timeout guards so a hung Groq/DDGS or LLM call can't block the request; wrap those awaits with asyncio.wait_for (or an equivalent timeout wrapper) using a shared constant like EXTERNAL_CALL_TIMEOUT, keep the existing asyncio.to_thread usage for CPU-bound calls, and catch asyncio.TimeoutError to log and return a clear failure/empty result path instead of letting the coroutine hang.
frontend/app/perspective/page.tsx-85-87 (1)
85-87: ⚠️ Potential issue | 🟠 Major
Add accessible names to icon-only controls.
Line 85, Line 102, and Line 307 render icon-only buttons without `aria-label`, which hurts keyboard/screen-reader navigation.
Proposed fix
```diff
-<button onClick={() => setMobileMenuOpen(!mobileMenuOpen)} className="p-2 text-white">
+<button
+  onClick={() => setMobileMenuOpen(!mobileMenuOpen)}
+  className="p-2 text-white"
+  aria-label={mobileMenuOpen ? "Close menu" : "Open menu"}
+>
@@
 <button
   onClick={() => setLeftSidebarOpen(!leftSidebarOpen)}
   className="p-2 text-gray-400 hover:text-white transition-colors rounded-lg hover:bg-white/5"
+  aria-label={leftSidebarOpen ? "Collapse sidebar" : "Expand sidebar"}
 >
@@
 <button
   onClick={handleSend}
   disabled={!chatInput.trim() || sending || !threadId}
   className="absolute right-3 top-1/2 -translate-y-1/2 text-gray-400 hover:text-white p-1 rounded-md hover:bg-white/10 transition-all disabled:opacity-30 disabled:cursor-not-allowed"
+  aria-label={sending ? "Sending message" : "Send message"}
 >
```

Also applies to: 102-107, 307-316
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/perspective/page.tsx` around lines 85 - 87, The icon-only buttons (the toggle using setMobileMenuOpen/mobileMenuOpen that renders <X /> or <Menu /> and the other icon-only buttons referenced around the same blocks) lack accessible names; add descriptive aria-label attributes to each button (e.g., aria-label={mobileMenuOpen ? "Close menu" : "Open menu"} for the setMobileMenuOpen toggle) or, if a button is purely decorative, mark the inner icon as aria-hidden="true" and ensure the button has an aria-label. Also consider adding aria-pressed for toggle-like buttons to reflect state. Update the button elements that render the icons (<X />, <Menu /> and the other icon-only buttons in the same file) accordingly.
backend/app/modules/fact_check_tool.py-12-13 (1)
12-13: ⚠️ Potential issue | 🟠 Major
Fact-check path is pinned to Groq regardless of the selected provider.
This hard-requires `GROQ_API_KEY` even when the request/provider flow is Gemini, which breaks BYOK expectations.
Proposed direction
```diff
-client = Groq(api_key=os.getenv("GROQ_API_KEY"))
+# Build provider client per request/state instead of a fixed global Groq client.
+# This keeps fact-check behavior consistent with provider routing used elsewhere.
```
backend/app/modules/fact_check_tool.py-19-45 (1)
19-45: ⚠️ Potential issue | 🟠 Major
Skip claim extraction when `cleaned_text` is empty.
The current flow still sends an LLM request with empty content, which can generate fabricated claims and unnecessary cost.
Proposed fix
```diff
 async def extract_claims_node(state):
     logger.info("--- Fact Check Step 1: Extracting Claims ---")
     try:
         text = state.get("cleaned_text", "")
+        if not text or not text.strip():
+            return {"claims": []}
         response = await asyncio.to_thread(
             client.chat.completions.create,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/fact_check_tool.py` around lines 19 - 45, the extraction path should skip the LLM call when cleaned_text is empty to avoid hallucinations and cost: in the function that uses state.get("cleaned_text", "") (the block calling asyncio.to_thread with client.chat.completions.create and assigning response → raw_content → claims), add an early guard that checks if text is falsy or only whitespace and immediately log (logger.info or debug) and return {"claims": []} before invoking asyncio.to_thread; keep the rest of the logic unchanged so claims are only built when text has content.

backend/app/modules/chat/chat_graph.py-82-85 (1)
82-85: ⚠️ Potential issue | 🟠 Major

Thread initialization hardcodes Groq provider.
This ignores caller/provider selection and can break chat thread initialization when only Gemini credentials are configured.
Proposed fix
```diff
-async def initialize_chat_thread(thread_id: str, analysis_result: dict) -> None:
+async def initialize_chat_thread(
+    thread_id: str, analysis_result: dict, provider: str = "groq"
+) -> None:
@@
-    config = {"configurable": {"thread_id": thread_id, "provider": "groq"}}
+    config = {"configurable": {"thread_id": thread_id, "provider": provider}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/chat/chat_graph.py` around lines 82 - 85, The code currently hardcodes the provider in the config dict when initializing a thread (config = {"configurable": {"thread_id": thread_id, "provider": "groq"}}) and then calls chat_app.ainvoke(...), which breaks setups where a different provider (e.g., Gemini) is selected; replace the hardcoded "groq" with the actual provider selection used by the caller or chat subsystem (for example use a passed-in provider argument, thread/provider field, or chat_app.default_provider) so that the config becomes {"configurable": {"thread_id": thread_id, "provider": selected_provider}} before calling chat_app.ainvoke; ensure SystemMessage and thread_id usage remain unchanged.
🟡 Minor comments (10)
backend/app/modules/scraper/keywords.py-20-21 (1)
20-21: ⚠️ Potential issue | 🟡 Minor

Remove the duplicate stop-word entry to satisfy lint.
Line 20 includes `"also"` twice in `_STOP_WORDS`. Runtime behavior is unchanged, but this triggers Ruff B033 and can fail lint gates.

Suggested fix
- "one", "two", "many", "way", "even", "back", "well", "also", + "one", "two", "many", "way", "even", "back", "well",🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/scraper/keywords.py` around lines 20 - 21, the variable `_STOP_WORDS` contains a duplicated entry "also" which triggers the Ruff B033 lint; open the _STOP_WORDS definition in keywords.py and remove the duplicate "also" so each stop-word appears only once (keep a single "also" in the tuple/list), ensuring no other formatting changes.

backend/.env.example-1-7 (1)
1-7: ⚠️ Potential issue | 🟡 Minor

Fix inconsistent spacing around `=` to avoid parsing issues.

Lines 3-4 have spaces around `=` while other lines don't. Some environment variable loaders may include the spaces as part of the key or value, causing silent failures.

Proposed fix
```diff
 GROQ_API_KEY=
 GROQ_MODEL=llama-3.3-70b-versatile
-PINECONE_API_KEY =
-PORT = 5555
+PINECONE_API_KEY=
+PORT=5555
 HF_TOKEN=
 GEMINI_MODEL=
 GEMINI_API_KEY=
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/.env.example` around lines 1 - 7, the .env.example has inconsistent spacing around = which can break env parsers; update the file so all entries use the exact "KEY=VALUE" format with no spaces around the equals sign (e.g., change "PINECONE_API_KEY =" to "PINECONE_API_KEY=" and "PORT = 5555" to "PORT=5555"), and verify other keys like GROQ_API_KEY, GROQ_MODEL, HF_TOKEN, GEMINI_MODEL, and GEMINI_API_KEY also have no leading/trailing spaces so loaders read keys and values correctly.

frontend/components/landing/StatsSection.tsx-27-27 (1)
27-27: ⚠️ Potential issue | 🟡 Minor

Typo: Missing space in "4.3stars".
The value should include a space or use a star symbol for readability.
Proposed fix
- <Stat value="4.3stars" label="Ratings" /> + <Stat value="4.3 ★" label="Ratings" />🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/landing/StatsSection.tsx` at line 27, the Stat component usage has a typo: the value prop is "4.3stars" which lacks spacing; update the Stat invocation (Stat value="4.3stars" label="Ratings") to include a space or a star symbol (e.g., value="4.3 stars" or value="4.3★") so the displayed rating is readable, and ensure any similar hardcoded rating strings in StatsSection.tsx follow the same format.

frontend/components/perspective/BiasGauge.tsx-24-24 (1)
24-24: ⚠️ Potential issue | 🟡 Minor

Normalize `score` before rendering the gauge and percentage.

Use a clamped finite value to avoid invalid dash lengths and UI glitches when score is outside 0..100.

🛠 Suggested guard
```diff
 export function BiasGauge({ score, gradientColors, textColor, label }: BiasGaugeProps) {
+  const normalizedScore = Number.isFinite(score)
+    ? Math.min(100, Math.max(0, score))
+    : 0;
   return (
@@
-          strokeDasharray={`${(score / 100) * 126} 126`}
+          strokeDasharray={`${(normalizedScore / 100) * 126} 126`}
@@
-          <div className={`text-3xl font-bold font-sora ${textColor}`}>{Math.round(score)}%</div>
+          <div className={`text-3xl font-bold font-sora ${textColor}`}>{Math.round(normalizedScore)}%</div>
```

Also applies to: 36-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/perspective/BiasGauge.tsx` at line 24, normalize and validate the incoming score before using it in the SVG strokeDasharray and percentage text: compute a normalizedScore in BiasGauge (e.g., derive from the prop score) that first coerces to a Number, checks Number.isFinite, and clamps to the 0..100 range via Math.max(0, Math.min(100, value)); then use normalizedScore for the strokeDasharray calculation and for the displayed percentage to avoid NaN/Infinity or out-of-range dash lengths and UI glitches.

frontend/components/landing/FeaturesSection.tsx-11-27 (1)
11-27: ⚠️ Potential issue | 🟡 Minor

`\n` in feature titles won't render as visible line breaks.

At lines 11/16/21/26, the strings use newline escapes, but the current title rendering collapses whitespace. If the multi-line layout is intentional, this will not display as designed.
💡 Suggested fix in this file (remove escaped newlines)
- title: "Uncover Agendas\nand Leanings", + title: "Uncover Agendas and Leanings", ... - title: "Bring Your Own Keys,\nYour Privacy, Your Control", + title: "Bring Your Own Keys, Your Privacy, Your Control", ... - title: "Deep Research,\nDone in seconds", + title: "Deep Research, Done in seconds", ... - title: "Verify Claims with\nWeb-Search", + title: "Verify Claims with Web-Search",🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/landing/FeaturesSection.tsx` around lines 11 - 27, the feature "title" strings in the features array (used by the FeaturesSection component) include escaped newlines ("\n") which won't produce visible breaks; update the features data by removing "\n" from the title fields and instead implement explicit line breaks in the renderer (e.g., render titles with JSX <br/> or map a title array to separate lines) so that titles in FeaturesSection display on multiple lines as intended; modify the "title" entries for OwnKeysImg, DeepResearchImg, FactCheckImg and the earlier item and adjust the rendering logic in FeaturesSection to handle the chosen format.

backend/app/modules/chat/chat_graph.py-89-95 (1)
89-95: ⚠️ Potential issue | 🟡 Minor

Add server-side guard for empty chat messages.
`send_chat_message` currently accepts blank input, which can still trigger an LLM call from non-UI clients.

Proposed fix
```diff
 async def send_chat_message(
     thread_id: str, message: str, provider: str = "groq"
 ) -> str:
+    if not message or not message.strip():
+        return ""
     config = {"configurable": {"thread_id": thread_id, "provider": provider}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/chat/chat_graph.py` around lines 89 - 95, send_chat_message currently allows blank or whitespace-only messages which can trigger unnecessary LLM calls; add a server-side guard at the start of send_chat_message to validate the incoming message (e.g., if not message or message.strip() == "") and handle it by returning an error/raising an appropriate exception instead of calling chat_app.ainvoke; ensure the check is applied before constructing the config or HumanMessage and reference send_chat_message and chat_app.ainvoke/HumanMessage when locating where to add the guard.

frontend/components/landing/CTASection.tsx-17-19 (1)
17-19: ⚠️ Potential issue | 🟡 Minor

"Try now" button has no action.
Same as in Navbar.tsx, this CTA button lacks an `onClick` or navigation behavior. For a call-to-action section, this is particularly important, as users are explicitly being encouraged to take action.
Verify each finding against the current code and only fix it if needed. In `@frontend/components/landing/CTASection.tsx` around lines 17 - 19, the CTA "Try now" Button in CTASection.tsx has no action; update the Button (the <Button size="large">Try now</Button> node) to perform navigation like the Navbar counterpart by wiring an onClick that routes to the intended page (e.g., using your router's push method) or by replacing/wrapping it with a Link component; ensure the handler calls the same route target used in Navbar.tsx so the CTA actually navigates users to the sign-up/demo page.

frontend/components/landing/Button.tsx-16-22 (1)
16-22: ⚠️ Potential issue | 🟡 Minor

Add `type="button"` to prevent unintended form submissions.

The `<button>` element defaults to `type="submit"` when no type is specified. If this Button is ever used inside a form, it will trigger form submission. Adding `type="button"` explicitly prevents this.

🛠️ Proposed fix
```diff
   return (
     <button
+      type="button"
       onClick={onClick}
       className={`bg-background-button rounded-button text-white font-normal ${sizeClasses} flex items-center justify-end`}
     >
       {children}
     </button>
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/landing/Button.tsx` around lines 16 - 22, the Button component's <button> lacks an explicit type, which defaults to "submit" and can cause unintended form submissions; update the JSX in the Button component (the element rendering with props onClick, children and sizeClasses) to include type="button" on the <button> element so it will not submit enclosing forms when clicked.

frontend/components/landing/Navbar.tsx-11-11 (1)
11-11: ⚠️ Potential issue | 🟡 Minor

"Try now" button has no action.
The `Button` component is rendered without an `onClick` handler or navigation link. Users clicking this button will have no feedback or action. Consider either:

- Adding an `onClick` to navigate to the `/perspective` page, or
- Wrapping in a `Link` from Next.js, or
- Converting `Button` to accept an `href` prop for anchor behavior.

🔗 Example using Next.js Link
```diff
 import React from "react";
+import Link from "next/link";
 import Button from "./Button";

 export default function Navbar() {
   return (
     <div className="w-full flex justify-center pt-6 px-4 z-50">
       <nav className="w-full max-w-[1400px] flex items-center justify-between px-6 py-4 md:px-[60px] md:py-[19px] rounded-nav bg-white/5 backdrop-blur-md border border-white/10 shadow-lg">
         <h1 className="font-semibold text-2xl md:text-[36px] leading-normal tracking-tight text-white cursor-pointer select-none">
           perspective
         </h1>
-        <Button>Try now</Button>
+        <Link href="/perspective">
+          <Button>Try now</Button>
+        </Link>
       </nav>
     </div>
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/landing/Navbar.tsx` at line 11, the "Try now" Button in Navbar.tsx is rendered without any action; update the Button usage in Navbar (symbol: Button in file Navbar.tsx) to perform navigation to the /perspective page by either adding an onClick handler that calls router.push('/perspective'), wrapping the Button in Next.js Link, or using a Button variant that accepts an href prop; pick one approach and ensure the click triggers navigation and accessible semantics (e.g., use Link or role="link" if necessary).

backend/app/routes/routes.py-95-105 (1)
95-105: ⚠️ Potential issue | 🟡 Minor

Return proper HTTP error status on chat failure.
The `/chat` endpoint catches exceptions but returns a 200 OK with an `error` field in the body. This makes it difficult for clients to distinguish successful responses from errors. Consider raising an `HTTPException` instead.

Additionally, as noted by static analysis:

- Line 102: The `return` can be moved to an `else` block for clarity.
- Line 104: `logger.exception` already includes the exception details; `f"Chat error: {e}"` is redundant.

🛠️ Proposed fix
```diff
+from fastapi import APIRouter, HTTPException
+
 @router.post("/chat")
 async def answer_query(request: ChatQuery):
     """Send a follow-up message within an existing analysis thread.

     The ``provider`` field allows the user to switch models mid-conversation.
     """
     try:
         answer = await send_chat_message(
             thread_id=request.thread_id,
             message=request.message,
             provider=request.provider,
         )
-        logger.info(f"Chat response for thread {request.thread_id}")
-        return {"answer": answer, "thread_id": request.thread_id}
     except Exception as e:
-        logger.exception(f"Chat error: {e}")
-        return {"error": str(e), "thread_id": request.thread_id}
+        logger.exception("Chat error")
+        raise HTTPException(status_code=500, detail=str(e))
+    else:
+        logger.info(f"Chat response for thread {request.thread_id}")
+        return {"answer": answer, "thread_id": request.thread_id}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/routes/routes.py` around lines 95 - 105, The /chat handler currently catches exceptions from send_chat_message and returns a 200 with an error body; change this to raise a FastAPI HTTPException (e.g., status_code=500, detail=str(e)) so clients receive a proper non-200 status; also simplify the log call to logger.exception("Chat error") (remove f-string interpolation) and move the successful return into the try/else structure (call send_chat_message inside try, on success return in the else, and on exception raise HTTPException in the except). Ensure you reference the send_chat_message call, logger.exception, and raise HTTPException from fastapi.exceptions.
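The try/except/else shape recommended above can be illustrated without FastAPI. In this sketch, `HTTPError` is a stand-in for `fastapi.HTTPException` and the injected `send` callable stands in for `send_chat_message`; the names are illustrative only:

```python
import logging

logger = logging.getLogger("chat")

class HTTPError(Exception):
    """Stand-in for fastapi.HTTPException in this sketch."""
    def __init__(self, status_code: int, detail: str) -> None:
        super().__init__(detail)
        self.status_code = status_code

def answer_query(thread_id: str, message: str, send) -> dict:
    try:
        answer = send(thread_id, message)
    except Exception as exc:
        # logger.exception already records the traceback; no f-string needed.
        logger.exception("Chat error")
        raise HTTPError(status_code=500, detail=str(exc)) from exc
    else:
        # The success path lives in `else`, so only the `send` call is guarded.
        return {"answer": answer, "thread_id": thread_id}
```

With this shape, clients see a real 500 status on failure instead of having to probe a 200 body for an `error` key.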
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (12)
- `backend/uv.lock` is excluded by `!**/*.lock`
- `frontend/assets/BiasDetectionBG.png` is excluded by `!**/*.png`
- `frontend/assets/DeepResearchBG.png` is excluded by `!**/*.png`
- `frontend/assets/FactCheckBG.png` is excluded by `!**/*.png`
- `frontend/assets/OwnKeysBG.png` is excluded by `!**/*.png`
- `frontend/assets/chatai.svg` is excluded by `!**/*.svg`
- `frontend/assets/dropdown.svg` is excluded by `!**/*.svg`
- `frontend/assets/newchaticon.svg` is excluded by `!**/*.svg`
- `frontend/assets/sendbtn.svg` is excluded by `!**/*.svg`
- `frontend/assets/settingsicon.svg` is excluded by `!**/*.svg`
- `frontend/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `frontend/public/placeholder-logo.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (104)
README.md, backend/.env.example, backend/app/llm_config.py, backend/app/modules/bias_detection/check_bias.py, backend/app/modules/chat/chat_graph.py, backend/app/modules/chat/get_rag_data.py, backend/app/modules/chat/llm_processing.py, backend/app/modules/fact_check_tool.py, backend/app/modules/facts_check/__init__.py, backend/app/modules/facts_check/llm_processing.py, backend/app/modules/facts_check/web_search.py, backend/app/modules/langgraph_builder.py, backend/app/modules/langgraph_nodes/generate_perspective.py, backend/app/modules/langgraph_nodes/judge.py, backend/app/modules/langgraph_nodes/sentiment.py, backend/app/modules/langgraph_nodes/store_and_send.py, backend/app/modules/pipeline.py, backend/app/modules/scraper/cleaner.py, backend/app/modules/scraper/extractor.py, backend/app/modules/scraper/keywords.py, backend/app/modules/vector_store/chunk_rag_data.py, backend/app/routes/routes.py, backend/app/utils/fact_check_utils.py, backend/app/utils/prompt_templates.py, backend/main.py, backend/pyproject.toml, frontend/app/analyze/loading/page.tsx, frontend/app/analyze/page.tsx, frontend/app/analyze/results/page.tsx, frontend/app/globals.css, frontend/app/layout.tsx, frontend/app/page.tsx, frontend/app/perspective/page.tsx, frontend/components/bias-meter.tsx, frontend/components/landing/Button.tsx, frontend/components/landing/CTASection.tsx, frontend/components/landing/FeatureCard.tsx, frontend/components/landing/FeaturesSection.tsx, frontend/components/landing/Footer.tsx, frontend/components/landing/HeroSection.tsx, frontend/components/landing/Navbar.tsx, frontend/components/landing/SearchBar.tsx, frontend/components/landing/StatsSection.tsx, frontend/components/perspective/BiasGauge.tsx, frontend/components/perspective/RightSidebar.tsx, frontend/components/theme-provider.tsx, frontend/components/theme-toggle.tsx, frontend/components/ui/accordion.tsx, frontend/components/ui/alert-dialog.tsx, frontend/components/ui/alert.tsx, frontend/components/ui/aspect-ratio.tsx, frontend/components/ui/avatar.tsx, frontend/components/ui/badge.tsx, frontend/components/ui/breadcrumb.tsx, frontend/components/ui/button.tsx, frontend/components/ui/calendar.tsx, frontend/components/ui/card.tsx, frontend/components/ui/carousel.tsx, frontend/components/ui/chart.tsx, frontend/components/ui/checkbox.tsx, frontend/components/ui/collapsible.tsx, frontend/components/ui/command.tsx, frontend/components/ui/context-menu.tsx, frontend/components/ui/dialog.tsx, frontend/components/ui/drawer.tsx, frontend/components/ui/dropdown-menu.tsx, frontend/components/ui/form.tsx, frontend/components/ui/hover-card.tsx, frontend/components/ui/input-otp.tsx, frontend/components/ui/input.tsx, frontend/components/ui/label.tsx, frontend/components/ui/menubar.tsx, frontend/components/ui/navigation-menu.tsx, frontend/components/ui/pagination.tsx, frontend/components/ui/popover.tsx, frontend/components/ui/progress.tsx, frontend/components/ui/radio-group.tsx, frontend/components/ui/resizable.tsx, frontend/components/ui/scroll-area.tsx, frontend/components/ui/select.tsx, frontend/components/ui/separator.tsx, frontend/components/ui/sheet.tsx, frontend/components/ui/sidebar.tsx, frontend/components/ui/skeleton.tsx, frontend/components/ui/slider.tsx, frontend/components/ui/sonner.tsx, frontend/components/ui/switch.tsx, frontend/components/ui/table.tsx, frontend/components/ui/tabs.tsx, frontend/components/ui/textarea.tsx, frontend/components/ui/toast.tsx, frontend/components/ui/toaster.tsx, frontend/components/ui/toggle-group.tsx, frontend/components/ui/toggle.tsx, frontend/components/ui/tooltip.tsx, frontend/components/ui/use-mobile.tsx, frontend/components/ui/use-toast.ts, frontend/hooks/use-chat.ts, frontend/hooks/use-mobile.tsx, frontend/hooks/use-perspective.ts, frontend/hooks/use-toast.ts, frontend/lib/config.ts, frontend/styles/globals.css, frontend/tailwind.config.ts
💤 Files with no reviewable changes (61)
- frontend/components/theme-provider.tsx
- frontend/components/ui/card.tsx
- frontend/components/ui/aspect-ratio.tsx
- frontend/components/ui/switch.tsx
- frontend/components/ui/separator.tsx
- frontend/components/ui/badge.tsx
- frontend/components/theme-toggle.tsx
- frontend/styles/globals.css
- frontend/components/ui/popover.tsx
- frontend/components/ui/radio-group.tsx
- frontend/components/bias-meter.tsx
- frontend/components/ui/alert.tsx
- frontend/components/ui/toaster.tsx
- frontend/components/ui/tooltip.tsx
- backend/app/modules/facts_check/web_search.py
- frontend/components/ui/label.tsx
- frontend/components/ui/breadcrumb.tsx
- frontend/hooks/use-mobile.tsx
- frontend/components/ui/use-mobile.tsx
- frontend/components/ui/checkbox.tsx
- frontend/components/ui/calendar.tsx
- frontend/components/ui/use-toast.ts
- backend/app/modules/facts_check/llm_processing.py
- frontend/components/ui/chart.tsx
- frontend/components/ui/scroll-area.tsx
- frontend/components/ui/drawer.tsx
- frontend/components/ui/resizable.tsx
- frontend/components/ui/menubar.tsx
- frontend/components/ui/sonner.tsx
- frontend/components/ui/accordion.tsx
- frontend/components/ui/form.tsx
- frontend/components/ui/input.tsx
- frontend/components/ui/collapsible.tsx
- frontend/components/ui/skeleton.tsx
- frontend/components/ui/command.tsx
- frontend/components/ui/carousel.tsx
- frontend/components/ui/tabs.tsx
- frontend/app/analyze/page.tsx
- frontend/components/ui/toggle.tsx
- frontend/components/ui/input-otp.tsx
- frontend/app/analyze/results/page.tsx
- frontend/components/ui/toggle-group.tsx
- frontend/app/analyze/loading/page.tsx
- frontend/components/ui/sheet.tsx
- frontend/components/ui/table.tsx
- frontend/components/ui/button.tsx
- frontend/components/ui/sidebar.tsx
- frontend/components/ui/progress.tsx
- frontend/components/ui/slider.tsx
- frontend/hooks/use-toast.ts
- frontend/components/ui/navigation-menu.tsx
- frontend/components/ui/select.tsx
- frontend/components/ui/pagination.tsx
- frontend/components/ui/textarea.tsx
- frontend/components/ui/hover-card.tsx
- frontend/components/ui/dialog.tsx
- frontend/components/ui/dropdown-menu.tsx
- frontend/components/ui/alert-dialog.tsx
- frontend/components/ui/avatar.tsx
- frontend/components/ui/context-menu.tsx
- frontend/components/ui/toast.tsx
```diff
     graph.add_conditional_edges(
         "store_and_send",
-        lambda x: ("error_handler" if x.get("status") == "error" else "__end__"),
+        lambda x: "error_handler" if x.get("status") == "error" else "end",
     )

     graph.set_finish_point("store_and_send")

-    return graph.compile()
+    return graph.compile(checkpointer=memory)
\ No newline at end of file
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
file="$(fd -p 'langgraph_builder.py' | head -n1)"
if [ -z "$file" ]; then
  echo "File not found"
  exit 1
fi
echo "=== Inspecting: $file ==="
wc -l "$file"
echo ""
echo "=== Lines 1-170 ==="
sed -n '1,170p' "$file"
echo ""
echo "=== Searching for graph targets and imports ==="
rg -n 'add_node|add_conditional_edges|set_finish_point|from langgraph\.graph' "$file"
```

Repository: AOSSIE-Org/Perspective
Length of output: 4284
store_and_send success branch routes to undefined "end" target.
Line 106 returns the string "end" as a conditional edge target, but no such node exists in the graph. This will cause a runtime error when execution reaches the success path. Import the END constant from langgraph.graph and use it instead of the string "end".
Proposed fix
```diff
-from langgraph.graph import StateGraph
+from langgraph.graph import StateGraph, END
@@
     graph.add_conditional_edges(
         "store_and_send",
-        lambda x: "error_handler" if x.get("status") == "error" else "end",
+        lambda x: "error_handler" if x.get("status") == "error" else END,
     )
```
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
     graph.add_conditional_edges(
         "store_and_send",
-        lambda x: "error_handler" if x.get("status") == "error" else "end",
+        lambda x: "error_handler" if x.get("status") == "error" else END,
     )
     graph.set_finish_point("store_and_send")
     return graph.compile(checkpointer=memory)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/modules/langgraph_builder.py` around lines 104 - 110, The
conditional edge target in graph.add_conditional_edges for "store_and_send"
returns the string "end", which doesn't exist; import the END sentinel from
langgraph.graph and replace the literal "end" with END in the lambda used by
graph.add_conditional_edges so the success path points to the framework's finish
node; keep graph.set_finish_point("store_and_send") and return
graph.compile(checkpointer=memory) unchanged.
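The routing fix is easy to verify in isolation. In langgraph, `END` is a module-level sentinel (currently the string `"__end__"`); the router below mirrors the corrected lambda, using a local stand-in for the constant rather than importing the library:

```python
# Stand-in for langgraph.graph.END; real code should import the constant,
# not re-declare it, so it stays correct if the library changes the value.
END = "__end__"

def route_after_store(state: dict) -> str:
    """Conditional-edge router: failures go to error_handler, success terminates."""
    return "error_handler" if state.get("status") == "error" else END
```

Returning the bare string `"end"` would make LangGraph raise at runtime on the success path, because no node with that name is registered in the graph.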
| <div className="relative w-full h-[200px] mb-4 flex-shrink-0 flex items-center justify-center"> | ||
| <Image | ||
| src={image} | ||
| alt={title} | ||
| className="object-contain max-h-full max-w-full" | ||
| /> | ||
| </div> |
🧩 Analysis chain
🌐 Web query:
Next.js Image component fill prop required width height 2024 2025
💡 Result:
In Next.js (including 2024–2025 releases), <Image /> requires width and height to prevent layout shift unless one of these is true:
- The image is statically imported, or
- You use `fill` (i.e., `<Image fill ... />`), which implicitly sizes the image by its parent container, so `width`/`height` are not required. [1][2]
When using fill, make sure:
- The parent element has `position: relative/absolute/fixed`, and
- The parent has a real size via CSS (e.g., set height/width), since `fill` makes the image expand to the parent. [1][2]
Also recommended for fill + responsive layouts:
- Provide a `sizes` prop; otherwise the browser assumes `100vw`, which can download unnecessarily large images. [1]
Sources: Next.js Image API Reference [1], Next.js “Optimizing Images” docs [2].
🏁 Script executed:
```shell
# Check the codebase for Next.js configuration and Image usage
git ls-files | grep -E "(next\.config|tsconfig)" | head -20
```

Repository: AOSSIE-Org/Perspective
Length of output: 113
🏁 Script executed:
```shell
# Read the actual FeatureCard.tsx file to verify current state
cat -n frontend/components/landing/FeatureCard.tsx
```

Repository: AOSSIE-Org/Perspective
Length of output: 1725
🏁 Script executed:
```shell
# Check next.config.mjs for any Image configuration
cat -n frontend/next.config.mjs
```

Repository: AOSSIE-Org/Perspective
Length of output: 391
🏁 Script executed:
```shell
# Check if there are any other Image usages in the codebase for comparison
rg "Image\s+" frontend/components --type tsx --type ts -A 3 | head -50
```

Repository: AOSSIE-Org/Perspective
Length of output: 93
🌐 Web query:
Next.js Image unoptimized true config width height requirement
💡 Result:
Setting images: { unoptimized: true } (or <Image unoptimized />) only disables Next.js Image Optimization (it serves the original src “as-is”, without changing size/quality/format and without the optimization pipeline). (nextjs.org)
It does not remove the width/height requirement of the next/image component. You must provide both width and height unless:
- the image is statically imported, or
- you use `fill` (and size it via CSS). (nextjs.org)
Citations:
- 1: https://nextjs.org/docs/pages/api-reference/components/image?utm_source=openai
- 2: https://nextjs.org/docs/pages/api-reference/components/image
🏁 Script executed:
```shell
# Try to find Image usages with correct syntax
rg "Image" frontend/components --type ts -A 3 | head -60
```

Repository: AOSSIE-Org/Perspective
Length of output: 841
Add required fill prop to Next.js Image component.
Next.js Image requires either width/height props or the fill prop. The unoptimized config setting does not remove this requirement. Since the parent container already has position: relative and a fixed height (h-[200px]), use fill to let the image size within the parent.
Proposed fix
<div className="relative w-full h-[200px] mb-4 flex-shrink-0 flex items-center justify-center">
<Image
src={image}
alt={title}
+ fill
className="object-contain"
/>
</div>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <div className="relative w-full h-[200px] mb-4 flex-shrink-0 flex items-center justify-center"> | |
| <Image | |
| src={image} | |
| alt={title} | |
| className="object-contain max-h-full max-w-full" | |
| /> | |
| </div> | |
| <div className="relative w-full h-[200px] mb-4 flex-shrink-0 flex items-center justify-center"> | |
| <Image | |
| src={image} | |
| alt={title} | |
| fill | |
| className="object-contain" | |
| /> | |
| </div> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/components/landing/FeatureCard.tsx` around lines 20 - 26, The
Next.js Image in FeatureCard.tsx is missing the required sizing prop; update the
<Image> usage inside the FeatureCard component to use the fill prop (since the
parent div is position:relative with fixed h-[200px]) instead of relying on
width/height; keep the existing className (object-contain max-h-full max-w-full)
and alt={title}, remove any width/height props if present, and ensure the parent
wrapper remains relative so the Image with fill sizes correctly.
```tsx
const validateUrl = (inputUrl: string) => {
  try {
    new URL(inputUrl);
    return true;
  } catch {
    return false;
  }
};
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's examine the SearchBar.tsx file around the mentioned lines
head -60 frontend/components/landing/SearchBar.tsx | cat -n
```

Repository: AOSSIE-Org/Perspective
Length of output: 2272
🏁 Script executed:
```shell
# Search for articleUrl usage in the codebase
rg "articleUrl" --type ts --type tsx -A 3 -B 1
```

Repository: AOSSIE-Org/Perspective
Length of output: 93
🏁 Script executed:
```shell
# Search for articleUrl usage without file type restriction
rg "articleUrl" -A 3 -B 1
```

Repository: AOSSIE-Org/Perspective
Length of output: 1610
🏁 Script executed:
```shell
# Let's examine the use-perspective.ts hook to see how the URL is used
cat frontend/hooks/use-perspective.ts | head -100
```

Repository: AOSSIE-Org/Perspective
Length of output: 3417
🏁 Script executed:
```shell
# Search for the backend API endpoints that process the URL
find . -type f -name "*.ts" -o -name "*.js" -o -name "*.py" | xargs grep -l "api/bias\|api/process" | head -20
```

Repository: AOSSIE-Org/Perspective
Length of output: 101
🏁 Script executed:
```shell
# Look for backend directory structure
ls -la . | head -20
```

Repository: AOSSIE-Org/Perspective
Length of output: 1028
🏁 Script executed:
```shell
# Explore the backend directory structure
find backend -type f -name "*.py" | head -20
```

Repository: AOSSIE-Org/Perspective
Length of output: 875
🏁 Script executed:
```shell
# Search for the main backend app file with route definitions
find backend -type f \( -name "main.py" -o -name "app.py" -o -name "server.py" \) | head -10
```

Repository: AOSSIE-Org/Perspective
Length of output: 81
🏁 Script executed:
```shell
# Examine the main backend file to find the API endpoints
cat backend/main.py | head -150
```

Repository: AOSSIE-Org/Perspective
Length of output: 1518
🏁 Script executed:
```shell
# Find and examine the routes file
cat backend/app/routes/routes.py | head -200
```

Repository: AOSSIE-Org/Perspective
Length of output: 3644
🏁 Script executed:
```shell
# Find the pipeline module to see what run_scraper_pipeline does
find backend -type f -name "pipeline.py"
```

Repository: AOSSIE-Org/Perspective
Length of output: 97
🏁 Script executed:
```shell
# Examine the pipeline module
cat backend/app/modules/pipeline.py | head -150
```

Repository: AOSSIE-Org/Perspective
Length of output: 3134
🏁 Script executed:
```shell
# Examine the Article_extractor to see how it uses the URL
cat backend/app/modules/scraper/extractor.py | head -200
```

Repository: AOSSIE-Org/Perspective
Length of output: 2792
Restrict URL validation to HTTP/HTTPS schemes.
The validateUrl() function currently accepts any URL scheme, including file://, javascript:, and data:. Since the URL is sent to backend endpoints that attempt to fetch the resource, non-HTTP schemes create a Server-Side File Inclusion (SSFI) vulnerability: for example, file:///etc/passwd or file:///.env could expose server files.
Restrict schemes to http: and https:, and trim the input:
Suggested fix
const validateUrl = (inputUrl: string) => {
try {
- new URL(inputUrl);
- return true;
+ const parsed = new URL(inputUrl.trim());
+ return parsed.protocol === "http:" || parsed.protocol === "https:";
} catch {
return false;
}
};
@@
- sessionStorage.setItem("articleUrl", url);
+ sessionStorage.setItem("articleUrl", url.trim());

Also applies to: lines 45–46
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/components/landing/SearchBar.tsx` around lines 19–26, the
validateUrl function accepts any scheme, which enables SSFI risks; update
validateUrl to trim the input string, construct a URL object, then explicitly
allow only protocol === 'http:' or 'https:' and return false otherwise; also
ensure any call sites that previously validated raw input (e.g., where
validateUrl is invoked) pass the trimmed value or re-validate after trimming so
non-HTTP schemes (file:, data:, javascript:) are rejected.
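Taken together, the suggestion above amounts to a small standalone validator. A minimal sketch (the function name matches the review; the exact call sites in SearchBar.tsx are not shown here, so treat this as illustrative rather than the final implementation):

```typescript
// Hardened URL validator: only http: and https: schemes pass.
// Trimming happens before parsing so "  https://…  " is accepted
// and the same trimmed value can be stored in sessionStorage.
export const validateUrl = (inputUrl: string): boolean => {
  try {
    const parsed = new URL(inputUrl.trim());
    return parsed.protocol === "http:" || parsed.protocol === "https:";
  } catch {
    // new URL() throws on strings that are not URLs at all.
    return false;
  }
};
```

Note that `new URL("javascript:alert(1)")` parses successfully (the scheme is valid URL syntax), so the explicit protocol check is what actually blocks it, not the try/catch.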
<a
  href={c.url}
  target="_blank"
  rel="noopener noreferrer"
  className="text-blue-400 hover:underline text-xs font-medium flex items-center gap-1"
Validate citation URLs before rendering links (javascript: injection risk).
Line 164 binds untrusted c.url directly into href. If a malicious value reaches the UI (e.g., javascript:), clicking executes script.
🔒 Suggested safe URL handling
+ const getSafeHref = (url?: string) => {
+ try {
+ if (!url) return null;
+ const parsed = new URL(url);
+ return parsed.protocol === "http:" || parsed.protocol === "https:" ? parsed.href : null;
+ } catch {
+ return null;
+ }
+ };
...
{citations.map((c, i) => (
<li key={i}>
- <a
- href={c.url}
- target="_blank"
- rel="noopener noreferrer"
- className="text-blue-400 hover:underline text-xs font-medium flex items-center gap-1"
- >
- {c.title || c.url}
- <ExternalLink className="w-3 h-3 flex-shrink-0" />
- </a>
+ {getSafeHref(c.url) ? (
+ <a
+ href={getSafeHref(c.url)!}
+ target="_blank"
+ rel="noopener noreferrer"
+ className="text-blue-400 hover:underline text-xs font-medium flex items-center gap-1"
+ >
+ {c.title || c.url}
+ <ExternalLink className="w-3 h-3 flex-shrink-0" />
+ </a>
+ ) : (
+ <span className="text-gray-500 text-xs">Invalid citation URL</span>
+ )}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/components/perspective/RightSidebar.tsx` around lines 163–167, the
anchor in RightSidebar binds untrusted c.url directly to href, allowing
javascript: injection; sanitize and validate the URL before rendering by adding
a helper (e.g., isSafeUrl or sanitizeUrl) and use it when rendering the anchor
(the mapping that produces the <a href={c.url} ...> element). The validator
should allow only http:, https:, mailto: (or other approved schemes) and reject
or replace unsafe values (e.g., default to '#' or omit the anchor) so that c.url
is never used directly without validation.
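The safe-href helper proposed in the diff can also live as a standalone function. A minimal sketch, assuming only http: and https: are allowed (mailto: could be added to the allow-list if the project wants it):

```typescript
// Returns a normalized href if the URL uses an allowed scheme,
// otherwise null so the caller can render a fallback instead of a link.
export const getSafeHref = (url?: string): string | null => {
  try {
    if (!url) return null;
    const parsed = new URL(url);
    return parsed.protocol === "http:" || parsed.protocol === "https:"
      ? parsed.href
      : null;
  } catch {
    // Relative or malformed strings are rejected outright.
    return null;
  }
};
```

Returning the parsed `.href` (rather than echoing the raw string) also normalizes the URL, which makes the behavior easier to reason about in tests.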
Addressed Issues:
#147
Fixes #(TODO:issue number)
Description
Frontend / UI:
Complete Redesign: The /perspective page is now a chat-based interface. I fixed the casing bugs in RightSidebar and added proper loading states while the initial graph runs.
Model Toggle: Added a dropdown in the chat input letting the user switch between Groq and Gemini on the fly.
Transparency: Added UI components to prominently display the article_summary and clickable web_search_citations so users can see exactly where the DuckDuckGo agent got its facts.
Backend / Architecture:
BYOK Implemented: The backend now dynamically reads GROQ_API_KEY, GEMINI_API_KEY, and their respective model names from environment variables based on the frontend payload. No more hardcoded providers.
LangGraph Memory: Integrated MemorySaver() with thread IDs. You can now ask the AI follow-up questions about the article, and it actually remembers the context.
Pipeline Upgrades: The initial payload now returns the summary and citations alongside the fact-check arrays. Also cleaned up some bugs in chunk_rag_data.py (fixed the Pydantic v2 model_dump() fallback and ID generation).
Spring Cleaning: Removed bloated packages that are no longer needed since the DuckDuckGo pivot (nltk, google-api-python-client, etc.).
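On the frontend side, the model toggle described above presumably sends the chosen provider alongside each chat message so the backend can pick the matching API key. A hypothetical sketch of that request payload; the field names (provider, thread_id) are illustrative assumptions, not taken from use-chat.ts:

```typescript
// Hypothetical chat request shape. "provider" drives the backend's
// BYOK key selection; "thread_id" keys the LangGraph MemorySaver
// checkpoint so follow-up questions keep their context.
type Provider = "groq" | "gemini";

interface ChatRequest {
  message: string;
  provider: Provider;
  thread_id: string;
}

export const buildChatRequest = (
  message: string,
  provider: Provider,
  threadId: string,
): ChatRequest => ({
  message: message.trim(),
  provider,
  thread_id: threadId,
});
```

Keeping the provider per-request (rather than per-session) is what makes switching models mid-conversation possible without resetting the thread.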
Screenshots/Recordings:
Additional Notes:
LINK TO DEMO VIDEO
Related to
#147
(This is an integration PR combining PR 1, DuckDuckGo integration and optimization of the LangGraph workflow, and PR 2, a redesign of the website that I created uniquely for Perspective; no AI was used for the design.)
Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: Claude Opus 4.6
The frontend use-perspective and use-chat.ts files were written by Claude.
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.