
Releases: PrynAI/PrynAI-chat

Your profile menu just got smarter

16 Oct 12:47


We’ve made it easier to find help and keep track of updates.

  • Click your avatar (top‑right) to open an improved menu.
  • Jump straight to the Help Center or Release Notes.
  • Manage your plan via Subscription.
  • Your display name is shown; your email appears only when we can confirm it’s your real address.

Try it now in the top‑right corner of chat. If you have feedback or spot anything odd, reply here—we’re listening.

Replies now look clean and organized (headings, bullets, code, tables) — while they stream

14 Oct 13:26


  • We upgraded how answers appear in chat. Replies now render with proper headings, bullet lists, numbered steps, code blocks, and tables—just like you’d expect in a document. Everything still streams in real time, but it’s much easier to read, skim, and copy.

What you’ll notice

  • Clear structure: Big titles, sub‑headings, real bullets and numbers.
  • Readable code: Fenced code blocks with monospaced font; inline code looks distinct.
  • Neat tables & quotes: Tables have borders; quotes are styled so they stand out.
  • No weird spacing: Words no longer “glue” together or stack one per line.

A tiny example

You ask:

  • Write a short doc with # Title, ## Section, a bullet list, and a numbered list.

You’ll now see (nicely formatted):

Trip Plan

Essentials

  • Passport
  • Charger
  • Snacks

  1. Book flights
  2. Reserve hotel
  3. Pack light

No setup required—this is the default for every reply.

Try it out

  • “Show a Python example in a fenced code block and an inline code snippet.”
  • “Give a 3‑row table comparing A/B/C and a short quote.”

Tech note

  • Behind the scenes we adjusted how messages stream from the server so each chunk arrives as a single event that preserves line breaks; the chat app then renders this Markdown directly.
  • We also fixed the parser to keep spaces and newlines exactly as written, which prevents words from running together.

File Uploads

14 Oct 23:09


Scope

  • Enable drag‑and‑drop file uploads in the Chainlit chat, stream those files to the Gateway, extract text (semantic‑only), and send that as lightweight context to the agent for Q&A, summaries, and insights. Files are session‑only (ephemeral) and cleaned up immediately after use. UI and normal chat streaming remain unchanged.

What’s new

Chat UI uploads.

  • Users can drop files on any message. The UI collects temp file paths and, when files are present, sends a multipart request to POST /api/chat/stream_files and streams the reply via SSE—exactly like normal chat. Temp files are deleted right after the turn.
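The multipart request the UI assembles can be sketched as follows. Field names (`payload`, `files`) are assumptions based on the description above; the tuple shape matches what requests/httpx expect for multipart uploads:

```python
import json
import mimetypes
from pathlib import Path


def build_multipart(message: str, thread_id: str, web_search: bool,
                    paths: list[str]):
    """Assemble the body for POST /api/chat/stream_files.

    Returns (data, files): a `payload` JSON field plus one `files`
    part per attachment. A sketch, not the shipped UI code.
    """
    data = {"payload": json.dumps({
        "message": message,
        "thread_id": thread_id,
        "web_search": web_search,
    })}
    files = []
    for p in paths:
        path = Path(p)
        mime = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
        files.append(("files", (path.name, path.read_bytes(), mime)))
    return data, files
```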

New Gateway endpoint.
POST /api/chat/stream_files accepts payload (JSON) + files[], extracts text per file, builds a compact “attachments context” system message, and streams tokens back via SSE.

Early‑reject guardrails.

  • Gateway enforces ≤5 files and ≤10 MB per file, blocks audio/video and executables, and validates extensions before touching disk. Oversized files return 413 early.
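The guardrail logic above amounts to a small pre-flight check. A minimal sketch (function name and signature are illustrative, not the Gateway's actual code):

```python
BLOCKED_EXTS = {".exe", ".dll", ".bin", ".dmg", ".iso", ".apk", ".msi", ".so"}
MAX_FILES = 5
MAX_BYTES = 10 * 1024 * 1024  # 10 MB per file


def early_reject(filenames: list[str], sizes: list[int],
                 mimes: list[str]):
    """Return (http_status, reason) on rejection, or None if the batch passes.

    Mirrors the guardrails described above: <=5 files, <=10 MB each,
    no executables, no audio/video. Runs before anything touches disk.
    """
    if len(filenames) > MAX_FILES:
        return 413, "Too many files"
    for name, size, mime in zip(filenames, sizes, mimes):
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in BLOCKED_EXTS or mime.split("/")[0] in ("audio", "video"):
            return 415, "blocked_type"
        if size > MAX_BYTES:
            return 413, "File too large"
    return None
```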

Scanned PDFs & images (optional OCR).

  • Default path: pure‑text extraction. If you set UPLOADS_OCR=tesseract, the Gateway will OCR textless PDFs (first N pages) and images, then include that text in the attachments context. OCR budgets are tunable via UPLOADS_OCR_MAX_PAGES, UPLOADS_OCR_DPI, UPLOADS_OCR_LANG. (OCR is off by default.)
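Reading those toggles could look like the sketch below. The variable names come from the release notes; the default values for pages, DPI, and language are assumptions for illustration:

```python
import os


def ocr_settings() -> dict:
    """Read the OCR toggles from the environment.

    Defaults keep OCR off; numeric defaults here are illustrative
    assumptions, not the Gateway's shipped values.
    """
    return {
        "engine": os.getenv("UPLOADS_OCR", "none"),  # "none" | "tesseract"
        "max_pages": int(os.getenv("UPLOADS_OCR_MAX_PAGES", "5")),
        "dpi": int(os.getenv("UPLOADS_OCR_DPI", "200")),
        "lang": os.getenv("UPLOADS_OCR_LANG", "eng"),
    }
```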

Session‑only ingestion.

  • Chainlit removes temp files after streaming; there is no automatic persistence of raw files. The response is streamed, and we also append the user/assistant messages to the per‑thread transcript for replay after reload.

Supported file types

  • Documents/Data: .pdf, .docx, .txt, .csv, .pptx, .xlsx, .json, .xml
  • Images: .png, .jpg, .jpeg, .gif
  • Code/Text: .py, .js, .html, .css, .yaml/.yml, .sql, .ipynb, .md
  • Explicitly blocked: executables/binaries (.exe, .dll, .bin, .dmg, .iso, .apk, .msi, .so) and anything with audio/* or video/* MIME types.

Limits

  • Per file: ≤ 10 MB
  • Per turn: ≤ 5 files
  • Mixed batch: allowed (e.g., PDF + XLSX + PNG)
  • Context size: we cap extracted text at ~12k chars per file and 24k chars total per request to keep latency/TTFT steady.
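The capping policy above can be sketched as a simple budget pass over the extracted files (illustrative only; the real caps are approximate "~12k"/"24k" figures):

```python
PER_FILE_CAP = 12_000   # ~12k chars per file
TOTAL_CAP = 24_000      # ~24k chars total per request


def cap_context(extracted: list) -> list:
    """Trim extracted text to the per-file and total budgets.

    Files are processed in order; once the total budget is spent,
    later files contribute nothing. A sketch of the policy, not the
    Gateway's actual code.
    """
    out, used = [], 0
    for name, text in extracted:
        budget = min(PER_FILE_CAP, TOTAL_CAP - used)
        clipped = text[:max(budget, 0)]
        out.append((name, clipped))
        used += len(clipped)
    return out
```

Keeping the context bounded like this is what keeps latency and time‑to‑first‑token steady regardless of how large the uploads are.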

How it works (request path)

  1. UI detects dropped files → if any, it posts multipart to /api/chat/stream_files with JSON payload (message, thread_id, web_search). Tokens stream back over SSE; the UI renders chunks as they arrive. Temp files are deleted in the finally block.
  2. Gateway validates JWT (Entra), applies early rejections, stream‑reads each file into memory with a hard byte cap, runs lightweight extractors (and optional OCR), and prepends one compact system message listing attachments + extracted text. Then it invokes the graph and streams the model’s reply over SSE.
  3. Transcript: user & assistant turns are appended to the thread transcript so the UI can replay history on refresh.
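The "compact system message listing attachments + extracted text" from step 2 might be shaped like this. The exact wording and layout are assumptions; the release notes only describe the message's role and contents:

```python
def attachments_context(extracted: list) -> dict:
    """Build the single system message that carries the attachments.

    One compact message lists each file name with its extracted text
    and tells the model to use it for semantic understanding only.
    A sketch of the shape, not the Gateway's actual prompt.
    """
    parts = [f"### {name}\n{text}" for name, text in extracted]
    content = (
        "Attachments (use for semantic understanding only; do not execute):\n\n"
        + "\n\n".join(parts)
    )
    return {"role": "system", "content": content}
```

Prepending one message, rather than one per file, keeps the prompt overhead constant regardless of batch size.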

UI configuration (already shipped)

  • Chainlit’s spontaneous upload feature is enabled; we still enforce types/size at the Gateway. (Session and user session timeouts are untouched.)

Streaming details (unchanged)

  • The UI parses SSE frames (event, data); the Gateway frames each chunk as data: lines terminated by a trailing blank line. Policy and error notices use custom event: types.
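On the client side, that parsing can be sketched as splitting the stream on blank lines and rejoining `data:` lines (illustrative only; the shipped UI parser may differ):

```python
def parse_sse(raw: str):
    """Parse a raw SSE stream into (event, data) pairs.

    Events are separated by a blank line; multiple `data:` lines
    within one event are rejoined with "\n". Per the SSE format,
    exactly one leading space after `data:` is stripped. Default
    event type is "message".
    """
    events = []
    for block in raw.split("\n\n"):
        if not block.strip():
            continue
        event, data_lines = "message", []
        for line in block.split("\n"):
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                d = line[len("data:"):]
                data_lines.append(d[1:] if d.startswith(" ") else d)
        events.append((event, "\n".join(data_lines)))
    return events
```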

Security & privacy

No code execution.

  • Files are treated as text only; the system prompt explicitly warns the model to “use for semantic understanding only.”

Moderation:

  • Input/output moderation still runs around the turn, unchanged.

Ephemeral handling:

  • UI cleans up temp files immediately after stream completion.

Known limitations / trade‑offs

Scanned PDFs/images:

  • Without OCR enabled, they may have “no extractable text.” Turn on UPLOADS_OCR=tesseract in the Gateway to auto‑OCR the first pages.

  • Excel extraction is lightweight (shared strings + first sheet text) by design—fast and semantic, not a full table parser.

Ops toggles (Gateway)

  • UPLOADS_OCR=none|tesseract, UPLOADS_OCR_MAX_PAGES, UPLOADS_OCR_DPI, UPLOADS_OCR_LANG (defaults keep OCR off).

Smoke tests

Happy path: drop a .pdf + .xlsx + .png, ask “Summarize the PDF; list 3 takeaways (ignore images).” Expect streamed text and transcript replay after refresh.

Rejections: upload 6 files → expect 413 “Too many files.” Upload a 20 MB file → expect 413 “File too large.” Upload an .exe → expect 415 “blocked_type.”

User Announcement (non‑technical)

New: Add files to your chat.

  • You can now attach documents, images, and data files directly in the conversation. Drop up to 5 files (each up to 10 MB)—mix and match (PDF + Excel + PNG is fine). The assistant will read the text and answer questions, summarize, or compare across files.

What you can attach

  • PDFs, Word, PowerPoint, text/markdown
  • Excel/CSV, JSON, XML
  • Images (PNG/JPG/GIF)
  • Code/text snippets (py, js, html, css, sql, ipynb)

What we don’t accept

  • Audio/video
  • Programs or installers (e.g., .exe)

Examples you can try

  • “Summarize this PDF and list 5 risks.”
  • “Compare my resume (docx) to this job description (pdf) and highlight gaps.”
  • “Pull key stats from this Excel file and give me a short brief.”
  • “Read this README.md and explain how to run the project in 5 steps.”

Notes

  • We don’t run code from your files; we only read their text.
  • Scanned PDFs or photos may need clearer images. If text can’t be read, we’ll tell you.

Source of truth (for reviewers)

  • Chainlit UI: multipart upload + SSE streaming; temp file cleanup.
  • Gateway: mounts /api/chat/stream_files, early rejections, extraction, OCR (opt‑in), SSE.
  • Config: uploads enabled at the UI; Gateway enforces limits & types.
  • Transcript replay: per‑thread messages API + UI replay on load.