feat: PromptSearchList batch workflow support & image linking v3.2.2#138
Detection, metadata reading, trigger word cache, and prompt injection for ComfyUI-Lora-Manager. All functions return empty results when LoraManager is not installed.
Detection, enable/disable, scan/import, trigger word lookup, and cache refresh endpoints under /prompt_manager/lora/*.
When enabled, scans for <lora:NAME:WEIGHT> tags in prompt text and appends trigger words from LoraManager metadata. Behind config toggle.
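The tag scan described above can be sketched as follows. This is a hypothetical minimal version, assuming a simple dict-like trigger word cache; the real helper names and cache structure in the project may differ.

```python
import re

# Matches <lora:NAME:WEIGHT> tags, e.g. <lora:fluffy:0.8>
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def append_trigger_words(prompt_text, trigger_cache):
    """Append cached trigger words for every LoRA tag found in the prompt."""
    words = []
    for name, _weight in LORA_TAG.findall(prompt_text):
        words.extend(trigger_cache.get(name, []))
    if not words:
        return prompt_text
    return prompt_text + ", " + ", ".join(words)
```

A prompt like `a cat <lora:fluffy:0.8>` with a cached trigger word `fluffy fur` would become `a cat <lora:fluffy:0.8>, fluffy fur`.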
Integrations section in settings with auto-detection badge, enable toggle, trigger word toggle, and Import LoRA Data button.
The styled div was intercepting clicks meant for the sr-only checkbox input, preventing toggle switches from being clickable.
…#52) Previously, only models/loras under the ComfyUI root was checked. Now the integration also parses extra_model_paths.yaml and uses folder_paths.get_folder_paths('loras') at runtime to find all configured LoRA directories.
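The runtime lookup can be sketched like this. `folder_paths.get_folder_paths("loras")` is ComfyUI's real API for resolving all configured model directories (including those from extra_model_paths.yaml); the graceful fallback for running outside ComfyUI is an assumption about how the integration degrades.

```python
def get_lora_dirs():
    """Return every configured LoRA directory, or [] outside ComfyUI."""
    try:
        import folder_paths  # only available inside a ComfyUI process
        return list(folder_paths.get_folder_paths("loras"))
    except ImportError:
        return []
```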
… images (#52)
- Prompt text now uses civitai example prompts (images[].meta.prompt) when available, falling back to model name instead of trigger words
- Image serving now allows paths within LoRA directories when the integration is enabled, fixing 403 errors on preview images
- Guard against civitai: null in metadata (was crashing the scan)
- Link all local preview images per LoRA, not just the first
- Add get_civitai_image_urls() for future remote image support
Clicking Import LoRA Data now deletes all existing lora-manager category prompts before scanning, ensuring a clean reimport.
Images from civitai.images[] are downloaded to data/lora_images/ cache and linked to prompts alongside local preview files. Cached files are reused on reimport. Image serving allows the cache directory.
Closes settings modal and shows a dedicated progress modal with progress bar, status text, and processed/imported counts during LoRA import. Auto-closes after completion.
Downloading 5K images with a 15s timeout was painfully slow. Reduced the timeout to 5s to fail fast. Progress now updates for every LoRA with an image count, not every 5th.
Full-size civitai images averaged 5.6MB each (27GB total for 5K images). Now requests /width=512/ thumbnails (~50-100KB) and downloads 8 in parallel per LoRA. Expected speedup: ~100x smaller downloads plus 8x parallelism.
Civitai CDN returns 401 for /width=N/ thumbnail URLs when using API key auth. Instead, download the original and resize it to 512px via PIL before saving. Reduces disk usage from ~5MB to ~30-50KB per image.
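The resize-before-save step could look like this minimal sketch. The function name, JPEG quality, and 512px cap are assumptions; only the PIL calls (`Image.open`, `thumbnail`, `save`) are standard Pillow API.

```python
from io import BytesIO

from PIL import Image

def save_thumbnail(image_bytes, dest_path, max_side=512):
    """Resize a downloaded original so its longest side is max_side, then save."""
    img = Image.open(BytesIO(image_bytes))
    img.thumbnail((max_side, max_side))  # in-place, preserves aspect ratio
    img.convert("RGB").save(dest_path, "JPEG", quality=85)
```

`Image.thumbnail` never upscales, so images already under 512px are stored as-is.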
- Add LoRA Manager integration section with setup guide and CivitAI key docs
- Add folder filter section with rescan note for existing libraries
- Add v3.2.1 changelog entry, split from v3.2.0
- Add WIP notice for LoRA Manager feature
- Update AutoTag section to include WD14 models
- Fix stale references (KikoTextEncode, outdated file structure)
- Remove dated v2 development note
- Add screenshots for settings, integration, and filtered results
- Fix code review items: remove unused constant, add comments to empty excepts
- Bump version to 3.2.1
- test_lora_utils.py: 32 tests covering trigger word extraction, example prompts, image URLs, metadata parsing, cache dir, TriggerWordCache
- test_lora_database.py: 17 tests covering delete_prompts_by_category, folder filter search, get_prompt_subfolders, LoRA import workflow
- test_config.py: 7 new IntegrationConfig tests for structure, enable, partial update, reset, and roundtrip
- Fix empty list crash (OUTPUT_IS_LIST requires at least one element)
- Add partial tag matching (LIKE instead of exact)
- Add preview and count outputs for batch visibility
- Add skip_multipart filter for Clip_N video prompts
- Add LoRA-only prompt filter
- Collapse newlines for StringOutputList compatibility
Allows search_prompts to use LIKE matching for tags instead of exact. Used by PromptSearchList for discovery-style search. Existing callers default to exact matching (unchanged behavior).
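The exact-vs-partial toggle can be sketched against a minimal schema. Both the table layout and the `partial_tag_match` parameter name here are assumptions for illustration; the project's real `search_prompts` lives in its database module and may differ.

```python
import sqlite3

def search_prompts(conn, tag, partial_tag_match=False):
    """Return prompt texts whose tags match exactly, or via LIKE when partial."""
    op, val = ("LIKE", f"%{tag}%") if partial_tag_match else ("=", tag)
    sql = (
        "SELECT p.text FROM prompts p "
        "JOIN tags t ON t.prompt_id = p.id "
        f"WHERE t.name {op} ?"
    )
    return [row[0] for row in conn.execute(sql, (val,))]
```

With tags `landscape` and `landscape_night` in the database, an exact search for `landscape` returns one prompt while a partial search returns both, which is the discovery-style behavior PromptSearchList relies on.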
Batch workflows encode all prompts first, then save all images later. The queue preserves encode order so each image links to the correct prompt by position (pop on save matches push on encode).
Previous approach relied on live prompt tracker state, which was always stale for batch workflows (all images linked to the last prompt). New 5-strategy priority chain:
1. FIFO queue pop (batch-correct by position)
2. Metadata-based DB lookup (extracts prompt from PNG)
3. Prompt snapshot at file-creation time
4. Live prompt tracker
5. Most recent DB prompt fallback
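Strategy 1 can be sketched as a thread-safe FIFO. This is a minimal illustration of the queue behavior only; the real tracker class also carries the metadata, snapshot, and fallback strategies, and its method names may differ.

```python
import threading
from collections import deque

class PromptQueue:
    """FIFO of prompt contexts: push at CLIP encode, pop at image save."""

    def __init__(self):
        self._lock = threading.Lock()
        self._queue = deque()

    def push(self, prompt_ctx):
        with self._lock:
            self._queue.append(prompt_ctx)

    def pop_next_prompt(self):
        with self._lock:
            return self._queue.popleft() if self._queue else None
```

Because images are saved in the same order their prompts were encoded, pop order matches push order and each image links to the right prompt by position.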
- 6 queue tests: FIFO ordering, batch simulation, exhaustion
- 15 filter tests: newline collapse, Clip_ filter, LoRA-only filter
- Fix _make_tracker helper to include queue fields
```python
# Filter out prompts that are only LoRA tags with no actual content
LORA_ONLY = re.compile(r"^(\s*<lora:[^>]+>\s*)+$")
```

```python
# --- Helpers: replicate the filter logic from prompt_search_list.py ---
LORA_ONLY = re.compile(r"^(\s*<lora:[^>]+>\s*)+$")
```
Superseded by a clean PR from main — the old branch had LoRA commits already merged via #136, causing conflicts.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4a17360c09
```python
# Strategy 1: Pop from batch queue (most reliable for batch workflows).
# Prompts are queued during CLIP encoding in order; images save in
# the same order, so FIFO pop gives the correct prompt per image.
current_prompt = self.prompt_tracker.pop_next_prompt()
```
Match queued prompts to generation count before popping
process_new_image consumes one queue entry for every saved file, but prompts are queued once per prompt execution, not once per output image. In workflows that produce multiple images from a single prompt (for example, batch_size > 1 or multiple save outputs), the second image from prompt A will pop prompt B and be linked to the wrong record. This causes deterministic cross-linking errors whenever images-per-prompt is not exactly 1.
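One hedged way to address this suggestion: enqueue the prompt context once per expected output image at encode time, so FIFO pops stay aligned when `batch_size > 1`. The function name and signature here are illustrative, not the project's actual API.

```python
def queue_for_batch(queue, prompt_ctx, batch_size):
    """Push one queue entry per expected output image of this prompt."""
    for _ in range(batch_size):
        queue.append(prompt_ctx)
```

With prompt A at batch size 2 and prompt B at batch size 1, the three saved images would pop A, A, B in order.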
```python
    """
    with self._queue_lock:
        if self._prompt_queue:
            ctx = self._prompt_queue.pop(0)
```
Validate queued prompt freshness before returning it
pop_next_prompt returns the oldest queued context without checking whether it is still active or within timeout. If a run queues a prompt but no image is saved (cancel/error), that stale entry remains and will later be used to link an unrelated image. Because queue entries are not filtered against active_prompts/age here, delayed or failed runs can silently corrupt prompt-image associations.
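A hedged sketch of the freshness check the reviewer asks for: discard entries older than a timeout before returning one. The 300s value and the `queued_at` field name are assumptions for illustration.

```python
import time

QUEUE_MAX_AGE_SECONDS = 300  # hypothetical timeout; tune to real run lengths

def pop_next_fresh(queue, now=None):
    """Skip stale queue entries (cancelled/failed runs) and return a fresh one."""
    now = time.time() if now is None else now
    while queue:
        ctx = queue.pop(0)
        if now - ctx["queued_at"] <= QUEUE_MAX_AGE_SECONDS:
            return ctx
    return None
```

A cancelled run's leftover entry then ages out instead of being linked to the next unrelated image.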
```python
existing = await self._run_in_executor(
    self.db.get_prompt_by_hash, prompt_hash
)
```
Keep LoRA imports distinct instead of deduping by hash
The scan path deduplicates by raw prompt hash and reuses any existing prompt row. Different LoRAs can share the same example prompt text (or collide with preexisting non-LoRA prompts), so this branch skips creating a LoRA entry and links images to an unrelated prompt ID, dropping LoRA-specific tags/model separation. Deduplication should be scoped to LoRA identity (e.g., model/file + category), not only prompt text hash.