qapyq

(pronounced "CapPic")
AI-assisted media curator for large image/video datasets. Streamlined captioning, cropping, masking for LoRA/diffusion training workflows.




[Screenshot: qapyq with its 5 windows open.]

Further screenshots show: editing captions with drag-and-drop support and one-of-many selection; sorting and filtering rules; quick cropping; image comparison; drawing masks manually or with automatic detection and segmentation; transforming tags with conditional rules; Multi-Edit and Focus Mode.

Features

  • Media Viewer: Display and navigate images and videos

    • Quick-starting desktop application built with Qt
    • Runs smoothly with a million files
    • Modular interface that lets you place windows on different monitors
    • Open multiple tabs
    • Zoom/pan and fullscreen mode
    • Gallery with thumbnails and captions
    • Semantic image sorting with text prompts
    • Compare two images
    • Measure size, area and pixel distances
    • Slideshow
  • Image/Mask Editor: Prepare media for training

    • Crop and save parts of images
    • Scale images, optionally using AI upscale models
    • Crop and scale videos, trimmed to exact frame count
    • Dynamic save paths with template variables
    • Manually edit masks with multiple layers
    • Generate masks with AI models
    • Record masking operations into macros
    • VAE-encode images and check their latent representation
  • Captioning: Describe media with text

    • Edit captions manually with drag-and-drop support
    • Save multiple captions in per-media JSON files
    • Multi-Edit Mode: Edit captions across multiple files simultaneously
    • Focus Mode: Add the same tags to many files quickly
    • Tag grouping, merging, sorting, filtering and replacement rules
    • Colored text highlighting
    • Autocomplete with tags from your groups and CSV files
    • CLIP Token Counter
    • Automated captioning with support for grounding
    • Dynamic prompts with templates and text transformations
    • Multi-turn conversations with VLMs
    • Further refinement with LLMs
  • Stats/Filters: Summarize your data and get an overview

    • List all tags, media resolutions, masked regions, or size of concept folders
    • Filter media and create subsets
    • Combine and chain filters
    • Export the summaries as CSV
  • Batch Processing: Process whole folders at once

    • Flexible batch captioning, tagging and transformation
    • Batch scaling of images
    • Batch masking with user-defined macros
    • Batch cropping of images using your macros
    • Copy, move and rename files, create symlinks, ZIP captions for backups
  • AI Assistance:

    • Support for state-of-the-art captioning and masking models
    • Model and sampling settings, GPU acceleration with CPU offload support
    • On-the-fly NF4 and INT8 quantization
    • Run inference locally and/or on multiple remote machines over SSH
    • Separate inference subprocess isolates potential crashes and allows complete VRAM cleanup

Supported Models

The supported model architectures are listed in the repository, together with links to the original models.
More specialized finetuned models can be found on huggingface.co.

Setup

Requires Python 3.10 or later.
For video processing, ffmpeg must be available on your PATH.
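A quick way to verify both prerequisites before running the setup. This check is illustrative and not part of qapyq:

```python
# Preflight check: verify the Python version and that ffmpeg is on PATH.
import shutil
import sys

python_ok = sys.version_info >= (3, 10)
ffmpeg_path = shutil.which("ffmpeg")

print(f"Python {sys.version.split()[0]}:", "OK" if python_ok else "too old, need 3.10+")
print("ffmpeg:", ffmpeg_path if ffmpeg_path else "not found on PATH (video processing unavailable)")
```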

By default, prebuilt packages for CUDA 12.8 are installed. If you need a different CUDA version, change the URLs in requirements-pytorch.txt and requirements-flashattn.txt before running the setup script.
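As an illustration of what such a change might look like (the actual contents of requirements-pytorch.txt may differ), PyTorch's prebuilt wheels are selected by the CUDA tag in the index URL, e.g. cu128, cu126 or cu121:

```
# requirements-pytorch.txt (illustrative excerpt)
# Change the CUDA tag to match your installed CUDA version:
--index-url https://download.pytorch.org/whl/cu128
torch
torchvision
```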

  1. Git clone or download this repository.
  2. Run setup.sh on Linux, setup.bat on Windows.
    • Packages are installed into a virtual environment.
    • The setup script will ask you which components to install.
      • FlashAttention is optional for most models but recommended for speed.
      • You can choose to install only the GUI and media processing packages without AI assistance.
      • When installing on a headless server for remote inference, you can choose to install only the backend.

If the setup scripts didn't work for you but you got it running manually, please raise an issue and share your solution.

Startup

  • Linux: run.sh
  • Windows: run.bat or run-console.bat

You can open files or folders directly in qapyq by associating the file types with the respective run script in your OS. For shortcuts, icons are available in the qapyq/res folder.
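On Linux, the file association can be set up with a desktop entry. The example below is hypothetical: the Exec and Icon paths, the icon file name, and the MIME type list are placeholders to adapt to your install, and it assumes run.sh forwards file arguments to qapyq:

```
# Hypothetical ~/.local/share/applications/qapyq.desktop
[Desktop Entry]
Type=Application
Name=qapyq
Comment=AI-assisted media curator
Exec=/path/to/qapyq/run.sh %F
Icon=/path/to/qapyq/res/qapyq.png
MimeType=image/png;image/jpeg;image/webp;video/mp4;
Categories=Graphics;
Terminal=false
```

After saving the file, running `update-desktop-database ~/.local/share/applications` (where available) refreshes the association cache.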

Update

If you cloned the repository with git, update with git pull.
If you downloaded it as a zip archive, download it again and replace the installed files.

To update the installed packages in the virtual environment, run the setup script again.

New dependencies may be added. If the program fails to start or crashes, run the setup script to install the missing packages.

User Guide

More information is available in the Wiki.
Use the page index on the right side to find topics and navigate the Wiki.

How-to guides for common tasks are also available in the Wiki.

If you have questions, please ask in the Discussions.
