Releases: ack00gar/FunGen-AI-Powered-Funscript-Generator
FunGen v1.0.2
Patch release on top of v1.0.1.
Fixed
- Detection model rolled back to YOLO12 (FunGen-12s-pov-1.1.0). The YOLO26 nano model bundled in v1.0.0 / v1.0.1 turned out to be significantly worse on the FunGen tracker pipeline. A head-to-head benchmark on a VR clip showed confidence dropping ~50% on the penis, glans, hand, and foot classes -- exactly the classes the tracker uses to drive funscript output. Reported by vm; thanks for the screenshots. The speed gain (3x) was not worth the quality cliff. Investigation into what regressed during YOLO26 fine-tuning is in progress -- a corrected model will ship as a separate release once it benches as well as or better than YOLO12.
- Mac ARM ModuleNotFoundError on first run (#122). First-run setup was downloading the .pt and trying to convert it to .mlpackage via `YOLO.export(format='coreml')`, which silently fails when `coremltools` isn't installed. Now we download the prebuilt `.mlpackage.zip` directly from the release and unzip it -- no coremltools dependency, and a smaller download (16 MB vs 19 MB). Reported by pallazola.
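The new first-run path amounts to a plain download-and-unzip. A minimal sketch, assuming a hypothetical asset URL and file names (the real ones live in the app's first-run setup code):

```python
import urllib.request
import zipfile
from pathlib import Path

# Placeholder asset URL -- not the exact URL the app uses.
ASSET_URL = ("https://github.com/ack00gar/FunGen-AI-Powered-Funscript-Generator"
             "/releases/download/models-v1.1.0/FunGen-12s-pov-1.1.0.mlpackage.zip")


def unzip_model(zip_path: Path, dest_dir: Path) -> None:
    """Unpack the prebuilt .mlpackage -- no coremltools conversion step."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)


def fetch_coreml_model(models_dir: Path = Path("models")) -> Path:
    """Download and unpack the CoreML model once; no-op on later runs."""
    models_dir.mkdir(exist_ok=True)
    target = models_dir / "FunGen-12s-pov-1.1.0.mlpackage"
    if not target.exists():                       # idempotent on re-runs
        zip_path = models_dir / "model.mlpackage.zip"
        urllib.request.urlretrieve(ASSET_URL, zip_path)
        unzip_model(zip_path, models_dir)
        zip_path.unlink()                         # keep only the unpacked model
    return target
```

Because the archive is unpacked as-is, the whole flow needs only the standard library.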
Update path
- Auto-installer / launcher: `git pull` (or in-app updater) and relaunch.
- Existing v1.0.0 / v1.0.1 users will keep their YOLO26 file in `models/` (unused now); the app picks YOLO12 automatically once they pull.
Known still-open
- mpv A/V desync warning on M1 during arrow-key seeking (#121). Logs warning, playback continues. Not a hotfix candidate -- needs deeper investigation.
- d3d12va seek-after-settings-reapply behavior (Discord report). Needs reporter clarification on what UI element is broken.
Full v1.0.0 highlights: https://github.com/ack00gar/FunGen-AI-Powered-Funscript-Generator/releases/tag/v1.0.0
FunGen v1.0.1 (hotfix)
Day-1 hotfix on top of v1.0.0.
Fixed
- CUDA / NVDEC playback broken on Windows + Linux NVIDIA users. Pinning the ffmpeg hwaccel output format to cuda forced the decoder to keep frames in GPU memory, which the analysis filter chain (crop, v360, scale) rejects with "Function not implemented" / "Error reinitializing filters". ffmpeg exited before the first frame and the GUI saw no playback. Decode still runs on the GPU; ffmpeg now auto-downloads to CPU memory for the filter chain. Reported by Skrynews, rvsharedwife; root cause analysis by Krazy.
Known same-bug-different-codec
QSV (Intel) and VAAPI (Linux AMD/Intel) have the same architectural mismatch (GPU output format pin + CPU filter chain) and likely the same crash. No reports yet. If you hit it, the fix is identical: drop the -hwaccel_output_format pin in video/_vp_ffmpeg_builders.py.
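The shape of the fix can be sketched as follows -- the function and parameter names here are illustrative, not the actual code in `video/_vp_ffmpeg_builders.py`:

```python
from typing import List, Optional


def build_decode_args(hwaccel: Optional[str]) -> List[str]:
    """Build ffmpeg hardware-decode args for the analysis pipeline.

    We pass -hwaccel so decode runs on the GPU, but deliberately omit
    -hwaccel_output_format: without the pin, ffmpeg downloads decoded
    frames to system memory, which the CPU filter chain (crop, v360,
    scale) can consume. Pinning the output format to cuda/qsv/vaapi
    keeps frames in GPU memory and the filters fail with
    "Function not implemented".
    """
    args: List[str] = []
    if hwaccel:  # e.g. "cuda", "qsv", "vaapi"
        args += ["-hwaccel", hwaccel]
        # NOT: args += ["-hwaccel_output_format", hwaccel]
    return args
```

The same builder then covers CUDA, QSV, and VAAPI uniformly, since the bug was the pin itself, not any one codec path.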
Update path
- Auto-installer / launcher: `git pull` (or in-app updater) and relaunch.
- Manual: pull main; no dependencies changed.
Full v1.0.0 highlights: https://github.com/ack00gar/FunGen-AI-Powered-Funscript-Generator/releases/tag/v1.0.0
FunGen v1.0.0
First 1.0 release. The big themes: a single-shim installer that replaces miniconda, a video pipeline rewrite that ditched PyAV for a dedicated FFmpeg subprocess + libmpv render, a runtime GLSL VR dewarp shader, a wide GUI/perf sweep, and a fresh YOLO26 model line.
If you're upgrading from v0.9.x you can git pull and click the launcher -- the launcher self-heals: it sees there's no .venv, runs install.py once (~2 min), then starts the app. Your old ~/miniconda3/envs/FunGen is left in place; the installer asks once at the end whether to clean it up.
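The self-heal logic boils down to "no venv interpreter yet → run the installer first". A minimal sketch under stated assumptions (`install.py` and `main.py` are named in this release; the helper names here are hypothetical):

```python
import subprocess
import sys
from pathlib import Path


def venv_python(root: Path) -> Path:
    """Path of the interpreter inside the project .venv (platform-dependent)."""
    sub = "Scripts/python.exe" if sys.platform == "win32" else "bin/python"
    return root / ".venv" / sub


def launch(root: Path) -> None:
    """Self-healing launch: bootstrap the venv once, then start the app."""
    py = venv_python(root)
    if not py.exists():  # fresh clone or wiped environment
        subprocess.run([sys.executable, str(root / "install.py")], check=True)
    subprocess.run([str(py), str(root / "main.py")], check=True)
```

The launcher scripts the installer writes wrap essentially this check, which is why pulling over an old v0.9.x checkout "just works".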
Highlights
- One-shim installer (uv + venv replaces miniconda). Download a single `install.bat` / `install.sh`, double-click, done. The shim bootstraps `uv`, builds a self-contained `.venv`, auto-detects your GPU, and writes launcher scripts. ~500 MB on disk, no admin rights, no PATH surgery. `ffmpeg` and `mpv` are auto-installed via the OS package manager (winget, brew, apt/dnf/pacman) when missing.
- Six PyTorch channels, auto-selected: `cuda_blackwell` (RTX 50-series, cu129), `cuda_stable` (RTX 20/30/40, cu128), `cuda_legacy` (driver 525-559, cu124), `cpu`, `mps` (Apple Silicon), `rocm` (AMD on Linux).
- YOLO26 nano fine-tuned model. Smaller (5.4 MB vs 18.9 MB), faster, fine-tuned on an updated VR POV dataset. Auto-downloaded on first run. ONNX export added for CPU users. See models-v2.0.0.
- VR dewarp shader with adaptive supersample. Runtime-compiled GLSL replaces the CPU `v360` filter for in-GUI playback. Adaptive resolution, anisotropic filter cap, free IGN dither. Embedded fullscreen keeps the shader and adaptive quality active.
- GUI perf sweep. Timeline draws via `rect_filled` instead of `circle_filled` (~2x cheaper); oscillation grid activation vectorized (1.4x); plugin runtimes fixed at the algorithm level (Resample 8.4x, Keyframes 5.2x, Dynamic Amplify 3.1x); cached u32 colors / chapter text widths / spline math throughout the draw loop.
- Async tracker lifecycle. YOLO model preloads off the UI thread; `stop_tracking` tears down asynchronously; post-session funscript save + autotune is async; mpv pause/resume is balanced; mpv `hwdec` defaults to `auto-safe`.
- Animated splash with 17 themes. Random per launch (or pin one with `FUNGEN_SPLASH_THEME=<name>`): matrix, terminator, tron, starwars, breaking, invaders, mars, clippy, tetris, pacman, blade, bsod, sonic, xfiles, tmnt, et, mario.
- Cock Hero Beat Tracker (offline). Audio-beat-driven funscript generator. Picks beats from the audio track and emits alternating peak/valley keyframes -- works well for music-video edits where visual flow alone is unreliable.
- `--watch` actually processes videos. The watch-folder CLI now spawns `main.py` workers per queued item, up to `--max-parallel N` (default 1), reaps them on exit, and terminates in-flight workers on Ctrl-C. Previously the queue filled forever with nothing draining it.
- Async navigation. Arrow-key seeks fetch via a dedicated worker; tooltip dict refs are captured before async hover-cancel; the scrub cache is keyed by requested frame index to avoid respawning the FFmpeg source on hover-seek.
- Internal restructure (no behavior change). `app_logic` split into 8 lifecycle modules; video display split into `_core` / `controls` / `display_route` / `overlays`; GUI components reorganized.
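The `--watch` worker pool described above can be sketched roughly like this -- the real CLI's internals differ, and `drain_queue` plus its parameters are illustrative:

```python
import subprocess
import sys
import time
from typing import List, Optional


def drain_queue(queue: List[str], max_parallel: int = 1,
                worker_cmd: Optional[List[str]] = None) -> int:
    """Spawn one worker per queued video, at most max_parallel alive at
    once; reap finished workers; terminate in-flight ones on Ctrl-C.
    Returns the number of workers spawned."""
    worker_cmd = worker_cmd or [sys.executable, "main.py"]
    active: List[subprocess.Popen] = []
    spawned = 0
    try:
        while queue or active:
            # reap: drop workers whose process has exited
            active = [p for p in active if p.poll() is None]
            while queue and len(active) < max_parallel:
                active.append(subprocess.Popen(worker_cmd + [queue.pop(0)]))
                spawned += 1
            time.sleep(0.1)
    except KeyboardInterrupt:
        for p in active:        # terminate in-flight workers on Ctrl-C
            p.terminate()
        for p in active:
            p.wait()
    return spawned
```

The key fix relative to the old behavior is the reap-then-spawn loop: the queue actually drains instead of filling forever.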
Breaking changes
- The model URL pointer in `config/constants.py` moved from models-v1.1.0 (FunGen-12s-pov-1.1.0) to models-v2.0.0 (FunGen_YOLO26_VR_POV_n_finetuned). Auto-download will fetch the new model on first run; if you have a manual install, drop the YOLO26 file into `FunGen/models/`.
- Requirements files renamed: `requirements/core.requirements.txt` -> `requirements/base.txt`; `requirements/cuda.requirements.txt` -> `requirements/cuda_stable.txt`; `requirements/cpu.requirements.txt` -> `requirements/cpu.txt`. The launcher handles this automatically; manual installers should pull the new filenames.
Known issues
- Shader dewarp on 8K VR can be choppy in some configs (tracked as an open issue).
- `tensorrt` install is best-effort. If the cu129 nightly wheel fails, the app falls back to standard PyTorch and continues.
Install
| Platform | One-liner |
|---|---|
| Windows | Download install.bat, double-click |
| macOS / Linux | curl -fsSL https://raw.githubusercontent.com/ack00gar/FunGen-AI-Powered-Funscript-Generator/main/install.sh | bash |
Full README: https://github.com/ack00gar/FunGen-AI-Powered-Funscript-Generator/blob/main/README.md
VR renamer tool
VR Video Format Detector & Renamer
Automatic VR video format detection and renaming tool using simple machine learning.
Feel like saying thank you? Support me here: https://ko-fi.com/k00gar
Features
- ML-based detection: Uses a Random Forest classifier trained on 883 manually validated videos
- Detects: Projection type (fisheye, 180°), layout (SBS/TB), alpha channel (including alpha-packed)
- Fast: ~5 seconds per video (optimized FFmpeg extraction)
- Batch processing: Process single files or entire directories recursively
- Safe operation: Dry-run mode by default - preview changes before applying
- Cross-platform: Works on Windows, macOS, Linux
Requirements
- Python 3.7+
- ffmpeg/ffprobe (must be in PATH)
- Dependencies:
pip install scikit-learn numpy opencv-python
Quick Start
Analyze a single video (dry-run):
python3 vr_rename.py video.mp4

Output example:
============================================================
File: video.mp4
Resolution: 5760x2880
Video Type: VR
Projection: fisheye
Layout: sbs
FOV: 190°
Alpha: Yes
Suggested name: video_FISHEYE_SBS_ALPHA.mp4
Rename a single video:
python3 vr_rename.py video.mp4 --rename

Process an entire directory:
python3 vr_rename.py /path/to/videos/

Recursively process all subdirectories:
python3 vr_rename.py /path/to/videos/ --recursive --rename

Usage
usage: vr_rename.py [-h] [-r] [--rename] [-v] [-m MODEL] path
positional arguments:
path Video file or directory path
optional arguments:
-h, --help show this help message and exit
-r, --recursive Process directories recursively
--rename Actually rename files (default is dry-run)
-v, --verbose Verbose output
-m MODEL, --model MODEL
Path to model file (default: vr_detector_model_rf.pkl)
Supported Formats
- Projections: fisheye, hemisphere (180°)
- Layouts: SBS (side-by-side), TB (top-bottom)
- Alpha channel: Detects both native alpha and alpha-packed (fisheye only)
- Video formats: MP4, MKV, AVI, MOV, WebM, M4V
Naming Convention
Generated filenames follow the pattern:
{original_name}_{PROJECTION}_{LAYOUT}_{ALPHA}.{ext}
Examples:
- video_FISHEYE_SBS.mp4
- scene_180_TB.mkv
- movie_FISHEYE_SBS_ALPHA.mp4
Existing VR format tags in filenames are automatically cleaned up before adding new tags.
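A minimal sketch of the naming convention, including the tag cleanup -- the tag list and the `suggest_name` helper are assumptions reconstructed from the examples above, not the tool's actual code:

```python
import re
from pathlib import Path

# Tag tokens inferred from the documented examples; the real tool's
# recognized tag set may be larger.
TAG_RE = re.compile(r"_(FISHEYE|180|SBS|TB|ALPHA)(?=_|$)", re.IGNORECASE)


def suggest_name(path: str, projection: str, layout: str, alpha: bool) -> str:
    """Build {original_name}_{PROJECTION}_{LAYOUT}_{ALPHA}.{ext},
    stripping any VR tags already present in the filename."""
    p = Path(path)
    stem = TAG_RE.sub("", p.stem)            # clean up existing tags first
    parts = [stem, projection.upper(), layout.upper()]
    if alpha:
        parts.append("ALPHA")
    return "_".join(parts) + p.suffix
```

The lookahead in the regex keeps ordinary substrings like `_1080p` or `_TBS` intact; only whole tag tokens are removed.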
Model Information
- Type: Random Forest Classifier
- Training samples: 883 videos with manual validation
- Features: Histogram correlation, edge detection, brightness analysis (20 features)
- Performance: ~5 seconds per video (4K-8K)
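As an illustration of the histogram-correlation feature named above (the tool's actual feature extraction is internal; this NumPy-only sketch just shows the idea behind SBS detection):

```python
import numpy as np


def sbs_similarity(frame: np.ndarray) -> float:
    """Histogram correlation between the left and right halves of a frame.

    SBS content repeats the scene in both halves, so the correlation is
    close to 1.0; non-SBS content usually scores much lower. This is one
    plausible feature of the 20 the classifier uses.
    """
    h, w = frame.shape[:2]
    left, right = frame[:, : w // 2], frame[:, w // 2:]
    hl, _ = np.histogram(left, bins=64, range=(0, 255), density=True)
    hr, _ = np.histogram(right, bins=64, range=(0, 255), density=True)
    return float(np.corrcoef(hl, hr)[0, 1])
```

The same half-vs-half comparison along the vertical axis would cover TB layout detection.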
Troubleshooting
"Model file not found"
- Ensure `vr_detector_model_rf.pkl` is in the same directory as the script
"ffprobe: command not found"
- Install ffmpeg: https://ffmpeg.org/download.html
- Ensure ffmpeg/ffprobe are in your system PATH
"Could not analyze video"
- Video file may be corrupted or unsupported format
- Check file permissions
Safety Features
- Dry-run by default: Preview changes before applying
- Error handling: Continues processing other files if one fails
- Verbose logging: Track what's happening with the `-v` flag
License
Free to use and modify.
models-v1.1.0