thibaultre/sam-body4d

🏂 SAM-Body4D

Mingqi Gao, Yunqi Miao, Jungong Han

SAM-Body4D is a training-free method for temporally consistent and robust 4D human mesh recovery from videos. By leveraging pixel-level human continuity from promptable video segmentation together with occlusion recovery, it reliably preserves identity and full-body geometry in challenging in-the-wild scenes.

[ 📄 Paper] [ 🌐 Project Page] [ 📝 BibTeX]

✨ Key Features

  • Temporally consistent human meshes across the entire video
  • Robust multi-human recovery under heavy occlusions
  • Robust 4D reconstruction under camera motion

🕹️ Gradio Demo

gradio_demo.mp4

📊 Resource & Profiling Summary

For detailed GPU/CPU resource usage, peak memory statistics, and runtime profiling, please refer to:

👉 resources.md

🖥️ Installation

1. Create and Activate Environment

conda create -n body4d python=3.12 -y
conda activate body4d

2. Install PyTorch (choose the build that matches your CUDA version), Detectron2, and SAM 3

pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu118
pip install 'git+https://github.com/facebookresearch/detectron2.git@a1ce2f9' --no-build-isolation --no-deps
pip install -e models/sam3

If you are using a different CUDA version, please select the matching PyTorch build from the official download page: https://pytorch.org/get-started/previous-versions/
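For reference, the `cuXYZ` suffix in the index URL is simply the CUDA version with the dot dropped. A minimal sketch of that mapping (the `torch_index_url` helper is hypothetical, not part of this repo; always confirm the exact build on the previous-versions page):

```python
def torch_index_url(cuda_version: str) -> str:
    """Map a CUDA version string to the matching PyTorch wheel index URL,
    e.g. '11.8' -> 'https://download.pytorch.org/whl/cu118'.
    Illustrative helper only; verify the build exists on the official page."""
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"

print(torch_index_url("11.8"))  # → https://download.pytorch.org/whl/cu118
```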

3. Install Dependencies

pip install -e .

🚀 Run the Demo

1. Setup checkpoints & config (recommended)

We provide an automated setup script that:

  • generates configs/body4d.yaml from a release template,
  • downloads all required checkpoints (existing files will be skipped).

Some checkpoints (SAM 3 and SAM 3D Body) require access approval on Hugging Face. Before running the setup script, please make sure your access requests have been approved on the corresponding Hugging Face pages.

If you plan to use these checkpoints, login once:

huggingface-cli login

Then run the setup script:

python scripts/setup.py --ckpt-root /path/to/checkpoints

2. Run

python app.py

Manual checkpoint setup (optional)

If you prefer to download the checkpoints manually (SAM 3, SAM 3D Body, MoGe-2, Diffusion-VAS, Depth-Anything V2), place them under ${CKPT_ROOT} with the following structure:

${CKPT_ROOT}/
├── sam3/                                
│   └── sam3.pt
├── sam-3d-body-dinov3/
│   ├── model.ckpt
│   └── assets/
│       └── mhr_model.pt
├── moge-2-vitl-normal/
│   └── model.pt
├── diffusion-vas-amodal-segmentation/
│   └── (directory contents)
├── diffusion-vas-content-completion/
│   └── (directory contents)
└── depth_anything_v2_vitl.pth

After placing the files correctly, you can run the setup script again. Existing files will be detected and skipped automatically.
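To double-check a manual setup, the expected layout above can be verified with a short script. This is an illustrative sketch, not part of the repo; the contents of the two diffusion-vas folders are not enumerated in this README, so only the directories' existence is checked:

```python
from pathlib import Path

# Fixed files and directories from the ${CKPT_ROOT} layout above.
EXPECTED_FILES = [
    "sam3/sam3.pt",
    "sam-3d-body-dinov3/model.ckpt",
    "sam-3d-body-dinov3/assets/mhr_model.pt",
    "moge-2-vitl-normal/model.pt",
    "depth_anything_v2_vitl.pth",
]
EXPECTED_DIRS = [
    "diffusion-vas-amodal-segmentation",
    "diffusion-vas-content-completion",
]

def missing_checkpoints(ckpt_root: str) -> list[str]:
    """Return the relative paths that are absent under ckpt_root.
    (Hypothetical helper; the setup script performs its own checks.)"""
    root = Path(ckpt_root)
    missing = [f for f in EXPECTED_FILES if not (root / f).is_file()]
    missing += [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
    return missing
```

An empty return value means every fixed path is in place and the setup script should skip all downloads.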

🤖 Auto Run

Run the full end-to-end video pipeline with a single command:

python scripts/offline_app.py --input_video <path>

where the input can be a directory of frames or an .mp4 file. The pipeline automatically detects humans in the initial frame, treats all detected humans as targets, and performs temporally consistent 4D reconstruction over the video.
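The directory-vs-`.mp4` branching described above can be sketched as follows (`classify_input` is a hypothetical helper for illustration; the actual argument handling in `scripts/offline_app.py` may differ):

```python
from pathlib import Path

def classify_input(input_video: str) -> str:
    """Return 'frames' for a directory of frames, 'video' for an .mp4 file.
    Illustrative sketch only, not the pipeline's real implementation."""
    p = Path(input_video)
    if p.is_dir():
        return "frames"
    if p.suffix.lower() == ".mp4":
        return "video"
    raise ValueError(f"expected a frame directory or an .mp4 file, got: {input_video}")
```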

📝 Citation

If you find this repository useful, please consider giving it a star ⭐ and citing the paper:

@article{gao2025sambody4d,
  title   = {SAM-Body4D: Training-Free 4D Human Body Mesh Recovery from Videos},
  author  = {Gao, Mingqi and Miao, Yunqi and Han, Jungong},
  journal = {arXiv preprint arXiv:2512.08406},
  year    = {2025},
  url     = {https://arxiv.org/abs/2512.08406}
}

👏 Acknowledgements

The project is built upon SAM 3, Diffusion-VAS, and SAM 3D Body. We sincerely thank the original authors for their outstanding work.

About

An attempt to fix sam-body4d (CPU) memory usage.