
Integrate video2motion pipeline for custom video inputs#174

Open
Dankular wants to merge 1 commit intoaigc3d:mainfrom
Dankular:claude/integrate-video2motion-pipeline-bvgxH

Conversation

@Dankular

Allow passing a video file directly via the `motion_video_path` config option, which automatically runs the `Video2MotionPipeline` to extract SMPL-X parameters before animation, similar to how the Gradio app handles it.

Usage:

  # CLI config
  python -m LHM.launch infer.human_lrm model_name=LHM-1B \
    image_input=./image.png motion_video_path=./dance.mp4

  # Shell script
  bash inference.sh LHM-1B ./image.png --video ./dance.mp4

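The dispatch this PR describes can be sketched as a small helper: if the config carries `motion_video_path`, run the video-to-motion extraction step first; otherwise fall back to precomputed motion params. This is an illustrative sketch, not the PR's actual code; every name except `motion_video_path` and `Video2MotionPipeline` (here stubbed as a plain callable) is hypothetical, including the assumed fallback key `motion_seqs_dir`.

```python
def resolve_motion_params(cfg: dict, extract_fn) -> str:
    """Return the path to SMPL-X motion params for the animation stage.

    cfg        -- parsed inference config (a plain dict here for illustration)
    extract_fn -- callable standing in for Video2MotionPipeline: takes a
                  video path and returns the directory of extracted params
    """
    video = cfg.get("motion_video_path")
    if video:
        # New path added by this PR: extract SMPL-X params from the raw video.
        return extract_fn(video)
    # Existing path: the user supplies precomputed motion params directly.
    return cfg["motion_seqs_dir"]
```

With this shape, the CLI invocation above needs no new subcommand; setting `motion_video_path=./dance.mp4` alone is enough to trigger extraction.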

https://claude.ai/code/session_014DECBRsZKAP9FX7MCteC23
