Hi, thank you for your great work!
In your paper, I saw that it’s possible to generate holistic motion using semantic inputs (like “how are you”, “walk”, etc.). Could you please share how to perform inference for this part?
Also, if I would like to input my own sign language video, what data or files would I need beforehand (e.g., .pkl files, annotations)?
Thank you very much for your time and help!