**File:** app/(navbar)/question/page.mdx
The **SurveyWithCode Leaderboard** is a living benchmark platform designed to evaluate gesture generation models.
The initial phase of the leaderboard will include results from previously published models adapted to the BEAT-2 dataset. Later, the platform will open for public submissions from the research community.
## Our Goals
- Establish a **continuously updated benchmark** of state-of-the-art gesture generation models, based on **human evaluation** using widely adopted datasets.
- **Unify research communities** across computer vision, machine learning, NLP, HCI, robotics, and animation.
- Evolve dynamically with new datasets, metrics, and evaluation methodologies.
## Outcomes
Once operational, **SurveyWithCode Leaderboard** will allow you to:
- **Visualize** your synthetic motion and conduct **your own user studies** using our open-source tools.
- Access reproducible results and insights to **accelerate research iterations**.
## Setup & Timeline
We are currently inviting authors of gesture generation models to participate in an initial evaluation round. After this, the leaderboard will open to public submissions in **March 2025**, with continuous updates and support for new benchmarks and methods.
## Dataset: BEAT-2 (SMPL-X Format)
We benchmark models on the English test split of the [BEAT-2 dataset](https://pa…). The dataset is:

3. Compatible with SMPL-X and standard pose estimation pipelines.
4. Extensible for future evaluation tasks (e.g., facial expression).
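As a concrete starting point, SMPL-X motion clips are commonly distributed as `.npz` archives in the AMASS-style layout. The sketch below shows one way to read such a clip; the key names (`poses`, `trans`, `mocap_framerate`) and the 165-parameter pose layout are assumptions based on that convention, so check the actual BEAT-2 release for its exact format.

```python
# Hypothetical loader for an SMPL-X clip stored AMASS-style in .npz.
# Key names and shapes are assumptions, not the official BEAT-2 schema.
import numpy as np

def load_smplx_clip(path):
    """Return (poses, trans, fps) from an .npz motion file."""
    data = np.load(path)
    poses = data["poses"]                  # (T, 165): 55 joints x 3 axis-angle params
    trans = data["trans"]                  # (T, 3): global root translation
    fps = float(data["mocap_framerate"])   # capture frame rate
    assert poses.shape[0] == trans.shape[0], "frame counts must match"
    return poses, trans, fps

# Tiny synthetic clip so the sketch runs without the real dataset.
T = 120
np.savez("demo_clip.npz",
         poses=np.zeros((T, 165), dtype=np.float32),
         trans=np.zeros((T, 3), dtype=np.float32),
         mocap_framerate=30.0)

poses, trans, fps = load_smplx_clip("demo_clip.npz")
print(poses.shape, trans.shape, fps)   # (120, 165) (120, 3) 30.0
```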
## Submission Process
4. **Submit & report**
   Upload motion outputs and a brief technical report describing training, architecture, and configuration.
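The upload step could be scripted as a small packaging helper that bundles the motion files, the report, and a metadata record into one archive. This is only an illustrative sketch: the file names, the `meta.json` fields, and the archive layout are hypothetical, not the leaderboard's official submission schema.

```python
# Hypothetical submission packager. Layout and metadata fields are
# illustrative assumptions, not an official format.
import json
import zipfile
from pathlib import Path

def package_submission(motion_dir, report_path, out_path, meta):
    """Bundle motion .npz files, a report, and metadata into one zip."""
    motion_files = sorted(Path(motion_dir).glob("*.npz"))
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in motion_files:
            zf.write(f, arcname=f"motion/{f.name}")   # keep outputs under motion/
        zf.write(report_path, arcname="report.pdf")
        zf.writestr("meta.json", json.dumps(meta, indent=2))
    return out_path

# Demo with placeholder files so the sketch runs end to end.
Path("outputs").mkdir(exist_ok=True)
Path("outputs/clip_000.npz").write_bytes(b"\x00")
Path("report.pdf").write_bytes(b"%PDF-1.4")
meta = {"model": "my-model", "authors": ["A. Author"], "date": "2025-03-01"}
package_submission("outputs", "report.pdf", "submission.zip", meta)

names = zipfile.ZipFile("submission.zip").namelist()
print(names)
```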
## Post-Submission Process
6. **Community Reports**
   Periodically, we co-author state-of-the-art survey papers summarizing leaderboard findings.
## Evaluation Methodology
Later tasks may include:
- Emotional alignment
- Semantic grounding of gestures
118
109
119
-
---
120
110
121
111
## Tooling
The leaderboard will include:
- Diversity metrics
...and more, using both classic and newly derived evaluation protocols.
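As one example of a classic protocol, diversity is often scored as the average pairwise distance between feature vectors of generated clips. The sketch below illustrates that idea; it is a minimal stand-in, not the leaderboard's official metric implementation, and the 32-dimensional features are synthetic placeholders.

```python
# Illustrative diversity metric: mean pairwise L2 distance between
# per-clip feature vectors. Higher values mean more varied outputs.
# This is a sketch, not the leaderboard's official protocol.
import numpy as np

def diversity(features):
    """features: (N, D) array, one row per generated clip."""
    n = features.shape[0]
    diffs = features[:, None, :] - features[None, :, :]   # (N, N, D) pairwise deltas
    dists = np.linalg.norm(diffs, axis=-1)                # (N, N) L2 distances
    # Average over the N*(N-1) ordered pairs; self-distances are zero.
    return dists.sum() / (n * (n - 1))

rng = np.random.default_rng(0)
flat = np.zeros((10, 32))             # identical clips -> zero diversity
varied = rng.normal(size=(10, 32))    # varied clips -> positive diversity
print(diversity(flat), diversity(varied))
```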
## Frequently Asked Questions
We currently have academic funding for running the leaderboard for a period of t…