BasketLiDAR is a multimodal dataset that pairs synchronized LiDAR point clouds with multi-view RGB camera footage of professional basketball scenes, designed for Multi-Object Tracking (MOT) research.
It includes data, baseline notebooks, and standardized evaluation to facilitate reproducible experiments for LiDAR-only and camera–LiDAR fusion tracking.
If you use this dataset in your research, please cite our paper:
```bibtex
@inproceedings{hayashi2025basketlidar,
  author    = {Hayashi, Ryunosuke and Torimi, Kohei and Nagata, Rokuto and Ikeda, Kazuma and Sako, Ozora and Nakamura, Taichi and Tani, Masaki and Aoki, Yoshimitsu and Yoshioka, Kentaro},
  title     = {BasketLiDAR: The First LiDAR-Camera Multimodal Dataset for Professional Basketball MOT},
  booktitle = {Proceedings of the 8th International ACM Workshop on Multimedia Content Analysis in Sports (MMSports '25)},
  year      = {2025},
  pages     = {78--86},
  doi       = {10.1145/3728423.3759401}
}
```

We recommend using a conda environment.
```bash
# Create and activate a fresh environment
conda create -n basketlidar python=3.11 -y
conda activate basketlidar

# Install dependencies (requirements.txt is provided at the project root)
pip install -r requirements.txt
```

Alternatively, install the dependencies from the provided `environment.yml`:

```bash
conda create -n basketlidar python=3.11 -y
conda activate basketlidar
conda env update -f environment.yml
```

Access to the dataset is granted upon request for research purposes.
- Please contact us with your name, affiliation, and intended use (see the Contact section below).
- Once approved, we will share a Google Drive link for download.
- Downloading and using the dataset implies agreement with its terms of use.
- Redistribution is not allowed without permission.
Expected directory layout:
```text
D:.
└─dataset
    ├─frames_img
    │  ├─far-end
    │  │  ├─camera1
    │  │  │  ├─day1_measurement3_scene_camera_frame_02524-02832
    │  │  │  ├─day1_measurement3_scene_camera_frame_03649-04042
    │  │  │  ├─day1_measurement3_scene_camera_frame_16273-16688
    │  │  │  ...
    │  │  ├─camera2
    │  │  │  ├─day1_measurement3_scene_camera_frame_02524-02832
    │  │  │  ├─day1_measurement3_scene_camera_frame_03649-04042
    │  │  │  ├─day1_measurement3_scene_camera_frame_07798-08274
    │  │  │  ...
    │  │  ├─camera3
    │  │  │  ├─day1_measurement3_scene_camera_frame_02524-02832
    │  │  │  ├─day1_measurement3_scene_camera_frame_03649-04042
    │  │  │  ├─day1_measurement3_scene_camera_frame_15075-15389
    │  │  │  ...
    │  │  └─lidar
    │  │     ├─day1_measurement3_scene_camera_frame_02524-02832
    │  │     ├─day1_measurement3_scene_camera_frame_03649-04042
    │  │     ├─day1_measurement3_scene_camera_frame_07798-08274
    │  │     ...
    │  └─near-end
    │     ├─camera1
    │     │  ├─day1_measurement1_scene_camera_frame_00523-01000
    │     │  ├─day1_measurement1_scene_camera_frame_02262-02493
    │     │  ├─day1_measurement1_scene_camera_frame_06815-07138
    │     │  ...
    │     ├─camera2
    │     │  ├─day1_measurement1_scene_camera_frame_00523-01000
    │     │  ├─day1_measurement1_scene_camera_frame_06815-07138
    │     │  ├─day1_measurement1_scene_camera_frame_11757-12355
    │     │  ...
    │     ├─camera3
    │     │  ├─day1_measurement1_scene_camera_frame_00523-01000
    │     │  ├─day1_measurement1_scene_camera_frame_02262-02493
    │     │  ├─day1_measurement1_scene_camera_frame_06815-07138
    │     │  ...
    │     └─lidar
    │        ├─day1_measurement1_scene_camera_frame_00523-01000
    │        ├─day1_measurement1_scene_camera_frame_02262-02493
    │        ├─day1_measurement1_scene_camera_frame_06815-07138
    │        ...
    ├─gt_json
    │  ├─near_end
    │  └─far_end
    ├─pointcloud
    └─lidar_extrinsics
```
Demo video: `lidar_only.mp4`
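Each scene folder name encodes the recording day, measurement session, and frame range, and the same scene folder appears under every modality that covers it. Below is a minimal sketch of how paired scene folders could be enumerated, assuming only the layout above (per-frame filenames inside the scene folders are not specified here and not assumed):

```python
from pathlib import Path

# Sketch under the layout above: pair each LiDAR scene folder with the
# camera folders that cover the same scene for one court end.
root = Path("dataset/frames_img/near-end")

scenes = sorted(p.name for p in (root / "lidar").iterdir() if p.is_dir())
for scene in scenes:
    cam_dirs = {cam: root / cam / scene for cam in ("camera1", "camera2", "camera3")}
    # Not every camera covers every scene, so keep only the folders that exist.
    cam_dirs = {cam: d for cam, d in cam_dirs.items() if d.is_dir()}
    print(scene, "->", sorted(cam_dirs))
```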
This section describes the LiDAR-based tracking pipeline included in the BasketLiDAR dataset.
It consists of two major steps, both implemented as Jupyter notebooks.
The notebook `pointcloud_to_BEV-detection.ipynb` demonstrates how to process raw `.lvx2` point cloud files into a bird's-eye-view (BEV) representation and generate object detection labels.
This includes:
- Reading `.lvx2` point cloud data
- Converting point clouds into BEV maps
- Applying detection models to produce labeled bounding boxes
If you need more information about the `.lvx2` file structure, please refer to the [official LVX2 format specification](https://terra-1-g.djicdn.com/65c028cd298f4669a7f0e40e50ba1131/LVX2%20Specifications.pdf).
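For intuition, a BEV map can be as simple as a 2D histogram of point counts over the court plane. The sketch below is a minimal illustration, not the notebook's implementation; the court extents and 5 cm cell size are assumed values:

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(-15.0, 15.0), y_range=(-8.0, 8.0), res=0.05):
    """Rasterize an (N, 3) point cloud into a 2D BEV occupancy map.

    Illustrative sketch: the court extents and resolution are assumptions,
    not the parameters used by pointcloud_to_BEV-detection.ipynb.
    """
    x, y = points[:, 0], points[:, 1]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y = x[keep], y[keep]
    h = int(round((y_range[1] - y_range[0]) / res))
    w = int(round((x_range[1] - x_range[0]) / res))
    rows = ((y - y_range[0]) / res).astype(int)
    cols = ((x - x_range[0]) / res).astype(int)
    bev = np.zeros((h, w), dtype=np.float32)
    np.add.at(bev, (rows, cols), 1.0)  # accumulate point counts per cell
    return bev
```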
The notebook `BEV-detection_to_track.ipynb` takes the detection labels generated in the previous step as input and produces multi-object tracking results.
This step links detections across frames to maintain object identities over time.
A detailed step-by-step manual for the tracking configuration and evaluation will be added in future updates.
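Until that manual is available, the sketch below illustrates the core association idea with assumed inputs: match existing tracks to new detections by Hungarian assignment on BEV center distance. The notebook's actual tracker, cost function, and thresholds may differ:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, det_centers, max_dist=1.0):
    """One-frame association sketch: Hungarian matching on center distance.

    track_centers, det_centers: (N, 2) and (M, 2) BEV positions in meters.
    Returns (track_idx, det_idx) pairs whose distance is within max_dist;
    unmatched tracks and detections would terminate or spawn identities.
    """
    if len(track_centers) == 0 or len(det_centers) == 0:
        return []
    # Pairwise Euclidean distance matrix between tracks and detections.
    cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```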
The notebook `camfusion_postprocess.ipynb` demonstrates how to perform tracking-by-fusion using both LiDAR and camera information.
This post-processing step refines LiDAR-based tracks by incorporating visual cues from synchronized RGB frames, improving object continuity and identity preservation.
As with the LiDAR-based pipeline, a more detailed explanation and reproducibility guide will be released progressively.
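For intuition, fusing camera cues starts by projecting LiDAR-space track positions into each camera view. The sketch below assumes a 3x3 intrinsic matrix and a 4x4 LiDAR-to-camera extrinsic transform; the exact calibration format stored under `lidar_extrinsics` may differ:

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, T_cam_from_lidar):
    """Project (N, 3) LiDAR points to pixel coordinates in one camera.

    Sketch with assumed conventions: K is a 3x3 intrinsic matrix and
    T_cam_from_lidar a 4x4 homogeneous transform; check lidar_extrinsics
    for the dataset's actual calibration format.
    """
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous
    cam = (T_cam_from_lidar @ pts.T)[:3]      # points in the camera frame, (3, N)
    in_front = cam[2] > 0                     # discard points behind the camera
    uv = K @ cam[:, in_front]
    uv = uv[:2] / uv[2]                       # perspective divide -> pixels
    return uv.T, in_front
```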
All source code in this repository is released under the MIT License.
See the LICENSE file for details.
The dataset (including LiDAR point clouds, camera images, and annotations)
is released for research and educational purposes only under the following terms:
- Redistribution, rehosting, or public sharing of the dataset, in whole or in part, is strictly prohibited without explicit permission from the authors.
- You may use the dataset internally for non-commercial research and academic publications.
- When publishing results based on this dataset, please cite the paper listed above.
Summary:
- ✅ Code → MIT License
- 🚫 Data → Research use only / Redistribution prohibited
Questions, requests, or collaborations are welcome.
- Overview and dataset requests: https://sites.google.com/keio.jp/keio-csg/projects/basket-lidar
- Email: hayashi.ryu430@keio.jp
- Issues: Please open a GitHub issue with a clear title and description.
