This repository supplements research work under review.
This repository contains the 3D models and Fusion 360 environment we use to define camera-eye geometry and create printable, personalized eye-tracking headstages for various animals.
It also holds the associated Raspberry Pi code for eye cameras, and provides tools for synchronizing eye-tracking videos with arena videos and electrophysiology recordings, preprocessing eye-tracking data, and reproducing figures from our research.
The synchronization pipeline is designed to work with various recording formats, with Open Ephys provided as a reference implementation.
- Python 3.10 or higher
- Conda (recommended) or pip
You have three options for setting up the environment:
```bash
# Option 1 (recommended): create a Conda environment from environment.yml
conda env create -f environment.yml
conda activate eye_repo
```

```bash
# Option 2: install the package in development mode
pip install -e .
```

This will install the package and all dependencies in development mode.
```bash
# Option 3: install dependencies from requirements.txt, then the package
pip install -r requirements.txt
pip install -e .
```

Run the smoke test to verify imports:
```bash
python examples/smoke_preprocessing_imports.py
```

Repository layout:

```
src/eye_tracking_system_tools/
├── preprocessing/    # Data synchronization and preprocessing
├── figures/          # Figure reproduction scripts
├── raspberry_pi/     # Raspberry Pi video capture utilities
└── utils/            # General utilities
3D_printing_files/    # 3D printing files and virtual fitting guide
meshroom_pipeline_template.mg
```
The figure reproduction scripts are straightforward to use. Each script in src/eye_tracking_system_tools/figures/reproduction/main_figures/ can be run directly to reproduce the corresponding paper figure.
Example:
```bash
cd src/eye_tracking_system_tools/figures/reproduction/main_figures/Fig_1_e
python figure_1e.py
```

The preprocessing module requires data organized in a specific folder structure that matches the BlockSync class expectations:
```
path_to_animal_folder/
└── animal_call/
    └── experiment_date/              # format: yyyy_mm_dd, or None for no date paradigm
        └── block_xxx/
            ├── arena_videos/         # External arena video outputs
            ├── eye_videos/
            │   ├── LE/               # Left eye videos
            │   │   └── video_folder/
            │   │       ├── video.h264             # Video file
            │   │       ├── video.mp4              # Video file (optional)
            │   │       ├── DLC_analysis_file.csv  # DeepLabCut pupil annotations
            │   │       └── timestamps.csv         # Video timestamps
            │   └── RE/               # Right eye videos
            │       └── video_folder/
            │           ├── video.h264
            │           ├── video.mp4
            │           ├── DLC_analysis_file.csv
            │           └── timestamps.csv
            ├── oe_files/             # Open Ephys recordings (or custom format)
            │   └── experiment_datetime/
            │       ├── events.csv    # Event data (for Open Ephys)
            │       └── settings.xml  # Recording settings
            └── analysis/             # Output directory (initially empty)
```
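The layout above can be scaffolded or checked programmatically before running the pipeline. The sketch below is illustrative and not part of the package; `scaffold_block` and `missing_subdirs` are hypothetical helper names, and the sub-folder list simply mirrors the tree above.

```python
from pathlib import Path

# Sub-folders every block_xxx/ directory is expected to contain
# (mirrors the tree shown above).
REQUIRED_SUBDIRS = [
    "arena_videos",
    "eye_videos/LE",
    "eye_videos/RE",
    "oe_files",
    "analysis",
]


def scaffold_block(path_to_animal_folder, animal_call, experiment_date, block_num):
    """Create the empty folder skeleton for one block and return its path."""
    block = (Path(path_to_animal_folder) / animal_call
             / experiment_date / f"block_{block_num}")
    for sub in REQUIRED_SUBDIRS:
        (block / sub).mkdir(parents=True, exist_ok=True)
    return block


def missing_subdirs(block_path):
    """Return the required sub-folders absent from an existing block directory."""
    return [s for s in REQUIRED_SUBDIRS if not (Path(block_path) / s).is_dir()]
```

Running `missing_subdirs` on a block before synchronization makes it easy to spot, for example, a missing `analysis/` output directory.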
Important: This repository does not provide the pupil annotation model. Users must provide their own pupil annotations that adhere to the DeepLabCut .csv export format, with one annotation file per eye video.
If you use a different pupil annotation method, you can work around this requirement by converting your annotations to match the DeepLabCut format, or by modifying the preprocessing code to accept your format.
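As a rough illustration of the first workaround, the sketch below writes per-frame (x, y) points using the three-row header layout (scorer / bodyparts / coords) that DeepLabCut CSV exports use. `to_dlc_csv` is a hypothetical helper and the default scorer/bodypart names are placeholders; compare against one of your own DLC files to confirm the exact column set your pipeline expects.

```python
import pandas as pd


def to_dlc_csv(frames_xy, out_path, scorer="custom", bodypart="pupil_center"):
    """Write per-frame (x, y) annotations as a DeepLabCut-style CSV.

    frames_xy: list of (x, y) tuples, one per video frame.
    DLC exports use a three-row header (scorer / bodyparts / coords)
    with x, y, likelihood columns for each tracked point.
    """
    cols = pd.MultiIndex.from_product(
        [[scorer], [bodypart], ["x", "y", "likelihood"]],
        names=["scorer", "bodyparts", "coords"],
    )
    # Likelihood is fixed at 1.0 here since manual annotations carry
    # no confidence score -- an assumption, adjust if yours do.
    rows = [(x, y, 1.0) for x, y in frames_xy]
    df = pd.DataFrame(rows, columns=cols)
    df.to_csv(out_path)
    return df
```

Annotations for multiple body parts would extend the second level of the column `MultiIndex` accordingly.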
Basic Usage:
```python
from eye_tracking_system_tools.preprocessing import BlockSync

# Initialize block synchronization
block = BlockSync(
    animal_call="animal_name",
    experiment_date="yyyy_mm_dd",  # or None for no date paradigm
    block_num="001",
    path_to_animal_folder="/path/to/data",
    channeldict=None,  # Optional: custom channel mapping
)

# Parse synchronization events
block.parse_open_ephys_events()

# Run synchronization
block.synchronize_block()
```

Custom Synchronization Paradigms:
The synchronization pipeline is not limited to Open Ephys recording formats. While Open Ephys is provided as a reference implementation, users can parse their own synchronization paradigm by creating a parsed_events.csv file that matches the expected format.
The parsed_events.csv file should be a pandas DataFrame (saved as CSV) with the following structure:
- Timestamp columns: one column per synchronization channel containing timestamps (e.g., `Arena_TTL`, `L_eye_TTL`, `R_eye_TTL`)
- Frame columns: corresponding frame-number columns with a `_frame` suffix (e.g., `Arena_TTL_frame`, `L_eye_TTL_frame`, `R_eye_TTL_frame`)
To use a custom synchronization paradigm:
- Create your `parsed_events.csv` file in the `oe_files/experiment_datetime/` directory
- Ensure it follows the format described above
- The `BlockSync` class will automatically detect and use this file if it exists
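A minimal sketch of assembling such a file with pandas, assuming the three channel names from the example above; `build_parsed_events` is a hypothetical helper, and padding shorter channels with NaN so all columns share one length is an assumption of this sketch, not something the repository specifies.

```python
import pandas as pd


def build_parsed_events(arena_ts, le_ts, re_ts, out_path):
    """Assemble a parsed_events.csv with one timestamp column per channel
    and a matching *_frame column holding the frame numbers."""
    n = max(len(arena_ts), len(le_ts), len(re_ts))

    def pad(seq):
        # Pad shorter channels with NaN so every column has length n.
        vals = list(seq)
        return vals + [float("nan")] * (n - len(vals))

    df = pd.DataFrame({
        "Arena_TTL": pad(arena_ts),
        "Arena_TTL_frame": pad(range(len(arena_ts))),
        "L_eye_TTL": pad(le_ts),
        "L_eye_TTL_frame": pad(range(len(le_ts))),
        "R_eye_TTL": pad(re_ts),
        "R_eye_TTL_frame": pad(range(len(re_ts))),
    })
    df.to_csv(out_path, index=False)
    return df
```

The timestamp lists would come from your own acquisition system's event log; the frame numbers here are simply sequential indices per channel.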
For detailed examples and usage, see src/eye_tracking_system_tools/preprocessing/block_synchronization.ipynb.