
PETS (Personalized Eye Tracking System)

This repository supplements research work under review.

Overview

This repository contains the 3D models and Fusion360 environment we use to define camera-eye geometry and create printable, personalized eye-tracking headstages for various animals.
It also holds the associated Raspberry Pi code for the eye cameras and provides tools for synchronizing eye-tracking videos with arena videos and electrophysiology recordings, preprocessing eye-tracking data, and reproducing the figures from our research. The synchronization pipeline is designed to work with various recording formats, with Open Ephys provided as a reference implementation.

Setup

Prerequisites

  • Python 3.10 or higher
  • Conda (recommended) or pip

Installation Options

You have three options for setting up the environment:

Option 1: Using Conda Environment (recommended)

# Create environment from environment.yml
conda env create -f environment.yml
conda activate eye_repo

# Install package in development mode
pip install -e .

Option 2: Using pyproject.toml

pip install -e .

This will install the package and all dependencies in development mode.

Option 3: Using requirements.txt

pip install -r requirements.txt
pip install -e .

Verification

Run the smoke test to verify imports:

python examples/smoke_preprocessing_imports.py

Project Structure

src/eye_tracking_system_tools/
├── preprocessing/    # Data synchronization and preprocessing
├── figures/          # Figure reproduction scripts
├── raspberry_pi/     # Raspberry Pi video capture utilities
└── utils/            # General utilities
    ├── 3D_printing_files/             # 3D printing files and virtual fitting guide
    └── meshroom_pipeline_template.mg

Usage

Figure Reproduction

The figure reproduction scripts are straightforward to use. Each script in src/eye_tracking_system_tools/figures/reproduction/main_figures/ can be run directly to reproduce the corresponding paper figure.

Example:

cd src/eye_tracking_system_tools/figures/reproduction/main_figures/Fig_1_e
python figure_1e.py

Data Preprocessing and Synchronization

The preprocessing module requires data organized in a specific folder structure that matches the BlockSync class expectations.

Required Data Structure

The BlockSync class expects the following folder structure:

path_to_animal_folder/
└── animal_call/
    └── experiment_date/  (format: yyyy_mm_dd, or None for no date paradigm)
        └── block_xxx/
            ├── arena_videos/          # External arena video outputs
            ├── eye_videos/
            │   ├── LE/                 # Left eye videos
            │   │   └── video_folder/
            │   │       ├── video.h264  # Video file
            │   │       ├── video.mp4   # Video file (optional)
            │   │       ├── DLC_analysis_file.csv  # DeepLabCut pupil annotations
            │   │       └── timestamps.csv         # Video timestamps
            │   └── RE/                 # Right eye videos
            │       └── video_folder/
            │           ├── video.h264
            │           ├── video.mp4
            │           ├── DLC_analysis_file.csv
            │           └── timestamps.csv
            ├── oe_files/               # Open Ephys recordings (or custom format)
            │   └── experiment_datetime/
            │       ├── events.csv      # Event data (for Open Ephys)
            │       └── settings.xml    # Recording settings
            └── analysis/               # Output directory (initially empty)
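As a convenience, the expected layout can be created programmatically. The helper below is a minimal sketch (not part of the repository) that scaffolds the empty folder tree for one block; the folder names mirror the structure above, and the handling of `experiment_date=None` follows the "no date paradigm" noted there.

```python
from pathlib import Path

def scaffold_block(root, animal_call, experiment_date, block_num):
    """Create the empty folder tree BlockSync expects for one block.

    Hypothetical helper for illustration only; video files, DLC
    annotations, and timestamps must still be placed manually.
    """
    block = Path(root) / animal_call
    if experiment_date is not None:      # None = "no date" paradigm
        block = block / experiment_date  # format: yyyy_mm_dd
    block = block / f"block_{block_num}"
    for sub in ("arena_videos",
                "eye_videos/LE", "eye_videos/RE",
                "oe_files", "analysis"):
        (block / sub).mkdir(parents=True, exist_ok=True)
    return block

block = scaffold_block("/tmp/data", "animal_name", "2024_01_01", "001")
print(block)
```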

Pupil Annotations

Important: This repository does not provide the pupil annotation model. Users must provide their own pupil annotations that adhere to the DeepLabCut .csv export format, with one annotation file per eye video.

If you use a different pupil annotation method, you can either convert your annotations to the DeepLabCut .csv export format or modify the preprocessing code to accept your format.
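The conversion route can be sketched with pandas. The snippet below turns a plain (x, y, confidence) table into a CSV with the three-row DeepLabCut header (scorer / bodyparts / coords); the scorer and body-part names here (`custom_model`, `pupil_center`) are placeholders, so match them to whatever labels your preprocessing configuration expects.

```python
import pandas as pd

# Example custom annotations: one row per video frame (placeholder values).
my_annotations = pd.DataFrame({
    "x": [120.5, 121.0],
    "y": [88.2, 87.9],
    "confidence": [0.98, 0.95],
})

scorer = "custom_model"    # stands in for the DLC scorer name
bodypart = "pupil_center"  # placeholder body-part label

# DLC exports use a three-level column header: scorer / bodyparts / coords.
columns = pd.MultiIndex.from_product(
    [[scorer], [bodypart], ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
dlc_like = pd.DataFrame(
    my_annotations[["x", "y", "confidence"]].to_numpy(),
    columns=columns,
)
dlc_like.to_csv("DLC_analysis_file.csv")
```

Reading the file back with `pd.read_csv(..., header=[0, 1, 2], index_col=0)` reproduces the multi-level header, which is how DeepLabCut output files are normally loaded.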

Synchronization Pipeline

Basic Usage:

from eye_tracking_system_tools.preprocessing import BlockSync

# Initialize block synchronization
block = BlockSync(
    animal_call="animal_name",
    experiment_date="yyyy_mm_dd",  # or None for no date paradigm
    block_num="001",
    path_to_animal_folder="/path/to/data",
    channeldict=None  # Optional: custom channel mapping
)

# Parse synchronization events
block.parse_open_ephys_events()

# Run synchronization
block.synchronize_block()

Custom Synchronization Paradigms:

The synchronization pipeline is not limited to Open Ephys recording formats. While Open Ephys is provided as a reference implementation, users can parse their own synchronization paradigm by creating a parsed_events.csv file that matches the expected format.

The parsed_events.csv file should be a pandas DataFrame (saved as CSV) with the following structure:

  • Timestamp columns: One column per synchronization channel containing timestamps (e.g., Arena_TTL, L_eye_TTL, R_eye_TTL)
  • Frame columns: Corresponding frame number columns with _frame suffix (e.g., Arena_TTL_frame, L_eye_TTL_frame, R_eye_TTL_frame)
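A minimal parsed_events.csv following this structure can be built with pandas. The channel names below match the examples above; the timestamp values are arbitrary stand-ins for real TTL event times.

```python
import pandas as pd

# Sketch of a parsed_events.csv for a custom synchronization paradigm:
# one timestamp column per channel, plus a matching *_frame column.
parsed_events = pd.DataFrame({
    "Arena_TTL":       [1000, 2000, 3000],  # example timestamps
    "Arena_TTL_frame": [0, 1, 2],
    "L_eye_TTL":       [1003, 2003, 3003],
    "L_eye_TTL_frame": [0, 1, 2],
    "R_eye_TTL":       [1005, 2005, 3005],
    "R_eye_TTL_frame": [0, 1, 2],
})
parsed_events.to_csv("parsed_events.csv", index=False)
```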

To use a custom synchronization paradigm:

  1. Create your parsed_events.csv file in the oe_files/experiment_datetime/ directory
  2. Ensure it follows the format described above
  3. The BlockSync class will automatically detect and use this file if it exists

For detailed examples and usage, see src/eye_tracking_system_tools/preprocessing/block_synchronization.ipynb.