SISPI Evaluation Code

This repository provides the evaluation scripts for the paper:
"Measuring Text-Image Retrieval Fairness with Synthetic Data"
Lluis Gomez – Computer Vision Center, Universitat Autonoma de Barcelona

📄 DOI: https://doi.org/10.1145/3726302.3730030
🌐 Project: https://sispi-benchmark.github.io/sispi-benchmark/
📦 Dataset: https://huggingface.co/datasets/lluisgomez/SISPI


🧪 Setup

Create a conda environment with all required dependencies:

conda env create -f environment.yml
conda activate sispi-eval
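
For reference, the sketch below shows what an environment.yml for this kind of CLIP evaluation typically contains. The package list is an illustrative assumption only; the repository ships its own environment.yml, which is authoritative.

```yaml
# Illustrative sketch only: the repository's actual environment.yml
# defines the authoritative dependencies and versions.
name: sispi-eval
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.10
  - pytorch
  - torchvision
  - pip
  - pip:
      - transformers
      - datasets
```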

▶️ Run Evaluation

To evaluate using a pretrained CLIP model:

python eval_clip_demo.py
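
Under the hood, this kind of evaluation scores text-image retrieval with CLIP embeddings. The sketch below illustrates the general pattern, not the repository's actual eval_clip_demo.py; the model name, file paths, captions, and Recall@1 metric are illustrative assumptions.

```python
# Minimal sketch of CLIP text-to-image retrieval scoring.
# NOT the repository's eval_clip_demo.py: model, data, and metric
# choices here are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

captions = ["a doctor examining a patient", "a chef plating a dish"]  # hypothetical
images = [Image.open(p) for p in ["img0.jpg", "img1.jpg"]]            # hypothetical

with torch.no_grad():
    inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)
    # logits_per_text holds caption-to-image similarity scores,
    # shape (num_captions, num_images).
    sims = model(**inputs).logits_per_text

# Recall@1 for text-to-image retrieval: caption i should rank image i first.
top1 = sims.argmax(dim=1)
recall_at_1 = (top1 == torch.arange(len(captions))).float().mean().item()
print(f"text-to-image R@1: {recall_at_1:.3f}")
```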

To evaluate a fine-tuned model:

python eval_clip_demo.py --train_output_dir path/to/checkpoint_dir
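
The checkpoint-loading step is roughly as sketched below, assuming the checkpoint directory was written with Hugging Face's save_pretrained(); the actual format expected by --train_output_dir may differ.

```python
# Hedged sketch: loading a fine-tuned CLIP checkpoint directory,
# assuming it was saved with save_pretrained(). The actual checkpoint
# layout expected by eval_clip_demo.py may differ.
from transformers import CLIPModel, CLIPProcessor

ckpt_dir = "path/to/checkpoint_dir"  # the path passed via --train_output_dir
model = CLIPModel.from_pretrained(ckpt_dir)
processor = CLIPProcessor.from_pretrained(ckpt_dir)
model.eval()
```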

Results are printed to the console and, if an output directory is provided, saved there as well.


📜 Citation

@inproceedings{10.1145/3726302.3730030,
  author = {Gomez, Lluis},
  title = {Measuring Text-Image Retrieval Fairness with Synthetic Data},
  year = {2025},
  url = {https://doi.org/10.1145/3726302.3730030},
  booktitle = {Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  series = {SIGIR '25}
}
