InterVisions/fairness_arena

Fairness Arena — CLIP Retrieval Fairness Evaluation

An LMSYS Chatbot Arena-style tool for evaluating the fairness of CLIP-based image retrieval models through human preference voting.

Participants see side-by-side image search results from two anonymous models, and vote for which set better represents the diversity of their community. Votes are aggregated using the Elo rating system to produce a fairness leaderboard.
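The Elo aggregation works exactly as in chess ratings: each vote nudges the two models' ratings toward the observed outcome. A minimal sketch of the per-vote update (the function name and tie handling are illustrative; the real K-factor and initial rating come from `elo_k_factor` and `elo_initial_rating` in the config):

```python
def elo_update(rating_a, rating_b, winner, k=32.0):
    """Return updated (rating_a, rating_b) after one vote.

    winner: "a", "b", or "tie".
    """
    # Expected score of model A under the Elo logistic model
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```

For two equally rated models (1000 vs 1000), a win for A moves the ratings to 1016 and 984; a tie leaves them unchanged.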

Arena screenshot

Quick Start

There are two ways to run the server: bundle mode (pre-compute on a GPU machine, then serve from any CPU machine) or live mode (a single GPU machine does everything).

Option A: Bundle mode

Step 1 — On a GPU machine, pre-compute all embeddings and retrieval results:

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Precompute the active dataset (defined by active_dataset_id in config)
python precompute.py

# Or precompute all datasets defined in config in one run
python precompute.py --all-datasets

# Or target a specific dataset by id
python precompute.py --dataset-id fairface

This loads every CLIP model (see Configuration below), embeds all dataset images, computes retrieval rankings for every (model × query) pair, creates web-ready thumbnails, and packs everything into a single portable .npz file per dataset (data/arena_bundle_{dataset_id}.npz). For 2000 images × 4 models, expect roughly 5-15 minutes per dataset.
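The retrieval rankings themselves reduce to cosine similarity between L2-normalised embeddings. A minimal NumPy sketch of the per-query ranking step (function and array names are illustrative, not the actual code or bundle keys):

```python
import numpy as np

def rank_images(image_embs: np.ndarray, query_emb: np.ndarray, top_k: int = 20) -> np.ndarray:
    """Return indices of the top_k images for one query, best first.

    image_embs: (n_images, dim) L2-normalised image embeddings for one model.
    query_emb:  (dim,) L2-normalised text embedding of the query.
    """
    scores = image_embs @ query_emb      # cosine similarity, since both are unit-norm
    return np.argsort(-scores)[:top_k]   # descending order of similarity
```

At precompute time this runs once per (model × query) pair and the resulting index lists are what gets stored in the bundle.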

Step 2 — Copy the bundles to your server (any machine, no GPU needed):

scp data/arena_bundle_*.npz yourserver:/path/to/fairness-arena/data/

Step 3 — Run the server (CPU-only, no PyTorch needed at runtime):

# Multi-dataset mode (recommended) — enables switching datasets from the admin panel
python server.py --bundles-dir data/ --admin-token my_secret

# Legacy single-bundle mode (still supported)
python server.py --bundle data/arena_bundle_flickr30k.npz --admin-token my_secret

The bundle contains thumbnails, all retrieval rankings, image embeddings (for open queries), and the config snapshot. Startup takes a few seconds.
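Loading a bundle is essentially a single np.load call over the .npz archive. A hedged sketch (the actual key names inside the bundle are not documented here; this just materialises whatever arrays it holds):

```python
import numpy as np

def load_bundle(path):
    """Load every array stored in an .npz bundle into a dict.

    allow_pickle=True is an assumption -- object-typed entries such as a
    config snapshot would need it; plain numeric arrays would not.
    """
    with np.load(path, allow_pickle=True) as bundle:
        return {key: bundle[key] for key in bundle.files}
```

This is why startup takes only a few seconds: there is no model loading, just reading arrays from disk.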

Option B: Live mode (single GPU machine)

pip install -r requirements.txt
python server.py --admin-token my_secret

This loads models, downloads the dataset, and embeds everything at startup. Requires GPU and takes several minutes to start.


Open http://localhost:8080 for the arena, /admin for the dashboard, /leaderboard for rankings.

Architecture

                    ┌──────────────────────────────────┐
                    │   GPU machine (one-time)         │
                    │                                  │
                    │   precompute.py --all-datasets   │
                    │   ├── Load CLIP models           │
                    │   ├── Load dataset (HF/local)    │
                    │   ├── Embed all images           │
                    │   ├── Compute all retrievals     │
                    │   └── Save arena_bundle_{id}.npz │
                    │       (one bundle per dataset)   │
                    └──────────────┬───────────────────┘
                                   │ scp
                    ┌──────────────▼───────────────────┐
                    │   Server (CPU, AWS, etc.)        │
                    │                                  │
Browser ──────────► │   server.py --bundles-dir data/  │
(participant)       │   ├── Load active bundle (fast)  │
                    │   ├── Serve image thumbnails     │
Browser ──────────► │   ├── Serve retrieval results    │
(admin)             │   ├── Switch dataset at runtime  │
                    │   ├── Record votes (SQLite)      │
                    │   └── Compute Elo ratings        │
                    └──────────────────────────────────┘

CLI Options

server.py

Flag            Default                      Description
--bundles-dir   None                         Directory containing per-dataset bundles (arena_bundle_{id}.npz); enables dataset switching from the admin panel
--bundle        None                         Path to a single pre-computed .npz bundle (legacy, still supported)
--config        config/default_config.json   Configuration file (used if there is no bundle, or as overrides)
--port          8080                         Server port
--host          0.0.0.0                      Server host
--device        auto                         PyTorch device (only relevant in live mode)
--admin-token   changeme                     Token for the admin API

precompute.py

Flag              Default                      Description
--config          config/default_config.json   Configuration file (defines models and datasets)
--queries         config/queries.txt           Text file with one query per line; these are baked into the bundle and shown in the UI dropdown
--dataset-id      None                         ID of a specific dataset to precompute (must match an entry in the config's datasets). Defaults to the active dataset
--all-datasets    False                        Precompute bundles for all datasets defined in the config
--bundles-dir     data                         Output directory for bundle files
--output          None                         Explicit output path (single dataset only; overrides --bundles-dir)
--device          auto                         PyTorch device
--thumbnail-size  400                          Max thumbnail dimension in pixels
--batch-size      64                           Batch size for image embedding

Configuration

Settings live in config/default_config.json (overridden at runtime by config/active_config.json if present, and editable via the admin panel):

  • Elo parameters: elo_k_factor, elo_initial_rating
  • Arena layout: images_per_model, grid_columns, max_scroll_images
  • Active dataset: active_dataset_id — which dataset is loaded at startup
  • Search label: search_query_label — text shown left of the query input (leave empty to hide)
  • Judge question: judge_question — prompt shown above the grids (leave empty to hide)
  • Open queries: allow_open_queries — if true, participants can type any free-text query (results are computed on-the-fly and cached in SQLite)
  • Matchmaking: matchmaking — "uniform" picks model pairs uniformly at random
  • Why tags: enable_why_tags, why_tags — optional qualitative feedback shown after a vote (15% of votes)
  • Models: list of CLIP models (open_clip backend)
  • Datasets: list of datasets under "datasets" key — each with an id, name, source, and source-specific fields (hf_repo / folder_path)
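Putting a few of these together, a plausible excerpt of the arena section (key names taken from the bullets above; the values and exact nesting are illustrative):

```json
"arena": {
  "elo_k_factor": 32,
  "elo_initial_rating": 1000,
  "images_per_model": 8,
  "allow_open_queries": true,
  "matchmaking": "uniform"
}
```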

Queries (the dropdown shown to participants) are not in the config JSON. They live in config/queries.txt, one query per line, and are baked into the bundle at precompute time. To add or change queries, edit queries.txt and re-run precompute.py.

Example datasets config:

"arena": {
  "active_dataset_id": "flickr30k"
},
"datasets": [
  {
    "id": "flickr30k",
    "name": "Flickr 30K",
    "source": "huggingface",
    "hf_repo": "nlphuji/flickr30k",
    "hf_split": "test",
    "image_column": "image",
    "max_images": 1000
  },
  {
    "id": "fairface",
    "name": "FairFace",
    "source": "huggingface",
    "hf_repo": "HuggingFaceM4/FairFace",
    "hf_config": "0.25",
    "hf_split": "train",
    "image_column": "image",
    "max_images": 1000
  }
]

Custom local folders are supported too via "source": "folder" and "folder_path": "/path/to/images".
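A hypothetical local-folder entry, following the same shape as the Hugging Face entries above (the id and path are placeholders):

```json
{
  "id": "my_local_set",
  "name": "My Local Set",
  "source": "folder",
  "folder_path": "/path/to/images",
  "max_images": 1000
}
```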

What's Inside a Bundle

Each .npz file produced by precompute.py contains:

  • JPEG thumbnails of all dataset images (web-ready, no need to ship the original dataset)
  • Retrieval rankings for every (model × query) pair (pre-computed, served instantly)
  • Image embeddings per model in float16 (enables open queries without GPU — just NumPy matrix multiplication)
  • Config snapshot (models, queries, dataset metadata)
  • Dataset id so the server knows which dataset it belongs to

Typical bundle size: ~50-200 MB depending on dataset size and number of models.
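A quick back-of-envelope shows why thumbnails, not embeddings, dominate that size (the 512 embedding dimension is an assumption, typical of ViT-B CLIP models; the README does not state it):

```python
# Back-of-envelope: how much of a bundle the fp16 embeddings account for.
# 512-dim embeddings are an assumption (typical for ViT-B CLIP backbones).
n_images, n_models, dim, fp16_bytes = 2000, 4, 512, 2
emb_mb = n_images * n_models * dim * fp16_bytes / 1e6
print(f"embeddings: ~{emb_mb:.1f} MB")  # ~8.2 MB of a 50-200 MB bundle
```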

Key Design Decisions

  • Side-by-side layout with randomised left/right assignment and position logging for bias detection
  • Pre-computed retrieval results via portable bundle for GPU-free serving
  • Multi-dataset support — define multiple datasets in config, precompute one bundle per dataset, and switch between them at runtime from the admin panel without restarting the server
  • Optional "why" tags for qualitative signal alongside the quantitative vote
  • Bradley-Terry analysis can be run post-hoc on the exported CSV for publishable confidence intervals
  • Admin dashboard with real-time stats, position bias monitoring, dataset switching, and data export
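The Bradley-Terry fit mentioned above can be run post-hoc with a short MM (minorisation-maximisation) iteration over pairwise win counts (a sketch only, assuming ties are excluded; publishable confidence intervals would need a bootstrap or Fisher-information step on top of this):

```python
import numpy as np

def bradley_terry(wins: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Fit Bradley-Terry strengths from pairwise win counts.

    wins[i, j] = number of times model i beat model j (diagonal is zero).
    Returns strengths normalised to sum to 1.
    """
    n = wins.shape[0]
    p = np.ones(n)
    games = wins + wins.T            # total comparisons between each pair
    w = wins.sum(axis=1)             # total wins per model
    for _ in range(n_iter):
        # MM update: p_i <- W_i / sum_j games_ij / (p_i + p_j)
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()                 # fix the scale (BT is scale-invariant)
    return p
```

With wins = [[0, 3], [1, 0]] this converges to strengths of 0.75 and 0.25, matching the 3:1 win ratio.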

Project Structure

fairness-arena/
├── server.py              # FastAPI server (live or bundle mode)
├── precompute.py          # Offline: embed + retrieve + pack bundle
├── database.py            # SQLite + Elo logic
├── retrieval.py           # CLIP model loading + retrieval + bundle loading
├── requirements.txt
├── arena.service
├── config/
│   ├── default_config.json   # Base configuration
│   ├── active_config.json    # Runtime overrides (created by admin panel)
│   └── queries.txt           # One query per line — baked into bundles at precompute time
├── data/
│   ├── arena.db                      # Created at runtime (votes, ratings)
│   ├── arena_bundle_flickr30k.npz    # Created by precompute.py (one per dataset)
│   └── arena_bundle_fairface.npz
└── static/
    ├── arena.html          # Public voting interface
    ├── admin.html          # Admin dashboard
    └── leaderboard.html    # Public leaderboard

(Optional) Configure the systemd service

# Edit the service file to set your ADMIN_TOKEN
vi arena.service
# Change ADMIN_TOKEN to a random string (generate one with: python3 -c "import secrets; print(secrets.token_hex(32))")

# Install the service
sudo cp arena.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable arena
sudo systemctl start arena

# Check it's running
sudo systemctl status arena

# View logs
sudo journalctl -u arena -f

Useful commands

sudo systemctl restart arena      # Restart after config changes
sudo systemctl stop arena         # Stop the server
sudo journalctl -u arena --since "1 hour ago"  # Recent logs

(Optional) Port forwarding (so browsers can reach the arena on port 80)

# Redirect port 80 to 8080 immediately
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
# Persist the rule across reboots via rc.local
sudo sh -c 'echo "iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080" >> /etc/rc.local'
sudo chmod +x /etc/rc.local

Funding Acknowledgement

Co-funded by the European Union

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
