Fundus Image Toolbox

A Python package for fundus image processing.

Fundus quality prediction A quality prediction model for fundus images (gradeable vs. ungradeable) based on an ensemble of 10 models (ResNets and EfficientNets) trained on DeepDRiD and DrimDB data. Can be used for prediction out of the box or retrained.
Read more.
Example image
Fundus fovea and optic disc localization A model that predicts the center coordinates of the fovea and the optic disc in fundus images, based on a multi-task EfficientNet trained on the ADAM, REFUGE and IDRID datasets. Can be used for prediction out of the box or retrained.
Read more.
Example image
Example predictions from the external dataset "DeepDRiD".
Fundus registration Align a fundus photograph to another fundus photograph from the same eye using SuperRetina (Liu et al., 2022). Image registration also goes by the terms image alignment and image matching.
Read more.
Example image
Fundus vessel segmentation Segment the blood vessels in a fundus image using an ensemble of FR-U-Nets trained on the FIVES dataset (Köhler et al., 2024).
Read more.
Example image
Fundus circle crop Quickly crop fundus images to a circle and center them (Fu et al., 2019).
Read more.
Example image
Fundus utilities A collection of additional utilities that can come in handy when working with fundus images.
Read more.
  • ImageTorchUtils: Image manipulation based on Pytorch tensors.
  • Balancing: A script to balance a torch dataset by both oversampling the minority class and undersampling the majority class from imbalanced-dataset-sampler.
  • Fundus transforms: A collection of torchvision data augmentation transforms to apply to fundus images adapted from pytorch-classification.
  • Get pixel mean std: A script to calculate the mean and standard deviation of the pixel values of a dataset by channel.
  • Get efficientnet resnet: Getter for torchvision models with efficientnet and resnet architectures initialized with ImageNet weights.
  • Lr scheduler: Get a pytorch learning rate scheduler (plus a warmup scheduler) for a given optimizer: OneCycleLR, CosineAnnealingLR, CosineAnnealingWarmRestarts.
  • Multilevel 3-way split: Split a pandas dataframe into train, validation and test splits with the options to split by group (i.e. keep groups together) and stratify by label. Wrapper for multi_level_split.
  • Seed everything: Set seed for reproducibility in python, numpy and torch.
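
The "Seed everything" utility can be illustrated with a minimal sketch; the helper below only covers the Python-level part (the toolbox's version additionally seeds numpy and torch), and the function body here is an assumption about typical seeding helpers, not the package's actual code.

```python
import os
import random

def seed_everything(seed: int = 42) -> None:
    # Illustrative sketch: fix Python's hash seed and the stdlib RNG.
    # The toolbox's utility also seeds numpy and torch.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)

# Re-seeding reproduces the same random draw:
seed_everything(123)
a = random.random()
seed_everything(123)
b = random.random()
print(a == b)  # True
```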

Usage

The following code summarises the usage of the toolbox. See the usage_all.ipynb notebook for a tutorial and the examples directory for more detailed usage information on the respective packages.

```python
import fundus_image_toolbox as fit

# Get sample images. All methods work on path(s) to image(s) or on image(s)
# as numpy arrays, tensors or PIL images.
fundus1, fundus2 = "path/to/fundus1.jpg", "path/to/fundus2.jpg"

# Circle crop
fundus1_cropped = fit.crop(fundus1, size=512)  # > np.ndarray (512, 512, 3) uint8

# Fovea and optic disc localization
model, _ = fit.load_fovea_od_model(device="cuda:0")
coordinates = model.predict([fundus1, fundus2])  # > List[np.ndarray[fovea_x,fovea_y,od_x,od_y], ...]
fit.plot_coordinates([fundus1, fundus2], coordinates)

# Quality prediction
ensemble = fit.load_quality_ensemble(device="cuda:0")
confs, labels = fit.ensemble_predict_quality(
    ensemble, [fundus1, fundus2], threshold=0.5  # , img_size=512
)  # > np.ndarray[conf1, conf2], np.ndarray[label1, label2]
for img, conf, label in zip([fundus1, fundus2], confs, labels):
    fit.plot_quality(img, conf, label, threshold=0.5)
```

img_size defaults to 512 for backward compatibility with v0.1.1. You can pass a custom value, e.g. to avoid forced upsampling when your inputs are smaller, but note that the model was trained at img_size=512. Prediction scores can shift with image size, so it is advised to keep it constant within a project.
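
The threshold-to-label mapping used above can be sketched as follows; `confidences_to_labels` is a hypothetical helper for illustration, not part of the package, which returns labels directly from `ensemble_predict_quality`.

```python
def confidences_to_labels(confs, threshold=0.5):
    # Map per-image gradeability confidences to binary labels:
    # 1 (gradeable) if the confidence reaches the threshold, else 0.
    return [int(c >= threshold) for c in confs]

labels = confidences_to_labels([0.91, 0.32], threshold=0.5)
print(labels)  # [1, 0]
```

Raising the threshold trades recall for precision on the "gradeable" class, which is why keeping both `threshold` and `img_size` fixed within a project makes results comparable.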

```python
import fundus_image_toolbox as fit

# Registration: align fundus2 to fundus1
config = fit.get_registration_config()
# If desired, change the config dictionary
model, matcher = fit.load_registration_model(config)

moving_image_aligned = fit.register(
    fundus1,
    fundus2,
    show=True,
    show_mapping=False,
    config=config,
    model=model,
    matcher=matcher,
)  # > np.ndarray (h_in, w_in, 3) uint8

# Vessel segmentation
ensemble = fit.load_segmentation_ensemble(device="cuda:0")
vessel_masks = fit.ensemble_predict_segmentation(
    ensemble, [fundus1, fundus2], threshold=0.5, size=(512, 512)
)  # > np.ndarray[np.ndarray[h_in, w_in], ...] float64
fit.plot_masks([fundus1, fundus2], vessel_masks)
```
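
When comparing a predicted vessel mask against a reference annotation, a common summary is the Dice coefficient. The sketch below is a hypothetical, list-based illustration (the package itself returns numpy masks and provides no `dice` helper), assuming both masks have already been binarized with the same threshold.

```python
def dice(mask_a, mask_b):
    # Dice coefficient for two flat binary masks (sequences of 0/1):
    # 2 * |intersection| / (|A| + |B|); defined as 1.0 for two empty masks.
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred = [1, 1, 0, 0]
ref  = [1, 0, 0, 0]
print(dice(pred, ref))  # 0.666...
```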

Installation

Install the toolbox

You can install the latest tagged version of the toolbox by running:

pip install fundus_image_toolbox

or the latest development version from GitHub by running:

pip install git+https://github.com/berenslab/fundus_image_toolbox

Create a virtual environment

Alternatively, create a new virtual environment with uv and install the toolbox into it:

uv venv
source .venv/bin/activate
uv pip install fundus_image_toolbox

or add it to your current uv project with uv add fundus_image_toolbox.

Or use conda (less recommended): create a new conda environment and install the toolbox there:

conda create --name fundus_image_toolbox python=3.12 pip
conda activate fundus_image_toolbox

And then pip install fundus_image_toolbox or pip install . from inside the new environment.

Caching

  • Weights for the registration, fundus_od_localization and quality_prediction models are stored in the OS default cache directory. Set the environment variable FIT_CACHE_DIR to configure it, or pass the cache_dir argument to the respective model loading functions.
  • If no cache_dir is passed and nothing is found in the default cache location, FIT also checks the legacy package-internal model paths for backward compatibility with versions <= 0.1.1.
  • FIT models were trained from ImageNet-initialized weights. Those torch/torchvision weights are stored in the default PyTorch cache directory, configurable via the TORCH_HOME environment variable.
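
The resolution order described above (explicit argument, then environment variable, then OS default) can be sketched like this; `resolve_cache_dir` and the fallback path are hypothetical illustrations, not the package's actual internals.

```python
import os
from pathlib import Path

def resolve_cache_dir(cache_dir=None) -> Path:
    # Hypothetical precedence sketch:
    # 1. an explicit cache_dir argument wins,
    # 2. else the FIT_CACHE_DIR environment variable,
    # 3. else an illustrative OS default location.
    if cache_dir is not None:
        return Path(cache_dir)
    env = os.environ.get("FIT_CACHE_DIR")
    if env:
        return Path(env)
    return Path.home() / ".cache" / "fundus_image_toolbox"

os.environ["FIT_CACHE_DIR"] = "/tmp/fit_cache"
print(resolve_cache_dir())  # /tmp/fit_cache (on POSIX)
```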

Contribute

You are very welcome to contribute to the toolbox. Please raise an Issue for bugs, or create a Pull request for fixes and added features. For everything else, the Contribution Discussion is the right place. Please feel free to contact us there if you have any proposals, questions or need help.

Cite

If you use this toolbox in your research, please consider citing it:

Gervelmeyer et al., (2025). Fundus Image Toolbox: A Python package for fundus image processing. Journal of Open Source Software, 10(108), 7101, https://doi.org/10.21105/joss.07101

Bibtex
@article{Gervelmeyer2025-fit,
  title     = "Fundus Image Toolbox: A Python package for fundus image processing",
  author    = "Gervelmeyer, Julius and M{\"u}ller, Sarah and Huang, Ziwei and Berens, Philipp",
  journal   = "Journal of Open Source Software",
  publisher = "The Open Journal",
  volume    =  10,
  number    =  108,
  pages     = "7101",
  month     =  apr,
  year      =  2025,
  doi       = "10.21105/joss.07101",
  }

If you use parts of the toolbox that interface external methods (e.g. SuperRetina for registration or the FR-U-Net ensemble for vessel segmentation), please also consider citing the respective papers.

OS Compatibility

v0.1.2

| Python Version | Linux (Rocky 8.8, Kernel 4.18) | macOS (Sequoia 15.7) | Windows (11 Pro) |
| --- | --- | --- | --- |
| 3.9 |  |  |  |
| 3.10 |  |  |  |
| 3.11 |  |  |  |
| 3.12 |  |  |  |

✅ Supported & all tests successful   🔸 Partly supported: sample notebooks succeed but automatic tests fail partly   ❓ Untested/unknown   ❌ Not supported

v0.1.1

| Python Version | Linux (Rocky 8.8, Kernel 4.18) | macOS (Sequoia 15.7) | Windows (11 Pro) |
| --- | --- | --- | --- |
| 3.9 |  |  |  |
| 3.10 |  |  |  |
| 3.11 |  |  |  |
| 3.12 |  |  |  |

License

The toolbox is licensed under the MIT License. See the license file for more information.
