Z-SSMNet: Zonal-aware Self-Supervised Mesh Network for Prostate Cancer Detection and Diagnosis with Bi-parametric MRI
This repository provides the official implementation of the paper "Z-SSMNet: Zonal-aware Self-Supervised Mesh Network for Prostate Cancer Detection and Diagnosis with bpMRI". In this paper, we propose a new Zonal-aware Self-supervised Mesh Network that adaptively fuses multiple 2D/2.5D/3D CNNs to effectively balance the representation of sparse inter-slice information and dense intra-slice information in bpMRI. A self-supervised learning (SSL) technique is further introduced to pre-train our network on unlabelled data so that it learns generalizable image features. Furthermore, we constrain our network to incorporate zonal-specific domain knowledge to improve the precision of diagnosing clinically significant prostate cancer (csPCa). Our model was developed on the PI-CAI dataset while participating in the PI-CAI challenge.
Please cite the following paper if you use Z-SSMNet:
@article{yuan2025z,
title={Z-SSMNet: Zonal-aware Self-supervised Mesh Network for prostate cancer detection and diagnosis with Bi-parametric MRI},
author={Yuan, Yuan and Ahn, Euijoon and Feng, Dagan and Khadra, Mohamed and Kim, Jinman},
journal={Computerized Medical Imaging and Graphics},
pages={102510},
year={2025},
publisher={Elsevier}
}
Please feel free to raise any issues you encounter here.
Z-SSMNet can be pip-installed directly:
pip install git+https://github.com/yuanyuan29/Z-SSMNet.git

Alternatively, Z-SSMNet can be installed from source:
git clone https://github.com/yuanyuan29/Z-SSMNet.git
cd Z-SSMNet
pip install -e .

This ensures the scripts are present locally, which lets you run the provided Python scripts. The -e option additionally allows you to modify the offered solutions in place.
We define setup steps that must be completed before following the algorithm tutorials.
We define three main folders that must be prepared a priori:
/input/ contains one of the PI-CAI datasets. This can be the Public Training and Development Dataset, the Private Training Dataset, the Hidden Validation and Tuning Cohort, or the Hidden Testing Cohort.
/workdir/ stores intermediate results, such as preprocessed images and annotations.
/workdir/results/[model name]/ stores model checkpoints/weights during training (enables the ability to pause/resume training).
/output/ stores training output, such as trained model weights and the preprocessing plan.
Unless specified otherwise, this tutorial assumes that the PI-CAI: Public Training and Development Dataset will be downloaded and unpacked. Before downloading the dataset, read its documentation and dedicated forum post (for all updates/fixes, if any). To download and unpack the dataset, run the following commands:
# download all folds
curl -C - "https://zenodo.org/record/6624726/files/picai_public_images_fold0.zip?download=1" --output picai_public_images_fold0.zip
curl -C - "https://zenodo.org/record/6624726/files/picai_public_images_fold1.zip?download=1" --output picai_public_images_fold1.zip
curl -C - "https://zenodo.org/record/6624726/files/picai_public_images_fold2.zip?download=1" --output picai_public_images_fold2.zip
curl -C - "https://zenodo.org/record/6624726/files/picai_public_images_fold3.zip?download=1" --output picai_public_images_fold3.zip
curl -C - "https://zenodo.org/record/6624726/files/picai_public_images_fold4.zip?download=1" --output picai_public_images_fold4.zip
# unzip all folds
unzip picai_public_images_fold0.zip -d /input/images/
unzip picai_public_images_fold1.zip -d /input/images/
unzip picai_public_images_fold2.zip -d /input/images/
unzip picai_public_images_fold3.zip -d /input/images/
unzip picai_public_images_fold4.zip -d /input/images/

In case unzip is not installed, you can use Docker to unzip the files:
docker run --cpus=2 --memory=8gb --rm -v /path/to/input:/input yuanyuan29/z-ssmnet:latest unzip /input/picai_public_images_fold0.zip -d /input/images/
docker run --cpus=2 --memory=8gb --rm -v /path/to/input:/input yuanyuan29/z-ssmnet:latest unzip /input/picai_public_images_fold1.zip -d /input/images/
docker run --cpus=2 --memory=8gb --rm -v /path/to/input:/input yuanyuan29/z-ssmnet:latest unzip /input/picai_public_images_fold2.zip -d /input/images/
docker run --cpus=2 --memory=8gb --rm -v /path/to/input:/input yuanyuan29/z-ssmnet:latest unzip /input/picai_public_images_fold3.zip -d /input/images/
docker run --cpus=2 --memory=8gb --rm -v /path/to/input:/input yuanyuan29/z-ssmnet:latest unzip /input/picai_public_images_fold4.zip -d /input/images/

Please follow the instructions here to set up the Docker container.
Also, collect the training annotations via the following command:
git clone https://github.com/DIAGNijmegen/picai_labels /input/labels/

We use the 5-fold cross-validation splits of all 1500 cases in the PI-CAI: Public Training and Development Dataset, as prepared by the PI-CAI challenge organizers. There is no patient overlap between training/validation splits. You can load these splits as follows:
from z_ssmnet.splits.picai import train_splits, valid_splits
for fold, ds_config in train_splits.items():
    print(f"Training fold {fold} has cases: {ds_config['subject_list']}")

for fold, ds_config in valid_splits.items():
    print(f"Validation fold {fold} has cases: {ds_config['subject_list']}")

Additionally, the organizers prepared 5-fold cross-validation splits of all cases with an expert-derived csPCa annotation. These splits are subsets of the splits above. You can load these splits as follows:
from z_ssmnet.splits.picai_nnunet import train_splits, valid_splits

When using picai_eval from the command line, we recommend saving the splits to disk. Then, you can pass these to picai_eval to ensure all cases were found. You can export the labelled cross-validation splits using:
python -m z_ssmnet.splits.picai_nnunet --output "/workdir/splits/picai_nnunet"

We follow the nnU-Net Raw Data Archive format to prepare our dataset for usage. For this, you can use the picai_prep module. Note that the picai_prep module should be installed automatically when installing the Z-SSMNet module, and it is installed within the z-ssmnet Docker container as well.
To convert the dataset in /input/ into the nnU-Net Raw Data Archive format, and store it in /workdir/nnUNet_raw_data, please follow the instructions provided here, or set your target paths in prepare_data_semi_supervised.py and execute it:
python src/z_ssmnet/prepare_data_semi_supervised.py

To adapt/modify the preprocessing pipeline or its default specifications, please make changes to the prepare_data_semi_supervised.py script accordingly.
Alternatively, you can use Docker to run the Python script:
docker run --cpus=2 --memory=16gb --rm \
-v /path/to/input/:/input/ \
-v /path/to/workdir/:/workdir/ \
-v /path/to/Z-SSMNet:/scripts/Z-SSMNet/ \
yuanyuan29/z-ssmnet:latest python3 /scripts/Z-SSMNet/src/z_ssmnet/prepare_data_semi_supervised.py

If you want to train the supervised model (using only the data with manual labels), prepare the dataset using prepare_data.py and replace Task2302_z-nnmnet with Task2301_z-nnmnet in the following commands.
The implementation of the model consists of three main parts:
The prostate consists of the peripheral zone (PZ), transition zone (TZ), central zone (CZ) and anterior fibromuscular stroma (AFS). Prostate cancer (PCa) lesions located in different zones have different characteristics. Moreover, approximately 70%-75% of PCa originate in the PZ and 20%-30% in the TZ [1]. In this work, we trained a standard 3D nnU-Net [2] on external public datasets to generate binary prostate zonal anatomy masks (the peripheral zone vs. the rest of the gland (TZ, CZ, AFS)) as additional input information, guiding the network to learn region-specific knowledge useful for clinically significant PCa (csPCa) detection and diagnosis.
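As a minimal sketch of this idea (with made-up array shapes and random data, not the repository's actual data loader), the binary zonal mask can simply be stacked as an extra input channel alongside the three bpMRI sequences:

```python
import numpy as np

# Hypothetical bpMRI volumes, each of shape (depth, height, width).
t2w = np.random.rand(20, 256, 256).astype(np.float32)
adc = np.random.rand(20, 256, 256).astype(np.float32)
dwi = np.random.rand(20, 256, 256).astype(np.float32)

# Binary zonal anatomy mask: 1 = peripheral zone, 0 = rest of the gland
# (TZ, CZ, AFS) and background. In practice this is predicted by the 3D nnU-Net.
pz_mask = (np.random.rand(20, 256, 256) > 0.5).astype(np.float32)

# Stack the zonal mask as an additional input channel, giving the network
# explicit zonal context for each voxel.
x = np.stack([t2w, adc, dwi, pz_mask], axis=0)  # shape: (4, 20, 256, 256)
```

The network then sees four input channels instead of three; everything else in the pipeline is unchanged.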
→ Read the full documentation here.
SSL is a general learning framework that relies on surrogate (pretext) tasks that can be formulated using only unsupervised data. A pretext task is designed such that solving it requires learning image representations that are valuable for the downstream (main) task, which improves the generalization ability and performance of the model. We introduced image restoration as the pretext task and pre-trained our zonal-aware mesh network in a self-supervised manner.
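To illustrate the image-restoration pretext task, here is a simplified numpy sketch of one distortion (random block masking; Models Genesis-style pipelines combine several, e.g. local shuffling and in/out-painting). The network would be trained to restore the original patch from the corrupted one with a reconstruction (MSE) loss; the shapes and block sizes below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(patch, n_blocks=10, block=8):
    """Overwrite random cubic blocks with noise — a simple
    image-restoration distortion for SSL pre-training."""
    out = patch.copy()
    d, h, w = patch.shape
    for _ in range(n_blocks):
        z = rng.integers(0, d - block)
        y = rng.integers(0, h - block)
        x = rng.integers(0, w - block)
        out[z:z+block, y:y+block, x:x+block] = rng.random((block, block, block))
    return out

patch = rng.random((32, 64, 64)).astype(np.float32)
corrupted = corrupt(patch)

# Pretext objective: a network f restores `patch` from `corrupted`,
# trained with MSE(f(corrupted), patch). Here we just compute the loss
# an identity mapping would incur, to show the target of the task.
mse = float(np.mean((patch - corrupted) ** 2))
```

Because the restoration target is the original image itself, no manual labels are needed, so all unlabelled bpMRI scans can be used for pre-training.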
→ Read the full documentation here.
Considering the heterogeneity of data acquired across multiple centres and vendors, we integrated the zonal-aware mesh network into the widely used nnU-Net framework, which provides a performant, self-configuring pipeline for medical image segmentation, to form the Z-nnMNet, which pre-processes the data adaptively. For large labelled datasets, the model can be trained from scratch. If the dataset is small, or some of its labels are noisy, fine-tuning from the SSL pre-trained model can help achieve better performance.
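The fine-tuning initialisation amounts to transferring only the weights shared between the SSL pre-trained network and the downstream model, while task-specific layers start from scratch. A toy sketch of that idea (parameter names are hypothetical, not the actual Z-SSMNet layer names; in PyTorch this corresponds to load_state_dict(..., strict=False)):

```python
# Hypothetical SSL checkpoint: encoder weights are reusable, the
# restoration decoder is specific to the pretext task.
pretrained = {
    "encoder.conv1.weight": "w_enc1",  # learned during SSL pre-training
    "decoder.up1.weight": "w_dec1",    # restoration head, not reused
}

# Hypothetical downstream model: shared encoder plus a new csPCa head.
model = {
    "encoder.conv1.weight": None,  # to be initialised from SSL weights
    "seg_head.weight": None,       # new layer, trained from scratch
}

# Transfer only the parameters whose names (and, in practice, shapes)
# match the downstream model.
transferred = {k: v for k, v in pretrained.items() if k in model}
model.update(transferred)
```

After this initialisation, the whole network is trained end-to-end on the labelled csPCa data.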
→ Read the full documentation here.
[1] J. C. Weinreb, J. O. Barentsz, P. L. Choyke, F. Cornud, M. A. Haider, K. J. Macura, D. Margolis, M. D. Schnall, F. Shtern, C. M. Tempany, H. C. Thoeny, and S. Verma, “PI-RADS Prostate Imaging - Reporting and Data System: 2015, Version 2,” European Urology, vol. 69, no. 1, pp. 16-40, Jan, 2016.
[2] F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,” Nature Methods, vol. 18, no. 2, pp. 203-211, Feb, 2021.
[3] Z. Dong, Y. He, X. Qi, Y. Chen, H. Shu, J.-L. Coatrieux, G. Yang, and S. Li, “MNet: Rethinking 2D/3D Networks for Anisotropic Medical Image Segmentation,” Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}, Jul 2022, Vienna, Austria. pp.870-876.
[4] Z. Zhou, V. Sodha, J. Pang, M. B. Gotway, and J. Liang, “Models Genesis,” Med Image Anal, vol. 67, pp. 101840, Jan, 2021.
[5] A. Saha, J. J. Twilt, J. S. Bosma, B. van Ginneken, D. Yakar, M. Elschot, J. Veltman, J. J. Fütterer, M. de Rooij, H. Huisman, "Artificial Intelligence and Radiologists at Prostate Cancer Detection in MRI: The PI-CAI Challenge (Study Protocol)", DOI: 10.5281/zenodo.6667655