
feat(panda): standardize nuclei seg data #9

Merged
xrusnack merged 18 commits into master from panda/data-standardization
Apr 1, 2026

Conversation

@xrusnack
Member

@xrusnack xrusnack commented Mar 24, 2026

Depends on PR #8

Summary by CodeRabbit

  • New Features

    • Added nuclei data standardization pipeline for automated segmentation processing
  • Chores

    • Updated PANDA dataset configuration with nuclei dataset paths and artifact references

@xrusnack xrusnack requested review from matejpekar and vejtek March 24, 2026 10:58
@xrusnack xrusnack self-assigned this Mar 24, 2026
@xrusnack xrusnack requested a review from a team March 24, 2026 10:58
@coderabbitai

coderabbitai Bot commented Mar 24, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d608fafd-ab07-4268-aa39-f94c7f7dd245

📥 Commits

Reviewing files that changed from the base of the PR and between 3e85c4f and 874b5e3.

📒 Files selected for processing (3)
  • configs/data/sources/panda.yaml
  • exploration/panda/save_metadataset.py
  • preprocessing/nuclei_standardization.py
✅ Files skipped from review due to trivial changes (1)
  • exploration/panda/save_metadataset.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • configs/data/sources/panda.yaml
  • preprocessing/nuclei_standardization.py

📝 Walkthrough

Walkthrough

Introduces a nuclei standardization processing pipeline for the PANDA dataset with configuration, a Ray-backed preprocessing script that standardizes nuclei segmentation data into polygons and centroids, and a Kubernetes job submission script. Updates dataset configuration paths and MLflow artifact reference.

Changes

Cohort / File(s) Summary
Dataset Configuration
configs/data/sources/panda.yaml
Added nuclei dataset paths (raw and standardized output directories) and updated MLflow artifact URI for Radboud split reference.
Preprocessing Configuration
configs/preprocessing/nuclei_standardization.yaml
New Hydra configuration defining dataset-resolved parameters for nuclei standardization (metadata URI, source/output paths, concurrency limit).
Nuclei Standardization Processing
preprocessing/nuclei_standardization.py
New Ray-backed worker script that reads nuclei segmentation Parquet files, converts radial distance representations to polygon vertices via polar coordinate sampling, computes centroids, and writes standardized output with deterministic IDs. Includes Hydra/MLflow-logged main entrypoint with concurrent processing orchestration.
Kubernetes Job Submission
scripts/preprocessing/run_nuclei_standardization.py
New script submitting Kubernetes job to execute nuclei standardization pipeline with specified resource allocation (16 CPUs, 64Gi memory) and secure storage volume mounts.
Exploration Refactoring
exploration/panda/save_metadataset.py
Removed intermediate log_file variable binding; now passes error log path directly to get_dataframes().
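The radial-to-polygon conversion, centroid computation, and deterministic ID generation summarized above can be sketched roughly as follows. This is a hedged illustration, not the PR's code: the function names, the evenly spaced angle-sampling convention, and the `slide_id:index` hash key are all assumptions.

```python
import hashlib

import numpy as np


def radial_to_polygon(center_x, center_y, radial_distances):
    """Convert a radial-distance representation to polygon vertices.

    Each distance is sampled at an evenly spaced angle around the
    nucleus center (polar coordinate sampling).
    """
    radii = np.asarray(radial_distances, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, num=len(radii), endpoint=False)
    xs = center_x + radii * np.cos(angles)
    ys = center_y + radii * np.sin(angles)
    return np.stack([xs, ys], axis=1)


def centroid(polygon):
    """Mean of the vertex coordinates (simple vertex centroid)."""
    return polygon.mean(axis=0)


def deterministic_id(slide_id, nucleus_index):
    """Stable SHA-256-based identifier for a nucleus (assumed key format)."""
    key = f"{slide_id}:{nucleus_index}".encode()
    return hashlib.sha256(key).hexdigest()


# A symmetric example: four equal radii around (10, 10) give a square
# whose centroid is the center itself.
poly = radial_to_polygon(10.0, 10.0, [2.0, 2.0, 2.0, 2.0])
cx, cy = centroid(poly)
```

Because the ID is a hash of stable inputs rather than a random UUID, re-running the pipeline over the same slide yields the same identifiers, which is what makes the output reproducible across runs.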

Sequence Diagram

```mermaid
sequenceDiagram
    participant Config as Hydra Config
    participant Main as Main Process
    participant MLflow as MLflow
    participant Ray as Ray Cluster
    participant Disk as Disk Storage
    participant Worker as Ray Worker

    Main->>Config: Load nuclei_standardization config
    Main->>MLflow: Download metadata from config.metadata_uri
    MLflow-->>Main: Metadata with slide_id, segmentation_id
    Main->>Main: Filter rows where has_segmentation=true
    Main->>Ray: Dispatch work via process_items(max_concurrent=10)
    Note over Ray: Concurrent processing of slide/segmentation pairs
    Ray->>Worker: standardize_nuclei(item, output_dir, nuclei_dir)
    Worker->>Disk: Read nuclei.parquet (points, radial_distances)
    Worker->>Worker: Convert radial distances to polygon vertices<br/>(polar coordinate sampling)
    Worker->>Worker: Compute centroid from polygon vertices
    Worker->>Worker: Generate deterministic ID (SHA-256 hash)
    Worker->>Disk: Write standardized nuclei.parquet<br/>(id, polygon, centroid)
    Ray-->>Main: All items processed
    Main->>MLflow: Log run metrics
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • matejpekar
  • vejtek

Poem

🐰 Hop, hop, the nuclei now standardize!
Polygons bloom where radii did rise,
Ray workers dance through the Kubernetes sky,
Each centroid computed, each ID held high!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check (✅ Passed): The title 'feat(panda): standardize nuclei seg data' clearly describes the main objective of the pull request: adding nuclei segmentation data standardization for the PANDA dataset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing data management and preprocessing capabilities, particularly for nuclei segmentation data. It introduces a standardized approach for handling the PANDA dataset by defining its specific data configurations and implementing a robust script to standardize nuclei segmentation files. This standardization involves converting radial distance representations into Cartesian polygons and assigning globally unique identifiers to each nucleus, which is crucial for consistent downstream analysis. Furthermore, the PR refactors the base data path configuration to improve flexibility and maintainability across different datasets.

Highlights

  • PANDA Dataset Integration: Introduced a new configuration file (configs/data/sources/panda.yaml) to define data paths and MLflow URIs specifically for the PANDA dataset.
  • Nuclei Data Standardization: Added a new Python script (preprocessing/nuclei_standardization.py) and its corresponding configuration (configs/preprocessing/standardize_nuclei.yaml) to process nuclei segmentation data, converting radial distances to Cartesian polygons and assigning unique IDs.
  • Generalized Data Path: Modified configs/base.yaml to generalize the data_path variable, and updated configs/data/sources/prostate_cancer.yaml to use this new, more flexible path structure.
  • Kubernetes Job for Standardization: Included a new script (scripts/preprocessing/run_nuclei_standardization.py) to facilitate running the nuclei standardization process as a Kubernetes job.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in sharing feedback on your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a new feature to standardize nuclei segmentation data, specifically for the PANDA dataset. It includes updates to base and dataset-specific configurations, along with a new Python script for processing nuclei data and a job submission script. The changes correctly integrate new data paths and configurations within the existing Hydra and MLflow setup. The core logic for converting radial distances to Cartesian polygons and generating unique nucleus IDs is well-implemented. The overall structure and approach align well with the repository's research focus and existing conventions.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
configs/data/sources/panda.yaml (2)

11-11: Consider using path interpolation for consistency.

This path is hardcoded while others use ${project_path} or ${data_path} interpolations. If this intentionally references a separate project location, consider documenting it with a comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@configs/data/sources/panda.yaml` at line 11, The slides_properties entry uses
a hardcoded absolute path instead of the project's interpolation variables;
update slides_properties to use the same interpolation pattern (e.g.,
${project_path} or ${data_path}) as other entries so it becomes consistent with
the rest of the config, or if it intentionally points to a different project,
add a brief inline comment next to slides_properties explaining why the absolute
path is required; locate the slides_properties key in the YAML to make the
change.

14-14: Consider using path interpolation for consistency.

Similar to slides_properties, this hardcoded path differs from the pattern used elsewhere. If this is the expected source location from another project, a brief comment would clarify the intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@configs/data/sources/panda.yaml` at line 14, The entry nuclei_seg_radial uses
a hardcoded absolute path; change it to use the same path interpolation pattern
as slides_properties (e.g., reference project/dataset variables or an
interpolated base path) so it remains consistent with other sources, or if this
exact external path is intentional add a one-line comment explaining it's an
expected external location; update the nuclei_seg_radial value accordingly and
ensure it follows the project's interpolation tokens/variable names used by
slides_properties.
scripts/preprocessing/run_nuclei_standardization.py (1)

11-16: Consider pinning to a specific branch or commit.

The script clones the default branch without specifying a version. For reproducibility, consider adding a branch/tag/commit reference, especially since this PR targets master but originates from panda/data-standardization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/preprocessing/run_nuclei_standardization.py` around lines 11 - 16,
The git clone in the script array currently clones the repository without a
pinned ref, which harms reproducibility; update the "git clone ..." command in
scripts/preprocessing/run_nuclei_standardization.py (the script variable/array)
to include a specific branch, tag, or commit (e.g., append --branch <branch> or
clone then checkout a commit) so the pipeline always uses the intended revision
from the panda/data-standardization work (ensure the chosen ref corresponds to
the PR's source branch or a stable tag).
preprocessing/nuclei_standardization.py (1)

22-23: Consider increasing memory allocation for larger partitions.

1 GiB memory per worker may be tight if partitions contain many nuclei with large radial distance arrays. Monitor for OOM issues during execution.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@preprocessing/nuclei_standardization.py` around lines 22 - 23, The Ray task
decorator on standardize_nuclei currently sets memory=(1 * 1024**3) which may be
insufficient for large partitions; update the `@ray.remote` annotation for
standardize_nuclei to allocate more memory (e.g., 2-4 GiB) or make the memory
value configurable via a constant/env var so it can be tuned at runtime, and
ensure any tests/launch scripts that spawn standardize_nuclei workers are
updated to use the new configurable value; reference the `@ray.remote` decorator
and the standardize_nuclei function when making the change.
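One way to make the worker memory tunable, as the comment suggests. The environment variable name and the 2 GiB default are assumptions; the actual script hardcodes 1 GiB in its @ray.remote decorator:

```python
import os


def worker_memory_bytes(default_gb: int = 2) -> int:
    """Resolve worker memory from a hypothetical STANDARDIZE_MEMORY_GB
    environment variable, falling back to default_gb GiB."""
    return int(os.environ.get("STANDARDIZE_MEMORY_GB", default_gb)) * 1024**3


# The Ray decorator would then read (assumed shape, per the review note):
#
#   @ray.remote(memory=worker_memory_bytes())
#   def standardize_nuclei(item, output_dir, nuclei_dir):
#       ...
```

Reading the value at decoration time keeps the knob out of the Hydra config, which may or may not be desirable; routing it through the existing config would be the alternative.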
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@preprocessing/nuclei_standardization.py`:
- Around line 64-72: The slide directory is being constructed from the wrong
field (using row.id) causing a mismatch with the recorded slide_id; update the
slide_dir construction so it uses row.slide_id instead of row.id (the block that
sets slide_dir = nuclei_dir / f"slide_id={...}" and then globs partition_files)
so the directory naming matches the slide_id stored when appending to
items_to_process (keys: "nuclei_partition", "slide_id") and ensure no other
references to row.id remain in this loop.
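A sketch of the corrected lookup the inline comment asks for. Only the row.slide_id-versus-row.id fix comes from the comment; the helper name, the dict-style row access, and the partition-file layout are assumptions:

```python
from pathlib import Path


def collect_items(rows, nuclei_dir: Path):
    """Build the work list, keying slide directories by slide_id."""
    items_to_process = []
    for row in rows:
        # Use the slide identifier, not the row's own id, so the
        # directory name matches the slide_id recorded in each item.
        slide_dir = nuclei_dir / f"slide_id={row['slide_id']}"
        for partition in sorted(slide_dir.glob("*.parquet")):
            items_to_process.append(
                {"nuclei_partition": partition, "slide_id": row["slide_id"]}
            )
    return items_to_process
```

With row.id in place of row.slide_id, the glob would silently match nothing whenever the two fields differ, so each item's partition path would disagree with its recorded slide_id.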


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: adcb34b0-e9e2-4aad-8cc5-3b3de10ee337

📥 Commits

Reviewing files that changed from the base of the PR and between a377343 and 0fb0412.

📒 Files selected for processing (6)
  • configs/base.yaml
  • configs/data/sources/panda.yaml
  • configs/data/sources/prostate_cancer.yaml
  • configs/preprocessing/nuclei_standardization.yaml
  • preprocessing/nuclei_standardization.py
  • scripts/preprocessing/run_nuclei_standardization.py

@xrusnack xrusnack marked this pull request as draft March 27, 2026 13:11
@xrusnack xrusnack marked this pull request as ready for review March 30, 2026 10:59
matejpekar
matejpekar previously approved these changes Mar 30, 2026
vejtek
vejtek previously approved these changes Mar 30, 2026
@xrusnack xrusnack dismissed stale reviews from vejtek and matejpekar via 874b5e3 April 1, 2026 08:09
@xrusnack xrusnack requested review from matejpekar and vejtek April 1, 2026 08:11
@xrusnack xrusnack merged commit 4cddd75 into master Apr 1, 2026
3 of 4 checks passed
@xrusnack xrusnack deleted the panda/data-standardization branch April 1, 2026 14:11
xrusnack added a commit that referenced this pull request Apr 14, 2026
xrusnack added a commit that referenced this pull request Apr 14, 2026