📝 Walkthrough

This PR introduces a new Multiple Instance Learning (MIL) framework for nuclei graph analysis. It adds experiment and model configuration files, along with a new LightningModule that implements training, validation, testing, and prediction workflows with graph- and nuclei-level metric tracking.
Sequence Diagram

sequenceDiagram
    participant Batch as Data Batch
    participant Forward as forward()
    participant Net as Wrapped Network
    participant Loss as Loss Computation
    participant Metrics as Metric Updates
    participant Epoch as Epoch Callbacks
    Batch->>Forward: Supply batch data
    activate Forward
    Forward->>Forward: Extract block_mask
    alt Not Training
        Forward->>Forward: Apply mask_mixed_blocks
    end
    Forward->>Net: Forward pass
    Net-->>Forward: Return logits & outputs
    deactivate Forward
    alt Training Step
        Forward->>Loss: Compute BCEWithLogits<br/>on graph logits/targets
        Loss-->>Forward: Training loss
        Forward->>Metrics: Log train/graph/loss
    else Validation/Test Step
        Forward->>Loss: Compute graph loss
        Loss-->>Metrics: Update graph metrics
        Forward->>Loss: Compute nuclei loss<br/>(if supervised)
        Loss-->>Metrics: Update nuclei metrics
        Metrics-->>Epoch: Accumulate losses
    end
    Epoch->>Metrics: on_validation/test_epoch_end()
    activate Epoch
    Metrics->>Metrics: Compute metric collections
    Metrics->>Metrics: Reset accumulators
    alt Validation
        Metrics->>Metrics: Track best graph loss
        Metrics-->>Epoch: Log best/... metrics
    end
    deactivate Epoch
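The graph-level training loss in the diagram is BCEWithLogits. As a rough stdlib-only sketch of what that computes (the function name and sample values below are illustrative, not taken from the PR):

```python
import math

def bce_with_logits(logit: float, target: float) -> float:
    # Numerically stable form of -[y*log(sigmoid(z)) + (1-y)*log(1-sigmoid(z))]
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

# Mean loss over a toy batch of graph-level logits/targets
graph_logits = [0.0, 2.0, -1.5]
graph_targets = [1.0, 1.0, 0.0]
loss = sum(bce_with_logits(z, y) for z, y in zip(graph_logits, graph_targets)) / 3
print(round(loss, 4))  # 0.3405
```

Computing the loss on raw logits rather than on sigmoid outputs avoids overflow for large-magnitude logits, which is the usual reason for preferring a BCE-with-logits formulation.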
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Summary of Changes

This pull request introduces a foundational Multiple Instance Learning (MIL) meta-architecture within the PyTorch Lightning framework. The primary goal is to enable robust training and evaluation of models for nuclei graph analysis, providing a structured approach to handling both graph-level and individual nuclei-level predictions. The changes facilitate the integration of self-attention transformer models and ensure comprehensive performance monitoring throughout the model lifecycle.
Code Review
This pull request introduces a new meta-architecture for Multiple Instance Learning, NucleiMILMetaArch, along with its configuration. The implementation is well-structured, but there are opportunities to improve maintainability by reducing code duplication in the validation_step/test_step and on_validation_epoch_end/on_test_epoch_end methods. I've also suggested a cleaner and more robust way to separate optimizer parameters.
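One hedged way to realize the de-duplication mentioned above is to route both steps through a single stage-keyed helper. Everything below (class and method names, the placeholder loss) is a hypothetical sketch of the pattern, not code from the PR:

```python
class EvalStepSketch:
    """Toy stand-in for a LightningModule that shares val/test logic."""

    def __init__(self):
        # Per-stage loss accumulators, mimicking separate val/test metric sets.
        self.losses = {"val": [], "test": []}

    def _shared_eval_step(self, batch, stage: str) -> float:
        # One implementation serves both stages; only the stage key differs.
        loss = self._graph_loss(batch)
        self.losses[stage].append(loss)
        return loss

    def validation_step(self, batch, batch_idx=0):
        return self._shared_eval_step(batch, "val")

    def test_step(self, batch, batch_idx=0):
        return self._shared_eval_step(batch, "test")

    @staticmethod
    def _graph_loss(batch) -> float:
        # Placeholder for the real graph-level loss computation.
        return sum(batch) / len(batch)

m = EvalStepSketch()
m.validation_step([1.0, 2.0, 3.0])
m.test_step([4.0, 6.0])
print(m.losses)  # {'val': [2.0], 'test': [5.0]}
```

With this shape, a fix to the evaluation logic lands in one place, and the same idea extends to `on_validation_epoch_end`/`on_test_epoch_end` via a shared epoch-end helper keyed by stage.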
Actionable comments posted: 1
🧹 Nitpick comments (1)
nuclei_graph/nuclei_mil_meta_arch.py (1)
46-50: Unused metric collections.
`predict_graph_metrics` and `predict_nuclei_metrics` are instantiated but never used; `predict_step` returns raw outputs without updating these metrics. Consider removing them to avoid unnecessary memory allocation, or implement metric updates in `predict_step` if prediction-time metrics are intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nuclei_graph/nuclei_mil_meta_arch.py` around lines 46 - 50, predict_graph_metrics and predict_nuclei_metrics are created but never updated in predict_step; either remove their creation or update predict_step to record prediction-time metrics. Locate the metric initializations (predict_graph_metrics, predict_nuclei_metrics) and either delete those lines and any related references, or modify predict_step to compute and call the appropriate update/compute methods on predict_graph_metrics and predict_nuclei_metrics (matching how val/test metrics are updated) so prediction outputs are logged.
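If prediction-time metrics are wanted, the fix could mirror the update/compute/reset cycle the val/test paths already follow. Below is a minimal stand-in for a torchmetrics-style accumulator; all names and the placeholder forward pass are hypothetical, not from the module:

```python
class MeanMetric:
    """Tiny accumulator exposing the update/compute/reset interface
    that metric collections conventionally follow."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value: float) -> None:
        self.total += value
        self.count += 1

    def compute(self) -> float:
        return self.total / self.count

    def reset(self) -> None:
        self.total, self.count = 0.0, 0

predict_graph_metrics = MeanMetric()

def predict_step(batch):
    outputs = [0.5 * x for x in batch]          # placeholder forward pass
    predict_graph_metrics.update(sum(outputs))  # update instead of leaving the metric unused
    return outputs

predict_step([2.0, 4.0])
print(predict_graph_metrics.compute())  # 3.0
```

The alternative fix, simply deleting the two unused collections, is equally valid if predictions are only consumed downstream.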
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nuclei_graph/nuclei_mil_meta_arch.py`:
- Line 60: Fix the typo in the inline comment in nuclei_mil_meta_arch.py where
"pediction" is written; update the comment text to read "prediction" (this is
the comment near the mixed blocks handling in the nuclei_mil_meta_arch module).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 40b9e3f8-39b8-4758-877e-11417be23865
📒 Files selected for processing (4)
- configs/experiment/modeling/training/crop_level.yaml
- configs/model/meta_archs/nuclei_mil.yaml
- nuclei_graph/__init__.py
- nuclei_graph/nuclei_mil_meta_arch.py