
feat: implement scoring module for segments #7

Open
dhalmazna wants to merge 14 commits into master from feat/segment-and-hyperpixel-scoring

Conversation

@dhalmazna
Collaborator

@dhalmazna dhalmazna commented Mar 13, 2026

Context:
This PR introduces the scoring/ module, which provides the core mechanisms to evaluate the importance of individual image segments during the explainer's execution. It also establishes the foundation for selecting the initial starting points (seeds) for the main search algorithms by creating the surrogate dataset.

What was added:

  • scoring/segments.py: Contains the logic for scoring base segments (create_surrogate_dataset, calculate_segment_scores). It evaluates each individual segment by masking its local neighborhood and measuring the model's confidence drop. These importance scores are crucial for selecting the initial "seeds" from which the main algorithms (like MCGS) will grow.
  • scoring/hyperpixel.py: Handles the evaluation of discovered hyperpixels (calculating deltas and selecting the top-k most important regions).
  • algorithm/graph.py: Introduces the ImageGraph class, which encapsulates the static image topology (the segments tensor and a frozenset-based adj_list).
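
The ImageGraph topology and frontier expansion described above can be sketched roughly like this. This is a hypothetical minimal version: the real class stores a torch segments tensor (left generic here to keep the example dependency-free), and its method signatures may differ.

```python
from dataclasses import dataclass

# Hypothetical minimal sketch of the ImageGraph interface described above.
# The real class stores a torch segments tensor; `segments` is left generic
# here so the example stays dependency-free.
@dataclass(frozen=True)
class ImageGraph:
    segments: object                      # per-pixel segment labels (a tensor in the PR)
    adj_list: tuple[frozenset[int], ...]  # adj_list[i] = neighbours of segment i

    @property
    def num_segments(self) -> int:
        return len(self.adj_list)

    def get_frontier(self, current: frozenset[int], used: frozenset[int]) -> frozenset[int]:
        """Segments adjacent to `current` that are in neither `current` nor `used`."""
        frontier: set[int] = set()
        for seg in current:
            frontier |= self.adj_list[seg]
        return frozenset(frontier - used - current)

# 2x2 toy grid: segment 0 touches 1 and 2, segment 3 touches 1 and 2
graph = ImageGraph(
    segments=None,
    adj_list=(frozenset({1, 2}), frozenset({0, 3}), frozenset({0, 3}), frozenset({1, 2})),
)
print(graph.get_frontier(frozenset({0}), frozenset()))  # frozenset({1, 2})
```

Frozenset adjacency trades the raw speed of bitmasks for hashability and readability, which suits the frozen-dataclass design.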

Related Task:
XAI-29

Summary by CodeRabbit

  • Documentation

    • Updated README to revise algorithm description and project structure references.
  • New Features

    • Added scoring tools for computing per-segment and per-candidate impact scores and selecting top contributors.
    • Added graph utilities for exploring and sampling connected segment neighborhoods.
    • Added surrogate dataset generation for segment-level analysis.
  • Refactor

    • Segmentation output now uses a graph-based adjacency representation instead of bitmask adjacency.

@dhalmazna dhalmazna self-assigned this Mar 13, 2026
Copilot AI review requested due to automatic review settings March 13, 2026 15:05
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes the foundational scoring mechanisms for the explainer, enabling the evaluation of individual image segments and hyperpixels. By introducing a dedicated scoring module and efficient bitmask-based graph utilities, it provides the core logic for determining segment importance and selecting initial starting points for subsequent search algorithms, thereby enhancing the interpretability capabilities of the system.

Highlights

  • New Scoring Module: Introduced a new scoring/ module to house functionalities for evaluating the importance of image segments and hyperpixels, replacing the previous evaluation/ directory.
  • Segment Scoring and Surrogate Dataset Creation: Implemented scoring/segments.py which includes logic for creating a surrogate dataset by masking local neighborhoods of segments and calculating their importance scores based on model confidence drops. These scores are crucial for selecting initial 'seeds' for main search algorithms.
  • Hyperpixel Evaluation: Added scoring/hyperpixel.py to handle the evaluation of discovered hyperpixels, including calculating deltas and selecting the top-k most important regions.
  • Efficient Bitmask Graph Utilities: Integrated algorithm/bitmask_graph.py with highly efficient, low-level bitwise graph utilities (get_frontier, iter_bits) to enable fast and memory-efficient BFS traversal during surrogate dataset creation, avoiding heavy external graph libraries.
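
The bitmask utilities this highlight refers to (later replaced by the ImageGraph refactor in this same PR) might look roughly like the sketch below. The names iter_bits and get_frontier come from the review text; the bodies and the adj_masks parameter are illustrative assumptions, not the PR's actual code.

```python
from collections.abc import Iterator

def iter_bits(mask: int) -> Iterator[int]:
    """Yield the index of every set bit in a non-negative integer mask."""
    if mask < 0:
        raise ValueError("mask must be non-negative")
    while mask:
        low_bit = mask & -mask          # isolate the lowest set bit
        yield low_bit.bit_length() - 1  # its index
        mask ^= low_bit                 # clear it and continue

def get_frontier(current_mask: int, visited_mask: int, adj_masks: list[int]) -> int:
    """Union of neighbours of all segments in current_mask, minus visited ones."""
    frontier = 0
    for node_id in iter_bits(current_mask):
        frontier |= adj_masks[node_id]
    return frontier & ~visited_mask

# Path graph 0-1-2, adjacency encoded as bitmasks
adj = [0b010, 0b101, 0b010]
print(sorted(iter_bits(get_frontier(0b001, 0b001, adj))))  # [1]
```

The `mask & -mask` trick keeps iteration proportional to the number of set bits, which is what makes this kind of BFS cheap compared to scanning every segment.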
Changelog
  • README.md
    • Refactored evaluation/ directory to scoring/ in the project structure documentation.
    • Updated file paths for surrogate.py to segments.py and hyperpixel.py within the new scoring/ module.
  • ciao/algorithm/__init__.py
    • Added get_frontier and iter_bits from bitmask_graph to the module's public API.
  • ciao/algorithm/bitmask_graph.py
    • Introduced iter_bits function for efficient iteration over set bits in an integer mask.
    • Implemented get_frontier function to compute the expansion frontier for graph traversal using bitmasks.
  • ciao/scoring/__init__.py
    • Added imports for calculate_hyperpixel_deltas, select_top_hyperpixels from hyperpixel.py.
    • Added imports for calculate_segment_scores, create_surrogate_dataset from segments.py.
    • Defined __all__ to expose the new scoring functions publicly.
  • ciao/scoring/hyperpixel.py
    • Implemented calculate_hyperpixel_deltas to compute masking deltas for hyperpixel candidates using batched inference.
    • Added select_top_hyperpixels function to sort and select the most important hyperpixels based on their scores.
  • ciao/scoring/segments.py
    • Implemented create_surrogate_dataset to generate a dataset of masked image samples and their corresponding model confidence drops using a BFS-like approach with bitmasks.
    • Added calculate_segment_scores to derive neighborhood-smoothed importance scores for segments from the generated surrogate dataset.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new scoring module, which is a core component for evaluating image segment importance. The implementation is well-structured and includes efficient bitmask-based graph utilities. My review focuses on a couple of performance optimizations. Specifically, I've suggested removing some expensive logging operations that trigger redundant model forward passes and vectorizing a loop that applies masks to an image batch to better leverage PyTorch's capabilities. Overall, the changes are a solid addition to the project.


Copilot AI left a comment


Pull request overview

Introduces a new scoring/ module to quantify segment and hyperpixel importance for CIAO’s explainer workflow, plus bitmask-based graph traversal utilities to support efficient neighborhood masking.

Changes:

  • Added surrogate dataset generation and per-segment score aggregation (ciao/scoring/segments.py).
  • Added batched masking delta computation and top-k hyperpixel selection helpers (ciao/scoring/hyperpixel.py).
  • Added bitmask graph primitives for fast neighborhood/frontier expansion (ciao/algorithm/bitmask_graph.py) and exported new modules via package __init__.py files; updated README tree.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.

Show a summary per file
File | Description
  • ciao/scoring/segments.py: Builds local neighborhood masking groups and produces a surrogate dataset + segment score aggregation.
  • ciao/scoring/hyperpixel.py: Computes masking deltas for candidate hyperpixels and provides a top-k selection helper.
  • ciao/scoring/__init__.py: Exposes scoring utilities as package-level exports.
  • ciao/algorithm/bitmask_graph.py: Adds iter_bits and get_frontier utilities for bitmask-based traversal.
  • ciao/algorithm/__init__.py: Exposes algorithm utilities as package-level exports.
  • README.md: Updates module layout documentation to reflect the new scoring/ package.


@coderabbitai

coderabbitai bot commented Mar 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

This PR replaces bitmask-based adjacency with an ImageGraph dataclass, adds a scoring package (hyperpixel and segment-level functions), refactors segmentation to return ImageGraph objects, and exposes ImageGraph via the algorithm package init; README updated to reflect the new structure.

Changes

Cohort / File(s) | Summary
  • Documentation (README.md): Updated descriptions and project layout: removed bitset-efficiency claim, renamed evaluation/ → scoring/, and updated module names (lookahead_bitset.py → lookahead.py, bitmask_graph.py → graph.py).
  • Algorithm Init (ciao/algorithm/__init__.py): New package init re-exporting ImageGraph in __all__.
  • Graph Utility (ciao/algorithm/graph.py): New ImageGraph dataclass with segments and adj_list (list[frozenset[int]]); adds num_segments, get_frontier(), and sample_connected_superset().
  • Scoring Package Init (ciao/scoring/__init__.py): New package init re-exporting calculate_hyperpixel_deltas, select_top_hyperpixels, calculate_segment_scores, and create_surrogate_dataset.
  • Hyperpixel Scoring (ciao/scoring/hyperpixel.py): New module implementing batched calculate_hyperpixel_deltas() that masks segment groups and computes logit deltas, plus select_top_hyperpixels() for top-N selection.
  • Segment Scoring (ciao/scoring/segments.py): New module implementing create_surrogate_dataset() (expands neighborhoods via ImageGraph, builds binary design matrix X and target y) and calculate_segment_scores() (per-segment mean deltas with validation).
  • Segmentation Refactor (ciao/data/segmentation.py): Reworked to return ImageGraph. Replaced bitmask adjacency with list[frozenset[int]] adjacency, moved square adjacency logic to torch tensors, renamed the hex adjacency builder, and removed bitmask-related functions.
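
The per-segment aggregation described for ciao/scoring/segments.py might look roughly like this. The averaging rule (mean delta over every surrogate sample whose mask included the segment) is an assumption for illustration; the PR's exact smoothing and validation may differ.

```python
from collections import defaultdict
from statistics import mean

# Hedged sketch of calculate_segment_scores: score each segment by the mean
# delta of every surrogate sample whose binary mask included it. This
# aggregation rule is assumed, not taken from the PR's code.
def calculate_segment_scores(X, y):
    if not X:
        raise ValueError("surrogate dataset is empty")
    deltas = defaultdict(list)
    for row, delta in zip(X, y):
        for seg, included in enumerate(row):
            if included:
                deltas[seg].append(delta)
    return {seg: mean(ds) for seg, ds in deltas.items()}

# three surrogate samples over three segments, with their confidence drops
scores = calculate_segment_scores(
    [[1, 1, 0], [1, 0, 0], [0, 0, 1]],
    [0.4, 0.2, 0.05],
)
print({seg: round(s, 3) for seg, s in scores.items()})  # {0: 0.3, 1: 0.4, 2: 0.05}
```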

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant Surrogate as "create_surrogate_dataset\n(routine)"
    participant Graph as "ImageGraph"
    participant Hyperpixel as "calculate_hyperpixel_deltas\n(routine)"
    participant Predictor as "ModelPredictor"

    Client->>Surrogate: call(input_batch, image_graph, target_class_idx)
    Surrogate->>Graph: get_frontier(current, used)
    Graph-->>Surrogate: frontier
    Note over Surrogate,Graph: expand neighborhoods up to N
    Surrogate->>Hyperpixel: local_groups, input_batch
    loop batched mask inference
        Hyperpixel->>Predictor: get_class_logit_batch(masked_inputs)
        Predictor-->>Hyperpixel: masked_logits
        Note over Hyperpixel: compute delta = original_logit - masked_logits
    end
    Hyperpixel-->>Surrogate: hyperpixel_deltas
    Surrogate-->>Client: X (binary masks), y (delta scores)
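The masking-and-delta loop in the diagram can be sketched as below. Here `predict_logit` is a toy stand-in for ModelPredictor's batched logit call, and plain lists stand in for the PR's NumPy int8 design matrix; both are assumptions for illustration.

```python
# Toy sketch of the surrogate-dataset flow in the sequence diagram above.
def create_surrogate_dataset(local_groups, num_segments, predict_logit, original_logit):
    X, y = [], []
    for masked_segments in local_groups:
        # binary mask row: 1 where the segment was masked out
        X.append([1 if seg in masked_segments else 0 for seg in range(num_segments)])
        # delta = confidence drop caused by masking this neighborhood
        y.append(original_logit - predict_logit(masked_segments))
    return X, y

# fake model: each masked segment costs 0.1 logit of confidence
groups = [frozenset({0}), frozenset({0, 1}), frozenset({2, 3})]
X, y = create_surrogate_dataset(groups, 4, lambda m: 2.0 - 0.1 * len(m), 2.0)
print(X)  # [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
print([round(d, 3) for d in y])  # [0.1, 0.2, 0.2]
```

The resulting (X, y) pair is exactly the surrogate dataset the diagram returns to the client: binary masks plus their measured deltas.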

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested reviewers

  • vejtek
  • Adames4

Poem

🐰 I hop through graphs of segments bright,
Frozensets gather in the moonlight,
I mask and measure deltas small,
Build surrogates that tell it all,
A joyful hop — new structure, right!

🚥 Pre-merge checks | ✅ 3 passed
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: introducing a new scoring module for segments with supporting utilities for segment evaluation and importance scoring.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 95.00%, which is sufficient. The required threshold is 80.00%.


Base automatically changed from feat/model-predictor to master March 16, 2026 16:35
@vejtek vejtek requested a review from a team March 16, 2026 16:35
]
all_deltas.extend(batch_deltas)

del batch_inputs, masked_logits, mask_tensor


del? is not that forbidden? :D

Collaborator Author


is it? Idk, I'm not a programmer :D chatbot told me to keep it, but I can remove it if you say so

@dhalmazna dhalmazna force-pushed the feat/segment-and-hyperpixel-scoring branch from f30a530 to 2c6a6ab Compare March 17, 2026 13:46

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
ciao/scoring/hyperpixel.py (1)

107-115: ⚠️ Potential issue | 🟡 Minor

Reject negative max_hyperpixels to avoid unintended slicing behavior.

A negative value at Line 115 returns “all but last N”, which is unexpected for a max-count parameter.

Suggested fix
 def select_top_hyperpixels(
     hyperpixels: list[dict[str, object]], max_hyperpixels: int = 10
 ) -> list[dict[str, object]]:
     """Select top hyperpixels by their primary algorithm-specific score."""
+    if max_hyperpixels < 0:
+        raise ValueError(
+            f"max_hyperpixels must be >= 0, got {max_hyperpixels}"
+        )
     return sorted(
         hyperpixels,
         key=lambda hp: abs(hp["hyperpixel_score"]),  # type: ignore[arg-type]
         reverse=True,
     )[:max_hyperpixels]
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ciao/algorithm/bitmask_graph.py`:
- Around line 20-25: The generator iter_bits should guard against negative masks
to avoid infinite loops: add an explicit check at the start of iter_bits that
validates mask is non-negative (e.g., if mask < 0: raise ValueError) before
assigning temp and entering the while loop; reference the iter_bits function and
variables mask, temp, low_bit, and node_id when locating the change so the loop
only runs for valid non-negative bitmasks.

In `@ciao/scoring/hyperpixel.py`:
- Around line 69-71: Before the batching loop that uses range(0, num_masks,
batch_size), validate that batch_size is an integer greater than 0 and raise a
clear error (e.g., ValueError) if not; update the code around the loop that
iterates over batch_start/batch_end/current_batch_size to perform this check
(validate batch_size, optionally coerce to int) so range(...) cannot be called
with 0 or negative values and silent empty results are avoided.

In `@ciao/scoring/segments.py`:
- Around line 51-57: Check inputs at start of the routine that uses adj_masks
and neighborhood_distance (where num_segments = len(adj_masks) and the loops
over segment_id/visited_mask/current_layer_mask run); if adj_masks is empty or
neighborhood_distance is negative, raise a clear ValueError instead of
proceeding, so downstream calculations (and the log at the block around lines
91-93) do not produce NaNs; implement the check early (before computing
num_segments or entering the segment loop) and reference the variables adj_masks
and neighborhood_distance in the error message.



📥 Commits

Reviewing files that changed from the base of the PR and between 260e605 and 2c6a6ab.

📒 Files selected for processing (6)
  • README.md
  • ciao/algorithm/__init__.py
  • ciao/algorithm/bitmask_graph.py
  • ciao/scoring/__init__.py
  • ciao/scoring/hyperpixel.py
  • ciao/scoring/segments.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
ciao/scoring/segments.py (1)

49-55: ⚠️ Potential issue | 🟡 Minor

Also reject empty adj_masks here.

If adj_masks is empty, this skips group generation entirely and Lines 94-95 end up logging nan stats instead of surfacing invalid input.

🛡️ Proposed fix
     if neighborhood_distance < 0:
         raise ValueError("neighborhood_distance cannot be negative")
+    if not adj_masks:
+        raise ValueError("adj_masks must contain at least one segment")
 
     # BFS algorithm using low-level bitmask graph operations
     local_groups = []
     num_segments = len(adj_masks)
🧹 Nitpick comments (1)
ciao/data/segmentation.py (1)

89-93: Avoid per-edge .item() if this runs on CUDA.

segments is now created on input_tensor.device, so this loop does a device sync for every edge on GPU inputs. Move the edge tensor to CPU once before iterating to avoid the slowdown.

♻️ Proposed change
-    all_edges = torch.vstack(edge_arrays)
-    for edge in all_edges:
-        seg1, seg2 = edge[0].item(), edge[1].item()
+    all_edges = torch.vstack(edge_arrays).cpu().tolist()
+    for seg1, seg2 in all_edges:
         adjacency_sets[seg1].add(seg2)
         adjacency_sets[seg2].add(seg1)


📥 Commits

Reviewing files that changed from the base of the PR and between 2c6a6ab and 897beae.

📒 Files selected for processing (4)
  • ciao/algorithm/bitmask_graph.py
  • ciao/data/segmentation.py
  • ciao/scoring/hyperpixel.py
  • ciao/scoring/segments.py


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
ciao/scoring/segments.py (1)

91-93: Use integer literal for int8 array assignment.

Assigning 1.0 (float) to an int8 array works due to NumPy's implicit truncation, but using 1 is cleaner and consistent with the declared dtype.

Suggested fix
     # Fast vectorized indicator matrix filling
     for i, masked_segments in enumerate(local_groups):
-        X[i, masked_segments] = 1.0
+        X[i, masked_segments] = 1


📥 Commits

Reviewing files that changed from the base of the PR and between 2ef03d7 and 74a9c07.

📒 Files selected for processing (6)
  • README.md
  • ciao/algorithm/__init__.py
  • ciao/algorithm/graph.py
  • ciao/data/segmentation.py
  • ciao/scoring/hyperpixel.py
  • ciao/scoring/segments.py
✅ Files skipped from review due to trivial changes (1)
  • README.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • ciao/scoring/hyperpixel.py

@dhalmazna dhalmazna requested a review from Adames4 March 23, 2026 06:30
dhalmazna and others added 2 commits March 24, 2026 15:02
@dhalmazna dhalmazna requested a review from Adames4 March 24, 2026 14:31