Fix equal metrics logging #1161

Open

Aurashk wants to merge 2 commits into main from fix-equal-metrics-logging

Conversation

@Aurashk
Collaborator

@Aurashk Aurashk commented Mar 3, 2026

Description

Add the fallback check before we report the nondeterministic behaviour of two metrics being equal, to minimise unnecessary logs.

Fixes #1133

Type of change

  • Bug fix (non-breaking change to fix an issue)
  • New feature (non-breaking change to add functionality)
  • Refactoring (non-breaking, non-functional change to improve maintainability)
  • Optimization (non-breaking change to speed up the code)
  • Breaking change (whatever its nature)
  • Documentation (improve or add documentation)

Key checklist

  • All tests pass: $ cargo test
  • The documentation builds and looks OK: $ cargo doc
  • Update release notes for the latest release if this PR adds a new feature or fixes a bug
    present in the previous release

Further checks

  • Code is commented, particularly in hard-to-understand areas
  • Tests added that prove fix is effective or that feature works

@Aurashk Aurashk requested review from alexdewar, Copilot and tsmbland and removed request for Copilot March 3, 2026 12:57
Copilot AI review requested due to automatic review settings March 3, 2026 12:58
Contributor

Copilot AI left a comment


Pull request overview

This PR reduces noisy “equal metric” debug logs during investment selection by only logging when the metric tie also cannot be resolved by the existing asset fallback ordering (commission status/year).

Changes:

  • Added a helper to count top appraisal outputs that are indistinguishable by both metric and fallback ordering.
  • Updated equal-metric logging to use the new helper (so logs only appear when selection is truly arbitrary).
  • Expanded/updated investment documentation, including a description of the equal-metric fallback behavior.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

Files reviewed:

  • src/simulation/investment/appraisal.rs — adds the count_equal_and_best_appraisal_outputs helper used to detect true (fallback-unbreakable) ties.
  • src/simulation/investment.rs — switches equal-metric logging to rely on the new tie-counter helper.
  • docs/model/investment.md — updates the investment tool documentation and documents the equal-metric fallback behavior.


Comment on lines +360 to +367

pub fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {
    outputs[1..]
        .iter()
        .take_while(|output| {
            output.compare_metric(&outputs[0]).is_eq()
                && compare_asset_fallback(&output.asset, &outputs[0].asset).is_eq()
        })
        .count()
}

Copilot AI Mar 3, 2026


There are extensive existing unit tests in this module, but the new tie-detection logic in count_equal_and_best_appraisal_outputs isn’t covered. Adding tests for cases like: empty/singleton slices, equal metric but fallback differentiates, and equal metric + fallback tie would help prevent regressions in the logging behavior.

Collaborator

Some tests would be nice. Copilot would probably do an ok job!

Comment on lines +360 to +366

Copilot AI Mar 3, 2026


count_equal_and_best_appraisal_outputs will panic on an empty slice because it indexes outputs[0] / outputs[1..]. Since this is a pub function, it should defensively handle outputs.is_empty() (e.g., return 0 early) to avoid exposing a panic as part of the public API.

Copilot uses AI. Check for mistakes.
Collaborator

I guess in practice, we won't call this function with an empty slice. I'd probably just handle this case in the function anyway, personally, but you could also just mention that it panics in the doc comment instead.

Comment on lines +358 to +359
/// Counts the number of top appraisal outputs in a sorted slice that are indistinguishable
/// by both metric and fallback ordering.

Copilot AI Mar 3, 2026


The docs for count_equal_and_best_appraisal_outputs are ambiguous about whether the returned count includes the best element at outputs[0]. The implementation returns the number of additional indistinguishable outputs after the first; please clarify the docstring (or adjust the return value) so callers don’t misinterpret it.

Suggested change
/// Counts the number of top appraisal outputs in a sorted slice that are indistinguishable
/// by both metric and fallback ordering.
/// Counts how many *additional* top appraisal outputs in a sorted slice
/// (after `outputs[0]`) are indistinguishable from the best one by both
/// metric and fallback ordering.
///
/// The returned count does **not** include `outputs[0]` itself. To obtain
/// the total number of tied best outputs, add 1 to the return value when
/// `outputs` is non-empty.
///
/// # Panics
///
/// Panics if `outputs` is empty, as the function unconditionally accesses
/// `outputs[0]`.


/// Counts the number of top appraisal outputs in a sorted slice that are indistinguishable
/// by both metric and fallback ordering.
pub fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {

Copilot AI Mar 3, 2026


This helper is only used within the crate (by investment.rs) and doesn’t appear to be intended as part of the external API, but it’s declared pub in a public module. Consider reducing visibility to pub(crate) (or pub(super)) to avoid expanding the crate’s public surface area unnecessarily.

Suggested change
pub fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {
pub(crate) fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {

Collaborator

I think this is overkill

@codecov

codecov bot commented Mar 3, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.32%. Comparing base (28dee31) to head (2207c82).
⚠️ Report is 44 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1161      +/-   ##
==========================================
- Coverage   87.50%   87.32%   -0.18%     
==========================================
  Files          57       57              
  Lines        7675     7726      +51     
  Branches     7675     7726      +51     
==========================================
+ Hits         6716     6747      +31     
- Misses        657      673      +16     
- Partials      302      306       +4     


@alexdewar
Collaborator

The branch history is a bit of a mess (I guess you accidentally based it on your other branch?). It looks like there's just one commit you actually want here, so you can cherry-pick that commit onto a fresh branch and then force-push the result to this branch.

Tidiness aside, the last commit where you undo your changes will probably lead to merge conflicts when we merge #1125.

Here's how I'd do it:

# Update main
git checkout main
git pull

# Make a new branch with just this commit
git checkout -b tmp
git cherry-pick 9b824cc8016b8568e4ed0c1e6eaa26b5658b5ccc

# Go back to your topic branch, reset it
git checkout fix-equal-metrics-logging
git reset --hard tmp

# Force push to remote
git push --force

@Aurashk Aurashk force-pushed the fix-equal-metrics-logging branch from 2c91e8c to 514ddc1 Compare March 3, 2026 14:14
@Aurashk
Collaborator Author

Aurashk commented Mar 3, 2026


Thanks @alexdewar, that worked. Think I did git switch -c from the feature branch by accident


@alexdewar alexdewar left a comment


Looks good! I've made some small suggestions. Tests would be nice, but don't feel obliged if it's annoying.



Collaborator

This doesn't warn any more... how about renaming it to log_on_equal_appraisal_outputs or something instead?

@alexdewar
Collaborator

Some of my comments were replies to Copilot, but for some reason you can't see the whole threads...

Copilot AI review requested due to automatic review settings March 4, 2026 12:44
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.



Comment on lines +981 to +983

let outputs =
    appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);


Copilot AI Mar 4, 2026


The new tests call count_equal_and_best_appraisal_outputs on outputs that haven’t been sorted with sort_appraisal_outputs_by_investment_priority, but the helper’s docstring explicitly requires a sorted slice. This makes the tests depend on the construction order rather than the intended contract; consider sorting outputs in these tests before asserting (same applies to the other new count_equal_best_* tests below).

Suggested change
let outputs =
    appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);
let mut outputs =
    appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);
sort_appraisal_outputs_by_investment_priority(&mut outputs);

@@ -651,7 +652,7 @@ fn get_candidate_assets<'a>(
}

/// Print debug message if there are multiple equally good outputs

Copilot AI Mar 4, 2026


The doc comment still says “Print debug message…”, but this helper is named log_on_equal_appraisal_outputs and uses the logger. Updating the comment wording (and/or the comment at the call site that says “Warn”) would avoid confusion about log level/behavior.

Suggested change
/// Print debug message if there are multiple equally good outputs
/// Log a debug-level message if there are multiple equally good outputs



Development

Successfully merging this pull request may close these issues.

False warnings about equal appraisals

3 participants