
Fix 8 HPC-sensitive bugs: GPU kernels, MPI broadcast, domain decomposition#1242

Open
sbryngelson wants to merge 14 commits into MFlowCode:master from sbryngelson:fix/hpc-bugfixes-batch

Conversation


@sbryngelson sbryngelson commented Feb 22, 2026

User description

Summary

Batches 8 bug fixes that touch GPU parallel regions, MPI broadcasts, or compiler-sensitive code paths. These need HPC CI validation across NVHPC, CCE, and AMD compilers.

Not included: #1225 (3D viscous GPU private clauses) and #1181 (MUSCL-THINC right-state + GPU privates) remain as separate PRs due to higher risk — they add new private clause variables to GPU parallel loops, which is the most compiler-sensitive change possible.

GPU kernel fixes

Fix z-boundary backward FD stencil (#1173)

  • File: src/common/m_finite_differences.fpp
  • Bug: At the z-domain boundary, the backward FD stencil reads fields(2) (y-velocity) instead of fields(3) (z-velocity), corrupting divergence/gradient calculations for all 3D viscous, hyperelastic, and hypoelastic simulations.
  • Fix: fields(2)%sf(x, y, z-2) → fields(3)%sf(x, y, z-2)
  • Impact: Runs inside GPU_PARALLEL_LOOP in m_viscous.fpp, m_hypoelastic.fpp, m_hyperelastic.fpp, and m_derived_variables.fpp. Affects every 3D simulation using these modules.
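The wrong-component pattern is easy to see outside Fortran. Below is a minimal Python sketch (a hypothetical 1D field layout, not MFC's actual data structures) of a second-order backward difference at the upper z boundary that reads the y-velocity instead of the z-velocity:

```python
import numpy as np

# Three velocity components on a line of z-cells (toy stand-in for fields(1..3)).
nz = 8
z = np.arange(nz, dtype=float)
fields = {1: 0.0 * z, 2: 0.0 * z, 3: z**2}  # only component 3 (w) varies with z

def dwdz_backward(fields, k, component):
    # 2nd-order backward difference: (3*f[k] - 4*f[k-1] + f[k-2]) / 2
    f = fields[component]
    return (3.0 * f[k] - 4.0 * f[k - 1] + f[k - 2]) / 2.0

k = nz - 1                                        # upper z boundary cell
buggy = dwdz_backward(fields, k, component=2)     # reads y-velocity: silently 0
fixed = dwdz_backward(fields, k, component=3)     # reads z-velocity: exact 2*k for f = z**2
print(buggy, fixed)
```

For the quadratic test field the stencil is exact, so the buggy read yields 0 while the correct read yields 2k; in a real run the wrong component corrupts the divergence only at the boundary, which is exactly the kind of error that passes eyeball checks.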

Fix GRCBC subsonic inflow using wrong L index (#1224)

  • File: src/simulation/m_cbc.fpp
  • Bug: In the GRCBC characteristic wave amplitude loop do i = 2, momxb, the code writes L(2) = ... instead of L(i) = .... Only the last species' wave amplitude survives; all others are overwritten.
  • Fix: L(2) → L(i)
  • Impact: Inside a GPU_PARALLEL_LOOP (collapse=2 CBC loop with L as private). Affects multi-species GRCBC subsonic inflow boundary conditions.
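The overwrite is a plain last-writer-wins bug. A Python sketch (hypothetical index bounds, mirroring the do i = 2, momxb loop shape rather than MFC's actual variables):

```python
momxb = 5                                # hypothetical: slots 2..momxb hold species amplitudes
amplitudes = [10.0, 20.0, 30.0, 40.0]    # value computed for each i = 2..momxb

def fill_L_buggy(momxb, values):
    L = [0.0] * (momxb + 1)
    for i in range(2, momxb + 1):
        L[2] = values[i - 2]             # bug: every iteration writes slot 2
    return L

def fill_L_fixed(momxb, values):
    L = [0.0] * (momxb + 1)
    for i in range(2, momxb + 1):
        L[i] = values[i - 2]             # fix: each species gets its own slot
    return L

print(fill_L_buggy(momxb, amplitudes))   # only L[2] set, holding the LAST species' value
print(fill_L_fixed(momxb, amplitudes))   # L[2..5] each hold their own amplitude
```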

Fix G_K exponential degradation in damage model (#1227)

  • File: src/common/m_variables_conversion.fpp
  • Bug: G_K = G_K * max((1-damage), 0) is inside the do i = strxb, strxe stress component loop, applying the damage factor N times (exponential (1-damage)^N) instead of once.
  • Fix: Move the damage line before the loop in both s_convert_conservative_to_primitive and s_convert_primitive_to_conservative.
  • Impact: Inside GPU_PARALLEL_LOOP(collapse=3) with G_K as private. Produces exponentially wrong stiffness degradation for multi-component stress tensors.
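The compounding error is worth seeing numerically. A sketch of the before/after loop placement (scalar stand-ins, not MFC's actual conversion routine):

```python
def degrade_buggy(G_K, damage, n_components):
    # damage line inside the stress-component loop: factor applied N times
    for _ in range(n_components):
        G_K = G_K * max(1.0 - damage, 0.0)
    return G_K

def degrade_fixed(G_K, damage, n_components):
    # fix: damage applied once, before the loop
    G_K = G_K * max(1.0 - damage, 0.0)
    for _ in range(n_components):
        pass  # per-component stress work; no further G_K scaling
    return G_K

G_K, damage, n = 100.0, 0.5, 6           # 6 stress components in 3D
print(degrade_buggy(G_K, damage, n))     # 100 * 0.5**6 = 1.5625
print(degrade_fixed(G_K, damage, n))     # 100 * 0.5    = 50.0
```

With 50% damage and six stress components, the buggy stiffness is 32x too soft; the discrepancy grows as (1-damage)^(N-1).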

MPI broadcast fixes

Fix bc_x%ve3 never MPI-broadcast (#1175)

  • File: src/simulation/m_mpi_proxy.fpp
  • Bug: A fypp loop broadcasts bc_x%ve2 twice and never broadcasts bc_x%ve3 (the z-velocity component). All non-root ranks receive uninitialized garbage, which is then pushed to the GPU via GPU_UPDATE.
  • Fix: Change the duplicate 'bc_x%ve2' entry to 'bc_x%ve3' in the fypp broadcast list.
  • Impact: Every multi-rank 3D simulation using velocity boundary conditions on x-boundaries reads garbage ve3 on non-root ranks.

Fix fluid_rho broadcast using MPI_LOGICAL instead of mpi_p (#1176)

  • File: src/pre_process/m_mpi_proxy.fpp
  • Bug: MPI_BCAST(fluid_rho(1), num_fluids_max, MPI_LOGICAL, ...) broadcasts a real(wp) array with the wrong MPI datatype. On 64-bit wp, MPI_LOGICAL (typically 4 bytes) only transfers half the bytes, silently corrupting density values on non-root ranks.
  • Fix: MPI_LOGICAL → mpi_p
  • Impact: MPI-implementation-dependent. May appear to work with some stacks (if byte layout aligns) and produce garbage with others. Affects pre-process perturbation IC density initialization.
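The byte-level mechanics can be sketched without MPI. Assuming a 4-byte extent for MPI_LOGICAL (typical, though implementation-defined), a receiver moving only 4 of each element's 8 bytes reinterprets the value:

```python
import struct

# One rank "broadcasts" a 64-bit real, but the receiving side's datatype
# only moves 4 bytes per element (the typical MPI_LOGICAL extent).
rho = 1000.0
payload = struct.pack("<d", rho)              # 8 bytes on the wire (little-endian double)
moved = payload[:4]                           # what a 4-byte datatype actually transfers
# The remaining 4 bytes on the receiver stay stale; model them as zeros here.
corrupted = struct.unpack("<d", moved + b"\x00" * 4)[0]
print(corrupted)                              # 0.0, not 1000.0
```

For 1000.0 the low 4 bytes of the IEEE-754 double are all zero, so the received value collapses to 0.0 when the stale bytes happen to be zero; with non-zero stale memory the result is arbitrary garbage, matching the "works on some stacks, garbage on others" symptom.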

MPI / compiler-sensitive fixes

Fix loc_violations used uninitialized in MPI_Allreduce (#1186)

  • File: src/pre_process/m_data_output.fpp
  • Bug: loc_violations is passed to s_mpi_allreduce_sum without initialization. Whether this matters depends on the compiler: GCC may zero-initialize stack variables in debug mode, NVHPC and CCE may not.
  • Fix: Initialize to 0 and restructure as a local variable (not module-level, avoiding implicit SAVE).
  • Impact: Pre-process grid validation. Compiler-dependent behavior makes HPC testing essential.
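Why an uninitialized reduction input matters can be modeled directly. A sketch of a SUM-allreduce over ranks where the counter is only assigned inside a conditional (random garbage stands in for uninitialized stack memory; names are illustrative, not MFC's):

```python
import random

def count_violations(local_grid_ok, initialize):
    # loc_violations analogue: only assigned when a violation is found
    loc_violations = 0 if initialize else random.randint(1, 10**6)  # "stack garbage"
    if not local_grid_ok:
        loc_violations = 1
    return loc_violations

# MPI_Allreduce(SUM) analogue across 4 ranks, all with valid grids:
ranks_ok = [True] * 4
buggy_total = sum(count_violations(ok, initialize=False) for ok in ranks_ok)
fixed_total = sum(count_violations(ok, initialize=True) for ok in ranks_ok)
print(fixed_total)   # 0; buggy_total is whatever garbage happened to be on the stack
```

A compiler that zero-initializes stack variables (as GCC may in debug builds) hides the bug entirely, which is why this fix specifically needs NVHPC/CCE validation.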

Fix domain decomposition overwriting muscl_order (#1229)

  • File: src/common/m_mpi_common.fpp
  • Bug: An else branch in s_mpi_decompose_computational_domain unconditionally sets recon_order = weno_order for all non-IGR cases, even when recon_type == MUSCL_TYPE where muscl_order was already correctly assigned.
  • Fix: Remove the else branch (2 lines deleted).
  • Impact: Wrong reconstruction order causes incorrect ghost-cell counts in domain decomposition for MUSCL cases. Needs multi-rank MPI validation.
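The clobber is a classic assign-then-unconditionally-reassign pattern. A sketch of the control flow (hypothetical constants and an invented IGR override value; not MFC's exact code):

```python
WENO_TYPE, MUSCL_TYPE = 1, 2

def decompose_buggy(recon_type, weno_order, muscl_order, igr):
    # earlier block: correct per-scheme assignment
    recon_order = muscl_order if recon_type == MUSCL_TYPE else weno_order
    if igr:
        recon_order = 5                  # hypothetical IGR override
    else:
        recon_order = weno_order         # bug: clobbers muscl_order for MUSCL runs
    return recon_order

def decompose_fixed(recon_type, weno_order, muscl_order, igr):
    recon_order = muscl_order if recon_type == MUSCL_TYPE else weno_order
    if igr:
        recon_order = 5
    # else branch removed: the earlier assignment stands
    return recon_order

print(decompose_buggy(MUSCL_TYPE, weno_order=5, muscl_order=2, igr=False))   # 5 (wrong)
print(decompose_fixed(MUSCL_TYPE, weno_order=5, muscl_order=2, igr=False))   # 2
```

Since recon_order sizes the ghost-cell exchange, the wrong value here silently changes how much halo data each rank communicates.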

Fix NaN check reading wrong index in MPI unpack (#1231)

  • File: src/common/m_mpi_common.fpp
  • Bug: ieee_is_nan(q_comm(i)%sf(j, k, l)) should be q_comm(i)%sf(j + unpack_offset, k, l) — the check reads a stale element instead of the just-unpacked one.
  • Fix: Add unpack_offset to the appropriate index in all 3 directions (x, y, z).
  • Impact: Only compiles under #if defined(__INTEL_COMPILER). Dead code on NVHPC/CCE/GCC, but the diagnostic would miss real NaNs in ghost cells on Intel builds.
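A sketch of the stale-index check during unpack (a flat 1D buffer standing in for the halo region; names are illustrative):

```python
import math

def unpack_and_check(q_sf, buff, unpack_offset, use_offset):
    # Unpack received halo values into q_sf starting at unpack_offset,
    # checking each just-written element for NaN.
    nans_seen = 0
    for j, val in enumerate(buff):
        q_sf[j + unpack_offset] = val
        idx = j + unpack_offset if use_offset else j   # buggy check omits the offset
        if math.isnan(q_sf[idx]):
            nans_seen += 1
    return nans_seen

q = [0.0] * 8
halo = [1.0, float("nan"), 3.0]
print(unpack_and_check(list(q), halo, unpack_offset=5, use_offset=False))   # 0: NaN missed
print(unpack_and_check(list(q), halo, unpack_offset=5, use_offset=True))    # 1: NaN caught
```

Without the offset the diagnostic inspects cells that were never written by this unpack, so a NaN arriving in the ghost layer sails through undetected.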

Supersedes

Closes #1173, closes #1175, closes #1176, closes #1186, closes #1224, closes #1227, closes #1229, closes #1231, fixes #1194, fixes #1195, fixes #1198

Test plan

  • All GitHub CI checks pass
  • HPC (NVHPC): 3D viscous case, GRCBC multi-species case, damage model case
  • HPC (CCE/AMD): Multi-rank 3D with velocity BCs (validates bc_x%ve3 broadcast)
  • HPC (any): MUSCL multi-rank case (validates domain decomposition fix)
  • Verify no regressions in existing test suite

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes
    • Corrected a velocity-component indexing error affecting divergence near Z-boundaries.
    • Strengthened NaN detection in parallel receives to trigger safe aborts.
    • Preserved user-selected reconstruction order when domain decomposition flags are unset.
    • Fixed timing of damage-related material-property adjustments for stable elastic behavior.
    • Initialized local violation counters to prevent spurious downsampling issues.
    • Fixed fluid property datatype in broadcasts and ensured correct boundary-condition vector broadcast.

CodeAnt-AI Description

Fix several HPC-sensitive bugs that caused incorrect physics, corrupted MPI data, and unreliable aborts

What Changed

  • Z-boundary backward difference now uses the z-velocity component so 3D divergence/gradient calculations are correct at upper z boundaries
  • Multi-species GRCBC inflow now assigns each species' wave amplitude correctly so species are not overwritten
  • Boundary velocity lists and fluid density broadcasts now send the correct variables and MPI data type so remote ranks receive correct bc_x%ve3 and fluid_rho values
  • Damage factors for elastic moduli are applied once per cell (not repeatedly inside loops), preventing exponential weakening of elastic contributions
  • Downsample check variable is initialized to zero to avoid spurious warnings on non-divisible local sizes
  • MPI unpack NaN checks use the correct unpacked indices, and runtime NaN errors now trigger an MPI-aware abort instead of a plain error stop

Impact

✅ Fewer incorrect 3D divergence/gradient errors in viscous and elastic simulations
✅ Correct boundary velocity behavior in multi-rank runs with x-boundary conditions
✅ No silent corruption of fluid densities across MPI ranks



coderabbitai bot commented Feb 22, 2026

📝 Walkthrough

Walkthrough

Fixes multiple bugs across finite-difference, MPI communication, damage-model, CBC boundary handling, and data-output code paths; changes are localized edits to indexing, initialization, MPI datatypes/broadcast lists, reconstruction-order logic, and loop indexing.

Changes

Cohort / File(s) Summary
Finite-difference
src/common/m_finite_differences.fpp
Corrected z-upper-boundary backward FD term to use fields(3)%sf(..., z-2) (was fields(2)).
MPI unpack & domain-decomp
src/common/m_mpi_common.fpp
Adjusted NaN checks to reference q_comm(i)%sf(...+unpack_offset,...) and replaced Intel-only error stops with call s_mpi_abort(...); removed an else that unintentionally forced recon_order = weno_order.
MPI proxy / broadcasts
src/pre_process/m_mpi_proxy.fpp, src/simulation/m_mpi_proxy.fpp
Changed MPI_BCAST datatype for fluid_rho from MPI_LOGICAL to mpi_p; fixed duplicated bc_x%ve2 entry to broadcast bc_x%ve3.
Damage model / variables conversion
src/common/m_variables_conversion.fpp
Moved continuous-damage application to G_K/G outside the per-stress-component loop (apply once before loop instead of per-iteration).
GRCBC boundary handling
src/simulation/m_cbc.fpp
In subsonic inflow loop, assign L(i) for i=2..momxb instead of repeatedly writing L(2).
Data output / MPI_Allreduce
src/pre_process/m_data_output.fpp
Initialized loc_violations = 0._wp before downsampling logic to avoid uninitialized values in subsequent MPI_Allreduce.

Sequence Diagram(s)

(Skipped — changes are localized fixes and do not introduce a new multi-component control-flow feature requiring sequence diagrams.)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related issues

  • #1173: Fix z-boundary backward FD stencil — direct match to the fields(3) index correction in m_finite_differences.fpp.
  • #1175: Fix bc_x%ve3 never MPI-broadcast — matches the corrected broadcast list in m_mpi_proxy.fpp that now includes bc_x%ve3.
  • #1176: Fix fluid_rho broadcast using MPI_LOGICAL instead of mpi_p — matches the datatype fix in m_mpi_proxy.fpp.
  • #1186: Fix loc_violations used uninitialized in MPI_Allreduce — matches initialization of loc_violations in m_data_output.fpp.
  • #1224: Fix GRCBC subsonic inflow using wrong L index — matches changing L(2) to L(i) in m_cbc.fpp.
  • #1227: Fix G_K exponential degradation in damage model — matches moving damage application for G_K/G outside the stress-component loop.
  • #1229: Fix domain decomposition overwriting muscl_order — matches removal of unconditional recon_order = weno_order else-branch in m_mpi_common.fpp.
  • #1231: Fix NaN check reading wrong index in MPI unpack — matches NaN-check indexing change to use unpack_offset in m_mpi_common.fpp.

Possibly related PRs

  • #1116 — edits overlapping Fortran modules (m_finite_differences, m_mpi_common, m_variables_conversion) and likely touches similar guarded/diagnostic paths.
  • #1087 — related changes to MPI recv/unpack sections in m_mpi_common.fpp.
  • #1029 — related modifications to divergence / finite-difference boundary indexing in m_finite_differences.fpp.

Suggested labels

Review effort 3/5, size:M

Poem

🐇 I fixed the stencils, broadcasts, and more,

fields now point where they ought to before.
No NaNs hiding in offsets' embrace,
damage applied once — steadier pace.
Hoppity hops — the code runs apace! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately summarizes the main change: batching 8 HPC-sensitive bug fixes covering GPU kernels, MPI broadcasts, and domain decomposition.
Linked Issues check ✅ Passed The PR addresses all 8 linked issues (#1173, #1175, #1176, #1186, #1224, #1227, #1229, #1231) with corresponding code changes in the raw summary that match each issue's requirements.
Out of Scope Changes check ✅ Passed All changes in the 7 modified files directly address the stated objectives. No extraneous modifications outside the scope of the 8 bug fixes are present.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Description check ✅ Passed Pull request provides comprehensive description with clear summary of all 8 bug fixes, detailed motivation, affected files, impacts, test plan, and issue references.


@codeant-ai codeant-ai bot added the size:S This PR changes 10-29 lines, ignoring generated files label Feb 22, 2026

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 7 files

Confidence score: 5/5

  • Automated review surfaced no issues in the provided summaries.
  • No files require special attention.


codeant-ai bot commented Feb 22, 2026

Nitpicks 🔍

🔒 No security issues identified
⚡ Recommended areas for review

  • Potential OOB (L array)
    The new GRCBC inflow loop assigns to L(i) for i=2..momxb and later accesses L at indices like momxb+1, momxb+2, and advxe. L is declared conditionally as dimension(sys_size) or dimension(20) (for AMD). Verify that in all build/config combinations L has sufficient length (>= max(advxe, momxb+2)) to avoid out-of-bounds writes/reads when the GRCBC branch is enabled. Also ensure GPU/private usage of L matches the declared size (private arrays in GPU regions must be properly sized).

  • MPI count mismatch
    The broadcast of fluid_rho uses num_fluids_max as the count. If the actual number of fluids is num_fluids (broadcast earlier), broadcasting the full max length may send or expect uninitialised padding. Verify intended behaviour and consider using num_fluids to limit the count to active entries.

  • Boundary stencil safety
    The z-boundary backward finite-difference stencil was corrected to reference fields(3) (z-velocity). Verify that the index z-2 is always valid for the code path that runs this branch (no out-of-bounds reads). Confirm the boundary indexes and halo/padded ranges used by the GPU kernel guarantee z-2 is in-bounds in all configurations (including small local subdomains).

  • MPI broadcast list consistency
    The MPI broadcast list was corrected to include distinct bc_x%vb*/bc_x%ve* members (previously there was a duplicate). Confirm that the newly added members exist on the derived types, that their Fortran types match the MPI datatype mpi_p used in the call, and that the ordering/number of MPI_BCAST calls matches the receiving side expectations. A mismatch in types/counts or a missing member can lead to silent data corruption across ranks.

  • MPI datatype correctness
    The call uses mpi_p as the MPI datatype token for fluid_rho. Confirm mpi_p matches the Fortran kind real(wp) (e.g. MPI_DOUBLE_PRECISION or an appropriate alias). A mismatch between mpi_p and the variable kind can lead to incorrect data transfer across MPI implementations or compilers.


Copilot AI left a comment


Pull request overview

This PR batches several correctness fixes in GPU-executed kernels and MPI communication paths (broadcasts + halo exchange diagnostics), plus a domain-decomposition bug affecting reconstruction order—aimed at eliminating silent corruption and compiler/MPI-stack-dependent behavior in HPC runs.

Changes:

  • Fix multiple GPU-path physics/kernel correctness issues (FD stencil component, GRCBC wave amplitudes indexing, damage-model shear modulus degradation).
  • Fix MPI communication correctness (missing broadcast field, wrong MPI datatype, Intel-only NaN check indexing) and a domain decomposition reconstruction-order overwrite.
  • Fix compiler-sensitive undefined behavior in pre_process by initializing a reduction input.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated no comments.

Show a summary per file
File Description
src/common/m_finite_differences.fpp Corrects z-boundary backward FD stencil to use fields(3) (z-velocity) consistently.
src/simulation/m_cbc.fpp Fixes GRCBC subsonic inflow loop to write L(i) instead of overwriting L(2).
src/common/m_variables_conversion.fpp Applies continuous damage factor to G_K/G once (outside the stress-component loop) to avoid exponential compounding.
src/simulation/m_mpi_proxy.fpp Fixes user-input broadcast list to include bc_x%ve3 (was duplicating bc_x%ve2).
src/pre_process/m_mpi_proxy.fpp Broadcasts fluid_rho with the correct MPI real datatype (mpi_p) instead of MPI_LOGICAL.
src/pre_process/m_data_output.fpp Initializes loc_violations before MPI reduction to avoid undefined values on some compilers.
src/common/m_mpi_common.fpp Fixes Intel-only NaN check to read the just-unpacked element and prevents non-IGR runs from clobbering MUSCL recon order.


codecov bot commented Feb 23, 2026

Codecov Report

❌ Patch coverage is 50.00000% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 44.05%. Comparing base (df28255) to head (9f19bc8).
⚠️ Report is 20 commits behind head on master.

Files with missing lines Patch % Lines
src/common/m_variables_conversion.fpp 0.00% 1 Missing and 1 partial ⚠️
src/common/m_finite_differences.fpp 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1242      +/-   ##
==========================================
- Coverage   44.05%   44.05%   -0.01%     
==========================================
  Files          70       70              
  Lines       20496    20496              
  Branches     1989     1991       +2     
==========================================
- Hits         9030     9029       -1     
  Misses      10328    10328              
- Partials     1138     1139       +1     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

@sbryngelson sbryngelson force-pushed the fix/hpc-bugfixes-batch branch from 3adf7d6 to 239e520 Compare February 23, 2026 14:48
@MFlowCode MFlowCode deleted a comment from github-actions bot Feb 23, 2026
@sbryngelson sbryngelson force-pushed the fix/hpc-bugfixes-batch branch 2 times, most recently from 0289d4f to 1c2b7ce Compare February 24, 2026 16:03
sbryngelson and others added 4 commits February 24, 2026 11:50
The z-direction upper-boundary backward difference at iz_s%end uses
fields(2) (y-component) instead of fields(3) (z-component) in the
third term, corrupting the divergence in all 3D simulations using
this finite difference routine.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The bc_x velocity-end broadcast list has bc_x%ve2 duplicated where
bc_x%ve3 should be. bc_y and bc_z rows are correct. Non-root ranks
get uninitialized bc_x%ve3 in multi-rank 3D runs with x-boundary
velocity BCs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
fluid_rho is a real(wp) array but is broadcast with MPI_LOGICAL type,
silently corrupting reference densities via bit reinterpretation on
non-root ranks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
loc_violations is never set to 0 before the conditional that may or
may not assign it. Non-violating ranks sum garbage in the reduction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
sbryngelson and others added 2 commits February 24, 2026 11:50
The NaN diagnostic check used q_comm(i)%sf(j, k, l) but the value
was unpacked into q_comm(i)%sf(j + unpack_offset, k, l). This meant
the check was reading a stale or unrelated cell instead of the just-
received value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CLAUDE.md Critical Rules forbid error stop. Use s_mpi_abort for
proper MPI-aware termination in the Intel NaN detection blocks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@sbryngelson sbryngelson force-pushed the fix/hpc-bugfixes-batch branch from 19a2f11 to 8efac1d Compare February 24, 2026 16:50
@github-actions

Claude Code Review

7 files changed across src/common/ (3 files), src/pre_process/ (2 files), and src/simulation/ (2 files).

Summary

  • Bundles 8 targeted correctness fixes touching GPU kernels, MPI broadcasts, and domain decomposition logic; all changes are individually small and focused.
  • Every bug identified in the PR description is real and each fix is mechanically correct — no false positives.
  • Three files are in src/common/ and thus affect all three executables (pre_process, simulation, post_process); the test plan focuses on HPC validation but should explicitly confirm all three targets are exercised.
  • The domain decomposition fix (m_mpi_common.fpp) is safe: a prior block at lines 1141–1145 already sets recon_order from recon_type (WENO or MUSCL), so removing the else recon_order = weno_order branch only prevents the overwrite of MUSCL cases — WENO cases remain correctly served by the first block.
  • The damage-scaling fix in m_variables_conversion.fpp is physically correct, but introduces a minor behavioral change: G_K/G is now multiplied by (1 - damage) even when G_K/G ≤ verysmall (before the if (G_K > verysmall) guard fires). This is harmless in practice but is a semantic drift from the original guard structure.

Findings

Minor concern — m_data_output.fpp:485 — description overstates the fix

The PR description says loc_violations is "restructured as a local variable (not module-level, avoiding implicit SAVE)." The diff only adds the initialization line loc_violations = 0._wp; the declaration is unchanged, so the implicit-SAVE concern remains if it is module-level. The initialization does fix the uninitialized-use bug, but reviewers should verify whether loc_violations is a module variable or subroutine-local. If module-level, consider moving the declaration into the subroutine to fully address the SAVE issue.

Minor concern — m_mpi_common.fpp:936,991,1050 — print * retained before s_mpi_abort in Intel-only block

print *, "Error", j, k, l, i
call s_mpi_abort("NaN(s) in recv")

print * from non-root ranks can interleave arbitrarily in MPI output. This was pre-existing, but since the block is being touched, the fix is an opportunity to use @:LOG (active only in MFC_DEBUG builds) or pass the indices into the abort message string directly.

Minor concern — m_variables_conversion.fpp:852 — damage applied outside G_K > verysmall guard

In s_convert_conservative_to_primitive, the old code gated damage scaling with if (G_K > verysmall). After the move:

if (cont_damage) G_K = G_K*max((1._wp - qK_cons_vf(damage_idx)%sf(j, k, l)), 0._wp)
$:GPU_LOOP(parallelism='[seq]')
do i = strxb, strxe
    if (G_K > verysmall) then
        ...

When the incoming G_K is already ≤ verysmall, the damage line now runs (multiplying a near-zero value by ≤ 1). The result is still ≤ verysmall, so the subsequent guard and computations are unaffected. This is safe but worth a comment acknowledging the intentional guard relaxation.


Improvement opportunities

  1. src/simulation/m_cbc.fpp:930 — The loop do i = 2, momxb starts at 2 (skipping L(1)) with no explanation of what L(1) represents. A one-line comment (e.g., ! L(1) is the acoustic wave; L(2:momxb) are species continuity waves) would help future readers and reduce the chance of this pattern being misread as another off-by-one.

  2. src/common/m_mpi_common.fpp:1141–1158 — Now that the else branch is removed from the second block, lines 1141–1145 (the first WENO/MUSCL assignment) are the sole setter for non-IGR cases. These two blocks are close together but serve different purposes. A short comment like ! IGR overrides WENO/MUSCL order if enabled above the if (igr) block would clarify the layering intent.

  3. src/common/m_variables_conversion.fpp — After the damage scaling is moved outside the loop, consider adding @:ASSERT(G_K >= 0._wp, "Negative G_K after damage scaling") to catch cases where damage_idx returns a value > 1 (which max(..., 0) already guards against, but the assertion would make the invariant explicit and aid debugging).


Verdict

All 8 fixes are correct. The src/common/ changes carry the widest blast radius — confirm pre_process and post_process build and run cleanly in addition to the HPC simulation tests called out in the test plan.

CLAUDE.md forbids error stop; use s_mpi_abort for proper MPI-aware
termination. The print * before the abort is retained since it logs
the exact coordinates (j,k,l), variable index, rank, and timestep —
information that doesn't fit in the abort message string.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@github-actions

Claude Code Review

Head SHA: 944c6adbed9834e10c0a89721a9e07cee30118c9

Files changed: 8

  • src/common/m_finite_differences.fpp
  • src/common/m_mpi_common.fpp
  • src/common/m_variables_conversion.fpp
  • src/pre_process/m_data_output.fpp
  • src/pre_process/m_mpi_proxy.fpp
  • src/simulation/m_cbc.fpp
  • src/simulation/m_mpi_proxy.fpp
  • src/simulation/m_start_up.fpp

Summary:

  • Batches 8 targeted bug fixes across GPU kernels, MPI broadcasts, and domain decomposition; all changes are small and individually well-scoped.
  • Fixes span src/common/ (affects all 3 executables), src/pre_process/, and src/simulation/.
  • All error stop → call s_mpi_abort(...) replacements correctly follow the project coding convention.
  • The MPI_LOGICAL → mpi_p and duplicate ve2 → ve3 broadcast fixes are clearly correct; the original bugs could cause silent data corruption in multi-rank runs.
  • No new parameters introduced; no precision-discipline violations observed.

Findings

1. [Correctness – High Confidence] m_variables_conversion.fpp: Damage factor moved correctly, but guard condition changes

In the original code the damage line was inside if (G_K > verysmall):

! old (inside loop + inside G_K > verysmall guard)
if (cont_damage) G_K = G_K*max((1._wp - qK_cons_vf(damage_idx)%sf(j,k,l)), 0._wp)

After the fix the damage application is outside the loop and outside the if (G_K > verysmall) guard. Functionally this is fine — if G_K = 0, multiplying by any scalar still gives 0 — but the semantic change (applying damage even when the material has no shear stiffness) should be noted for reviewers familiar with the damage model.

2. [Correctness – Needs verification] m_mpi_common.fpp: Removing the else branch for recon_order

The removed block:

! removed
else
    recon_order = weno_order
end if

The fix is correct when recon_type == MUSCL_TYPE (the branch was clobbering a previously-assigned muscl_order). However, for the default WENO path when igr is false, recon_order now relies on being initialized elsewhere before s_mpi_decompose_computational_domain is called. Please confirm that recon_order is always set to weno_order at module initialisation or in m_start_up.fpp for non-IGR, non-MUSCL cases so no uninitialized-value risk exists on any compiler.

3. [Convention] m_mpi_common.fpp: print * left in Intel-compiler NaN path

All three corrected NaN-check blocks still contain:

#if defined(__INTEL_COMPILER)
    if (ieee_is_nan(q_comm(i)%sf(j + unpack_offset, k, l))) then
        print *, "Error", j, k, l, i   ! <-- bare print
        call s_mpi_abort("NaN(s) in recv")
    end if
#endif

The print * is pre-existing (not introduced by this PR), so no blocker, but it is worth noting for a future clean-up pass — unguarded print * in GPU-parallel paths can serialize output from all threads.


Summary verdict

All 8 individual fixes are directionally correct. The two items above worth a second look before merge:

  1. Confirm the damage model semantic is intentional (applying G_K scaling even when G_K ≤ verysmall before the loop).
  2. Verify recon_order has a safe default for the standard WENO path now that the else branch is removed.

No precision violations, no raw !$acc/!$omp directives, no forbidden intrinsics, and @:ALLOCATE/@:DEALLOCATE symmetry is unaffected. The HPC test plan in the PR description is appropriate given the GPU and multi-rank scope of these fixes.

  if (ieee_is_nan(q_comm(i)%sf(j + unpack_offset, k, l))) then
      print *, "Error", j, k, l, i
-     error stop "NaN(s) in recv"
+     call s_mpi_abort("NaN(s) in recv")
Contributor

I think one of the AI reviewers commented on this, but I'm not sure that s_mpi_abort is supported in GPU regions. I suppose this doesn't matter since we don't support GPUs with Intel compilers, but it could arise as a problem down the road.

Member Author

i don't think either is supported, actually

Member Author

but thanks for the comment. of course reply if you know differently

@MFlowCode MFlowCode deleted 6 comments from codeant-ai bot Feb 25, 2026
@github-actions

Claude Code Review

Head SHA: 9f19bc8
Files changed: 8 — src/common/m_finite_differences.fpp, src/common/m_mpi_common.fpp, src/common/m_variables_conversion.fpp, src/pre_process/m_data_output.fpp, src/pre_process/m_mpi_proxy.fpp, src/simulation/m_cbc.fpp, src/simulation/m_mpi_proxy.fpp, src/simulation/m_start_up.fpp


Summary

  • All 8 bug fixes are directionally correct and well-described; the PR is a high-quality batch of targeted repairs.
  • The fields(2) → fields(3) fix (m_finite_differences.fpp:53), the L(2) → L(i) fix (m_cbc.fpp:930), the ve2 → ve3 typo fix, and the MPI_LOGICAL → mpi_p dtype fix are all clear one-line correctness fixes with no ambiguity.
  • The NaN-check index fixes (m_mpi_common.fpp:939,994,1053) and error stop → s_mpi_abort conversions are also clearly correct per project conventions.
  • Two findings require verification before merge; neither blocks the PR if confirmed safe.
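As a quick illustration of why the stencil fix matters: a one-sided backward difference must draw all three samples from the same field, and the bug mixed in a different field's value at z-2. A minimal Python stand-in (hypothetical 1-D array, not the project's actual data layout) for the second-order backward stencil:

```python
def backward_fd(f, z, dz):
    """Second-order one-sided (backward) difference at index z.

    All three samples must come from the SAME field `f`; reading a
    different field at z-2 (the bug) corrupts the derivative.
    """
    return (3.0 * f[z] - 4.0 * f[z - 1] + f[z - 2]) / (2.0 * dz)

# For a linear field f = z (slope 1), the stencil is exact:
f = [0.0, 1.0, 2.0, 3.0]
print(backward_fd(f, 3, 1.0))  # → 1.0
```

Substituting a value from another field at z-2 would shift the result away from the exact slope, which is precisely the corruption described for the 3D viscous/elastic paths.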

Findings

1. Domain decomposition: recon_order may be unset for non-IGR WENO cases (verify only)

File: src/common/m_mpi_common.fpp, near line 1153

The removed else branch was:

```fortran
else
    recon_order = weno_order
```

This was correctly identified as overwriting muscl_order for MUSCL cases. However, for non-IGR, non-MUSCL (plain WENO) cases, recon_order was previously set via this else. After the fix, nothing sets it in this block for that path.

This is safe only if recon_order is initialized to weno_order earlier in the subroutine or at declaration. Please confirm: is recon_order assigned weno_order at declaration, or is there an earlier assignment for the plain WENO path? If recon_order is uninitialized on entry to this block for non-IGR WENO runs, the fix introduces a subtle regression for those cases. The PR description doesn't address this path explicitly.
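The control-flow hazard can be sketched in Python (names here are hypothetical mirrors of the Fortran logic, not the actual routine): the removal of the `else` is safe only if the default assignment happens before the branch.

```python
def pick_recon_order(muscl, weno_order, muscl_order):
    """Hypothetical mirror of the decomposition logic after the fix."""
    recon_order = weno_order       # default needed for the plain-WENO path
    if muscl:
        recon_order = muscl_order  # the removed `else` used to clobber this
    # no trailing else: plain WENO relies on the initializer above
    return recon_order
```

If the Fortran has no equivalent of the first assignment, `recon_order` is undefined on the plain-WENO path, which is exactly the regression the review asks to rule out.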

2. Damage model fix: G_K > verysmall guard removed from damage application

File: src/common/m_variables_conversion.fpp, s_convert_primitive_to_conservative (≈ line 1123)

In the prim→cons direction, the original code placed the damage application inside if (G > verysmall), meaning G was not multiplied by the damage factor unless it was already nonzero. The new code applies the damage factor unconditionally before the loop. This is arguably more correct (multiply by damage, then check the result), but it is a subtle behavior change: a cell with G == 0 now computes G*max(1 - damage, 0) == 0 explicitly, yielding the same result. This is fine mathematically; just flag it so reviewers are aware the guard semantics changed.

In the cons→prim direction (s_convert_conservative_to_primitive, ≈ line 852) the damage application was already outside the G_K > verysmall guard in a GPU_LOOP(seq) context, so that change is straightforwardly correct.
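The equivalence claimed above can be checked with a toy calculation (plain Python standing in for the Fortran expressions; `VERYSMALL` is an illustrative threshold):

```python
VERYSMALL = 1e-30

def apply_damage_guarded(G, damage):
    # Old semantics: scale only when G is already nonzero.
    if G > VERYSMALL:
        G = G * max(1.0 - damage, 0.0)
    return G

def apply_damage_unguarded(G, damage):
    # New semantics: scale unconditionally before the loop.
    return G * max(1.0 - damage, 0.0)

# Identical for G == 0 (zero stays zero) and for any G > VERYSMALL.
for G in (0.0, 1.0e9):
    assert apply_damage_guarded(G, 0.4) == apply_damage_unguarded(G, 0.4)
```

The only inputs where the two differ are 0 < G ≤ VERYSMALL, where the guarded form leaves a (negligible) undamaged G; for physically meaningful shear moduli the results coincide.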

3. Minor: print * before s_mpi_abort produces interleaved MPI output

File: src/common/m_mpi_common.fpp, lines 940, 995, 1054

```fortran
print *, "Error", j, k, l, i
call s_mpi_abort("NaN(s) in recv")
```

print * uses default unit (stdout) without rank prefix. In a multi-rank MPI run detecting NaNs, output from multiple ranks will interleave unreadably. This is pre-existing code (not introduced by this PR) but worth noting. Using write(0, *) proc_rank, "NaN recv:", j, k, l, i or a dedicated s_mpi_print helper would be cleaner. Not a blocker.
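A rank-prefixed, single-line report along the lines suggested above could look like this (Python sketch; the function name and message format are illustrative, not project API):

```python
def format_nan_report(rank, j, k, l, i):
    # One self-describing line per rank stays parseable even when
    # several ranks hit the check and their output interleaves.
    return f"[rank {rank}] NaN recv at (j={j}, k={k}, l={l}, field={i})"

print(format_nan_report(3, 10, 2, 7, 1))  # → [rank 3] NaN recv at (j=10, k=2, l=7, field=1)
```

Because each line carries the rank and all indices, interleaved output from many ranks can still be grepped and sorted after the abort.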


Assessment

The fixes for fields(3), L(i), bc_x%ve3, fluid_rho dtype, NaN offsets, and error stop replacements are all clean and correct. The damage model loop refactor is correct. The only substantive question before merge is whether recon_order has a defined value for non-IGR WENO runs after removing the else branch — this is easily verified by checking the initialization site of recon_order in s_mpi_decompose_computational_domain.

@MFlowCode MFlowCode deleted a comment from github-actions bot Feb 26, 2026
@MFlowCode MFlowCode deleted a comment from codeant-ai bot Feb 26, 2026
Contributor
GRCBC with num_fluids > 1 is generally not defined, so this change is redundant, but you can keep it for now.
