42 changes: 42 additions & 0 deletions src/loss_functions/tf_losses.py
@@ -0,0 +1,42 @@
import tensorflow as tf

def AsymmetricLoss(gamma_neg=4.0, gamma_pos=1.0): # Wrapper for ASL function
"""
TensorFlow adaptation of the official PyTorch implementation of "Asymmetric Loss For Multi-Label Classification" (ICCV, 2021): https://github.com/Alibaba-MIIL/ASL/blob/main/src/loss_functions/losses.py
Returns a loss function with asymmetric, specifiable emphases on false negatives and false positives. The returned function can be passed as the loss to model.compile().

Parameters
----------
gamma_neg: asymmetric emphasis on false negatives
gamma_pos: asymmetric emphasis on false positives
"""

# Return ASL function with custom emphases
def ASL_func(y, x):
"""
Parameters
----------
x: input logits (y hat)
y: targets (multi-label binarized vector)
"""

# Calculating Probabilities
xs_pos = x
xs_neg = 1 - x

Comment on lines +23 to +26
Action required

2. Logits used as probs 🐞 Bug ✓ Correctness

ASL_func’s docstring says x is logits, but the implementation uses x directly as a probability
(xs_pos=x, xs_neg=1-x) and omits the sigmoid step used by the reference ASL, producing an incorrect
loss when model outputs are logits.
Agent Prompt
## Issue description
`src/loss_functions/tf_losses.py` documents `x` as logits but computes probabilities as `xs_pos = x` / `xs_neg = 1 - x`, which breaks ASL semantics when the model outputs logits.

## Issue Context
The repo’s reference ASL (`src/loss_functions/losses.py`) applies `sigmoid` to logits inside the loss, and training code passes logits with an explicit comment that sigmoid happens in the loss.

## Fix Focus Areas
- src/loss_functions/tf_losses.py[15-26]
- src/loss_functions/tf_losses.py[23-31]
- src/loss_functions/losses.py[23-27]
- train.py[110-113]

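A minimal sketch of one possible fix for this comment, mirroring the reference PyTorch ASL's approach of applying `sigmoid` to raw logits inside the loss (the helper name `probs_from_logits` is illustrative, not part of the PR):

```python
import tensorflow as tf

# Sketch (assumption, based on the reference implementation): convert
# raw logits to probabilities inside the loss before computing the
# positive/negative probability terms.
def probs_from_logits(x):
    xs_pos = tf.math.sigmoid(x)  # probability the label is present
    xs_neg = 1.0 - xs_pos        # probability the label is absent
    return xs_pos, xs_neg
```

With this change the training code can keep passing logits, matching the comment in `train.py` that the sigmoid happens inside the loss.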

# Basic CE calculation
los_pos = y * tf.math.log(xs_pos)
los_neg = (1 - y) * tf.math.log(xs_neg)
loss = los_pos + los_neg
Comment on lines +23 to +30
Action required

1. tf.math.log() can hit -inf 📘 Rule violation ⛯ Reliability

The loss computes tf.math.log(xs_pos) and tf.math.log(xs_neg) without clipping/validating that
x is in (0,1), which can produce -inf/NaN and break training. This is missing boundary/edge-case
handling and input validation.
Agent Prompt
## Issue description
`tf.math.log()` is applied to `x` and `1 - x` without ensuring values are strictly within (0,1), which can produce `-inf/NaN` for boundary/out-of-range inputs.

## Issue Context
This function is intended to be used as a Keras loss. Model outputs may be logits (unbounded) or probabilities that can reach exactly 0/1 depending on activations and numerics.

## Fix Focus Areas
- src/loss_functions/tf_losses.py[23-30]

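One way to address this comment is to clamp the probabilities away from exactly 0 and 1 before taking logs. A sketch (the `1e-7` epsilon and the helper name are assumptions, not the PR's code):

```python
import tensorflow as tf

# Sketch: clip probabilities into (eps, 1 - eps) so tf.math.log
# never sees 0 and never returns -inf/NaN during training.
def safe_log_probs(x, eps=1e-7):
    x = tf.clip_by_value(x, eps, 1.0 - eps)
    return tf.math.log(x), tf.math.log(1.0 - x)
```

The reference PyTorch ASL uses a similar clamp on the negative probabilities; the exact epsilon is a tunable choice per dtype.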


# Asymmetric Focusing
if gamma_neg > 0 or gamma_pos > 0:
pt0 = xs_pos * y
pt1 = xs_neg * (1 - y) # pt = p if t > 0 else 1-p
pt = pt0 + pt1
one_sided_gamma = gamma_pos * y + gamma_neg * (1 - y)
one_sided_w = tf.math.pow(1 - pt, one_sided_gamma)
loss *= one_sided_w

return -tf.math.reduce_sum(loss)
return ASL_func
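Putting both review points together, a possible corrected version of the wrapper could look like the sketch below. This is not the PR's code: the internal `sigmoid` call and the `eps` clamp are assumptions carried over from the reference PyTorch implementation.

```python
import tensorflow as tf

def AsymmetricLoss(gamma_neg=4.0, gamma_pos=1.0, eps=1e-7):
    """Sketch of ASL with sigmoid applied inside the loss and clipped probabilities."""
    def ASL_func(y, x):
        # x: raw logits; y: multi-label binarized targets
        xs_pos = tf.math.sigmoid(x)
        xs_pos = tf.clip_by_value(xs_pos, eps, 1.0 - eps)
        xs_neg = 1.0 - xs_pos

        # Basic cross-entropy terms (now safe: probabilities are in (eps, 1-eps))
        los_pos = y * tf.math.log(xs_pos)
        los_neg = (1.0 - y) * tf.math.log(xs_neg)
        loss = los_pos + los_neg

        # Asymmetric focusing, as in the original diff
        if gamma_neg > 0 or gamma_pos > 0:
            pt = xs_pos * y + xs_neg * (1.0 - y)  # pt = p if t > 0 else 1-p
            one_sided_gamma = gamma_pos * y + gamma_neg * (1.0 - y)
            loss *= tf.math.pow(1.0 - pt, one_sided_gamma)

        return -tf.math.reduce_sum(loss)
    return ASL_func
```

Under these assumptions the loss stays finite even for extreme logits, and the `AsymmetricLoss(...)` return value can still be passed directly to `model.compile(loss=...)`.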