Pull request status: Closed

32 commits (changes from all commits):
- `fe9cfe7` Add comprehensive image normalization documentation (mdlinville, Aug 22, 2025)
- `0f05c59` Fix broken relref links in Japanese and Korean media guides (mdlinville, Aug 22, 2025)
- `f988610` docs: address review feedback - use sentence case, avoid 'we', improv… (mdlinville, Aug 22, 2025)
- `5e1e10c` docs: address additional review feedback - use sentence case for head… (mdlinville, Aug 22, 2025)
- `437e947` Artifacts: When to use save() vs .log() (#1557) (ngrayluna, Aug 22, 2025)
- `a9f6290` Updated new view for lineage and dataset mismatch in keras tutorials … (knisar, Aug 22, 2025)
- `6093944` test: temporarily use mock preview URL for testing path matching (mdlinville, Aug 21, 2025)
- `fc5abb0` revert: remove mock URL test changes (mdlinville, Aug 21, 2025)
- `48196ce` docs(weave): fix broken relref links in Japanese and Korean media guides (mdlinville, Aug 22, 2025)
- `585403f` docs: address review feedback - move detailed normalization content t… (mdlinville, Aug 25, 2025)
- `fdebb34` feat: add image normalization demo notebook for testing and user exam… (mdlinville, Aug 25, 2025)
- `71f1092` refactor: move detailed normalization docs to dedicated guide and sim… (mdlinville, Aug 25, 2025)
- `eb9f9a5` docs: add visual contrast effects and guidance on when to use differe… (mdlinville, Aug 25, 2025)
- `bf500a7` docs: add improved image normalization demo notebook with detailed ex… (mdlinville, Aug 25, 2025)
- `6804a5c` docs: update notebook with detailed explanations for all examples (mdlinville, Aug 25, 2025)
- `6c5b58b` docs: use --quiet --upgrade flags for pip install to reduce output noise (mdlinville, Aug 25, 2025)
- `3bb8f89` docs: add reassuring note about import cell success (mdlinville, Aug 25, 2025)
- `6b42746` docs: add explanation about expected deprecation warning in Example 2 (mdlinville, Aug 25, 2025)
- `59d90c7` docs: add reassuring note about import cell success (mdlinville, Aug 25, 2025)
- `c0bab04` docs: restore --quiet --upgrade flags to pip install command (mdlinville, Aug 25, 2025)
- `c1f8928` docs: remove --upgrade flag from pip install to avoid dependency conf… (mdlinville, Aug 25, 2025)
- `6b14efa` docs: remove notebook (moved to examples repo PR #609) (mdlinville, Aug 25, 2025)
- `394761f` fix: correct GITHUB_TOKEN parameter name in calibreapp image actions … (mdlinville, Aug 25, 2025)
- `b0eed78` Merge branch 'main' into DOCS-1016 (mdlinville, Aug 25, 2025)
- `7508402` docs: address reviewer feedback - combine sections, fix terminology, … (mdlinville, Aug 25, 2025)
- `f6c3a69` docs: remove # characters from headings in reference documentation (mdlinville, Aug 25, 2025)
- `f5cfa3f` Apply suggestions from code review (mdlinville, Aug 25, 2025)
- `7b3b451` docs: remove remaining # characters from reference documentation head… (mdlinville, Aug 25, 2025)
- `dbd68d4` docs: remove # characters from English reference documentation headings (mdlinville, Aug 25, 2025)
- `4a45f33` docs: improve normalization table with matrix format for better clarity (mdlinville, Aug 25, 2025)
- `0bca154` docs: convert numbered normalization algorithm steps to bullet points… (mdlinville, Aug 25, 2025)
- `90d2d1d` docs: remove accidentally committed test notebook (mdlinville, Aug 25, 2025)
4 changes: 2 additions & 2 deletions .github/workflows/calibreapp-image-actions.yml
Original file line number Diff line number Diff line change
@@ -22,6 +22,6 @@ jobs:
- name: Compress Images
uses: calibreapp/image-actions@main
with:
# The `GITHUB_TOKEN` is automatically generated by GitHub and scoped only to the repository that is currently running the action. By default, the action cant update Pull Requests initiated from forked repositories.
# The `GITHUB_TOKEN` is automatically generated by GitHub and scoped only to the repository that is currently running the action. By default, the action can't update Pull Requests initiated from forked repositories.
# See https://docs.github.com/en/actions/reference/authentication-in-a-workflow and https://help.github.com/en/articles/virtual-environments-for-github-actions#token-permissions
githubToken: ${{ secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
141 changes: 141 additions & 0 deletions content/en/guides/models/track/log/image-normalization.md
@@ -0,0 +1,141 @@
---
title: "Image normalization guide"
description: "Learn how wandb.Image handles normalization for different input types and how to control this behavior"
---


When you pass PyTorch tensors or NumPy arrays to `wandb.Image`, the pixel values are automatically normalized to the range [0, 255] unless you set `normalize=False`. This guide explains how image normalization works and how to control it.

## When normalization is applied

| Input type | Format | Normalization applied | Notes |
|------------|--------|-----------------------|-------|
| **PyTorch tensors** | `(channel, height, width)` | Yes | Automatically normalized to the [0, 255] range |
| **NumPy arrays** | `(height, width, channel)` | Yes | Automatically normalized to the [0, 255] range |
| **PIL Images** | PIL Image object | No | Passed as-is without modification |
| **File paths** | String path to image file | No | Loaded as-is without modification |

Reviewer comment (Contributor): Remove emojis

## Normalization algorithm

The normalization algorithm automatically detects the input range and applies the appropriate transformation:

- **If data is in range [0, 1]**: Values are multiplied by 255 and converted to uint8
```python
normalized_data = (data * 255).astype(np.uint8)
```

- **If data is in range [-1, 1]**: Values are rescaled to [0, 255] using:
```python
normalized_data = (255 * 0.5 * (data + 1)).astype(np.uint8)
```

- **For any other range**: Values are clipped to [0, 255] and converted to uint8
```python
normalized_data = data.clip(0, 255).astype(np.uint8)
```
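
The three branches above can be combined into a single sketch of the range-detection logic (a simplified illustration of the documented behavior, not W&B's actual implementation; the function name is hypothetical):

```python
import numpy as np

def sketch_normalize(data: np.ndarray) -> np.ndarray:
    """Sketch of automatic range detection and conversion to uint8 [0, 255]."""
    # Check [0, 1] before [-1, 1]: data in [0, 1] also satisfies the [-1, 1] bounds
    if data.min() >= 0 and data.max() <= 1:
        return (data * 255).astype(np.uint8)
    if data.min() >= -1 and data.max() <= 1:
        return (255 * 0.5 * (data + 1)).astype(np.uint8)
    # Any other range: clip to [0, 255] and convert
    return data.clip(0, 255).astype(np.uint8)

print(sketch_normalize(np.array([0.0, 0.5, 1.0])))   # [  0 127 255]
print(sketch_normalize(np.array([-1.0, 0.0, 1.0])))  # [  0 127 255]
```

Note that the order of the checks matters: data in [0, 1] also lies inside [-1, 1], so the [0, 1] branch must run first.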

## Examples of normalization effects

### Example 1: [0, 1] range data

```python
import torch
import wandb

# Create tensor with values in [0, 1] range
tensor_0_1 = torch.rand(3, 64, 64) # Random values between 0 and 1

# This will multiply all values by 255
image = wandb.Image(tensor_0_1, caption="Normalized from [0,1] range")
```

### Example 2: [-1, 1] range data

```python
import torch
import wandb

# Create tensor with values in [-1, 1] range
tensor_neg1_1 = torch.rand(3, 64, 64) * 2 - 1 # Random values between -1 and 1

# This will rescale: -1 → 0, 0 → 127.5, 1 → 255
image = wandb.Image(tensor_neg1_1, caption="Normalized from [-1,1] range")
```

**Note on visual contrast**: The [-1, 1] normalization creates higher visual contrast compared to [0, 1] normalization. This is because:
- Negative values (like -0.8) become very dark (around 25)
- Positive values (like 0.8) become very bright (around 230)
- Values near 0 become mid-gray (127.5)

This "stretches" the visual range, making differences between pixel values more pronounced. This is particularly useful for highlighting subtle patterns in machine learning data, but if you want less contrast, consider preprocessing your data to a [0, 1] range before logging.
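
To see the stretch numerically, you can run a few sample values through the rescaling formula by hand (plain NumPy, independent of W&B):

```python
import numpy as np

# Sample values spanning the [-1, 1] range
vals = np.array([-1.0, -0.8, 0.0, 0.8, 1.0])

# [-1, 1] -> [0, 255] rescaling; the uint8 cast truncates fractional values
out = (255 * 0.5 * (vals + 1)).astype(np.uint8)
print(out)  # [  0  25 127 229 255]
```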

### Example 3: Avoid normalization with PIL Images
Normalization is not applied to PIL Images.

```python
import torch
from PIL import Image as PILImage
import wandb

# Create tensor with values in [0, 1] range
tensor_0_1 = torch.rand(3, 64, 64)

# Convert to PIL Image to avoid normalization
pil_image = PILImage.fromarray((tensor_0_1.permute(1, 2, 0).numpy() * 255).astype('uint8'))
image = wandb.Image(pil_image, caption="No normalization applied")
```

### Example 4: Using normalize=False
To explicitly turn off image normalization without converting the image, set `normalize=False`.

```python
import torch
import wandb

# Create tensor with values in [0, 1] range
tensor_0_1 = torch.rand(3, 64, 64)

# Disable normalization - values will be clipped to [0, 255]
image = wandb.Image(tensor_0_1, normalize=False, caption="Normalization disabled")
```

## When to use different approaches

### Use PIL conversion when:
- You want complete control over pixel values
- You need custom preprocessing (filters, brightness adjustments, etc.)
- You want to use PIL's image processing capabilities
- You're debugging and want to see exact values being logged

### Use normalize=False when:
- You want to see raw tensor values as they are
- Your data is already in the correct range (like [0, 255] integers)
- You're debugging normalization issues
- You want quick testing without additional processing steps

### Use automatic normalization when:
- You want consistent behavior across different input types
- Your data is in standard ranges ([0, 1] or [-1, 1])
- You want W&B to handle the conversion automatically

## Troubleshooting

### Best practices

1. **For consistent results**: Pre-process your data to the expected [0, 255] range before logging
2. **To avoid normalization**: Convert tensors to PIL Images using `PILImage.fromarray()`
3. **For debugging**: Use `normalize=False` to see the raw values (they will be clipped to [0, 255])
4. **For precise control**: Use PIL Images when you need exact pixel values
5. **For highlighting subtle patterns**: Use [-1, 1] normalization to increase visual contrast
6. **For natural-looking images**: Use [0, 1] normalization or preprocess to [0, 255] range
7. **For custom processing**: Use PIL conversion when you need to apply filters or adjustments

### Common issues and solutions

- **Unexpected brightness**: If your tensor values are in [0, 1] range, they will be multiplied by 255, making the image much brighter. **Solution**: Preprocess to [0, 255] range or use PIL conversion.
- **Data loss**: Values outside the [0, 255] range will be clipped, potentially losing information. **Solution**: Check your data range and preprocess appropriately.
- **Inconsistent behavior**: Different input types (tensor vs PIL vs file path) may produce different results. **Solution**: Use consistent input types or understand the normalization behavior for each type.
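
Following the preprocessing advice above, here is a minimal sketch of converting float data to uint8 yourself before logging (NumPy only; pass the result to `wandb.Image` as in the earlier examples):

```python
import numpy as np

# Float image in [0, 1], shaped (height, width, channel)
float_img = np.random.rand(64, 64, 3)

# Scale and round explicitly so no automatic normalization decision is involved
uint8_img = (float_img * 255).round().astype(np.uint8)

print(uint8_img.dtype, uint8_img.min() >= 0, uint8_img.max() <= 255)  # uint8 True True
```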

## Testing your code

You can test the normalization behavior using the [Image Normalization Demo Notebook](https://github.com/wandb/wandb/blob/main/wandb_image_normalization_demo.ipynb), which demonstrates all of the examples above with visual output.
3 changes: 1 addition & 2 deletions content/en/guides/models/track/log/media.md
@@ -58,7 +58,7 @@ with wandb.init(project="image-log-example") as run:
run.log({"examples": images})
```

We assume the image is gray scale if the last dimension is 1, RGB if it's 3, and RGBA if it's 4. If the array contains floats, we convert them to integers between `0` and `255`. If you want to normalize your images differently, you can specify the [`mode`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes) manually or just supply a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html), as described in the "Logging PIL Images" tab of this panel.
W&B assumes the image is gray scale if the last dimension is 1, RGB if it's 3, and RGBA if it's 4. If the array contains floats, W&B automatically normalizes them to integers between `0` and `255`. For detailed information about normalization with PyTorch tensors and NumPy arrays, see the [Image normalization guide]({{< relref "/guides/models/track/log/image-normalization.md" >}}). To normalize your images differently, you can specify the [`mode`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes) manually or supply a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html), as described in the "Logging PIL Images" tab.
{{% /tab %}}
{{% tab header="Logging PIL Images" %}}
For full control over the conversion of arrays to images, construct the [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html) yourself and provide it directly.
@@ -97,7 +97,6 @@ with wandb.init(project="") as run:
{{% /tab %}}
{{< /tabpane >}}


## Image overlays


9 changes: 6 additions & 3 deletions content/en/ref/python/sdk/data-types/Image.md
@@ -9,10 +9,10 @@ data_type_classification: class



## <kbd>class</kbd> `Image`
<kbd>class</kbd> `Image`
Reviewer comment (Contributor): Formatting is correct. Undo changes. (Note: Anything in content/en/ref/python/* is autogenerated. To make changes, make a PR in wandb/wandb.)
A class for logging images to W&B.

### <kbd>method</kbd> `Image.__init__`
<kbd>method</kbd> `Image.__init__`

```python
__init__(
@@ -103,10 +103,13 @@ with wandb.init() as run:
run.log({"examples": examples})
```

Image normalization

When you pass PyTorch tensors or NumPy arrays to `wandb.Image`, the pixel values are automatically normalized to the range [0, 255] unless you set `normalize=False`. For detailed information about normalization behavior, examples, and best practices, see the [Image normalization guide]({{< relref "/guides/models/track/log/image-normalization.md" >}}).

---

### <kbd>property</kbd> Image.image
<kbd>property</kbd> Image.image



2 changes: 1 addition & 1 deletion content/ja/guides/integrations/pytorch.md
@@ -49,7 +49,7 @@ for batch_idx, (data, target) in enumerate(train_loader):

## 画像とメディアのログ

画像データを持つ PyTorch `Tensors` を [`wandb.Image`]({{< relref path="/ref/python/data-types/image.md" lang="ja" >}}) に渡すことができ、[`torchvision`](https://pytorch.org/vision/stable/index.html) のユーティリティが自動的に画像に変換します。
Reviewer comment (Contributor): Suggest undoing changes to JA and KO docs. They will get overwritten. Also, it keeps the PR cleaner for review.
画像データを持つ PyTorch `Tensors` を [`wandb.Image`]({{< relref "/ref/python/data-types/image.md" >}}) に渡すことができ、[`torchvision`](https://pytorch.org/vision/stable/index.html) のユーティリティが自動的に画像に変換します。

```python
images_t = ... # PyTorch Tensors として画像を生成またはロードする
6 changes: 3 additions & 3 deletions content/ja/guides/models/track/log/media.md
@@ -52,7 +52,7 @@ images = wandb.Image(image_array, caption="Top: Output, Bottom: Input")
wandb.log({"examples": images})
```

最後の次元が1の場合はグレースケール、3の場合はRGB、4の場合はRGBAと仮定します。配列が浮動小数点数を含む場合、それらを`0`から`255`の整数に変換します。異なる方法で画像を正規化したい場合は、[`mode`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes)を手動で指定するか、`"Logging PIL Images"`タブで説明されているように、単に[`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html)を提供することができます。
最後の次元が1の場合はグレースケール、3の場合はRGB、4の場合はRGBAと仮定します。配列が浮動小数点数を含む場合、正規化アルゴリズムを使用して自動的に`0`から`255`の整数に変換します。PyTorchテンソルとNumPy配列での正規化の動作についての詳細は、[Imageリファレンスの画像正規化セクション]({{< relref "/ref/python/data-types/image.md#image-normalization" >}})を参照してください。異なる方法で画像を正規化したい場合は、[`mode`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes)を手動で指定するか、`"Logging PIL Images"`タブで説明されているように、単に[`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html)を提供することができます。
{{% /tab %}}
{{% tab header="PIL Imagesをログする" %}}
配列から画像への変換を完全に制御するために、[`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html)を自分で構築し、直接提供してください。
@@ -564,7 +564,7 @@ runが終了すると、UIで分子の3D可視化と対話できるようにな

### PNG 画像

[`wandb.Image`]({{< relref path="/ref/python/data-types/image.md" lang="ja" >}})は`numpy`配列や`PILImage`のインスタンスをデフォルトでPNGに変換します。
`wandb.Image`は`numpy`配列や`PILImage`のインスタンスをデフォルトでPNGに変換します。

```python
wandb.log({"example": wandb.Image(...)})
@@ -584,7 +584,7 @@ wandb.log({"example": wandb.Video("myvideo.mp4")})

## 分子の2Dビュー

[`wandb.Image`]({{< relref path="/ref/python/data-types/image.md" lang="ja" >}})データ型と[`rdkit`](https://www.rdkit.org/docs/index.html)を使用して分子の2Dビューをログできます:
[`wandb.Image`]({{< relref "/ref/python/data-types/image.md" >}})データ型と[`rdkit`](https://www.rdkit.org/docs/index.html)を使用して分子の2Dビューをログできます:

```python
molecule = rdkit.Chem.MolFromSmiles("CC(=O)O")
2 changes: 1 addition & 1 deletion content/ja/launch/integration-guides/dagster.md
@@ -181,7 +181,7 @@ def create_dataset():
W&B は複数のピクルスベースのシリアライズモジュール([pickle](https://docs.python.org/3/library/pickle.html), [dill](https://github.com/uqfoundation/dill), [cloudpickle](https://github.com/cloudpipe/cloudpickle), [joblib](https://github.com/joblib/joblib)) をサポートしています。また、[ONNX](https://onnx.ai/) や [PMML](https://en.wikipedia.org/wiki/Predictive_Model_Markup_Language) といったより高度なシリアライズも利用できます。[Serialization]({{< relref path="#serialization-configuration" lang="ja" >}}) セクションを参照してください。
{{% /tab %}}
{{% tab "W&B Object" %}}
ネイティブ W&B オブジェクト (例: [Table]({{< relref path="/ref/python/data-types/table.md" lang="ja" >}}), [Image]({{< relref path="/ref/python/data-types/image.md" lang="ja" >}}), or [Graph]({{< relref path="/ref/python/data-types/graph.md" lang="ja" >}})) のいずれかが作成された Artifact にインテグレーションによって追加されます。以下は Table を使った例です。
ネイティブ W&B オブジェクト (例: [Table]({{< relref path="/ref/python/data-types/table.md" lang="ja" >}}), [Image]({{< relref "/ref/python/data-types/image.md" >}}), or [Graph]({{< relref path="/ref/python/data-types/graph.md" lang="ja" >}})) のいずれかが作成された Artifact にインテグレーションによって追加されます。以下は Table を使った例です。

```python
import wandb