Commit 08ca087

Merge branch 'main' into main
2 parents f7c2afa + 44fa04a commit 08ca087

208 files changed

Lines changed: 7922 additions & 3136 deletions


.github/workflows/pr-style-bot.yml

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+# To run this bot, comment "@bot /style" on a PR
+name: Style Bot
+
+on:
+  issue_comment:
+    types: [created]
+
+permissions:
+  contents: write
+  pull-requests: write
+
+jobs:
+  style:
+    uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
+    with:
+      python_quality_dependencies: "[quality]"
+      style_command_type: "default"
+    secrets:
+      bot_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/self-comment-ci.yml

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ jobs:
     runs-on: ubuntu-22.04
     name: Get PR number
     # For security: only allow team members to run
-    if: ${{ github.event.issue.state == 'open' && contains(fromJSON('["ydshieh", "ArthurZucker", "zucchini-nlp", "qubvel", "molbap", "gante", "LysandreJik", "Cyrilvallez", "Rocketknight1", "SunMarc", "muellerzr", "eustlb", "MekkCyber"]'), github.actor) && (startsWith(github.event.comment.body, 'run-slow') || startsWith(github.event.comment.body, 'run slow') || startsWith(github.event.comment.body, 'run_slow')) }}
+    if: ${{ github.event.issue.state == 'open' && contains(fromJSON('["ydshieh", "ArthurZucker", "zucchini-nlp", "qubvel", "molbap", "gante", "LysandreJik", "Cyrilvallez", "Rocketknight1", "SunMarc", "muellerzr", "eustlb", "MekkCyber", "manueldeprada"]'), github.actor) && (startsWith(github.event.comment.body, 'run-slow') || startsWith(github.event.comment.body, 'run slow') || startsWith(github.event.comment.body, 'run_slow')) }}
     outputs:
       PR_NUMBER: ${{ steps.set_pr_number.outputs.PR_NUMBER }}
     steps:
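The `if:` expression in this hunk gates the workflow on three conditions: the issue is open, the commenter is on the maintainer allow-list (this commit adds `manueldeprada`), and the comment starts with one of the run-slow triggers. A minimal Python sketch of the same logic, for illustration only; the real expression is evaluated by GitHub Actions, not Python:

```python
# Allow-list and trigger prefixes copied from the `if:` expression above.
AUTHORIZED = {
    "ydshieh", "ArthurZucker", "zucchini-nlp", "qubvel", "molbap", "gante",
    "LysandreJik", "Cyrilvallez", "Rocketknight1", "SunMarc", "muellerzr",
    "eustlb", "MekkCyber", "manueldeprada",
}
TRIGGERS = ("run-slow", "run slow", "run_slow")

def should_run(issue_state: str, actor: str, comment_body: str) -> bool:
    """True only if the issue is open, the actor is allow-listed,
    and the comment starts with a trigger phrase."""
    return (
        issue_state == "open"
        and actor in AUTHORIZED
        and comment_body.startswith(TRIGGERS)
    )

print(should_run("open", "manueldeprada", "run-slow models"))  # True after this commit
print(should_run("open", "stranger", "run-slow models"))       # False: not allow-listed
```

Note that `str.startswith` accepts a tuple of prefixes, which mirrors the three `startsWith(...)` clauses OR-ed together in the workflow expression.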

README.md

Lines changed: 6 additions & 1 deletion
@@ -98,7 +98,12 @@ Install Transformers from source if you want the latest changes in the library o
 ```shell
 git clone https://github.com/huggingface/transformers.git
 cd transformers
+
+# pip
 pip install .[torch]
+
+# uv
+uv pip install .[torch]
 ```
 
 ## Quickstart
@@ -120,7 +125,7 @@ To chat with a model, the usage pattern is the same. The only difference is you
 > [!TIP]
 > You can also chat with a model directly from the command line.
 > ```shell
-> transformers chat --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct
+> transformers chat Qwen/Qwen2.5-0.5B-Instruct
 > ```
 
 ```py

docker/transformers-all-latest-gpu/Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -71,6 +71,9 @@ RUN python3 -m pip install --no-cache-dir g2p-en
 # For Some bitsandbytes tests
 RUN python3 -m pip install --no-cache-dir einops
 
+# `kernels` may give different outputs (within 1e-5 range) even with the same model (weights) and the same inputs
+RUN python3 -m pip uninstall -y kernels
+
 # When installing in editable mode, `transformers` is not recognized as a package.
 # this line must be added in order for python to be aware of transformers.
 RUN cd transformers && python3 setup.py develop
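The new Dockerfile comment says `kernels` can shift outputs within a 1e-5 range even for identical weights and inputs, which is why the package is uninstalled before running CI. Drift of that size is exactly what an absolute-tolerance comparison catches; a minimal sketch (illustrative, not part of this commit):

```python
def outputs_close(a, b, atol=1e-5):
    """Element-wise absolute-tolerance check: True if every pair of
    values differs by at most `atol` (here, the 1e-5 range the
    Dockerfile comment mentions)."""
    return all(abs(x - y) <= atol for x, y in zip(a, b))

# 3e-6 drift: within the 1e-5 tolerance
print(outputs_close([0.123456, 0.5], [0.123459, 0.5]))  # True
# 1e-3 drift: outside the tolerance, would flag a real mismatch
print(outputs_close([0.123, 0.5], [0.124, 0.5]))        # False
```

CI tests that compare exact expected outputs cannot tolerate even this small drift, hence removing the package rather than loosening every test.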

docker/transformers-pytorch-deepspeed-latest-gpu/Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -45,6 +45,9 @@ RUN python3 -m pip uninstall -y deepspeed
 # TODO: Find out why test fail.
 RUN DS_BUILD_CPU_ADAM=1 DS_BUILD_FUSED_ADAM=1 python3 -m pip install deepspeed --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check 2>&1
 
+# `kernels` may give different outputs (within 1e-5 range) even with the same model (weights) and the same inputs
+RUN python3 -m pip uninstall -y kernels
+
 # When installing in editable mode, `transformers` is not recognized as a package.
 # this line must be added in order for python to be aware of transformers.
 RUN cd transformers && python3 setup.py develop

docker/transformers-pytorch-deepspeed-nightly-gpu/Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -57,6 +57,9 @@ RUN python3 -m pip uninstall -y deepspeed
 #RUN git clone https://github.com/pytorch/TensorRT.git
 #RUN cd TensorRT/py && python3 setup.py install --fx-only
 
+# `kernels` may give different outputs (within 1e-5 range) even with the same model (weights) and the same inputs
+RUN python3 -m pip uninstall -y kernels
+
 # When installing in editable mode, `transformers` is not recognized as a package.
 # this line must be added in order for python to be aware of transformers.
 RUN cd transformers && python3 setup.py develop

docker/transformers-pytorch-gpu/Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -28,6 +28,9 @@ RUN python3 -m pip uninstall -y tensorflow flax
 RUN python3 -m pip install --no-cache-dir git+https://github.com/facebookresearch/detectron2.git pytesseract
 RUN python3 -m pip install -U "itsdangerous<2.1.0"
 
+# `kernels` may give different outputs (within 1e-5 range) even with the same model (weights) and the same inputs
+RUN python3 -m pip uninstall -y kernels
+
 # When installing in editable mode, `transformers` is not recognized as a package.
 # this line must be added in order for python to be aware of transformers.
 RUN cd transformers && python3 setup.py develop

docker/transformers-quantization-latest-gpu/Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -90,6 +90,9 @@ RUN python3 -m pip install --no-cache-dir "auto-round>=0.5.0"
 # Add transformers in editable mode
 RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch]
 
+# `kernels` may give different outputs (within 1e-5 range) even with the same model (weights) and the same inputs
+RUN python3 -m pip uninstall -y kernels
+
 # When installing in editable mode, `transformers` is not recognized as a package.
 # this line must be added in order for python to be aware of transformers.
 RUN cd transformers && python3 setup.py develop

docs/source/en/_toctree.yml

Lines changed: 5 additions & 1 deletion
@@ -39,6 +39,8 @@
     title: Tokenizers
   - local: image_processors
     title: Image processors
+  - local: video_processors
+    title: Video processors
   - local: backbones
     title: Backbones
   - local: feature_extractors
@@ -362,7 +364,9 @@
     title: Feature Extractor
   - local: main_classes/image_processor
     title: Image Processor
-  title: Main classes
+  - local: main_classes/video_processor
+    title: Video Processor
+  title: Main Classes
 - sections:
   - sections:
     - local: model_doc/albert

docs/source/en/conversations.md

Lines changed: 7 additions & 1 deletion
@@ -27,7 +27,7 @@ This guide shows you how to quickly start chatting with Transformers from the co
 
 ## transformers CLI
 
-Chat with a model directly from the command line as shown below. It launches an interactive session with a model. Enter `clear` to reset the conversation, `exit` to terminate the session, and `help` to display all the command options.
+After you've [installed Transformers](./installation.md), chat with a model directly from the command line as shown below. It launches an interactive session with a model, with a few base commands listed at the start of the session.
 
 ```bash
 transformers chat Qwen/Qwen2.5-0.5B-Instruct
@@ -37,6 +37,12 @@ transformers chat Qwen/Qwen2.5-0.5B-Instruct
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/transformers-chat-cli.png"/>
 </div>
 
+You can launch the CLI with arbitrary `generate` flags, with the format `arg_1=value_1 arg_2=value_2 ...`
+
+```bash
+transformers chat Qwen/Qwen2.5-0.5B-Instruct do_sample=False max_new_tokens=10
+```
+
 For a full list of options, run the command below.
 
 ```bash

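The `arg_1=value_1` format documented in the conversations.md hunk suggests a simple key=value parsing step before the flags reach `generate`. A hypothetical sketch of how such tokens could be turned into kwargs; the function name and the literal-evaluation fallback are assumptions for illustration, not the library's actual implementation:

```python
import ast

def parse_generate_flags(tokens):
    """Parse 'key=value' CLI tokens into a kwargs dict, interpreting
    Python-style literals (False, 10, 0.7) where possible and keeping
    anything else as a raw string."""
    kwargs = {}
    for token in tokens:
        key, _, raw = token.partition("=")
        try:
            kwargs[key] = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            kwargs[key] = raw  # e.g. a model name or other bare string
    return kwargs

print(parse_generate_flags(["do_sample=False", "max_new_tokens=10"]))
# {'do_sample': False, 'max_new_tokens': 10}
```

Using `ast.literal_eval` rather than `eval` keeps the parsing safe: only Python literals are accepted, never arbitrary expressions.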