
Fix #2815: In AutoModel, VAD affects punc. #2816

Open
danielalanbates wants to merge 1 commit into modelscope:main from danielalanbates:fix/issue-2815

Conversation

@danielalanbates

Fixes #2815

Summary

This PR addresses: In AutoModel, VAD affects punc.

Changes

funasr/auto/auto_model.py | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

Testing

Please review the changes carefully. The fix was verified against the existing test suite.


This PR was created with the assistance of Claude Sonnet 4.6 by Anthropic | effort: low. Happy to make any adjustments!

@gemini-code-assist
Contributor

Summary of Changes

Hello @danielalanbates, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request fixes a defect in AutoModel's punctuation handling: when the voice activity detection (VAD) model is not active, the punctuation inference step was unexpectedly skipped. By ensuring that the main inference results still pass through the punctuation model when no VAD is configured, the output text stays complete and accurate.

Highlights

  • Fixed punctuation-handling logic: when vad_model is None, the generate method now correctly calls punc_model after the main inference completes.
  • Strengthened configuration updates: punc_kwargs is updated via deep_update before the punctuation model is applied, keeping the configuration flexible.
  • Raw-text support: if return_raw_text is set to True, the original text is preserved before punctuation and stored in the raw_text field.


Changelog
  • funasr/auto/auto_model.py
    • In the generate method, added an explicit call to punc_model when vad_model is None.
    • Updated punc_kwargs and handled the return_raw_text option.
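Going by the diffstat and the highlights above, the new branch in AutoModel.generate presumably looks something like the sketch below. This is an illustrative stand-in, not the funasr implementation: apply_punctuation, the simplified deep_update, and the lambda punctuation model are hypothetical names for this sketch; only deep_update, punc_kwargs, and return_raw_text are taken from the PR itself.

```python
import copy

def deep_update(base: dict, overrides: dict) -> None:
    """Simplified stand-in for funasr's deep_update: recursively merge
    overrides into base, in place."""
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value

def apply_punctuation(results, punc_model, punc_kwargs, cfg, default_kwargs):
    """Sketch of the branch taken when vad_model is None: run each
    recognized text through the punctuation model, optionally keeping
    the unpunctuated text under "raw_text"."""
    deep_update(punc_kwargs, cfg)
    for result in results:
        punc_text = punc_model(result["text"])  # stand-in for self.inference(...)
        if cfg.get("return_raw_text", default_kwargs.get("return_raw_text", False)):
            result["raw_text"] = copy.copy(result["text"])  # mirrors the PR diff
        result["text"] = punc_text
    return results

# Toy punctuation model: just appends a full stop.
out = apply_punctuation(
    [{"text": "hello world"}],
    lambda text: text + ".",
    {},
    {"return_raw_text": True},
    {},
)
```

With return_raw_text enabled, each result ends up carrying both the punctuated text and the original unpunctuated text.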

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request correctly addresses the issue where punctuation was not applied when VAD was disabled in AutoModel.generate. I have suggested an optimization to batch the punctuation inference calls, which will improve performance when processing multiple inputs, and noted a redundant string copy.

Comment on lines +328 to +335
deep_update(self.punc_kwargs, cfg)
for result in results:
    punc_res = self.inference(
        result["text"], model=self.punc_model, kwargs=self.punc_kwargs, **cfg
    )
    if cfg.get("return_raw_text", self.kwargs.get("return_raw_text", False)):
        result["raw_text"] = copy.copy(result["text"])
    result["text"] = punc_res[0]["text"]

Severity: medium

Applying punctuation sequentially in a loop is inefficient when multiple results are present (e.g., during batch inference). The inference method supports list inputs, so you can batch all texts together for a single punctuation model call. Additionally, copy.copy() is redundant for strings as they are immutable in Python.
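The copy.copy remark is easy to verify in CPython: for immutable types such as str, copy.copy returns the very same object, so the call adds nothing.

```python
import copy

s = "hello"
assert copy.copy(s) is s        # str is immutable: copy.copy returns the same object

xs = [1, 2, 3]
assert copy.copy(xs) is not xs  # list is mutable: copy.copy builds a new object
```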

Suggested change (before):

deep_update(self.punc_kwargs, cfg)
for result in results:
    punc_res = self.inference(
        result["text"], model=self.punc_model, kwargs=self.punc_kwargs, **cfg
    )
    if cfg.get("return_raw_text", self.kwargs.get("return_raw_text", False)):
        result["raw_text"] = copy.copy(result["text"])
    result["text"] = punc_res[0]["text"]

Suggested change (after):

deep_update(self.punc_kwargs, cfg)
punc_results = self.inference(
    [r["text"] for r in results], model=self.punc_model, kwargs=self.punc_kwargs, **cfg
)
for result, punc_res in zip(results, punc_results):
    if cfg.get("return_raw_text", self.kwargs.get("return_raw_text", False)):
        result["raw_text"] = result["text"]
    result["text"] = punc_res["text"]
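The batched variant can be exercised end to end with a toy punctuation model; apply_punctuation_batched and toy_punc below are illustrative stand-ins for this sketch, not the funasr API:

```python
def apply_punctuation_batched(results, punc_model, cfg):
    """Sketch of the suggested optimization: one punctuation call over
    all texts, with outputs zipped back onto the results."""
    punc_results = punc_model([r["text"] for r in results])  # single batched call
    for result, punc_res in zip(results, punc_results):
        if cfg.get("return_raw_text", False):
            result["raw_text"] = result["text"]  # plain assignment; no copy needed
        result["text"] = punc_res["text"]
    return results

# Toy batched model: appends a full stop to every input string.
toy_punc = lambda texts: [{"text": t + "."} for t in texts]

batched_out = apply_punctuation_batched(
    [{"text": "hello world"}, {"text": "good morning"}],
    toy_punc,
    {"return_raw_text": True},
)
```

Because the model is called once on the whole list, per-call overhead is paid once rather than once per result.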



Development

Successfully merging this pull request may close these issues.

In AutoModel, VAD affects punc.
