
Add MmapUseMlockIgnoreErrors as default load mode when creating create_text_llm_runner #17613

Merged
psiddh merged 4 commits into main from android_improv
Feb 24, 2026

Conversation

@psiddh
Contributor

@psiddh psiddh commented Feb 22, 2026

…e_text_llm_runner

Users can still override the default mode

Copilot AI review requested due to automatic review settings February 22, 2026 04:17
@pytorch-bot

pytorch-bot Bot commented Feb 22, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17613

Note: Links to docs will display an error until the docs builds have been completed.

❌ 7 New Failures, 1 Unrelated Failure

As of commit 496a321 with merge base 9a58ce8:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was also present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Feb 22, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

This PR adds a new optional parameter load_mode to the create_text_llm_runner factory functions, with a default value of Module::LoadMode::MmapUseMlockIgnoreErrors (previously, the load mode was hardcoded to File). This change allows users to customize the memory loading strategy while providing a more efficient default for LLM workloads.

Changes:

  • Added load_mode parameter to both overloads of create_text_llm_runner with default value MmapUseMlockIgnoreErrors
  • Updated function implementation to use the new parameter when constructing Module instances
  • Removed hardcoded Module::LoadMode::File usage in module construction

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

File Description
extension/llm/runner/llm_runner_helper.h: Added load_mode parameter with default value to both create_text_llm_runner function declarations
extension/llm/runner/llm_runner_helper.cpp: Implemented parameter propagation and replaced hardcoded LoadMode::File with configurable load_mode parameter


      float temperature = -1.0f,
-     const std::string& method_name = "forward");
+     const std::string& method_name = "forward",
+     Module::LoadMode load_mode = Module::LoadMode::MmapUseMlockIgnoreErrors);
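A minimal, self-contained sketch of why the defaulted parameter is source-compatible. The Runner struct and function body here are hypothetical stand-ins; only the LoadMode enumerator names are taken from ExecuTorch's Module:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Stand-in for executorch::extension::Module::LoadMode; the real enum
// lives on Module, with these enumerator names.
enum class LoadMode { File, Mmap, MmapUseMlock, MmapUseMlockIgnoreErrors };

// Hypothetical stand-in for the runner object the factory returns.
struct Runner {
  LoadMode mode;
};

// Mirrors the PR's signature change: existing callers that omit
// load_mode now get MmapUseMlockIgnoreErrors instead of the old
// hardcoded LoadMode::File, and can still pass a mode explicitly.
std::unique_ptr<Runner> create_text_llm_runner(
    const std::string& model_path,
    LoadMode load_mode = LoadMode::MmapUseMlockIgnoreErrors) {
  (void)model_path;  // A real implementation would construct a Module here.
  return std::unique_ptr<Runner>(new Runner{load_mode});
}
```

Call sites written before this PR keep compiling unchanged and silently pick up the new default, which is what makes the change non-breaking.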
Contributor


Why ignore errors variant over MmapUseMlock?

Contributor Author

@psiddh psiddh Feb 23, 2026


The codebase uses only MmapUseMlockIgnoreErrors, never MmapUseMlock, for LLM runners, so I feel that is the right default.

  • MmapUseMlock: mlock failure → logs error, unmaps the pages, returns Error::NotSupported.
    The model fails to load entirely. (Stricter!)
  • MmapUseMlockIgnoreErrors: mlock failure → logs at Debug level and continues. The model
    loads via normal mmap, just without pages pinned in RAM.

For LLM runners, a hard failure is almost never the right behavior; failing to load the model
at all is a worse experience for the end user than running without pinned pages.

By using mmap-based loading in our LLM runners, we avoid loading the entire model into RAM
upfront, which reduces peak memory usage and OOM risk. The MmapUseMlockIgnoreErrors variant
additionally attempts to pin pages in memory for better inference latency, but gracefully
falls back to standard mmap if the system can't support it, giving us the best of both
worlds without hard failures.
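The fallback described above can be sketched with plain POSIX calls. This is an illustration of the strategy only, not ExecuTorch's actual loader; the function name is made up for the example:

```cpp
#include <cassert>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Illustration of the MmapUseMlockIgnoreErrors strategy: mmap the
// file, try to pin its pages with mlock(), and on mlock() failure
// keep the plain mapping instead of failing the whole load.
void* mmap_with_mlock_fallback(const char* path, size_t* out_size) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) return nullptr;

  struct stat st;
  if (fstat(fd, &st) != 0 || st.st_size == 0) {
    close(fd);
    return nullptr;
  }

  void* addr =
      mmap(nullptr, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  close(fd);  // The mapping remains valid after the fd is closed.
  if (addr == MAP_FAILED) return nullptr;

  if (mlock(addr, (size_t)st.st_size) != 0) {
    // The stricter MmapUseMlock would munmap() and fail here; the
    // IgnoreErrors variant just logs and continues unpinned.
    fprintf(stderr, "debug: mlock failed, continuing with plain mmap\n");
  }

  *out_size = (size_t)st.st_size;
  return addr;
}
```

Either way the pages are demand-paged from the file, so peak RSS stays bounded; mlock success only decides whether they can later be evicted.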

Member


Realistically, I think that we're unlikely to actually be able to lock the entire LLM PTE on most systems, so using the base mmap might be better? The mlock ignore errors variant seems functionally fine, though. It'll just fall through to mmap pretty much 100% of the time.
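The intuition above can be checked on a given system: the RLIMIT_MEMLOCK soft limit caps how much mlock() can pin, and on many Linux distributions it defaults to just a few MiB, far below a typical LLM .pte file. The helper below is illustrative and not part of the PR:

```cpp
#include <cassert>
#include <sys/resource.h>

// Returns the number of bytes mlock() may pin for this process,
// -2 if unlimited, or -1 on error. When this is smaller than the
// model file, mlock() on the full mapping fails and the
// IgnoreErrors fallback leaves the model as a plain mmap.
long long memlock_limit_bytes() {
  struct rlimit rl;
  if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) return -1;
  if (rl.rlim_cur == RLIM_INFINITY) return -2;
  return (long long)rl.rlim_cur;
}
```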

Comment thread extension/llm/runner/llm_runner_helper.h
Added load_mode parameter to create_text_llm_runner function for specifying loading strategy of the model file.
Copilot AI review requested due to automatic review settings February 23, 2026 18:46
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.



Comment thread extension/llm/runner/llm_runner_helper.h Outdated
Comment thread extension/llm/runner/llm_runner_helper.h Outdated
Copilot AI review requested due to automatic review settings February 23, 2026 18:52
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.



Comment thread extension/llm/runner/llm_runner_helper.h Outdated
Updated documentation for load_mode parameter to clarify its default behavior.
Copilot AI review requested due to automatic review settings February 23, 2026 18:58
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.



@psiddh psiddh requested a review from GregoryComer February 23, 2026 21:14
@psiddh psiddh merged commit a0b4408 into main Feb 24, 2026
199 of 207 checks passed
@psiddh psiddh deleted the android_improv branch February 24, 2026 04:06