
feat(mobx): make 2022.3 @computed decorator lazy #4639

Open
ashishkr96 wants to merge 7 commits into mobxjs:main from ashishkr96:fix/lazy-computed-decorator-4616

Conversation

@ashishkr96

Summary

Fixes #4616.

The 2022.3 @computed getter decorator currently creates a ComputedValue eagerly via addInitializer for every decorated getter on every instance — even getters that are never read. On classes with several @computed getters across many instances this is wasted allocation and construction work.

This change defers ComputedValue creation to the first read of the decorated getter:

  • decorate_20223_ in computedannotation.ts now stores a small factory closure on the admin's new lazyComputedKeys_ map instead of constructing the ComputedValue upfront.
  • ObservableObjectAdministration.materializeLazyComputed_(key) lazily instantiates the ComputedValue and inserts it into values_ on demand. It is called from getObservablePropValue_ / setObservablePropValue_ (so the existing decorator getter wrapper Just Works) and from getAtom (so observe(o, key, ...) materialises before reading the underlying observable).
  • isObservableProp checks lazyComputedKeys_ in addition to values_ so a decorated computed continues to report as observable before its first read.

The legacy makeObservable-based path (defineComputedProperty_) is unchanged.

Test plan

  • yarn test (1036 passing, 14 skipped — full suite green)
  • yarn test:types
  • New regression tests in stage3-decorators.ts:
    • 4616 - @computed decorator should be lazy — asserts ComputedValue is not created until first read while isObservableProp still returns true, and an unused getter never runs.
    • 4616 - observe on @computed before first read materialises it — asserts observe(o, key) works for a lazy computed before any direct read.

ComputedValue is now created on first read of the decorated getter
instead of eagerly during instance construction, so getters that are
never accessed on a given instance no longer allocate a ComputedValue.

Closes mobxjs#4616
@changeset-bot

changeset-bot Bot commented Apr 30, 2026

🦋 Changeset detected

Latest commit: dec5f98

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:

| Name | Type  |
| ---- | ----- |
| mobx | Minor |


@ashishkr96
Author

@kubk Can you please review this?

@mweststrate
Member

mweststrate commented May 1, 2026 via email

Member

@mweststrate left a comment


Hey @ashishkr96!

Thanks for this PR. The implementation looks semantically correct, so thanks for taking the effort! I left some comments on how I think this could be faster (since map lookups are relatively expensive).

The whole premise of this PR, however, is "On classes with several @computed getters across many instances this is wasted allocation and construction work." Since closures are not free either (they capture several pieces of data in this PR), and there is now an extra map lookup on every hot path of reading an observable, do you have any measurements of the savings actually achieved by this PR, either in memory pressure or runtime speed? For example, can you notice a difference when running `yarn perf proxy` in `packages/mobx`? (Or is there a benchmark missing there that would demonstrate the difference?)

Thanks!

return this.values_.get(key)!.get()
}

materializeLazyComputed_(key: PropertyKey): boolean {
Member


It looks like the return value is never used, so change the return type to void?

Member


Or better, since this is on a super hot path: would it make sense to first look up the entry in values_ and return it, and otherwise run the factory, store its result, and return it (to avoid another .get)? This should reduce the number of map lookups in the common scenario where the initializer already ran (or the key is not a computed) from 2 to 1.

Author

@ashishkr96 commented May 4, 2026


Done in a27ad5b. materializeLazyComputed_ now returns the freshly-built ComputedValue (or undefined when the key isn't lazy), and the callers do:

```ts
const observable = this.values_.get(key) ?? this.materializeLazyComputed_(key)
```

So the hot path (computed already materialised, plain observable, or non-lazy class) is one Map lookup. Same fix applied to setObservablePropValue_ and getAtom in type-utils.ts. Boolean return is gone too.

ashishkr96 added a commit to ashishkr96/mobx that referenced this pull request May 4, 2026
…mark

Address review on mobxjs#4639:

- getObservablePropValue_ / setObservablePropValue_ / getAtom now do
  values_.get(key) ?? materializeLazyComputed_(key), so the hot path
  (computed already materialised, plain observable, or non-lazy class)
  is a single Map lookup instead of two.
- materializeLazyComputed_ now returns the freshly-built ComputedValue
  (or undefined when the key isn't lazy) so callers don't have to look
  it up again. Drops the unused boolean return.
- Add __tests__/perf/lazy-computed-decorator.ts (run via
  yarn perf-decorators) exercising the @computed decorator path with
  50k instances * 10 getters where only 1 is read.

Benchmark (50k instances, 10 @computed getters each, 1 getter read per
instance):

  main (no lazy):   construct +305 MB heap, 316-345 ms
  PR (lazy, 2 lk):  construct +153 MB heap, 233-270 ms
  PR (lazy, 1 lk):  construct +153 MB heap, 245-292 ms

Lazy halves construction-time heap (-50%) and is ~25% faster to
construct vs. main; the 1-lookup refactor matches the 2-lookup variant
within run-to-run noise on this microbenchmark while removing the
extra Map lookup the reviewer flagged on the hot path.
@ashishkr96
Author

ashishkr96 commented May 4, 2026

Hey @mweststrate, thanks for the review!

Both points addressed in a27ad5b:

1. Hot-path collapse (your inline suggestion)

materializeLazyComputed_ now returns the freshly-built ComputedValue (or undefined when the key isn't lazy), and call sites do:

```ts
const observable = this.values_.get(key) ?? this.materializeLazyComputed_(key)
```

Hot path is back to one Map.get (computed already materialised, plain observable, or non-lazy class). Applied to getObservablePropValue_, setObservablePropValue_, and getAtom in type-utils.ts. The unused boolean return is gone.

2. Benchmarks

Existing yarn perf proxy doesn't exercise the @computed decorator path (it uses mobx.computed(fn, { context }) directly), so I wrote a small standalone benchmark on the side to validate the lazy premise — kept it out of the PR diff since it'd add a tsc step to the perf workflow; happy to upstream it under __tests__/perf/ if you'd like.

Shape: 50 000 instances of a class with 10 @computed getters where only one is read per instance (the realistic "wide class, sparse access" from #4616). Linux, Node 24, --expose-gc, 3 timed runs after one warmup:

| state | construct heap | construct time | first-read time | re-read 5× |
| --- | --- | --- | --- | --- |
| main (no lazy) | +304.9 MB | 316–345 ms | 52–54 ms | 131–200 ms |
| PR original (lazy, 2 lookups) | +153.3 MB | 233–270 ms | 36–40 ms | 140–149 ms |
| PR refactored (lazy, 1 lookup) | +153.3 MB | 245–292 ms | 36–41 ms | 142–155 ms |

So the lazy decorator's actual savings on the #4616 shape:

  • −50 % heap during construction (305 → 153 MB on 50k×10 getters), since unread ComputedValues are never allocated.
  • ~25 % faster construction for the same reason.
  • The 1-lookup refactor matches the 2-lookup variant within run-to-run noise on this microbenchmark — the closure + extra Map probe weren't free, but with the refactor the steady-state read path is identical to main's.

I also ran yarn perf proxy 3× on the refactored branch — all 37 tests pass and the headline numbers (order system, array loop, computed memoization, etc.) are well within run-to-run jitter, which is expected since none of those tests touch the decorator path.

Happy to tweak the benchmark shape (different instances × getters × reads mix, or a runInAction path) — and let me know if you'd prefer the benchmark file checked in.
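For context, the "wide class, sparse access" shape can be sketched as a scaled-down harness. The Wide class and all numbers here are illustrative stand-ins, not the PR's actual benchmark file; the heap figure is only meaningful when a GC hook is available (Node's --expose-gc) and the process global exists, falling back to 0 otherwise:

```typescript
// Scaled-down sketch of the benchmark shape: many instances, each with
// `getters` lazy slots, of which only `reads` are ever touched.
const gc: () => void = (globalThis as any).gc ?? (() => {})
const heapUsed = (): number =>
    (globalThis as any).process?.memoryUsage?.().heapUsed ?? 0

class Wide {
    // undefined marks an unbuilt slot; building is deferred to first read.
    cache: Array<number | undefined>
    constructor(getters: number) {
        this.cache = new Array(getters).fill(undefined)
    }
    read(i: number): number {
        // `??` only falls through for unbuilt slots (never for cached 0).
        return this.cache[i] ?? (this.cache[i] = i * 2)
    }
}

function bench(instances: number, getters: number, reads: number) {
    gc()
    const h0 = heapUsed()
    const t0 = Date.now()
    const objs = Array.from({ length: instances }, () => new Wide(getters))
    const constructMs = Date.now() - t0

    const t1 = Date.now()
    for (const o of objs) for (let i = 0; i < reads; i++) o.read(i)
    const firstReadMs = Date.now() - t1

    gc()
    return { constructMs, firstReadMs, heapDeltaBytes: heapUsed() - h0 }
}
```

Scaling `instances`, `getters`, and `reads` reproduces the sparse (1 of 10 read) versus full (all read) access mixes discussed above.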

@ashishkr96 ashishkr96 force-pushed the fix/lazy-computed-decorator-4616 branch from b64cfca to a27ad5b Compare May 4, 2026 16:29
@ashishkr96 ashishkr96 requested a review from mweststrate May 4, 2026 16:33
@mweststrate
Member

@ashishkr96 these benchmarks look great, better than I had intuitively expected! Yes, please check in the benchmark. No additional benchmarks needed, I think. Also, mind marking the change as feat/minor?

After that I'll give it a final review, but thanks for working on this.

Did you experiment with doing the same for ObservableValue as well? I suspect it wouldn't be as much of a delta since they have less administration, but intrigued now :)

@mweststrate
Member

note: the CI currently failed, but that should be fixed by doing a rebase.

ashishkr96 added a commit to ashishkr96/mobx that referenced this pull request May 6, 2026
Adds a standalone perf benchmark for the lazy `@computed` decorator (mobxjs#4639):
50k instances × 10 getters with 1 read/instance — the realistic "wide class,
sparse access" shape from mobxjs#4616. Run with `yarn perf-decorator` (requires a
prior `yarn build`).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ashishkr96 added a commit to ashishkr96/mobx that referenced this pull request May 6, 2026
Per maintainer review on mobxjs#4639: the lazy `@computed` decorator is a perf
improvement worth a minor bump rather than a patch. Reframes the entry as
`feat:` and notes the construction heap / time savings measured by the
benchmark added in the previous commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ashishkr96 added 2 commits May 6, 2026 20:13
@ashishkr96 ashishkr96 force-pushed the fix/lazy-computed-decorator-4616 branch from f034e97 to d39c2e9 Compare May 6, 2026 14:43
@ashishkr96
Author

ashishkr96 commented May 6, 2026

Did you experiment with doing the same for ObservableValue as well?

@mweststrate Yes, bigger delta than I expected. Quick exploration on a separate branch (explore/lazy-observable-4616), same wide-class/sparse-access shape (50k instances × 10 fields, 1 read per instance, 5 re-reads, Linux Node 24, --expose-gc, 3 timed runs after warmup):

| state | construct heap | construct time | first-read time | re-read 5× |
| --- | --- | --- | --- | --- |
| eager (current main behaviour) | +223.9 MB | 615.7 ms | 23.9 ms | 82.5 ms |
| lazy @observable accessor (experimental) | +40.0 MB | 82.1 ms | 24.7 ms | 23.0 ms |
| delta | −82.1% | −86.7% | within noise | −72% |

The experimental change is one line in observableannotation.ts — drop the eager initializeObservable(this, value) call from the init callback, since the get/set handlers already lazy-materialise on first access (the accessor's backing slot still holds the initial value, so desc.get.call(this) returns it correctly).

Caveats — 3 tests in stage3-decorators.ts fail with this minimal change, all the same seams the @computed PR already closed:

  • getAtom / observe(o, key) before first read — needs a materializeLazyObservable_ call mirroring materializeLazyComputed_ in getAtom.
  • isObservableProp — needs to consult a lazyObservableKeys_ set.
  • values_.keys() enumeration in inheritance tests — same union with lazy keys.

So the savings are real and the gaps are the same patterns we're already shipping for @computed. Happy to open a follow-up PR (feat(mobx): make @observable accessor lazy) once #4639 lands — the wins look meaningfully bigger than the @computed change (which was −50% heap / −25% construct on the same shape) because the eager ObservableValue + Atom per field is heavier than I'd assumed.

Benchmark file used (__tests__/perf/lazy-observable-decorator.ts) lives on the explore branch; can include it in the follow-up PR.

@ashishkr96 changed the title from "fix(mobx): make 2022.3 @computed decorator lazy" to "feat(mobx): make 2022.3 @computed decorator lazy" on May 6, 2026
@ashishkr96
Author

Quick correction on the previous comment — I had only run the sparse-access shape (1 of 10 fields read per instance), which is the best case for lazy. Re-ran with all 10 fields read per instance to find the floor:

| scenario | construct heap | construct ms | first-read ms | re-read ms |
| --- | --- | --- | --- | --- |
| SPARSE eager | 223.9 MB | 595 | 23 | 82 |
| SPARSE lazy | 40.0 MB | 82 | 25 | 23 |
| FULL eager | 223.9 MB | 551 | 84 | 367 |
| FULL lazy | 40.0 MB | 70 | 289 (+244%) | 378 |

Lazy doesn't eliminate the allocation work; it defers it from construct to first read. On the FULL shape, first-read regresses 3.4× — the deferred allocations land there. Total work (construct + first-read) is still lower for lazy on both shapes (~107 ms vs ~618 ms sparse, ~360 ms vs ~635 ms full), likely from GC/JIT effects, but the steady-state win shrinks a lot once everything has been read.

Also worth flagging: the experimental change I measured is just the one-line deletion in init; the production version would still need the getAtom/isObservableProp/values_.keys() fixups that @computed got, plus a per-read branch I haven't measured.

Net read: lazy @observable is real but the headline number (−82% heap) is a sparse-access best case, not a steady-state win. Construction-time savings are robust, runtime savings depend on access pattern. Worth a follow-up PR, but framed as "construction-time perf" rather than "general perf win."



Development

Successfully merging this pull request may close these issues.

@computed decorator should be lazy

2 participants