feat(mobx): make 2022.3 @computed decorator lazy (#4639)
Conversation
ComputedValue is now created on first read of the decorated getter instead of eagerly during instance construction, so getters that are never accessed on a given instance no longer allocate a ComputedValue. Closes mobxjs#4616
🦋 Changeset detected. Latest commit: dec5f98. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
@kubk Can you please review this?
I can take a look on Monday as well; I cleared my schedule to do OSS maintenance that I didn't get to lately.
mweststrate left a comment
Hey @ashishkr96!
Thanks for this PR. The implementation looks semantically correct, so thanks for taking this effort! I left some comments on how I think this could be faster (since map lookups are relatively expensive)
The whole premise of this PR however is "On classes with several @computed getters across many instances this is wasted allocation and construction work." Since closures are not free either (they capture several pieces of data in this PR), and there is now an extra map lookup on every hot path of reading an observable, do you have any measurements of the savings actually achieved by this PR, either in memory pressure or runtime speed? For example, can you notice a difference when running `yarn perf proxy` in packages/mobx? (Or is there a benchmark missing there that would demonstrate the difference?)
Thanks!
```ts
        return this.values_.get(key)!.get()
    }
```

```ts
    materializeLazyComputed_(key: PropertyKey): boolean {
```
It looks like the return value is never used, so change the return type to void?
Or better, since this is on a super hot path: would it make sense to first look up the entry in values_ and return it, and otherwise run the factory, store the result, and return it (avoiding another .get)? This should reduce the number of map lookups in the common scenario where the initializer already ran (or the key is not a computed) from 2 to 1.
Done in a27ad5b. materializeLazyComputed_ now returns the freshly built ComputedValue (or undefined when the key isn't lazy), and the callers do:

```ts
const observable = this.values_.get(key) ?? this.materializeLazyComputed_(key)
```

So the hot path (computed already materialised, plain observable, or non-lazy class) is one Map lookup. The same fix is applied to setObservablePropValue_ and getAtom in type-utils.ts. The boolean return is gone too.
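As a standalone sketch (plain Maps standing in for the admin's `values_` and the lazy-factory map; the names here are illustrative, not MobX's actual fields), the single-lookup read path looks like this:

```typescript
// Simplified model of the read path. `values` holds materialized entries;
// `lazyFactories` holds factories for computeds not yet constructed.
type Entry = { get: () => number }

const values = new Map<PropertyKey, Entry>()
const lazyFactories = new Map<PropertyKey, () => Entry>()

function materializeLazy(key: PropertyKey): Entry | undefined {
    const factory = lazyFactories.get(key)
    if (factory === undefined) return undefined // key is not a lazy computed
    lazyFactories.delete(key)
    const entry = factory()
    values.set(key, entry) // subsequent reads hit the fast path
    return entry
}

function read(key: PropertyKey): number {
    // hot path: a single Map lookup once the entry exists
    const entry = values.get(key) ?? materializeLazy(key)
    if (entry === undefined) throw new Error(`no observable for ${String(key)}`)
    return entry.get()
}

lazyFactories.set("total", () => ({ get: () => 7 }))
```

The first `read("total")` misses `values`, falls through to the factory, and stores the entry; every later read is one `Map.get`.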
…mark Address review on mobxjs#4639:

- getObservablePropValue_ / setObservablePropValue_ / getAtom now do `values_.get(key) ?? materializeLazyComputed_(key)`, so the hot path (computed already materialised, plain observable, or non-lazy class) is a single Map lookup instead of two.
- materializeLazyComputed_ now returns the freshly built ComputedValue (or undefined when the key isn't lazy) so callers don't have to look it up again. Drops the unused boolean return.
- Add __tests__/perf/lazy-computed-decorator.ts (run via `yarn perf-decorators`) exercising the @computed decorator path with 50k instances × 10 getters where only 1 is read.

Benchmark (50k instances, 10 @computed getters each, 1 getter read per instance):

- main (no lazy): construct +305 MB heap, 316–345 ms
- PR (lazy, 2 lookups): construct +153 MB heap, 233–270 ms
- PR (lazy, 1 lookup): construct +153 MB heap, 245–292 ms

Lazy halves construction-time heap (-50%) and is ~25% faster to construct vs. main; the 1-lookup refactor matches the 2-lookup variant within run-to-run noise on this microbenchmark while removing the extra Map lookup the reviewer flagged on the hot path.
Hey @mweststrate, thanks for the review! Both points addressed in a27ad5b:

1. Hot-path collapse (your inline suggestion)

```ts
const observable = this.values_.get(key) ?? this.materializeLazyComputed_(key)
```

The hot path is back to one Map lookup.

2. Benchmarks

Existing shape: 50 000 instances of a class with 10 @computed getters each, 1 getter read per instance.

So the lazy decorator's actual savings on the #4616 shape:

- construction heap: +305 MB → +153 MB (-50%)
- construction time: 316–345 ms → 233–292 ms (~25% faster)

I also ran `yarn perf proxy` in packages/mobx. Happy to tweak the benchmark shape if that would help.
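A toy version of that benchmark shape (assumed from the thread: many instances, 10 lazy fields each, only 1 field read per instance; counters replace heap profiling and the instance count is scaled down so it runs instantly) shows why sparse access wins:

```typescript
// Toy reproduction of the "wide class, sparse access" shape. `built` counts
// how many value objects were actually materialized across all instances.
let built = 0

class Instance {
    private fields = new Map<string, { get: () => number }>()
    private factories = new Map<string, () => { get: () => number }>()

    constructor() {
        // register 10 cheap factories; no value objects allocated yet
        for (let f = 0; f < 10; f++) {
            const n = f
            this.factories.set(`field${f}`, () => {
                built++ // one materialization per first read
                return { get: () => n * 2 }
            })
        }
    }

    read(name: string): number {
        let entry = this.fields.get(name)
        if (entry === undefined) {
            entry = this.factories.get(name)!()
            this.fields.set(name, entry)
        }
        return entry.get()
    }
}

const instances = Array.from({ length: 1000 }, () => new Instance())
// sparse access: read only one of the ten fields per instance
const sum = instances.reduce((acc, i) => acc + i.read("field3"), 0)
```

With 1 000 instances × 10 fields, an eager scheme would build 10 000 value objects; the lazy one builds only the 1 000 that are actually read.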
force-pushed from b64cfca to a27ad5b
@ashishkr96 these benchmarks look great, better than I had intuitively expected! Yes, please check in the benchmark; no additional benchmarks needed, I think. Also, mind marking the change as feat/minor? After that I'll give it a final review. Thanks for digging into this! Did you experiment with doing the same for ObservableValue as well? I suspect it wouldn't be as much of a delta since they have less administration, but I'm intrigued now :)
Note: the CI currently fails, but that should be fixed by doing a rebase.
Adds a standalone perf benchmark for the lazy `@computed` decorator (mobxjs#4639): 50k instances × 10 getters with 1 read/instance — the realistic "wide class, sparse access" shape from mobxjs#4616. Run with `yarn perf-decorator` (requires a prior `yarn build`). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per maintainer review on mobxjs#4639: the lazy `@computed` decorator is a perf improvement worth a minor bump rather than a patch. Reframes the entry as `feat:` and notes the construction heap / time savings measured by the benchmark added in the previous commit. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
force-pushed from f034e97 to d39c2e9
@mweststrate Yes, bigger delta than I expected. Quick exploration on a separate branch (
The experimental change is one line in

Caveats — 3 tests in
So the savings are real and the gaps are the same patterns we're already shipping for

Benchmark file used (
Quick correction on the previous comment — I had only run the sparse-access shape (1 of 10 fields read per instance), which is the best case for lazy. Re-ran with all 10 fields read per instance to find the floor:
Lazy doesn't eliminate the allocation work; it defers it from construct to first read. On the FULL shape, first-read regresses 3.4× — the deferred allocations land there. Total work (construct + first-read) is still less for lazy on both shapes (~107 ms vs ~618 ms sparse, ~360 ms vs ~635 ms full), likely from GC/JIT effects, but the steady-state win shrinks a lot once everything has been read. Also worth flagging: the experimental change I measured is just the one-line deletion in

Net read: lazy
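The defer-not-eliminate point can be shown with a toy counter model (hypothetical; counters stand in for the heap and timing numbers above): on the full-access shape, lazy moves the same allocation count from construction to first read.

```typescript
// Counters model where allocation work lands for eager vs lazy schemes.
type Shape = "sparse" | "full"

function run(lazy: boolean, shape: Shape) {
    let constructAllocs = 0
    let firstReadAllocs = 0
    const FIELDS = 10
    const factories: Array<() => number> = []

    // construction: eager allocates everything up front, lazy only registers factories
    for (let f = 0; f < FIELDS; f++) {
        if (lazy) {
            factories.push(() => {
                firstReadAllocs++
                return f
            })
        } else {
            constructAllocs++
        }
    }

    // first-read phase: sparse touches 1 field, full touches all 10
    const reads = shape === "sparse" ? 1 : FIELDS
    for (let f = 0; f < reads; f++) {
        if (lazy) factories[f]()
    }
    return { constructAllocs, firstReadAllocs }
}
```

On the sparse shape lazy genuinely skips 9 of 10 allocations; on the full shape the totals match and only the phase differs.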
Summary
Fixes #4616.
The 2022.3 `@computed` getter decorator currently creates a `ComputedValue` eagerly via `addInitializer` for every decorated getter on every instance — even getters that are never read. On classes with several `@computed` getters across many instances this is wasted allocation and construction work.

This change defers `ComputedValue` creation to the first read of the decorated getter:

- `decorate_20223_` in `computedannotation.ts` now stores a small factory closure on the admin's new `lazyComputedKeys_` map instead of constructing the `ComputedValue` upfront.
- `ObservableObjectAdministration.materializeLazyComputed_(key)` lazily instantiates the `ComputedValue` and inserts it into `values_` on demand. It is called from `getObservablePropValue_` / `setObservablePropValue_` (so the existing decorator getter wrapper Just Works) and from `getAtom` (so `observe(o, key, ...)` materialises before reading the underlying observable).
- `isObservableProp` checks `lazyComputedKeys_` in addition to `values_`, so a decorated computed continues to report as observable before its first read.

The legacy `makeObservable`-based path (`defineComputedProperty_`) is unchanged.

Test plan

- `yarn test` (1036 passing, 14 skipped — full suite green)
- `yarn test:types`
- `stage3-decorators.ts`:
  - `4616 - @computed decorator should be lazy` — asserts `ComputedValue` is not created until first read while `isObservableProp` still returns `true`, and an unused getter never runs.
  - `4616 - observe on @computed before first read materialises it` — asserts `observe(o, key)` works for a lazy computed before any direct read.