
Add initialHeaderSize prop to avoid overcalculating containers on first render #417

Closed
peterpme wants to merge 4 commits into LegendApp:main from peterpme:feat/initial-header-size

Conversation

@peterpme

We hit a problem where our ListHeaderComponent took up roughly 90% of the screen. The container pool is overcalculated on the initial render because the headerSize state initializes to 0, which caused a bunch of extra containers to be allocated and far too many items to render on the first frame.

Two fixes:

  • Add initialHeaderSize prop that seeds the headerSize state before the first render, so doInitialAllocateContainers uses the correct available scroll area from frame 0
  • Fix scrollBottom formula in calculateItemsInView: the previous scroll < 0 ? -scroll : 0 term cancelled out topPad when scroll=0, causing scrollBottom to equal scrollLength regardless of header size
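The effect of seeding can be sketched numerically. This is a hypothetical illustration: the formula mirrors the allocation expression quoted later in the review, but the helper name and concrete values are made up, not taken from the library.

```typescript
// Hedged sketch of how seeding headerSize changes the first-frame container
// pool. scrollLength, headerSize, scrollBuffer, and averageItemSize follow
// the names in the diff; the numbers are illustrative.
function initialContainerCount(
    scrollLength: number,
    headerSize: number,
    scrollBuffer: number,
    averageItemSize: number
): number {
    return Math.ceil((scrollLength - headerSize + scrollBuffer * 2) / averageItemSize);
}

// Without the prop, headerSize starts at 0, so the pool is oversized;
// seeding initialHeaderSize with the real header height shrinks it.
const unseeded = initialContainerCount(800, 0, 250, 60);   // 22 containers
const seeded = initialContainerCount(800, 720, 250, 60);   // 10 containers
```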

…st render

When a ListHeaderComponent takes up a significant portion of the screen,
the container count is overcalculated on first render because headerSize
initializes to 0. This causes unnecessary containers to be allocated and
too many items to render initially.

Two fixes:
- Add `initialHeaderSize` prop that seeds the headerSize state before
  the first render, so doInitialAllocateContainers uses the correct
  available scroll area from frame 0
- Fix scrollBottom formula in calculateItemsInView: the previous
  `scroll < 0 ? -scroll : 0` term cancelled out topPad when scroll=0,
  causing scrollBottom to equal scrollLength regardless of header size

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6fd20674fc


Comment on lines +42 to +43
const headerSize = peek$(ctx, "headerSize") || 0;
const numContainers = Math.ceil((((scrollLength - headerSize) + scrollBuffer * 2) / averageItemSize!) * numColumns);


P2: Clamp header-adjusted container count before allocating

When headerSize exceeds scrollLength + scrollBuffer * 2, this expression becomes <= 0. That leaves numContainers at 0 or negative: when it is 0, calculateItemsInView will skip rendering entirely, and when it is negative, helper code such as findAvailableContainers will start deriving negative container indices. A tall hero header or a small drawDistance can therefore leave the list blank on first mount; the header-adjusted viewport should be clamped before converting it to a container count.
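The suggested clamp could look like the sketch below. It reuses the names from the reviewed lines; the floor of one container per column is an assumption about a sensible minimum, not the library's actual behavior.

```typescript
// Hedged sketch of the clamp: bound the header-adjusted viewport to at
// least one average item before converting it into a container count.
function clampedContainerCount(
    scrollLength: number,
    headerSize: number,
    scrollBuffer: number,
    averageItemSize: number,
    numColumns: number
): number {
    // Clamp before dividing, so a header taller than the viewport can
    // never drive the count to zero or negative.
    const visibleLength = Math.max(scrollLength - headerSize + scrollBuffer * 2, averageItemSize);
    return Math.max(Math.ceil((visibleLength / averageItemSize) * numColumns), numColumns);
}

// A 1400px hero header on an 800px viewport would otherwise yield a
// negative count; clamped, at least one container is still allocated.
const count = clampedContainerCount(800, 1400, 250, 60, 1); // 1
```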


@peterpme force-pushed the feat/initial-header-size branch from 36a87db to 6fd2067 on April 12, 2026 14:46
…tching

Introduces LegendListDatasets — a new component that keeps multiple data
arrays (datasets) simultaneously mounted in the React tree, using
React.Activity to defer work for inactive datasets. Switching between
datasets (e.g. spot/futures/favorites tabs) is near-instant: no React
reconciliation, no re-measurement, no re-allocation.

Architecture:
- One shared ScrollView and ListHeaderComponent (rendered once)
- Each dataset gets its own StateProvider + React.Activity wrapper
- Active dataset: Activity mode="visible", Containers in normal flow
- Inactive datasets: Activity mode="hidden", wrapper is position:absolute
  height:0 overflow:hidden — items live at translateY:-9999 but stay
  mounted so their sizes are cached
- On dataset switch: parent calls activate(scrollOffset) on the new
  active layer which runs calculateItemsInView with cached sizes

New exports: LegendListDatasets, DatasetEntry, LegendListDatasetsProps
- Wire up forwardedRef with full LegendListRef imperative API
  (scrollToIndex, scrollToItem, scrollToEnd, scrollToOffset,
   scrollIndexIntoView, scrollItemIntoView, getState,
   setScrollProcessingEnabled, setVisibleContentAnchorOffset,
   flashScrollIndicators, getNativeScrollRef, etc.)
  All methods delegate to the active dataset's ctx/state via
  getCtx()/getState() on DatasetLayerHandle

- Add initialScrollIndex / initialScrollOffset support
  Applied on first activation of each dataset; subsequent
  activations preserve the current scroll position

- Add snapToIndices support
  Each DatasetLayerInner calculates snap offsets via
  updateSnapToOffsets; LegendListDatasets subscribes to the
  active dataset's snapToOffsets via listen$ and passes them
  directly to the ScrollView

- Fix onScrollHandler to use activeIndexRef instead of captured
  activeIndex so it always routes to the correct layer without
  needing activeIndex in the dependency array

Three root causes fixed:

1. DatasetLayer had no React.memo — any parent re-render would cascade
   through all dataset layers unconditionally. Wrapped in typedMemo so
   sibling layers are skipped when only one dataset's data changes.

2. Callback props (renderItem, onEndReached, keyExtractor, etc.) were in
   sharedLayerProps deps, so un-memoized inline functions in the parent
   would invalidate sharedLayerProps every render, breaking the memo.
   Stabilized via refs + stable wrapper callbacks.

3. dataVersion was shared across all datasets — bumping it triggered
   calculateItemsInView({dataChanged:true}) on every layer simultaneously,
   clearing all positions for all datasets at once. Moved to DatasetEntry
   as a per-dataset field so only the affected dataset rebuilds.
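The "refs + stable wrapper" fix in point 2 follows a common React pattern. A minimal sketch outside React, assuming only that the wrapper must keep one identity forever while always invoking the latest callback (the helper name is hypothetical):

```typescript
// Sketch of the refs + stable-wrapper pattern: `stable` never changes
// identity, so memoized consumers are not invalidated, while `setLatest`
// is called each render with the freshest inline closure.
function makeStableWrapper<A extends unknown[], R>() {
    const ref: { current?: (...args: A) => R } = {};
    return {
        setLatest(fn: (...args: A) => R) {
            ref.current = fn; // update to the latest callback each render
        },
        stable(...args: A): R {
            return ref.current!(...args); // identity stays constant
        },
    };
}

const w = makeStableWrapper<[number], number>();
w.setLatest((x) => x + 1);
const firstIdentity = w.stable;     // captured by a memoized consumer
w.setLatest((x) => x * 10);         // parent re-rendered with a new inline fn
// firstIdentity still works and now routes to the newest callback.
```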
@jmeistrich
Member

Hey @peterpme! This PR seems to contain multiple things: the header size change and also a LegendListDatasets feature? Should the datasets work be in a separate PR?

For the header size thing, I'm curious about the use case. I'm guessing that the header is 90% of the initial screen, but then you scroll down and see normal items there? In that case I think the optimization we're looking for is more for the number of items that initially render. The numContainers count is for allocating the container pool. The container pool should ideally never need to increase - the main performance optimization in LegendList is that we want to only render the array of containers once, and then we trigger individual containers to re-render while scrolling. So if that was reduced by 90% it would immediately grow when scrolling down, which would make the first scrolls slower.

I think what we really want is an optimization to render only what's visible in the first frame. So then on mount rather than the current behavior of rendering a screen's worth of items below the header, it would only render the items visible in screenSize - headerSize. Then as you scroll it would render items that come into view from there. Does that sound right?
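That proposal can be sketched numerically (a hypothetical helper with made-up values; this is not LegendList API):

```typescript
// Hedged sketch of the proposed first-frame optimization: the container
// pool stays sized for a full screen, but only the items actually visible
// below the header are rendered on mount.
function firstFrameItemCount(
    screenSize: number,
    headerSize: number,
    averageItemSize: number
): number {
    const visibleBelowHeader = Math.max(screenSize - headerSize, 0);
    return Math.ceil(visibleBelowHeader / averageItemSize);
}

// An 800px screen with a 720px header needs only 2 items on the first
// frame, even though the pool keeps room for a full screen of items.
const initialItems = firstFrameItemCount(800, 720, 60); // 2
```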

@peterpme
Author

peterpme commented Apr 15, 2026

Hey @jmeistrich, shoot sorry about that! I think I accidentally pushed to the wrong branch. I can close this out completely. Yes, the other idea I had was taking from the react navigation convo and applying it to the containers but we can ignore that for now.

That sounds exactly right! Thank you! I will look into this and put up a better PR.

@peterpme peterpme closed this Apr 15, 2026
