
How We Render Millions of Rows in React Without Crashing Chrome

👨‍💻 Matthew Hendricks
⏱️ 8 min read
#engineering #performance #react #trading

A practical architecture for high-volume market data in the browser — what actually works today, where it still struggles, and how we’re evolving Matchstick Stream.

When you try to look at real tick data in the browser, you don’t have to go very far before everything falls apart.

Somewhere between 100k and 500k rows, most React apps start to:

  • Stutter when you scroll
  • Pause for seconds when new data arrives
  • Or just freeze completely when dev tools or a screen recorder are open

Matchstick Stream — the web UI for our Rust trading engine — is my attempt to build a browser-based data browser that can live in that world without melting.

This post is a technical north star for that effort:

  • What architecture we’re using today
  • What seems to be bottlenecking real-world 60fps
  • And how we’re iterating without jumping to WebGL or WASM

Constraint: this project intentionally stays in “normal web stack” territory — React, TypeScript, DOM, Canvas. If we need WebGL/WASM later, that’ll likely be a separate matchstick-render component.


The Setup: 1M+ Ticks in a Trading UI

The core demo looks like this:

  • 1,000,000+ ticks exposed via a DataSourceManager (mock or live)
  • A virtualized DataGrid that only renders what’s on screen
  • A live chart fed by a downsampled subset of rows
  • A performance dashboard that tracks cache stats, memory estimates, and operation timings

Under the hood:

  • Data access flows through useTickStream
  • Rendering flows through DataGrid, LiveChart, and friends
  • Metrics come from usePerformanceMonitoring and PerformanceDashboard

On a clean browser with nothing else running, this feels very close to “native-terminal smooth.”

As soon as you open dev tools, record the screen, or enable heavy metrics overlays, you can feel the frame rate dip.

The rest of this post is about why — and what we’ve already put in place to push that ceiling higher.


Architecture: Virtualization, Caching, and Aggregation

Virtualization: Only Render What You See

The DataGrid uses TanStack Virtual to keep the DOM small:

  • Roughly 30–60 row elements exist at any time.
  • Row height is fixed (36px), so layout math is cheap.
  • Overscan is set to a modest value to avoid over-rendering.

Each row is a DataRow component that:

  • Is wrapped in React.memo with a custom equality function
  • Uses CSS containment (contain: layout style paint)
  • Uses content-visibility to hint that off-screen rows can be skipped

So even with a million logical rows, React only deals with a handful of DOM nodes on each frame.
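
The window math that makes this cheap is worth seeing concretely. Below is a hedged sketch of the arithmetic a virtualizer like TanStack Virtual performs internally for the fixed 36px rows described above — not the actual library code, just the idea:

```typescript
// Illustrative sketch of fixed-height virtualization math.
// ROW_HEIGHT matches the fixed 36px rows from the DataGrid.
const ROW_HEIGHT = 36;

interface VisibleRange {
  start: number; // first row index to render
  end: number;   // last row index to render (inclusive)
}

function computeVisibleRange(
  scrollTop: number,
  viewportHeight: number,
  totalRows: number,
  overscan = 5,
): VisibleRange {
  // With a fixed row height, the visible window is pure arithmetic —
  // no per-row measurement, no layout thrash.
  const firstVisible = Math.floor(scrollTop / ROW_HEIGHT);
  const lastVisible = Math.ceil((scrollTop + viewportHeight) / ROW_HEIGHT) - 1;

  // Overscan mounts a few extra rows on each side so fast scrolls reveal
  // already-rendered rows instead of blank space.
  return {
    start: Math.max(0, firstVisible - overscan),
    end: Math.min(totalRows - 1, lastVisible + overscan),
  };
}
```

For a 720px viewport, that range never exceeds a few dozen indices — regardless of whether `totalRows` is a thousand or a million.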

Caching: Infinite Query + Windowed Fetches

useTickStream wraps TanStack Query’s useInfiniteQuery:

  • Fetches windows of ticks (currently 1,000 rows per page)
  • Flattens all pages into allRows for the virtualizer
  • Prefetches additional pages as you approach the end of the current window
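
The paging arithmetic behind that is simple. Here's a minimal sketch, assuming the 1,000-row pages described above — function names are illustrative, not the actual useTickStream API:

```typescript
// Illustrative paging math for windowed fetches.
// PAGE_SIZE matches the 1,000-row pages mentioned above.
const PAGE_SIZE = 1_000;

// Which page a given absolute row index lives in.
function pageForRow(rowIndex: number): number {
  return Math.floor(rowIndex / PAGE_SIZE);
}

// Prefetch the next page once the user is within `threshold` rows
// of the end of the data loaded so far.
function shouldPrefetchNext(
  lastVisibleRow: number,
  loadedPages: number,
  threshold = 200,
): boolean {
  const loadedRows = loadedPages * PAGE_SIZE;
  return loadedRows - lastVisibleRow <= threshold;
}
```

Tuning `threshold` trades memory for smoothness: too low and fast scrolls hit unloaded pages; too high and you fetch data nobody looks at.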

There’s a dedicated cache layer (cachedDataService and marketDataCache) that:

  • Tracks cache hits/misses
  • Estimates memory usage
  • Keeps an upper bound on cached pages to avoid unbounded growth

On top of that, we have on-demand aggregation:

  • When you switch from tick view to bar view, visible ticks are aggregated into StandardBar V1 in a worker.
  • Aggregated bars are cached per (aggregationLevel, visibleIndexRange) with a small LRU-ish map.
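
To make the "LRU-ish map" concrete, here's a minimal sketch of that kind of cache — keyed by aggregation level plus visible index range, evicting the oldest entry once a cap is hit. The real marketDataCache tracks more (hit/miss counters, memory estimates); this is just the shape of the idea:

```typescript
// Minimal "LRU-ish" cache keyed by (aggregationLevel, visibleIndexRange).
// Relies on Map preserving insertion order: the first key is always
// the least recently used entry.
class AggregationCache<V> {
  private map = new Map<string, V>();
  constructor(private maxEntries = 32) {}

  private key(level: string, start: number, end: number): string {
    return `${level}:${start}-${end}`;
  }

  get(level: string, start: number, end: number): V | undefined {
    const k = this.key(level, start, end);
    const hit = this.map.get(k);
    if (hit !== undefined) {
      // Re-insert to mark this entry as most recently used.
      this.map.delete(k);
      this.map.set(k, hit);
    }
    return hit;
  }

  set(level: string, start: number, end: number, value: V): void {
    const k = this.key(level, start, end);
    this.map.delete(k);
    this.map.set(k, value);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }

  get size(): number {
    return this.map.size;
  }
}
```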

That’s the “happy path” architecture. Now let’s talk about what actually slows it down.


Where It Still Struggles (Honest Version)

Even with all of the above, there are situations where we don’t maintain a steady 60fps:

  • Browser dev tools open (especially Performance or Network tabs)
  • Screen recording running at the same time
  • Performance dashboard fully expanded and updating frequently
  • Very fast scrolls combined with active aggregation in the background

In those scenarios, you can feel:

  • Occasional dropped frames in the grid
  • Short stalls when new pages arrive while aggregation is running
  • Slight input latency when both the chart and grid are repainting at once

Some of that is simply “the cost of measuring performance in a browser.” But the code tells us there are still a few likely bottlenecks we can attack without touching WebGL/WASM.


Likely Bottleneck #1: Too Much Work on the Main Thread

Even though we push aggregation into a worker, a lot still happens on the main thread:

  • React reconciliation for the grid and chart
  • Layout and paint for the virtualized rows
  • State updates for metrics and cache stats

Two things stand out:

  1. Metrics polling

    • usePerformanceMonitoring polls every 1–2 seconds and updates React state with full performance and cache stats.
    • When the performance dashboard is open, each refresh causes React to reconcile a fairly dense stats table and several cards.
  2. Chart + grid repaint at the same time

    • App maps rows into visibleData and then down-samples again for the chart on every render.
    • When you scroll, the virtualizer updates rows frequently, which can trigger both grid and chart work concurrently.

The combination is: measurement + visualization + UI all competing on the main thread, especially visible when dev tools are also observing everything.


Likely Bottleneck #2: Per-Frame Work Inside React

The virtualization and row memoization are solid, but there are still some subtle costs:

  • DataGrid walks rows in a useEffect to compute recycling stats.
  • visibleTicks for aggregation are recomputed with useMemo across the visible virtual items.
  • Focus management and keyboard navigation sometimes trigger scrollToIndex with smooth scrolling, which can lead to extra layout passes during fast navigation.

These are all individually reasonable, but together they add pressure to the time budget of each frame, especially under heavy interaction.


Likely Bottleneck #3: Visualization Cost of the Performance Dashboard Itself

The performance dashboard is intentionally rich:

  • Draggable container with its own layout and boundary logic
  • Multiple cache cards with animated utilization bars
  • A table of operations with counts and P95/P99 metrics

When it’s expanded and updating:

  • React reconciles a couple of hundred DOM nodes on each refresh.
  • The data it renders (counters, percentages) is constantly changing.

So the tool that helps us understand performance can, ironically, eat into the same performance budget we’re trying to protect — especially when combined with dev tools or screen capture.


How We’re Evolving the Architecture (Within the Constraints)

Given the constraints (no WebGL/WASM, stick to DOM + Canvas, keep it shippable), the next stages are about tightening the hot paths rather than rewriting the stack.

Here’s the direction we’re moving in:

1. Decouple Metrics From React Reconciliation

Today, metrics polling feeds straight into React state.

Planned adjustments:

  • Move more of the metrics storage into a lightweight in-memory store that React reads less frequently.
  • Throttle or pause dashboard updates automatically when:
    • Dev tools are open, or
    • A screen recording is active, or
    • FPS falls below a threshold.
  • Provide a “headless” metrics mode for automated runs where we care about numbers, not visualizations.
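
As a sketch of that direction (names illustrative, not the shipping code): metrics land in a plain in-memory store at full rate, while subscribers — React, via useSyncExternalStore — are only notified at a throttled interval:

```typescript
// Illustrative metrics store that decouples collection from rendering.
// update() is cheap and can run at any rate; listeners fire at most
// once per minNotifyIntervalMs.
interface Metrics {
  fps: number;
  cacheHitRate: number;
}

class MetricsStore {
  private snapshot: Metrics = { fps: 0, cacheHitRate: 0 };
  private listeners = new Set<() => void>();
  private lastNotify = 0;

  constructor(
    private minNotifyIntervalMs = 1_000,
    private now: () => number = () => Date.now(), // injectable for testing
  ) {}

  // Called by the collector as often as it likes — no React involved.
  update(next: Metrics): void {
    this.snapshot = next;
    const t = this.now();
    if (t - this.lastNotify >= this.minNotifyIntervalMs) {
      this.lastNotify = t;
      this.listeners.forEach((l) => l());
    }
  }

  // The subscribe/getSnapshot pair React's useSyncExternalStore expects.
  subscribe = (listener: () => void): (() => void) => {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  };
  getSnapshot = (): Metrics => this.snapshot;
}
```

The key property: dropping the dashboard's refresh rate becomes a one-line change to the notify interval, with no change to how metrics are collected.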

2. Make the Chart Less Eager

The chart currently slices from the same rows used by the grid.

Options we’re exploring:

  • Decouple chart input from rows and feed it from a slower, batched stream of ticks.
  • Add a simple sampling pipeline (e.g. only update the chart at 10–15fps while preserving grid smoothness).
  • Gate chart updates when scroll velocity is high (prioritize scroll over chart fidelity).
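
A minimal sketch of the first two options, with illustrative names: stride-based down-sampling plus a time gate that caps chart repaints at roughly 12fps:

```typescript
// Illustrative chart feed: down-sample rows, and only allow a repaint
// when enough time has passed since the last one (~83ms ≈ 12fps).

// Keep every Nth row so the chart never receives more than maxPoints.
function downsample<T>(rows: T[], maxPoints: number): T[] {
  if (rows.length <= maxPoints) return rows;
  const stride = Math.ceil(rows.length / maxPoints);
  const out: T[] = [];
  for (let i = 0; i < rows.length; i += stride) out.push(rows[i]);
  return out;
}

// Returns a function that answers "may the chart repaint now?"
function makeChartGate(
  minIntervalMs = 83,
  now: () => number = () => Date.now(), // injectable for testing
) {
  let last = -Infinity;
  return (): boolean => {
    const t = now();
    if (t - last < minIntervalMs) return false;
    last = t;
    return true;
  };
}
```

Stride sampling is crude (a min/max or LTTB sampler preserves spikes better), but it's cheap and predictable — a reasonable first step before anything fancier.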

3. Reduce Work Per Scroll Event

The scroll path is sacred. We’re looking at:

  • Moving some of the recycling and metrics work off the scroll-critical path.
  • Ensuring aggregation can never trigger a large sync update while the user is mid-scroll.
  • Tightening virtualizer settings (overscan, estimateSize) based on actual FPS measurements rather than conservative defaults.
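
The scroll-velocity gate from the chart section applies here too. A sketch of the decision, with illustrative thresholds: estimate speed from successive scrollTop samples and defer aggregation while the user is moving fast:

```typescript
// Illustrative velocity-based gating for the scroll-critical path.
interface ScrollSample {
  scrollTop: number;
  timeMs: number;
}

// Scroll speed in pixels per millisecond between two samples.
function scrollVelocity(prev: ScrollSample, curr: ScrollSample): number {
  const dt = curr.timeMs - prev.timeMs;
  if (dt <= 0) return 0;
  return Math.abs(curr.scrollTop - prev.scrollTop) / dt;
}

// Defer heavy background work above ~2px/ms — with 36px rows at 60fps,
// that's roughly one row scrolled per frame.
function shouldDeferAggregation(velocityPxPerMs: number, threshold = 2): boolean {
  return velocityPxPerMs > threshold;
}
```

When the gate says "defer," aggregation requests queue up and run once velocity drops back under the threshold — the grid never competes with a worker round-trip mid-fling.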

4. Treat Dev Tools and “Recorded” Sessions as Separate Profiles

In practice, there are two modes:

  • Trader mode: no dev tools, no recorder, just fast UI → we want maximum fidelity.
  • Engineering/demo mode: dev tools open, screen capture running → we want stability over fidelity.

We can detect and adapt:

  • Drop chart refresh rate when we detect dev tools.
  • Disable or slow performance dashboard updates automatically under heavy load.
  • Offer a single “Low Overhead Mode” toggle that turns off everything non-essential.
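
As a sketch of the automatic path (the dev-tools and recording detection would layer on top; names and thresholds are illustrative): track recent frame times and flip into Low Overhead Mode once average FPS dips below a floor:

```typescript
// Illustrative FPS-based trigger for Low Overhead Mode.
class LowOverheadDetector {
  private frameTimes: number[] = [];
  constructor(private windowSize = 60, private minFps = 45) {}

  // Call once per frame with the frame's duration in ms
  // (e.g. the delta between requestAnimationFrame timestamps).
  recordFrame(durationMs: number): void {
    this.frameTimes.push(durationMs);
    if (this.frameTimes.length > this.windowSize) this.frameTimes.shift();
  }

  get averageFps(): number {
    if (this.frameTimes.length === 0) return 60;
    const avg =
      this.frameTimes.reduce((a, b) => a + b, 0) / this.frameTimes.length;
    return 1000 / avg;
  }

  get lowOverheadMode(): boolean {
    return this.averageFps < this.minFps;
  }
}
```

Averaging over a window (rather than reacting to a single slow frame) keeps the mode from flapping on and off during momentary spikes like GC pauses.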

Why This Still Matters (Even Without Perfect 60fps)

Even if we never hit a perfectly flat 60fps line under every possible condition, this architecture already buys us a lot:

  • You can scroll hundreds of thousands of rows without freezing the browser.
  • You can switch between ticks and aggregated bars without reloading or blowing the heap.
  • You can introspect cache behavior and memory usage live inside the same UI.

And because we’re staying in a “normal web stack”:

  • The patterns are approachable for most React teams.
  • You don’t need to be a WebGL expert to reuse the ideas.
  • The same approach can be ported to other time-series domains (logs, analytics, IoT).

If/when we need more headroom — full 60fps under stress, richer visuals — that’s where a dedicated matchstick-render layer (WebGL/WASM) will likely take over. But Matchstick Stream is intentionally the “browser-native” layer that gets us very far without that jump.


Where to Go From Here

If you’re trying to build something similar (trading UI, log explorer, metrics browser), a few practical takeaways:

  • Start with virtualization and fixed row heights. Get the DOM under control first.
  • Measure your metrics, but don’t over-visualize them. Polling plus big dashboards can easily eat your gains.
  • Batch non-critical updates. Charts and fancy overlays don’t need per-frame updates.
  • Plan for different performance profiles. Traders, engineers, and demo recordings have different needs.

We’re continuing to refine Matchstick Stream along those lines — and we’ll share more concrete before/after numbers as we tighten each bottleneck.

If you want to follow that work, the best places are:

  • The matchstick-stream docs and performance notes in the repo
  • The public roadmap and blog on matchstick.trading

And if you’re building your own high-volume data browser and want to compare notes, I’d love to hear from you.

Email: hello@matchstick.trading
GitHub Discussions: https://github.com/orgs/matchstick-trading/discussions
