React Performance: What Actually Matters

Most React performance advice is premature optimization. Here's how to find real bottlenecks and fix them without making your codebase worse.

Nordbeam Team

The React performance discourse is mostly noise. "Wrap everything in useMemo." "Never create objects in render." "React.memo all your components." This advice isn't wrong exactly—it's just premature. Most React applications don't have performance problems. And when they do, the problems are rarely where developers expect them.

We've debugged performance issues in production React applications handling thousands of concurrent users. The patterns are consistent: a handful of actual problems cause most of the slowness, and the solutions are usually simpler than the internet suggests.

The performance optimization industry thrives on fear. Framework authors present benchmarks showing millisecond differences. Conference talks showcase exotic techniques for niche problems. Twitter threads proclaim that your app is slow and you don't even know it. Most of this is noise. Real performance problems are obvious—users complain, metrics drop, the experience feels sluggish. If you need sophisticated tooling to detect a problem, users probably don't notice it either.

The First Rule: Measure Before Optimizing

The most common performance mistake isn't using the wrong optimization technique. It's optimizing things that don't need optimizing.

React is fast by default. The virtual DOM reconciliation that seemed revolutionary in 2013 is now table stakes, and React's implementation is mature. Most components render in microseconds. Shaving microseconds off microsecond operations doesn't help users.

Before touching any code, profile your application under realistic conditions. Not in development mode with React DevTools slowing everything down. Not with your blazing-fast MacBook Pro. Profile in production mode, on hardware similar to what your users have, with realistic data volumes.

The React DevTools Profiler shows component render times. But more importantly, it shows which components rendered and why. A component that renders in 0.5ms isn't a problem—unless it's rendering 200 times per second. The "why did this render" feature tells you whether the render was necessary or wasted.

Chrome's Performance tab shows the broader picture. Long tasks blocking the main thread. Layout thrashing from rapid DOM updates. JavaScript execution eating into frame budgets. These are the metrics that affect user experience, not individual component render times.

Lighthouse and Core Web Vitals provide standardized benchmarks that correlate with real user experience. LCP (Largest Contentful Paint) measures when the main content appears. INP (Interaction to Next Paint), which replaced FID (First Input Delay) as a Core Web Vital in 2024, measures responsiveness to user input. CLS (Cumulative Layout Shift) measures visual stability. These metrics matter because Google uses them for search ranking and because they genuinely reflect what users experience.

Test on real devices, not just your development machine. That M3 MacBook Pro renders everything fast. Your users might be on three-year-old Android phones with limited memory. Chrome DevTools lets you throttle CPU and network to simulate constrained environments. The simulation isn't perfect, but it's better than assuming everyone has fast hardware.

User analytics can supplement synthetic profiling. Real User Monitoring (RUM) tracks actual performance for actual users. If 90% of users have good experiences and 10% struggle, that 10% tells you something important. Maybe they're in a region with slow connections. Maybe they're on specific devices. Understanding real-world distribution guides where to invest optimization effort.

The Real Performance Problems

After profiling dozens of React applications, the problems cluster into a few categories.

Rendering Too Much

The number one cause of React slowness is rendering components that don't need to render. Parent components re-render, and their children re-render too—even when nothing about those children changed.

This isn't a flaw in React; it's a feature. By default, React re-renders children because it's usually the right thing to do, and skipping it requires React to do comparison work that might be more expensive than just re-rendering.

The problem arises when you have expensive components that re-render frequently for reasons unrelated to their content. A chart component that takes 50ms to render shouldn't re-render every time the user types in an unrelated input field.

Identifying unnecessary re-renders requires understanding React's rendering model. When a component's state changes, React re-renders that component and all its descendants. This cascade is usually fine—React is fast. But it becomes problematic when descendants are expensive, or when state changes frequently (like during typing or dragging).

The React DevTools Profiler highlights this with the "Highlight updates when components render" feature. Watch your app while interacting with it. If components far from your interaction flash with updates, you've found potential optimization targets. The question is whether those components are expensive enough to matter.

// The parent renders on every keystroke
function Dashboard({ chartData }) {
  const [search, setSearch] = useState('');

  return (
    <div>
      <input value={search} onChange={e => setSearch(e.target.value)} />
      <ExpensiveChart data={chartData} /> {/* Re-renders on every keystroke */}
    </div>
  );
}

The fix is React.memo—but only for the expensive component:

const ExpensiveChart = React.memo(function ExpensiveChart({ data }) {
  // Only re-renders when data actually changes
  return <Chart data={data} />;
});

Don't wrap everything in React.memo. The comparison has cost. For cheap components, rendering is faster than comparing. Reserve memoization for components where you've measured a problem.
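That comparison cost is concrete: by default, React.memo does a shallow equality check over the props object. A sketch of that check (equivalent in behavior, not React's exact source) looks like:

```javascript
// Sketch of the shallow prop comparison React.memo performs by default.
// Not React's literal implementation, but behaviorally equivalent.
function shallowEqual(prevProps, nextProps) {
  if (Object.is(prevProps, nextProps)) return true;

  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;

  // Every prop must be Object.is-equal for React to skip the render
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}
```

This is also why inline objects and arrow functions defeat memoization: they produce a fresh reference on every render, so the comparison fails and the component renders anyway.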

The component structure itself can prevent unnecessary renders. Move state down to the components that actually need it. If only the search input needs the search value, the search state should live in the search component, not a parent that also contains expensive children.

// Better structure: state isolated to where it's needed
function Dashboard({ chartData }) {
  return (
    <div>
      <SearchSection /> {/* Search state lives here */}
      <ExpensiveChart data={chartData} /> {/* Doesn't re-render on search */}
    </div>
  );
}

function SearchSection() {
  const [search, setSearch] = useState('');
  return <input value={search} onChange={e => setSearch(e.target.value)} />;
}

This structural approach is preferable to memoization because it's simpler and eliminates the problem rather than mitigating it. Memoization adds cognitive overhead; good component structure makes the code clearer.

Creating New References Every Render

React.memo compares props by reference. If you pass a new object or array on every render, the memoization is useless.

// Broken: new object every render, memo is useless
<MemoizedChild style={{ color: 'red' }} />

// Fixed: stable reference
// (a truly constant object can also be hoisted outside the component,
// which avoids the hook entirely)
const style = useMemo(() => ({ color: 'red' }), []);
<MemoizedChild style={style} />

The same applies to callbacks:

// Broken: new function every render
<MemoizedChild onClick={() => handleClick(id)} />

// Fixed: stable callback
const handleClick = useCallback(() => doSomething(id), [id]);
<MemoizedChild onClick={handleClick} />

This pattern matters specifically when passing to memoized children. If the child isn't memoized, useCallback and useMemo just add overhead. Don't apply them blindly.

The dependency array is where bugs hide. Omit a dependency and the callback closes over stale values. Include too many dependencies and the callback recreates too often, defeating the purpose. ESLint's exhaustive-deps rule catches most issues, but it can't catch semantic problems where technically correct dependencies still cause excessive recreation.

// Subtle bug: id changes frequently, so callback recreates often
const handleClick = useCallback(() => {
  trackClick(id);  // If id changes often, this callback isn't stable
}, [id]);

// Solution: pass id as argument instead of closing over it
const handleClick = useCallback((itemId) => {
  trackClick(itemId);
}, []);

When callbacks become complex, consider whether the component structure is right. A callback that needs many dependencies might indicate that the component is doing too much. Splitting into smaller components can simplify callbacks and reduce the need for memoization.

Long Lists Without Virtualization

Rendering 1,000 DOM nodes is slow. It doesn't matter how fast React is—the browser still has to create and manage 1,000 elements. Paint times increase. Memory usage climbs. Scrolling becomes janky.

Virtualization solves this by rendering only the visible items plus a small buffer. A list of 10,000 items might only ever render 50 DOM nodes, swapping content as the user scrolls.
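The core arithmetic every virtualizer performs is simple: derive the visible index range from the scroll offset. A minimal fixed-height sketch (libraries add estimation, measurement, and dynamic heights on top of this):

```javascript
// Minimal fixed-height windowing math: which item indices are visible,
// given the scroll position, row height, and viewport height?
function visibleRange(scrollTop, itemHeight, viewportHeight, itemCount, overscan = 3) {
  const first = Math.floor(scrollTop / itemHeight);
  const visible = Math.ceil(viewportHeight / itemHeight);

  // Overscan renders a few extra rows above and below the viewport
  // to avoid blank flashes during fast scrolling
  const start = Math.max(0, first - overscan);
  const end = Math.min(itemCount - 1, first + visible + overscan);
  return { start, end }; // inclusive index range to render
}
```

Everything outside that range simply isn't in the DOM; absolute positioning inside a tall spacer element keeps the scrollbar honest.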

import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

function VirtualList({ items }) {
  const parentRef = useRef(null);

  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 50,
  });

  return (
    <div ref={parentRef} style={{ height: 400, overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map((virtualRow) => (
          <div key={virtualRow.key} style={{
            position: 'absolute',
            top: virtualRow.start,
            left: 0,
            width: '100%',
            height: virtualRow.size,
          }}>
            {items[virtualRow.index].name}
          </div>
        ))}
      </div>
    </div>
  );
}

TanStack Virtual is our current preference—it's flexible and framework-agnostic. For simpler cases, react-window works well. The specific library matters less than understanding when to reach for virtualization: lists over 100 items, especially if items are complex.

Virtualization introduces tradeoffs. Search and find-in-page don't work as expected—you can't Ctrl+F for content that isn't in the DOM. Accessibility can be complicated—screen readers announce visible items differently than a full list. Print styling requires special handling. These tradeoffs are usually acceptable for large lists, but they're worth considering.

Variable-height items add complexity. The virtualizer needs to know heights before rendering, but heights might depend on content that varies. Estimation works for most cases—the virtualizer guesses heights initially and adjusts as items render. For highly variable content, measurement-based approaches work better but add overhead.

Horizontal virtualization and grid virtualization exist for specific use cases. Spreadsheet-style interfaces with thousands of cells need virtualization in both directions. The complexity increases, but the same principles apply: render only what's visible.

Consider whether virtualization is necessary. Sometimes the answer is pagination or infinite scroll that only loads 50 items at a time, making virtualization unnecessary. Sometimes the answer is better filtering that reduces the list to a manageable size. Virtualization solves the "too many items" problem; sometimes it's better to have fewer items.

Expensive Calculations in Components

Sometimes the problem isn't rendering—it's computation. Filtering 10,000 items. Transforming complex data structures. Running expensive aggregations.

function ProductList({ products, filters }) {
  // This runs on every render, even if products and filters haven't changed
  const filtered = products
    .filter(p => matchesFilters(p, filters))
    .sort((a, b) => a.name.localeCompare(b.name));

  return <List items={filtered} />;
}

useMemo prevents the recomputation when dependencies haven't changed:

function ProductList({ products, filters }) {
  const filtered = useMemo(
    () => products
      .filter(p => matchesFilters(p, filters))
      .sort((a, b) => a.name.localeCompare(b.name)),
    [products, filters]
  );

  return <List items={filtered} />;
}

But think before reaching for useMemo. If the computation takes 0.1ms, memoizing it probably costs more than it saves. useMemo has overhead—comparing dependencies, storing cached values. Reserve it for computations that actually show up in profiling.
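The caching model behind useMemo is easy to picture outside React: keep the last dependency array and the last result, and recompute only when some dependency fails an Object.is check. A hedged sketch of that mechanic (real useMemo additionally ties the cache to the component instance):

```javascript
// Sketch of useMemo-style caching: recompute only when a dependency
// changes under Object.is comparison. React stores this per component.
function memoizeLast(compute) {
  let lastDeps = null;
  let lastResult;

  return (deps) => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));

    if (changed) {
      lastResult = compute(...deps); // the "expensive" work happens here
      lastDeps = deps;
    }
    return lastResult;
  };
}
```

Note the overhead is visible in the sketch: every call pays for the dependency walk, and the cached result occupies memory. That's the cost you're trading against the computation you skip.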

For truly expensive computations, consider moving work off the main thread. Web Workers can handle heavy processing without blocking UI updates. The data lives in a worker, computation happens there, and results post back to the main thread. The programming model is more complex—workers communicate through messages, not shared memory—but the responsiveness improvement can be dramatic.

// Heavy computation in a Web Worker, created once at module scope
const filterWorker = new Worker('./filterWorker.js');

function ProductList({ products, filters }) {
  const [filtered, setFiltered] = useState([]);

  useEffect(() => {
    let stale = false; // ignore results that arrive after deps change
    filterWorker.onmessage = (e) => {
      if (!stale) setFiltered(e.data);
    };
    filterWorker.postMessage({ products, filters });
    return () => { stale = true; };
  }, [products, filters]);

  return <List items={filtered} />;
}

Libraries like Comlink simplify worker communication with a function-call-like API. The mental model remains "call a function and get a result," even though the work happens in a separate thread.

For computations that are expensive but not worker-worthy, debouncing can help. If filters change rapidly during typing, don't recompute on every keystroke. Wait for typing to pause, then compute. The user sees results after a brief delay rather than experiencing UI stuttering during typing.
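A trailing-edge debounce is only a few lines: hold the work until calls stop arriving for `delay` milliseconds, cancelling any previously scheduled run. A sketch (in a component you'd wrap this in a hook and clear the timer on unmount):

```javascript
// Trailing-edge debounce: fn runs once, `delay` ms after the LAST call.
// Each new call cancels the run scheduled by the previous call.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
```

In React, the pattern usually means two pieces of state: the raw input value stays immediately responsive, and only a debounced copy of it feeds the expensive filter.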

Context That Changes Too Often

React Context is convenient for passing data through component trees. But every context change triggers a re-render of every consumer. If your context value changes frequently, you might be re-rendering large portions of your app unnecessarily.

// Problem: theme rarely changes, but user changes often
// Every user update re-renders every component using this context
const AppContext = createContext({ user: null, theme: 'light' });

The fix is usually splitting contexts by update frequency:

// User context changes when user logs in/out
const UserContext = createContext(null);

// Theme context changes rarely
const ThemeContext = createContext('light');

Components subscribe only to the contexts they need. User updates don't trigger theme consumers to re-render.

Another approach is memoizing context values:

function AppProvider({ children }) {
  const [user, setUser] = useState(null);
  const [theme, setTheme] = useState('light');

  // Memoize the context value to prevent unnecessary re-renders
  const value = useMemo(
    () => ({ user, setUser, theme, setTheme }),
    [user, theme]
  );

  return <AppContext.Provider value={value}>{children}</AppContext.Provider>;
}

Without the useMemo, the context value is a new object on every render, which triggers all consumers to update even if the actual data hasn't changed.

For complex state, consider whether Context is the right tool. State management libraries like Zustand, Jotai, or Redux offer more granular subscription models. Components can subscribe to specific pieces of state rather than entire contexts. The additional dependency is often worth the performance characteristics, especially for applications with complex, frequently-changing state.

Selector patterns can optimize context consumption:

function useUserName() {
  const { user } = useContext(AppContext);
  // Only return what you need - though this doesn't prevent re-renders,
  // it makes the component's dependencies clear
  return user?.name;
}

For true subscription optimization, you need libraries that support selectors natively. React's use() hook and Server Components may change this landscape, but for now, external state management is often the answer for performance-critical applications.

The Bundle Size Problem

Performance isn't just runtime speed. It's also how long users wait for your JavaScript to download and parse.

A 2MB JavaScript bundle means users on slow connections wait seconds before the application is interactive. Users on fast connections still wait for the browser to parse and compile megabytes of code. Neither experience is good.

Code splitting helps by loading only what's needed:

const AdminDashboard = lazy(() => import('./AdminDashboard'));

function App() {
  return (
    <Routes>
      <Route path="/admin" element={
        <Suspense fallback={<Loading />}>
          <AdminDashboard />
        </Suspense>
      } />
    </Routes>
  );
}

The admin dashboard—and all its dependencies—only load when a user navigates to the admin route. Regular users never download that code.

Route-based splitting is the low-hanging fruit. Beyond that, consider splitting large libraries that aren't needed immediately. A chart library used on one page shouldn't be in the initial bundle.

Component-level splitting works for expensive components that aren't immediately visible:

// Heavy component that appears after user interaction
const RichTextEditor = lazy(() => import('./RichTextEditor'));

function CommentForm() {
  const [editing, setEditing] = useState(false);

  return editing ? (
    <Suspense fallback={<LoadingEditor />}>
      <RichTextEditor />
    </Suspense>
  ) : (
    <button onClick={() => setEditing(true)}>Write Comment</button>
  );
}

The rich text editor—often a large dependency—only loads when the user clicks to write a comment. Most users who just read don't pay the cost.

Analyze your bundle to find optimization opportunities. Tools like webpack-bundle-analyzer or vite-bundle-visualizer create treemaps showing what's in your bundle and how large each piece is. Surprising inclusions are common—a utility library that pulled in more than expected, a package that includes multiple locales when you only need one.

Tree shaking eliminates unused exports, but it only works when you import specific functions:

// Bad: imports entire library, tree shaking can't help
import _ from 'lodash';
_.map(items, fn);

// Good: imports only what's used
import map from 'lodash/map';
map(items, fn);

// Also good: ES module lodash with named imports
import { map } from 'lodash-es';
map(items, fn);

The difference can be significant—the full lodash library is around 70KB minified, while individual functions are a few KB each.

Consider whether you need the library at all. Modern JavaScript includes much of what lodash provided. Array methods, optional chaining, nullish coalescing—these often replace library calls with native code that doesn't affect bundle size.
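A few common swaps, using only built-in language features with zero bundle cost:

```javascript
// Native replacements for common lodash calls.
const items = [3, 1, 3, 2];

// _.uniq(items): a Set deduplicates while preserving insertion order
const unique = [...new Set(items)]; // [3, 1, 2]

// _.groupBy(users, u => u.role): reduce with nullish assignment (ES2021)
const users = [{ role: 'admin' }, { role: 'user' }, { role: 'admin' }];
const byRole = users.reduce((acc, u) => {
  (acc[u.role] ??= []).push(u);
  return acc;
}, {});

// _.get(obj, 'a.b.c', fallback): optional chaining + nullish coalescing
const obj = { a: { b: null } };
const value = obj.a?.b?.c ?? 'fallback';
```

None of these require a transpiler in evergreen browsers, and none of them add a single byte of dependency code.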

When Not to Optimize

React's defaults are good. Most components don't need memoization. Most computations don't need caching. Most lists don't need virtualization.

Adding optimization has costs:

  • Code complexity. useMemo and useCallback add mental overhead. Dependencies can be subtle. Bugs in dependency arrays are hard to catch.
  • Actual overhead. Memoization isn't free. Comparing dependencies and storing values costs time and memory.
  • Maintenance burden. Optimized code is harder to change. Every modification requires thinking about cache invalidation.

Optimize when profiling shows a problem. Not when code review intuition suggests one might exist someday. Premature optimization makes codebases worse without making applications better.

The React team's philosophy is instructive here. React Compiler (formerly React Forget) aims to automate memoization decisions. The goal is removing the need for developers to manually optimize with useMemo, useCallback, and React.memo. If the React team is building tooling to eliminate these patterns, maybe we shouldn't be so eager to sprinkle them through our code.

Some patterns prevent problems without adding complexity:

  • Structure components so state lives close to where it's used
  • Use stable references for objects and functions that are passed as props
  • Prefer composition over deep component hierarchies
  • Fetch data at the route level rather than deep in the tree

These aren't optimizations—they're good patterns that happen to have good performance characteristics. Following them avoids problems; later optimization addresses whatever remains.

Practical Checklist

When you have an actual performance problem:

  1. Profile in production mode. Development mode includes extra checks that slow everything down. Profile what users actually experience.

  2. Find the real bottleneck. Is it re-renders? Computation? Network? Bundle size? Different problems have different solutions.

  3. Apply targeted fixes. Memoize the specific expensive component. Virtualize the specific long list. Don't blanket-apply optimizations hoping something helps.

  4. Measure the impact. Did the fix actually help? By how much? Sometimes the profiler reveals that the "problem" was never significant.

  5. Consider the tradeoffs. Is the complexity worth the improvement? A 20% speed increase that adds 200 lines of caching logic might not be worth it.

React performance work should be boring. Find the bottleneck, apply the standard fix, measure the improvement, move on. If you're doing something clever, you're probably doing something wrong.

Server Components and the Future

React Server Components change the performance conversation. Components that render on the server don't need client-side JavaScript at all. The bundle sent to the browser shrinks. Hydration is faster because there's less to hydrate.

If you're starting a new project, frameworks like Next.js with App Router make server components the default. Many performance problems—bundle size, hydration cost, initial render speed—are solved architecturally rather than through optimization.

For existing applications, migrating to server components is significant work. The patterns are different, the mental model changes, and not all libraries are compatible. The performance benefits are real, but the migration cost must be weighed.

The Performance Culture

The best React performance comes from organizations where performance is everyone's concern, not a specialist's job.

Establish performance budgets. Bundle size under X. LCP under Y. Make these targets visible and track them in CI. When someone adds a dependency that breaks the budget, the build warns or fails.
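As one concrete option—assuming the size-limit package, one of several budget-checking tools—the budget can live in package.json and the check can run in CI:

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    {
      "path": "dist/app.js",
      "limit": "200 KB"
    }
  ]
}
```

The 200 KB figure is a placeholder; set the budget from your own measurements, then tighten it as you optimize so regressions can't creep back in.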

Profile regularly, not just when there are problems. Periodic profiling sessions catch regressions before they compound. The earlier you catch a performance problem, the easier it is to fix—you know which recent change caused it.

Learn from real users. RUM data tells you what users actually experience. Synthetic benchmarks in CI are useful but incomplete. Real users have different devices, networks, and usage patterns than your tests simulate.

Resist the temptation to pre-optimize. Build features first. Measure performance. Optimize what needs optimizing. The application that shipped with good-enough performance beats the application that never shipped because optimization was endless.

React performance isn't about tricks and techniques. It's about understanding your application, measuring what matters, and applying well-known solutions to well-identified problems. That's less exciting than the conference talks suggest, but it's what actually makes applications fast.


Having performance issues with your React application? Let's talk about what's actually causing them.