Performance optimization in React Native is frustrating because there's a long list of recommendations and most of them make no measurable difference for any given app. The advice is technically correct but practically unhelpful without context about which optimizations matter for which bottlenecks.
Over the past year, I've profiled and optimized a React Native healthcare app with 50,000+ active users. These are the changes that produced measurable improvements, in order of impact.
FlashList instead of FlatList
This was the single largest improvement. FlashList, from Shopify, is a drop-in replacement for FlatList with significantly better scroll performance.
The numbers from our app:
- FlatList: ~45 FPS during fast scroll on a mid-range Android device, with visible jank on lists with 100+ items
- FlashList: consistent 58-60 FPS on the same device and data set
The API is almost identical:
import { FlashList } from "@shopify/flash-list";

<FlashList
  data={sessions}
  renderItem={({ item }) => <SessionCard session={item} />}
  estimatedItemSize={120}
  keyExtractor={(item) => item.id}
/>
The key difference is estimatedItemSize. FlashList uses this to pre-calculate layout without measuring each item individually, which is the main source of FlatList's jank. The estimate doesn't need to be exact. Within 20% is fine.
We replaced every FlatList in the app with FlashList. Total time: about 2 hours. Total impact: scroll performance improved across every list screen.
Hermes engine
Hermes is Meta's JavaScript engine, optimized for React Native. It's now the default in new React Native projects, but older projects may still use JSC (JavaScriptCore).
The improvements:
- Cold start time: reduced by ~40% in our app (from ~3.2 seconds to ~1.9 seconds on a mid-range Android device)
- Memory usage: reduced by ~30%, because Hermes compiles JavaScript to bytecode ahead of time rather than parsing source at runtime
- Bundle size: reduced by ~20%, because the bytecode is smaller than minified JavaScript source
Enabling Hermes in an existing project:
// android/app/build.gradle
project.ext.react = [
  enableHermes: true
]
// ios/Podfile
:hermes_enabled => true
The trade-off: Hermes has historically lagged behind JSC on some JavaScript features. Proxy support and full Intl coverage arrived only in later Hermes releases, so older versions may be missing them. For most apps on a current React Native release, this isn't an issue.
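One practical check after flipping the flags: Hermes defines a `HermesInternal` global, so you can verify at runtime which engine a build is actually using.

```javascript
// HermesInternal only exists when the bundle is running on Hermes.
const isHermes = () => !!global.HermesInternal;

console.log(isHermes() ? "Running on Hermes" : "Running on JSC or another engine");
```

Worth logging once in a release build; it's easy to enable Hermes in the config and still ship JSC because of a stale Pod install or Gradle cache.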
When memo and useCallback actually help
The standard React performance advice is to wrap expensive components in React.memo and stabilize callback references with useCallback. This advice is overused.
React.memo helps when:
- The component is expensive to render (complex layout, many children)
- The parent re-renders frequently with props that haven't changed
- The component receives primitive props or stable object references
React.memo doesn't help when:
- The component is cheap to render (the overhead of the comparison exceeds the cost of re-rendering)
- The props change on every render (new objects, inline functions)
- The component is a leaf with no children
In our app, we profiled and found that memo was beneficial on exactly three components:
- SessionCard: rendered in a list of 100+ items, re-renders triggered by the parent's state changes (typing in a search field)
- LanguageSelector: contains a grid of 100+ language buttons, re-renders when any session state changes
- MessageBubble: rendered in a chat list, re-renders when new messages arrive
We removed memo from about 40 other components where it had been added "just in case." Removing it made no measurable difference, confirming that the comparison overhead and the render cost were roughly equal.
The criteria for when memo helps: is the component in a list or rendered many times? Does the parent re-render frequently? Is the render function expensive? If the answer to all three is yes, memo helps. Otherwise, it probably doesn't.
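The default comparison React.memo performs is a shallow equality check over props. A sketch of that check (illustrative, not React's actual source) shows why inline objects and functions defeat it:

```javascript
// Approximation of React.memo's default prop comparison: same keys,
// and every value identical by Object.is.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

// Stable primitive props pass, so memo can skip the re-render:
console.log(shallowEqual({ id: "a", count: 1 }, { id: "a", count: 1 })); // true

// An inline function is a new reference on every render, so the check
// fails every time and memo buys nothing:
console.log(shallowEqual({ onPress: () => {} }, { onPress: () => {} })); // false
```

This is why memo on a component that receives an inline callback is pure overhead unless the callback is stabilized with useCallback first.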
Reducing bridge crossings
Before the new architecture (JSI), every communication between JavaScript and native went through the bridge. Each crossing involves serialization, message queuing, and deserialization. Individual crossings are fast. Hundreds per frame aren't.
The common source of excessive bridge crossings: animated values that update on every frame. If you use Animated.event without useNativeDriver: true, every animation frame crosses the bridge.
// Bad: crosses the bridge on every frame
<Animated.ScrollView
  onScroll={Animated.event(
    [{ nativeEvent: { contentOffset: { y: scrollY } } }],
    { useNativeDriver: false }
  )}
/>

// Good: animation runs entirely on the native thread
<Animated.ScrollView
  onScroll={Animated.event(
    [{ nativeEvent: { contentOffset: { y: scrollY } } }],
    { useNativeDriver: true }
  )}
/>
useNativeDriver moves the animation computation to the native thread. The JavaScript thread isn't involved on each frame. The limitation: only transform and opacity can be animated natively. Layout properties (width, height, margin) can't.
For layout animations, react-native-reanimated provides a worklet-based system that runs on the native thread without the bridge.
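A minimal sketch of the worklet approach, assuming react-native-reanimated v2+ (the component name, sizes, and duration are illustrative):

```javascript
import React, { useEffect } from "react";
import Animated, {
  useSharedValue,
  useAnimatedStyle,
  withTiming,
} from "react-native-reanimated";

// Hypothetical expanding panel. width is a layout property, so the
// core Animated API can't drive it with useNativeDriver, but a
// reanimated worklet can: useAnimatedStyle runs on the UI thread.
function ExpandingPanel({ expanded }) {
  const width = useSharedValue(100);

  useEffect(() => {
    // Assigning withTiming(...) to a shared value starts the
    // animation on the UI thread; the JS thread isn't involved
    // on each frame.
    width.value = withTiming(expanded ? 300 : 100, { duration: 250 });
  }, [expanded, width]);

  const style = useAnimatedStyle(() => ({ width: width.value }));

  return <Animated.View style={[{ height: 80 }, style]} />;
}
```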
Profiling to find the actual bottleneck
The Flipper Performance plugin shows:
- JS thread FPS (should be 60)
- UI thread FPS (should be 60)
- Component render times
- Bridge message frequency
Before optimizing anything, profile. The bottleneck is often not where you expect it.
In our app, I assumed the session list was slow because of the list component. Profiling showed that the list was fine. The bottleneck was the header component above the list, which was re-rendering on every scroll event because it read from a context that updated on scroll. Moving the scroll-dependent logic into the list component and memoizing the header fixed the jank.
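The shape of that fix, sketched with hypothetical component names (the broken version, where the header read scroll position from a context, is omitted):

```javascript
import React, { useRef } from "react";
import { Animated, Text } from "react-native";

// The header no longer reads a scroll-position context; it receives
// only stable props, so React.memo actually prevents re-renders.
const ListHeader = React.memo(function ListHeader({ title }) {
  return <Text>{title}</Text>;
});

function SessionListScreen({ sessions }) {
  // The scroll-dependent value lives with the list and is driven
  // natively, instead of in a context that updates on every scroll.
  const scrollY = useRef(new Animated.Value(0)).current;

  return (
    <>
      <ListHeader title="Sessions" />
      <Animated.FlatList
        data={sessions}
        keyExtractor={(item) => item.id}
        renderItem={({ item }) => <SessionCard session={item} />}
        onScroll={Animated.event(
          [{ nativeEvent: { contentOffset: { y: scrollY } } }],
          { useNativeDriver: true }
        )}
      />
    </>
  );
}
```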
What didn't help
- Removing console.log statements: often cited as a performance fix. In production builds with Hermes, console.log is already stripped by the compiler. Removing them manually made no measurable difference.
- Using shouldComponentUpdate instead of React.memo: identical performance in our measurements. Use whichever matches your component style.
- Moving to a different state management library: we considered switching from Redux to Zustand for performance. Profiling showed Redux wasn't the bottleneck. The state updates were fast. The re-renders they triggered were the issue, and those are solved with selector optimization, not library changes.
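A minimal illustration of the selector point, with a hypothetical store shape: a component that selects a narrow, stable slice only re-renders when that slice is replaced, while a selector that builds a fresh object "changes" on every store update.

```javascript
// Hypothetical store state.
const state = {
  sessions: { list: [{ id: "a" }], loading: false },
  ui: { theme: "dark" },
};

// Bad: constructs a new object on every call, so a reference-equality
// check always sees a change and the component re-renders.
const selectEverything = (s) => ({ sessions: s.sessions, ui: s.ui });

// Good: returns the same reference until sessions.list itself is
// replaced, so subscribers skip unrelated updates.
const selectSessionList = (s) => s.sessions.list;

console.log(selectSessionList(state) === selectSessionList(state)); // true
console.log(selectEverything(state) === selectEverything(state)); // false
```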
Performance optimization is measurement, not intuition. Profile first. Change one thing. Measure again. If the number didn't move, revert and try something else.