Testing in React Native is harder than testing in React for the web. The toolchain is more fragmented. The setup is more involved. The tests are slower. Some things that are easy to test on the web (navigation, native modules, platform-specific behaviour) require significant setup in React Native.
After working with various testing approaches across multiple production apps, I've settled on a setup that balances coverage, maintainability, and developer experience.
Jest configuration
The default Jest config from React Native CLI works for simple projects. For anything larger, you need to customize it.
```js
// jest.config.js
module.exports = {
  preset: 'react-native',
  setupFilesAfterEnv: ['./jest.setup.js'],
  transformIgnorePatterns: [
    'node_modules/(?!(react-native|@react-native|@react-navigation|react-native-reanimated|react-native-gesture-handler|@shopify/flash-list)/)',
  ],
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
  },
  collectCoverageFrom: [
    'src/**/*.{ts,tsx}',
    '!src/**/*.d.ts',
    '!src/**/types.ts',
    '!src/**/*.stories.tsx',
  ],
};
```
The transformIgnorePatterns entry is critical. By default, Jest doesn't transform anything under node_modules, but React Native and many RN libraries ship untranspiled code, so you have to opt them into the transform explicitly. When a new dependency causes a "SyntaxError: Unexpected token" in tests, the fix is almost always adding it to this list.
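As the list grows, it's easier to maintain as a plain array and build the regex from it. A small sketch — the `buildTransformIgnorePattern` helper is ours, not part of the React Native preset:

```js
// List the packages that ship untranspiled code; adding a new
// dependency becomes a one-line change to this array.
const transpiledPackages = [
  'react-native',
  '@react-native',
  '@react-navigation',
  'react-native-reanimated',
  'react-native-gesture-handler',
  '@shopify/flash-list',
];

// Produce the single transformIgnorePatterns entry: everything in
// node_modules is ignored EXCEPT the packages above (negative lookahead).
function buildTransformIgnorePattern(packages) {
  return `node_modules/(?!(${packages.join('|')})/)`;
}

module.exports = { buildTransformIgnorePattern, transpiledPackages };
```

Then `transformIgnorePatterns: [buildTransformIgnorePattern(transpiledPackages)]` in jest.config.js keeps the regex and the package list from drifting apart.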
The setup file:
```js
// jest.setup.js
import '@testing-library/jest-native/extend-expect';

// Mock react-native-reanimated
jest.mock('react-native-reanimated', () => {
  const Reanimated = require('react-native-reanimated/mock');
  Reanimated.default.call = () => {};
  return Reanimated;
});

// Mock native modules that aren't available in the test environment
jest.mock('react-native/Libraries/Animated/NativeAnimatedHelper');
```
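The setup file is also the place to stub your own native modules, since they don't exist in the Node test environment. A sketch for a hypothetical AudioSession module — the module name and its methods are placeholders:

```js
// jest.setup.js (continued)
// Stub a hypothetical custom native module so any component that calls
// NativeModules.AudioSession can render under Jest.
jest.mock('react-native', () => {
  const RN = jest.requireActual('react-native');
  RN.NativeModules.AudioSession = {
    start: jest.fn().mockResolvedValue(undefined),
    stop: jest.fn().mockResolvedValue(undefined),
  };
  return RN;
});
```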
React Native Testing Library
React Native Testing Library (RNTL) tests components the way users interact with them: by finding elements by their text, accessibility labels, or test IDs, and simulating user actions.
```jsx
import { render, fireEvent } from '@testing-library/react-native';

import SessionCard from './SessionCard';

describe('SessionCard', () => {
  const session = {
    id: '1',
    language: 'Spanish',
    status: 'active',
    duration: 300,
    interpreterName: 'Maria Garcia',
  };

  it('displays session information', () => {
    const { getByText } = render(<SessionCard session={session} />);
    expect(getByText('Spanish')).toBeTruthy();
    expect(getByText('Maria Garcia')).toBeTruthy();
    expect(getByText('5:00')).toBeTruthy();
  });

  it('calls onPress with session id', () => {
    const onPress = jest.fn();
    const { getByTestId } = render(
      <SessionCard session={session} onPress={onPress} />
    );
    fireEvent.press(getByTestId('session-card'));
    expect(onPress).toHaveBeenCalledWith('1');
  });
});
```
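The `'5:00'` assertion works because the component formats `duration: 300` (seconds) for display. A formatter along these lines is assumed — a hypothetical helper, shown only to make the assertion concrete:

```js
// Hypothetical helper: format a duration in seconds as m:ss,
// the way SessionCard is assumed to render it (300 -> "5:00").
function formatDuration(totalSeconds) {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, '0')}`;
}

module.exports = { formatDuration };
```

Testing through the rendered text ("5:00") rather than the helper directly keeps the test tied to what users see, though pure helpers like this are also cheap to unit-test on their own.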
What to test:
- Does the component render the correct data?
- Do user interactions trigger the correct callbacks?
- Do conditional renders work (loading states, error states, empty states)?
What not to test:
- Implementation details (internal state, method calls)
- Styling (is the button blue? is the font 14px?)
- Third-party library behaviour (does FlatList scroll?)
MSW for API mocking
Mock Service Worker (MSW) intercepts requests at the network level. Your components make real fetch calls, and MSW returns mock responses; the component code doesn't know it's being tested.
```js
import { setupServer } from 'msw/node';
import { rest } from 'msw';

const server = setupServer(
  rest.get('/api/sessions', (req, res, ctx) => {
    return res(
      ctx.json({
        sessions: [
          { id: '1', language: 'Spanish', status: 'active' },
          { id: '2', language: 'French', status: 'completed' },
        ],
      })
    );
  }),
  rest.post('/api/sessions/:id/end', (req, res, ctx) => {
    return res(ctx.json({ success: true }));
  })
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
```
MSW is better than mocking fetch directly because:
- The mock is declarative and readable
- The component's network code is exercised fully
- You can override handlers per test for error scenarios
- You don't need to know the internal fetching implementation
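The per-test override mechanic is worth understanding: `server.use()` pushes a handler that shadows the defaults until `resetHandlers()` runs. A minimal model of that behaviour — not MSW's actual implementation, just the idea:

```js
// Conceptual model of MSW's handler stack. server.use() prepends an
// override that wins over the defaults; resetHandlers() restores them.
function createHandlerRegistry(defaultHandlers) {
  let handlers = [...defaultHandlers];
  return {
    use(handler) {
      handlers = [handler, ...handlers]; // newest handler shadows the rest
    },
    resetHandlers() {
      handlers = [...defaultHandlers]; // what afterEach() relies on
    },
    resolve(method, path) {
      return handlers.find((h) => h.method === method && h.path === path);
    },
  };
}

module.exports = { createHandlerRegistry };
```

In a real test, the error-scenario version is `server.use(rest.get('/api/sessions', (req, res, ctx) => res(ctx.status(500))))` inside the failing-path test, with the `afterEach` reset shown earlier cleaning up.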
Detox for E2E testing
Detox runs your app on a real simulator/emulator and interacts with it like a user would. Tap buttons, type text, scroll lists, verify screen content.
Setup is painful. Detox requires specific build configurations for iOS and Android, a running simulator, and careful management of app state between tests. Budget a full day for initial setup.
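For orientation, an iOS configuration looks roughly like this — the app name, scheme, and device are placeholders, and the exact shape depends on your Detox version, so check the Detox docs for yours:

```js
// .detoxrc.js — rough shape for an iOS debug configuration (placeholders).
module.exports = {
  testRunner: {
    args: { config: 'e2e/jest.config.js' },
  },
  apps: {
    'ios.debug': {
      type: 'ios.app',
      binaryPath:
        'ios/build/Build/Products/Debug-iphonesimulator/YourApp.app',
      build:
        'xcodebuild -workspace ios/YourApp.xcworkspace -scheme YourApp ' +
        '-configuration Debug -sdk iphonesimulator -derivedDataPath ios/build',
    },
  },
  devices: {
    simulator: {
      type: 'ios.simulator',
      device: { type: 'iPhone 15' },
    },
  },
  configurations: {
    'ios.sim.debug': { device: 'simulator', app: 'ios.debug' },
  },
};
```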
Once running, Detox tests are valuable for critical user flows:
```js
describe('Session flow', () => {
  beforeAll(async () => {
    await device.launchApp({ newInstance: true });
  });

  it('should request and join a session', async () => {
    await element(by.id('language-selector')).tap();
    await element(by.text('Spanish')).tap();
    await element(by.id('request-interpreter')).tap();

    await waitFor(element(by.text('Connecting...')))
      .toBeVisible()
      .withTimeout(5000);

    await waitFor(element(by.text('Session Active')))
      .toBeVisible()
      .withTimeout(30000);

    await element(by.id('end-session')).tap();
    await element(by.text('Confirm')).tap();

    await expect(element(by.text('Session Ended'))).toBeVisible();
  });
});
```
We run Detox tests for:
- The session request and matching flow (the core user journey)
- Authentication (login, token refresh)
- Critical error states (no network, server error)
We don't run Detox tests for every screen or every interaction. The setup and execution time is too high. Unit and component tests handle the breadth. E2E tests handle the depth on critical paths.
Why snapshot tests aren't useful
Snapshot tests capture the rendered output of a component and fail when the output changes. In theory, they catch unintended changes. In practice, they catch every change, intended or not.
The typical workflow with snapshot tests:
- Make a change to a component
- Snapshot test fails
- Developer looks at the diff
- The diff is the expected result of their change
- Developer updates the snapshot
- No bugs were caught
Snapshot tests are noisy. They fail on every change. Developers learn to update snapshots reflexively without reviewing the diff. At that point, they provide no value.
Replace snapshot tests with specific assertions about what matters:
```js
// Instead of: expect(tree).toMatchSnapshot();
// Write specific assertions:
expect(getByText('Session Active')).toBeTruthy();
expect(getByTestId('timer')).toHaveTextContent('5:00');
expect(queryByText('Error')).toBeNull();
```
These tests are resilient to refactoring (changing the component's structure doesn't break them) and fail only when actual behaviour changes.
Coverage targets
We aim for:
- 80% line coverage on business logic (services, utilities, hooks)
- 60% line coverage on components
- No coverage target on navigation or third-party integrations
These numbers aren't goals in themselves. They're guardrails to ensure we don't skip testing core logic. A function with 100% coverage can still have bugs. A function with 0% coverage almost certainly does.
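Jest can enforce these guardrails mechanically via coverageThreshold, failing the run when coverage drops below target. A sketch matching the numbers above — the directory layout (services/utils/hooks under src) is an assumption about your project:

```js
// jest.config.js (addition) — fail the test run if coverage regresses.
// Path keys assume business logic lives in src/services, src/utils, src/hooks.
module.exports = {
  // ...existing config from above...
  coverageThreshold: {
    './src/services/': { lines: 80 },
    './src/utils/': { lines: 80 },
    './src/hooks/': { lines: 80 },
    './src/components/': { lines: 60 },
  },
};
```

Per-directory thresholds beat a single global number here: one blended target would let untested services hide behind well-tested components.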