I've worked on React codebases with 90% test coverage that were fragile and slow, and codebases with 40% coverage that were stable and fast. Coverage is a measurement, not a goal. The goal is confidence that the application works correctly after a change.
This is the testing strategy I use on every React project. It prioritises high-value tests and skips the ones that cost more to maintain than they save.
The testing pyramid for React
The traditional testing pyramid (many unit tests, fewer integration tests, fewest end-to-end tests) doesn't map well to React applications. In React, the most valuable tests are integration tests that render a component with its children and verify behaviour.
My pyramid looks like this:
- Integration tests (60% of tests) — render a component tree and test user-visible behaviour
- Unit tests (25% of tests) — test pure logic: utilities, reducers, formatters
- End-to-end tests (15% of tests) — test critical user flows through the full stack
What I test with integration tests
Integration tests use React Testing Library to render a component and interact with it the way a user would: finding elements by their role or text, clicking buttons, typing in inputs, and asserting on what appears on screen.
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'

test('submitting the contact form shows a success message', async () => {
const user = userEvent.setup()
render(<ContactForm />)
await user.type(screen.getByLabelText('Email'), 'test@example.com')
await user.type(screen.getByLabelText('Name'), 'Test User')
await user.type(screen.getByLabelText('Message'), "This is a test message that's long enough.")
await user.click(screen.getByRole('button', { name: 'Send' }))
expect(await screen.findByText('Message sent')).toBeInTheDocument()
})
This test verifies the entire form flow: rendering, user input, validation, submission, and the success state. It doesn't test implementation details like what state variables exist or which API endpoint is called. If I refactor the form internals without changing the user-facing behaviour, the test still passes.
What makes a good integration test
- Tests user-visible behaviour, not implementation details
- Uses accessible queries (getByRole, getByLabelText) rather than test IDs
- Sets up realistic data (mocked API responses that match the real shape)
- Covers the happy path and the most important error path
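To illustrate the last point, an error-path test for the same ContactForm might look like the sketch below. The validation message text is an assumption about the component, not something from a real codebase:

```typescript
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
// ContactForm is the component from the earlier example
import { ContactForm } from './ContactForm'

// Error-path sketch: assumes ContactForm validates the email field
// and renders the (hypothetical) message 'Enter a valid email address'
test('shows a validation error for an invalid email', async () => {
  const user = userEvent.setup()
  render(<ContactForm />)
  await user.type(screen.getByLabelText('Email'), 'not-an-email')
  await user.click(screen.getByRole('button', { name: 'Send' }))
  expect(await screen.findByText('Enter a valid email address')).toBeInTheDocument()
})
```

Same accessible queries, same user-level interactions; the only difference is the input data and the expected outcome.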
What I don't integration-test
- Pure presentational components that render props as text (a <Badge> component isn't worth testing)
- Third-party component wrappers (if I wrap a date picker, I test the wrapper's logic, not the date picker itself)
- Layout and styling (visual testing tools like Chromatic handle this better)
What I test with unit tests
Unit tests are for pure functions: things that take input and return output with no side effects.
// formatters.ts
export function formatCurrency(cents: number, currency = 'USD'): string {
return new Intl.NumberFormat('en-US', {
style: 'currency',
currency,
}).format(cents / 100)
}
// formatters.test.ts
test('formats cents as USD', () => {
expect(formatCurrency(1500)).toBe('$15.00')
expect(formatCurrency(99)).toBe('$0.99')
expect(formatCurrency(0)).toBe('$0.00')
})
test('formats with different currency', () => {
expect(formatCurrency(1500, 'EUR')).toBe('€15.00')
})
These tests are fast, stable, and high-value. A formatting bug can affect every screen in the application, so the tests are worth writing.
Other good candidates for unit tests:
- Validation functions
- Data transformation utilities
- Reducers
- Custom hook logic (tested with renderHook)
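A validation function is a typical example. The sketch below uses a hypothetical isValidEmail with a deliberately simple regex (not a full RFC 5322 validator); its unit tests look exactly like the formatter tests above:

```typescript
// validators.ts — hypothetical example; the regex is a deliberately
// simple well-formedness check, not a full RFC 5322 validator
export function isValidEmail(value: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim())
}

// isValidEmail('test@example.com') → true
// isValidEmail('not-an-email')     → false (no @)
// isValidEmail('missing@tld')      → false (no dot after @)
```

Like the formatters, this is pure input-to-output logic: no rendering, no mocking, and the tests run in milliseconds.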
What I test with end-to-end tests
E2E tests use Playwright or Cypress to test the application through a real browser. They're slow, brittle, and expensive to maintain. I use them sparingly, only for flows where a failure would have significant business impact.
My E2E test list for a typical application:
- User can sign up and log in
- User can complete the primary action (place an order, submit a form, create a post)
- User can navigate between the main sections without errors
- Payment flow works end-to-end (if applicable)
That's usually four to eight E2E tests. Each one covers a critical path that, if broken, would block users from accomplishing their primary goal.
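A critical-path E2E test stays short and asserts only on what the user sees. A Playwright sketch for the sign-up flow might look like this — the routes, field labels, and heading text are assumptions about a hypothetical application:

```typescript
import { test, expect } from '@playwright/test'

// Hypothetical sign-up flow; '/signup', the field labels, and the
// 'Dashboard' heading are assumptions, not from a real application.
// Relies on baseURL being set in playwright.config.ts.
test('user can sign up and reach the dashboard', async ({ page }) => {
  await page.goto('/signup')
  await page.getByLabel('Email').fill('new-user@example.com')
  await page.getByLabel('Password').fill('a-strong-password')
  await page.getByRole('button', { name: 'Create account' }).click()
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible()
})
```

One test, one critical path, assertions on visible outcomes only — anything more granular belongs in an integration test.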
Mocking strategy
I mock at the network layer, not at the module layer. This means I use MSW (Mock Service Worker) to intercept API requests and return controlled responses, rather than mocking individual functions with jest.mock.
import { render, screen } from '@testing-library/react'
import { rest } from 'msw'
import { setupServer } from 'msw/node'
const server = setupServer(
rest.get('/api/users', (req, res, ctx) => {
return res(ctx.json([
{ id: '1', name: 'Alice' },
{ id: '2', name: 'Bob' },
]))
})
)
beforeAll(() => server.listen())
afterEach(() => server.resetHandlers())
afterAll(() => server.close())
test('renders user list', async () => {
render(<UserList />)
expect(await screen.findByText('Alice')).toBeInTheDocument()
expect(screen.getByText('Bob')).toBeInTheDocument()
})
Network-level mocking has two advantages:
- The test exercises the real data-fetching code, not a mock of it
- If I switch from fetch to axios, or from REST to GraphQL, the test setup changes but the assertions don't
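A third advantage: error states can be simulated per test by overriding a handler with server.use, and the afterEach(() => server.resetHandlers()) call restores the defaults. A sketch, assuming the UserList from above renders a generic error message (the message text is an assumption):

```typescript
test('shows an error message when the request fails', async () => {
  // Override the default '/api/users' handler for this test only;
  // afterEach(() => server.resetHandlers()) restores the original
  server.use(
    rest.get('/api/users', (req, res, ctx) => {
      return res(ctx.status(500))
    })
  )
  render(<UserList />)
  // 'Something went wrong' is a hypothetical error message
  expect(await screen.findByText('Something went wrong')).toBeInTheDocument()
})
```

The component's real error-handling code runs against a real failed request, rather than a mocked rejection from a stubbed fetch function.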
When I skip tests
I don't test everything. Tests have a maintenance cost, and some tests cost more than they save.
I skip tests for:
- Components that are primarily CSS (test visually instead)
- One-off admin pages with low traffic
- Prototypes and experiments
- Thin wrappers around well-tested libraries
The judgement is: if this component breaks, how quickly will I notice, and what is the cost? If the answer is "immediately, from user reports" and the cost is low (a typo on an about page), a test isn't worth writing.
If the answer is "slowly, after data corruption has occurred" and the cost is high, the test is essential.