
A case for avoiding Jest Snapshots

November 2, 2020

If you have had any exposure to Jest Snapshots, then I'm going to assume you have probably experienced some confusing times.

No, no, this isn't a post about obscure Jest Snapshot quirks that you need to be aware of to level up with them. This post is about how and when you can make use of Jest Snapshots without creating a mess for you or your team to battle through.

Problems with Snapshots

Here is a list of problems that you may experience:

  • You do not always know if your component's presentation, or behaviour, is correct
  • Snapshots are updated because Jest tells you they need updating - but why?
  • Test intentions are difficult to understand
  • Often used as a direct replacement for verifying component behaviour
  • Snapshots can cover a lot of the component tree and become coupled to lower level components
  • Snapshots are often not reviewed

Okay, so what are a few aspects of Jest snapshots to look out for? Here we go.

1. Value your test descriptions

Jest Snapshots are one of several matchers that Jest provides out of the box. They allow you to verify that a rendered component tree matches a previously captured snapshot of that tree. The snapshots are auto-generated and stored in a separate location by default.

Here is a basic expectation that a rendered Card component should match the persisted snapshot.

it('renders correctly', () => {
  expect(render(<Card />)).toMatchSnapshot()
})
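
For context, the first run of that test writes a file like the one below into a __snapshots__ directory next to the test. The markup is purely illustrative, assuming a hypothetical Card that renders a title and a description, but the shape is what Jest generates:

// __snapshots__/Card.test.js.snap - generated and updated by Jest, not written by hand

exports[`renders correctly 1`] = `
<div class="card">
  <h2>My title</h2>
  <p>My description</p>
</div>
`;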

Putting aside the 'value' of this test, the main problem here is: what does it mean to be rendered correctly?

Without knowing what the Card component represents from its name, or looking inside the component, or worse, looking inside the snapshot file, you cannot tell what makes this render 'correct'. We all know that succinct test descriptions are useful, but one smell I have seen when using Jest snapshots is a lazy approach to test descriptions. Being 'correct' is enough for some people, probably because snapshots are often seen as a quick and dirty test.

This problem can be solved relatively easily by using a much better test description to define what a rendered Card is. Even if it feels redundant to do so, at least describing that a Card 'renders the title and description' displays your intent at the time of writing this snapshot test.

So, I'd argue this is a little bit better.

it('renders the title and description', () => {
  expect(render(<Card />)).toMatchSnapshot()
})

Suggestion: Describe the expectation of your rendered component in the test description.

2. Presentation components benefit from snapshots, behavioural components not so much

I would argue that snapshots are beneficial for presentation-heavy components over behaviour-heavy components. This is not an absolute rule, however, but a good indicator of where you may want to use snapshots. Without snapshots, you would look to select HTML elements and use other matchers to verify existence, contents, or styling within the render tree.

With snapshots:

it('displays a link to the How To documentation', () => {
  expect(render(<HowTo />)).toMatchSnapshot()
})

And without snapshots:

it('displays a link to the How To documentation', () => {
  const { getByText } = render(<HowTo />)

  const link = getByText('How To')
  expect(link).toBeTruthy()
  expect(link).toHaveAttribute('href', 'https://docs.mywebsite.com')
})

Behavioural components, on the other hand, may have event handlers as props or a useEffect() calling out over the network. As you can see in the snapshot example below, there are no expectations on the behaviour. In this case, you would hope that the onClick event is verified as part of another test.

it('displays a link to the How To documentation', () => {
  expect(render(<HowTo onClick={jest.fn()} />)).toMatchSnapshot()
})
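
To make that concrete, the behavioural side could be covered by something like the sketch below, assuming render and fireEvent come from @testing-library/react and that HowTo wires the onClick prop to its 'How To' link:

it('calls onClick when the How To link is clicked', () => {
  const onClick = jest.fn()
  const { getByText } = render(<HowTo onClick={onClick} />)

  // Simulate the user clicking the link and check that the handler fired
  fireEvent.click(getByText('How To'))

  expect(onClick).toHaveBeenCalledTimes(1)
})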

If a component has both presentation that you believe needs verifying and behaviour, think about verifying the two together.

Suggestion: Lean on snapshots for presentation, execute and verify behaviour where behaviour exists.

3. Snapshots requiring constant updates

All code has the potential to be brittle; it is not a problem unique to testing. In the context of tests, brittleness is when changes to the private concerns of the code under test cause test failures, and it is typically a sign of poor design. Brittle tests are ones that fail or require updates because of one of the following:

  1. They know too much about the internals of the code under test
  2. They cover a large portion of the code base
  3. Their setup and verification depend on non-deterministic data, e.g. time

Experiencing all of these together is painful. The key is to reduce the scope under test, externally manage non-deterministic data (such as time), and limit the knowledge the tests have about the code.
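
On the time point specifically, one option is to pin the clock so that any date-dependent render stays stable between runs. This is a sketch assuming Jest 26 or newer with the 'modern' fake timers and a hypothetical PostHeader component that renders today's date:

beforeAll(() => {
  // Freeze the clock so date-dependent output is deterministic between runs
  jest.useFakeTimers('modern')
  jest.setSystemTime(new Date('2020-11-02T00:00:00Z'))
})

afterAll(() => {
  jest.useRealTimers()
})

it('renders the published date', () => {
  expect(render(<PostHeader />)).toMatchSnapshot()
})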

When looking at snapshots, a brittle test typically touches all three points above to some degree. Number 1 is where I have seen most snapshot problems originate. Snapshots that match a large portion of the render tree can break with the slightest tweak to your components. Mock, or stub, child components where their render has little consequence to the scope of your test; if you are verifying component A, a component Z several levels down is not the responsibility of this level of test. Reducing scope reduces the area of change.
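
One way to do that with Jest is to replace the child module with a trivial stub so the snapshot only covers the component you actually care about. A sketch, assuming hypothetical ComponentA and ComponentZ modules:

// './ComponentZ' is a hypothetical path - point this at the child you want to stub out
jest.mock('./ComponentZ', () => () => 'ComponentZ stub')

it('renders the component A summary', () => {
  // The snapshot now only captures ComponentA's own output, not ComponentZ's render tree
  expect(render(<ComponentA />)).toMatchSnapshot()
})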

Suggestion: Reduce the amount of the component tree being rendered.

4. Snapshots do not contain every detail

Inexperienced users of snapshots may overlook this, but snapshot files contain the rendered HTML that is expected to match on every test run. Aspects of these snapshot files go overlooked, and developers assume that if the snapshot test is green then all is well.

Here is a snippet of a snapshot file:


<div class="css-pq7aad-Alert e5kqiab0">Hello there!</div>

This render has come about from using a styling framework (emotion) that generates unique CSS classes for the components. What if I were to tell you that the class here comes from a styled component that applies styles based on props?

const Alert = styled.div<{ hide: boolean }>`
  ${({ hide }) => hide && `display: none;`}
`

<Alert hide={true}>Hello there!</Alert>

The snapshot file for hide={true} and hide={false} is identical. The class generated remains the same.

I have seen the above tested with one test with hide as true and another with hide as false, both using snapshots to verify the output. Both tests pass, the render is the same as the snapshot file, but neither test actually verifies that the Alert is hidden. This shows the limit of what snapshots can verify for you; a green snapshot here is a red herring in automated testing.
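
If the hidden behaviour matters, it needs an explicit assertion rather than a snapshot. The sketch below assumes the toHaveStyleRule matcher from @emotion/jest (previously jest-emotion) and render from @testing-library/react:

import { matchers } from '@emotion/jest'

// Register emotion's matchers so toHaveStyleRule is available on expect
expect.extend(matchers)

it('is hidden when the hide prop is set', () => {
  const { container } = render(<Alert hide={true}>Hello there!</Alert>)

  // Assert on the style rule itself rather than the generated class name
  expect(container.firstChild).toHaveStyleRule('display', 'none')
})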

5. Snapshot files require review

As snapshots start to form an important type of matcher in your test suite, it is not only the test files that require review. The snapshots themselves, if stored externally using toMatchSnapshot(), are your tests. This means that during the review process you should look at what is being rendered and question why this matcher is being used.
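
If external .snap files keep slipping past review, toMatchInlineSnapshot() is worth considering: Jest writes the snapshot into the test file itself, so it sits in front of the reviewer by default. The snapshot body below is illustrative and would be filled in by Jest on the first run:

it('renders the greeting', () => {
  const { container } = render(<Alert>Hello there!</Alert>)

  expect(container.firstChild).toMatchInlineSnapshot(`
    <div
      class="css-pq7aad-Alert e5kqiab0"
    >
      Hello there!
    </div>
  `)
})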

Conclusion

I could have padded out this post with a long list of concerns about using Jest snapshots, but I have tried to limit the number of aspects. If I can leave you with one rule on using snapshots, it is to question their applicability at every point you see them or intend to use them. A snapshot can be very beneficial if used correctly, but it can become unmanageable if utilised incorrectly.
