
Vitest Setup and Unit Testing

Intermediate · 18 min read

Why Vitest Won

If you've used Jest before, you know the ritual: install Jest, install Babel, install ts-jest or @swc/jest, configure module transforms, fight with ESM imports, wait ten seconds for your test suite to start. It works. But it feels like you're fighting the tool more than testing your code.

Vitest eliminates all of that. It reuses your Vite config (or works with zero config), supports ESM natively, understands TypeScript out of the box, and runs tests so fast you forget you're running them. The API is nearly identical to Jest, so the migration cost is close to zero.

Mental Model

Think of Vitest as Jest with a turbocharger. Same steering wheel (API), same dashboard (matchers, mocks, spies), but the engine underneath is completely different. Vite's native ESM pipeline means Vitest doesn't need to transform your code before running it. No Babel, no ts-jest, no slow cold starts. You write the same describe/it/expect you already know, but everything runs instantly.

Why Not Jest?

Jest was built in a CommonJS world. When your project uses ESM (as most modern projects do), Jest has to transform every file before running it. That transform step adds cold-start latency, requires configuration, and breaks when you use packages that ship only ESM.

Vitest runs on top of Vite's dev pipeline, which understands ESM, TypeScript, and JSX natively. No transform step. No configuration dance. The result:

  • Instant startup — no cold start penalty
  • Native ESM — import/export just works, including packages that only ship ESM
  • TypeScript without config — uses esbuild under the hood, no ts-jest needed
  • Jest-compatible API — describe, it, expect, vi.fn(), and vi.mock() all work the same
  • Watch mode by default — reruns only affected tests using Vite's module graph
  • In-source testing — you can co-locate tests in the same file as source code (optional, but handy)
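To make in-source testing concrete, here is a minimal sketch. The add function and file name are hypothetical, and this pattern additionally requires includeSource (e.g. ['src/**/*.ts']) in the Vitest config:

```typescript
// add.ts — hypothetical module carrying its own in-source test block.
export function add(a: number, b: number): number {
  return a + b
}

// When Vitest executes this file, import.meta.vitest is defined; in a normal
// build it is undefined, so the block is skipped (and can be stripped entirely
// by defining import.meta.vitest as undefined at build time).
if ((import.meta as any).vitest) {
  const { it, expect } = (import.meta as any).vitest
  it('adds two numbers', () => {
    expect(add(2, 3)).toBe(5)
  })
}
```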

Setting Up Vitest with TypeScript

Install Vitest along with the V8 coverage provider:

pnpm add -D vitest @vitest/coverage-v8

Create a vitest.config.ts at your project root:

import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['src/**/*.test.ts', 'src/**/*.test.tsx'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html', 'lcov'],
      include: ['src/**/*.ts', 'src/**/*.tsx'],
      exclude: ['src/**/*.test.ts', 'src/**/*.d.ts'],
    },
  },
})

Add scripts to your package.json:

{
  "scripts": {
    "test": "vitest",
    "test:run": "vitest run",
    "test:coverage": "vitest run --coverage"
  }
}

With globals: true, you don't need to import describe, it, or expect in every file. Add the types to your tsconfig.json:

{
  "compilerOptions": {
    "types": ["vitest/globals"]
  }
}

That's it. No Babel. No transform plugins. No Jest config objects with 40 properties. You're ready to test.

Quiz
Why does Vitest start faster than Jest on modern ESM projects?

The Building Blocks: describe, it, expect

Every test file follows the same structure. describe groups related tests. it (or test — they're identical) defines a single test case. expect makes assertions.

import { clamp } from './math'

describe('clamp', () => {
  it('returns the value when within range', () => {
    expect(clamp(5, 0, 10)).toBe(5)
  })

  it('clamps to minimum when value is too low', () => {
    expect(clamp(-3, 0, 10)).toBe(0)
  })

  it('clamps to maximum when value is too high', () => {
    expect(clamp(15, 0, 10)).toBe(10)
  })

  it('handles edge case where min equals max', () => {
    expect(clamp(5, 7, 7)).toBe(7)
  })
})

A few things to notice:

  • Test names read like sentences — it('returns the value when within range') tells you exactly what's being tested without reading the code
  • One assertion per test (usually) — each test verifies one behavior. When it fails, you know exactly what broke
  • Edge cases get their own tests — the min-equals-max case is a separate test, not crammed into another
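The tests above import clamp from './math', which isn't shown. A minimal implementation consistent with those assertions (assumed, not the author's exact code) might be:

```typescript
// Clamp a value into the inclusive range [min, max].
function clamp(value: number, min: number, max: number): number {
  // Raise to at least min, then cap at max.
  return Math.min(Math.max(value, min), max)
}
```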

Nesting describe Blocks

For complex functions, nest describe blocks to organize by scenario:

describe('formatPrice', () => {
  describe('with USD currency', () => {
    it('formats whole numbers without decimals', () => {
      expect(formatPrice(100, 'USD')).toBe('$100')
    })

    it('formats cents with two decimal places', () => {
      expect(formatPrice(9.5, 'USD')).toBe('$9.50')
    })
  })

  describe('with EUR currency', () => {
    it('uses euro symbol', () => {
      expect(formatPrice(100, 'EUR')).toBe('€100')
    })
  })

  describe('with invalid input', () => {
    it('throws on negative values', () => {
      expect(() => formatPrice(-1, 'USD')).toThrow('Price cannot be negative')
    })
  })
})
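A formatPrice implementation satisfying the nested scenarios above could look like the following sketch (the symbol table and formatting rules are assumptions inferred from the tests):

```typescript
// Hypothetical formatPrice matching the tests above.
const CURRENCY_SYMBOLS: Record<string, string> = { USD: '$', EUR: '€' }

function formatPrice(amount: number, currency: string): string {
  if (amount < 0) throw new Error('Price cannot be negative')
  const symbol = CURRENCY_SYMBOLS[currency] ?? currency
  // Whole amounts drop the decimals; fractional amounts always show two.
  return symbol + (Number.isInteger(amount) ? String(amount) : amount.toFixed(2))
}
```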

Matchers: The Full Toolkit

Matchers are the methods you chain after expect(). Here are the ones you'll use 95% of the time.

Equality Matchers

// toBe — strict equality (===), use for primitives
expect(2 + 2).toBe(4)
expect('hello').toBe('hello')

// toEqual — deep equality, use for objects and arrays
expect({ a: 1, b: 2 }).toEqual({ a: 1, b: 2 })
expect([1, 2, 3]).toEqual([1, 2, 3])

// toStrictEqual — like toEqual but also checks for undefined properties
expect({ a: 1 }).toStrictEqual({ a: 1 })
expect({ a: 1, b: undefined }).not.toStrictEqual({ a: 1 })

Common Trap

toBe uses Object.is() under the hood. Two objects with identical contents are NOT toBe equal because they're different references. Use toEqual for objects. This is the #1 reason tests fail unexpectedly for beginners: expect({ a: 1 }).toBe({ a: 1 }) fails.
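You can see the semantics toBe relies on in plain TypeScript, no test runner required:

```typescript
// Object.is compares references for objects, values for primitives.
const a = { n: 1 }
const b = { n: 1 }

console.log(Object.is(a, b))     // false — same contents, different references
console.log(Object.is(a, a))     // true — same reference
console.log(Object.is(2 + 2, 4)) // true — primitives compare by value
```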

Truthiness Matchers

expect(null).toBeNull()
expect(undefined).toBeUndefined()
expect('hello').toBeDefined()
expect(1).toBeTruthy()
expect(0).toBeFalsy()

Number Matchers

expect(10).toBeGreaterThan(5)
expect(10).toBeGreaterThanOrEqual(10)
expect(5).toBeLessThan(10)
expect(0.1 + 0.2).toBeCloseTo(0.3, 5) // floating point!
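The toBeCloseTo line exists because IEEE 754 floating point makes exact equality unreliable. A quick demonstration of what it guards against (the threshold shown is the approximate behavior for a precision of 5):

```typescript
console.log(0.1 + 0.2)         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3) // false — 0.1 has no exact binary representation
// toBeCloseTo(0.3, 5) passes when the difference is below roughly 0.5 * 10^-5
console.log(Math.abs(0.1 + 0.2 - 0.3) < 0.5e-5) // true
```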

String Matchers

expect('team').toContain('ea')
expect('Hello World').toMatch(/world/i)

Array and Object Matchers

expect([1, 2, 3]).toContain(2)
expect([{ id: 1 }, { id: 2 }]).toContainEqual({ id: 1 })
expect({ name: 'Ada', age: 36 }).toHaveProperty('name')
expect({ name: 'Ada', age: 36 }).toHaveProperty('name', 'Ada')
expect([1, 2, 3]).toHaveLength(3)

Error Matchers

// toThrow — verifies a function throws
expect(() => divide(1, 0)).toThrow()
expect(() => divide(1, 0)).toThrow('Cannot divide by zero')
expect(() => divide(1, 0)).toThrow(/divide/)
expect(() => divide(1, 0)).toThrow(DivisionError)
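The divide function and DivisionError class are not shown in this article; a sketch consistent with all four assertions would be:

```typescript
// Hypothetical custom error class — toThrow(DivisionError) checks instanceof.
class DivisionError extends Error {}

function divide(a: number, b: number): number {
  if (b === 0) throw new DivisionError('Cannot divide by zero')
  return a / b
}
```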

Function / Spy Matchers

const callback = vi.fn()

callback('hello')
callback('world')

expect(callback).toHaveBeenCalled()
expect(callback).toHaveBeenCalledTimes(2)
expect(callback).toHaveBeenCalledWith('hello')
expect(callback).toHaveBeenLastCalledWith('world')
expect(callback).toHaveBeenNthCalledWith(1, 'hello')

The .not Modifier

Any matcher can be negated:

expect(5).not.toBe(3)
expect([1, 2, 3]).not.toContain(4)
expect(() => safeOp()).not.toThrow()

Quiz
What will this test assertion do?

Test Lifecycle Hooks

When multiple tests need the same setup or cleanup, lifecycle hooks keep your tests DRY.

describe('UserService', () => {
  let db: TestDatabase

  beforeAll(async () => {
    db = await TestDatabase.connect()
  })

  afterAll(async () => {
    await db.disconnect()
  })

  beforeEach(async () => {
    await db.seed(testUsers)
  })

  afterEach(async () => {
    await db.clear()
  })

  it('finds user by email', async () => {
    const user = await db.findByEmail('ada@example.com')
    expect(user).toBeDefined()
    expect(user!.name).toBe('Ada Lovelace')
  })

  it('returns null for unknown email', async () => {
    const user = await db.findByEmail('nobody@example.com')
    expect(user).toBeNull()
  })
})

When to Use Each Hook

  • beforeAll — runs once before all tests in the block. Use for expensive setup: DB connections, server startup
  • afterAll — runs once after all tests in the block. Use for cleanup: close connections, remove temp files
  • beforeEach — runs before every single test. Use to reset state: seed data, clear mocks, reset the DOM
  • afterEach — runs after every single test. Use for per-test cleanup: restore mocks, clear timers

Scope Matters

Hooks are scoped to their describe block. A beforeEach inside a nested describe only runs for tests in that nested block:

describe('outer', () => {
  beforeEach(() => console.log('outer setup'))

  describe('inner', () => {
    beforeEach(() => console.log('inner setup'))

    it('runs both hooks', () => {
      // Logs: "outer setup", then "inner setup"
    })
  })

  it('runs only outer hook', () => {
    // Logs: "outer setup"
  })
})

Why beforeEach is almost always better than beforeAll for state

Using beforeAll to set up shared mutable state is tempting because it's faster — you only run setup once. But it creates test coupling. If test A modifies the shared state, test B sees a different state than expected. Tests start passing or failing depending on execution order, which makes debugging miserable. Always use beforeEach to reset mutable state so each test runs in isolation. Reserve beforeAll for truly immutable, expensive setup like opening a database connection.

Quiz
If you have a beforeAll and a beforeEach in the same describe block, what is the execution order for the first test?

Parameterized Tests with test.each

When you have the same test logic for multiple inputs, test.each eliminates the repetition:

describe('isPalindrome', () => {
  test.each([
    { input: 'racecar', expected: true },
    { input: 'hello', expected: false },
    { input: 'madam', expected: true },
    { input: '', expected: true },
    { input: 'a', expected: true },
    { input: 'ab', expected: false },
  ])('isPalindrome("$input") returns $expected', ({ input, expected }) => {
    expect(isPalindrome(input)).toBe(expected)
  })
})

This generates six individual test cases with descriptive names like isPalindrome("racecar") returns true. When one fails, you know exactly which input caused it.
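The table above assumes an isPalindrome implementation; one possible version (treating the empty string and single characters as palindromes, matching the expected values) is:

```typescript
// A string is a palindrome when it reads the same reversed.
function isPalindrome(s: string): boolean {
  return s === [...s].reverse().join('')
}
```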

Table Syntax

For simpler cases, use array-of-arrays:

test.each([
  [0, 'zero'],
  [1, 'one'],
  [2, 'two'],
])('toWord(%i) returns "%s"', (input, expected) => {
  expect(toWord(input)).toBe(expected)
})

When to Use test.each

Use it when:

  • You have 3+ tests with identical assertion logic but different data
  • You're testing boundary values (0, -1, MAX_SAFE_INTEGER, empty string, null)
  • You're testing multiple format conversions or transformations

Don't use it when:

  • Each test case has different assertion logic
  • The test names would be unclear with interpolation
  • You only have two cases — just write two regular tests

Snapshot Testing

Snapshot testing captures the output of a function and compares it against a stored snapshot. If the output changes, the test fails until you explicitly approve the change.

describe('serializeConfig', () => {
  it('matches the expected output structure', () => {
    const config = buildDefaultConfig()
    expect(config).toMatchSnapshot()
  })
})

The first time this runs, Vitest creates a .snap file with the serialized output. On subsequent runs, it compares against the saved snapshot.

Inline Snapshots

Instead of a separate file, inline snapshots store the expected value right in the test file:

it('formats the greeting', () => {
  expect(greet('Ada')).toMatchInlineSnapshot(`"Hello, Ada!"`)
})

Vitest automatically fills in (and updates) the inline snapshot value.

When Snapshots Help vs. Hurt

Good uses:

  • Serialized config objects — catching unexpected structural changes
  • Error message formats — ensuring user-facing messages don't change accidentally
  • Complex transformation output — HTML serialization, AST transforms

Bad uses:

  • Large component trees — snapshots become walls of text nobody reviews
  • Frequently changing output — you end up blindly updating snapshots with vitest -u
  • Business logic — use explicit assertions instead; snapshots hide intent

Warning

The biggest risk with snapshot tests is approval fatigue. When a snapshot breaks, the temptation is to run vitest -u and update all snapshots without reviewing the diff. This defeats the entire purpose. If you use snapshots, keep them small and review every update carefully.

Async Testing

Vitest handles async tests naturally: return a promise or use async/await. (Unlike Jest, Vitest does not support the done-callback style.)

async/await (preferred)

it('fetches user data', async () => {
  const user = await fetchUser(1)
  expect(user.name).toBe('Ada Lovelace')
})

Testing Promise Rejections

it('rejects with an error for invalid ID', async () => {
  await expect(fetchUser(-1)).rejects.toThrow('Invalid user ID')
})

it('resolves with the correct value', async () => {
  await expect(fetchUser(1)).resolves.toEqual({
    id: 1,
    name: 'Ada Lovelace',
  })
})
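For reference, a hypothetical fetchUser that would satisfy both the rejects and resolves assertions above:

```typescript
type User = { id: number; name: string }

// Stubbed fetch — rejects for non-positive ids, resolves otherwise.
async function fetchUser(id: number): Promise<User> {
  if (id <= 0) throw new Error('Invalid user ID')
  return { id, name: 'Ada Lovelace' }
}
```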

Testing Timers

Use vi.useFakeTimers() to control time:

describe('debounce', () => {
  beforeEach(() => {
    vi.useFakeTimers()
  })

  afterEach(() => {
    vi.useRealTimers()
  })

  it('calls the function after the delay', () => {
    const fn = vi.fn()
    const debounced = debounce(fn, 300)

    debounced()
    expect(fn).not.toHaveBeenCalled()

    vi.advanceTimersByTime(300)
    expect(fn).toHaveBeenCalledOnce()
  })

  it('resets the timer on subsequent calls', () => {
    const fn = vi.fn()
    const debounced = debounce(fn, 300)

    debounced()
    vi.advanceTimersByTime(200)
    debounced()
    vi.advanceTimersByTime(200)

    expect(fn).not.toHaveBeenCalled()

    vi.advanceTimersByTime(100)
    expect(fn).toHaveBeenCalledOnce()
  })
})
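A minimal debounce consistent with those fake-timer tests (an assumed implementation, not necessarily the one under test):

```typescript
// Each call restarts the countdown; fn fires only after delayMs of silence.
function debounce<A extends unknown[]>(fn: (...args: A) => void, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer) // cancel the pending call
    timer = setTimeout(() => fn(...args), delayMs)
  }
}
```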

Quiz
What happens if you forget to call vi.useRealTimers() in afterEach when using fake timers?

Code Coverage with the v8 Provider

Coverage tells you which lines of your code are exercised by tests. Vitest supports two providers: v8 (faster, uses V8's built-in coverage) and istanbul (slower, more mature). For most projects, v8 is the right choice.

We already configured it in vitest.config.ts. Run coverage with:

pnpm test:coverage

This generates a coverage report showing:

  • Statements — how many statements were executed
  • Branches — how many if/else/ternary/switch branches were taken
  • Functions — how many functions were called
  • Lines — how many lines were executed

Setting Coverage Thresholds

Add thresholds to your config to fail CI when coverage drops:

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        statements: 80,
        branches: 80,
        functions: 80,
        lines: 80,
      },
    },
  },
})

Coverage Is Not Quality

Here's the thing most teams get wrong about coverage. 100% coverage does not mean your code is well-tested. You can hit every line without testing meaningful behavior:

// This test gives 100% line coverage but tests nothing useful
it('calls the function', () => {
  const result = calculateTax(100, 'US')
  expect(result).toBeDefined() // Useless — just checks it didn't throw
})

A meaningful test checks the correct output for a given input, not just that code ran without crashing. Aim for 80% coverage as a floor, not 100% as a goal. The last 20% is usually error handling, edge cases in third-party integrations, and dead code — areas where the effort-to-value ratio drops sharply.

Quiz
A team has 100% line coverage on their calculateDiscount function. A bug ships to production where a 50% discount is applied twice, charging the customer nothing. What does this tell you about their tests?

Putting It All Together: A Real-World Example

Here's a complete test file for a shopping cart utility, combining everything we've covered:

import { createCart, CartItem } from './cart'

describe('createCart', () => {
  let cart: ReturnType<typeof createCart>

  beforeEach(() => {
    cart = createCart()
  })

  describe('addItem', () => {
    it('adds an item to the cart', () => {
      cart.addItem({ id: '1', name: 'Keyboard', price: 79.99, quantity: 1 })
      expect(cart.getItems()).toHaveLength(1)
      expect(cart.getItems()[0]).toEqual({
        id: '1',
        name: 'Keyboard',
        price: 79.99,
        quantity: 1,
      })
    })

    it('increments quantity for duplicate items', () => {
      cart.addItem({ id: '1', name: 'Keyboard', price: 79.99, quantity: 1 })
      cart.addItem({ id: '1', name: 'Keyboard', price: 79.99, quantity: 2 })
      expect(cart.getItems()).toHaveLength(1)
      expect(cart.getItems()[0].quantity).toBe(3)
    })

    it('throws on negative quantity', () => {
      expect(() =>
        cart.addItem({ id: '1', name: 'Keyboard', price: 79.99, quantity: -1 })
      ).toThrow('Quantity must be positive')
    })
  })

  describe('getTotal', () => {
    test.each([
      {
        items: [{ id: '1', name: 'A', price: 10, quantity: 2 }],
        expected: 20,
      },
      {
        items: [
          { id: '1', name: 'A', price: 10, quantity: 1 },
          { id: '2', name: 'B', price: 25, quantity: 2 },
        ],
        expected: 60,
      },
      { items: [], expected: 0 },
    ])('calculates $expected for given items', ({ items, expected }) => {
      items.forEach((item) => cart.addItem(item as CartItem))
      expect(cart.getTotal()).toBe(expected)
    })
  })

  describe('removeItem', () => {
    it('removes an existing item', () => {
      cart.addItem({ id: '1', name: 'Keyboard', price: 79.99, quantity: 1 })
      cart.removeItem('1')
      expect(cart.getItems()).toHaveLength(0)
    })

    it('does nothing for non-existent item', () => {
      cart.addItem({ id: '1', name: 'Keyboard', price: 79.99, quantity: 1 })
      cart.removeItem('999')
      expect(cart.getItems()).toHaveLength(1)
    })
  })
})

Notice the patterns: beforeEach resets the cart for isolation, test.each handles multiple total calculations, descriptive describe nesting groups by feature, and each test verifies one specific behavior.
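For completeness, here is a minimal cart module a reader could test against. It is a sketch inferred from the assertions above; the real ./cart may differ:

```typescript
export type CartItem = { id: string; name: string; price: number; quantity: number }

export function createCart() {
  const items: CartItem[] = []
  return {
    addItem(item: CartItem) {
      if (item.quantity <= 0) throw new Error('Quantity must be positive')
      const existing = items.find((i) => i.id === item.id)
      if (existing) existing.quantity += item.quantity // merge duplicate ids
      else items.push({ ...item })                     // copy to avoid aliasing
    },
    removeItem(id: string) {
      const index = items.findIndex((i) => i.id === id)
      if (index !== -1) items.splice(index, 1)         // no-op for unknown ids
    },
    getItems: () => items,
    getTotal: () => items.reduce((sum, i) => sum + i.price * i.quantity, 0),
  }
}
```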

Common Mistakes

  • Using toBe to compare objects or arrays — toBe uses Object.is(), which checks referential equality; two objects with identical contents are different references, so toBe fails. Instead, use toEqual for deep structural comparison and toBe for primitives.
  • Sharing mutable state across tests with beforeAll — tests that share mutable state become order-dependent; test A can modify the state and cause test B to fail, making debugging extremely difficult. Instead, use beforeEach to reset state for each test.
  • Blindly updating snapshots with vitest -u when they break — snapshot tests exist to catch unexpected changes; auto-accepting every update without review defeats the purpose and lets bugs through. Instead, review every snapshot diff before updating.
  • Writing expect(result).toBeDefined() as the only assertion — toBeDefined only verifies the call produced a defined value; a function that returns the wrong value still passes. Instead, assert the specific expected output: expect(result).toEqual(expectedValue).
  • Forgetting to restore fake timers with vi.useRealTimers() — fake timers leak across tests, breaking timer behavior in subsequent tests. Instead, always pair vi.useFakeTimers() in beforeEach with vi.useRealTimers() in afterEach.
Key Rules
  1. Use toBe for primitives and toEqual for objects — toBe checks reference identity, not structural equality
  2. Reset mutable state in beforeEach, not beforeAll — test isolation prevents order-dependent failures
  3. test.each eliminates repetition when 3+ tests share identical assertion logic with different data
  4. Coverage measures execution, not correctness — 100% coverage with weak assertions catches nothing
  5. Always pair vi.useFakeTimers() with vi.useRealTimers() in cleanup to prevent timer state leaking between tests
  6. Write test names that read as sentences describing the expected behavior, not the implementation