Day 18: How To Make Claude Code Write the Tests Too
Learn to write comprehensive tests with Claude Code. Happy path + edge cases + error states. Two commands: /test and /test-component. Zero excuses left.
Hey, it's G.
Day 18 of the Claude Code series.
I'll be honest — I under-tested both Resiboko and 1MinThumb.
Not because I don't know how to write tests.
Because writing tests is the part of development that feels the least like progress.
Claude Code fixed this for me.
Not by making tests more fun — by making them so fast to generate that I have no excuse left not to write them.
The Problem (Testing Feels Like Time That Could Be Spent Shipping)
Here's why I didn't write tests:
It takes time. Time that could be spent building features users actually see.
It's tedious. Writing test after test for edge cases and error states.
It feels like no progress. Green tests don't ship. Features ship.
So I shipped without tests.
The result:
Production bugs I could have caught. Users hitting errors I never tested for.
Debugging the same thing twice. Same bug sneaking back in different forms.
Shipping with dread instead of confidence. "Did I break anything? I guess we'll find out."
The Concept (Claude Code Removes the Time Excuse)
Most developers who don't write tests have the same reason:
It takes time that feels like it could be spent shipping.
Claude Code removes that excuse.
It can write a solid test suite for any function or component in under a minute.
But there's a right way and a wrong way.
❌ Wrong Way
> Write tests for this file
You get:
- Generic tests
- Only happy path coverage
- No edge cases
- No error states
Green tests that wouldn't catch a real bug.
False confidence.
✅ Right Way
Be specific about what you want tested.
Three things make a test suite actually useful:
1. Happy Path
Does it work when everything goes right?
2. Edge Cases
What happens at the boundaries?
- Empty arrays
- Null values
- Zero
- Maximum limits
- Negative numbers
3. Error States
What happens when things go wrong?
- Failed fetches
- Invalid inputs
- Missing data
- Network errors
Tell Claude Code to cover all three and you get tests worth having.
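As a sketch of what "all three" looks like in practice, here's a minimal suite for a hypothetical `percentOf()` helper. The function and the tiny `test`/`expectEqual` helpers are stand-ins (not Vitest, not real project code) so the snippet runs on its own:

```typescript
// Hypothetical utility under test -- a stand-in, not from a real codebase.
function percentOf(part: number, total: number): number {
  if (!Number.isFinite(part) || !Number.isFinite(total)) return NaN
  if (total === 0) return 0 // avoid division by zero
  return (part / total) * 100
}

// Minimal stand-ins for a test runner so this sketch runs without Vitest.
function test(name: string, fn: () => void): void { fn(); console.log("✓ " + name) }
function expectEqual(actual: unknown, expected: unknown): void {
  if (!Object.is(actual, expected)) throw new Error(`expected ${expected}, got ${actual}`)
}

// 1. Happy path -- normal expected usage
test("returns 25 for 50 of 200", () => expectEqual(percentOf(50, 200), 25))

// 2. Edge cases -- boundaries
test("returns 0 when total is zero", () => expectEqual(percentOf(10, 0), 0))
test("handles negative numbers", () => expectEqual(percentOf(-50, 200), -25))

// 3. Error states -- bad input
test("returns NaN for non-finite input", () => expectEqual(percentOf(NaN, 10), NaN))
```

Four tests, one per category plus a boundary. That shape, not the specific function, is what you're asking Claude to reproduce.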
Two Types of Testing Contexts
Unit Tests
Testing individual functions and utilities in isolation.
Characteristics:
- Fast to write
- Fast to run
- Easy to debug
Example: Testing a formatCurrency() function
Component Tests
Testing React components.
What to test:
- What renders
- What happens on interaction
- What shows in different states
Example: Testing a BillingSummary component
Each needs slightly different instructions.
The /test Slash Command
Build this command for unit tests:
touch ~/.claude/commands/test.md
Inside the file:
Write tests for the current file using Vitest.
Cover all of the following:
- Happy path — normal expected usage
- Edge cases — empty inputs, null values, boundary conditions
- Error states — what happens when things fail
Rules:
- Place the test file next to the source file
- Use descriptive test names that explain what is being tested
- Do not modify the source file
- Each test should test one thing only
- Mock external dependencies — do not make real API calls
Use it:
claude
> view utils/formatCurrency.ts
> /test
Claude responds:
Creating tests for utils/formatCurrency.ts...
Test file: utils/__tests__/formatCurrency.test.ts
Tests written:
Happy path:
✓ formats USD currency correctly
✓ formats PHP currency correctly
✓ formats large numbers with commas
Edge cases:
✓ handles zero correctly
✓ handles negative numbers
✓ handles very small decimal amounts
✓ handles maximum safe integer
Error states:
✓ handles null gracefully
✓ handles undefined gracefully
✓ handles non-numeric strings
✓ handles NaN values
12 tests written. Running them now...
All passing. Commit:
git add .
git commit -m "test: add comprehensive tests for formatCurrency"
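For reference, a generated file for a signature like `formatCurrency(amount, currency)` might look roughly like this. The signature and the fail-soft null handling are assumptions, and Vitest's `test`/`expect` are swapped for minimal local stand-ins so the sketch runs standalone:

```typescript
// Assumed signature -- the real formatCurrency in a project may differ.
function formatCurrency(amount: number | null | undefined, currency: string = "USD"): string {
  // Error states: fail soft instead of throwing in the UI
  if (amount == null || Number.isNaN(amount)) return "N/A"
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(amount)
}

// Local stand-ins for Vitest's test/expect.
function test(name: string, fn: () => void): void { fn(); console.log("✓ " + name) }
function expectEqual(actual: unknown, expected: unknown): void {
  if (!Object.is(actual, expected)) throw new Error(`expected ${expected}, got ${actual}`)
}

// Happy path
test("formats USD currency correctly", () => expectEqual(formatCurrency(1234.5), "$1,234.50"))

// Edge cases
test("handles zero correctly", () => expectEqual(formatCurrency(0), "$0.00"))
test("handles negative numbers", () => expectEqual(formatCurrency(-5), "-$5.00"))

// Error states
test("handles null gracefully", () => expectEqual(formatCurrency(null), "N/A"))
test("handles NaN values", () => expectEqual(formatCurrency(NaN), "N/A"))
```

Note how the error-state tests pin down a deliberate decision (return a placeholder rather than throw) that would otherwise live only in someone's head.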
The /test-component Slash Command
Build this command for component tests:
touch ~/.claude/commands/test-component.md
Inside the file:
Write component tests for the current file using Vitest
and React Testing Library.
Cover all of the following:
- Renders correctly with default props
- Renders correctly in each meaningful state
(loading, error, empty, populated)
- User interactions work correctly (clicks, inputs, submits)
- Correct content shows for each state
Rules:
- Place test file next to the component
- Use getByRole and getByText over getByTestId where possible
- Mock all API calls and external dependencies
- Do not test implementation details — test behavior
- Do not modify the source file
Use it:
claude
> view components/billing/BillingSummary.tsx
> /test-component
Claude responds:
Creating component tests for BillingSummary.tsx...
Test file: components/billing/__tests__/BillingSummary.test.tsx
Tests written:
Rendering:
✓ renders plan name correctly
✓ renders usage count correctly
✓ renders upgrade button
States:
✓ renders loading skeleton while fetching
✓ renders error message when fetch fails
✓ renders empty state when no subscription exists
✓ renders populated state with subscription data
Interactions:
✓ upgrade button is disabled on Pro plan
✓ upgrade button is enabled on Free plan
✓ clicking upgrade button calls onUpgrade callback
10 tests written. Running them now...
Real Testing Session (Complete Example)
Let me walk you through testing a utility function in Resiboko.
The Function
File: utils/validateSubscription.ts
export function validateSubscription(
  plan: 'Free' | 'Pro' | null,
  seats: number
): boolean {
  if (!plan || seats <= 0) return false

  const limits = {
    Free: 100,
    Pro: Infinity
  }

  return seats <= limits[plan]
}
Step 1: Generate Tests
claude
> view utils/validateSubscription.ts
> /test
Step 2: Claude Analyzes and Writes Tests
Claude responds:
Reading utils/validateSubscription.ts...
Function identified:
- validateSubscription(plan, seats)
- Returns true if subscription is valid for seat count
- Handles null inputs
- Different limits per plan type
Creating comprehensive test suite...
Test file: utils/__tests__/validateSubscription.test.ts
Tests written:
Happy path:
✓ returns true for valid Free plan with 50 seats
✓ returns true for valid Pro plan with 500 seats
✓ returns true for Free plan at exact limit (100)
Edge cases:
✓ returns false when seats exceed Free plan limit
✓ returns false for null plan
✓ returns false for zero seats
✓ returns false for negative seats
✓ handles Pro plan with very large seat count
✓ handles boundary at 100 for Free plan
Error states:
✓ handles unknown plan type gracefully
✓ handles undefined plan gracefully
✓ handles undefined seats gracefully
12 tests written.
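A few of those generated cases, written out against the function above (Vitest's `test`/`expect` swapped for minimal local stand-ins so the sketch is self-contained):

```typescript
// The function under test, copied from utils/validateSubscription.ts above.
function validateSubscription(plan: "Free" | "Pro" | null, seats: number): boolean {
  if (!plan || seats <= 0) return false
  const limits = { Free: 100, Pro: Infinity }
  return seats <= limits[plan]
}

// Local stand-ins for Vitest's test/expect.
function test(name: string, fn: () => void): void { fn(); console.log("✓ " + name) }
function expectEqual(actual: unknown, expected: unknown): void {
  if (!Object.is(actual, expected)) throw new Error(`expected ${expected}, got ${actual}`)
}

// Happy path
test("returns true for valid Free plan with 50 seats", () => expectEqual(validateSubscription("Free", 50), true))
test("returns true for Free plan at exact limit (100)", () => expectEqual(validateSubscription("Free", 100), true))

// Edge cases
test("returns false when seats exceed Free plan limit", () => expectEqual(validateSubscription("Free", 101), false))
test("returns false for null plan", () => expectEqual(validateSubscription(null, 10), false))
test("returns false for negative seats", () => expectEqual(validateSubscription("Free", -5), false))

// Error states: TypeScript rejects bad plans at compile time, but untyped callers may not
test("handles unknown plan type gracefully", () => expectEqual(validateSubscription("Team" as any, 10), false))
```

The boundary test at exactly 100 is the one humans most often skip, and the one off-by-one bugs hide behind.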
Step 3: Run the Tests
> Run the tests you just wrote and report results.
Claude responds:
Running tests...
✓ utils/__tests__/validateSubscription.test.ts (12)
✓ returns true for valid Free plan with 50 seats
✓ returns true for valid Pro plan with 500 seats
✓ returns true for Free plan at exact limit (100)
✓ returns false when seats exceed Free plan limit
✓ returns false for null plan
✓ returns false for zero seats
✓ returns false for negative seats
✓ handles Pro plan with very large seat count
✓ handles boundary at 100 for Free plan
✓ handles unknown plan type gracefully
✓ handles undefined plan gracefully
✓ handles undefined seats gracefully
Test run complete. All 12 tests passing.
Step 4: Commit
git add .
git commit -m "test: add comprehensive tests for validateSubscription"
Real coverage. Real confidence. Two minutes of work.
Testing After Bug Fixes
Here's the habit that changed everything for me:
After fixing any bug, write a test that would have caught it.
The Prompt
claude
> The bug we just fixed was: user.email was null on
> initial render causing TypeError.
>
> Write a test that would have caught this bug before
> it reached production. Make sure the test fails on
> the old code and passes on the fixed version.
Claude responds:
Creating regression test for null user bug...
Test file: app/dashboard/__tests__/page.test.tsx
Test written:
✓ shows loading state when user is null on initial render
This test would have failed on the old code (tried to
access user.email immediately) and passes on the new
code (checks if user exists first).
Preventing this bug from coming back.
This test has saved me twice already.
Same bug tried to sneak back in different forms.
The test caught it both times.
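The shape of that regression test, reduced to a plain function for illustration (the `dashboardHeading` helper and `User` type are hypothetical stand-ins for the real dashboard page, which is a React component):

```typescript
// Hypothetical stand-in for the dashboard's render logic: on initial render
// the user object may still be null while auth is loading.
type User = { email: string } | null

function dashboardHeading(user: User): string {
  // The fix: check for the user before touching user.email
  if (!user) return "Loading..."
  return "Signed in as " + user.email
}

function test(name: string, fn: () => void): void { fn(); console.log("✓ " + name) }

// Regression test: fails on the old code (which read user.email
// unconditionally and threw a TypeError), passes on the fixed version.
test("shows loading state when user is null on initial render", () => {
  if (dashboardHeading(null) !== "Loading...") throw new Error("regression: null user not handled")
})
test("shows email once user resolves", () => {
  if (!dashboardHeading({ email: "a@b.com" }).includes("a@b.com")) throw new Error("email not shown")
})
```

The key property is in the prompt: the test must fail on the old code. A regression test that passes on both versions proves nothing.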
Testing API Routes
API routes need different coverage:
claude
> view app/api/subscriptions/route.ts
> Write tests for this API route. Cover:
> - Successful response with valid data
> - Response when user is not authenticated
> - Response when database call fails
> - Response with invalid request body
> Use Vitest. Mock the Supabase client.
Claude responds:
Creating API route tests...
Test file: app/api/subscriptions/__tests__/route.test.ts
Tests written:
✓ returns 200 with subscription data for authenticated user
✓ returns 401 when user is not authenticated
✓ returns 500 when database call fails
✓ returns 400 with invalid request body
✓ returns correct response format
✓ calls Supabase client with correct parameters
Mocking:
- Supabase client mocked using vi.mock()
- Auth state mocked for different scenarios
- Database responses mocked for error testing
6 tests written. Running them now...
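One way that mocking plays out, sketched with hand-rolled mocks instead of `vi.mock()` so it runs standalone. The handler shape and the Supabase-like client interfaces here are assumptions for illustration, not the project's real route:

```typescript
// Assumed shapes -- a simplified Supabase-like client and a route handler
// that takes its dependencies as arguments so they are trivial to mock.
type AuthClient = { getUser: () => Promise<{ id: string } | null> }
type Db = { fetchSubscription: (userId: string) => Promise<{ plan: string }> }

async function getSubscription(auth: AuthClient, db: Db): Promise<{ status: number; body: unknown }> {
  const user = await auth.getUser()
  if (!user) return { status: 401, body: { error: "unauthenticated" } }
  try {
    const sub = await db.fetchSubscription(user.id)
    return { status: 200, body: sub }
  } catch {
    return { status: 500, body: { error: "database error" } }
  }
}

// Mocks standing in for vi.mock(): one happy, one unauthenticated, one failing.
const okAuth: AuthClient = { getUser: async () => ({ id: "u1" }) }
const noAuth: AuthClient = { getUser: async () => null }
const okDb: Db = { fetchSubscription: async () => ({ plan: "Pro" }) }
const failingDb: Db = { fetchSubscription: async () => { throw new Error("db down") } }

async function run(): Promise<void> {
  if ((await getSubscription(okAuth, okDb)).status !== 200) throw new Error("200 case failed")
  if ((await getSubscription(noAuth, okDb)).status !== 401) throw new Error("401 case failed")
  if ((await getSubscription(okAuth, failingDb)).status !== 500) throw new Error("500 case failed")
  console.log("✓ all route cases pass")
}
run()
```

Whether Claude mocks with `vi.mock()` or you restructure the route to take its dependencies explicitly, the coverage is the same: every status code the route can return gets exercised.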
Getting Coverage on Untested Files
When you have an old file with zero tests:
claude
> Read utils/auth.ts and tell me:
> 1. What functions exist and what they do
> 2. What edge cases are worth testing
> 3. What could realistically go wrong
>
> Then write a complete test suite covering all of it.
Claude responds:
Analysis of utils/auth.ts:
Functions found:
1. getUser() - fetches current user from Supabase
2. requireAuth() - throws if no user, returns user if exists
3. isAdmin() - checks if user has admin role
4. logout() - signs user out
Edge cases worth testing:
- getUser when user is not logged in
- requireAuth when user is null
- isAdmin when user has no role field
- logout when already logged out
What could go wrong:
- Supabase client returns null unexpectedly
- Network errors during auth check
- Invalid user data structure
- Race conditions in auth state
Writing comprehensive test suite...
Test file: utils/__tests__/auth.test.ts
Tests written: 16 tests covering all functions,
edge cases, and error states.
Running them now...
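A couple of those edge cases ("requireAuth when user is null", "isAdmin when user has no role field"), written out as a sketch. The signatures are assumptions reconstructed from Claude's summary above, not the real `utils/auth.ts`:

```typescript
// Assumed shapes based on the analysis above -- not the real utils/auth.ts.
type User = { id: string; role?: string }

function requireAuth(user: User | null): User {
  // Per the summary: throws if no user, returns user if exists
  if (!user) throw new Error("Not authenticated")
  return user
}

function isAdmin(user: User | null): boolean {
  // Edge case from the analysis: user may have no role field at all
  return user?.role === "admin"
}

function test(name: string, fn: () => void): void { fn(); console.log("✓ " + name) }

test("requireAuth throws when user is null", () => {
  let threw = false
  try { requireAuth(null) } catch { threw = true }
  if (!threw) throw new Error("expected requireAuth to throw")
})
test("isAdmin is false when user has no role field", () => {
  if (isAdmin({ id: "u1" }) !== false) throw new Error("missing role not handled")
})
```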
Always Verify the Tests Run
Critical step that's easy to skip:
After Claude writes tests, always run them.
> Run the tests you just wrote and report the results.
> If any fail, fix them before we move on.
Why this matters:
Claude sometimes writes tests that:
- Have syntax errors
- Import things that don't exist
- Mock incorrectly
- Assert the wrong things
Running them immediately catches these issues.
The Complete Testing Workflow
Here's the workflow I use every time:
After Building a Feature
1. Build the feature
2. /review to check the code
3. /test or /test-component
4. Run the tests → verify they pass
5. /ship to commit everything
After Fixing a Bug
1. Fix the bug
2. /review the fix
3. Write a test that would have caught this bug
4. Run the test → verify it fails on old code, passes on new
5. /ship to commit fix + test
For Legacy Code
1. Identify untested file
2. Ask Claude to analyze what needs testing
3. Write comprehensive test suite
4. Run tests → fix any failures
5. Commit tests separately
What Makes a Good Test?
Claude needs clear instructions to write good tests.
✅ Good Test Instructions
Write tests covering:
- Happy path with typical valid inputs
- Edge cases: empty arrays, null, zero, max values
- Error states: network failures, invalid data
- Each test should verify one specific behavior
- Use descriptive test names
- Mock all external dependencies
❌ Bad Test Instructions
Write some tests
Too vague. You'll get generic happy-path tests only.
Component Testing Best Practices
When testing React components with Claude Code:
Test Behavior, Not Implementation
❌ Bad:
Test that useState is called with initial value
✅ Good:
Test that loading spinner shows while fetching
Test All Meaningful States
For any component, test:
- Initial/default state
- Loading state
- Error state
- Empty state
- Populated state
Test User Interactions
- Click buttons
- Type in inputs
- Submit forms
- Toggle switches
- Select options
Verify the right things happen after each interaction.
My Raw Notes (Unfiltered)
Honestly embarrassed by how little test coverage Resiboko has. Starting fresh with this habit on everything new.
The "write a test that would have caught this bug" prompt after fixing something is now a non-negotiable for me — it's caught the same bug trying to sneak back in twice.
The "run the tests immediately after writing them" step is important — Claude sometimes writes tests that don't actually run. Always verify before committing.
The component test command is the one I use less because React Testing Library has a learning curve but Claude handles the setup which is usually the hardest part.
Tests aren't about being a good developer. They're about not debugging the same thing twice.
Tomorrow (Day 19 Preview)
Topic: Working with APIs — how to let Claude Code handle third-party integrations so you're not reading docs for hours.
What I'm testing: Can Claude Code integrate Stripe, Supabase, or any API without you reading the docs? How do you guide it to handle authentication, error states, and edge cases?
Following This Series
Phase 3 (Days 14-21): Vibe Coding ⬅️ You are here
So far in Phase 3:
- Day 14: Vibe coding mindset
- Day 15: Feature-first workflow
- Day 16: Debugging with Claude Code
- Day 17: Refactoring with Claude Code
- Day 18: Writing tests with Claude Code (today)
- Day 19: Working with APIs (tomorrow)
G
P.S. - Two commands after every feature: /test for utilities, /test-component for React components. Two minutes. No excuse.
P.P.S. - After fixing any bug: "Write a test that would have caught this bug before production." That test saves you from debugging the same thing twice.
P.P.P.S. - Always run the tests Claude writes before committing. Verify they actually pass. Claude sometimes writes tests that don't run.