TypeFetched/manual/chapter-12-testing-debugging.md
Casey Collier b85b9a63e2 Initial commit: TypedFetch - Zero-dependency, type-safe HTTP client
2025-07-20 12:35:43 -04:00

Chapter 12: Testing & Debugging

"Testing is doubting. Debugging is understanding."


The Production Nightmare

Sarah deployed Weather Buddy 11.0 on Friday afternoon (first mistake). By Monday, her inbox was on fire.

"The app shows Tokyo weather for London!" "My favorites keep disappearing!" "It says I'm offline when I'm not!"

Sarah stared at the bug reports. Everything worked perfectly on her machine. How could she debug issues she couldn't reproduce?

"Welcome to production debugging," Marcus said, pulling up a chair. "Let me show you how to test TypedFetch apps properly and debug them like a detective."

Testing TypedFetch: The Right Way

TypedFetch includes powerful testing utilities:

import { createMockTypedFetch, MockAdapter } from 'typedfetch/testing'

describe('Weather API', () => {
  let tf: TypedFetch
  let mock: MockAdapter
  
  beforeEach(() => {
    // Create mock instance
    const { instance, adapter } = createMockTypedFetch()
    tf = instance
    mock = adapter
  })
  
  afterEach(() => {
    mock.reset()
  })
  
  test('fetches weather successfully', async () => {
    // Setup mock
    mock.onGet('/api/weather/london').reply(200, {
      temperature: 15,
      condition: 'cloudy'
    })
    
    // Make request
    const { data } = await tf.get('/api/weather/london')
    
    // Assert
    expect(data.temperature).toBe(15)
    expect(data.condition).toBe('cloudy')
    expect(mock.history.get.length).toBe(1)
  })
  
  test('handles errors gracefully', async () => {
    // Mock network error
    mock.onGet('/api/weather/invalid').networkError()
    
    // Expect error
    await expect(tf.get('/api/weather/invalid'))
      .rejects.toThrow('Network Error')
  })
  
  test('caches responses', async () => {
    mock.onGet('/api/weather/paris').reply(200, { temp: 20 })
    
    // First call - network
    const first = await tf.get('/api/weather/paris')
    expect(mock.history.get.length).toBe(1)
    
    // Second call - cache
    const second = await tf.get('/api/weather/paris')
    expect(mock.history.get.length).toBe(1)  // No new request
    expect(second.data).toEqual(first.data)
  })
})

Unit Testing Best Practices

1. Test Request Configuration

describe('Request Configuration', () => {
  test('adds authentication headers', async () => {
    const token = 'secret-token'
    
    tf.addRequestInterceptor(config => ({
      ...config,
      headers: {
        ...config.headers,
        'Authorization': `Bearer ${token}`
      }
    }))
    
    mock.onGet('/api/user').reply(200, { name: 'Sarah' })
    
    await tf.get('/api/user')
    
    // Check request headers
    const request = mock.history.get[0]
    expect(request.headers['Authorization']).toBe(`Bearer ${token}`)
  })
  
  test('handles query parameters', async () => {
    mock.onGet('/api/search').reply(200, [])
    
    await tf.get('/api/search', {
      params: {
        q: 'weather london',
        limit: 10
      }
    })
    
    const request = mock.history.get[0]
    expect(request.params).toEqual({
      q: 'weather london',
      limit: 10
    })
  })
  
  test('respects timeout configuration', async () => {
    mock.onGet('/api/slow').delay(2000).reply(200, {})
    
    await expect(
      tf.get('/api/slow', { timeout: 1000 })
    ).rejects.toThrow('timeout')
  })
})

2. Test Error Scenarios

describe('Error Handling', () => {
  test('retries on 5xx errors', async () => {
    let attempts = 0
    
    mock.onGet('/api/flaky').reply(() => {
      attempts++
      
      if (attempts < 3) {
        return [500, { error: 'Server Error' }]
      }
      
      return [200, { success: true }]
    })
    
    const { data } = await tf.get('/api/flaky')
    
    expect(data.success).toBe(true)
    expect(attempts).toBe(3)
  })
  
  test('provides helpful error messages', async () => {
    mock.onGet('/api/protected').reply(401, {
      error: 'Unauthorized'
    })
    
    expect.assertions(3)  // guard: the test should fail if no error is thrown
    try {
      await tf.get('/api/protected')
    } catch (error) {
      expect(error.message).toContain('Authentication required')
      expect(error.suggestions).toContain('Check your API token')
      expect(error.response.status).toBe(401)
    }
  })
  
  test('handles network failures', async () => {
    mock.onGet('/api/data').networkError()
    
    expect.assertions(2)  // guard: the test should fail if no error is thrown
    try {
      await tf.get('/api/data')
    } catch (error) {
      expect(error.code).toBe('NETWORK_ERROR')
      expect(error.message).toContain('Network request failed')
    }
  })
})
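The retry test above asserts that three attempts were made, but not how long the client waits between them. Retries are easier to reason about when the backoff schedule is a pure function you can assert against. Here is a sketch of a jittered exponential backoff (illustrative only; TypedFetch's actual schedule may differ):

```typescript
// Exponential backoff with "full jitter": the delay for attempt n is a
// random value in [0, min(maxMs, baseMs * 2^n)).
function backoffDelay(attempt: number, baseMs = 200, maxMs = 10_000): number {
  const ceiling = Math.min(maxMs, baseMs * 2 ** attempt)
  return Math.random() * ceiling
}
```

Because the function is pure apart from `Math.random()`, a test only needs to check the bounds: the delay never goes negative, doubles its ceiling each attempt, and is capped at `maxMs`.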

3. Test Caching Behavior

describe('Caching', () => {
  test('caches GET requests', async () => {
    const spy = jest.fn().mockResolvedValue([200, { data: 'test' }])
    mock.onGet('/api/cacheable').reply(spy)
    
    // First request - hits network
    await tf.get('/api/cacheable')
    expect(spy).toHaveBeenCalledTimes(1)
    
    // Second request - hits cache
    await tf.get('/api/cacheable')
    expect(spy).toHaveBeenCalledTimes(1)
  })
  
  test('invalidates cache on mutation', async () => {
    mock.onGet('/api/todos').reply(200, [{ id: 1, text: 'Test' }])
    mock.onPost('/api/todos').reply(201, { id: 2, text: 'New' })
    
    // Cache the GET
    await tf.get('/api/todos')
    
    // POST should invalidate
    await tf.post('/api/todos', {
      data: { text: 'New' }
    })
    
    // Next GET should hit network
    mock.onGet('/api/todos').reply(200, [
      { id: 1, text: 'Test' },
      { id: 2, text: 'New' }
    ])
    
    const { data } = await tf.get('/api/todos')
    expect(data).toHaveLength(2)
    expect(mock.history.get).toHaveLength(2)
  })
  
  test('respects cache headers', async () => {
    jest.useFakeTimers()  // needed for advanceTimersByTime below
    
    mock.onGet('/api/cacheable').reply(200, 
      { data: 'test' },
      { 'Cache-Control': 'max-age=60' }
    )
    
    const first = await tf.get('/api/cacheable')
    
    // Should use cache within 60 seconds
    const second = await tf.get('/api/cacheable')
    expect(mock.history.get).toHaveLength(1)
    
    // Fast-forward time
    jest.advanceTimersByTime(61000)
    
    // Should hit network after expiry
    const third = await tf.get('/api/cacheable')
    expect(mock.history.get).toHaveLength(2)
  })
})

Integration Testing

Test how components work together:

describe('Weather Service Integration', () => {
  let service: WeatherService
  let tf: TypedFetch
  let mock: MockAdapter
  
  beforeEach(() => {
    const { instance, adapter } = createMockTypedFetch()
    tf = instance
    mock = adapter
    service = new WeatherService(tf)
  })
  
  test('fetches and transforms weather data', async () => {
    mock.onGet('/api/weather/london').reply(200, {
      current_condition: [{
        temp_C: '15',
        weatherDesc: [{ value: 'Cloudy' }]
      }]
    })
    
    const weather = await service.getWeather('london')
    
    expect(weather).toEqual({
      city: 'London',
      temperature: 15,
      condition: 'Cloudy',
      unit: 'celsius'
    })
  })
  
  test('handles offline mode', async () => {
    // Mock offline
    Object.defineProperty(navigator, 'onLine', {
      writable: true,
      value: false
    })
    
    // Setup IndexedDB mock with previously cached data
    const cache = {
      london: {
        temperature: 14,
        condition: 'Partly Cloudy',
        cached: true
      }
    }
    // Seed the service's offline store (primeOfflineCache is a
    // hypothetical test helper on WeatherService)
    await service.primeOfflineCache(cache)
    
    // Service should return cached data
    const weather = await service.getWeather('london')
    expect(weather.cached).toBe(true)
    expect(weather.temperature).toBe(14)
  })
  
  test('queues updates when offline', async () => {
    Object.defineProperty(navigator, 'onLine', {
      writable: true,
      value: false
    })
    
    const result = await service.addFavorite('Tokyo')
    
    expect(result.queued).toBe(true)
    expect(result.syncPending).toBe(true)
    
    // Verify queued in IndexedDB
    const queue = await service.getSyncQueue()
    expect(queue).toContainEqual({
      action: 'ADD_FAVORITE',
      city: 'Tokyo',
      timestamp: expect.any(Number)
    })
  })
})

E2E Testing with TypedFetch

End-to-end tests with a real browser:

import { test, expect } from '@playwright/test'
import { TypedFetchDevTools } from 'typedfetch/devtools'

test.describe('Weather Buddy E2E', () => {
  let devTools: TypedFetchDevTools
  
  test.beforeEach(async ({ page }) => {
    // Connect to TypedFetch DevTools
    devTools = await TypedFetchDevTools.connect(page)
    
    // Setup request interception
    await devTools.interceptRequests({
      '/api/weather/*': {
        response: { temperature: 20, condition: 'Sunny' }
      }
    })
    
    await page.goto('http://localhost:3000')
  })
  
  test('displays weather for searched city', async ({ page }) => {
    // Type in search
    await page.fill('#city-search', 'London')
    await page.click('#search-button')
    
    // Wait for request
    const request = await devTools.waitForRequest('/api/weather/london')
    expect(request.method).toBe('GET')
    
    // Check display
    await expect(page.locator('.temperature')).toContainText('20°C')
    await expect(page.locator('.condition')).toContainText('Sunny')
  })
  
  test('shows offline indicator', async ({ page, context }) => {
    // Go offline
    await context.setOffline(true)
    
    // Check indicator
    await expect(page.locator('.connection-status')).toContainText('Offline')
    
    // Try to search
    await page.fill('#city-search', 'Paris')
    await page.click('#search-button')
    
    // Should show cached or offline message
    await expect(page.locator('.offline-message')).toBeVisible()
  })
  
  test('syncs when back online', async ({ page, context }) => {
    // Go offline
    await context.setOffline(true)
    
    // Add favorite while offline
    await page.click('[data-city="Tokyo"] .favorite-button')
    
    // Check queued
    const queue = await devTools.getSyncQueue()
    expect(queue).toHaveLength(1)
    
    // Go online
    await context.setOffline(false)
    
    // Wait for sync
    await devTools.waitForSync()
    
    // Verify synced
    const syncedQueue = await devTools.getSyncQueue()
    expect(syncedQueue).toHaveLength(0)
    
    // Check request was made
    const syncRequest = await devTools.getRequest('/api/favorites')
    expect(syncRequest.method).toBe('POST')
    expect(syncRequest.body).toEqual({ city: 'Tokyo' })
  })
})

Debugging TypedFetch in Production

1. TypedFetch DevTools Extension

// Enable DevTools in production (carefully!)
if (shouldEnableDevTools()) {
  tf.enableDevTools({
    // Only for specific users
    enabled: user.role === 'developer',
    
    // Redact sensitive data
    redact: ['password', 'token', 'apiKey'],
    
    // Sampling for performance
    sampling: 0.01,  // 1% of requests
    
    // Remote debugging
    remote: {
      enabled: true,
      endpoint: 'https://debug.example.com',
      apiKey: process.env.DEBUG_API_KEY
    }
  })
}

// DevTools panel shows:
// - All requests with timing
// - Cache hit/miss
// - Request/response headers
// - Error details
// - Performance metrics
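A `sampling` rate like the `0.01` above is often implemented deterministically: hash the request ID into a fraction in [0, 1) and compare it to the rate, so a given request is consistently in or out of the sample across retries. A sketch (this is an illustration, not TypedFetch's actual internals):

```typescript
// Deterministic sampler: hash the request id to a fraction in [0, 1)
// and compare against the sampling rate.
function shouldSample(requestId: string, rate: number): boolean {
  let hash = 0
  for (const ch of requestId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0  // simple 32-bit rolling hash
  }
  return hash / 0x100000000 < rate
}
```

The payoff is reproducibility: if a sampled request misbehaves, retrying it with the same ID is guaranteed to be sampled again.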

2. Request Tracing

class RequestTracer {
  private traces = new Map<string, Trace>()
  
  startTrace(requestId: string): Trace {
    const trace = {
      id: requestId,
      startTime: performance.now(),
      duration: 0,  // filled in by endTrace
      url: '',
      method: '',
      headers: {},
      events: []
    }
    
    this.traces.set(requestId, trace)
    return trace
  }
  
  addEvent(requestId: string, event: TraceEvent) {
    const trace = this.traces.get(requestId)
    if (trace) {
      trace.events.push({
        ...event,
        timestamp: performance.now() - trace.startTime
      })
    }
  }
  
  endTrace(requestId: string): Trace | undefined {
    const trace = this.traces.get(requestId)
    if (trace) {
      trace.duration = performance.now() - trace.startTime
      
      // Send to monitoring
      this.sendTrace(trace)
      
      this.traces.delete(requestId)
    }
    
    return trace
  }
  
  private sendTrace(trace: Trace) {
    // Send to APM service
    navigator.sendBeacon('/api/traces', JSON.stringify(trace))
  }
}

// Use in interceptors
tf.addRequestInterceptor(config => {
  const traceId = crypto.randomUUID()
  config.metadata = { ...config.metadata, traceId }  // metadata may not exist yet
  
  tracer.startTrace(traceId)
  tracer.addEvent(traceId, {
    type: 'request-start',
    data: { url: config.url, method: config.method }
  })
  
  return config
})

3. Error Tracking

class ErrorTracker {
  private errorQueue: ErrorReport[] = []
  private flushInterval = 5000
  
  constructor() {
    // Batch send errors
    setInterval(() => this.flush(), this.flushInterval)
    
    // Capture unhandled errors
    window.addEventListener('unhandledrejection', event => {
      this.trackError(event.reason, {
        type: 'unhandled-promise',
        promise: event.promise
      })
    })
  }
  
  trackError(error: any, context: any = {}) {
    const report: ErrorReport = {
      timestamp: Date.now(),
      message: error.message || String(error),
      stack: error.stack,
      type: error.name || 'Error',
      context: {
        ...context,
        url: window.location.href,
        userAgent: navigator.userAgent,
        ...this.getAppContext()
      }
    }
    
    // Add TypedFetch specific info
    if (error.config) {
      report.request = {
        url: error.config.url,
        method: error.config.method,
        headers: this.redactHeaders(error.config.headers)
      }
    }
    
    if (error.response) {
      report.response = {
        status: error.response.status,
        statusText: error.response.statusText,
        headers: this.redactHeaders(error.response.headers)
      }
    }
    
    this.errorQueue.push(report)
    
    // Immediate send for critical errors
    if (this.isCritical(error)) {
      this.flush()
    }
  }
  
  private flush() {
    if (this.errorQueue.length === 0) return
    
    const errors = [...this.errorQueue]
    this.errorQueue = []
    
    fetch('/api/errors', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ errors })
    }).catch(err => {
      // Re-queue on failure
      this.errorQueue.unshift(...errors)
    })
  }
  
  private redactHeaders(headers: any): any {
    const redacted: any = {}
    const sensitive = ['authorization', 'cookie', 'x-api-key']
    
    // Compare case-insensitively: header names arrive in mixed case
    for (const [key, value] of Object.entries(headers ?? {})) {
      redacted[key] = sensitive.includes(key.toLowerCase()) ? '[REDACTED]' : value
    }
    
    return redacted
  }
  
  private isCritical(error: any): boolean {
    return error.response?.status >= 500 ||
           error.code === 'NETWORK_ERROR' ||
           error.message?.includes('Critical')
  }
  
  private getAppContext() {
    return {
      version: APP_VERSION,
      environment: process.env.NODE_ENV,
      session: getSessionId(),
      userId: getCurrentUserId()
    }
  }
}

// Initialize error tracking
const errorTracker = new ErrorTracker()

// Track TypedFetch errors
tf.addErrorInterceptor(error => {
  errorTracker.trackError(error, {
    source: 'typedfetch'
  })
  
  throw error
})

Performance Debugging

1. Request Performance Monitoring

class PerformanceMonitor {
  private metrics: PerformanceMetrics = {
    requests: [],
    cache: {
      hits: 0,
      misses: 0,
      evictions: 0
    },
    network: {
      latency: [],
      bandwidth: []
    }
  }
  
  measureRequest(config: RequestConfig): () => void {
    const start = performance.now()
    const entry: RequestMetrics = {
      url: config.url,
      method: config.method,
      startTime: start,
      events: []
    }
    
    // Add performance markers
    performance.mark(`tf-request-start-${config.metadata.requestId}`)
    
    return () => {
      const end = performance.now()
      entry.duration = end - start
      
      performance.mark(`tf-request-end-${config.metadata.requestId}`)
      performance.measure(
        `tf-request-${config.metadata.requestId}`,
        `tf-request-start-${config.metadata.requestId}`,
        `tf-request-end-${config.metadata.requestId}`
      )
      
      // Collect resource timing
      const resourceTiming = performance
        .getEntriesByType('resource')
        .find(e => e.name.includes(config.url))
      
      if (resourceTiming) {
        entry.timing = {
          dns: resourceTiming.domainLookupEnd - resourceTiming.domainLookupStart,
          tcp: resourceTiming.connectEnd - resourceTiming.connectStart,
          ttfb: resourceTiming.responseStart - resourceTiming.requestStart,
          download: resourceTiming.responseEnd - resourceTiming.responseStart
        }
      }
      
      this.metrics.requests.push(entry)
      this.analyzePerformance()
    }
  }
  
  private analyzePerformance() {
    const recent = this.metrics.requests.slice(-100)
    
    if (recent.length < 10) return
    
    const analysis = {
      avgDuration: average(recent.map(r => r.duration)),
      p95Duration: percentile(recent.map(r => r.duration), 0.95),
      slowestEndpoints: this.findSlowestEndpoints(recent),
      cacheHitRate: this.metrics.cache.hits / 
                    (this.metrics.cache.hits + this.metrics.cache.misses)
    }
    
    // Alert on performance degradation
    if (analysis.p95Duration > 1000) {
      console.warn('Performance degradation detected:', analysis)
      this.sendAlert(analysis)
    }
  }
  
  private findSlowestEndpoints(requests: RequestMetrics[]): EndpointStats[] {
    const byEndpoint = groupBy(requests, r => r.url)
    
    return Object.entries(byEndpoint)
      .map(([url, reqs]) => ({
        url,
        count: reqs.length,
        avgDuration: average(reqs.map(r => r.duration)),
        p95Duration: percentile(reqs.map(r => r.duration), 0.95)
      }))
      .sort((a, b) => b.p95Duration - a.p95Duration)
      .slice(0, 5)
  }
}
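The monitor above leans on a few small statistics helpers (`average`, `percentile`, `groupBy`) that aren't shown. A minimal sketch of what they're assumed to do:

```typescript
// Arithmetic mean of a list of numbers.
function average(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length
}

// Nearest-rank percentile over a sorted copy; p is a fraction, e.g. 0.95.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b)
  const index = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1)
  return sorted[Math.max(0, index)]
}

// Group items by a key function, e.g. groupBy(requests, r => r.url).
function groupBy<T>(items: T[], key: (item: T) => string): Record<string, T[]> {
  const groups: Record<string, T[]> = {}
  for (const item of items) {
    (groups[key(item)] ??= []).push(item)
  }
  return groups
}
```

Nearest-rank is the simplest percentile definition; interpolating variants give slightly different values but the same trend lines.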

2. Memory Leak Detection

class MemoryLeakDetector {
  private samples: MemorySample[] = []
  private interval = 30000  // 30 seconds
  
  start() {
    if (!performance.memory) {
      console.warn('Memory monitoring not available')
      return
    }
    
    setInterval(() => {
      this.takeSample()
    }, this.interval)
  }
  
  private takeSample() {
    const sample: MemorySample = {
      timestamp: Date.now(),
      usedJSHeapSize: performance.memory.usedJSHeapSize,
      totalJSHeapSize: performance.memory.totalJSHeapSize,
      jsHeapSizeLimit: performance.memory.jsHeapSizeLimit,
      ...this.getCustomMetrics()
    }
    
    this.samples.push(sample)
    
    // Keep last hour
    const cutoff = Date.now() - (60 * 60 * 1000)
    this.samples = this.samples.filter(s => s.timestamp > cutoff)
    
    this.detectLeaks()
  }
  
  private detectLeaks() {
    if (this.samples.length < 10) return
    
    // Check for continuous growth
    const recent = this.samples.slice(-10)
    const growth = recent[recent.length - 1].usedJSHeapSize - recent[0].usedJSHeapSize
    const growthRate = growth / (9 * this.interval)  // 10 samples span 9 intervals
    
    // Alert if growing > 1MB per minute
    if (growthRate > 1024 * 1024 / 60000) {
      console.error('Potential memory leak detected:', {
        growthRate: `${(growthRate * 60000 / 1024 / 1024).toFixed(2)} MB/min`,
        current: `${(recent[recent.length - 1].usedJSHeapSize / 1024 / 1024).toFixed(2)} MB`,
        samples: recent
      })
      
      this.captureHeapSnapshot()
    }
  }
  
  private getCustomMetrics() {
    return {
      tfCacheSize: tf.cache.size(),
      tfPendingRequests: tf.getPendingCount(),
      domNodes: document.getElementsByTagName('*').length,
      eventListeners: this.countEventListeners()
    }
  }
  
  private countEventListeners(): number {
    // Estimate event listeners (not precise — getEventListeners only exists
    // in the Chrome DevTools console, so this works only when run from there)
    let count = 0
    const allElements = document.getElementsByTagName('*')
    
    for (const element of allElements) {
      const listeners = getEventListeners(element)
      count += Object.values(listeners).flat().length
    }
    
    return count
  }
  
  private captureHeapSnapshot() {
    // measureUserAgentSpecificMemory needs a cross-origin-isolated context
    if ('measureUserAgentSpecificMemory' in performance) {
      performance.measureUserAgentSpecificMemory().then(result => {
        console.log('Heap snapshot:', result)
        
        // Send to monitoring
        this.sendSnapshot(result)
      })
    }
  }
}
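In the spirit of this chapter, the growth heuristic itself deserves a unit test, which is easiest if it's extracted as a pure function over samples. A standalone sketch of the same calculation the detector performs:

```typescript
interface HeapSample {
  timestamp: number        // ms since epoch
  usedJSHeapSize: number   // bytes
}

// Growth rate in bytes per millisecond between first and last sample.
function heapGrowthRate(samples: HeapSample[]): number {
  const first = samples[0]
  const last = samples[samples.length - 1]
  return (last.usedJSHeapSize - first.usedJSHeapSize) /
         (last.timestamp - first.timestamp)
}

// Same threshold as the detector above: flag growth over 1 MiB per minute.
function looksLikeLeak(samples: HeapSample[]): boolean {
  return heapGrowthRate(samples) > (1024 * 1024) / 60_000
}
```

Now flat, leaky, and noisy heap profiles can be tested without touching `performance.memory` at all.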

Best Debugging Practices 🎯

1. Use Request IDs

// Add unique ID to every request
tf.addRequestInterceptor(config => {
  config.headers['X-Request-ID'] = crypto.randomUUID()
  return config
})

// Track through entire system
// Client → Server → Database → Logs
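On the server side, the same ID can be echoed into logs and the response so the trace survives each hop. A hypothetical Express-style middleware (the handler shape and helper names here are illustrative, not from TypedFetch):

```typescript
import { randomUUID } from 'node:crypto'

// Echo the client's X-Request-ID (or mint one if absent) into logs and the
// response, so client and server log lines can be joined on the same id.
type Handler = (
  req: { headers: Record<string, string>; url: string },
  res: { setHeader(name: string, value: string): void },
  next: () => void
) => void

const requestIdMiddleware: Handler = (req, res, next) => {
  const id = req.headers['x-request-id'] ?? randomUUID()
  res.setHeader('X-Request-ID', id)
  console.log(JSON.stringify({ requestId: id, path: req.url }))
  next()
}
```

With this in place, a single `grep` for the ID pulls the request's full story out of both client and server logs.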

2. Structured Logging

class Logger {
  log(level: LogLevel, message: string, context?: any) {
    const entry = {
      timestamp: new Date().toISOString(),
      level,
      message,
      context,
      ...this.getRequestContext()
    }
    
    // Local logging
    console[level](entry)
    
    // Remote logging
    if (level === 'error' || level === 'warn') {
      this.sendToServer(entry)
    }
  }
  
  private getRequestContext() {
    return {
      url: window.location.href,
      sessionId: getSessionId(),
      userId: getCurrentUserId(),
      version: APP_VERSION
    }
  }
}

3. Debug Mode

// Enable comprehensive debugging
if (localStorage.getItem('debug') === 'true') {
  tf.enableDebug()
  
  // Log all requests
  tf.addRequestInterceptor(config => {
    console.group(`🚀 ${config.method} ${config.url}`)
    console.log('Headers:', config.headers)
    console.log('Params:', config.params)
    console.log('Body:', config.data)
    console.groupEnd()
    return config
  })
  
  // Log all responses
  tf.addResponseInterceptor(response => {
    console.group(`✅ ${response.config.url}`)
    console.log('Status:', response.status)
    console.log('Headers:', response.headers)
    console.log('Data:', response.data)
    console.log('Cached:', response.cached)
    console.groupEnd()
    return response
  })
}

4. Production Debugging

// Safe production debugging
class ProductionDebugger {
  async captureDebugInfo(): Promise<DebugBundle> {
    return {
      timestamp: Date.now(),
      
      // App state
      state: this.captureState(),
      
      // Recent requests
      requests: tf.getHistory().slice(-20),
      
      // Cache info
      cache: {
        size: tf.cache.size(),
        entries: tf.cache.keys(),
        hitRate: tf.cache.getHitRate()
      },
      
      // Performance
      performance: {
        memory: performance.memory,
        navigation: performance.getEntriesByType('navigation')[0],
        resources: performance.getEntriesByType('resource').slice(-50)
      },
      
      // Errors
      errors: errorTracker.getRecent(),
      
      // Environment
      environment: {
        userAgent: navigator.userAgent,
        screen: `${screen.width}x${screen.height}`,
        connection: navigator.connection,
        online: navigator.onLine
      }
    }
  }
  
  async sendDebugBundle() {
    const bundle = await this.captureDebugInfo()
    
    // Compress and send
    const compressed = await compress(JSON.stringify(bundle))
    
    await fetch('/api/debug', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/octet-stream',
        'X-Debug-Token': await this.getDebugToken()
      },
      body: compressed
    })
  }
}
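The `compress` helper used above isn't defined; a minimal version can be built on the web-standard `CompressionStream` API (available in modern browsers and Node 18+):

```typescript
// Gzip a string using the standard CompressionStream API.
async function compress(text: string): Promise<Uint8Array> {
  const stream = new Blob([text]).stream()
    .pipeThrough(new CompressionStream('gzip'))
  const buffer = await new Response(stream).arrayBuffer()
  return new Uint8Array(buffer)
}
```

For large debug bundles this routinely shrinks the JSON payload severalfold before it leaves the user's device.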

Practice Time! 🏋️

Exercise 1: Build a Test Suite

Create comprehensive tests:

describe('WeatherAPI', () => {
  // Your tests here:
  // - Happy path
  // - Error scenarios
  // - Caching behavior
  // - Offline support
  // - Performance
})

Exercise 2: Create Debug Tools

Build debugging utilities:

class DebugTools {
  // Your code here:
  // - Request inspector
  // - Cache analyzer
  // - Performance profiler
  // - Error reporter
}

Exercise 3: Implement E2E Tests

Create end-to-end tests:

test.describe('Weather Buddy', () => {
  // Your tests here:
  // - User flows
  // - Offline scenarios
  // - Error recovery
  // - Performance
})

Key Takeaways 🎯

  1. Test at multiple levels - Unit, integration, and E2E
  2. Mock TypedFetch for predictable tests - Control responses
  3. Test error scenarios thoroughly - They happen more than success
  4. Use request tracing in production - Follow requests through system
  5. Monitor performance continuously - Detect degradation early
  6. Enable debug mode safely - With proper controls
  7. Capture debug bundles - For post-mortem analysis
  8. Track errors with context - Know why things failed

Common Pitfalls 🚨

  1. Not testing offline scenarios - Major source of bugs
  2. Missing error test cases - Happy path only
  3. No production debugging - Can't fix what you can't see
  4. Ignoring performance tests - Slow death of apps
  5. Poor error messages - Users can't help you
  6. No request tracing - Debugging is guesswork

What's Next?

You've mastered testing and debugging! But how do you build reusable API abstractions? In Chapter 13, we'll explore advanced patterns:

  • Repository pattern
  • API client factories
  • Domain-driven design
  • Composable APIs
  • Plugin architectures
  • Code generation

Ready to architect like a pro? See you in Chapter 13! 🏗️


Chapter Summary

  • TypedFetch provides powerful testing utilities with mock adapters
  • Test request configuration, error handling, and caching behavior thoroughly
  • Integration tests verify components work together correctly
  • E2E tests with real browsers catch user-facing issues
  • Production debugging requires request tracing and error tracking
  • Performance monitoring helps detect degradation early
  • Memory leak detection prevents resource exhaustion
  • Structured logging and debug bundles aid troubleshooting

Next Chapter Preview: Building API Abstractions - Repository patterns, client factories, and architecting reusable API layers.