Chapter 6: The Cache Revolution

"The fastest API call is the one you don't make."


The Performance Awakening

Sarah's Weather Buddy app was rock solid. It handled errors gracefully, recovered from failures, and never crashed. But during the Monday morning rush, when everyone checked weather before commuting, the app felt... sluggish.

"Why is it so slow?" Jake complained. "I'm checking the same cities every day!"

Marcus pulled up the network tab. "Look at this - you're making the same API calls over and over. Each weather check is a 200ms round trip."

"But I need fresh data," Sarah protested.

"Do you though?" Marcus smiled. "Does the temperature really change every second? Time to learn about caching - the single biggest performance win you'll ever implement."

Understanding Caching: Your Secret Weapon

Caching is like having a really good memory. Instead of asking the same question repeatedly, you remember the answer for a while.

// Without caching - every call hits the network
button.addEventListener('click', async () => {
  const weather = await tf.get('/api/weather/london')  // 200ms
  updateDisplay(weather)
})

// With caching - only first call hits network
button.addEventListener('click', async () => {
  const weather = await tf.get('/api/weather/london')  // 200ms first time, <1ms after
  updateDisplay(weather)
})

Notice the two snippets are identical: caching is on by default, so the repeat call is served from memory with zero code changes. And TypedFetch's cache isn't just fast - it's smart.

The W-TinyLFU Algorithm: Up to 25% Better Than LRU

Most caches use LRU (Least Recently Used) - they keep recent items and discard old ones. But TypedFetch uses W-TinyLFU, which is like having a cache with a photographic memory:

// Traditional LRU (capacity 3) - recency wins
cache.get('A')  // A becomes most recent
cache.get('B')  // B becomes most recent
cache.get('C')  // C becomes most recent
cache.get('D')  // D becomes most recent, A gets evicted

// W-TinyLFU (capacity 3) - frequency AND recency matter
cache.get('A')  // A: frequency=1, recent
cache.get('A')  // A: frequency=2, recent
cache.get('B')  // B: frequency=1, recent
cache.get('C')  // C: frequency=1, recent
cache.get('D')  // D: frequency=1, but A stays (higher frequency)

Why W-TinyLFU Rocks

  1. Better Hit Rates: 15-25% more cache hits than LRU
  2. Scan Resistance: One-time requests don't pollute the cache
  3. Frequency Awareness: Keeps frequently accessed items
  4. Memory Efficient: Uses sketch data structures (a toy version follows the next example)

Let's see it in action:

// TypedFetch automatically uses W-TinyLFU
const popularUser = await tf.get('/api/users/1')     // Accessed often
const trendingPost = await tf.get('/api/posts/hot')  // Accessed very often
const randomUser = await tf.get('/api/users/99999')  // Accessed once

// Later, when cache is full:
// - popularUser: still cached (high frequency)
// - trendingPost: still cached (very high frequency)  
// - randomUser: evicted (low frequency)
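
Under the hood, frequency tracking like this usually relies on a Count-Min Sketch: a few small counter arrays that estimate how often each key was seen, without storing one counter per URL. Here's a toy version to show the idea (illustrative only - a simplified sketch, not necessarily TypedFetch's exact internals):

class CountMinSketch {
  constructor(width = 1024, depth = 4) {
    this.width = width
    this.rows = Array.from({ length: depth }, () => new Uint8Array(width))
  }

  hash(key, seed) {
    // Simple FNV-1a style hash, salted per row
    let h = 2166136261 ^ seed
    for (const ch of key) {
      h ^= ch.charCodeAt(0)
      h = Math.imul(h, 16777619)
    }
    return (h >>> 0) % this.width
  }

  increment(key) {
    this.rows.forEach((row, i) => {
      const idx = this.hash(key, i)
      if (row[idx] < 255) row[idx]++  // Saturating counters keep memory tiny
    })
  }

  estimate(key) {
    // Collisions only inflate counters, so the minimum across rows
    // is the tightest estimate (never below the true count)
    return Math.min(...this.rows.map((row, i) => row[this.hash(key, i)]))
  }
}

// On eviction, a TinyLFU-style admission policy compares frequencies:
// a new entry only displaces a victim that is historically colder
const sketch = new CountMinSketch()
sketch.increment('/api/users/1')
sketch.increment('/api/users/1')
sketch.increment('/api/users/99999')
console.log(sketch.estimate('/api/users/1') > sketch.estimate('/api/users/99999'))  // true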

Cache Configuration: Fine-Tuning Performance

TypedFetch gives you complete control over caching:

// Global cache settings
tf.configure({
  cache: {
    maxSize: 100 * 1024 * 1024,  // 100MB cache
    maxAge: 5 * 60 * 1000,       // 5 minutes default TTL
    staleWhileRevalidate: true,  // Serve stale while fetching fresh
    algorithm: 'W-TinyLFU'       // or 'LRU' if you prefer
  }
})

// Per-request cache control
const { data } = await tf.get('/api/weather', {
  cache: {
    maxAge: 60000,              // Cache for 1 minute
    staleWhileRevalidate: true, // Return stale data while refreshing
    key: 'weather-london'       // Custom cache key
  }
})

// Skip cache
const { data: fresh } = await tf.get('/api/weather', {
  cache: false  // Always fetch fresh
})

// Force cache
const { data: cached } = await tf.get('/api/weather', {
  cache: 'force'  // Use cache even if expired
})

Cache Strategies for Different Data Types

Not all data should be cached the same way:

// Static data - cache aggressively
const countries = await tf.get('/api/countries', {
  cache: {
    maxAge: 7 * 24 * 60 * 60 * 1000,  // 1 week
    immutable: true  // Never changes
  }
})

// User data - cache briefly
const profile = await tf.get('/api/users/me', {
  cache: {
    maxAge: 60000,  // 1 minute
    private: true   // Don't share between users
  }
})

// Real-time data - cache very briefly
const stockPrice = await tf.get('/api/stocks/AAPL', {
  cache: {
    maxAge: 5000,   // 5 seconds
    staleWhileRevalidate: false  // Always need fresh
  }
})

// Personalized data - cache with user context
const recommendations = await tf.get('/api/recommendations', {
  cache: {
    key: `recs-user-${userId}`,  // User-specific key
    maxAge: 300000               // 5 minutes
  }
})

Cache Warming: Preload for Speed

Don't wait for users to request data - preload it:

// Warm cache on app start
async function warmCache() {
  const criticalEndpoints = [
    '/api/config',
    '/api/user/preferences',
    '/api/features'
  ]
  
  // Parallel cache warming
  await Promise.all(
    criticalEndpoints.map(endpoint => 
      tf.get(endpoint, { 
        cache: { warm: true }  // Low priority
      })
    )
  )
}

// Predictive cache warming
function predictiveWarm(currentPage) {
  const predictions = {
    '/dashboard': ['/api/stats', '/api/recent-activity'],
    '/profile': ['/api/user/posts', '/api/user/followers'],
    '/weather': ['/api/weather/current-location']
  }
  
  const toWarm = predictions[currentPage] || []
  toWarm.forEach(endpoint => {
    // Warm in background
    setTimeout(() => tf.get(endpoint), 100)
  })
}

// Time-based warming
function scheduleWarmup() {
  // Warm cache before work hours
  const now = new Date()
  const nineAM = new Date()
  nineAM.setHours(9, 0, 0, 0)
  
  if (now < nineAM) {
    const delay = nineAM - now
    setTimeout(warmCache, delay)
  }
}

Cache Invalidation: The Hard Problem

"There are only two hard things in Computer Science: cache invalidation and naming things." - Phil Karlton

TypedFetch makes invalidation easy:

// Invalidate specific endpoint
tf.cache.invalidate('/api/users/123')

// Invalidate with pattern
tf.cache.invalidatePattern('/api/users/*')

// Invalidate on mutation
const { data } = await tf.post('/api/posts', {
  data: newPost,
  invalidates: ['/api/posts', '/api/posts/recent']
})

// Smart invalidation based on relationships
tf.addResponseInterceptor(response => {
  if (response.config.method === 'POST' && response.config.url.includes('/comments')) {
    // New comment invalidates the post
    const postId = response.data.postId
    tf.cache.invalidate(`/api/posts/${postId}`)
  }
  return response
})

// Tag-based invalidation
const posts = await tf.get('/api/posts', {
  cache: { tags: ['posts', 'content'] }
})

// Later, invalidate all with tag
tf.cache.invalidateTag('content')

Weather Buddy 6.0: Lightning Fast

Let's add intelligent caching to Weather Buddy:

<!DOCTYPE html>
<html>
<head>
    <title>Weather Buddy 6.0 - Lightning Fast</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .city-grid { display: grid; grid-template-columns: repeat(auto-fill, minmax(300px, 1fr)); gap: 20px; }
        .city-card { border: 1px solid #ddd; padding: 15px; border-radius: 8px; position: relative; }
        .cache-indicator { position: absolute; top: 5px; right: 5px; font-size: 12px; }
        .cache-fresh { color: #4CAF50; }
        .cache-stale { color: #ff9800; }
        .cache-miss { color: #f44336; }
        .performance-stats { position: fixed; bottom: 20px; right: 20px; background: white; padding: 15px; border: 1px solid #ddd; border-radius: 8px; font-family: monospace; }
        .cache-controls { margin: 20px 0; padding: 15px; background: #f0f0f0; border-radius: 8px; }
    </style>
    <script type="module">
        import { tf } from 'https://esm.sh/typedfetch'
        
        // Expose tf for the inline onclick handlers below (module scope isn't global)
        window.tf = tf
        
        // Performance tracking
        const stats = {
            requests: 0,
            cacheHits: 0,
            cacheMisses: 0,
            totalTime: 0,
            savedTime: 0
        }
        
        // Configure intelligent caching
        tf.configure({
            cache: {
                maxSize: 50 * 1024 * 1024,  // 50MB
                algorithm: 'W-TinyLFU',
                staleWhileRevalidate: true
            }
        })
        
        // Add performance tracking
        tf.addRequestInterceptor(config => {
            config.metadata = { startTime: Date.now() }
            stats.requests++
            return config
        })
        
        tf.addResponseInterceptor(response => {
            const duration = Date.now() - response.config.metadata.startTime
            stats.totalTime += duration
            
            if (response.cached) {
                stats.cacheHits++
                stats.savedTime += 200  // Assume 200ms saved per cache hit
            } else {
                stats.cacheMisses++
            }
            
            updateStats()
            return response
        })
        
        // Weather fetching with intelligent caching
        async function fetchWeatherCached(city) {
            const cacheKey = `weather-${city}`
            
            // Different cache strategies based on time
            const now = new Date()
            const hour = now.getHours()
            
            let cacheConfig
            if (hour >= 6 && hour <= 9) {
                // Morning rush - cache briefly
                cacheConfig = {
                    maxAge: 60000,  // 1 minute
                    staleWhileRevalidate: true
                }
            } else if (hour >= 22 || hour <= 5) {
                // Night - cache longer
                cacheConfig = {
                    maxAge: 1800000,  // 30 minutes
                    staleWhileRevalidate: true
                }
            } else {
                // Normal hours
                cacheConfig = {
                    maxAge: 300000,  // 5 minutes
                    staleWhileRevalidate: true
                }
            }
            
            try {
                const { data, cached, stale } = await tf.get(
                    `https://wttr.in/${city}?format=j1`,
                    { 
                        cache: { ...cacheConfig, key: cacheKey },
                        returnCacheData: true
                    }
                )
                
                return {
                    weather: data,
                    cacheStatus: cached ? (stale ? 'stale' : 'fresh') : 'miss',
                    city
                }
            } catch (error) {
                // Try force cache on error
                try {
                    const { data } = await tf.get(
                        `https://wttr.in/${city}?format=j1`,
                        { cache: 'force' }
                    )
                    return {
                        weather: data,
                        cacheStatus: 'forced',
                        city
                    }
                } catch {
                    throw error
                }
            }
        }
        
        // Update weather display with cache info
        function updateWeatherCard(cardId, data) {
            const card = document.getElementById(cardId)
            const weather = data.weather
            
            const cacheClass = {
                fresh: 'cache-fresh',
                stale: 'cache-stale',
                miss: 'cache-miss',
                forced: 'cache-stale'
            }[data.cacheStatus]
            
            const cacheText = {
                fresh: '⚡ Cached',
                stale: '🔄 Updating',
                miss: '🌐 Fresh',
                forced: '📦 Offline'
            }[data.cacheStatus]
            
            card.innerHTML = `
                <span class="cache-indicator ${cacheClass}">${cacheText}</span>
                <h3>${data.city}</h3>
                <p>🌡️ ${weather.current_condition[0].temp_C}°C / ${weather.current_condition[0].temp_F}°F</p>
                <p>🌤️ ${weather.current_condition[0].weatherDesc[0].value}</p>
                <p>💨 Wind: ${weather.current_condition[0].windspeedKmph} km/h</p>
                <p>💧 Humidity: ${weather.current_condition[0].humidity}%</p>
                <p>🕐 Updated: ${new Date().toLocaleTimeString()}</p>
                <button onclick="refreshCity('${data.city}', true)">Force Refresh</button>
            `
        }
        
        // Refresh city weather
        window.refreshCity = async function(city, force = false) {
            const cardId = `city-${city.replace(/\s/g, '-')}`
            const card = document.getElementById(cardId)
            
            if (force) {
                tf.cache.invalidate(`weather-${city}`)
            }
            
            card.style.opacity = '0.6'
            
            try {
                const data = await fetchWeatherCached(city)
                updateWeatherCard(cardId, data)
            } catch (error) {
                console.error(`Failed to fetch weather for ${city}:`, error)
            } finally {
                card.style.opacity = '1'
            }
        }
        
        // Predictive prefetching
        function setupPredictiveFetch() {
            const cities = ['London', 'Tokyo', 'New York', 'Paris', 'Sydney']
            let currentIndex = 0
            
            // Prefetch next city when hovering
            document.addEventListener('mouseover', (e) => {
                if (e.target.closest('.city-card')) {
                    const nextCity = cities[(currentIndex + 1) % cities.length]
                    // Silently prefetch
                    fetchWeatherCached(nextCity).catch(() => {})
                }
            })
        }
        
        // Update statistics display
        function updateStats() {
            const hitRate = stats.requests > 0 
                ? ((stats.cacheHits / stats.requests) * 100).toFixed(1)
                : 0
                
            const avgTime = stats.requests > 0
                ? Math.round(stats.totalTime / stats.requests)
                : 0
                
            document.getElementById('stats').innerHTML = `
                <strong>Performance Stats</strong><br>
                Requests: ${stats.requests}<br>
                Cache Hits: ${stats.cacheHits} (${hitRate}%)<br>
                Avg Time: ${avgTime}ms<br>
                Time Saved: ${(stats.savedTime / 1000).toFixed(1)}s<br>
                Cache Size: ${formatBytes(tf.cache.size())}<br>
                Algorithm: W-TinyLFU
            `
        }
        
        // Format bytes nicely
        function formatBytes(bytes) {
            if (bytes < 1024) return bytes + ' B'
            if (bytes < 1024 * 1024) return (bytes / 1024).toFixed(1) + ' KB'
            return (bytes / (1024 * 1024)).toFixed(1) + ' MB'
        }
        
        // Cache control functions
        window.clearCache = function() {
            tf.cache.clear()
            stats.cacheHits = 0
            stats.cacheMisses = 0
            updateStats()
            alert('Cache cleared!')
        }
        
        window.warmCache = async function() {
            const cities = Array.from(document.querySelectorAll('.city-card h3'))
                .map(h3 => h3.textContent)
            
            console.log('Warming cache for', cities)
            
            await Promise.all(
                cities.map(city => 
                    fetchWeatherCached(city).catch(() => {})
                )
            )
            
            alert('Cache warmed!')
        }
        
        window.showCacheContents = function() {
            const contents = tf.cache.keys()
            console.log('Cache contents:', contents)
            alert(`Cache contains ${contents.length} entries:\n${contents.join('\n')}`)
        }
        
        // Add city with caching
        window.addCity = async function(cityName) {
            const cityDiv = document.createElement('div')
            cityDiv.className = 'city-card'
            cityDiv.id = `city-${cityName.replace(/\s/g, '-')}`
            document.getElementById('cityGrid').appendChild(cityDiv)
            
            await refreshCity(cityName)
        }
        
        // Periodic refresh with cache
        function startAutoRefresh() {
            setInterval(() => {
                document.querySelectorAll('.city-card h3').forEach(h3 => {
                    refreshCity(h3.textContent)
                })
            }, 60000)  // Every minute
        }
        
        // Initialize
        window.addEventListener('load', () => {
            // Add default cities
            ['London', 'Tokyo', 'New York', 'Paris'].forEach(city => addCity(city))
            
            // Setup features
            setupPredictiveFetch()
            startAutoRefresh()
            updateStats()
            
            // Cache debugging
            tf.cache.on('hit', (key) => console.log('Cache hit:', key))
            tf.cache.on('miss', (key) => console.log('Cache miss:', key))
            tf.cache.on('evict', (key) => console.log('Cache evict:', key))
        })
    </script>
</head>
<body>
    <h1>Weather Buddy 6.0 - Lightning Fast ⚡</h1>
    
    <div class="cache-controls">
        <h3>Cache Controls</h3>
        <button onclick="warmCache()">🔥 Warm Cache</button>
        <button onclick="clearCache()">🗑️ Clear Cache</button>
        <button onclick="showCacheContents()">📋 Show Contents</button>
        <button onclick="tf.cache.analyze()">📊 Analyze Performance</button>
    </div>
    
    <div class="search-box">
        <input 
            type="text" 
            id="citySearch" 
            placeholder="Add a city..."
            onkeypress="if(event.key==='Enter') addCity(this.value)"
        />
        <button onclick="addCity(document.getElementById('citySearch').value)">Add City</button>
    </div>
    
    <div id="cityGrid" class="city-grid"></div>
    
    <div id="stats" class="performance-stats"></div>
</body>
</html>

Advanced Caching Patterns

1. Stale-While-Revalidate

Serve stale data instantly while fetching fresh data in background:

const { data, stale } = await tf.get('/api/dashboard', {
  cache: {
    maxAge: 60000,  // Fresh for 1 minute
    staleWhileRevalidate: 300000  // Serve stale up to 5 minutes while updating
  }
})

if (stale) {
  showNotification('Updating data...')
}

// User sees old data immediately (fast!)
// Fresh data loads in background
// UI updates when ready

2. Cache Layers

Implement multiple cache layers for resilience:

class LayeredCache {
  constructor() {
    this.memory = new Map()  // L1: Memory (fastest)
    this.session = window.sessionStorage  // L2: Session
    this.local = window.localStorage  // L3: Persistent
  }
  
  async get(key) {
    // Check L1
    if (this.memory.has(key)) {
      return this.memory.get(key)
    }
    
    // Check L2
    const sessionData = this.session.getItem(key)
    if (sessionData) {
      const parsed = JSON.parse(sessionData)
      this.memory.set(key, parsed)  // Promote to L1
      return parsed
    }
    
    // Check L3
    const localData = this.local.getItem(key)
    if (localData) {
      const parsed = JSON.parse(localData)
      this.memory.set(key, parsed)  // Promote to L1
      this.session.setItem(key, localData)  // Promote to L2
      return parsed
    }
    
    return null
  }
  
  set(key, value, options = {}) {
    const serialized = JSON.stringify(value)
    
    // Always set in L1
    this.memory.set(key, value)
    
    // Set in L2 if not private
    if (!options.private) {
      this.session.setItem(key, serialized)
    }
    
    // Set in L3 if persistent
    if (options.persist) {
      this.local.setItem(key, serialized)
    }
  }
}
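
Usage is straightforward - reads promote entries back up to the faster layers (a small sketch using the class above):

const cache = new LayeredCache()
cache.set('user:42', { name: 'Sarah' }, { persist: true })  // Survives reloads via localStorage
const user = await cache.get('user:42')  // Served from L1 memory on repeat reads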

3. Smart Cache Key Generation

Generate cache keys that consider all relevant factors:

function generateCacheKey(url, options = {}) {
  const factors = [
    url,
    options.userId,
    options.locale,
    options.version,
    options.deviceType
  ].filter(Boolean)
  
  // Create a stable, unique key
  return factors.join(':')
}

// Usage
const key = generateCacheKey('/api/content', {
  userId: getCurrentUser().id,
  locale: navigator.language,
  version: APP_VERSION,
  deviceType: isMobile() ? 'mobile' : 'desktop'
})

4. Cache Warming Strategies

// 1. Predictive warming based on user behavior
class PredictiveWarmer {
  constructor() {
    this.patterns = new Map()
  }
  
  track(from, to) {
    if (!this.patterns.has(from)) {
      this.patterns.set(from, new Map())
    }
    
    const destinations = this.patterns.get(from)
    destinations.set(to, (destinations.get(to) || 0) + 1)
  }
  
  predict(current) {
    const destinations = this.patterns.get(current)
    if (!destinations) return []
    
    // Sort by frequency
    return Array.from(destinations.entries())
      .sort((a, b) => b[1] - a[1])
      .slice(0, 3)  // Top 3
      .map(([url]) => url)
  }
}
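
Feeding it observed navigations and warming the top predictions might look like this (a hypothetical wiring, with fire-and-forget tf.get calls):

const warmer = new PredictiveWarmer()
warmer.track('/dashboard', '/api/stats')
warmer.track('/dashboard', '/api/stats')
warmer.track('/dashboard', '/api/recent-activity')

warmer.predict('/dashboard')  // ['/api/stats', '/api/recent-activity']
  .forEach(url => tf.get(url).catch(() => {}))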

// 2. Time-based warming
function scheduleWarming() {
  const schedule = [
    { hour: 8, endpoints: ['/api/dashboard', '/api/tasks'] },
    { hour: 12, endpoints: ['/api/lunch-menu', '/api/nearby'] },
    { hour: 17, endpoints: ['/api/traffic', '/api/weather'] }
  ]
  
  schedule.forEach(({ hour, endpoints }) => {
    scheduleAt(hour, () => {
      endpoints.forEach(endpoint => tf.get(endpoint))
    })
  })
}

// Helper: run a callback at the next occurrence of the given hour
function scheduleAt(hour, callback) {
  const now = new Date()
  const next = new Date(now)
  next.setHours(hour, 0, 0, 0)
  if (next <= now) next.setDate(next.getDate() + 1)  // Already passed today? Run tomorrow.
  setTimeout(callback, next - now)
}

// 3. Relationship-based warming
async function warmRelated(resource) {
  const relations = {
    '/api/user': ['/api/user/preferences', '/api/user/avatar'],
    '/api/post/*': ['/api/comments', '/api/reactions'],
    '/api/product/*': ['/api/reviews', '/api/related']
  }
  
  const related = findRelated(resource, relations)
  await Promise.all(related.map(url => tf.get(url)))
}

// Helper: match a concrete URL like '/api/post/42' against the patterns above
function findRelated(resource, relations) {
  for (const [pattern, related] of Object.entries(relations)) {
    const regex = new RegExp('^' + pattern.replace('*', '[^/]+') + '$')
    if (regex.test(resource)) return related
  }
  return []
}

Cache Analysis and Monitoring

TypedFetch provides deep insights into cache performance:

// Get cache analytics
const analytics = tf.cache.analyze()
console.log(analytics)
// {
//   hitRate: 0.85,
//   missRate: 0.15,
//   evictionRate: 0.05,
//   avgHitTime: 0.5,
//   avgMissTime: 150,
//   hotKeys: ['api/user', 'api/config'],
//   coldKeys: ['api/random-endpoint'],
//   sizeBytes: 1048576,
//   itemCount: 150,
//   algorithm: 'W-TinyLFU'
// }

// Monitor cache events
tf.cache.on('hit', ({ key, age, size }) => {
  console.log(`Cache hit: ${key} (age: ${age}ms, size: ${size}b)`)
})

tf.cache.on('miss', ({ key, reason }) => {
  console.log(`Cache miss: ${key} (${reason})`)
})

tf.cache.on('evict', ({ key, reason, age }) => {
  console.log(`Evicted: ${key} (${reason}, lived ${age}ms)`)
})

// Performance comparison
async function compareCacheAlgorithms() {
  const algorithms = ['LRU', 'LFU', 'W-TinyLFU']
  const results = {}
  
  for (const algo of algorithms) {
    tf.configure({ cache: { algorithm: algo } })
    tf.cache.clear()
    
    // Run workload
    const start = Date.now()
    await runWorkload()
    const duration = Date.now() - start
    
    results[algo] = {
      duration,
      ...tf.cache.analyze()
    }
  }
  
  console.table(results)
}
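
runWorkload isn't defined above - a minimal stand-in could replay a skewed access pattern (a few hot endpoints requested far more often than one-off URLs), which is exactly the traffic shape where W-TinyLFU pulls ahead:

// Hypothetical workload: ~80% of requests hit a few hot endpoints
async function runWorkload(requests = 500) {
  const hot = ['/api/user', '/api/config', '/api/posts/hot']
  for (let i = 0; i < requests; i++) {
    const url = Math.random() < 0.8
      ? hot[i % hot.length]   // Frequently repeated
      : `/api/items/${i}`     // One-off scan traffic
    await tf.get(url).catch(() => {})
  }
}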

Cache-First Architecture

Design your app to work great even offline:

// Service Worker for offline-first
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(cached => {
        if (cached) {
          // Return cache, update in background
          event.waitUntil(
            fetch(event.request)
              .then(response => {
                return caches.open('v1').then(cache => {
                  cache.put(event.request, response.clone())
                  return response
                })
              })
          )
          return cached
        }
        
        // Not in cache, fetch and cache
        return fetch(event.request)
          .then(response => {
            return caches.open('v1').then(cache => {
              cache.put(event.request, response.clone())
              return response
            })
          })
      })
  )
})
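
For the worker to take effect, the page has to register it (assuming the file above is served as /sw.js at the site root):

// Register the service worker so the fetch handler above runs
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').catch(console.error)
}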

// App-level cache-first strategy
class CacheFirstAPI {
  async get(url, options = {}) {
    // Always try cache first
    try {
      const cached = await tf.get(url, { 
        cache: 'force',
        timeout: 50  // Fast timeout for cache
      })
      
      if (cached.data) {
        // Got cached data, refresh in background
        tf.get(url, { cache: false }).catch(() => {})
        return cached
      }
    } catch {}
    
    // Cache miss or error, fetch fresh
    return tf.get(url, options)
  }
}

Best Practices for Caching 🎯

1. Cache the Right Things

// ✅ Good candidates for caching
'/api/countries'        // Static data
'/api/user/profile'     // Changes infrequently  
'/api/products'         // Can be stale briefly

// ❌ Bad candidates for caching
'/api/stock-prices'     // Real-time data
'/api/notifications'    // Must be fresh
'/api/auth/token'       // Security sensitive
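
For the bad candidates, opt out per request with the cache: false option shown earlier:

// Real-time data: always bypass the cache
const { data: prices } = await tf.get('/api/stock-prices', { cache: false })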

2. Set Appropriate TTLs

const cacheTTLs = {
  static: 7 * 24 * 60 * 60 * 1000,    // 1 week
  userProfile: 5 * 60 * 1000,          // 5 minutes
  productList: 60 * 1000,              // 1 minute
  searchResults: 30 * 1000,            // 30 seconds
  realtime: 0                          // No cache
}
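
A table like this keeps TTLs in one place; each request then pulls the right value (a small sketch):

// Apply the TTL table to individual requests
const products = await tf.get('/api/products', {
  cache: { maxAge: cacheTTLs.productList }
})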

3. Invalidate Intelligently

// After mutations, invalidate related data
async function updateUserProfile(data) {
  const result = await tf.patch('/api/user/profile', { data })
  
  // Invalidate related caches
  tf.cache.invalidate('/api/user/profile')
  tf.cache.invalidate('/api/user/avatar')
  tf.cache.invalidatePattern('/api/user/posts/*')
  
  return result
}

4. Monitor and Optimize

// Track cache performance
setInterval(() => {
  const stats = tf.cache.analyze()
  
  if (stats.hitRate < 0.7) {
    console.warn('Low cache hit rate:', stats.hitRate)
    // Adjust cache strategy
  }
  
  if (stats.evictionRate > 0.2) {
    console.warn('High eviction rate:', stats.evictionRate)
    // Increase cache size
  }
}, 60000)

Practice Time! 🏋️

Exercise 1: Custom Cache Implementation

Build a simple cache with TTL:

class SimpleCache {
  constructor(maxSize = 100) {
    // Your code here:
    // - Store items with timestamps
    // - Implement get/set
    // - Handle expiration
    // - Implement size limits
  }
}
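
If you get stuck, here's one possible shape - a minimal reference sketch (many designs work; try your own first):

class SimpleCacheSolution {
  constructor(maxSize = 100, defaultTTL = 60000) {
    this.maxSize = maxSize
    this.defaultTTL = defaultTTL
    this.store = new Map()  // Map preserves insertion order: oldest entries first
  }

  set(key, value, ttl = this.defaultTTL) {
    // Evict the oldest entry when full (simple FIFO eviction)
    if (this.store.size >= this.maxSize && !this.store.has(key)) {
      const oldest = this.store.keys().next().value
      this.store.delete(oldest)
    }
    this.store.set(key, { value, expires: Date.now() + ttl })
  }

  get(key) {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expires) {
      this.store.delete(key)  // Lazy expiration on read
      return undefined
    }
    return entry.value
  }
}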

Exercise 2: Cache Warming Strategy

Design a predictive cache warmer:

class PredictiveCache {
  // Your code here:
  // - Track user navigation
  // - Predict next requests
  // - Warm cache proactively
  // - Measure effectiveness
}

Exercise 3: Offline-First App

Build an app that works offline:

class OfflineApp {
  // Your code here:
  // - Cache all critical data
  // - Queue mutations when offline
  // - Sync when online
  // - Handle conflicts
}

Key Takeaways 🎯

  1. Caching is the biggest performance win - 100x faster than network
  2. W-TinyLFU beats LRU - up to 25% better hit rates
  3. TypedFetch caches automatically - Zero config needed
  4. Different data needs different strategies - Static vs dynamic
  5. Stale data is often fine - Stale-while-revalidate pattern
  6. Cache warming prevents cold starts - Predictive and scheduled
  7. Invalidation needs planning - Tag-based and pattern matching
  8. Monitor cache performance - Hit rates and eviction rates

Common Pitfalls 🚨

  1. Caching sensitive data - User-specific data needs careful handling
  2. Not invalidating after mutations - Stale data confusion
  3. Too short TTLs - Missing cache benefits
  4. Too long TTLs - Serving outdated data
  5. Not warming cache - Cold start performance
  6. Ignoring cache size - Memory issues

What's Next?

You've mastered caching and made your app lightning fast! But what about type safety? In Chapter 7, we'll explore TypedFetch's incredible type inference system:

  • Runtime type inference from actual responses
  • TypeScript integration for compile-time safety
  • Auto-generating types from OpenAPI schemas
  • Type validation and error prevention
  • Making impossible states impossible

Ready to make your API calls type-safe? See you in Chapter 7! 🎯


Chapter Summary

  • Caching is the single biggest performance optimization you can make
  • TypedFetch uses the W-TinyLFU algorithm for up to 25% better hit rates than LRU
  • Different data types need different cache strategies (static vs dynamic)
  • Stale-while-revalidate serves old data fast while updating in background
  • Cache warming prevents cold starts by preloading likely requests
  • Invalidation should be planned with tags and patterns
  • Monitor cache performance with hit rates and eviction metrics
  • Weather Buddy 6.0 shows cache status and saves seconds of loading time

Next Chapter Preview: Type Safety Paradise - How TypedFetch infers types at runtime and compile time to prevent errors before they happen.