# Chapter 6: The Cache Revolution

*"The fastest API call is the one you don't make."*

---

## The Performance Awakening

Sarah's Weather Buddy app was rock solid. It handled errors gracefully, recovered from failures, and never crashed. But during the Monday morning rush, when everyone checked the weather before commuting, the app felt... sluggish.

"Why is it so slow?" Jake complained. "I'm checking the same cities every day!"

Marcus pulled up the network tab. "Look at this - you're making the same API calls over and over. Each weather check is a 200ms round trip."

"But I need fresh data," Sarah protested.

"Do you, though?" Marcus smiled. "Does the temperature really change every second? Time to learn about caching - the single biggest performance win you'll ever implement."

## Understanding Caching: Your Secret Weapon

Caching is like having a really good memory. Instead of asking the same question repeatedly, you remember the answer for a while.

```javascript
// Without caching - every call hits the network
button.addEventListener('click', async () => {
  const weather = await tf.get('/api/weather/london', { cache: false }) // 200ms every time
  updateDisplay(weather)
})

// With caching - only the first call hits the network
button.addEventListener('click', async () => {
  const weather = await tf.get('/api/weather/london') // 200ms first time, <1ms after
  updateDisplay(weather)
})
```

TypedFetch includes a revolutionary cache that's not just fast - it's smart.

## The W-TinyLFU Algorithm: 25% Better Than LRU

Most caches use LRU (Least Recently Used) - they keep recent items and discard old ones. But TypedFetch uses W-TinyLFU, which is like having a cache with a photographic memory:

```javascript
// Traditional LRU (capacity 3 in this example) - recency wins
cache.get('A') // A becomes most recent
cache.get('B') // B becomes most recent
cache.get('C') // C becomes most recent
cache.get('D') // D becomes most recent, A gets evicted

// W-TinyLFU - frequency AND recency matter
cache.get('A') // A: frequency=1, recent
cache.get('A') // A: frequency=2, recent
cache.get('B') // B: frequency=1, recent
cache.get('C') // C: frequency=1, recent
cache.get('D') // D: frequency=1, but A stays (higher frequency)
```

### Why W-TinyLFU Rocks

1. **Better Hit Rates**: 15-25% more cache hits than LRU
2. **Scan Resistance**: One-time requests don't pollute the cache
3. **Frequency Awareness**: Keeps frequently accessed items
4. **Memory Efficient**: Uses sketch data structures to track frequency in a small, fixed amount of memory (see the sketch below)
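What do "sketch data structures" buy you? Here's a toy count-min-style frequency sketch in plain JavaScript - an illustration of the idea, not TypedFetch's actual internals. It estimates how often each key has been seen using a few fixed-size arrays of counters, which is what lets a W-TinyLFU-style cache make frequency-aware eviction decisions cheaply:

```javascript
// Toy count-min-style frequency sketch (illustrative only)
class FrequencySketch {
  constructor(width = 1024, depth = 4) {
    this.width = width
    this.depth = depth
    // depth rows of width counters, all starting at zero
    this.rows = Array.from({ length: depth }, () => new Uint8Array(width))
  }

  // Simple string hash, salted per row
  hash(key, seed) {
    let h = seed
    for (let i = 0; i < key.length; i++) {
      h = Math.imul(h ^ key.charCodeAt(i), 2654435761)
    }
    return (h >>> 0) % this.width
  }

  increment(key) {
    for (let row = 0; row < this.depth; row++) {
      const idx = this.hash(key, row + 1)
      if (this.rows[row][idx] < 255) this.rows[row][idx]++ // saturate, don't wrap
    }
  }

  // Estimated frequency = minimum across rows (collisions only inflate counts)
  estimate(key) {
    let min = Infinity
    for (let row = 0; row < this.depth; row++) {
      min = Math.min(min, this.rows[row][this.hash(key, row + 1)])
    }
    return min
  }
}

// On eviction, W-TinyLFU-style admission compares the newcomer's
// estimated frequency against the would-be victim's: popularity wins.
const sketch = new FrequencySketch()
sketch.increment('/api/users/1')
sketch.increment('/api/users/1')
sketch.increment('/api/users/99999')

console.log(sketch.estimate('/api/users/1') > sketch.estimate('/api/users/99999')) // true
```

The counters never grow with the number of keys, so the cache can track popularity across millions of requests for kilobytes of memory.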
Let's see it in action:

```javascript
// TypedFetch automatically uses W-TinyLFU
const popularUser = await tf.get('/api/users/1')       // Accessed often
const trendingPost = await tf.get('/api/posts/hot')    // Accessed very often
const randomUser = await tf.get('/api/users/99999')    // Accessed once

// Later, when cache is full:
// - popularUser: still cached (high frequency)
// - trendingPost: still cached (very high frequency)
// - randomUser: evicted (low frequency)
```

## Cache Configuration: Fine-Tuning Performance

TypedFetch gives you complete control over caching:

```javascript
// Global cache settings
tf.configure({
  cache: {
    maxSize: 100 * 1024 * 1024,  // 100MB cache
    maxAge: 5 * 60 * 1000,       // 5 minutes default TTL
    staleWhileRevalidate: true,  // Serve stale while fetching fresh
    algorithm: 'W-TinyLFU'       // or 'LRU' if you prefer
  }
})

// Per-request cache control
const { data } = await tf.get('/api/weather', {
  cache: {
    maxAge: 60000,              // Cache for 1 minute
    staleWhileRevalidate: true, // Return stale data while refreshing
    key: 'weather-london'       // Custom cache key
  }
})

// Skip cache
const { data: fresh } = await tf.get('/api/weather', {
  cache: false // Always fetch fresh
})

// Force cache
const { data: cached } = await tf.get('/api/weather', {
  cache: 'force' // Use cache even if expired
})
```

## Cache Strategies for Different Data Types

Not all data should be cached the same way:

```javascript
// Static data - cache aggressively
const countries = await tf.get('/api/countries', {
  cache: {
    maxAge: 7 * 24 * 60 * 60 * 1000, // 1 week
    immutable: true                  // Never changes
  }
})

// User data - cache briefly
const profile = await tf.get('/api/users/me', {
  cache: {
    maxAge: 60000, // 1 minute
    private: true  // Don't share between users
  }
})

// Real-time data - cache very briefly
const stockPrice = await tf.get('/api/stocks/AAPL', {
  cache: {
    maxAge: 5000,               // 5 seconds
    staleWhileRevalidate: false // Always need fresh
  }
})

// Personalized data - cache with user context
const recommendations = await tf.get('/api/recommendations', {
  cache: {
    key: `recs-user-${userId}`, // User-specific key
    maxAge: 300000              // 5 minutes
  }
})
```

## Cache Warming: Preload for Speed

Don't wait for users to request data - preload it:

```javascript
// Warm cache on app start
async function warmCache() {
  const criticalEndpoints = [
    '/api/config',
    '/api/user/preferences',
    '/api/features'
  ]

  // Parallel cache warming
  await Promise.all(
    criticalEndpoints.map(endpoint =>
      tf.get(endpoint, {
        cache: { warm: true } // Low priority
      })
    )
  )
}

// Predictive cache warming
function predictiveWarm(currentPage) {
  const predictions = {
    '/dashboard': ['/api/stats', '/api/recent-activity'],
    '/profile': ['/api/user/posts', '/api/user/followers'],
    '/weather': ['/api/weather/current-location']
  }

  const toWarm = predictions[currentPage] || []
  toWarm.forEach(endpoint => {
    // Warm in background
    setTimeout(() => tf.get(endpoint), 100)
  })
}

// Time-based warming
function scheduleWarmup() {
  // Warm cache before work hours
  const now = new Date()
  const nineAM = new Date()
  nineAM.setHours(9, 0, 0, 0)

  if (now < nineAM) {
    const delay = nineAM - now
    setTimeout(warmCache, delay)
  }
}
```
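How do you know warming actually paid off? A quick sanity check is to time a cold request against the same request once it's cached. This is a small sketch - the endpoint and the timings in the comment are illustrative:

```javascript
// Time a cold request, then the same request served from cache
async function measureWarmth(endpoint) {
  const t0 = performance.now()
  await tf.get(endpoint) // cold: hits the network
  const cold = performance.now() - t0

  const t1 = performance.now()
  await tf.get(endpoint) // warm: served from cache
  const warm = performance.now() - t1

  console.log(`${endpoint}: cold ${cold.toFixed(0)}ms, warm ${warm.toFixed(2)}ms`)
}

measureWarmth('/api/config')
// e.g. "/api/config: cold 180ms, warm 0.40ms"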
## Cache Invalidation: The Hard Problem

"There are only two hard things in Computer Science: cache invalidation and naming things." - Phil Karlton

TypedFetch makes invalidation easy:

```javascript
// Invalidate specific endpoint
tf.cache.invalidate('/api/users/123')

// Invalidate with pattern
tf.cache.invalidatePattern('/api/users/*')

// Invalidate on mutation
const { data } = await tf.post('/api/posts', {
  data: newPost,
  invalidates: ['/api/posts', '/api/posts/recent']
})

// Smart invalidation based on relationships
tf.addResponseInterceptor(response => {
  if (response.config.method === 'POST' &&
      response.config.url.includes('/comments')) {
    // A new comment invalidates the post it belongs to
    const postId = response.data.postId
    tf.cache.invalidate(`/api/posts/${postId}`)
  }
  return response
})

// Tag-based invalidation
const posts = await tf.get('/api/posts', {
  cache: { tags: ['posts', 'content'] }
})

// Later, invalidate everything with a given tag
tf.cache.invalidateTag('content')
```

## Weather Buddy 6.0: Lightning Fast

Let's add intelligent caching to Weather Buddy.
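Here's a minimal sketch of the page: a cache-controls panel plus per-request timing so Sarah can see cache hits at a glance. The import path, the `/api/weather/` endpoints, and the `data.temp` response shape are assumptions for illustration; `tf.get`'s cache options and `tf.cache.clear()` match the APIs used elsewhere in this chapter:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Weather Buddy 6.0 - Lightning Fast</title>
</head>
<body>
  <h1>Weather Buddy 6.0 - Lightning Fast ⚡</h1>

  <!-- Cache Controls panel -->
  <div id="cache-controls">
    <h2>Cache Controls</h2>
    <button id="clear-cache">Clear Cache</button>
  </div>

  <div id="cities"></div>

  <script type="module">
    // Assumes TypedFetch is importable; adjust to however tf is loaded in your setup
    import { tf } from 'typedfetch'

    const cities = ['london', 'tokyo', 'sydney']

    async function showWeather(city) {
      const start = performance.now()
      const { data } = await tf.get(`/api/weather/${city}`, {
        cache: { maxAge: 60000, staleWhileRevalidate: true }
      })
      const elapsed = performance.now() - start
      // Sub-millisecond responses almost certainly came from cache
      const source = elapsed < 5 ? '⚡ cache' : '🌐 network'
      document.getElementById('cities').insertAdjacentHTML(
        'beforeend',
        `<div>${city}: ${data.temp}° (${elapsed.toFixed(1)}ms, ${source})</div>`
      )
    }

    document.getElementById('clear-cache').addEventListener('click', () => {
      tf.cache.clear()
    })

    cities.forEach(city => showWeather(city))
  </script>
</body>
</html>
```

Marcus's point shows up immediately in the timings: the first check of each city costs a couple hundred milliseconds, and every repeat check is effectively free.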

## Advanced Caching Patterns

### 1. Stale-While-Revalidate

Serve stale data instantly while fetching fresh data in the background:

```javascript
const { data, stale } = await tf.get('/api/dashboard', {
  cache: {
    maxAge: 60000,               // Fresh for 1 minute
    staleWhileRevalidate: 300000 // Serve stale up to 5 minutes while updating
  }
})

if (stale) {
  showNotification('Updating data...')
}

// User sees old data immediately (fast!)
// Fresh data loads in background
// UI updates when ready
```

### 2. Cache Layers

Implement multiple cache layers for resilience:

```javascript
class LayeredCache {
  constructor() {
    this.memory = new Map()              // L1: Memory (fastest)
    this.session = window.sessionStorage // L2: Session
    this.local = window.localStorage     // L3: Persistent
  }

  async get(key) {
    // Check L1
    if (this.memory.has(key)) {
      return this.memory.get(key)
    }

    // Check L2
    const sessionData = this.session.getItem(key)
    if (sessionData) {
      const parsed = JSON.parse(sessionData)
      this.memory.set(key, parsed) // Promote to L1
      return parsed
    }

    // Check L3
    const localData = this.local.getItem(key)
    if (localData) {
      const parsed = JSON.parse(localData)
      this.memory.set(key, parsed)         // Promote to L1
      this.session.setItem(key, localData) // Promote to L2
      return parsed
    }

    return null
  }

  set(key, value, options = {}) {
    const serialized = JSON.stringify(value)

    // Always set in L1
    this.memory.set(key, value)

    // Set in L2 if not private
    if (!options.private) {
      this.session.setItem(key, serialized)
    }

    // Set in L3 if persistent
    if (options.persist) {
      this.local.setItem(key, serialized)
    }
  }
}
```
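Wiring the layered cache into application code might look like this - a usage sketch, with keys and data made up for illustration:

```javascript
const layered = new LayeredCache()

// User preferences: keep across tabs and page reloads
layered.set('/api/user/preferences', { theme: 'dark' }, { persist: true })

// Private data: memory only - never written to storage
layered.set('/api/user/session', { id: 42 }, { private: true })

// Reads check memory first, then sessionStorage, then localStorage
layered.get('/api/user/preferences').then(prefs => {
  console.log(prefs) // { theme: 'dark' } - from memory here, from storage in a fresh tab
})
```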
### 3. Smart Cache Key Generation

Generate cache keys that consider all relevant factors:

```javascript
function generateCacheKey(url, options = {}) {
  const factors = [
    url,
    options.userId,
    options.locale,
    options.version,
    options.deviceType
  ].filter(Boolean)

  // Create a stable, unique key
  return factors.join(':')
}

// Usage
const key = generateCacheKey('/api/content', {
  userId: getCurrentUser().id,
  locale: navigator.language,
  version: APP_VERSION,
  deviceType: isMobile() ? 'mobile' : 'desktop'
})
```

### 4. Cache Warming Strategies

```javascript
// 1. Predictive warming based on user behavior
class PredictiveWarmer {
  constructor() {
    this.patterns = new Map()
  }

  track(from, to) {
    if (!this.patterns.has(from)) {
      this.patterns.set(from, new Map())
    }
    const destinations = this.patterns.get(from)
    destinations.set(to, (destinations.get(to) || 0) + 1)
  }

  predict(current) {
    const destinations = this.patterns.get(current)
    if (!destinations) return []

    // Sort by frequency
    return Array.from(destinations.entries())
      .sort((a, b) => b[1] - a[1])
      .slice(0, 3) // Top 3
      .map(([url]) => url)
  }
}

// 2. Time-based warming
function scheduleWarming() {
  const schedule = [
    { hour: 8, endpoints: ['/api/dashboard', '/api/tasks'] },
    { hour: 12, endpoints: ['/api/lunch-menu', '/api/nearby'] },
    { hour: 17, endpoints: ['/api/traffic', '/api/weather'] }
  ]

  schedule.forEach(({ hour, endpoints }) => {
    // scheduleAt: assumed helper that runs a callback at the given hour
    scheduleAt(hour, () => {
      endpoints.forEach(endpoint => tf.get(endpoint))
    })
  })
}

// 3. Relationship-based warming
async function warmRelated(resource) {
  const relations = {
    '/api/user': ['/api/user/preferences', '/api/user/avatar'],
    '/api/post/*': ['/api/comments', '/api/reactions'],
    '/api/product/*': ['/api/reviews', '/api/related']
  }

  // Match the resource against the (possibly wildcarded) patterns above
  function findRelated(url, map) {
    for (const [pattern, urls] of Object.entries(map)) {
      const regex = new RegExp('^' + pattern.replace('*', '[^/]+') + '$')
      if (regex.test(url)) return urls
    }
    return []
  }

  const related = findRelated(resource, relations)
  await Promise.all(related.map(url => tf.get(url)))
}
```

## Cache Analysis and Monitoring

TypedFetch provides deep insights into cache performance:

```javascript
// Get cache analytics
const analytics = tf.cache.analyze()

console.log(analytics)
// {
//   hitRate: 0.85,
//   missRate: 0.15,
//   evictionRate: 0.05,
//   avgHitTime: 0.5,
//   avgMissTime: 150,
//   hotKeys: ['api/user', 'api/config'],
//   coldKeys: ['api/random-endpoint'],
//   sizeBytes: 1048576,
//   itemCount: 150,
//   algorithm: 'W-TinyLFU'
// }

// Monitor cache events
tf.cache.on('hit', ({ key, age, size }) => {
  console.log(`Cache hit: ${key} (age: ${age}ms, size: ${size}b)`)
})

tf.cache.on('miss', ({ key, reason }) => {
  console.log(`Cache miss: ${key} (${reason})`)
})

tf.cache.on('evict', ({ key, reason, age }) => {
  console.log(`Evicted: ${key} (${reason}, lived ${age}ms)`)
})

// Performance comparison
async function compareCacheAlgorithms() {
  const algorithms = ['LRU', 'LFU', 'W-TinyLFU']
  const results = {}

  for (const algo of algorithms) {
    tf.configure({ cache: { algorithm: algo } })
    tf.cache.clear()

    // runWorkload: your app-specific mix of tf.get calls
    const start = Date.now()
    await runWorkload()
    const duration = Date.now() - start

    results[algo] = {
      duration,
      ...tf.cache.analyze()
    }
  }

  console.table(results)
}
```

## Cache-First Architecture

Design your app to work great even offline:

```javascript
// Service Worker for offline-first
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(cached => {
        if (cached) {
          // Return cache, update in background
          event.waitUntil(
            fetch(event.request)
              .then(response => {
                return caches.open('v1').then(cache => {
                  cache.put(event.request, response.clone())
                  return response
                })
              })
          )
          return cached
        }

        // Not in cache, fetch and cache
        return fetch(event.request)
          .then(response => {
            return caches.open('v1').then(cache => {
              cache.put(event.request, response.clone())
              return response
            })
          })
      })
  )
})

// App-level cache-first strategy
class CacheFirstAPI {
  async get(url, options = {}) {
    // Always try cache first
    try {
      const cached = await tf.get(url, {
        cache: 'force',
        timeout: 50 // Fast timeout for cache
      })

      if (cached.data) {
        // Got cached data, refresh in background
        tf.get(url, { cache: false }).catch(() => {})
        return cached
      }
    } catch {}

    // Cache miss or error, fetch fresh
    return tf.get(url, options)
  }
}
```
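Callers use the wrapper like any other client - cached data comes back almost instantly, and the background refresh keeps it from going permanently stale. A brief usage sketch (`renderDashboard` stands in for your own UI code):

```javascript
const api = new CacheFirstAPI()

async function loadDashboard() {
  // First call: cache miss, falls through to the network
  // Later calls: near-instant cache hit, refreshed in the background
  const { data } = await api.get('/api/dashboard')
  renderDashboard(data) // renderDashboard: hypothetical UI helper
}
```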
## Best Practices for Caching 🎯

### 1. Cache the Right Things

```javascript
// ✅ Good candidates for caching
'/api/countries'     // Static data
'/api/user/profile'  // Changes infrequently
'/api/products'      // Can be stale briefly

// ❌ Bad candidates for caching
'/api/stock-prices'   // Real-time data
'/api/notifications'  // Must be fresh
'/api/auth/token'     // Security sensitive
```

### 2. Set Appropriate TTLs

```javascript
const cacheTTLs = {
  static: 7 * 24 * 60 * 60 * 1000, // 1 week
  userProfile: 5 * 60 * 1000,      // 5 minutes
  productList: 60 * 1000,          // 1 minute
  searchResults: 30 * 1000,        // 30 seconds
  realtime: 0                      // No cache
}
```

### 3. Invalidate Intelligently

```javascript
// After mutations, invalidate related data
async function updateUserProfile(data) {
  const result = await tf.patch('/api/user/profile', { data })

  // Invalidate related caches
  tf.cache.invalidate('/api/user/profile')
  tf.cache.invalidate('/api/user/avatar')
  tf.cache.invalidatePattern('/api/user/posts/*')

  return result
}
```

### 4. Monitor and Optimize

```javascript
// Track cache performance
setInterval(() => {
  const stats = tf.cache.analyze()

  if (stats.hitRate < 0.7) {
    console.warn('Low cache hit rate:', stats.hitRate)
    // Adjust cache strategy
  }

  if (stats.evictionRate > 0.2) {
    console.warn('High eviction rate:', stats.evictionRate)
    // Increase cache size
  }
}, 60000)
```

## Practice Time! 🏋️

### Exercise 1: Custom Cache Implementation

Build a simple cache with TTL:

```javascript
class SimpleCache {
  constructor(maxSize = 100) {
    // Your code here:
    // - Store items with timestamps
    // - Implement get/set
    // - Handle expiration
    // - Implement size limits
  }
}
```

### Exercise 2: Cache Warming Strategy

Design a predictive cache warmer:

```javascript
class PredictiveCache {
  // Your code here:
  // - Track user navigation
  // - Predict next requests
  // - Warm cache proactively
  // - Measure effectiveness
}
```

### Exercise 3: Offline-First App

Build an app that works offline:

```javascript
class OfflineApp {
  // Your code here:
  // - Cache all critical data
  // - Queue mutations when offline
  // - Sync when online
  // - Handle conflicts
}
```

## Key Takeaways 🎯

1. **Caching is the biggest performance win** - a cache hit can be 100x faster than a network round trip
2. **W-TinyLFU beats LRU** - 15-25% better hit rates
3. **TypedFetch caches automatically** - zero config needed
4. **Different data needs different strategies** - static vs. dynamic
5. **Stale data is often fine** - use the stale-while-revalidate pattern
6. **Cache warming prevents cold starts** - predictive and scheduled
7. **Invalidation needs planning** - use tags and pattern matching
8. **Monitor cache performance** - watch hit rates and eviction rates

## Common Pitfalls 🚨

1. **Caching sensitive data** - user-specific data needs careful handling
2. **Not invalidating after mutations** - stale data confuses users
3. **Too-short TTLs** - you miss the cache benefits
4. **Too-long TTLs** - you serve outdated data
5. **Not warming the cache** - cold-start performance suffers
6. **Ignoring cache size** - unbounded caches cause memory issues

## What's Next?

You've mastered caching and made your app lightning fast! But what about type safety? In Chapter 7, we'll explore TypedFetch's incredible type inference system:

- Runtime type inference from actual responses
- TypeScript integration for compile-time safety
- Auto-generating types from OpenAPI schemas
- Type validation and error prevention
- Making impossible states impossible

Ready to make your API calls type-safe? See you in Chapter 7! 🎯

---

## Chapter Summary

- Caching is the single biggest performance optimization you can make
- TypedFetch uses the W-TinyLFU algorithm for up to 25% better hit rates than LRU
- Different data types need different cache strategies (static vs. dynamic)
- Stale-while-revalidate serves old data fast while updating in the background
- Cache warming prevents cold starts by preloading likely requests
- Invalidation should be planned with tags and patterns
- Monitor cache performance with hit rates and eviction metrics
- Weather Buddy 6.0 shows cache status and saves seconds of loading time

**Next Chapter Preview**: Type Safety Paradise - How TypedFetch infers types at runtime and compile time to prevent errors before they happen.