TypedFetch

Type-safe HTTP client that doesn't suck - Fetch for humans who have stuff to build

Zero dependencies. Full type safety. Just works.

📦 npm: @catalystlabs/typedfetch
🌐 Website: typedfetch.dev
📚 Docs: typedfetch.dev/docs
💻 Source: git.catalystlab.cc/caseycollier/TypeFetched

🚀 Quick Start

# Using bun
bun add @catalystlabs/typedfetch

# Using npm
npm install @catalystlabs/typedfetch

import { tf } from '@catalystlabs/typedfetch'

// Zero config - just works!
const { data, response } = await tf.get('https://api.github.com/users/github')
console.log(data.name) // data is the parsed, typed response body

// With configuration
import { createTypedFetch } from '@catalystlabs/typedfetch'

const client = createTypedFetch({
  request: {
    baseURL: 'https://api.example.com',
    headers: {
      'Authorization': 'Bearer token'
    }
  }
})

const { data, response } = await client.get('/users/123')

// Apply presets on the fly
import { presets } from '@catalystlabs/typedfetch'

tf
  .use(presets.browser(), presets.auth.bearer('token-123'))
  .get('/profile')

// Bring your generated endpoint map for zero-effort typing
import type { TypedFetchGeneratedEndpoints } from './typedfetch.generated'

const typed = createTypedFetch<TypedFetchGeneratedEndpoints>({
  request: { baseURL: 'https://api.example.com' }
})

const profile = await typed.get('/me')
// profile.data is strongly typed based on your schema/runtime samples

Features

🔒 Type Safety

  • TypeScript inference for response data
  • No manual type casting needed
  • Type-safe error handling
  • Generate .d.ts snapshots from runtime data or OpenAPI discovery via tf.exportTypes() or the typedfetch sync CLI

🛡️ Built-in Resilience

  • Automatic retries with exponential backoff
  • Circuit breaker for failing endpoints
  • Request caching (memory + IndexedDB)
  • Respects HTTP cache headers

🚀 Simple API

  • Clean, chainable API
  • Standard HTTP methods: get(), post(), put(), patch(), delete()
  • Consistent response format
  • Zero boilerplate
  • Declarative resource() builder for human-friendly endpoint modules
  • Inline request mocking utilities for tests, demos, and offline work

Performance

  • <15KB gzipped bundle
  • Zero runtime dependencies
  • Efficient caching
  • Request deduplication

📚 Documentation

Basic Usage

import { tf } from '@catalystlabs/typedfetch'

// GET request
const { data: users, response } = await tf.get('https://api.example.com/users')

// POST request (fetch-style RequestInit object)
const { data: created, response: createResponse } = await tf.post('https://api.example.com/users', {
  body: {
    name: 'John Doe',
    email: 'john@example.com'
  },
  headers: {
    'X-Demo': 'docs'
  }
})

// PUT request (pass the body directly and optional init as third arg)
const { data: updatedUser } = await tf.put('https://api.example.com/users/123',
  { name: 'Jane Doe' },
  { headers: { 'X-Docs': '1' } }
)

// PATCH request (body only)
const { data: patchedUser } = await tf.patch('https://api.example.com/users/123', {
  title: 'Director of Engineering'
})

// DELETE request
const { data: deleted, response: deleteResponse } = await tf.delete('https://api.example.com/users/123')

post(), put(), and patch() accept either a fetch-style RequestInit object (with body, headers, etc.) as the second argument, or the raw body as the second argument followed by an optional RequestInit for headers, signals, and other options.
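
For instance, these two POST calls send the same request:

// RequestInit-style: body and headers in one options object
await tf.post('https://api.example.com/users', {
  body: { name: 'John Doe' },
  headers: { 'X-Demo': 'docs' }
})

// Raw body first, optional RequestInit as the third argument
await tf.post('https://api.example.com/users',
  { name: 'John Doe' },
  { headers: { 'X-Demo': 'docs' } }
)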

Configuration

import { createTypedFetch } from '@catalystlabs/typedfetch'

const client = createTypedFetch({
  request: {
    baseURL: 'https://api.example.com',
    headers: {
      'Authorization': 'Bearer token',
      'Content-Type': 'application/json'
    },
    timeout: 30000
  },

  // Retry configuration
  retry: {
    maxAttempts: 3,
    delays: [100, 250, 500, 1000],
    retryableStatuses: [408, 429, 500, 502, 503, 504]
  },

  // Cache configuration
  cache: {
    enabled: true,
    ttl: 300000, // 5 minutes
    maxSize: 500
  }
})

// Or configure the global instance
import { tf } from '@catalystlabs/typedfetch'

tf.configure({
  request: {
    baseURL: 'https://api.example.com',
    headers: {
      'Authorization': 'Bearer token'
    }
  }
})

Opinionated presets

import { tf, presets } from '@catalystlabs/typedfetch'

tf.use(
  presets.browser(),
  presets.resilient(),
  presets.auth.bearer('my-token')
)

// All subsequent calls inherit the composed behavior
await tf.get('/me')

Presets are just functions that emit config so you can compose them freely or build your own: const edge = () => ({ cache: { ttl: 1000 } }).
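
For example, a project-specific preset can sit next to the built-ins (the edgeCache name and values below are illustrative, not part of the library):

import { tf, presets } from '@catalystlabs/typedfetch'

// A custom preset is just a function that returns a config fragment
const edgeCache = () => ({ cache: { enabled: true, ttl: 1000 } })

// Compose it with the shipped presets
tf.use(presets.browser(), edgeCache())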

Declarative resources

import { tf } from '@catalystlabs/typedfetch'

const users = tf.resource('/users/:id', {
  show: {
    method: 'GET'
  },
  update: {
    method: 'PATCH',
    json: true
  }
})

const { data } = await users.show({ params: { id: '42' } })
await users.update({ params: { id: '42' }, body: { name: 'Nova' } })

Resources automatically expand :params, merge query objects, and keep returning the standard { data, response } tuple.
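
For instance, assuming the call options accept a query object (as the merging behavior above implies), a filtered lookup looks like this:

// ':id' expands to /users/42 and the query object becomes ?role=admin
const { data: admin } = await users.show({
  params: { id: '42' },
  query: { role: 'admin' }
})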

Built-in mocking for tests & demos

TypedFetch now includes a zero-dependency mock adapter so you can short-circuit requests without swapping clients:

import { tf } from '@catalystlabs/typedfetch'

// Match colon params or even RegExp/function matchers
tf.mock({
  method: 'GET',
  url: '/users/:id',
  handler: ({ params, query }) => ({
    data: { id: params.id, name: 'Mocked User', filter: query.filter }
  })
})

// Only intercept the next matching call
tf.mockOnce({
  method: 'POST',
  url: '/users',
  response: { status: 201, data: { id: 'temp-id' } }
})

// Toggle or reset mocks globally
tf.disableMocking()
tf.enableMocking()
tf.clearMocks()

Handlers receive params, query, headers, and even parsed request bodies, and the data they return still feeds the type registry. That makes them well suited to storybooks, tests, and rapid prototyping without standing up a server.
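
A sketch of a handler that reads the parsed body, assuming it is exposed as body alongside the documented params, query, and headers (the route and field names are illustrative):

tf.mock({
  method: 'POST',
  url: '/users',
  handler: ({ body }) => ({
    // Echo part of the parsed request body back as mock data
    data: { id: 'mock-1', name: body.name }
  })
})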

Type snapshot export

import { tf } from '@catalystlabs/typedfetch'

await tf.discover('https://api.example.com')
const code = await tf.exportTypes({ outFile: 'typedfetch.generated.d.ts', banner: 'Example API' })

console.log('Types written to disk!')

CLI-powered type generation

Prefer a single command? Install the bundled CLI, or run it on demand with npx:

npx typedfetch sync --base https://api.example.com \
  --out src/generated/typedfetch.generated.d.ts \
  --namespace API

The CLI will:

  1. Instantiate a RevolutionaryTypedFetch client using your optional --config JSON file
  2. Run schema discovery (tf.discover) against the provided base URL
  3. Emit a type snapshot to --out (or stdout if omitted)

Use the emitted types to get end-to-end inference:

import type { TypedFetchGeneratedEndpoints } from './src/generated/typedfetch.generated'
import { createTypedFetch } from '@catalystlabs/typedfetch'

const client = createTypedFetch<TypedFetchGeneratedEndpoints>({
  request: { baseURL: 'https://api.example.com' }
})

// Response + request body types are wired up automatically
const { data } = await client.get('/users/:id')

tf.exportTypes() serializes everything the registry knows (OpenAPI + runtime samples) into a .d.ts file, which you can then import for fully typed API clients.
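
The exact contents depend on what the registry has seen, but conceptually the snapshot is an endpoint map keyed by path and method. The sketch below is illustrative only, not the guaranteed output format:

// typedfetch.generated.d.ts (illustrative shape; actual output may differ)
export interface TypedFetchGeneratedEndpoints {
  '/users/:id': {
    GET: {
      response: { id: string; name: string; email?: string }
    }
  }
}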

Response Format

All methods return a consistent response format:

const { data, response } = await tf.get('/endpoint')

// data: The parsed response body
// response: The full Response object from fetch API
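
Because response is the native Response object, status codes and headers are available directly alongside the already-parsed data:

if (response.status === 200) {
  console.log(response.headers.get('content-type')) // standard fetch Response API
  console.log(data) // body is already parsed; no response.json() call needed
}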

Error Handling

try {
  const { data, response } = await tf.get('/users/123')
  // Handle success
} catch (error) {
  if (error.response) {
    // Server responded with error status
    console.log(error.response.status)
    console.log(error.data)
  } else if (error.request) {
    // Request was made but no response received
    console.log('Network error:', error.message)
  } else {
    // Something else happened
    console.log('Error:', error.message)
  }
}
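
In strict TypeScript the caught value is typed as unknown, so narrow it before reading those fields. A minimal sketch (the property check mirrors the fields used above; no specific error class is assumed):

try {
  await tf.get('/users/123')
} catch (error: unknown) {
  // Narrow the unknown value before touching response/request/data
  if (error && typeof error === 'object' && 'response' in error) {
    const { response } = error as { response: Response }
    console.log('Server error:', response.status)
  } else {
    console.log('Request failed:', error)
  }
}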

Advanced Features

Circuit Breaker

Automatically stops making requests to failing endpoints:

const client = createTypedFetch({
  circuitBreaker: {
    enabled: true,
    failureThreshold: 5,
    resetTimeout: 60000 // 1 minute
  }
})
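
Once the failure threshold is reached, the breaker typically rejects further calls to that endpoint until the reset timeout elapses. Assuming those fast failures surface as ordinary request errors, a caller can degrade gracefully (the fallback value here is illustrative):

// Sketch: return fallback data while the breaker is open or the request fails
async function getUserSafely(id: string) {
  try {
    const { data } = await client.get(`/users/${id}`)
    return data
  } catch {
    return { id, name: 'Temporarily unavailable' }
  }
}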

Request Caching

Intelligent caching with multiple storage options:

const client = createTypedFetch({
  cache: {
    enabled: true,
    ttl: 300000, // 5 minutes
    storage: 'indexeddb', // Persistent storage
    respectCacheHeaders: true // Honor HTTP cache headers
  }
})

Custom Headers per Request

const { data } = await tf.get('/endpoint', {
  headers: {
    'X-Custom-Header': 'value'
  }
})

🎯 Why TypedFetch?

vs Axios

  • Built on modern fetch API
  • Smaller bundle size
  • Better TypeScript support
  • Built-in resilience features

vs Native Fetch

  • Automatic JSON parsing
  • Better error handling
  • Built-in retries and caching
  • Simpler API

📦 Bundle Size

  • Core: <15KB gzipped
  • Zero runtime dependencies
  • Tree-shakeable
  • Works without build step

🌐 Browser Support

  • Modern browsers (ES2020+)
  • Node.js 18+
  • Deno
  • Bun

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

📄 License

MIT License - see LICENSE for details.


TypedFetch: Because life's too short for complex HTTP clients. 🚀