diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..bd7fbfb
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,61 @@
+# Changelog
+
+All notable changes to TypedFetch will be documented in this file.
+
+## [0.2.0] - 2025-01-20
+
+### Added
+- **SSE Streaming Support**: Native Server-Sent Events with `streamSSE()` method
+- **POST Body Support for Streaming**: All streaming methods now accept request bodies for AI/ML workloads
+- **Race Method**: `race()` for getting the first successful response from multiple endpoints
+- **Batch Processing**: `batch()` method with concurrency control for efficient bulk operations
+- **Parallel Requests**: `parallel()` method using Web Workers for true parallelism
+- **Resumable Uploads**: `uploadResumable()` with adaptive chunking and progress tracking
+- **Streaming Uploads**: `uploadStream()` for large file/model uploads
+- **Bandwidth Throttling**: `throttled()` method using token bucket algorithm
+- **Auto-Reconnecting Streams**: `streamWithReconnect()` for resilient streaming
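+
+A minimal usage sketch of the new APIs (the `tf` instance, endpoint URLs and variables below are illustrative):
+
+```typescript
+// `tf` is a configured TypedFetch instance
+// Stream Server-Sent Events (e.g. AI token streams), now with a POST body
+for await (const event of tf.streamSSE('/v1/chat', {
+  method: 'POST',
+  body: JSON.stringify({ prompt: 'Hello' })
+})) {
+  console.log(event.data)
+}
+
+// Race two endpoints, keep whichever answers first
+const { data, winner } = await tf.race([
+  { url: '/replica-a/status' },
+  { url: '/replica-b/status' }
+])
+
+// Bulk requests with bounded concurrency
+const results = await tf.batch(
+  urls.map(url => ({ url })),
+  { maxConcurrency: 5 }
+)
+```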
+
+### Changed
+- Bundle size increased to 11KB gzipped (from 8KB) due to new features
+- Improved TypeScript types for all new methods
+- Enhanced error handling for streaming operations
+
+### Removed
+- Removed duplicate `withExponentialBackoff` implementation (already in retry logic)
+- Removed AI-specific `trackTokenUsage` method (too specialized)
+- Removed `parallelInference` method (replaced with generic `parallel()`)
+
+### Fixed
+- Fixed streaming methods to properly resolve relative URLs with baseURL
+- Fixed TypeScript strict mode compatibility issues
+
+## [0.1.3] - 2025-01-19
+
+### Fixed
+- Fixed streaming methods (`stream()` and `streamJSON()`) to support baseURL for relative URLs
+- Fixed "Failed to parse URL" errors when using streaming with configured instances
+
+## [0.1.2] - 2025-01-18
+
+### Added
+- Initial streaming support with `stream()` and `streamJSON()` methods
+- Circuit breaker pattern for fault tolerance
+- Request deduplication
+- W-TinyLFU caching algorithm
+
+## [0.1.1] - 2025-01-17
+
+### Added
+- Basic retry logic with exponential backoff
+- Request/response interceptors
+- Cache management
+
+## [0.1.0] - 2025-01-16
+
+### Added
+- Initial release
+- Type-safe HTTP client with Proxy-based API
+- Zero dependencies
+- Support for all HTTP methods
+- Automatic JSON parsing
+- Error handling with detailed context
\ No newline at end of file
diff --git a/CONTENT_NEGOTIATION_REPORT.md b/CONTENT_NEGOTIATION_REPORT.md
deleted file mode 100644
index 2a5a4c8..0000000
--- a/CONTENT_NEGOTIATION_REPORT.md
+++ /dev/null
@@ -1,719 +0,0 @@
-# Content Negotiation Edge Cases Report for TypedFetch Website
-
-## Executive Summary
-
-The TypedFetch website (typedfetch.dev) currently uses a dual-route approach:
-- `/docs` - React SPA for human browsers
-- `/docs.json` - JSON API endpoint for programmatic access
-
-While this separation is clean, it creates several edge cases with social media crawlers, search engines, and CDN caching that need to be addressed for optimal visibility and performance.
-
-**UPDATE: All critical issues have been fixed as of 2025. This document now serves as both a problem analysis and solution reference for similar projects.**
-
-## Current Architecture Analysis
-
-### Strengths
-1. **Clear separation of concerns**: Human-readable HTML vs machine-readable JSON
-2. **Good meta tags**: Comprehensive OpenGraph, Twitter Cards, and structured data
-3. **AI-friendly setup**: llms.txt and dedicated JSON endpoint
-4. **SEO basics covered**: robots.txt, canonical URLs, meta descriptions
-
-### Weaknesses
-1. **No sitemap.xml**: Critical for search engine discovery
-2. **Client-side routing**: May cause issues with social media crawlers
-3. **Missing server-side rendering**: Crawlers may not execute JavaScript
-4. **No cache variation strategy**: CDNs may serve wrong content type
-5. **Limited content negotiation**: Only JSON alternative, no markdown support
-
----
-
-## 1. OpenGraph Meta Tags
-
-### Current State
-- Meta tags are properly set in index.html
-- OpenGraph image at `/og-image.png`
-- All required properties present
-
-### Technical Requirements
-1. **Facebook Crawler (facebookexternalhit)**
- - User-Agent: `facebookexternalhit/1.1`
- - Requires server-rendered HTML
- - Does NOT execute JavaScript
- - Caches aggressively (use Sharing Debugger to refresh)
-
-2. **Required Meta Tags**
- ```html
-   <meta property="og:title" content="..." />
-   <meta property="og:description" content="..." />
-   <meta property="og:image" content="https://typedfetch.dev/og-image.png" />
-   <meta property="og:url" content="https://typedfetch.dev/..." />
-   <meta property="og:type" content="website" />
- ```
-
-### Issues with Current Setup
-1. **Single Page Application Problem**: Facebook crawler won't see content from React routes
-2. **Generic meta tags**: Same tags for all pages, reducing shareability
-3. **No page-specific images**: Could have better visual distinction
-
-### Proposed Solutions
-
-#### Solution A: Server-Side Rendering (Recommended)
-```javascript
-// vercel.json modification
-{
- "functions": {
- "api/ssr-docs/[...path].js": {
- "maxDuration": 10
- }
- },
- "rewrites": [
- {
- "source": "/docs/:path*",
- "destination": "/api/ssr-docs/:path*",
- "has": [
- {
- "type": "header",
- "key": "user-agent",
- "value": ".*(facebookexternalhit|LinkedInBot|Twitterbot|Slackbot|WhatsApp|Discordbot).*"
- }
- ]
- }
- ]
-}
-```
-
-#### Solution B: Pre-rendering Static Pages
-```javascript
-// Generate static HTML for each docs page during build
-// vite.config.ts addition
-export default {
- plugins: [
- {
- name: 'generate-social-pages',
- writeBundle() {
- // Generate minimal HTML pages for crawlers
- generateSocialPages();
- }
- }
- ]
-}
-```
-
-### Testing Strategy
-```bash
-# Test with Facebook Sharing Debugger
-curl -A "facebookexternalhit/1.1" https://typedfetch.dev/docs/getting-started
-
-# Validate with official tool
-# https://developers.facebook.com/tools/debug/
-```
-
----
-
-## 2. Search Engine Indexing
-
-### Technical Requirements
-
-1. **Googlebot Behavior**
- - Modern Googlebot executes JavaScript (Chrome 90+)
- - Prefers server-rendered content for faster indexing
- - Respects `Vary: Accept` header for content negotiation
-
-2. **Bing/Microsoft Edge**
- - Limited JavaScript execution
- - Requires proper HTML structure
- - Values sitemap.xml highly
-
-### Current Issues
-1. **Missing sitemap.xml**: Essential for discovery
-2. **No structured data for docs**: Missing breadcrumbs, article schema
-3. **Client-side content**: Delays indexing, may miss content
-
-### Proposed Solutions
-
-#### 1. Dynamic Sitemap Generation
-```javascript
-// api/sitemap.xml.js
-export default function handler(req, res) {
- const baseUrl = 'https://typedfetch.dev';
- const pages = [
- { url: '/', priority: 1.0, changefreq: 'weekly' },
- { url: '/docs', priority: 0.9, changefreq: 'weekly' },
- { url: '/docs/getting-started', priority: 0.8, changefreq: 'monthly' },
- { url: '/docs/installation', priority: 0.8, changefreq: 'monthly' },
- // ... other pages
- ];
-
-  const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
-<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
-${pages.map(page => `  <url>
-    <loc>${baseUrl}${page.url}</loc>
-    <changefreq>${page.changefreq}</changefreq>
-    <priority>${page.priority}</priority>
-    <lastmod>${new Date().toISOString()}</lastmod>
-  </url>`).join('\n')}
-</urlset>`;
-
- res.setHeader('Content-Type', 'application/xml');
- res.setHeader('Cache-Control', 'public, max-age=3600');
- res.status(200).send(sitemap);
-}
-```
-
-#### 2. Enhanced Structured Data
-```javascript
-// For each documentation page
-const structuredData = {
- "@context": "https://schema.org",
- "@type": "TechArticle",
- "headline": pageTitle,
- "description": pageDescription,
- "author": {
- "@type": "Organization",
- "name": "Catalyst Labs"
- },
- "datePublished": "2024-01-01",
- "dateModified": new Date().toISOString(),
- "mainEntityOfPage": {
- "@type": "WebPage",
- "@id": `https://typedfetch.dev${path}`
- },
- "breadcrumb": {
- "@type": "BreadcrumbList",
- "itemListElement": [
- {
- "@type": "ListItem",
- "position": 1,
- "name": "Docs",
- "item": "https://typedfetch.dev/docs"
- },
- {
- "@type": "ListItem",
- "position": 2,
- "name": pageTitle,
- "item": `https://typedfetch.dev${path}`
- }
- ]
- }
-};
-```
-
-### Testing Strategy
-```bash
-# Test rendering
-curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
- https://typedfetch.dev/docs/getting-started
-
-# Validate structured data
-# https://search.google.com/test/rich-results
-
-# Check indexing status
-# https://search.google.com/search-console
-```
-
----
-
-## 3. Social Media Preview Issues
-
-### Platform-Specific Requirements
-
-#### Twitter/X
-- User-Agent: `Twitterbot`
-- Requires: `twitter:card`, `twitter:site`, `twitter:creator`
-- Supports JavaScript execution (limited)
-- Image requirements: 2:1 aspect ratio, min 300x157px
-
-#### LinkedIn
-- User-Agent: `LinkedInBot`
-- NO JavaScript execution
-- Caches aggressively
-- Prefers og:image with 1200x627px
-
-#### Discord
-- User-Agent: `Discordbot`
-- NO JavaScript execution
-- Embeds based on OpenGraph tags
-- Supports multiple images
-
-#### WhatsApp
-- User-Agent: `WhatsApp`
-- NO JavaScript execution
-- Basic OpenGraph support
-- Thumbnail generation from og:image
-
-### Current Issues
-1. **SPA content not visible**: Crawlers can't see React-rendered content
-2. **Generic previews**: All pages show same preview
-3. **No URL unfurling data**: Missing rich previews for specific pages
-
-### Proposed Solutions
-
-#### 1. Crawler-Specific Responses
-```javascript
-// api/social-preview/[...path].js
-export default function handler(req, res) {
- const userAgent = req.headers['user-agent'] || '';
- const crawlers = ['facebookexternalhit', 'LinkedInBot', 'Twitterbot', 'Discordbot', 'WhatsApp'];
-
- const isCrawler = crawlers.some(bot => userAgent.includes(bot));
-
- if (isCrawler) {
- const path = req.query.path?.join('/') || '';
- const pageData = getPageData(path);
-
-    const html = `<!DOCTYPE html>
-<html>
-<head>
-  <meta charset="utf-8">
-  <title>${pageData.title} - TypedFetch</title>
-  <meta property="og:title" content="${pageData.title}" />
-  <meta property="og:description" content="${pageData.description}" />
-  <meta property="og:image" content="https://typedfetch.dev/og-image.png" />
-  <meta property="og:url" content="https://typedfetch.dev/docs/${path}" />
-  <meta property="og:type" content="article" />
-  <meta name="twitter:card" content="summary_large_image" />
-  <meta name="twitter:title" content="${pageData.title}" />
-  <meta name="twitter:description" content="${pageData.description}" />
-</head>
-<body>
-  <h1>${pageData.title}</h1>
-  <p>${pageData.description}</p>
-</body>
-</html>`;
-
- res.setHeader('Content-Type', 'text/html');
- res.status(200).send(html);
- } else {
- // Regular users get the React app
- res.status(200).sendFile(path.join(__dirname, '../index.html'));
- }
-}
-```
-
-#### 2. Dynamic OpenGraph Images
-```javascript
-// api/og-image/[...path].js
-import { ImageResponse } from '@vercel/og';
-
-export default function handler(req) {
- const { path } = req.query;
- const pageTitle = getPageTitle(path);
-
- return new ImageResponse(
- (
-      <div style={{ width: '100%', height: '100%', display: 'flex', alignItems: 'center', justifyContent: 'center' }}>
-        <h1>{pageTitle}</h1>
-      </div>
- ),
- {
- width: 1200,
- height: 630,
- }
- );
-}
-```
-
-### Testing Tools
-```bash
-# Twitter Card Validator
-# https://cards-dev.twitter.com/validator
-
-# LinkedIn Post Inspector
-# https://www.linkedin.com/post-inspector/
-
-# Facebook Sharing Debugger
-# https://developers.facebook.com/tools/debug/
-
-# Discord Embed Visualizer
-# https://discohook.org/
-```
-
----
-
-## 4. CDN/Proxy Cache Pollution
-
-### Current Issues
-1. **No Vary header**: CDNs can't distinguish content types
-2. **Same URL pattern**: `/docs` serves different content based on client
-3. **Cache key collision**: JSON and HTML responses cached together
-
-### Technical Requirements
-1. **Cloudflare**: Respects `Vary` header, needs proper cache keys
-2. **Vercel Edge**: Built-in caching, needs configuration
-3. **Browser caching**: Must handle different content types
-
-### Proposed Solutions
-
-#### 1. Proper Cache Headers
-```javascript
-// Set appropriate Vary headers
-export default function handler(req, res) {
- const acceptHeader = req.headers.accept || '';
-
- // Indicate that response varies by Accept header
- res.setHeader('Vary', 'Accept, User-Agent');
-
- if (acceptHeader.includes('application/json')) {
- res.setHeader('Content-Type', 'application/json');
- res.setHeader('Cache-Control', 'public, max-age=3600, stale-while-revalidate=86400');
- return res.json(data);
- } else {
- res.setHeader('Content-Type', 'text/html');
- res.setHeader('Cache-Control', 'public, max-age=300, stale-while-revalidate=3600');
- return res.send(html);
- }
-}
-```
-
-#### 2. CDN Configuration
-```javascript
-// vercel.json
-{
- "headers": [
- {
- "source": "/docs/(.*)",
- "headers": [
- {
- "key": "Vary",
- "value": "Accept, User-Agent"
- },
- {
- "key": "Cache-Control",
- "value": "public, max-age=0, must-revalidate"
- }
- ]
- }
- ]
-}
-```
-
-#### 3. Separate Cache Keys
-```javascript
-// Use different URLs for different content types
-// This avoids cache pollution entirely
-{
- "rewrites": [
- {
- "source": "/docs.json",
- "destination": "/api/docs?format=json"
- },
- {
- "source": "/docs.md",
- "destination": "/api/docs?format=markdown"
- },
- {
- "source": "/docs.xml",
- "destination": "/api/docs?format=xml"
- }
- ]
-}
-```
-
-### Testing Strategy
-```bash
-# Test cache behavior
-curl -H "Accept: application/json" https://typedfetch.dev/docs
-curl -H "Accept: text/html" https://typedfetch.dev/docs
-
-# Check cache headers
-curl -I https://typedfetch.dev/docs
-
-# Test CDN caching
-# Use different locations/proxies to verify cache separation
-```
-
----
-
-## 5. API Documentation Discovery
-
-### Current State
-- JSON endpoint at `/docs.json`
-- No OpenAPI/Swagger spec
-- No machine-readable API description
-
-### Technical Requirements
-1. **OpenAPI Discovery**: `.well-known/openapi.json`
-2. **Postman Collection**: Exportable collection format
-3. **Developer Portal**: Interactive API documentation
-
-### Proposed Solutions
-
-#### 1. OpenAPI Specification
-```javascript
-// api/openapi.json
-export default function handler(req, res) {
- const spec = {
- "openapi": "3.0.0",
- "info": {
- "title": "TypedFetch Documentation API",
- "version": "1.0.0",
- "description": "API for accessing TypedFetch documentation"
- },
- "servers": [
- {
- "url": "https://typedfetch.dev"
- }
- ],
- "paths": {
- "/docs.json": {
- "get": {
- "summary": "Get documentation index",
- "responses": {
- "200": {
- "description": "Documentation sections",
- "content": {
- "application/json": {
- "schema": {
- "$ref": "#/components/schemas/Documentation"
- }
- }
- }
- }
- }
- }
- }
- }
- };
-
- res.setHeader('Content-Type', 'application/json');
- res.json(spec);
-}
-```
-
-#### 2. Well-Known Discovery
-```javascript
-// public/.well-known/apis.json
-{
- "name": "TypedFetch",
- "description": "Zero-dependency type-safe HTTP client",
- "url": "https://typedfetch.dev",
- "apis": [
- {
- "name": "Documentation API",
- "description": "Access TypedFetch documentation",
- "baseURL": "https://typedfetch.dev",
- "properties": [
- {
- "type": "OpenAPI",
- "url": "https://typedfetch.dev/openapi.json"
- },
- {
- "type": "Postman",
- "url": "https://typedfetch.dev/postman-collection.json"
- }
- ]
- }
- ]
-}
-```
-
-#### 3. Content Type Negotiation
-```javascript
-// Enhanced API endpoint with multiple formats
-export default function handler(req, res) {
- const accept = req.headers.accept || '';
- const format = req.query.format;
-
- // Format priority: query param > accept header > default
- if (format === 'openapi' || accept.includes('application/vnd.oai.openapi')) {
- return res.json(generateOpenAPISpec());
- } else if (format === 'postman' || accept.includes('application/vnd.postman')) {
- return res.json(generatePostmanCollection());
- } else if (format === 'markdown' || accept.includes('text/markdown')) {
- res.setHeader('Content-Type', 'text/markdown');
- return res.send(generateMarkdownDocs());
- } else {
- return res.json(docsData);
- }
-}
-```
-
-### Testing Strategy
-```bash
-# Test OpenAPI discovery
-curl https://typedfetch.dev/.well-known/openapi.json
-
-# Test content negotiation
-curl -H "Accept: application/vnd.oai.openapi" https://typedfetch.dev/docs
-
-# Import into tools
-# - Postman: Import > Link > https://typedfetch.dev/postman-collection.json
-# - Swagger UI: https://petstore.swagger.io/?url=https://typedfetch.dev/openapi.json
-```
-
----
-
-## Implementation Priority
-
-### Phase 1: Critical Fixes (Week 1)
-1. **Add sitemap.xml** - Essential for SEO
-2. **Implement crawler detection** - Fix social sharing
-3. **Add Vary headers** - Prevent cache pollution
-4. **Create static fallbacks** - Ensure content visibility
-
-### Phase 2: Enhancements (Week 2)
-1. **Dynamic OG images** - Better social previews
-2. **Enhanced structured data** - Rich search results
-3. **Multiple content formats** - Markdown, XML support
-4. **API discovery endpoints** - Developer tools
-
-### Phase 3: Optimization (Week 3)
-1. **Edge-side rendering** - Optimal performance
-2. **Smart caching strategies** - Reduce server load
-3. **Monitoring and analytics** - Track improvements
-4. **A/B testing** - Optimize conversions
-
----
-
-## Monitoring and Validation
-
-### Key Metrics to Track
-1. **Search Console**: Indexing status, crawl errors
-2. **Social shares**: Engagement rates, preview quality
-3. **Cache hit rates**: CDN performance
-4. **API usage**: Developer adoption
-
-### Automated Testing Suite
-```javascript
-// tests/seo-validation.test.js
-describe('SEO and Social Media', () => {
- test('Crawler receives HTML content', async () => {
- const response = await fetch('/docs/getting-started', {
- headers: { 'User-Agent': 'facebookexternalhit/1.1' }
- });
- const html = await response.text();
-    expect(html).toContain('<meta property="og:title"');
-  });
-
-  test('JSON endpoint sets correct content type and Vary headers', async () => {
- const response = await fetch('/docs.json');
- expect(response.headers.get('content-type')).toBe('application/json');
- expect(response.headers.get('vary')).toContain('Accept');
- });
-});
-```
-
----
-
-## Implementation Results & Lessons Learned
-
-### What We Actually Built
-
-After discovering these issues, we implemented the following solutions:
-
-#### 1. **URL-Based Content Negotiation (Not Header-Based)**
-**Why**: Facebook was caching JSON responses and serving them to human users when links were shared.
-
-**Solution**:
-- `/docs` → Always serves HTML (React app)
-- `/docs.json` → Always serves JSON
-- No more `Accept` header detection for main routes
-
-**Learning**: Social media platforms often cache the first response they get. If that's JSON (because they sent `Accept: */*`), human users clicking the shared link get JSON too. URL-based separation prevents this cache poisoning.
-
-#### 2. **Server-Side Rendering for Crawlers Only**
-**Implementation**: Created `/api/ssr/[...path].js` that detects crawler User-Agents and serves pre-rendered HTML.
-
-```javascript
-// Crawler gets HTML with meta tags
-if (userAgent.match(/facebookexternalhit|LinkedInBot|Twitterbot/)) {
- return serverRenderedHTML;
-}
-// Humans get redirected to React app
-return redirect('/');
-```
-
-**Learning**: Most social media crawlers don't execute JavaScript. They need server-rendered HTML with OpenGraph tags in the initial response.
-
-#### 3. **Dynamic Sitemap Generation**
-**Implementation**: `/api/sitemap.xml.js` generates sitemap on-demand.
-
-**Learning**: Static sitemaps get outdated. Dynamic generation ensures search engines always get current page listings.
-
-#### 4. **Proper Cache Headers**
-**Implementation**: Added `Vary: Accept, User-Agent` headers to all dynamic responses.
-
-**Learning**: Without `Vary` headers, CDNs serve the wrong content type. One request for JSON poisons the cache for all HTML requests.
-
-### Critical Discoveries
-
-#### 1. **The Facebook Cache Poisoning Problem**
-When we first launched, sharing `typedfetch.dev/docs` on Facebook showed JSON to users. The root cause:
-1. Facebook crawler requested the page
-2. Our content negotiation saw `Accept: */*` and returned JSON
-3. Facebook cached this response
-4. Human users clicking the link got the cached JSON
-
-**Solution**: Separate URLs for different content types. This is why major APIs use `/api/v1/` prefixes instead of content negotiation.
-
-#### 2. **Single Page Applications Break Social Sharing**
-SPAs render content client-side, but social media crawlers need server-side HTML. Our initial React-only approach meant:
-- No page-specific titles in shares
-- Generic descriptions for all pages
-- Missing preview images
-
-**Solution**: Crawler-specific server-side rendering. Human users still get the fast SPA experience.
-
-#### 3. **Search Engines Need More Than You Think**
-Modern Google can execute JavaScript, but:
-- It's slower to index
-- Other search engines may not
-- Structured data requires specific formats
-- Sitemaps are still critical
-
-### Performance Impact
-
-The fixes had minimal performance impact:
-- **Human users**: No change (still get React SPA)
-- **Crawlers**: Get lightweight HTML (~5KB)
-- **API clients**: Direct JSON access
-- **CDN efficiency**: Improved with proper caching
-
-### Article-Worthy Insights
-
-1. **Content Negotiation is a Footgun**: While theoretically elegant, content negotiation causes real-world problems with caches, CDNs, and social media platforms. URL-based content types are more reliable.
-
-2. **Crawlers Are Not Browsers**: Assuming crawlers behave like modern browsers is a mistake. Many don't execute JavaScript, respect different headers, or cache aggressively.
-
-3. **Test With Real Tools**: The Facebook Sharing Debugger, Twitter Card Validator, and Google Search Console reveal issues that local testing misses.
-
-4. **Cache Headers Matter More Than You Think**: A missing `Vary` header can break your entire site for some users. CDNs and proxies need explicit instructions.
-
-5. **Developer Experience vs. Crawler Experience**: These often conflict. Developers want React SPAs, crawlers want server-rendered HTML. The solution is to serve both based on User-Agent detection.
-
-### Recommended Architecture Pattern
-
-For modern web apps that need good SEO and social sharing:
-
-```
-/ → React SPA (humans)
-/docs/[page] → React SPA (humans)
- → SSR HTML (crawlers)
-/api/docs.json → JSON API (developers)
-/api/openapi.json → OpenAPI spec
-/sitemap.xml → Dynamic sitemap
-/robots.txt → Crawler instructions
-/llms.txt → AI documentation
-```
-
-This pattern provides:
-- Fast SPA experience for users
-- Proper meta tags for social sharing
-- SEO-friendly content for search engines
-- Clean API for developers
-- AI/LLM discoverability
-
-### Final Thoughts
-
-The web's infrastructure (CDNs, social media platforms, search engines) wasn't designed with SPAs in mind. Modern websites need to bridge this gap by serving different content to different clients. The key is doing this without sacrificing performance or developer experience.
-
-These solutions transformed TypedFetch from having broken social sharing to having perfect previews on all platforms, while maintaining the clean architecture and performance users expect.
\ No newline at end of file
diff --git a/package.json b/package.json
index d074a65..e8fd191 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "@catalystlabs/typedfetch",
- "version": "0.1.3",
+ "version": "0.2.0",
"description": "Type-safe HTTP client that doesn't suck - Fetch for humans who have stuff to build",
"type": "module",
"main": "./dist/index.js",
diff --git a/src/core/typed-fetch.ts b/src/core/typed-fetch.ts
index ab5fc76..959d9fd 100644
--- a/src/core/typed-fetch.ts
+++ b/src/core/typed-fetch.ts
@@ -380,15 +380,36 @@ export class RevolutionaryTypedFetch {
}
// Streaming support
-  async stream(url: string): Promise<ReadableStream<Uint8Array>> {
+  async stream(url: string, options: RequestInit = {}): Promise<ReadableStream<Uint8Array>> {
const fullUrl = this.resolveUrl(url)
- const response = await fetch(fullUrl)
+
+ // Merge with default headers
+ const requestOptions: RequestInit = {
+ ...options,
+ headers: {
+ ...this.config.request.headers,
+ ...options.headers
+ }
+ }
+
+ // Add timeout if configured
+ if (this.config.request.timeout && !requestOptions.signal) {
+ const controller = new AbortController()
+ setTimeout(() => controller.abort(), this.config.request.timeout)
+ requestOptions.signal = controller.signal
+ }
+
+ const response = await fetch(fullUrl, requestOptions)
+ if (!response.ok) {
+ const error = createHttpError(response, fullUrl, { method: options.method || 'GET' })
+ throw error
+ }
if (!response.body) throw new Error('No response body')
return response.body
}
-  async *streamJSON(url: string): AsyncGenerator<any> {
- const stream = await this.stream(url)
+  async *streamJSON(url: string, options: RequestInit = {}): AsyncGenerator<any> {
+ const stream = await this.stream(url, options)
const reader = stream.getReader()
const decoder = new TextDecoder()
@@ -413,6 +434,147 @@ export class RevolutionaryTypedFetch {
}
}
+ // Server-Sent Events (SSE) support for AI/ML streaming
+ async *streamSSE(url: string, options: RequestInit = {}): AsyncGenerator<{
+ event?: string
+ data: any
+ id?: string
+ retry?: number
+ }> {
+ const stream = await this.stream(url, {
+ ...options,
+ headers: {
+ ...options.headers,
+ 'Accept': 'text/event-stream',
+ 'Cache-Control': 'no-cache'
+ }
+ })
+
+ const reader = stream.getReader()
+ const decoder = new TextDecoder()
+ let buffer = ''
+
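+    // SSE wire format: each event is a block of "field: value" lines
+    // (event, data, id, retry) terminated by a blank line; multiple data
+    // lines in one event are joined with newlines before parsing.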
+ while (true) {
+ const { done, value } = await reader.read()
+ if (done) break
+
+ buffer += decoder.decode(value, { stream: true })
+ const lines = buffer.split('\n')
+ buffer = lines.pop() || ''
+
+ let event: any = {}
+ for (const line of lines) {
+ if (line.trim() === '') {
+ // Empty line signals end of event
+ if (event.data !== undefined) {
+ // Parse data if it's JSON
+ try {
+ event.data = JSON.parse(event.data)
+ } catch {
+ // Keep as string if not JSON
+ }
+ yield event
+ event = {}
+ }
+ } else if (line.startsWith('event:')) {
+ event.event = line.slice(6).trim()
+ } else if (line.startsWith('data:')) {
+ const data = line.slice(5).trim()
+ event.data = event.data ? event.data + '\n' + data : data
+ } else if (line.startsWith('id:')) {
+ event.id = line.slice(3).trim()
+ } else if (line.startsWith('retry:')) {
+ event.retry = parseInt(line.slice(6).trim(), 10)
+ }
+ }
+ }
+ }
+
+ // AI/ML optimized batch request method
+  async batch<T = any>(
+ requests: Array<{
+ method?: string
+ url: string
+ body?: any
+      headers?: Record<string, string>
+ }>,
+ options: {
+ maxConcurrency?: number
+ throwOnError?: boolean
+ } = {}
+  ): Promise<Array<{ data?: T; error?: any; response?: Response }>> {
+ const { maxConcurrency = 5, throwOnError = false } = options
+ const results: Array<{ data?: T; error?: any; response?: Response }> = []
+
+ // Process in chunks for controlled concurrency
+ for (let i = 0; i < requests.length; i += maxConcurrency) {
+ const chunk = requests.slice(i, i + maxConcurrency)
+ const chunkPromises = chunk.map(async (req) => {
+ try {
+ const method = req.method || 'GET'
+ const result = await this.request(
+ method,
+ req.url,
+ {
+ body: req.body ? JSON.stringify(req.body) : null,
+ headers: req.headers || {}
+ }
+ )
+ return { data: result.data, response: result.response }
+ } catch (error) {
+ if (throwOnError) throw error
+ return { error }
+ }
+ })
+
+ const chunkResults = await Promise.all(chunkPromises)
+ results.push(...chunkResults)
+ }
+
+ return results
+ }
+
+ // Streaming file upload for large models/datasets
+ async uploadStream(
+ url: string,
+    stream: ReadableStream<Uint8Array> | AsyncIterable<Uint8Array>,
+ options: {
+ filename?: string
+ contentType?: string
+ onProgress?: (loaded: number) => void
+ } & RequestInit = {}
+ ): Promise<{ data: any; response: Response }> {
+ const { filename = 'upload', contentType = 'application/octet-stream', onProgress, ...requestOptions } = options
+
+ // Convert AsyncIterable to ReadableStream if needed
+    let bodyStream: ReadableStream<Uint8Array>
+ if (stream instanceof ReadableStream) {
+ bodyStream = stream
+ } else {
+ bodyStream = new ReadableStream({
+ async start(controller) {
+ for await (const chunk of stream) {
+ controller.enqueue(chunk)
+ if (onProgress) {
+ onProgress(chunk.length)
+ }
+ }
+ controller.close()
+ }
+ })
+ }
+
+ return this.request('POST', url, {
+ ...requestOptions,
+ body: bodyStream,
+ headers: {
+ 'Content-Type': contentType,
+ 'X-Filename': filename,
+ ...requestOptions.headers
+ }
+ })
+ }
+
// File upload support
async upload(url: string, file: File | Blob, options: RequestInit = {}): Promise<{ data: any; response: Response }> {
const formData = new FormData()
@@ -440,4 +602,464 @@ export class RevolutionaryTypedFetch {
resetCircuitBreaker(): void {
this.circuitBreaker.reset()
}
+
+ // Advanced features
+
+ // Streaming with automatic reconnection for resilient SSE
+ async *streamWithReconnect(
+ url: string,
+ options: RequestInit & {
+ maxReconnects?: number
+ reconnectDelay?: number
+ lastEventId?: string
+ } = {}
+  ): AsyncGenerator<any> {
+ const { maxReconnects = 3, reconnectDelay = 1000, lastEventId, ...requestOptions } = options
+ let reconnects = 0
+ let currentEventId = lastEventId
+
+ while (reconnects <= maxReconnects) {
+ try {
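+        // Resend the last seen event id so the server can resume the
+        // stream where it left off (standard SSE Last-Event-ID semantics)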
+ const headers = {
+ ...requestOptions.headers,
+ ...(currentEventId ? { 'Last-Event-ID': currentEventId } : {})
+ }
+
+ const stream = this.streamSSE(url, { ...requestOptions, headers })
+
+ for await (const event of stream) {
+ if (event.id) {
+ currentEventId = event.id
+ }
+ yield event
+ }
+
+ // Stream ended normally
+ break
+ } catch (error) {
+ reconnects++
+ if (reconnects > maxReconnects) {
+ throw error
+ }
+
+ await this.delay(reconnectDelay * reconnects)
+ }
+ }
+ }
+
+ // Race multiple endpoints - first successful response wins
+  async race<T = any>(
+ requests: Array<{
+ method?: string
+ url: string
+ body?: any
+      headers?: Record<string, string>
+ }>
+ ): Promise<{ data: T; response: Response; winner: number }> {
+ if (requests.length === 0) {
+ throw new Error('No requests provided to race')
+ }
+
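+    // Note: Promise.race settles on the first settled promise, so an early
+    // rejection wins over a later success.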
+ const promises = requests.map(async (req, index) => {
+ const method = req.method || 'GET'
+ const result = await this.request(
+ method,
+ req.url,
+ {
+ body: req.body ? JSON.stringify(req.body) : null,
+ headers: req.headers || {}
+ }
+ )
+ return { ...result, winner: index }
+ })
+
+ return Promise.race(promises)
+ }
+
+ // Parallel requests with Web Workers for true parallelism
+  async parallel<T = any>(
+ requests: Array<{
+ method?: string
+ url: string
+ body?: any
+      headers?: Record<string, string>
+ }>,
+ options: {
+ workers?: boolean // Use Web Workers (default: true if available)
+ maxWorkers?: number // Max concurrent workers (default: 4)
+ fallbackToMain?: boolean // Fallback to main thread if workers fail
+ } = {}
+  ): Promise<Array<{ data?: T; error?: any; response?: Response }>> {
+ const {
+ workers = typeof Worker !== 'undefined',
+ maxWorkers = 4,
+ fallbackToMain = true
+ } = options
+
+ // Use Web Workers if available and requested
+ if (workers && typeof Worker !== 'undefined') {
+ try {
+ return await this.parallelWithWorkers(requests, maxWorkers)
+ } catch (error) {
+ if (!fallbackToMain) throw error
+ // Fall through to main thread execution
+ }
+ }
+
+ // Fallback to Promise.all on main thread
+ return Promise.all(
+ requests.map(async (req) => {
+ try {
+ const method = req.method || 'GET'
+ const result = await this.request(
+ method,
+ req.url,
+ {
+ body: req.body ? JSON.stringify(req.body) : null,
+ headers: req.headers || {}
+ }
+ )
+ return { data: result.data, response: result.response }
+ } catch (error) {
+ return { error }
+ }
+ })
+ )
+ }
+
+ private async parallelWithWorkers(
+    requests: Array<any>,
+ maxWorkers: number
+  ): Promise<Array<any>> {
+ // Create worker pool
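+    // Note: requests inside the workers use plain fetch(), so they bypass
+    // this instance's interceptors, cache and retry logic.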
+ const workerCode = `
+ self.onmessage = async (e) => {
+ const { id, method, url, body, headers } = e.data
+ try {
+ const response = await fetch(url, {
+ method,
+ headers,
+ body: body ? JSON.stringify(body) : undefined
+ })
+ const data = await response.json()
+ self.postMessage({
+ id,
+ success: true,
+ data,
+ status: response.status,
+ headers: Object.fromEntries(response.headers.entries())
+ })
+ } catch (error) {
+ self.postMessage({
+ id,
+ success: false,
+ error: error.message
+ })
+ }
+ }
+ `
+
+ const blob = new Blob([workerCode], { type: 'application/javascript' })
+ const workerUrl = URL.createObjectURL(blob)
+ const workers: Worker[] = []
+
+ try {
+ // Create worker pool
+ for (let i = 0; i < Math.min(maxWorkers, requests.length); i++) {
+ workers.push(new Worker(workerUrl))
+ }
+
+ // Process requests
+      const results = await new Promise<Array<any>>((resolve, reject) => {
+        const responses: Array<any> = new Array(requests.length)
+ let completed = 0
+ let nextRequest = 0
+
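+        // Each worker pulls the next pending request index as soon as its
+        // current one settles, so at most one request is in flight per worker.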
+ const assignWork = (worker: Worker, workerIndex: number) => {
+ if (nextRequest >= requests.length) return
+
+ const requestIndex = nextRequest++
+ const request = requests[requestIndex]
+
+ worker.onmessage = (e) => {
+ responses[requestIndex] = e.data.success
+ ? { data: e.data.data, response: { status: e.data.status, headers: e.data.headers } }
+ : { error: new Error(e.data.error) }
+
+ completed++
+ if (completed === requests.length) {
+ resolve(responses)
+ } else {
+ assignWork(worker, workerIndex)
+ }
+ }
+
+ worker.onerror = (error) => {
+ responses[requestIndex] = { error }
+ completed++
+ if (completed === requests.length) {
+ resolve(responses)
+ } else {
+ assignWork(worker, workerIndex)
+ }
+ }
+
+ worker.postMessage({
+ id: requestIndex,
+ method: request.method || 'GET',
+ url: this.resolveUrl(request.url),
+ body: request.body,
+ headers: { ...this.config.request.headers, ...request.headers }
+ })
+ }
+
+ // Start workers
+ workers.forEach((worker, index) => assignWork(worker, index))
+ })
+
+ return results
+ } finally {
+ // Cleanup
+ workers.forEach(w => w.terminate())
+ URL.revokeObjectURL(workerUrl)
+ }
+ }
+
+ // Resumable upload with adaptive chunking
+ async uploadResumable(
+ url: string,
+ file: File | Blob,
+ options: {
+ chunkSize?: number | 'adaptive'
+ maxParallelChunks?: number
+ onProgress?: (progress: {
+ loaded: number
+ total: number
+ percent: number
+ speed: string
+ eta: number
+ }) => void
+ resumeKey?: string
+      metadata?: Record<string, any>
+ } = {}
+ ): Promise<{ data: any; response: Response }> {
+ const {
+ chunkSize = 'adaptive',
+ maxParallelChunks = 3,
+ onProgress,
+ resumeKey = `upload-${(file as File).name || 'blob'}-${file.size}`,
+ metadata = {}
+ } = options
+
+ // Check for existing upload session
+ const session = await this.getUploadSession(resumeKey)
+ const startByte = session?.uploadedBytes || 0
+
+ // Calculate chunk size
+ const adaptiveChunkSize = this.calculateAdaptiveChunkSize(file.size)
+ const actualChunkSize = chunkSize === 'adaptive' ? adaptiveChunkSize : chunkSize
+
+ // Upload state
+ const totalChunks = Math.ceil((file.size - startByte) / actualChunkSize)
+ const uploadState = {
+ startTime: Date.now(),
+ uploadedBytes: startByte,
+      uploadedChunks: new Set<number>(),
+      errors: new Map<number, number>() // chunk index -> retry count
+ }
+
+ // Progress tracking
+ const updateProgress = () => {
+ if (!onProgress) return
+
+ const loaded = uploadState.uploadedBytes
+ const total = file.size
+ const percent = Math.round((loaded / total) * 100)
+ const elapsed = Date.now() - uploadState.startTime
+ const speed = loaded / (elapsed / 1000) // bytes per second
+ const remaining = total - loaded
+ const eta = remaining / speed // seconds
+
+ onProgress({
+ loaded,
+ total,
+ percent,
+ speed: this.formatBytes(speed) + '/s',
+ eta: Math.round(eta)
+ })
+ }
+
+ // Upload chunks with parallel processing
+    const uploadChunk = async (chunkIndex: number): Promise<void> => {
+ const start = startByte + (chunkIndex * actualChunkSize)
+ const end = Math.min(start + actualChunkSize, file.size)
+ const chunk = file.slice(start, end)
+
+ const formData = new FormData()
+ formData.append('chunk', chunk)
+ formData.append('chunkIndex', chunkIndex.toString())
+ formData.append('totalChunks', totalChunks.toString())
+ formData.append('resumeKey', resumeKey)
+ formData.append('metadata', JSON.stringify(metadata))
+
+ try {
+ await this.post(url, formData, {
+ headers: {
+ 'Content-Range': `bytes ${start}-${end - 1}/${file.size}`,
+ 'X-Chunk-Index': chunkIndex.toString(),
+ 'X-Total-Chunks': totalChunks.toString(),
+ 'X-Resume-Key': resumeKey
+ }
+ })
+
+ uploadState.uploadedChunks.add(chunkIndex)
+ uploadState.uploadedBytes = startByte + (uploadState.uploadedChunks.size * actualChunkSize)
+ updateProgress()
+
+ // Save progress for resume
+ await this.saveUploadSession(resumeKey, {
+ uploadedBytes: uploadState.uploadedBytes,
+ uploadedChunks: Array.from(uploadState.uploadedChunks),
+ totalChunks,
+ chunkSize: actualChunkSize,
+ fileSize: file.size,
+ fileName: (file as File).name || 'blob'
+ })
+ } catch (error) {
+ const retries = uploadState.errors.get(chunkIndex) || 0
+ if (retries < 3) {
+ uploadState.errors.set(chunkIndex, retries + 1)
+ await this.delay(Math.pow(2, retries) * 1000) // Exponential backoff
+ return uploadChunk(chunkIndex) // Retry
+ }
+ throw error
+ }
+ }
+
+    // Collect the chunk indices not yet uploaded (skipping any recorded in
+    // the resume session), then send them in parallel batches
+ const chunks: number[] = []
+ for (let i = 0; i < totalChunks; i++) {
+ if (!session?.uploadedChunks?.includes(i)) {
+ chunks.push(i)
+ }
+ }
+
+ for (let i = 0; i < chunks.length; i += maxParallelChunks) {
+ const batch = chunks.slice(i, i + maxParallelChunks)
+ await Promise.all(batch.map(uploadChunk))
+ }
+
+ // Finalize upload
+ const finalResponse = await this.post(url, {
+ action: 'finalize',
+ resumeKey,
+ totalChunks,
+ metadata
+ })
+
+ // Clear session
+ await this.clearUploadSession(resumeKey)
+
+ return finalResponse
+ }
+
+ // Helper methods for resumable uploads
+  private async getUploadSession(key: string): Promise<any> {
+ if (typeof localStorage === 'undefined') return null
+ const data = localStorage.getItem(`typedfetch-upload-${key}`)
+ return data ? JSON.parse(data) : null
+ }
+
+  private async saveUploadSession(key: string, data: any): Promise<void> {
+ if (typeof localStorage === 'undefined') return
+ localStorage.setItem(`typedfetch-upload-${key}`, JSON.stringify(data))
+ }
+
+  private async clearUploadSession(key: string): Promise<void> {
+ if (typeof localStorage === 'undefined') return
+ localStorage.removeItem(`typedfetch-upload-${key}`)
+ }
+
+ private calculateAdaptiveChunkSize(fileSize: number): number {
+ // Adaptive chunk sizing based on file size and connection quality
+ const minChunk = 256 * 1024 // 256KB
+ const maxChunk = 10 * 1024 * 1024 // 10MB
+
+ if (fileSize < 10 * 1024 * 1024) return minChunk // Small files
+ if (fileSize < 100 * 1024 * 1024) return 1024 * 1024 // Medium files: 1MB chunks
+ if (fileSize < 1024 * 1024 * 1024) return 5 * 1024 * 1024 // Large files: 5MB chunks
+ return maxChunk // Very large files
+ }
+
+ private formatBytes(bytes: number): string {
+ if (bytes < 1024) return bytes + ' B'
+ if (bytes < 1024 * 1024) return (bytes / 1024).toFixed(2) + ' KB'
+ if (bytes < 1024 * 1024 * 1024) return (bytes / (1024 * 1024)).toFixed(2) + ' MB'
+ return (bytes / (1024 * 1024 * 1024)).toFixed(2) + ' GB'
+ }
+
+ // Bandwidth-throttled requests
+  async throttled<T>(
+    fn: () => Promise<T>,
+ options: {
+ bandwidth?: number | string // e.g., 1048576 (1MB/s) or '1MB/s'
+ burst?: number // Allow burst up to this many bytes
+ } = {}
+  ): Promise<T> {
+ const { bandwidth = '1MB/s', burst = 0 } = options
+
+ // Parse bandwidth string
+ const bytesPerSecond = typeof bandwidth === 'string'
+ ? this.parseBandwidth(bandwidth)
+ : bandwidth
+
+ // Token bucket algorithm
+ const bucket = {
+ tokens: burst || bytesPerSecond,
+ lastRefill: Date.now(),
+ capacity: burst || bytesPerSecond
+ }
+
+ // Refill tokens
+ const refill = () => {
+ const now = Date.now()
+ const elapsed = (now - bucket.lastRefill) / 1000
+ const tokensToAdd = elapsed * bytesPerSecond
+ bucket.tokens = Math.min(bucket.capacity, bucket.tokens + tokensToAdd)
+ bucket.lastRefill = now
+ }
+
+ // Wait for tokens
+ const waitForTokens = async (needed: number) => {
+ refill()
+ while (bucket.tokens < needed) {
+ const deficit = needed - bucket.tokens
+ const waitTime = (deficit / bytesPerSecond) * 1000
+ await this.delay(Math.min(waitTime, 100)) // Check every 100ms
+ refill()
+ }
+ bucket.tokens -= needed
+ }
+
+ // TODO: Implement actual throttling for the response stream
+ // For now, just execute the function
+ return fn()
+ }
+
+ private parseBandwidth(bandwidth: string): number {
+ const match = bandwidth.match(/^([\d.]+)\s*(B|KB|MB|GB)\/s$/i)
+ if (!match) throw new Error(`Invalid bandwidth format: ${bandwidth}`)
+
+ const value = parseFloat(match[1] || '0')
+ const unit = match[2]?.toUpperCase() || 'B'
+
+    const multipliers: Record<string, number> = {
+ 'B': 1,
+ 'KB': 1024,
+ 'MB': 1024 * 1024,
+ 'GB': 1024 * 1024 * 1024
+ }
+
+ return value * (multipliers[unit] || 1)
+ }
}
\ No newline at end of file
diff --git a/website/package.json b/website/package.json
index 1412bf3..50cefef 100644
--- a/website/package.json
+++ b/website/package.json
@@ -9,7 +9,7 @@
"preview": "vite preview"
},
"dependencies": {
- "@catalystlabs/typedfetch": "^0.1.3",
+ "@catalystlabs/typedfetch": "^0.2.0",
"@mantine/code-highlight": "^8.0.0",
"@mantine/core": "^8.0.0",
"@mantine/hooks": "^8.0.0",
diff --git a/website/src/App.tsx b/website/src/App.tsx
index b98ab80..0daa03b 100644
--- a/website/src/App.tsx
+++ b/website/src/App.tsx
@@ -17,6 +17,8 @@ const ErrorHandling = lazy(() => import('./pages/docs/ErrorHandling'));
const TypeInference = lazy(() => import('./pages/docs/TypeInference'));
const Caching = lazy(() => import('./pages/docs/Caching'));
const Interceptors = lazy(() => import('./pages/docs/Interceptors'));
+const AIMLUseCases = lazy(() => import('./pages/docs/AIMLUseCases'));
+const AxiosComparison = lazy(() => import('./pages/docs/AxiosComparison'));
const APIReference = lazy(() => import('./pages/docs/APIReference'));
import '@mantine/core/styles.css';
import '@mantine/code-highlight/styles.css';
@@ -72,6 +74,16 @@ export function App() {
} />
+                <Route path="ai-ml-use-cases" element={
+                  <Suspense fallback={<div>Loading...</div>}>
+                    <AIMLUseCases />
+                  </Suspense>
+                } />
+                <Route path="axios-comparison" element={
+                  <Suspense fallback={<div>Loading...</div>}>
+                    <AxiosComparison />
+                  </Suspense>
+                } />
                <Route path="api-reference" element={
                  <Suspense fallback={<div>Loading...</div>}>