Overview

The Ilumiera AI B2B API may apply rate limiting to prevent abuse and ensure service quality for all users. During the beta period, rate limits are generous and subject to change based on usage patterns.

Current Status

  • Beta Period: Rate limiting may be applied, but specific limits are not yet defined
  • CORS: Enabled for all origins during beta
  • HTTPS Only: All endpoints are served over HTTPS

Rate Limit Headers

When rate limiting is implemented, API responses will include standard rate limit headers:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 850
X-RateLimit-Reset: 1640995200
  • X-RateLimit-Limit: Your request limit for the time window
  • X-RateLimit-Remaining: Number of requests remaining
  • X-RateLimit-Reset: Unix timestamp when the limit resets
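Because X-RateLimit-Reset is a Unix timestamp in seconds, clients typically need to convert it into a wait duration. A minimal sketch (the helper name and signature are illustrative, not part of the API):

```javascript
// Hypothetical helper: given the X-RateLimit-Reset header value (Unix
// seconds) and the current time in milliseconds, return how many whole
// seconds remain until the rate limit window resets (never negative).
function secondsUntilReset(resetHeader, nowMs = Date.now()) {
  const resetMs = Number(resetHeader) * 1000;
  return Math.max(0, Math.ceil((resetMs - nowMs) / 1000));
}
```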

Best Practices

1. Monitor Response Headers

Always check for rate limit headers in API responses:
const response = await fetch(
  "https://b2b-api-backend-95487.ondigitalocean.app/source/ingestSource",
  {
    method: "POST",
    headers: {
      "api-key": "YOUR_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(data),
  }
);

// Check rate limit headers
const rateLimit = response.headers.get("X-RateLimit-Limit");
const remaining = response.headers.get("X-RateLimit-Remaining");
const resetTime = response.headers.get("X-RateLimit-Reset");

// Header values are strings (or null when the header is absent)
if (remaining !== null && rateLimit !== null) {
  console.log(`Requests remaining: ${remaining}/${rateLimit}`);
}

2. Handle Rate Limit Errors

If rate limiting is enforced, you’ll receive a 429 status code:
if (response.status === 429) {
  const retryAfter = response.headers.get("Retry-After") || 60;
  console.log(`Rate limited. Retry after ${retryAfter} seconds`);

  // Wait before retrying
  await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
}

3. Implement Exponential Backoff

Use exponential backoff for automatic retries:
async function makeRequestWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.ok) {
      return response;
    }

    if (response.status === 429 && attempt < maxRetries - 1) {
      const delay = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
      await new Promise((resolve) => setTimeout(resolve, delay));
      continue;
    }

    throw new Error(`Request failed: ${response.status}`);
  }
}
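The two strategies above can be combined: when the server sends a Retry-After header (in seconds), prefer it over the computed backoff. A sketch of the delay calculation as a pure helper (the function name and defaults are illustrative):

```javascript
// Hypothetical delay calculator: use the server's Retry-After value
// (seconds) when it is present and valid; otherwise fall back to
// exponential backoff based on the attempt number.
function backoffDelayMs(attempt, retryAfterHeader = null, baseMs = 1000) {
  if (retryAfterHeader !== null) {
    const seconds = Number(retryAfterHeader);
    if (Number.isFinite(seconds) && seconds >= 0) {
      return seconds * 1000;
    }
  }
  return Math.pow(2, attempt) * baseMs; // 1s, 2s, 4s, ...
}
```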

4. Optimize API Usage

To reduce the chance of hitting rate limits:

Batch Operations

Instead of generating a quiz, flashcards, and a mindmap in rapid succession, space out the requests:
// Space out content generation requests
async function generateAllContent(sourceId, userId) {
  const delay = 1000; // 1 second between requests

  // Generate quiz
  const quiz = await generateQuiz(sourceId, userId);
  await new Promise((resolve) => setTimeout(resolve, delay));

  // Generate flashcards
  const flashcards = await generateFlashcards(sourceId, userId);
  await new Promise((resolve) => setTimeout(resolve, delay));

  // Generate mindmap
  const mindmap = await generateMindmap(sourceId, userId);

  return { quiz, flashcards, mindmap };
}
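The same spacing pattern can be generalized so the delay logic isn't repeated for every content type. A minimal sketch, assuming each task is an async function (runSpaced and its parameters are illustrative names, not part of the API):

```javascript
// Hypothetical generic runner: execute a list of async tasks one at a
// time, waiting delayMs between tasks, and return their results in order.
async function runSpaced(tasks, delayMs = 1000) {
  const results = [];
  for (const [i, task] of tasks.entries()) {
    results.push(await task());
    // No need to wait after the final task
    if (i < tasks.length - 1) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return results;
}
```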

Cache Responses

Store generated content to avoid repeated API calls:
const contentCache = new Map();

async function getCachedOrGenerate(sourceId, contentType, generateFn) {
  const cacheKey = `${sourceId}-${contentType}`;

  if (contentCache.has(cacheKey)) {
    return contentCache.get(cacheKey);
  }

  const content = await generateFn();
  contentCache.set(cacheKey, content);

  return content;
}

Monitor Usage Patterns

Track your API usage to identify optimization opportunities:
class ApiUsageTracker {
  constructor() {
    this.requests = [];
  }

  trackRequest(endpoint, timestamp = Date.now()) {
    this.requests.push({ endpoint, timestamp });

    // Clean up old entries (older than 1 hour)
    const oneHourAgo = timestamp - 3600000;
    this.requests = this.requests.filter((r) => r.timestamp > oneHourAgo);
  }

  getUsageStats() {
    const now = Date.now();
    const oneHourAgo = now - 3600000;

    const recentRequests = this.requests.filter(
      (r) => r.timestamp > oneHourAgo
    );

    return {
      totalRequests: recentRequests.length,
      requestsPerEndpoint: this.groupByEndpoint(recentRequests),
      averageRequestsPerMinute: recentRequests.length / 60,
    };
  }

  groupByEndpoint(requests) {
    return requests.reduce((acc, req) => {
      acc[req.endpoint] = (acc[req.endpoint] || 0) + 1;
      return acc;
    }, {});
  }
}

Planning for Scale

As you scale your usage of the Ilumiera AI B2B API:
  1. Monitor Usage: Keep track of your request patterns
  2. Implement Caching: Cache responses when appropriate
  3. Queue Requests: Use a queue system for non-urgent operations
  4. Contact Support: Reach out if you need higher limits
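For the queueing step, a minimal serial queue is often enough: non-urgent calls run one at a time, spaced by a fixed interval, instead of firing concurrently. A sketch under that assumption (class and method names are illustrative):

```javascript
// Hypothetical serial request queue: tasks (functions returning promises)
// run one at a time, separated by intervalMs of idle time.
class RequestQueue {
  constructor(intervalMs = 1000) {
    this.intervalMs = intervalMs;
    this.chain = Promise.resolve();
  }

  // Returns a promise for the task's own result; the internal chain
  // swallows task errors so one failure doesn't stall the queue.
  enqueue(task) {
    const result = this.chain.then(task);
    this.chain = result
      .catch(() => {})
      .then(() => new Promise((resolve) => setTimeout(resolve, this.intervalMs)));
    return result;
  }
}
```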

Future Updates

Rate limits are subject to change as the API evolves. We recommend:
  • Implementing flexible rate limit handling in your code
  • Subscribing to API updates for notifications about changes
  • Testing your rate limit handling logic regularly

Support

If you have questions about rate limits or need higher limits for your use case, please contact [email protected].