
Handle production traffic with smart caching, rate limits, and connection management.


Rate Limits

Kadindexer enforces rate limits based on your subscription tier.

| Tier | Monthly Credits | Burst Rate Limit | Max Query Complexity | SLA | Support |
| --- | --- | --- | --- | --- | --- |
| Basic (Free) | 10,000 | 10 req/min | 2,500 | 99.5% | Community |
| Developer | 2,500,000 | 100 req/min | 5,000 | 99.9% | Priority |
| Team | 10,000,000 | 1,000 req/min | 10,000 | 99.9% | Dedicated Team |

Pricing:

  • Developer: $99/mo ($79/mo yearly)
  • Team: $225/mo ($180/mo yearly)

All tiers include unlimited historical data access and support for all Kadena chains.

Monitor usage via response headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 73
X-RateLimit-Reset: 1609459200

Handling 429 Responses

Implement exponential backoff when rate limited:

async function queryWithRetry(query, variables, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await client.request(query, variables);
    } catch (error) {
      if (error.response?.status === 429) {
        const delay = Math.min(1000 * Math.pow(2, i), 60000); // Cap at 60s
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Rate limit exceeded after retries');
}
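Servers commonly include a `Retry-After` header on 429 responses, as either delta-seconds or an HTTP date. A minimal sketch for deriving a wait time from it, falling back to the exponential-backoff delay computed above when the header is absent or unparseable (`retryDelayMs` is an illustrative helper, not part of any client library):

```javascript
// Derive a wait time (ms) from a 429 response's Retry-After header.
// Falls back to the caller-supplied backoff delay when the header is
// missing or cannot be parsed.
function retryDelayMs(retryAfter, fallbackMs) {
  if (!retryAfter) return fallbackMs;
  const seconds = Number(retryAfter);
  if (!Number.isNaN(seconds)) return seconds * 1000; // delta-seconds form
  const date = Date.parse(retryAfter);               // HTTP-date form
  return Number.isNaN(date) ? fallbackMs : Math.max(0, date - Date.now());
}
```

Inside the retry loop, you would read the header from the error response (where your client exposes it) and pass the computed backoff as the fallback.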

Track Remaining Quota

let remainingQuota = 100;

async function trackRateLimit(query, variables) {
  const response = await fetch('https://graph.kadindexer.io', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables })
  });
  
  const remaining = response.headers.get('X-RateLimit-Remaining');
  if (remaining !== null) {
    remainingQuota = parseInt(remaining, 10);
  }
  
  if (remainingQuota < 10) {
    console.warn('Approaching rate limit:', remainingQuota);
  }
  
  return response.json();
}

Caching Strategy

Cache immutable blockchain data aggressively. Cache recent data conservatively.

What to Cache

| Data Type | TTL | Rationale |
| --- | --- | --- |
| Finalized blocks (minimumDepth: 20+) | Forever (no TTL) | Immutable after finality |
| Historical transactions | Forever | Never change |
| Account balances | 5 minutes | Updates with new transactions |
| Recent transactions (minimumDepth: 0-5) | 1 minute | May be reorganized |
| Network stats | 30 seconds | Frequently changing |

Implementation

import NodeCache from 'node-cache';

// Separate caches for different TTLs
const permanentCache = new NodeCache({ stdTTL: 0 }); // Never expires
const balanceCache = new NodeCache({ stdTTL: 300 }); // 5 minutes
const recentCache = new NodeCache({ stdTTL: 60 });   // 1 minute

async function getBlock(height, chainId) {
  const key = `block_${chainId}_${height}`;
  
  // Check if finalized (height < currentHeight - 20)
  const currentHeight = await getCurrentChainHeight(chainId);
  const isFinalized = height < (currentHeight - 20);
  
  if (isFinalized) {
    const cached = permanentCache.get(key);
    if (cached) return cached;
  }
  
  const result = await client.request(blockQuery, { height, chainId });
  
  if (isFinalized) {
    permanentCache.set(key, result);
  } else {
    recentCache.set(key, result);
  }
  
  return result;
}

async function getAccountBalance(accountName, chainId) {
  const key = `balance_${accountName}_${chainId}`;
  
  const cached = balanceCache.get(key);
  if (cached) return cached;
  
  const result = await client.request(balanceQuery, { accountName, chainId });
  balanceCache.set(key, result);
  
  return result;
}

HTTP Caching Headers

Send a standard Cache-Control request header to tell intermediary caches how stale a response you will accept:

import { GraphQLClient } from 'graphql-request';

const client = new GraphQLClient('https://graph.kadindexer.io', {
  headers: {
    'Cache-Control': 'max-age=300', // 5 minutes
  }
});

Connection Management

Reuse HTTP connections to reduce overhead and improve performance.

HTTP Connection Pooling

import { GraphQLClient } from 'graphql-request';

// Create one client, reuse for all queries
const client = new GraphQLClient('https://graph.kadindexer.io', {
  keepalive: true,
  timeout: 30000,
  headers: {
    'Content-Type': 'application/json'
  }
});

// Export for reuse across your app
export async function queryKadindexer(query, variables) {
  return client.request(query, variables);
}

For WebSocket Subscriptions

import { createClient } from 'graphql-ws';

class SubscriptionManager {
  constructor() {
    this.client = createClient({
      url: 'wss://graph.kadindexer.io/graphql',
      keepAlive: 10000,
      retryAttempts: 5
    });
    this.activeSubscriptions = new Map();
  }
  
  subscribe(query, variables, onData) {
    const key = `${query}_${JSON.stringify(variables)}`;
    
    // Reuse existing subscription
    if (this.activeSubscriptions.has(key)) {
      return this.activeSubscriptions.get(key);
    }
    
    const unsubscribe = this.client.subscribe(
      { query, variables },
      {
        next: onData,
        error: (err) => console.error('Subscription error:', err),
        complete: () => this.activeSubscriptions.delete(key)
      }
    );
    
    this.activeSubscriptions.set(key, unsubscribe);
    return unsubscribe;
  }
  
  cleanup() {
    this.activeSubscriptions.forEach(unsub => unsub());
    this.activeSubscriptions.clear();
    this.client.dispose();
  }
}

// Usage
const manager = new SubscriptionManager();

manager.subscribe(
  newBlocksQuery,
  { chainIds: ['0', '1'], quantity: 20 },
  (data) => console.log('New blocks:', data)
);

When to Upgrade

Upgrade to Developer ($99/mo) if:

  • Hitting 10 req/min burst limit regularly
  • Need higher query complexity (5,000 vs 2,500)
  • Production application with real users
  • Need priority support response

Upgrade to Team ($225/mo) if:

  • Requiring 100+ req/min sustained traffic
  • Building high-complexity analytics dashboards
  • Multiple dApps or services using Kadindexer
  • Need dedicated support team
  • Enterprise SLA requirements

Contact for custom plans: toni@hackachain.io


Performance Monitoring

Track these metrics to identify bottlenecks and optimize performance.

Essential Metrics

const metrics = {
  requestsPerMinute: 0,
  avgResponseTime: 0,
  errorRate: 0,
  cacheHitRate: 0,
  complexityUsage: 0
};

// Track request metrics
async function monitoredRequest(query, variables) {
  const start = Date.now();
  metrics.requestsPerMinute++;
  
  try {
    const result = await client.request(query, variables);
    const duration = Date.now() - start;
    
    // Moving average
    metrics.avgResponseTime = 
      (metrics.avgResponseTime * 0.9) + (duration * 0.1);
    
    return result;
  } catch (error) {
    metrics.errorRate++;
    throw error;
  }
}
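The `metrics.cacheHitRate` field above is never populated by the sample. A minimal sketch of tracking it, using a plain `Map` as the cache (`cachedLookup` and `loadFn` are illustrative names; in real use `loadFn` would be a `client.request` call and the lookup would be async):

```javascript
// Track cache hit rate by routing all cache reads through one helper.
const cacheStats = { hits: 0, lookups: 0 };

function cachedLookup(cache, key, loadFn) {
  cacheStats.lookups++;
  if (cache.has(key)) {
    cacheStats.hits++;
    return cache.get(key);
  }
  const value = loadFn(key); // cache miss: load and store
  cache.set(key, value);
  return value;
}

function cacheHitRate() {
  return cacheStats.lookups === 0 ? 0 : cacheStats.hits / cacheStats.lookups;
}
```

Feed `cacheHitRate()` into `metrics.cacheHitRate` on the same interval as your other checks.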

Performance Targets

| Metric | Target | Warning | Critical |
| --- | --- | --- | --- |
| P95 Response Time | <300ms | >500ms | >1000ms |
| Error Rate | <0.1% | >1% | >5% |
| Rate Limit Usage | <70% | >80% | >95% |
| Cache Hit Rate | >80% | <70% | <50% |

Configure Alerts

// Example: Alert when approaching rate limit
const rateLimit = 100; // burst limit for your tier (req/min)

function checkMetrics() {
  if (remainingQuota < (rateLimit * 0.2)) {
    alertTeam('Approaching rate limit: ' + remainingQuota);
  }
  
  if (metrics.avgResponseTime > 1000) {
    alertTeam('High response times: ' + metrics.avgResponseTime + 'ms');
  }
  
  if (metrics.errorRate > 0.01) {
    alertTeam('High error rate: ' + (metrics.errorRate * 100) + '%');
  }
}

setInterval(checkMetrics, 60000); // Check every minute

Optimization Patterns

Pattern 1: Prefetch Static Data

Cache network configuration at startup:

async function initializeApp() {
  const networkInfo = await client.request(`
    query {
      networkInfo {
        networkId
        numberOfChains
        nodeChains
        coinsInCirculation
      }
      graphConfiguration {
        version
        minimumBlockHeight
      }
    }
  `);
  
  permanentCache.set('network_info', networkInfo);
}

Pattern 2: Batch Multi-Chain Queries

Instead of sequential requests per chain:

// ❌ Sequential (slow, uses more rate limit)
for (const chainId of ['0', '1', '2']) {
  await getChainActivity(chainId);
}

// ✅ Batched (fast, single request)
const result = await client.request(`
  query {
    c0: transactions(chainId: "0", first: 10) { ... }
    c1: transactions(chainId: "1", first: 10) { ... }
    c2: transactions(chainId: "2", first: 10) { ... }
  }
`);
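When the chain list isn't fixed, the aliased pattern above can be generated programmatically. A sketch assuming the same `transactions(chainId:, first:)` field; the selection set `{ edges { node { hash } } }` is illustrative, so adjust it to the fields you need:

```javascript
// Build one aliased query covering any number of chains,
// so N chains cost one request instead of N.
function buildMultiChainQuery(chainIds, first = 10) {
  const fields = chainIds
    .map((id, i) =>
      `c${i}: transactions(chainId: "${id}", first: ${first}) { edges { node { hash } } }`)
    .join('\n  ');
  return `query {\n  ${fields}\n}`;
}

const query = buildMultiChainQuery(['0', '1', '2'], 10);
// then: await client.request(query);
```

Note this interpolates only your own chain-ID constants, never user input; user-supplied values should always go through GraphQL variables.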

Pattern 3: Progressive Loading

Load critical data first, details later:

async function loadAccountDashboard(accountName) {
  // 1. Fast: Show balance immediately (pass the name as a GraphQL
  //    variable; never interpolate user input into the query string)
  const summary = await client.request(`
    query ($accountName: String!) {
      fungibleAccount(accountName: $accountName) {
        totalBalance
      }
    }
  `, { accountName });
  
  renderBalance(summary);
  
  // 2. Background: Load transaction history
  const history = await client.request(`
    query ($accountName: String!) {
      fungibleAccount(accountName: $accountName) {
        transactions(first: 20) {
          edges {
            node {
              hash
              cmd { meta { creationTime sender } }
            }
          }
        }
      }
    }
  `, { accountName });
  
  renderHistory(history);
}

Pattern 4: Debounce User Input

Prevent excessive queries from search/filter inputs:

import debounce from 'lodash/debounce';

const searchTransactions = debounce(async (searchTerm) => {
  const result = await client.request(query, { 
    accountName: searchTerm 
  });
  updateUI(result);
}, 500); // Wait 500ms after user stops typing
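Debouncing limits how often queries fire, but a slow response can still arrive after the user has typed something new. A sketch of cancelling the stale in-flight request with an `AbortController` (`startSearch` and `doFetch` are illustrative names; `doFetch` stands in for any fetch-based call that accepts a signal):

```javascript
// Cancel the previous, now-stale search whenever a new one starts.
let currentSearch = null;

function startSearch(term, doFetch) {
  if (currentSearch) currentSearch.abort(); // abort the in-flight request
  currentSearch = new AbortController();
  return doFetch(term, currentSearch.signal); // pass the signal to fetch()
}
```

Combined with the debounce above, this guarantees at most one search request is in flight and that only the latest result reaches the UI.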

Query Complexity Management

Each tier has a maximum query complexity limit. Structure queries to stay within bounds.

  • Free tier: 2,500 max complexity
  • Developer tier: 5,000 max complexity
  • Team tier: 10,000 max complexity

Complexity Calculation

complexity = base_value × (first × nested_first × ...)

Example:

query {
  blocks(first: 50) {              # 50
    edges {
      node {
        transactions(first: 20) {  # 50 × 20 = 1,000
          edges {
            node {
              result {
                ... on TransactionResult {
                  events(first: 10) { # 1,000 × 10 = 10,000 (exceeds free tier!)
                    edges { node { parameters } }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
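The multiplication rule above is easy to pre-check before sending a query. A sketch that multiplies the nested `first` values; treat it as an approximation, since the server's exact accounting may also apply per-field base values:

```javascript
// Estimate nested query complexity from its page sizes,
// outermost first: product of all `first` arguments.
function estimateComplexity(pageSizes) {
  return pageSizes.reduce((total, size) => total * size, 1);
}

// The example query: blocks(first: 50) → transactions(first: 20) → events(first: 10)
estimateComplexity([50, 20, 10]); // 10,000 — over the free tier's 2,500 cap
```

Running this at development time (or in a CI check on your stored queries) catches over-budget queries before they hit the rate limiter.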

Staying Under Limits

Split queries:

# Query 1: Get blocks
query { blocks(first: 50) { edges { node { hash height } } } }

# Query 2: Get transactions for specific block
query { 
  block(hash: "ABC123") { 
    transactions(first: 50) { ... } 
  } 
}

Reduce page sizes:

# Instead of first: 100
query {
  blocks(first: 20) {
    edges {
      node {
        transactions(first: 20) { ... }
      }
    }
  }
}

Checklist

Before going to production:

  • Caching implemented with appropriate TTLs
  • Rate limit monitoring and exponential backoff
  • HTTP connection pooling enabled
  • Query complexity under tier limits
  • Performance metrics tracked
  • Alerts configured for critical thresholds
  • Appropriate tier selected for traffic volume
  • Queries optimized (see Query Optimization)

Next Steps

Need help scaling? toni@hackachain.io