Handle production traffic with smart caching, rate limits, and connection management.
Kadindexer enforces rate limits based on your subscription tier.
Tier | Monthly Credits | Burst Rate Limit | Max Query Complexity | SLA | Support |
---|---|---|---|---|---|
Basic (Free) | 10,000 | 10 req/min | 2,500 | 99.5% | Community |
Developer | 2,500,000 | 100 req/min | 5,000 | 99.9% | Priority |
Team | 10,000,000 | 1,000 req/min | 10,000 | 99.9% | Dedicated Team |
Pricing:
- Developer: $99/mo ($79/mo yearly)
- Team: $225/mo ($180/mo yearly)
All tiers include unlimited historical data access and support for all Kadena chains.
Monitor usage via response headers:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 73
X-RateLimit-Reset: 1609459200
```
Implement exponential backoff when rate limited:

```javascript
async function queryWithRetry(query, variables, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await client.request(query, variables);
    } catch (error) {
      if (error.response?.status === 429) {
        const delay = Math.min(1000 * Math.pow(2, i), 60000); // 1s, 2s, 4s... capped at 60s
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Rate limit exceeded after retries');
}
```
```javascript
let remainingQuota = 100;

async function trackRateLimit(query, variables) {
  const response = await fetch('https://graph.kadindexer.io', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables })
  });
  // The header may be absent on some responses; keep the last known value
  const remaining = response.headers.get('X-RateLimit-Remaining');
  if (remaining !== null) {
    remainingQuota = parseInt(remaining, 10);
  }
  if (remainingQuota < 10) {
    console.warn('Approaching rate limit:', remainingQuota);
  }
  return response.json();
}
```
Cache immutable blockchain data aggressively. Cache recent data conservatively.
Data Type | TTL | Rationale |
---|---|---|
Finalized blocks (minimumDepth: 20+) | Forever (no TTL) | Immutable after finality |
Historical transactions | Forever | Never change |
Account balances | 5 minutes | Updates with new transactions |
Recent transactions (minimumDepth: 0-5) | 1 minute | May be reorganized |
Network stats | 30 seconds | Frequently changing |
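The table above can be encoded directly as a configuration object (a sketch; the key names are ours, and `0` follows node-cache's convention for "no expiry"):

```javascript
// TTLs in seconds, mirroring the caching table above.
// 0 means "never expires" (node-cache convention).
const CACHE_TTL_SECONDS = {
  finalizedBlock: 0,
  historicalTransaction: 0,
  accountBalance: 300,   // 5 minutes
  recentTransaction: 60, // 1 minute
  networkStats: 30
};
```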
```javascript
import NodeCache from 'node-cache';

// Separate caches for different TTLs
const permanentCache = new NodeCache({ stdTTL: 0 }); // Never expires
const balanceCache = new NodeCache({ stdTTL: 300 }); // 5 minutes
const recentCache = new NodeCache({ stdTTL: 60 });   // 1 minute

async function getBlock(height, chainId) {
  const key = `block_${chainId}_${height}`;
  // Check if finalized (height < currentHeight - 20);
  // getCurrentChainHeight is your own helper that queries the chain tip
  const currentHeight = await getCurrentChainHeight(chainId);
  const isFinalized = height < (currentHeight - 20);
  if (isFinalized) {
    const cached = permanentCache.get(key);
    if (cached) return cached;
  }
  const result = await client.request(blockQuery, { height, chainId });
  if (isFinalized) {
    permanentCache.set(key, result);
  } else {
    recentCache.set(key, result);
  }
  return result;
}
```
```javascript
async function getAccountBalance(accountName, chainId) {
  const key = `balance_${accountName}_${chainId}`;
  const cached = balanceCache.get(key);
  if (cached) return cached;
  const result = await client.request(balanceQuery, { accountName, chainId });
  balanceCache.set(key, result);
  return result;
}
```
Configure your HTTP client to respect standard cache headers:

```javascript
import { GraphQLClient } from 'graphql-request';

const client = new GraphQLClient('https://graph.kadindexer.io', {
  headers: {
    'Cache-Control': 'max-age=300', // 5 minutes
  }
});
```
Reuse HTTP connections to reduce overhead and improve performance.

```javascript
import { GraphQLClient } from 'graphql-request';

// Create one client and reuse it for all queries
const client = new GraphQLClient('https://graph.kadindexer.io', {
  keepalive: true,
  timeout: 30000,
  headers: {
    'Content-Type': 'application/json'
  }
});

// Export for reuse across your app
export async function queryKadindexer(query, variables) {
  return client.request(query, variables);
}
```
```javascript
import { createClient } from 'graphql-ws';

class SubscriptionManager {
  constructor() {
    this.client = createClient({
      url: 'wss://graph.kadindexer.io/graphql',
      keepAlive: 10000,
      retryAttempts: 5
    });
    this.activeSubscriptions = new Map();
  }

  subscribe(query, variables, onData) {
    const key = `${query}_${JSON.stringify(variables)}`;
    // Reuse existing subscription
    if (this.activeSubscriptions.has(key)) {
      return this.activeSubscriptions.get(key);
    }
    const unsubscribe = this.client.subscribe(
      { query, variables },
      {
        next: onData,
        error: (err) => console.error('Subscription error:', err),
        complete: () => this.activeSubscriptions.delete(key)
      }
    );
    this.activeSubscriptions.set(key, unsubscribe);
    return unsubscribe;
  }

  cleanup() {
    this.activeSubscriptions.forEach(unsub => unsub());
    this.activeSubscriptions.clear();
    this.client.dispose();
  }
}

// Usage
const manager = new SubscriptionManager();
manager.subscribe(
  newBlocksQuery,
  { chainIds: ['0', '1'], quantity: 20 },
  (data) => console.log('New blocks:', data)
);
```
Upgrade to Developer when:

- Hitting the 10 req/min burst limit regularly
- Needing higher query complexity (5,000 vs 2,500)
- Running a production application with real users
- Needing priority support response

Upgrade to Team when:

- Requiring 100+ req/min sustained traffic
- Building high-complexity analytics dashboards
- Running multiple dApps or services on Kadindexer
- Needing a dedicated support team
- Meeting enterprise SLA requirements
Contact for custom plans: toni@hackachain.io
Track these metrics to identify bottlenecks and optimize performance.
```javascript
const metrics = {
  requestsPerMinute: 0, // reset once per minute by your scheduler
  avgResponseTime: 0,
  errorRate: 0,         // fraction of failed requests, 0-1
  cacheHitRate: 0,
  complexityUsage: 0
};

let requestCount = 0;
let errorCount = 0;

// Track request metrics
async function monitoredRequest(query, variables) {
  const start = Date.now();
  metrics.requestsPerMinute++;
  requestCount++;
  try {
    const result = await client.request(query, variables);
    // Exponentially weighted moving average of response time
    const duration = Date.now() - start;
    metrics.avgResponseTime =
      (metrics.avgResponseTime * 0.9) + (duration * 0.1);
    return result;
  } catch (error) {
    errorCount++;
    throw error;
  } finally {
    metrics.errorRate = errorCount / requestCount;
  }
}
```
Metric | Target | Warning | Critical |
---|---|---|---|
P95 Response Time | <300ms | >500ms | >1000ms |
Error Rate | <0.1% | >1% | >5% |
Rate Limit Usage | <70% | >80% | >95% |
Cache Hit Rate | >80% | <70% | <50% |
```javascript
// Example: Alert when approaching rate limit
const rateLimit = 100; // your tier's burst limit (req/min)

function checkMetrics() {
  if (remainingQuota < (rateLimit * 0.2)) {
    alertTeam('Approaching rate limit: ' + remainingQuota);
  }
  if (metrics.avgResponseTime > 1000) {
    alertTeam('High response times: ' + metrics.avgResponseTime + 'ms');
  }
  if (metrics.errorRate > 0.01) {
    alertTeam('High error rate: ' + (metrics.errorRate * 100) + '%');
  }
}

setInterval(checkMetrics, 60000); // Check every minute
```
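The metrics object above declares `cacheHitRate`, but nothing in the snippet updates it. One way is to record every cache lookup (a sketch; the names are ours):

```javascript
// Record cache lookups so a hit rate can be derived.
const cacheStats = { hits: 0, lookups: 0 };

function recordCacheLookup(hit) {
  cacheStats.lookups++;
  if (hit) cacheStats.hits++;
  return cacheStats.hits / cacheStats.lookups; // current hit rate, 0-1
}

// Wire it into any get-or-fetch path, e.g.:
// const cached = balanceCache.get(key);
// metrics.cacheHitRate = recordCacheLookup(cached !== undefined);
```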
Cache network configuration at startup:

```javascript
async function initializeApp() {
  const networkInfo = await client.request(`
    query {
      networkInfo {
        networkId
        numberOfChains
        nodeChains
        coinsInCirculation
      }
      graphConfiguration {
        version
        minimumBlockHeight
      }
    }
  `);
  // permanentCache: the no-TTL NodeCache from the caching section
  permanentCache.set('network_info', networkInfo);
}
```
Instead of sequential requests per chain:

```javascript
// ❌ Sequential (slow, uses more rate limit)
for (const chainId of ['0', '1', '2']) {
  await getChainActivity(chainId);
}

// ✅ Batched (fast, single request)
const result = await client.request(`
  query {
    c0: transactions(chainId: "0", first: 10) { ... }
    c1: transactions(chainId: "1", first: 10) { ... }
    c2: transactions(chainId: "2", first: 10) { ... }
  }
`);
```
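If the chain list is dynamic, the aliased query can be generated instead of hand-written. A sketch (the helper name and the `hash`-only selection set are illustrative; adjust to your needs):

```javascript
// Build one aliased query covering several chains, so a single
// request replaces N sequential ones.
function buildChainBatchQuery(chainIds, pageSize = 10) {
  const fields = chainIds.map(id =>
    `c${id}: transactions(chainId: "${id}", first: ${pageSize}) {
      edges { node { hash } }
    }`
  ).join('\n');
  return `query {\n${fields}\n}`;
}

const batched = buildChainBatchQuery(['0', '1', '2']);
// then: await client.request(batched)
```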
Load critical data first, details later. Pass values through GraphQL variables rather than string interpolation to avoid malformed or injected queries:

```javascript
async function loadAccountDashboard(accountName) {
  // 1. Fast: Show balance immediately
  const summary = await client.request(`
    query ($accountName: String!) {
      fungibleAccount(accountName: $accountName) {
        totalBalance
      }
    }
  `, { accountName });
  renderBalance(summary);

  // 2. Background: Load transaction history
  const history = await client.request(`
    query ($accountName: String!) {
      fungibleAccount(accountName: $accountName) {
        transactions(first: 20) {
          edges {
            node {
              hash
              cmd { meta { creationTime sender } }
            }
          }
        }
      }
    }
  `, { accountName });
  renderHistory(history);
}
```
Prevent excessive queries from search/filter inputs:

```javascript
import debounce from 'lodash/debounce';

const searchTransactions = debounce(async (searchTerm) => {
  const result = await client.request(query, {
    accountName: searchTerm
  });
  updateUI(result);
}, 500); // Wait 500ms after the user stops typing
```
Each tier has a maximum query complexity limit. Structure queries to stay within bounds.

- Free tier: 2,500 max complexity
- Developer tier: 5,000 max complexity
- Team tier: 10,000 max complexity

```
complexity = base_value × (first × nested_first × ...)
```
Example:

```graphql
query {
  blocks(first: 50) {                  # 50
    edges {
      node {
        transactions(first: 20) {      # 50 × 20 = 1,000
          edges {
            node {
              result {
                ... on TransactionResult {
                  events(first: 10) {  # 1,000 × 10 = 10,000 (exceeds free tier!)
                    edges { node { parameters } }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
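The multiplication can be checked client-side before sending a query. A sketch of the estimate (the server's exact cost model may weight fields differently; treat this as a lower bound):

```javascript
// Estimate complexity from nested `first` page sizes, outermost first,
// following the multiplicative formula above.
function estimateComplexity(pageSizes) {
  return pageSizes.reduce((product, n) => product * n, 1);
}

// blocks(first: 50) → transactions(first: 20) → events(first: 10)
const cost = estimateComplexity([50, 20, 10]); // 10,000: over the free tier's 2,500
```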
Split queries:

```graphql
# Query 1: Get blocks
query { blocks(first: 50) { edges { node { hash height } } } }

# Query 2: Get transactions for a specific block
query {
  block(hash: "ABC123") {
    transactions(first: 50) { ... }
  }
}
```
Reduce page sizes:

```graphql
# Instead of first: 100
query {
  blocks(first: 20) {
    edges {
      node {
        transactions(first: 20) { ... }
      }
    }
  }
}
```
Before going to production:
- Caching implemented with appropriate TTLs
- Rate limit monitoring and exponential backoff
- HTTP connection pooling enabled
- Query complexity under tier limits
- Performance metrics tracked
- Alerts configured for critical thresholds
- Appropriate tier selected for traffic volume
- Queries optimized (see Query Optimization)
- Deploy safely: Production Readiness →
- Optimize queries: Query Optimization →
- Explore schema: GraphQL API Reference →
Need help scaling? toni@hackachain.io