Write efficient GraphQL queries that return exactly what you need, faster.
GraphQL is designed around a graph data model. In Kadindexer, entities like `Block`, `Transaction`, `Account`, and `Transfer` are interconnected nodes. Query these relationships in a single request rather than making multiple round trips.
Example: Account with nested data
```graphql
query {
  fungibleAccount(
    accountName: "k:5a2afbc4564b76b2c27ce5a644cab643c43663835ea0be22433b209d3351f937"
    fungibleName: "coin"
  ) {
    totalBalance
    transactions(first: 10) {
      edges {
        node {
          hash
          cmd {
            meta {
              creationTime
            }
          }
        }
      }
    }
    transfers(first: 10) {
      edges {
        node {
          amount
          receiverAccount
        }
      }
    }
  }
}
```
This single query replaces 3 separate API calls.
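From JavaScript, issuing this query is one awaited call. A minimal sketch, assuming the `graphql-request` client (any GraphQL client works) and a placeholder endpoint URL:

```javascript
import { GraphQLClient, gql } from 'graphql-request';

// Placeholder endpoint; substitute your actual Kadindexer URL.
const client = new GraphQLClient('https://example.com/graphql');

const accountOverview = gql`
  query AccountOverview($accountName: String!) {
    fungibleAccount(accountName: $accountName, fungibleName: "coin") {
      totalBalance
      transactions(first: 10) { edges { node { hash } } }
      transfers(first: 10) { edges { node { amount receiverAccount } } }
    }
  }
`;

// One round trip returns balance, transactions, and transfers together.
const data = await client.request(accountOverview, {
  accountName: 'k:5a2afbc4564b76b2c27ce5a644cab643c43663835ea0be22433b209d3351f937',
});
```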
Every field you request adds to response size and processing time. Be explicit about fields.
❌ Over-fetching:
```graphql
query {
  transactions(first: 10) {
    edges {
      node {
        hash
        cmd {
          meta {
            chainId
            creationTime
            gasLimit
            gasPrice
            sender
            ttl
          }
          networkId
          nonce
          payload {
            ... on ExecutionPayload {
              code
              data
            }
          }
        }
        result {
          ... on TransactionResult {
            gas
            goodResult
            badResult
            logs
          }
        }
      }
    }
  }
}
```
✅ Optimized:
```graphql
query {
  transactions(first: 10) {
    edges {
      node {
        hash
        cmd {
          meta {
            sender
            creationTime
          }
        }
      }
    }
  }
}
```
Impact: 85% smaller payload, 4x faster parsing.
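The exact savings depend on your data; you can measure them for your own workload by comparing serialized response sizes. A rough sketch, where `fullQuery` and `leanQuery` stand for the two documents above (the names are illustrative):

```javascript
// Rough payload comparison: serialize each response and compare sizes.
// JSON.stringify length is an approximation of bytes on the wire.
const [full, lean] = await Promise.all([
  client.request(fullQuery),
  client.request(leanQuery),
]);

const fullBytes = JSON.stringify(full).length;
const leanBytes = JSON.stringify(lean).length;
console.log(`Payload reduced by ${(100 * (1 - leanBytes / fullBytes)).toFixed(1)}%`);
```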
Always paginate large result sets. Kadindexer uses cursor-based pagination following the Relay specification.
Pattern:
```graphql
query GetTransactions($cursor: String) {
  transactions(first: 50, after: $cursor) {
    edges {
      node {
        hash
      }
      cursor
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```
Best practices:
- Start with `first: 50` (a good balance between payload size and round trips)
- Maximum `first: 100` per request
- Use `totalCount` sparingly (it adds complexity cost)
- Store `endCursor` for subsequent pages
Example: Paginating through account transfers
```javascript
let cursor = null;
let hasNextPage = true;
const allTransfers = [];

while (hasNextPage) {
  const result = await client.request(query, {
    accountName: "k:abc...",
    cursor
  });
  allTransfers.push(...result.transfers.edges.map(e => e.node));
  cursor = result.transfers.pageInfo.endCursor;
  hasNextPage = result.transfers.pageInfo.hasNextPage;
}
```
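For larger jobs, wrapping this loop in an async generator keeps memory flat and lets callers stop early. A sketch, assuming the same `client` and a query with the `transfers`/`pageInfo` shape shown above:

```javascript
// Stream transfer nodes page by page; pages are fetched lazily on demand.
async function* transferPages(client, query, accountName) {
  let cursor = null;
  while (true) {
    const { transfers } = await client.request(query, { accountName, cursor });
    yield* transfers.edges.map(e => e.node);
    if (!transfers.pageInfo.hasNextPage) return;
    cursor = transfers.pageInfo.endCursor;
  }
}

// Callers can break out early; no further pages are requested after the break.
const firstBatch = [];
for await (const transfer of transferPages(client, query, "k:abc...")) {
  firstBatch.push(transfer);
  if (firstBatch.length >= 200) break;
}
```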
Push filtering logic to the server rather than fetching everything and filtering client-side.
❌ Client-side:
```javascript
const all = await client.request(`
  query {
    transactions(first: 1000) {
      edges { node { hash cmd { meta { sender chainId } } } }
    }
  }
`);
const filtered = all.transactions.edges
  .filter(e => e.node.cmd.meta.sender === "k:abc..." && e.node.cmd.meta.chainId === "1");
```
✅ Server-side:
```graphql
query {
  transactions(
    accountName: "k:5a2afbc4564b76b2c27ce5a644cab643c43663835ea0be22433b209d3351f937"
    chainId: "1"
    first: 50
  ) {
    edges {
      node {
        hash
      }
    }
  }
}
```
Available filters:
| Query | Key Filters |
|---|---|
| `transactions` | `accountName`, `chainId`, `minHeight`, `maxHeight`, `blockHash`, `requestKey`, `fungibleName` |
| `events` | `qualifiedEventName`, `chainId`, `minHeight`, `maxHeight`, `blockHash`, `requestKey`, `minimumDepth` |
| `transfers` | `accountName`, `chainId`, `blockHash`, `requestKey`, `fungibleName`, `isNFT` |
| `blocksFromHeight` | `chainIds`, `startHeight`, `endHeight` |
| `blocksFromDepth` | `chainIds`, `minimumDepth` |
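In client code these filters are just arguments, so they combine naturally with variables. A sketch of fetching one account's coin transfers on one chain, using the filters documented in the table above:

```javascript
import { gql } from 'graphql-request';

// Push all filtering to the server; the client receives only matching rows.
const transfersQuery = gql`
  query AccountTransfers($accountName: String!, $chainId: String!) {
    transfers(
      accountName: $accountName
      chainId: $chainId
      fungibleName: "coin"
      first: 50
    ) {
      edges { node { amount receiverAccount } }
    }
  }
`;

const { transfers } = await client.request(transfersQuery, {
  accountName: "k:abc...",
  chainId: "1",
});
```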
Never interpolate user input directly into queries. Use GraphQL variables.
❌ String interpolation (injection risk):
```javascript
const query = `
  query {
    fungibleAccount(accountName: "${userInput}") {
      totalBalance
    }
  }
`;
```
✅ Parameterized:
```graphql
query GetAccount($accountName: String!, $chainId: String!) {
  fungibleChainAccount(
    accountName: $accountName
    chainId: $chainId
    fungibleName: "coin"
  ) {
    balance
    guard {
      ... on KeysetGuard {
        keys
        predicate
      }
    }
  }
}
```

```javascript
const result = await client.request(query, {
  accountName: userInput,
  chainId: "1"
});
```
Kadindexer enforces complexity limits using the `@complexity` directive. Deeply nested queries with large pagination multipliers can exceed limits.
Complexity formula:
```
complexity = base_value × (multiplier1 × multiplier2 × ...)
```
Example: High complexity query
```graphql
query {
  blocks(first: 100) {                  # complexity: 100
    edges {
      node {
        transactions(first: 100) {      # complexity: 100 × 100 = 10,000
          edges {
            node {
              result {
                ... on TransactionResult {
                  events(first: 100) {  # complexity: 10,000 × 100 = 1,000,000!
                    edges {
                      node {
                        parameters
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
Response: `400 Query complexity exceeds maximum`
Solutions:
- Reduce pagination sizes:
```graphql
query {
  blocks(first: 20) {
    edges {
      node {
        transactions(first: 20) {
          edges {
            node {
              hash
            }
          }
        }
      }
    }
  }
}
```
- Split into multiple queries:
```graphql
# Query 1: Get blocks
query { blocks(first: 20) { edges { node { hash } } } }

# Query 2: Get transactions for a specific block
query { block(hash: "ABC...") { transactions(first: 50) { ... } } }
```
- Use targeted queries:
# Instead of blocks -> transactions -> events
# Query events directly with filters
```graphql
query {
  events(
    qualifiedEventName: "coin.TRANSFER"
    minHeight: 5000000
    first: 50
  ) {
    edges {
      node {
        parameterText
        block {
          height
        }
      }
    }
  }
}
```
Guidelines:
- Maximum query depth: 5 levels
- Pagination limit: 100 items per request
- Avoid nesting multiple paginated lists
- Prefer direct queries over deep nesting
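If a query still trips the limit in production, you can degrade gracefully by retrying with smaller page sizes instead of failing outright. A hedged sketch; the error-detection check below is an assumption, so match it to the error shape your client actually surfaces:

```javascript
// Retry with progressively smaller pages when the server rejects a query
// for exceeding its complexity limit. `buildQuery` is a hypothetical helper
// that returns a query document parameterized by page size.
async function requestWithSmallerPages(client, buildQuery, variables, sizes = [100, 50, 20]) {
  for (const first of sizes) {
    try {
      return await client.request(buildQuery(first), variables);
    } catch (err) {
      // Assumption: complexity rejections mention "complexity" in the message.
      if (!String(err).includes('complexity')) throw err; // unrelated failure
    }
  }
  throw new Error('Query too complex even at the smallest page size');
}
```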
Combine multiple queries into one request using aliases.
Example: Multi-chain dashboard
```graphql
query DashboardData {
  chain0: transactions(chainId: "0", first: 5, minHeight: 5000000) {
    edges {
      node {
        hash
        cmd { meta { creationTime } }
      }
    }
  }
  chain1: transactions(chainId: "1", first: 5, minHeight: 5000000) {
    edges {
      node {
        hash
        cmd { meta { creationTime } }
      }
    }
  }
  networkStats: networkInfo {
    transactionCount
    coinsInCirculation
  }
  accountBalance: fungibleChainAccount(
    accountName: "k:5a2afbc4564b76b2c27ce5a644cab643c43663835ea0be22433b209d3351f937"
    chainId: "1"
    fungibleName: "coin"
  ) {
    balance
  }
}
```
Benefits:
- 1 HTTP request instead of 4
- Lower rate limit consumption
- Consistent snapshot of data
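When the set of chains is dynamic, the aliased document can be built programmatically. A sketch, assuming the `client` from the earlier examples; the `c${id}` alias names are arbitrary labels for the response keys:

```javascript
// Build one aliased query for any list of chain IDs.
// Interpolation is safe here because chain IDs come from your own config,
// not from user input; anything user-supplied should still use variables.
function buildMultiChainQuery(chainIds) {
  const fields = chainIds
    .map(id => `
      c${id}: transactions(chainId: "${id}", first: 5) {
        edges { node { hash } }
      }`)
    .join('\n');
  return `query MultiChain {\n${fields}\n}`;
}

const data = await client.request(buildMultiChainQuery(['0', '1', '2']));
// data.c0, data.c1, data.c2 hold the per-chain results.
```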
All major types implement the `Node` interface with a globally unique `id`. Use this for caching and efficient lookups.
Example: Direct node lookup
```graphql
query {
  node(id: "VHJhbnNhY3Rpb246MTIzNDU2") {
    ... on Transaction {
      hash
      cmd {
        meta {
          sender
        }
      }
    }
  }
}
```
Example: Batch node lookup
```graphql
query {
  nodes(ids: [
    "QmxvY2s6YWJjMTIz",
    "VHJhbnNhY3Rpb246eHl6Nzg5"
  ]) {
    ... on Block {
      hash
      height
    }
    ... on Transaction {
      hash
    }
  }
}
```
This enables efficient client-side caching strategies (e.g., Apollo Client, urql).
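The example IDs above appear to follow the common Relay convention of base64-encoding a `Type:identifier` string; even so, treat them as opaque cache keys rather than parsing them in application logic. For illustration only:

```javascript
// Decoding one of the example IDs shows the apparent Relay-style structure.
// Do not rely on this format; the encoding is an implementation detail.
const [type, identifier] = atob('VHJhbnNhY3Rpb246MTIzNDU2').split(':');
// type === 'Transaction', identifier === '123456'
```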
Monitor query performance and optimize when thresholds are exceeded:
| Query Type | Target | Warning Threshold |
|---|---|---|
| Simple account query | <200ms | >500ms |
| Paginated list (50 items) | <300ms | >800ms |
| Complex nested query | <500ms | >1000ms |
| Historical range scan | <800ms | >2000ms |
Tracking performance:
```javascript
const start = Date.now();
const result = await client.request(query, variables);
const duration = Date.now() - start;

if (duration > 1000) {
  console.warn('Slow query', {
    duration,
    // `definitions` exists only on parsed documents (e.g. from the gql tag);
    // fall back to the raw string when the query is passed as a string.
    queryName: query.definitions?.[0]?.name?.value ?? String(query).slice(0, 80),
    variables
  });
}
```
```graphql
# ❌ Could return millions of records
query { transactions { edges { node { hash } } } }

# ✅ Always paginate
query { transactions(first: 50) { edges { node { hash } } } }
```
```javascript
// ❌ N+1 problem: one request per chain
for (const chainId of ['0', '1', '2']) {
  await getChainData(chainId);
}
```

```graphql
# ✅ Use batching with aliases
query {
  c0: transactions(chainId: "0", first: 10) { ... }
  c1: transactions(chainId: "1", first: 10) { ... }
  c2: transactions(chainId: "2", first: 10) { ... }
}
```
```graphql
# ❌ May return non-finalized blocks
query { blocks(first: 10) { ... } }

# ✅ Ensure finality
query {
  blocksFromDepth(chainIds: ["0"], minimumDepth: 20, first: 10) { ... }
}
```
```graphql
# ❌ Fetching full transaction details for every transfer
query {
  transfers(first: 50) {
    edges {
      node {
        amount
        transaction {
          cmd { ... }     # Heavy nested object
          result { ... }  # Heavy nested object
        }
      }
    }
  }
}

# ✅ Request only what's needed
query {
  transfers(first: 50) {
    edges {
      node {
        amount
        requestKey  # Lighter - just the hash
      }
    }
  }
}
```
Before deploying queries:
- Only necessary fields selected
- Pagination implemented (`first: 50`)
- Server-side filters applied (`chainId`, `accountName`, `minHeight`)
- Query variables used for all user input
- Query complexity tested with realistic data
- Related queries batched with aliases
- N+1 patterns avoided with nested queries
- Query depth under 5 levels
- `minimumDepth` used for finalized data
- Scale your queries: Performance & Scaling →
- Deploy safely: Production Readiness →
- Explore schema: GraphQL API Reference →
Need help? toni@hackachain.io