From harness-claude
Optimizes GraphQL API performance with DataLoader for N+1 fixes, query depth/complexity limits, @cacheControl, persisted queries, and CDN caching. Use for slow responses under load or query abuse.
npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude

This skill uses the workspace's default tool permissions.
Optimize GraphQL API performance with query complexity analysis, caching, persisted queries, and DataLoader
Use DataLoader to batch and deduplicate database queries. This is the single most impactful optimization for most GraphQL APIs. See the graphql-dataloader-pattern skill for detailed implementation.
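The effect can be sketched without the library: calls made during the same tick are queued and flushed as one batched lookup. `createBatcher` below is a hand-rolled, simplified stand-in for what DataLoader does, not its actual API; `loadUser` and the fake query counter are illustrative.

```typescript
// Minimal hand-rolled batcher illustrating DataLoader's core trick:
// loads requested in the same tick share one batched lookup.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

function createBatcher<K, V>(batchFn: BatchFn<K, V>) {
  let queue: { key: K; resolve: (v: V) => void }[] = [];
  return (key: K): Promise<V> =>
    new Promise((resolve) => {
      queue.push({ key, resolve });
      if (queue.length === 1) {
        // First call this tick: schedule one flush for the whole batch
        process.nextTick(async () => {
          const batch = queue;
          queue = [];
          const values = await batchFn(batch.map((item) => item.key));
          batch.forEach((item, i) => item.resolve(values[i]));
        });
      }
    });
}

// Illustrative usage: count how many "database queries" actually run
let queryCount = 0;
const loadUser = createBatcher(async (ids: number[]) => {
  queryCount++; // stands in for one SQL "WHERE id IN (...)" round trip
  return ids.map((id) => ({ id, name: `user-${id}` }));
});
```

Three `loadUser` calls issued in one tick trigger a single batched fetch, so `queryCount` ends at 1 instead of 3.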
Limit query depth to prevent deeply nested abuse. Use graphql-depth-limit to reject queries that exceed a reasonable nesting level.
```typescript
import { ApolloServer } from '@apollo/server';
import depthLimit from 'graphql-depth-limit';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Reject any operation nested more than 10 levels deep
  validationRules: [depthLimit(10)],
});
```
Set a query complexity budget to reject expensive operations before they execute. One approach annotates the schema with per-field cost hints via a @complexity directive (supported by libraries such as graphql-query-complexity):

```graphql
type Query {
  users(first: Int): UserConnection @complexity(value: 1, multipliers: ["first"])
}

type User {
  posts: [Post!]! @complexity(value: 2)
}
```

Alternatively, graphql-validation-complexity derives a cost from static per-kind weights, with no schema annotations required:

```typescript
import { ApolloServer } from '@apollo/server';
import { createComplexityLimitRule } from 'graphql-validation-complexity';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  validationRules: [
    // Reject any operation whose computed cost exceeds 1000
    createComplexityLimitRule(1000, {
      scalarCost: 1,   // each scalar field costs 1
      objectCost: 2,   // each object field costs 2
      listFactor: 10,  // lists multiply their children's cost by 10
    }),
  ],
});
```
Use @cacheControl directives for response caching. Mark types and fields with their cacheability and max age; the Apollo cache control plugin aggregates these hints into HTTP cache headers.

```graphql
type Product @cacheControl(maxAge: 3600) {
  id: ID!
  name: String!
  price: Money! @cacheControl(maxAge: 60)
  reviews: [Review!]! @cacheControl(maxAge: 0)  # never cached
}
```
```typescript
import { ApolloServer } from '@apollo/server';
import { ApolloServerPluginCacheControl } from '@apollo/server/plugin/cacheControl';
import { KeyvAdapter } from '@apollo/utils.keyvadapter';
import Keyv from 'keyv';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [ApolloServerPluginCacheControl({ defaultMaxAge: 5 })],
  // Back the response cache with Redis so cached entries survive restarts
  cache: new KeyvAdapter(new Keyv('redis://localhost:6379')),
});
```
Implement response caching at the CDN or reverse proxy level. When queries have @cacheControl hints, set Cache-Control headers so Cloudflare, Fastly, or Varnish can cache responses without hitting your server.
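The aggregation rule is simple: the response's max-age is the minimum maxAge across every field the operation touched, and a single maxAge: 0 field makes the whole response uncacheable. The helper below is a hypothetical illustration of that rule, not part of Apollo's API:

```typescript
// Hypothetical helper mirroring how @cacheControl hints aggregate into
// a Cache-Control header: take the minimum maxAge across all fields hit;
// any maxAge: 0 field makes the response uncacheable.
interface CacheHint {
  maxAge: number;
  scope?: 'PUBLIC' | 'PRIVATE';
}

function computeCacheHeader(hints: CacheHint[]): string | null {
  if (hints.length === 0) return null;
  const maxAge = Math.min(...hints.map((h) => h.maxAge));
  if (maxAge === 0) return null; // uncacheable: omit the header entirely
  const scope = hints.some((h) => h.scope === 'PRIVATE') ? 'private' : 'public';
  return `max-age=${maxAge}, ${scope}`;
}
```

For the Product schema above, a query selecting name (3600) and price (60) yields `max-age=60, public`; adding reviews (maxAge: 0) drops the header and the CDN passes the request through.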
Use @defer for progressive loading (if supported). @defer allows the server to return parts of the response incrementally, sending critical data first and deferring expensive fields.
```graphql
query ProductPage($id: ID!) {
  product(id: $id) {
    name
    price
    ... @defer {
      reviews {
        content
        rating
      }
      recommendations {
        name
      }
    }
  }
}
```
Select only the requested fields from the database. Use the resolver's info argument or look-ahead libraries to fetch only the fields the client actually asked for.

```typescript
import { parseResolveInfo } from 'graphql-parse-resolve-info';

const resolvers = {
  Query: {
    user: (_parent, { id }, _context, info) => {
      // Inspect the parsed selection set to build a column list
      const fields = parseResolveInfo(info);
      const select = Object.keys(fields.fieldsByTypeName.User);
      return db.users.findById(id, { select });
    },
  },
};
```
Monitor query performance in production. Use Apollo Studio, Grafana, or custom logging to track per-operation latency, error rates, and cache hit ratios. Identify the slowest operations and optimize them specifically.
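A per-operation latency logger can be sketched with Apollo Server's plugin hooks. requestDidStart and willSendResponse are the real hook names; recordMetric and the in-memory metrics array are placeholders for whatever metrics sink you use (StatsD, Prometheus, a log line):

```typescript
// Sketch of a latency-logging plugin: requestDidStart captures a start
// time, willSendResponse records elapsed time per named operation.
const metrics: { operation: string; ms: number }[] = [];
const recordMetric = (operation: string, ms: number) =>
  metrics.push({ operation, ms });

const latencyPlugin = {
  async requestDidStart() {
    const startedAt = Date.now();
    return {
      async willSendResponse(ctx: { operationName?: string | null }) {
        recordMetric(ctx.operationName ?? 'anonymous', Date.now() - startedAt);
      },
    };
  },
};
```

Pass it as `plugins: [latencyPlugin]` in the ApolloServer constructor, then aggregate the recorded samples by operation name to find your slowest operations.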
Set a timeout on resolver execution. Prevent runaway resolvers from holding connections open indefinitely.
```typescript
// Race the resolver's promise against a rejection timer
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> => {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error('Resolver timeout')), ms)
  );
  return Promise.race([promise, timeout]);
};
```
N+1 problem explained: A query for 50 users, each with posts, executes 1 query for users + 50 queries for posts = 51 queries. DataLoader reduces this to 2 queries (1 for users, 1 batched for all posts).
Caching layers (from hot to cold): the client-side normalized cache (Apollo Client, urql), the CDN or edge cache (Cloudflare, Fastly, Varnish), the server-side response cache (in-memory or Redis), and the per-request DataLoader cache in front of the database.
Query allowlisting (persisted queries): In high-security environments, only allow pre-registered queries. Reject any query not in the allowlist. This prevents attackers from crafting expensive or introspection queries.
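The lookup side of an allowlist can be sketched as a hash-to-document map; clients send only a SHA-256 hash, and anything unregistered is rejected. This is a hand-rolled illustration of the idea, not Apollo's built-in persisted-queries implementation:

```typescript
import { createHash } from 'node:crypto';

// Illustrative persisted-query allowlist: documents are registered at
// build time, keyed by their SHA-256 hash; unknown hashes are rejected.
const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

const registered = ['query ProductPage($id: ID!) { product(id: $id) { name } }'];
const allowlist = new Map(registered.map((doc) => [sha256(doc), doc]));

function resolvePersistedQuery(hash: string): string {
  const doc = allowlist.get(hash);
  // Reject anything not pre-registered, including introspection queries
  if (!doc) throw new Error('PersistedQueryNotFound');
  return doc;
}
```

Because only registered hashes resolve, an attacker cannot submit an arbitrary deeply nested or introspection query even if they can reach the endpoint.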
Pagination impact: Always paginate list fields. An unpaginated users: [User!]! that returns 100K records will crush your server regardless of other optimizations.
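A minimal page-slicing helper might look like this; the 100-item cap and the offset-based cursor encoding are illustrative choices, not a fixed API (production code would typically use opaque cursors over a keyset query):

```typescript
// Illustrative connection-style pagination: cap the page size, slice,
// and report whether another page exists. Cursors are stringified offsets.
interface Page<T> {
  items: T[];
  endCursor: string | null;
  hasNextPage: boolean;
}

function paginate<T>(all: T[], first: number, after?: string): Page<T> {
  const cap = Math.min(first, 100); // never return more than 100 per page
  const start = after ? parseInt(after, 10) + 1 : 0;
  const items = all.slice(start, start + cap);
  return {
    items,
    endCursor: items.length > 0 ? String(start + items.length - 1) : null,
    hasNextPage: start + items.length < all.length,
  };
}
```

Paginating 250 records with `first: 100` yields pages of 100, 100, and 50, instead of one unbounded response.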
Tracing: Enable Apollo Tracing or OpenTelemetry to see per-resolver execution times. The slowest resolver in a query determines the response time (for sequential resolution) or indicates the bottleneck (for parallel resolution).
https://www.apollographql.com/docs/apollo-server/performance/caching/