From figma-pack
Load tests Figma API integrations with k6 scripts to benchmark throughput, probe rate limits, and plan capacity for high-volume use.
Install: `npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin figma-pack`
Test and plan for the throughput limits of your Figma API integration. Figma's rate limits use a leaky bucket algorithm -- this skill helps you find the bucket size for your plan tier and design your integration to stay within it.
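As a mental model: a leaky bucket of capacity C accepts a request only if it is not full, and drains at a steady rate. A minimal sketch of that behavior (the capacity and drain rate below are illustrative placeholders, not Figma's actual numbers):

```typescript
// Minimal leaky-bucket model. Capacity and drain rate are illustrative
// placeholders, not Figma's real limits.
class LeakyBucket {
  private level = 0;
  private last = 0; // time of last request, in seconds

  constructor(
    private capacity: number,
    private drainPerSec: number
  ) {}

  // Returns true if a request at time `now` (seconds) fits in the bucket.
  tryRequest(now: number): boolean {
    // Drain what leaked out since the last request
    this.level = Math.max(0, this.level - (now - this.last) * this.drainPerSec);
    this.last = now;
    if (this.level + 1 > this.capacity) return false; // would overflow -> 429
    this.level += 1;
    return true;
  }
}

// A burst of 35 instant requests against a bucket of 30 draining at 0.5/s:
const bucket = new LeakyBucket(30, 0.5);
const accepted = Array.from({ length: 35 }, () => bucket.tryRequest(0))
  .filter(Boolean).length;
console.log(accepted); // 30 (the last 5 overflow)
```

This is why burst size and sustained rate are two different numbers: a full bucket recovers only as fast as it drains, which is what the load test below measures.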
Install k6 first (`brew install k6` or `apt install k6`).

```javascript
// figma-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Trend } from 'k6/metrics';

const figmaErrors = new Rate('figma_errors');
const figmaLatency = new Trend('figma_latency', true);

export const options = {
  scenarios: {
    // Test 1: Find your rate limit ceiling
    rate_limit_probe: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 requests per second
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 5,
      maxVUs: 20,
    },
  },
  thresholds: {
    figma_errors: ['rate<0.10'],       // Less than 10% errors
    figma_latency: ['p(95)<3000'],     // P95 under 3 seconds
    http_req_duration: ['p(99)<5000'], // P99 under 5 seconds
  },
};

const PAT = __ENV.FIGMA_PAT;
const FILE_KEY = __ENV.FIGMA_FILE_KEY;

export default function () {
  // Use a lightweight endpoint for rate limit testing
  const res = http.get(
    `https://api.figma.com/v1/files/${FILE_KEY}?depth=1`,
    {
      headers: { 'X-Figma-Token': PAT },
      tags: { endpoint: 'files' },
    }
  );

  figmaLatency.add(res.timings.duration);
  figmaErrors.add(res.status !== 200);

  check(res, {
    'status is 200': (r) => r.status === 200,
    'not rate limited': (r) => r.status !== 429,
    'latency < 2s': (r) => r.timings.duration < 2000,
  });

  if (res.status === 429) {
    const retryAfter = parseInt(res.headers['Retry-After'] || '60', 10);
    console.log(`Rate limited. Retry-After: ${retryAfter}s`);
    sleep(retryAfter);
  } else {
    sleep(0.1); // 100ms between requests
  }
}
```
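If the constant-rate probe gets 429s immediately, a gentler approach is a ramping profile that climbs until errors appear, so the first 429 pinpoints the ceiling. A sketch of a `ramping-arrival-rate` scenario to swap into the `scenarios` block above (the stage values are illustrative starting points, not tuned numbers):

```javascript
// Alternative scenario: ramp from 2 req/s to 20 req/s in stages.
// Stage targets and durations are illustrative, not tuned numbers.
export const rampOptions = {
  scenarios: {
    rate_limit_ramp: {
      executor: 'ramping-arrival-rate',
      startRate: 2,      // begin at 2 requests per second
      timeUnit: '1s',
      preAllocatedVUs: 5,
      maxVUs: 30,
      stages: [
        { target: 5, duration: '1m' },  // ramp to 5 req/s
        { target: 10, duration: '1m' }, // then 10 req/s
        { target: 20, duration: '1m' }, // then 20 req/s
      ],
    },
  },
};
```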
```sh
# Probe rate limits
k6 run \
  --env FIGMA_PAT="${FIGMA_PAT}" \
  --env FIGMA_FILE_KEY="${FIGMA_FILE_KEY}" \
  figma-load-test.js

# Export results to JSON for analysis
k6 run \
  --env FIGMA_PAT="${FIGMA_PAT}" \
  --env FIGMA_FILE_KEY="${FIGMA_FILE_KEY}" \
  --out json=results.json \
  figma-load-test.js
```
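One useful number to pull from `results.json` is which request hit the first 429 (that approximates the bucket size). A sketch, assuming k6's JSON output format where each line is a `Point` object carrying `data.tags.status`:

```typescript
// Scan k6 JSON output (one JSON object per line) and report which request
// produced the first 429. Assumes k6's line format:
// {"type":"Point","metric":"http_reqs","data":{"tags":{"status":"..."}}}
function firstRateLimit(jsonLines: string): number | null {
  let requestNumber = 0;
  for (const line of jsonLines.split('\n')) {
    if (!line.trim()) continue;
    const point = JSON.parse(line);
    if (point.type !== 'Point' || point.metric !== 'http_reqs') continue;
    requestNumber += 1;
    if (point.data?.tags?.status === '429') return requestNumber;
  }
  return null; // never rate limited during the run
}
```

Run it over the file contents; a result of, say, 31 suggests a bucket of roughly 30 requests, which is the "First 429 At" column in the report template below.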
```typescript
interface FigmaCapacityPlan {
  planTier: string;
  measuredLimitPerMinute: number;
  currentUsagePerMinute: number;
  headroomPercent: number;
  recommendation: string;
}

function planCapacity(
  measuredLimit: number,
  currentUsage: number,
  planTier: string
): FigmaCapacityPlan {
  const headroom = ((measuredLimit - currentUsage) / measuredLimit) * 100;

  let recommendation: string;
  if (headroom > 50) {
    recommendation = 'Adequate capacity. Monitor monthly.';
  } else if (headroom > 20) {
    recommendation = 'Approaching limits. Implement caching and batching.';
  } else {
    recommendation = 'Near capacity. Upgrade plan or reduce request volume.';
  }

  return {
    planTier,
    measuredLimitPerMinute: measuredLimit,
    currentUsagePerMinute: currentUsage,
    headroomPercent: Math.round(headroom),
    recommendation,
  };
}
```
```typescript
// Strategy 1: Request coalescing
// Multiple callers requesting the same file get a single API call
class RequestCoalescer {
  private pending = new Map<string, Promise<any>>();

  async get(key: string, fetcher: () => Promise<any>): Promise<any> {
    if (this.pending.has(key)) {
      return this.pending.get(key)!;
    }
    const promise = fetcher().finally(() => this.pending.delete(key));
    this.pending.set(key, promise);
    return promise;
  }
}

const coalescer = new RequestCoalescer();

// 10 simultaneous requests for the same file = 1 API call
const results = await Promise.all(
  Array(10).fill(null).map(() =>
    coalescer.get(fileKey, () => figmaClient.getFile(fileKey))
  )
);
```
```typescript
// Strategy 2: Stagger requests across time
import PQueue from 'p-queue';

const figmaQueue = new PQueue({
  concurrency: 3,
  interval: 1000,
  intervalCap: 5, // Max 5 requests per second
});

// Strategy 3: Pre-fetch during off-peak hours
// Run design token sync at 3 AM, cache results for the day
```
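Strategy 3 needs somewhere to keep the pre-fetched results. A minimal TTL cache sketch (the class and key names are illustrative, not part of any Figma SDK):

```typescript
// Sketch for Strategy 3: cache pre-fetched results with a TTL so daytime
// traffic is served from cache instead of the Figma API.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: V, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now > entry.expiresAt) {
      this.store.delete(key); // expired: evict and treat as a miss
      return undefined;
    }
    return entry.value;
  }
}

// Cache the 3 AM sync results for 24 hours:
const tokenCache = new TtlCache<object>(24 * 60 * 60 * 1000);
tokenCache.set('design-tokens', { colorPrimary: '#0d99ff' });
```

Passing `now` explicitly keeps expiry testable; in production the default `Date.now()` applies.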
## Figma API Benchmark Report
**Date:** YYYY-MM-DD
**Plan:** [Starter/Pro/Org/Enterprise]
**Seat:** [Full/Collab/Viewer]
### Rate Limit Findings
| Endpoint | Measured Limit/min | First 429 At | Retry-After |
|----------|-------------------|--------------|-------------|
| GET /v1/files/:key?depth=1 | ~30 | Request #31 | 60s |
| GET /v1/files/:key/nodes | ~30 | Request #32 | 60s |
| GET /v1/images/:key | ~20 | Request #21 | 60s |
### Latency
| Endpoint | P50 | P95 | P99 |
|----------|-----|-----|-----|
| /v1/files (depth=1) | 200ms | 500ms | 1200ms |
| /v1/files (full) | 800ms | 2000ms | 4000ms |
| /v1/images | 300ms | 800ms | 1500ms |
### Recommendations
- Cache file metadata (changes infrequently)
- Use webhooks instead of polling
- Batch node IDs in single requests
- Use `depth=1` unless full tree is needed
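Batching node IDs means one `GET /v1/files/:key/nodes?ids=a,b,c` call instead of one call per node. A sketch of chunking IDs so each request's `ids` list stays bounded (the chunk size of 50 is an arbitrary safety margin to keep URLs short, not a documented Figma limit):

```typescript
// Split node IDs into batches so each /v1/files/:key/nodes?ids=... call
// carries at most `size` IDs. 50 is an arbitrary safety margin, not a
// documented Figma limit.
function chunkIds(ids: string[], size = 50): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size));
  }
  return batches;
}

// 120 node IDs -> 3 requests instead of 120:
const batches = chunkIds(Array.from({ length: 120 }, (_, i) => `node-${i}`));
console.log(batches.length); // 3
```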
### Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| All requests 429'd | Rate too aggressive | Start lower, ramp gradually |
| Inconsistent limits | Other services share the same token's bucket | Use a dedicated token for testing |
| k6 connection errors | Too many parallel VUs | Reduce preAllocatedVUs |
| Results vary between runs | Leaky bucket state persists | Wait 5 min between test runs |
For reliability patterns, see figma-reliability-patterns.