From mindtickle-pack
Optimize MindTickle API integration performance with caching, bulk progress queries, and webhook processing. Use when learner progress queries are slow, report generation times out, or completion webhooks cause backpressure. Trigger with "mindtickle performance tuning".
npx claudepluginhub flight505/skill-forge --plugin mindtickle-pack
MindTickle's API serves sales enablement data across courses, quizzes, and analytics — enterprise deployments tracking thousands of reps make bulk progress queries and report generation the primary bottlenecks. This skill covers caching learner data, batching progress operations, and handling rate limits during high-volume training campaigns.
import Redis from "ioredis";
const redis = new Redis(process.env.REDIS_URL);
// Course catalog changes rarely — cache 30 minutes
// User progress updates frequently during campaigns — cache 2 minutes
const TTL = { courses: 1800, progress: 120, reports: 600, users: 900 } as const;
async function getCachedCourses(orgId: string): Promise<MindTickleCourse[]> {
const key = `mt:courses:${orgId}`;
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const courses = await mindtickleApi.listCourses(orgId);
await redis.setex(key, TTL.courses, JSON.stringify(courses));
return courses;
}
async function getCachedProgress(userId: string, courseId: string): Promise<UserProgress> {
const key = `mt:progress:${userId}:${courseId}`;
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const progress = await mindtickleApi.getUserProgress(userId, courseId);
await redis.setex(key, TTL.progress, JSON.stringify(progress));
return progress;
}
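Cached progress goes stale the moment a rep completes a module, so the progress cache should be invalidated when a completion webhook arrives. A minimal sketch — the webhook payload shape (`userId`/`courseId` fields) is an assumption, not MindTickle's documented schema, and the Redis client is duck-typed so the logic is testable without a live instance:

```typescript
// Assumed payload shape for a completion webhook — verify against the
// actual MindTickle webhook schema before relying on it.
interface CompletionEvent {
  userId: string;
  courseId: string;
}

// Duck-typed subset of the ioredis client (del only).
interface KeyDeleter {
  del(...keys: string[]): Promise<number>;
}

function progressCacheKey(userId: string, courseId: string): string {
  return `mt:progress:${userId}:${courseId}`;
}

async function invalidateOnCompletion(
  client: KeyDeleter,
  event: CompletionEvent
): Promise<void> {
  // Drop the per-user progress entry; the next read repopulates it
  // from the API with fresh completion state.
  await client.del(progressCacheKey(event.userId, event.courseId));
}
```

In the webhook handler, call `await invalidateOnCompletion(redis, payload)` before any downstream processing, so dashboards reading through `getCachedProgress` pick up the completion on their next fetch.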
import pLimit from "p-limit";
const limit = pLimit(6); // MindTickle allows moderate concurrency
// Bulk fetch progress for all reps in a training campaign
async function batchFetchTeamProgress(
userIds: string[],
courseId: string
): Promise<UserProgress[]> {
return Promise.all(
userIds.map((uid) => limit(() => getCachedProgress(uid, courseId)))
);
}
// Paginate through all users in an organization
async function fetchAllUsers(orgId: string): Promise<MindTickleUser[]> {
const users: MindTickleUser[] = [];
let offset = 0;
const pageSize = 200;
do {
const page = await mindtickleApi.listUsers(orgId, { offset, limit: pageSize });
users.push(...page.users);
offset += pageSize;
if (page.users.length < pageSize) break;
} while (true);
return users;
}
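For a full-org export spanning many courses, the troubleshooting advice below (concurrency 3, a pause between course batches) can be sketched without p-limit by chunking users per course. The 500 ms default is a starting point, not a documented MindTickle quota, and `fetchProgress` stands in for `getCachedProgress` so the pacing logic is testable in isolation:

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Sketch: paced bulk export. Each course's progress is fetched in chunks of
// `concurrency` users, with a delay between courses to stay under rate limits.
async function exportAllProgress<T>(
  userIds: string[],
  courseIds: string[],
  fetchProgress: (userId: string, courseId: string) => Promise<T>,
  concurrency = 3,
  batchDelayMs = 500
): Promise<T[]> {
  const results: T[] = [];
  for (const courseId of courseIds) {
    for (let i = 0; i < userIds.length; i += concurrency) {
      const chunk = userIds.slice(i, i + concurrency);
      results.push(
        ...(await Promise.all(chunk.map((uid) => fetchProgress(uid, courseId))))
      );
    }
    await sleep(batchDelayMs); // breathing room between course batches
  }
  return results;
}
```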
import { Agent } from "undici";
const mindtickleAgent = new Agent({
connect: { timeout: 8_000 }, // Report endpoints can be slow
keepAliveTimeout: 30_000,
keepAliveMaxTimeout: 60_000,
pipelining: 1,
connections: 10, // Persistent pool for MindTickle API
});
async function mindtickleApiFetch(path: string, init?: RequestInit): Promise<Response> {
return fetch(`https://api.mindtickle.com/v2${path}`, {
...init,
// @ts-expect-error undici dispatcher
dispatcher: mindtickleAgent,
headers: { Authorization: `Token ${process.env.MINDTICKLE_API_KEY}`, ...init?.headers },
});
}
async function withRateLimit<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
return await fn();
} catch (err: any) {
if (err.status === 429) {
const retryAfter = parseInt(err.headers?.["retry-after"] ?? "10", 10);
const backoff = retryAfter * 1000 * Math.pow(2, attempt);
console.warn(`MindTickle rate limited. Retrying in ${backoff}ms (attempt ${attempt + 1})`);
await new Promise((r) => setTimeout(r, backoff));
continue;
}
throw err;
}
}
throw new Error("MindTickle API: max retries exceeded");
}
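Note that `fetch()` resolves (rather than throws) on HTTP 429, so responses from `mindtickleApiFetch` need to be bridged into the error shape `withRateLimit` inspects (`err.status`, `err.headers`). A sketch — the error class is illustrative, not part of any MindTickle SDK:

```typescript
// Minimal error carrying the fields withRateLimit() reads.
class MindTickleHttpError extends Error {
  status: number;
  headers: Record<string, string>;
  constructor(status: number, headers: Record<string, string>) {
    super(`MindTickle API responded ${status}`);
    this.status = status;
    this.headers = headers;
  }
}

// Structural subset of Response, so the bridge is testable with stubs.
interface MinimalResponse {
  ok: boolean;
  status: number;
  headers: { get(name: string): string | null };
}

async function throwOnRateLimit<R extends MinimalResponse>(res: R): Promise<R> {
  if (!res.ok) {
    const retryAfter = res.headers.get("retry-after");
    // Omit the header when absent so withRateLimit falls back to its
    // 10-second default instead of parsing an empty string.
    throw new MindTickleHttpError(
      res.status,
      retryAfter ? { "retry-after": retryAfter } : {}
    );
  }
  return res;
}
```

Usage: `const res = await withRateLimit(() => mindtickleApiFetch("/users").then(throwOnRateLimit));`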
import { Counter, Histogram } from "prom-client";
const mtApiLatency = new Histogram({
name: "mindtickle_api_duration_seconds",
help: "MindTickle API call latency",
labelNames: ["endpoint", "status"],
buckets: [0.1, 0.5, 1, 2, 5, 10], // Reports can take 5-10s
});
const mtCacheHits = new Counter({
name: "mindtickle_cache_hits_total",
help: "Cache hits for MindTickle course and progress data",
labelNames: ["cache_type"], // courses | progress | reports | users
});
const mtWebhookLatency = new Histogram({
name: "mindtickle_webhook_processing_seconds",
help: "Time to process MindTickle completion webhooks",
buckets: [0.01, 0.05, 0.1, 0.25, 0.5],
});
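To feed `mtApiLatency`, every API call needs a timing wrapper. A sketch, duck-typing the observer so the wrapper is testable without prom-client (prom-client's `Histogram.observe(labels, value)` accepts a labels object followed by the value, matching this interface):

```typescript
// Structural subset of prom-client's Histogram.
interface LatencyObserver {
  observe(labels: Record<string, string>, value: number): void;
}

async function observeApiCall<T>(
  hist: LatencyObserver,
  endpoint: string,
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  let status = "ok";
  try {
    return await fn();
  } catch (err) {
    status = "error";
    throw err;
  } finally {
    // Record seconds, matching the bucket units of mtApiLatency.
    hist.observe({ endpoint, status }, (Date.now() - start) / 1000);
  }
}
```

Usage: `await observeApiCall(mtApiLatency, "/users", () => mindtickleApiFetch("/users"))`.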
| Issue | Cause | Fix |
|---|---|---|
| Report generation timeouts | Analytics queries over large date ranges | Limit date range to 30 days, cache reports for 10min |
| Stale progress data during live training | Reps complete modules but dashboard shows old state | Invalidate progress cache on completion webhook |
| Webhook processing backpressure | Training campaign triggers 1000+ completions in minutes | Queue webhooks in Redis, process with worker at controlled rate |
| 429 during bulk progress export | Fetching progress for all reps across all courses | Reduce concurrency to 3 and add 500ms delay between course batches |
| Incomplete user list pagination | Offset math error causes skipped or duplicate users | Always check page.users.length < pageSize as termination condition |
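The "queue webhooks, drain with a worker" fix from the table can be sketched as follows. The queue interface mirrors Redis LPUSH/RPOP but is duck-typed so the drain loop is testable in-memory; swap in ioredis calls in production, and tune the pacing delay to downstream capacity:

```typescript
interface WebhookQueue {
  push(payload: string): Promise<void>;
  pop(): Promise<string | null>; // null when the queue is empty
}

// Drain the queue one webhook at a time, pausing between items so a burst
// of 1000+ completions doesn't overwhelm cache invalidation or downstream
// systems. Returns the number of webhooks processed.
async function drainWebhookQueue(
  queue: WebhookQueue,
  handle: (payload: string) => Promise<void>,
  delayMs = 50
): Promise<number> {
  let processed = 0;
  for (;;) {
    const payload = await queue.pop();
    if (payload === null) break; // queue drained
    await handle(payload);
    processed++;
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return processed;
}
```

The HTTP handler only pushes and returns 200 immediately; the worker runs this loop on a timer, so MindTickle never sees slow webhook responses.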
After applying these optimizations, most reads are served from cache, bulk fetches run at bounded concurrency, and 429 retries become rare. A full team progress fetch combines the pieces:
// Full optimized team progress fetch — cache + rate limit + batching
const users = await fetchAllUsers("org-123");
const progress = await batchFetchTeamProgress(
users.map((u) => u.id),
"course-onboarding-2026"
);
// Alternative: use background refresh for reports instead of on-demand generation
const report = await getCachedReport("quarterly-readiness", { backgroundRefresh: true });
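`getCachedReport` above is hypothetical; one way to sketch it, using the report TTL from earlier and serving cached data while a non-blocking refresh runs. The signature is adapted for illustration (cache client and report generator passed in, duck-typed as ioredis `get`/`setex`), and the `backgroundRefresh` flag is illustrative, not a MindTickle API:

```typescript
interface ReportCache {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<unknown>;
}

async function getCachedReport<T>(
  cache: ReportCache,
  reportId: string,
  generate: () => Promise<T>,
  opts: { backgroundRefresh?: boolean; ttlSeconds?: number } = {}
): Promise<T> {
  const { backgroundRefresh = false, ttlSeconds = 600 } = opts; // 10 min, matching TTL.reports
  const key = `mt:reports:${reportId}`;
  const cached = await cache.get(key);
  if (cached !== null) {
    if (backgroundRefresh) {
      // Refresh without blocking the response; failures are logged, not thrown.
      generate()
        .then((fresh) => cache.setex(key, ttlSeconds, JSON.stringify(fresh)))
        .catch((err) => console.warn("report refresh failed", err));
    }
    return JSON.parse(cached);
  }
  // Cache miss: generate on demand and store for the next reader.
  const report = await generate();
  await cache.setex(key, ttlSeconds, JSON.stringify(report));
  return report;
}
```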
See mindtickle-reference-architecture.