From grammarly-pack
Optimizes Grammarly API performance with caching, parallel calls, and batching strategies to reduce latency in JS/TS integrations.
Install with:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin grammarly-pack
```
| API | Typical Latency | Notes |
|---|---|---|
| Writing Score | 1-3s | Depends on text length |
| AI Detection | 1-2s | Fast for short text |
| Plagiarism | 10-60s | Async, requires polling |
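Because the plagiarism check is asynchronous, the client has to poll until the report is ready. A minimal sketch of a generic polling helper with a timeout — the `check` callback is hypothetical and would wrap whatever status endpoint your Grammarly client exposes (endpoint names are not taken from the Grammarly docs):

```typescript
// Poll an async job until it reports completion or the timeout elapses.
// `check` is any function returning { done, result } — for plagiarism,
// it would call the (assumed) status endpoint for a submitted job.
async function pollUntilDone<T>(
  check: () => Promise<{ done: boolean; result?: T }>,
  { intervalMs = 2000, timeoutMs = 120_000 } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { done, result } = await check();
    if (done) return result as T;
    // Wait before the next status check to avoid hammering the API.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Polling timed out');
}
```

A 2s interval with a 2-minute timeout comfortably covers the 10-60s range above; tune both to your workload.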
```typescript
import { LRUCache } from 'lru-cache';
import { createHash } from 'crypto';

// Cache up to 500 scores for 1 hour (ttl is in milliseconds).
const scoreCache = new LRUCache<string, any>({ max: 500, ttl: 3_600_000 });

async function cachedScore(text: string, token: string) {
  // Key on both token and text so cached results stay scoped per API token.
  const key = createHash('sha256').update(`${token}:${text}`).digest('hex');
  const cached = scoreCache.get(key);
  if (cached) return cached;

  // grammarlyClient is assumed to be an already-authenticated client instance.
  const score = await grammarlyClient.score(text);
  scoreCache.set(key, score);
  return score;
}
```
```typescript
// Run scoring and AI detection in parallel — the two calls are independent,
// so total latency is max(score, detect) rather than their sum.
async function fullAudit(text: string, token: string) {
  const [score, ai] = await Promise.all([
    grammarlyClient.score(text),
    grammarlyClient.detectAI(text),
  ]);
  return { score, ai };
}
```
For cost optimization, see `grammarly-cost-tuning`.