Optimize Fireflies.ai API performance with caching, batching, and connection pooling. Use when experiencing slow API responses, implementing caching strategies, or optimizing request throughput for Fireflies.ai integrations. Trigger with phrases like "fireflies performance", "optimize fireflies", "fireflies latency", "fireflies caching", "fireflies slow", "fireflies batch".
Optimize Fireflies.ai transcript retrieval and meeting data processing. Focus on GraphQL query efficiency, transcript caching, and webhook throughput for high-volume meeting analytics pipelines.
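The snippets below call `graphqlClient.request(...)` without showing its setup. As a point of reference, here is a minimal, dependency-free sketch of such a client; the `https://api.fireflies.ai/graphql` endpoint and Bearer auth follow the Fireflies docs, while `FIREFLIES_API_KEY` is an assumed environment variable name:

```typescript
// Minimal GraphQL client for the Fireflies API using the built-in fetch
// (Node 18+). FIREFLIES_API_KEY is an assumed env var name.
const FIREFLIES_ENDPOINT = 'https://api.fireflies.ai/graphql';

const graphqlClient = {
  async request(query: string, variables: Record<string, unknown> = {}) {
    const res = await fetch(FIREFLIES_ENDPOINT, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.FIREFLIES_API_KEY}`,
      },
      body: JSON.stringify({ query, variables }),
    });
    const { data, errors } = await res.json();
    if (errors?.length) throw new Error(errors[0].message);
    return data;
  },
};
```

In production you would likely swap this for a library such as `graphql-request`, which exposes the same `request(query, variables)` shape.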
```typescript
// Only request fields you need - transcripts can be large
const LIGHT_TRANSCRIPT_QUERY = `
  query GetTranscripts($limit: Int) {
    transcripts(limit: $limit) {
      id
      title
      date
      duration
      organizer_email
      participants
    }
  }
`;

// Full transcript only when needed
const FULL_TRANSCRIPT_QUERY = `
  query GetTranscript($id: String!) {
    transcript(id: $id) {
      id
      title
      sentences {
        speaker_name
        text
        start_time
        end_time
      }
      action_items
      summary { overview keywords }
    }
  }
`;

async function getTranscriptSummaries(limit = 50) {
  return graphqlClient.request(LIGHT_TRANSCRIPT_QUERY, { limit });
}
```
```typescript
import { LRUCache } from 'lru-cache';

const transcriptCache = new LRUCache<string, any>({
  max: 200, // keep at most 200 transcripts in memory
  ttl: 1000 * 60 * 30, // 30 min - completed transcripts are immutable
});

async function getCachedTranscript(id: string) {
  const cached = transcriptCache.get(id);
  if (cached) return cached;
  const result = await graphqlClient.request(FULL_TRANSCRIPT_QUERY, { id });
  transcriptCache.set(id, result.transcript);
  return result.transcript;
}
```
```typescript
async function batchProcessMeetings(
  meetingIds: string[],
  concurrency = 3
) {
  const results: any[] = [];
  for (let i = 0; i < meetingIds.length; i += concurrency) {
    const batch = meetingIds.slice(i, i + concurrency);
    const batchResults = await Promise.all(
      batch.map(id => getCachedTranscript(id))
    );
    results.push(...batchResults);
    // Respect rate limits: a 1.2s pause between batches stays under 50 req/min
    if (i + concurrency < meetingIds.length) {
      await new Promise(r => setTimeout(r, 1200));
    }
  }
  return results;
}
```
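The fixed 1.2 s pacing above avoids most rate-limit errors, but bursts from other processes sharing the same API key can still trip the limit. A hedged retry helper (names and delays here are illustrative, not from the Fireflies docs) backs off exponentially when a request fails with HTTP 429:

```typescript
// Retry a request thunk with exponential backoff on HTTP 429.
// Works with errors that carry a `status` (or `response.status`) field.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1200
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.response?.status ?? err?.status;
      if (status !== 429 || attempt >= maxRetries) throw err;
      // 1.2s, 2.4s, 4.8s... gives the per-minute budget time to refill
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage: wrap the fetch inside `getCachedTranscript`, e.g. `withRetry(() => graphqlClient.request(FULL_TRANSCRIPT_QUERY, { id }))`.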
```typescript
// Process webhooks asynchronously with a queue
const webhookQueue: any[] = [];

async function handleWebhook(payload: any) {
  // Acknowledge immediately
  webhookQueue.push(payload);
  // Process in background
  setImmediate(() => processWebhookQueue());
}

async function processWebhookQueue() {
  while (webhookQueue.length > 0) {
    const event = webhookQueue.shift();
    if (event.event_type === 'Transcription completed') {
      // Pre-cache the transcript so the first read is a cache hit
      await getCachedTranscript(event.meeting_id);
    }
  }
}
```
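Webhook endpoints should also verify that payloads really come from Fireflies before queuing them. A hedged sketch using Node's built-in HMAC-SHA256, with a constant-time comparison; the exact signature header name and digest encoding should be confirmed against the Fireflies webhook docs:

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Verify an HMAC-SHA256 hex signature over the raw request body.
// The header to read it from (e.g. an x-hub-signature-style header)
// is an assumption - check the Fireflies webhook documentation.
function verifyWebhookSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string
): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // Length check first: timingSafeEqual throws on unequal lengths
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Call this with the raw (unparsed) body before `handleWebhook`; parsing and re-serializing JSON first can change byte order and break the signature.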
| Issue | Cause | Solution |
|---|---|---|
| GraphQL timeout | Requesting full transcript list | Use pagination with limit param |
| Rate limit 429 | Over 50 requests/minute | Add 1.2s delay between batches |
| Large response OOM | Fetching all sentences | Stream sentences or paginate |
| Stale webhook data | Cache not warmed | Pre-fetch on webhook events |
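The pagination fix from the table can be sketched as follows. The `skip` argument on `transcripts` is an assumption to verify against the current Fireflies GraphQL schema, and `request` stands in for `graphqlClient.request` (parameterized here so the sketch is self-contained):

```typescript
// Page through transcript summaries rather than fetching one huge list.
// Assumes the transcripts query accepts `skip` alongside `limit`.
const PAGED_SUMMARY_QUERY = `
  query GetTranscripts($limit: Int, $skip: Int) {
    transcripts(limit: $limit, skip: $skip) {
      id
      title
      date
    }
  }
`;

async function fetchAllSummaries(
  request: (query: string, vars: object) => Promise<{ transcripts: any[] }>,
  pageSize = 50
) {
  const all: any[] = [];
  for (let skip = 0; ; skip += pageSize) {
    const { transcripts } = await request(PAGED_SUMMARY_QUERY, {
      limit: pageSize,
      skip,
    });
    all.push(...transcripts);
    if (transcripts.length < pageSize) break; // a short page is the last page
  }
  return all;
}
```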
```typescript
async function analyzeMeetingTrends(days = 30) {
  const since = new Date(Date.now() - days * 86_400_000); // 86,400,000 ms = 1 day
  const summaries = await getTranscriptSummaries(200); // up to 200 recent summaries
  const recent = summaries.transcripts.filter((t: any) => new Date(t.date) > since);
  return {
    totalMeetings: recent.length,
    avgDuration: recent.length
      ? recent.reduce((s: number, t: any) => s + t.duration, 0) / recent.length
      : 0,
    topParticipants: countParticipants(recent),
  };
}

// Tally participant appearances across meetings, most frequent first
function countParticipants(meetings: any[]) {
  const counts = new Map<string, number>();
  for (const m of meetings)
    for (const p of m.participants ?? []) counts.set(p, (counts.get(p) ?? 0) + 1);
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
}
```