From intercom-pack
Optimizes Intercom API performance with caching, efficient searches, pagination, batching, and connection pooling. Use for slow responses in Node.js/TypeScript Intercom integrations.
Install:

```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin intercom-pack
```
Optimize Intercom API performance through response caching, efficient search queries, cursor-based pagination, connection pooling, and request batching.
Prerequisite: the intercom-client SDK installed.

Expected latency baselines:

| Operation | Typical P50 | Typical P95 | Notes |
|---|---|---|---|
| GET /me (health check) | 50ms | 150ms | Lightest endpoint |
| GET /contacts/{id} | 80ms | 200ms | Single lookup |
| POST /contacts/search | 120ms | 400ms | Depends on query complexity |
| GET /conversations/{id} | 100ms | 300ms | Heavier with parts (up to 500) |
| POST /contacts (create) | 150ms | 400ms | Write operation |
| GET /contacts (list) | 100ms | 350ms | Paginated, 50 per page |
| POST /messages | 200ms | 500ms | Triggers delivery pipeline |
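To sanity-check these numbers in your own environment, time a call against the lightest endpoint. A minimal probe using the raw REST API and Node 18+'s global fetch (no SDK assumptions):

```typescript
// Time a GET /me round trip and compare against the table above
const start = performance.now();
const res = await fetch("https://api.intercom.io/me", {
  headers: {
    Authorization: `Bearer ${process.env.INTERCOM_ACCESS_TOKEN}`,
    Accept: "application/json",
  },
});
console.log(`GET /me: ${res.status} in ${Math.round(performance.now() - start)}ms`);
```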
Cache frequently accessed contacts and conversations to avoid repeated API calls.
```typescript
import { LRUCache } from "lru-cache";
import { IntercomClient, Intercom } from "intercom-client";

const contactCache = new LRUCache<string, Intercom.Contact>({
  max: 5000,
  ttl: 5 * 60 * 1000, // 5 minutes
});

const client = new IntercomClient({
  token: process.env.INTERCOM_ACCESS_TOKEN!,
});

async function getContact(contactId: string): Promise<Intercom.Contact> {
  const cached = contactCache.get(contactId);
  if (cached) return cached;
  const contact = await client.contacts.find({ contactId });
  contactCache.set(contactId, contact);
  return contact;
}

// Invalidate on update so readers never see stale data
async function updateContact(
  contactId: string,
  data: Partial<Intercom.UpdateContactRequest>
): Promise<Intercom.Contact> {
  contactCache.delete(contactId);
  const updated = await client.contacts.update({ contactId, ...data });
  contactCache.set(contactId, updated);
  return updated;
}

// Webhook-driven cache invalidation
function handleContactWebhook(notification: any): void {
  const contactId = notification.data?.item?.id;
  if (contactId) {
    contactCache.delete(contactId);
  }
}
```
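Under concurrent traffic, many simultaneous cache misses for the same contact can stampede the API (see the troubleshooting table below). A minimal single-flight sketch that deduplicates in-flight lookups per key, reusing the `contactCache` and `getContact` from the block above:

```typescript
const inFlight = new Map<string, Promise<Intercom.Contact>>();

async function getContactSingleFlight(contactId: string): Promise<Intercom.Contact> {
  const cached = contactCache.get(contactId);
  if (cached) return cached;

  // Concurrent callers share one in-flight request instead of each hitting the API
  let pending = inFlight.get(contactId);
  if (!pending) {
    pending = getContact(contactId).finally(() => inFlight.delete(contactId));
    inFlight.set(contactId, pending);
  }
  return pending;
}
```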
Minimize search latency by using selective queries and limiting fields.
```typescript
// BAD: Overly broad search, fetching too many results
const allUsers = await client.contacts.search({
  query: { field: "role", operator: "=", value: "user" },
  pagination: { per_page: 150 }, // Max is 150
});

// GOOD: Targeted search with specific filters
const recentPro = await client.contacts.search({
  query: {
    operator: "AND",
    value: [
      { field: "role", operator: "=", value: "user" },
      { field: "custom_attributes.plan", operator: "=", value: "pro" },
      { field: "last_seen_at", operator: ">", value: Math.floor(Date.now() / 1000) - 86400 }, // last 24h
    ],
  },
  pagination: { per_page: 25 },
  sort: { field: "last_seen_at", order: "descending" },
});
```
Stream large result sets with cursor-based pagination so memory stays flat regardless of contact count.

```typescript
// Stream contacts with memory-efficient cursor pagination
async function* streamContacts(
  client: IntercomClient,
  perPage = 50
): AsyncGenerator<Intercom.Contact> {
  let startingAfter: string | undefined;
  do {
    const page = await client.contacts.list({ perPage, startingAfter });
    for (const contact of page.data) {
      yield contact;
    }
    startingAfter = page.pages?.next?.startingAfter ?? undefined;
    // Small delay to avoid rate limits on large datasets
    if (startingAfter) {
      await new Promise(r => setTimeout(r, 100));
    }
  } while (startingAfter);
}

// Process contacts in batches for efficiency
async function processContactsInBatches(
  client: IntercomClient,
  processor: (contacts: Intercom.Contact[]) => Promise<void>,
  batchSize = 100
): Promise<number> {
  let batch: Intercom.Contact[] = [];
  let total = 0;
  for await (const contact of streamContacts(client)) {
    batch.push(contact);
    if (batch.length >= batchSize) {
      await processor(batch);
      total += batch.length;
      batch = [];
    }
  }
  if (batch.length > 0) {
    await processor(batch);
    total += batch.length;
  }
  return total;
}
```
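A usage sketch; the processor body here is a placeholder for whatever bulk work you need:

```typescript
const processed = await processContactsInBatches(client, async (batch) => {
  // Keep batch work idempotent in case a run is retried
  console.log(`Processing ${batch.length} contacts`);
}, 100);
console.log(`Processed ${processed} contacts total`);
```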
Reuse TCP connections so each request skips DNS, TCP, and TLS setup. Note that Node's built-in fetch ignores the `agent` option, so this example assumes node-fetch for raw API calls:

```typescript
import { Agent } from "https";
import fetch from "node-fetch"; // built-in fetch ignores `agent`; node-fetch honors it

// Reuse TCP connections (HTTP keep-alive)
const agent = new Agent({
  keepAlive: true,
  maxSockets: 10,    // Max concurrent connections
  maxFreeSockets: 5, // Keep idle connections warm
  timeout: 30000,    // Connection timeout (ms)
});

// Apply to raw API calls made outside the SDK
const response = await fetch("https://api.intercom.io/contacts", {
  headers: { Authorization: `Bearer ${process.env.INTERCOM_ACCESS_TOKEN}` },
  agent,
});
```
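On Node 18+ with the built-in fetch, the equivalent knob is an undici dispatcher; a minimal sketch (undici pools and keeps connections alive by default):

```typescript
import { Agent, fetch } from "undici";

const dispatcher = new Agent({
  connections: 10,          // Max sockets per origin
  keepAliveTimeout: 30_000, // Keep idle sockets warm for 30s
});

const res = await fetch("https://api.intercom.io/me", {
  headers: { Authorization: `Bearer ${process.env.INTERCOM_ACCESS_TOKEN}` },
  dispatcher,
});
```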
Throttle batched lookups through a queue so parallel requests stay within Intercom's rate limits.

```typescript
import PQueue from "p-queue";

const queue = new PQueue({
  concurrency: 5,   // Max parallel requests
  interval: 1000,   // Rate-limit window (ms)
  intervalCap: 100, // Max requests per window
});

// Batch-lookup contacts by ID, checking the cache first
async function getContactsBatch(
  client: IntercomClient,
  contactIds: string[]
): Promise<Map<string, Intercom.Contact>> {
  const results = new Map<string, Intercom.Contact>();
  await Promise.all(
    contactIds.map(id =>
      queue.add(async () => {
        // Check cache first
        const cached = contactCache.get(id);
        if (cached) {
          results.set(id, cached);
          return;
        }
        try {
          const contact = await client.contacts.find({ contactId: id });
          contactCache.set(id, contact);
          results.set(id, contact);
        } catch {
          // Skip not-found contacts
        }
      })
    )
  );
  return results;
}
```
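Usage sketch, with placeholder IDs:

```typescript
const contacts = await getContactsBatch(client, ["id_1", "id_2", "id_3"]);
console.log(`Resolved ${contacts.size} of 3 contacts`);
```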
Instrument every call so latency regressions against the baselines above show up in your logs.

```typescript
async function measuredCall<T>(
  name: string,
  operation: () => Promise<T>
): Promise<T> {
  const start = performance.now();
  try {
    const result = await operation();
    const duration = performance.now() - start;
    console.log(JSON.stringify({
      metric: "intercom.api.call",
      operation: name,
      duration_ms: Math.round(duration),
      status: "success",
    }));
    return result;
  } catch (error) {
    const duration = performance.now() - start;
    console.error(JSON.stringify({
      metric: "intercom.api.call",
      operation: name,
      duration_ms: Math.round(duration),
      status: "error",
      error: (error as Error).message,
    }));
    throw error;
  }
}

// Usage
const contact = await measuredCall("contacts.find", () =>
  client.contacts.find({ contactId: "abc123" })
);
```
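To compare live numbers against the baseline table, aggregate the logged durations into percentiles. A minimal in-memory sketch; the `record` and `percentile` helpers are illustrative, not part of any SDK:

```typescript
const durations: number[] = [];

function record(ms: number): void {
  durations.push(ms);
}

function percentile(p: number): number {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// e.g. alert if P95 for contacts.find drifts past the ~200ms baseline
console.log({ p50: percentile(50), p95: percentile(95) });
```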
| Issue | Cause | Solution |
|---|---|---|
| Cache stampede | Many concurrent cache misses | Use mutex/lock per key |
| Memory pressure | Cache too large | Set max on LRUCache |
| Stale data | TTL too long | Use webhook invalidation |
| Pagination timeouts | Large data set + slow network | Reduce per_page, add delays |
| Rate limit during batch | Too many parallel requests | Lower PQueue concurrency |
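For the rate-limit row above, pairing lower concurrency with retry-and-backoff on 429s is a common complement. A minimal sketch; it assumes the thrown error exposes an HTTP status code, which may differ by SDK version:

```typescript
async function withBackoff<T>(
  operation: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (error: any) {
      const status = error?.statusCode ?? error?.status;
      if (status !== 429 || attempt >= maxRetries) throw error;
      // Exponential backoff with jitter: ~1s, 2s, 4s
      const delay = 1000 * 2 ** attempt + Math.random() * 250;
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

// Usage
const found = await withBackoff(() => client.contacts.find({ contactId: "abc123" }));
```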
For cost optimization, see intercom-cost-tuning.