From fathom-pack
Optimizes Fathom API performance in Python with caching for transcripts, rate-limit-aware batch processing, and webhook recommendations over polling.
Install via: `npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin fathom-pack`. This skill is limited to a specific set of tools.
```python
import time

# FathomClient is the pack's base REST client, defined elsewhere in fathom-pack.
class CachedFathomClient(FathomClient):
    """Fathom client with a simple in-memory TTL cache for transcripts.

    Transcripts are immutable once generated, so caching them is safe.
    """

    def __init__(self, cache_ttl=300, **kwargs):
        super().__init__(**kwargs)
        self._cache = {}
        self._cache_ttl = cache_ttl

    def get_transcript_cached(self, recording_id: str) -> dict:
        key = f"transcript:{recording_id}"
        if key in self._cache:
            data, ts = self._cache[key]
            if time.time() - ts < self._cache_ttl:
                return data  # Cache hit within TTL
        result = self.get_transcript(recording_id)
        self._cache[key] = (result, time.time())
        return result
```
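A quick self-contained sketch of the cache behavior, using a stubbed `FathomClient` in place of the pack's real client (the stub's `get_transcript` just counts calls instead of hitting the API):

```python
import time

class FathomClient:
    """Stub standing in for the pack's real REST client."""

    def __init__(self, api_key=None):
        self.api_key = api_key
        self.calls = 0

    def get_transcript(self, recording_id):
        self.calls += 1  # The real client would issue an HTTP request here
        return {"recording_id": recording_id}

class CachedFathomClient(FathomClient):
    def __init__(self, cache_ttl=300, **kwargs):
        super().__init__(**kwargs)
        self._cache = {}
        self._cache_ttl = cache_ttl

    def get_transcript_cached(self, recording_id):
        key = f"transcript:{recording_id}"
        if key in self._cache:
            data, ts = self._cache[key]
            if time.time() - ts < self._cache_ttl:
                return data
        result = self.get_transcript(recording_id)
        self._cache[key] = (result, time.time())
        return result

client = CachedFathomClient(cache_ttl=300, api_key="test")
client.get_transcript_cached("rec_1")
client.get_transcript_cached("rec_1")  # Second call is served from the cache
print(client.calls)  # → 1
```

With a 300-second TTL, repeated lookups of the same recording within five minutes cost a single API call.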
Instead of polling for new meetings, use webhooks (see fathom-webhooks-events) to receive data as soon as it is ready.
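A minimal receiver sketch using only the standard library; note that the payload field names here are assumptions for illustration, not Fathom's documented webhook schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def process_event(event: dict) -> str:
    # "recording_id" is an assumed field name; check the actual
    # webhook payload schema in fathom-webhooks-events.
    recording_id = event.get("recording_id", "unknown")
    return f"received transcript for recording {recording_id}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print(process_event(event))
        self.send_response(204)  # Acknowledge quickly; do heavy work async
        self.end_headers()

# To run: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Acknowledging with a 2xx immediately and deferring heavy processing keeps the webhook endpoint from timing out.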
When you do need to backfill historical meetings, process them in rate-limit-aware batches:

```python
import time

def process_meetings_batch(client, meeting_ids, batch_size=50):
    """Fetch transcripts in batches, pausing between batches for the rate limit."""
    for i in range(0, len(meeting_ids), batch_size):
        batch = meeting_ids[i:i + batch_size]
        for mid in batch:
            client.get_transcript(mid)
        if i + batch_size < len(meeting_ids):
            time.sleep(60)  # Respect 60 req/min limit
```
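To sanity-check the batching logic without hitting the API (and without sleeping, since a single batch covers all ids here), a stub client works:

```python
import time

def process_meetings_batch(client, meeting_ids, batch_size=50):
    for i in range(0, len(meeting_ids), batch_size):
        for mid in meeting_ids[i:i + batch_size]:
            client.get_transcript(mid)
        if i + batch_size < len(meeting_ids):
            time.sleep(60)  # Only sleeps when another batch remains

class StubClient:
    """Records requested ids instead of calling the Fathom API."""

    def __init__(self):
        self.fetched = []

    def get_transcript(self, mid):
        self.fetched.append(mid)

stub = StubClient()
process_meetings_batch(stub, ["m1", "m2", "m3"])
print(stub.fetched)  # → ['m1', 'm2', 'm3']
```

Because `time.sleep` only fires when another batch remains, small backfills under one batch run without delay.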
For cost optimization, see fathom-cost-tuning.