From focus-skills
Build Distill microservices that collect, process, and summarize content from platforms (Twitter/X, Email, GitHub, YouTube). Use when implementing watch/unwatch lifecycle, sync endpoints, AI-generated summaries, or service discovery patterns. Trigger when user needs to create a content aggregation service, implement standard API patterns, or build LLM-optimized service interfaces.
```shell
npx claudepluginhub the-focus-ai/claude-marketplace --plugin focus-skills
```

This skill uses the workspace's default tool permissions.
Build microservices for the Distill content aggregation platform following standard patterns for service discovery, authentication, and data synchronization.
Use this skill when:

- Implementing the watch/unwatch user lifecycle
- Adding sync endpoints or triggering immediate syncs
- Generating AI summaries of collected content
- Building service discovery or LLM-optimized service interfaces
- Creating a new content aggregation service for the platform
Distill services are independent microservices that collect, process, and summarize content from a single platform (for example Twitter/X, Email, GitHub, or YouTube).
Every Distill service MUST implement:
| Endpoint | Method | Auth | Purpose |
|---|---|---|---|
| `/api` | GET | None | Service documentation for LLMs |
| `/capabilities` | GET | None | Service features and requirements |
| `/health` | GET | None | Health check |
| `/watch` | POST | JWT | Start monitoring a user |
| `/unwatch` | POST | JWT | Stop monitoring a user |
| `/sync` | POST | JWT | Trigger an immediate sync |
| `/metadata` | GET | JWT | Discover available data |
All protected endpoints require a JWT bearer token:

```
Authorization: Bearer <jwt_token>
```

Required JWT claims:

```json
{
  "user_id": "user_123",
  "email": "user@example.com",
  "credits_available": true
}
```
1. Set up the service structure
2. Implement the user lifecycle
   - `POST /watch`: Register a user for monitoring
   - `POST /unwatch`: Stop monitoring and clean up
3. Build the sync logic
4. Add data endpoints
   - `/content/recent`: Latest content
   - `/content/summary/{period}`: AI-generated summaries
   - `/content/search`: Search collected content
5. Generate AI summaries
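The watch/unwatch lifecycle above can be sketched as an in-memory registry. A real service would persist this state; the class name and the 30-minute sync interval are illustrative assumptions, while the record fields mirror the `/watch` response format.

```python
from datetime import datetime, timedelta, timezone

SYNC_INTERVAL = timedelta(minutes=30)  # illustrative default, not from the spec

class WatchRegistry:
    """Minimal in-memory watch/unwatch state; a real service would persist this."""

    def __init__(self) -> None:
        self._users: dict[str, dict] = {}

    def watch(self, user_id: str) -> dict:
        """Register a user for monitoring and return the /watch response body."""
        now = datetime.now(timezone.utc)
        record = {
            "status": "watching",
            "since": now.isoformat(),
            "next_sync": (now + SYNC_INTERVAL).isoformat(),
            "backfill_status": "queued",
        }
        self._users[user_id] = record
        return record

    def unwatch(self, user_id: str) -> bool:
        """Stop monitoring; returns False if the user was not being watched."""
        return self._users.pop(user_id, None) is not None

    def is_watching(self, user_id: str) -> bool:
        return user_id in self._users
```

Calling `unwatch` for an unknown user returning `False` is what lets the endpoint map that case to the `not_watching` error code.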
Example response for `GET /api`:

```json
{
  "service": "platform-feeder",
  "version": "1.0.0",
  "description": "Collects and processes content from [Platform]",
  "authentication": "Bearer JWT required",
  "endpoints": { ... },
  "usage_example": "...",
  "rate_limits": { ... }
}
```
Example response for `POST /watch`:

```json
{
  "status": "watching",
  "since": "2024-01-10T10:00:00Z",
  "next_sync": "2024-01-10T10:30:00Z",
  "backfill_status": "queued"
}
```
Example response for `GET /metadata`:

```json
{
  "user_status": {
    "watching": true,
    "since": "...",
    "last_sync": "...",
    "next_sync": "..."
  },
  "available_endpoints": [
    {
      "path": "/content/recent",
      "method": "GET",
      "description": "...",
      "item_count": 47
    }
  ],
  "statistics": { ... }
}
```
Error response format:

```json
{
  "error": "error_code",
  "message": "Human-readable message",
  "retry_after": "2024-01-10T11:00:00Z",
  "user_action_required": false
}
```
| Code | HTTP | Description |
|---|---|---|
| `not_watching` | 404 | User not registered |
| `no_platform_auth` | 403 | Platform credentials missing |
| `token_expired` | 401 | Platform token needs refresh |
| `rate_limited` | 429 | Platform API rate limit |
| `insufficient_credits` | 402 | User out of credits |
| `sync_in_progress` | 409 | Sync already running |
Choose based on needs:
Filesystem (simple, good for prototyping):
```
/data/users/{user_id}/
  raw/
    2024-01-10/content.json
  processed/
    summaries/daily_2024-01-10.json
  metadata.json
```
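With the filesystem layout above, path construction can be centralized in a few helpers. This is a sketch: the base directory comes from the layout, but the function names and date-string format are illustrative.

```python
from pathlib import PurePosixPath

DATA_ROOT = PurePosixPath("/data/users")  # base directory from the layout above

def raw_content_path(user_id: str, date: str) -> PurePosixPath:
    """Raw collected content for one day, e.g. raw/2024-01-10/content.json."""
    return DATA_ROOT / user_id / "raw" / date / "content.json"

def daily_summary_path(user_id: str, date: str) -> PurePosixPath:
    """AI-generated daily summary under processed/summaries/."""
    return DATA_ROOT / user_id / "processed" / "summaries" / f"daily_{date}.json"

def metadata_path(user_id: str) -> PurePosixPath:
    return DATA_ROOT / user_id / "metadata.json"
```

Keeping all path logic in one module makes a later switch to PostgreSQL or Redis a matter of swapping one storage adapter.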
PostgreSQL (complex queries):
```sql
CREATE TABLE user_content (
  user_id VARCHAR(255),
  platform_id VARCHAR(255),
  collected_at TIMESTAMP,
  content JSONB,
  processed BOOLEAN DEFAULT false
);
```
Redis (rate limiting, caching):
```
rate_limit:{user_id}   → remaining requests
last_sync:{user_id}    → timestamp
cache:{user_id}:recent → serialized items
```
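Key construction for the Redis layout above can be kept in small formatters so key shapes never drift between services. This sketch does string formatting only; no Redis client is involved, and the function names are illustrative.

```python
# Key formats follow the Redis layout above.
def rate_limit_key(user_id: str) -> str:
    return f"rate_limit:{user_id}"

def last_sync_key(user_id: str) -> str:
    return f"last_sync:{user_id}"

def recent_cache_key(user_id: str) -> str:
    return f"cache:{user_id}:recent"
```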
Before deploying a Distill service, verify that all standard endpoints are implemented (`/api`, `/capabilities`, `/health`, `/watch`, `/unwatch`, `/sync`, `/metadata`).

For the complete specification, including all endpoint schemas, implementation examples, and advanced patterns, see: