This skill should be used when persisting context between sessions, saving project state, loading previous session context, or managing longitudinal memory beyond beads issue tracking.
Install: `npx claudepluginhub jsamuelsen11/claude-config --plugin ccfg-core`. This skill uses the workspace's default tool permissions.
This skill defines conventions for managing longitudinal memory and context across sessions. It establishes when to use different persistence mechanisms (Serena memories, beads notes, git commits) and how to keep context organized and accessible.
Serena memories are for knowledge that needs to persist across multiple sessions and is discovery-based or decision-oriented. Use memories for:
- Design decisions with rationale
- API contracts and integration points
- Architecture choices
- Discovered constraints and limitations
- Codebase conventions
- External service configurations
Beads notes are for task-specific context that's relevant while the task is active. Use beads notes for:
- Implementation progress
- Blocking issues
- Work-in-progress notes
- Task-specific context that won't be needed after completion
Git commits are for code changes with descriptive messages. Every code change must have a commit. Commits should:
Document what changed and why:
feat(auth): add password reset flow
Implements password reset via email token. Users receive a link
valid for 1 hour. Includes rate limiting (3 requests per hour)
to prevent abuse.
Closes #142
Reference related issues/tasks:
Capture technical details:
perf(db): add composite index on user_events table
Query performance improved from 2.3s to 45ms for date range
queries. Index covers (user_id, event_type, created_at) which
matches our most common query pattern.
Before: sequential scan on 8.5M rows
After: index scan on ~10k rows per typical query
Use descriptive, kebab-case names that clearly indicate the topic and scope:
- `api-authentication-design.md` (not `auth.md` or `api_auth.txt`)
- `database-schema-decisions.md` (not `db.md` or `schema_notes.md`)
- `frontend-routing-architecture.md` (not `routes.md` or `FrontendRoutes.md`)
- `stripe-webhook-integration.md` (not `stripe.md` or `webhooks.md`)
- `performance-optimization-results.md` (not `perf.md` or `optimization.md`)

Include scope and topic in the name:
- Good: `user-service-api-contract.md` (scope: user-service, topic: API contract)
- Bad: `contract.md` (what contract? which service?)

Avoid abbreviations unless universally understood:
- Good: `api-rate-limiting-strategy.md`
- Bad: `api-rl-strat.md`

Use version suffixes only when maintaining historical records:
- `database-schema-v2.md` (current version)
- `database-schema-v1-deprecated.md` (archived old version)

Design decisions with rationale:
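The naming convention above can be checked mechanically. This is an illustrative helper (the function name and regex are assumptions, not part of the skill):

```python
import re

# Kebab-case markdown file names: lowercase words and digits joined by
# single hyphens, ending in .md (e.g. "api-authentication-design.md").
KEBAB_NAME = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*\.md$")

def is_valid_memory_name(name: str) -> bool:
    """Return True if `name` follows the kebab-case memory naming rule."""
    return bool(KEBAB_NAME.match(name))
```

Note that version suffixes like `database-schema-v2.md` pass because `v2` is itself a lowercase alphanumeric segment.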
Record not just what was decided, but why. Future sessions (and future developers) need to understand the trade-offs:
# Decision: GraphQL over REST for mobile API
## Context
Mobile app needs to fetch user profile, posts, and comments in a single request to minimize latency
on slow connections.
## Options Considered
1. REST with multiple endpoints (3 requests)
2. REST with composite endpoint (1 request, over-fetching)
3. GraphQL (1 request, precise data)
## Decision
Chose GraphQL. Mobile app can specify exactly what data it needs, reducing payload size by ~60% in
typical cases.
## Trade-offs
- Pro: Reduced network payload, better mobile performance
- Pro: Self-documenting schema with GraphQL introspection
- Con: Increased backend complexity (GraphQL server setup)
- Con: Need to implement N+1 query protection (using DataLoader)
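The N+1 protection mentioned in the trade-offs can be sketched as a minimal batching loader. This is a simplified stand-in for DataLoader, not the project's actual implementation; `batch_fn` and the class name are illustrative:

```python
class BatchLoader:
    """DataLoader-style batcher: resolves many keys in one backend call
    instead of one call per key, avoiding N+1 query patterns."""

    def __init__(self, batch_fn):
        # batch_fn takes a list of keys and returns values in the same order
        self.batch_fn = batch_fn
        self.cache = {}

    def load_many(self, keys):
        missing = [k for k in keys if k not in self.cache]
        if missing:
            for key, value in zip(missing, self.batch_fn(missing)):
                self.cache[key] = value
        return [self.cache[k] for k in keys]
```

Repeated loads hit the cache, so resolving the same user across many posts triggers only one backend fetch.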
API contracts:
Document expected request/response formats, especially for external integrations:
# Stripe Webhook Integration
## Endpoint
POST /webhooks/stripe
## Headers
- stripe-signature: HMAC signature for verification
## Payload
Standard Stripe event object
## Response
200 OK with empty body (Stripe ignores response body)
## Error Handling
- 400 for invalid signature (Stripe will retry)
- 500 for processing errors (Stripe will retry)
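The signature check in this contract follows Stripe's documented scheme: the `stripe-signature` header carries `t=<timestamp>,v1=<hmac>` parts, and the HMAC-SHA256 is computed over `"{timestamp}.{raw_body}"` with the endpoint's webhook secret. A hedged sketch (error handling and secret management omitted):

```python
import hashlib
import hmac

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str) -> bool:
    """Verify a stripe-signature header against the raw request body."""
    # Header format: "t=1492774577,v1=abc123..." (possibly more pairs).
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    signed = f"{parts['t']}.".encode() + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, parts["v1"])
```

In practice, Stripe's official SDKs provide this verification (including timestamp tolerance checks against replay attacks), and should be preferred over hand-rolling it.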
Architecture choices:
Capture the big picture decisions that affect how the system works:
# State Management Architecture
## Global State (Redux)
- User authentication/authorization
- Application-wide settings
- UI theme and preferences
## Server State (React Query)
- API data fetching/caching
- Optimistic updates
- Background refetching
## Local State (useState/useReducer)
- Form inputs
- UI-only state (modals, dropdowns)
- Component-specific state
Discovered constraints:
Document limits, quirks, and gotchas discovered through experimentation:
# AWS Lambda Constraints
## Memory/CPU
- Memory range: 128 MB to 10,240 MB
- CPU scales linearly with memory (1,769 MB = 1 vCPU)
- Our image processing needs 3GB minimum for reliable performance
## Execution Time
- Max execution: 15 minutes
- Our video processing can take 10-12 minutes for 1080p
- Need to split into chunks or use Step Functions for longer videos
## Cold Start
- Cold start with 3GB memory: ~2-3 seconds
- Warm invocations: ~100ms
- Using provisioned concurrency (5 instances) for API endpoints
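The 15-minute cap above means longer videos must be split before processing. A trivial chunk planner, purely illustrative (the 600-second default is an assumed safety margin, not a documented value):

```python
import math

def plan_chunks(duration_s: float, max_chunk_s: float = 600.0):
    """Split a job of duration_s seconds into equal chunks, each short
    enough to stay well under the Lambda 15-minute execution limit."""
    count = max(1, math.ceil(duration_s / max_chunk_s))
    length = duration_s / count
    return [(i * length, (i + 1) * length) for i in range(count)]
```

Each `(start, end)` pair could then be dispatched as a separate invocation, or as a Step Functions map state.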
Performance baselines:
Record performance measurements to track improvements/regressions:
# API Performance Baselines (2026-02-08)
## User Profile Endpoint
- p50: 120ms
- p95: 280ms
- p99: 450ms
## Search Endpoint
- p50: 350ms
- p95: 1200ms
- p99: 2400ms (needs optimization)
## Database Queries
- Most common query (user posts): 45ms avg with index
- Slowest query (analytics aggregation): 1.8s (runs async)
Session-specific temporary data:
Don't save information that's only relevant to the current session.
File listings and search results:
Don't save output from searches or file system operations; these can be regenerated on demand and become stale quickly.
Transient state:
Don't save things that change frequently or are easily derived.
At the beginning of each session, follow this protocol to load relevant context:
# List all memories to see what context exists
list_memories
Review memory titles to identify relevant ones for the current work.
# Read specific memories related to the current task
read_memory("api-authentication-design")
read_memory("database-schema-decisions")
Don't read all memories indiscriminately. Be selective based on the task.
# Check for ready tasks
bd ready
# Check in-progress tasks
bd list --status=in_progress
# View specific task details if needed
bd show <task-id>
This gives you the current work context and any blocking issues.
Briefly summarize what you learned:
"I've loaded context from memories: using JWT auth with refresh tokens, PostgreSQL database with schema version 2. Beads shows task #42 in progress (implement password reset). Previous session got blocked on email service configuration."
Then proceed with the work, informed by this context.
Update an existing memory when the new information extends or refines a topic that memory already covers.
Example: If api-authentication-design.md describes JWT auth, and you add refresh token rotation,
update the existing memory rather than creating api-authentication-refresh-tokens.md.
Create a new memory when the information covers a distinct topic, scope, or service.
Example: If you have user-service-api-contract.md and now need to document the payment service
API, create payment-service-api-contract.md as a separate memory.
Delete obsolete memories when explicitly asked; don't proactively delete them.
When deleting, consider archiving instead:
# DEPRECATED: GraphQL API Design (v1)
This approach was replaced by REST API in Feb 2026. See `rest-api-design.md` for current
implementation.
[Original content kept for historical reference...]
Keep memories focused and current, and avoid bloat: each memory should cover a single topic concisely, as in this example:
# API Error Handling Strategy
## Decision
All API endpoints return consistent error format with HTTP status codes and structured error
objects.
## Error Response Format
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "User-friendly error message",
    "details": [
      {"field": "email", "message": "Invalid email format"}
    ]
  }
}
## HTTP Status Codes
- 400: Client errors (validation, malformed requests)
- 401: Authentication required
- 403: Authenticated but not authorized
- 404: Resource not found
- 409: Conflict (duplicate resource)
- 422: Unprocessable entity (business logic error)
- 429: Rate limit exceeded
- 500: Server error (logged but not exposed to client)
## Error Codes
Standardized error codes in SCREAMING_SNAKE_CASE:
- VALIDATION_ERROR
- AUTHENTICATION_REQUIRED
- PERMISSION_DENIED
- RESOURCE_NOT_FOUND
- etc.
## Rationale
Consistent error handling makes client integration easier and reduces support burden. Structured
errors allow clients to programmatically handle specific error cases.
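The error format documented above can be built by a small helper. A sketch assuming the structure shown earlier (the function name is illustrative):

```python
def error_response(code: str, message: str, details=None):
    """Build the structured error body: {"error": {code, message, details?}}.

    `details` is an optional list of {"field": ..., "message": ...} dicts,
    included only when present (e.g. for VALIDATION_ERROR).
    """
    body = {"error": {"code": code, "message": message}}
    if details:
        body["error"]["details"] = details
    return body
```

Keeping one construction path for every endpoint is what makes the format consistent enough for clients to handle error codes programmatically.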
Example beads note:
Task #42: Implement password reset flow
**Progress:**
- Created email template
- Implemented token generation (1-hour expiry)
- Added database table for reset tokens
**Next:**
- Add rate limiting (3 requests/hour per email)
- Write integration tests
- Update API documentation
**Blockers:**
- Need SMTP credentials for staging environment
- Asked DevOps in Slack #infrastructure
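The rate limit noted in this task (3 requests per hour per email) could be implemented with a sliding window. This is an illustrative in-memory sketch, not the project's actual implementation; a production version would need shared storage such as Redis:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` events per `window_s` seconds per key."""

    def __init__(self, limit=3, window_s=3600.0):
        self.limit = limit
        self.window_s = window_s
        self.events = defaultdict(deque)  # key -> timestamps of recent events

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[key]
        while q and now - q[0] >= self.window_s:  # drop expired entries
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit; caller should return 429
        q.append(now)
        return True
```

The `now` parameter exists so tests can inject timestamps instead of sleeping.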
Example commit message:
feat(auth): implement password reset flow
Adds password reset via email token with 1-hour expiry.
Implementation:
- POST /auth/reset-password/request sends email with token
- POST /auth/reset-password/confirm validates token and updates password
- Rate limiting: 3 requests per hour per email address
Security:
- Tokens are cryptographically random (32 bytes)
- Tokens hashed before storage (SHA-256)
- Old tokens invalidated when new one requested
Closes #142
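The token handling described in this commit can be sketched as follows (function names are illustrative; the database write itself is omitted):

```python
import hashlib
import secrets

def generate_reset_token():
    """Create a cryptographically random password-reset token.

    Returns (token, token_hash): the raw token is emailed to the user,
    while only the SHA-256 hash is stored, so a database leak does not
    expose usable tokens.
    """
    token = secrets.token_hex(32)  # 32 random bytes, hex-encoded
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    return token, token_hash

def token_matches(token, stored_hash):
    """Check a presented token against the stored hash in constant time."""
    return secrets.compare_digest(
        hashlib.sha256(token.encode()).hexdigest(), stored_hash
    )
```

Invalidating old tokens when a new one is requested then reduces to deleting or overwriting the stored hash for that user.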
Use the right tool for the right job: Serena memories for durable, cross-session knowledge; beads notes for active-task context; git commits for code changes. Keep context organized, accessible, and current. Don't over-persist or under-persist; find the balance.