By flight505
Claude Code skill pack for LangChain 1.0 + LangGraph 1.0 (Python) - 34 skills covering chains, agents, RAG, middleware, checkpointing, HITL, streaming, and production patterns
npx claudepluginhub flight505/skill-forge --plugin langchain-py-pack

Wire LangChain 1.0 / LangGraph 1.0 tests into a GitHub Actions pipeline — unit tests with FakeListChatModel, VCR-gated integration tests, warning-filter policy, and eval-regression merge gates. Complements langchain-local-dev-loop (F23), which covers the inner loop; THIS covers the CI wire-up. Use when setting up GHA for a new LLM service, after a VCR cassette leak incident, or when hardening an existing pipeline. Trigger with "langchain ci", "langchain github actions", "langchain test pipeline", "vcr ci", "langchain eval gate", "pytest -W error langchain".
Paste-match catalog of 14 real LangChain 1.0 / LangGraph 1.0 exceptions with named causes and named fixes, plus a triage decision tree. Use when you have a traceback and want the specific fix, not speculative documentation. Covers ImportError (0.2/0.3 → 1.0 migration), AttributeError on AIMessage.content, KeyError in LCEL and prompts, GraphRecursionError, silent thread_id memory loss, JSON-serialization crashes, and graphs that halt without reaching END. Trigger with "langchain error", "langgraph traceback", "OutputParserException", "GraphRecursionError", "ImportError langchain", "AttributeError AIMessage content".
Works correctly with LangChain 1.0's typed content blocks on AIMessage.content — text, tool_use, image, thinking, document — across Claude, GPT-4o, and Gemini, including multi-modal composition and tool-call iteration. Use when composing multi-modal messages, iterating tool_use blocks, handling Claude's thinking content, or unifying image inputs across providers. Trigger with "langchain content blocks", "AIMessage.content", "tool_use block", "claude image input", "langchain multimodal", "thinking block replay", "claude citations".
Compose LangChain 1.0 chains with RunnableParallel, RunnableBranch, RunnablePassthrough.assign, and RunnableLambda — correct input/output shapes, debug probes, and typed composition that catches dict-shape bugs before invocation. Use when wiring multi-step chains, parallel retrievals, conditional routing, or threading state through a chain for RAG, classification, or extraction pipelines. Trigger with "runnable parallel", "runnable branch", "langchain rag composition", "passthrough assign", "langchain lcel", "runnable lambda", "debug probe".
Control LangChain 1.0 AI spend with accurate streaming token accounting, model tiering, provider-specific cache hit tuning, per-tenant budgets, and retry dedup. Use when AI spend grows faster than traffic, a cost regression lands, or you need per-tenant budget enforcement. Trigger with "langchain cost", "langchain token accounting", "langchain per-tenant budget", "langchain model tiering", "prompt cache savings".
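The per-tenant budget idea above can be sketched in plain Python. This is an illustrative stub, not the skill's implementation: the `TenantBudget` class, pricing figures, and tenant names are all hypothetical.

```python
# Per-tenant budget sketch (illustrative, stdlib only): accumulate estimated
# cost per tenant and refuse calls once the budget would be exceeded.
class TenantBudget:
    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent: dict[str, float] = {}

    def charge(self, tenant: str, tokens: int, usd_per_1k: float) -> bool:
        """Record spend; return False if the call would exceed the budget."""
        cost = tokens / 1000 * usd_per_1k
        if self.spent.get(tenant, 0.0) + cost > self.limit_usd:
            return False
        self.spent[tenant] = self.spent.get(tenant, 0.0) + cost
        return True
```

In practice the accounting would come from streamed usage metadata rather than a caller-supplied token count.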
Load and chunk documents for LangChain 1.0 RAG pipelines correctly — language-aware splitters, table-safe PDF loaders, Cloudflare-compatible web loaders, chunk-boundary strategies that survive real-world structure. Use when building a RAG pipeline, diagnosing why retrieval misquotes a table, or debugging a crawler returning blank content. Trigger with "langchain document loader", "text splitter", "chunking strategy", "pdf loader", "markdown splitter", "webbaseloader".
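The chunk-boundary strategy above can be illustrated with a minimal sliding-window chunker. This is a hypothetical helper in stdlib Python, not the LangChain splitter itself; real splitters also respect separators and document structure.

```python
# Minimal sliding-window chunker sketch: fixed size with overlap, so text
# that straddles a chunk boundary appears in both neighboring chunks.
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```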
Produce a reproducible, sanitized diagnostic bundle for a LangChain / LangGraph incident — environment snapshot, version manifest, filtered astream_events(v2) transcript, propagating callback stack, LangSmith trace URL — so a debug colleague can reproduce the failure without a live terminal. Use when triaging a production incident, filing a Discord or GitHub bug report, asking for help on the LangChain forum, or archiving a post-mortem artifact. Trigger with "langchain debug bundle", "langgraph debug dump", "langchain diagnostic export", "langsmith trace export", "astream_events dump", "langchain incident bundle".
Build a LangGraph 1.0 Deep Agent — planner + subagents + virtual filesystem + reflection loop — without the state-growth and prompt-inheritance traps. Use when building a long-horizon agent that must plan, delegate subtasks, work against a scratchpad filesystem, and reflect on progress. Trigger with "langchain deep agent", "planner subagent", "virtual filesystem agent", "reflection loop", "langgraph deep agent".
Deploy a LangChain 1.0 / LangGraph 1.0 app to Cloud Run, Vercel, or LangServe correctly — timeouts sized for chain length, cold-start mitigation, SSE anti-buffering headers, Secret Manager over `.env`. Use when prepping first prod deploy, debugging a stream that hangs behind a proxy, or diagnosing p99 latency spikes. Trigger with "langchain deploy", "langchain cloud run", "langchain vercel python", "langchain langserve", "langchain docker".
Build and query vector stores with LangChain 1.0 without getting burned by flipped score semantics, embedding-dim mismatches, reranker quirks, and chunk-splitter bugs. Use when building a RAG pipeline, choosing between FAISS / Pinecone / Chroma / PGVector, filtering by similarity score, or adding a reranker. Trigger with "langchain embeddings", "vector store similarity search", "langchain RAG retrieval", "FAISS score", "Pinecone score", "reranker score".
Enforce tenant isolation and role-based access across LangChain 1.0 chains and LangGraph 1.0 agents — per-request retriever construction, tenant-scoped rate limits, role-scoped tool allowlists, and structured audit logs. Use when building a multi-tenant SaaS, passing a SOC 2 review, or debugging a cross-tenant leak. Trigger with "langchain multi-tenant", "langchain tenant isolation", "langchain rbac", "langchain row-level security", "langchain audit log".
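A role-scoped tool allowlist reduces to a set lookup per request. A minimal sketch, with hypothetical role and tool names:

```python
# Role-scoped tool allowlist sketch (illustrative): resolve the allowed tool
# names per role at request time and reject anything outside the set.
ROLE_TOOLS = {
    "viewer": {"search_docs"},
    "admin": {"search_docs", "delete_record"},
}

def check_tool(role: str, tool_name: str) -> bool:
    return tool_name in ROLE_TOOLS.get(role, set())
```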
Build reproducible evaluation pipelines for LangChain 1.0 chains and LangGraph 1.0 agents — golden datasets, LangSmith evaluate(), ragas RAG metrics, deepeval LLM-as-judge, agent trajectory analysis, and CI gating on quality regressions. Use when setting up quality measurement for a new chain, diagnosing regression after a model switch, or building an evaluation gate for a pull request. Trigger with "langchain eval", "langsmith evaluate", "ragas", "llm-as-judge", "agent trajectory eval", "eval regression gate".
Triage LangChain 1.0 / LangGraph 1.0 production incidents — LLM-specific SLOs, provider outage runbook, latency spike decision tree, cost-overrun response, agent loop containment. Use during an on-call page, in a post-mortem, or writing the team's first LLM runbook. Trigger with "langchain incident", "llm on-call", "langchain slo", "langchain outage", "langchain cost spike", "langchain agent loop".
Build a correct LangGraph 1.0 ReAct agent with `create_react_agent` — typed tools, error propagation, recursion caps, and stop conditions that actually stop. Use when writing your first tool-calling agent, migrating from `AgentExecutor` / `initialize_agent`, or diagnosing an agent that loops on vague prompts. Trigger with "langgraph agent", "create_react_agent", "langgraph tool calling", "AgentExecutor migration", "agent loop cost".
Build a correct LangGraph 1.0 StateGraph — typed TypedDict state with reducers, nodes, edges, compile, and recursion budgets — without hitting the silent-termination and state-replacement traps. Use when writing your first LangGraph StateGraph, diagnosing why a graph halted without reaching END, or picking recursion_limit. Trigger with "langgraph stategraph", "langgraph basics", "GraphRecursionError", "langgraph conditional edges".
Persist LangGraph agent state correctly with MemorySaver and PostgresSaver — thread_id discipline, JSON-serializable state rules, time-travel, schema migration. Use when adding chat memory, migrating from ConversationBufferMemory, or time-traveling an agent state to debug an incident. Trigger with "langgraph checkpointer", "MemorySaver", "PostgresSaver", "thread_id", "langgraph time travel", "langgraph state persistence".
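The JSON-serializable state rule mentioned above can be enforced with a pre-flight check before state ever reaches a saver. A stdlib-only sketch; the function name is hypothetical:

```python
import json

# Pre-flight check sketch: checkpointed state must survive a JSON round
# trip, so non-serializable values (locks, open files, client objects)
# should be caught before they reach the checkpointer.
def find_unserializable(state: dict) -> list[str]:
    """Return the state keys whose values cannot be dumped to JSON."""
    bad = []
    for key, value in state.items():
        try:
            json.dumps(value)
        except (TypeError, ValueError):
            bad.append(key)
    return bad
```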
Build LangGraph 1.0 human-in-the-loop approval flows with `interrupt_before` / `interrupt_after` and `Command(resume=...)` — JSON-serializable state, clean resume semantics, and UI wiring for approval decisions. Use when adding an approval gate before an expensive tool call, wiring a Slack/web UI for agent approvals, or debugging a graph that crashes on interrupt. Trigger with "langgraph human in loop", "langgraph interrupt_before", "langgraph approval flow", "Command resume", "langgraph HITL".
Pick the correct LangGraph 1.0 stream_mode ("messages" vs "updates" vs "values"), wire it into SSE or WebSocket without proxy-buffering gotchas, and filter astream_events(v2) server-side before forwarding to the browser. Use when building a live-token chat UI, a per-node progress bar, a debug/time-travel view, or diagnosing a LangGraph stream that hangs over a production proxy. Trigger with "langgraph streaming", "stream_mode messages", "stream_mode updates", "stream_mode values", "langgraph SSE", "langgraph astream_events", "SSE hangs behind nginx", "cloud run streaming".
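On the wire, the SSE side of the streaming story above is just framing. A stdlib sketch of the framing convention (the keep-alive comment is one common anti-buffering tactic, not a LangGraph API):

```python
import json

# SSE framing sketch: each event is "data: <json>\n\n"; a periodic comment
# line ": keep-alive\n\n" can be sent to defeat idle-timeout proxies.
def sse_frame(event: dict) -> str:
    return f"data: {json.dumps(event)}\n\n"

def sse_keepalive() -> str:
    return ": keep-alive\n\n"
```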
Compose LangGraph 1.0 subgraphs correctly — shared state key propagation, Send / Command(graph=...) dispatch, callback scoping, per-subgraph recursion budgets, and testing each subgraph in isolation. Use when building a planner + executor, a nested agent team, or a reusable subgraph library. Trigger with "langgraph subgraph", "langgraph composition", "langgraph send", "nested agents", "langgraph state propagation", "Command(graph=...)", "langgraph subgraph callbacks".
Build a fast, deterministic local test loop for LangChain 1.0 / LangGraph 1.0 — FakeListChatModel fixtures, pytest config, VCR cassettes with key redaction, warning-filter policy. Use when adding tests to a new chain, fixing a flaky test, or making integration tests reproducible. Trigger with "langchain pytest", "FakeListChatModel", "VCR langchain", "langchain test fixtures", "langchain integration test".
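The fixture pattern behind FakeListChatModel can be shown in plain Python — this stub is illustrative and is not the langchain_core class, but it demonstrates the same idea: replay canned responses in order so unit tests never hit the network.

```python
# Deterministic fake-model sketch: cycles through scripted responses,
# making chain behavior reproducible without an API key or network call.
class FakeChatModel:
    def __init__(self, responses: list[str]):
        self.responses = responses
        self.calls = 0

    def invoke(self, _prompt: str) -> str:
        reply = self.responses[self.calls % len(self.responses)]
        self.calls += 1
        return reply
```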
Build composable middleware for LangChain 1.0 chains and LangGraph 1.0 agents — PII redaction, caching, retry, token budgets, guardrails — with ORDERING rules that avoid cache-key leakage and double-counting. Use when adding cross-cutting behavior, hardening against prompt injection, enforcing per-tenant budgets, or debugging cache-poisoning incidents. Trigger with "langchain middleware", "langgraph middleware", "PII redaction middleware", "cache middleware order", "langchain guardrails".
Invoke Claude, GPT-4o, and Gemini through LangChain 1.0 without tripping on the content-block, token-accounting, and structured-output quirks that silently break production code. Use when initializing chat models, routing across providers, iterating AIMessage content, or choosing a structured-output method. Trigger with "langchain model inference", "ChatAnthropic", "ChatOpenAI", "with_structured_output", "AIMessage content blocks", "langchain routing".
Build reliable dev / staging / prod isolation for LangChain 1.0 services — Pydantic `Settings` + `SecretStr`, cloud Secret Manager in prod, per-env prompt and model version pinning, env-specific checkpointer and observability. Use when graduating from `.env`-in-dev to real prod infra, or debugging a config that loaded the wrong values in the wrong env. Trigger with "langchain multi-env", "langchain pydantic settings", "langchain secret manager", "langchain env config", "langchain prod setup".
Wire LangSmith tracing and custom metric callbacks into a LangChain 1.0 chain or LangGraph 1.0 agent correctly — env-var spelling, subgraph propagation, per-tenant dimensions, cost and latency counters. Use when setting up observability on a new service, debugging blank traces in LangSmith, or adding per-tenant cost breakdowns. Trigger with "langchain observability", "langsmith tracing", "langchain callbacks", "langchain metrics".
Wire LangChain 1.0 / LangGraph 1.0 traces into an OpenTelemetry-native backend (Jaeger, Honeycomb, Grafana Tempo, Datadog) with LLM-specific SLOs, safe prompt-content policy, and subgraph-aware span propagation. Use when LangSmith is not the right fit (existing OTEL stack, compliance, multi-cloud) or alongside LangSmith for deep-system traces. Trigger with "langchain OTEL", "langchain opentelemetry", "langchain jaeger", "langchain honeycomb", "langchain SLO", "LLM span", "langchain tempo", "langchain datadog tracing".
Tune LangChain 1.0 / LangGraph 1.0 Python chains and agents for throughput, latency, and cost — streaming modes, explicit batch concurrency, semantic plus exact caches, persistent message history, and async-safe retriever patterns. Use when p95 latency exceeds target, batching "does not work", cost grows linearly with traffic, or a process restart wipes chat history. Trigger with "langchain performance", "langchain slow batch", "langchain throughput", "langchain p95 latency", "semantic cache hit rate".
Manage LangChain 1.0 prompts like code — LangSmith prompt hub versioning, XML-tag conventions for Claude, few-shot example selection, discriminated-union extraction schemas, and A/B test wiring. Use when taking ad-hoc prompts into version control, migrating prompts from f-strings to ChatPromptTemplate, optimizing prompts for Claude vs GPT-4o vs Gemini, or A/B testing a prompt change. Trigger with "langchain prompt hub", "langsmith prompts", "prompt versioning", "claude xml prompt", "few-shot example selector", "prompt engineering".
Rate-limit LangChain 1.0 calls correctly across multi-worker deployments — Redis-backed limiters, asyncio.Semaphore, narrow exception whitelists, and provider-specific throttle handling. Use when hitting 429s in production, scaling workers horizontally, or tuning throughput against Anthropic, OpenAI, or Gemini tier limits. Trigger with "langchain rate limit", "langchain 429", "langchain semaphore", "langchain token bucket", "anthropic rpm", "openai rpm throttling", "InMemoryRateLimiter", "redis rate limiter".
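The asyncio.Semaphore piece of the rate-limiting story above fits in a few lines. A per-process concurrency cap only — multi-worker deployments still need a shared (e.g. Redis-backed) limiter; the helper name is hypothetical:

```python
import asyncio

# Concurrency-cap sketch: at most `limit` coroutines run at once; the rest
# queue on the semaphore until a slot frees up.
async def bounded_gather(coros, limit: int = 5):
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:  # wait for a free slot before awaiting the call
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))
```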
A reference layered architecture for production LangChain 1.0 / LangGraph 1.0 services — LLM factory with version-safe defaults, chain/graph registry, retriever and tool DI, Pydantic-validated config, per-request tenant scoping, middleware ordering, checkpointer selection per environment. Use when starting a new service, refactoring a tangled chain, or onboarding a team to existing code. Trigger with "langchain architecture", "langchain llm factory", "langchain chain registry", "langchain dependency injection", "langchain project structure".
Compose LangChain 1.0 Python runnables with the production defaults the docs do not warn about: parallel batching, narrow fallbacks, and brace-safe prompts. Use when building an LCEL chain with RunnableSequence / RunnableParallel, adding resilience via `.with_fallbacks()`, tuning throughput with `.batch()` or `.abatch()`, or wrapping user input in a prompt template. Trigger with "langchain runnable", "with_fallbacks", "langchain batch", "runnable sequence", "lcel", "runnableparallel", "chain composition".
Harden a LangChain 1.0 chain or LangGraph agent against prompt injection, tool abuse, PII leakage in traces, and secrets exfiltration — wrap user content in XML tags, enforce the tool allowlist via provider-native tool calling, redact PII in middleware upstream of cache and tracing, validate outputs with Pydantic, and lock down secrets behind a secret manager. Use when prepping for a security review, responding to an incident, building a multi-tenant SaaS, or writing a threat model. Trigger with "langchain security", "prompt injection defense", "langchain tool allowlist", "langchain PII redaction", "langchain secrets management".
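The "wrap user content in XML tags" defense above is mechanical: escape tag-like text inside untrusted input, then fence it, so injected instructions stay visibly separated from the system prompt. A stdlib sketch with a hypothetical tag name:

```python
from xml.sax.saxutils import escape

# Injection-hardening sketch: untrusted input cannot close the wrapper tag
# because <, >, and & are escaped before fencing.
def wrap_user_content(text: str) -> str:
    return f"<user_input>{escape(text)}</user_input>"
```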
Migrate a LangChain 0.3.x Python codebase to LangChain 1.0 / LangGraph 1.0 without breaking production — named breaking changes, codemod patterns, and a phased rollout. Use when upgrading LangChain or LangGraph from 0.2 or 0.3 to 1.0, when hitting ImportError after an upgrade, or when preparing a migration PR. Trigger with "langchain 1.0 migration", "langchain upgrade", "LLMChain removed", "initialize_agent removed", "ConversationBufferMemory removed", "astream_log deprecated", "langchain-anthropic 1.0".
Dispatch LangChain 1.0 chain/agent events to external systems — webhooks, Kafka, Redis Streams, SNS — via async fire-and-forget callbacks, subgraph-aware wiring, and HMAC-signed delivery with idempotency keys. Use when firing webhooks on tool calls, pushing telemetry to Kafka / Redis Streams, or fanning progress to multiple subscribers without blocking the chain. Trigger with "langchain webhook", "langchain event dispatch", "langchain callback kafka", "langchain pubsub", "langchain per-tool webhook", "BaseCallbackHandler webhook", "on_tool_end webhook", "langchain analytics event".
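HMAC-signed delivery with idempotency keys, as described above, can be sketched with the stdlib alone. The header names here are illustrative conventions, not a LangChain API:

```python
import hashlib
import hmac
import json
import uuid

# Signed-delivery sketch: the receiver recomputes the HMAC over the raw
# body and uses the idempotency key to drop duplicate deliveries on retry.
def sign_event(event: dict, secret: bytes) -> tuple[bytes, dict]:
    body = json.dumps(event, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {
        "X-Signature-SHA256": signature,        # hypothetical header names
        "X-Idempotency-Key": str(uuid.uuid4()),
    }
    return body, headers

def verify(body: bytes, signature: str, secret: bytes) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```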
Ultra-compressed communication mode. Cuts ~75% of tokens while keeping full technical accuracy by speaking like a caveman.
Comprehensive UI/UX design plugin for mobile (iOS, Android, React Native) and web applications with design systems, accessibility, and modern patterns
Frontend design skill for UI/UX implementation
Creative skill for generating algorithmic and generative art. Produces visual designs using mathematical patterns, fractals, and procedural generation.
Humanise text and remove AI writing patterns. Detects and fixes 24 AI tell-tales including inflated language, promotional tone, AI vocabulary, filler phrases, sycophantic tone, and formulaic structure.
Expert guidance for Next.js Cache Components and Partial Prerendering (PPR). Proactively activates in projects with cacheComponents: true, providing patterns for 'use cache' directive, cacheLife(), cacheTag(), cache invalidation, and parameter permutation rendering.
Personal fork — local development environment for ccpi (Claude Code plugin & skills CLI).
| Directory | Purpose |
|---|---|
| packages/cli/ | ccpi CLI source (TypeScript, builds to dist/) |
| skills/ | 519 skills across 21 categories |
| plugins/ | Plugin catalog for ccpi link |
| tutorials/ | 11 Jupyter notebooks |
| templates/ | Plugin scaffolding templates |
| .claude-plugin/ | Marketplace catalog (marketplace.json) |
| scripts/ | validate-skills-schema.py, batch-remediate.py, quick-test.sh, sync-marketplace.cjs |
```sh
cd packages/cli && pnpm build
ln -sf "$(pwd)/dist/index.js" ~/.local/bin/ccpi
```
See packages/cli/QUICKGUIDE.md for full usage.