From langchain-pack
Build LangChain LCEL chains with ChatPromptTemplate, output parsers, RunnableSequence, RunnableParallel, branching, and composition for LLM workflows.
```sh
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-pack
```
Build production chains using LCEL (LangChain Expression Language). Covers prompt templates, output parsers, RunnableSequence, RunnableParallel, RunnableBranch, RunnablePassthrough, and chain composition patterns.
Prerequisites: `langchain-install-auth` completed; `@langchain/core` and at least one provider installed.

```ts
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

// Simple template
const simple = ChatPromptTemplate.fromTemplate(
  "Translate '{text}' to {language}"
);

// Multi-message template with chat history slot
const chat = ChatPromptTemplate.fromMessages([
  ["system", "You are a {role}. Respond in {style} style."],
  new MessagesPlaceholder("history"), // dynamic message injection
  ["human", "{input}"],
]);

// Inspect required variables
console.log(chat.inputVariables);
// ["role", "style", "history", "input"]

// Pre-fill some variables, leave others for later
const partial = await chat.partial({
  role: "senior engineer",
  style: "concise",
});

// Now only needs: history, input
const result = await partial.invoke({
  history: [],
  input: "Explain LCEL",
});
```
```ts
import { StringOutputParser, JsonOutputParser } from "@langchain/core/output_parsers";
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

// String output (most common)
const strParser = new StringOutputParser();

// JSON output with Zod schema
const jsonParser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string(),
    confidence: z.number(),
    sources: z.array(z.string()),
  })
);

// Get format instructions to inject into the prompt
const instructions = jsonParser.getFormatInstructions();
```
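Under the hood, a structured parser takes the raw model text, parses it as JSON, and checks the shape. A hand-rolled sketch of that idea (illustrative only; the real `StructuredOutputParser` validates against the Zod schema for you):

```typescript
// Expected shape, mirroring the Zod schema above
interface Analysis {
  answer: string;
  confidence: number;
  sources: string[];
}

function parseStructured(raw: string): Analysis {
  // Models often wrap JSON in markdown fences; strip backtick runs first
  const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim();
  const data = JSON.parse(cleaned);
  if (
    typeof data.answer !== "string" ||
    typeof data.confidence !== "number" ||
    !Array.isArray(data.sources)
  ) {
    throw new Error("OutputParserException: output does not match schema");
  }
  return data as Analysis;
}
```

When the model output fails this check you get the `OutputParserException` from the troubleshooting table; `.withStructuredOutput()` avoids the problem by having the provider constrain the output itself.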
```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Extract key points, then summarize
const extractPrompt = ChatPromptTemplate.fromTemplate(
  "Extract 3 key points from:\n{text}"
);
const summarizePrompt = ChatPromptTemplate.fromTemplate(
  "Summarize these points in one sentence:\n{points}"
);

const chain = RunnableSequence.from([
  // Step 1: extract points (the object literal becomes a runnable map)
  {
    points: extractPrompt.pipe(model).pipe(new StringOutputParser()),
  },
  // Step 2: summarize
  summarizePrompt,
  model,
  new StringOutputParser(),
]);

const summary = await chain.invoke({
  text: "Long article text here...",
});
```
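At its core, a sequence is left-to-right async function composition: each step's output becomes the next step's input. A toy sketch of the mechanics (not the LangChain implementation):

```typescript
// A step is any sync or async function of one argument
type Step = (input: any) => any;

// Compose steps left to right, awaiting each result
function sequence(...steps: Step[]) {
  return async (input: any) => {
    let value = input;
    for (const step of steps) {
      value = await step(value);
    }
    return value;
  };
}

// "Extract then summarize" without a model, to show the data flow:
const extract = (text: string) => text.split(". ").slice(0, 3);
const summarize = (points: string[]) => `Summary of ${points.length} points`;
const toyChain = sequence(extract, summarize);
```

The real `RunnableSequence` adds streaming, batching, and tracing on top, but the invoke path is this same fold over steps.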
```ts
import { RunnableParallel } from "@langchain/core/runnables";

// Run multiple chains simultaneously on the same input
const analysis = RunnableParallel.from({
  summary: ChatPromptTemplate.fromTemplate("Summarize: {text}")
    .pipe(model)
    .pipe(new StringOutputParser()),
  keywords: ChatPromptTemplate.fromTemplate("Extract 5 keywords from: {text}")
    .pipe(model)
    .pipe(new StringOutputParser()),
  sentiment: ChatPromptTemplate.fromTemplate("Sentiment of: {text}")
    .pipe(model)
    .pipe(new StringOutputParser()),
});

const results = await analysis.invoke({ text: "Your input text" });
// { summary: "...", keywords: "...", sentiment: "..." }
```
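Each branch receives the same input and the results are collected by key; conceptually this is `Promise.all` over an object. A dependency-free sketch (toy helper, not the LangChain API):

```typescript
// Run every branch concurrently on the same input, keyed by branch name
async function parallel<I>(
  branches: Record<string, (input: I) => Promise<string> | string>,
  input: I
): Promise<Record<string, string>> {
  const keys = Object.keys(branches);
  const values = await Promise.all(keys.map((k) => branches[k](input)));
  return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
}
```

Because branches run concurrently, total latency is roughly the slowest branch rather than the sum, which is why one hung branch stalls the whole parallel step.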
```ts
import { RunnableBranch } from "@langchain/core/runnables";

const technicalChain = ChatPromptTemplate.fromTemplate(
  "Give a technical explanation: {input}"
).pipe(model).pipe(new StringOutputParser());

const simpleChain = ChatPromptTemplate.fromTemplate(
  "Explain like I'm 5: {input}"
).pipe(model).pipe(new StringOutputParser());

const router = RunnableBranch.from([
  [
    (input: { input: string; level: string }) => input.level === "expert",
    technicalChain,
  ],
  // Default fallback
  simpleChain,
]);

const answer = await router.invoke({ input: "What is LCEL?", level: "expert" });
```
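The branch semantics are: check each (condition, runnable) pair in order, run the first match, and fall back to the trailing default. A plain-TypeScript sketch of that dispatch (illustrative, not the LangChain implementation):

```typescript
// A pair of predicate and handler; first matching predicate wins
type Pair<I, O> = [(input: I) => boolean, (input: I) => O];

function branch<I, O>(pairs: Pair<I, O>[], fallback: (input: I) => O) {
  return (input: I): O => {
    for (const [test, run] of pairs) {
      if (test(input)) return run(input);
    }
    return fallback(input); // no condition matched
  };
}

const route = branch<{ level: string }, string>(
  [[(i) => i.level === "expert", () => "technical"]],
  () => "simple"
);
```

Order matters: put the most specific conditions first, since evaluation stops at the first match.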
```ts
import { RunnablePassthrough } from "@langchain/core/runnables";

// Pass through original input while adding computed fields
const chain = RunnablePassthrough.assign({
  wordCount: (input: { text: string }) => input.text.split(" ").length,
  uppercase: (input: { text: string }) => input.text.toUpperCase(),
})
  .pipe(
    ChatPromptTemplate.fromTemplate(
      "The text has {wordCount} words. Summarize: {text}"
    )
  )
  .pipe(model)
  .pipe(new StringOutputParser());
```
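The key property of `assign` is that the original input survives alongside the computed fields, so downstream templates can reference both `{text}` and `{wordCount}`. A minimal sketch of the merge (toy helper, not the LangChain API):

```typescript
// Merge the original input with computed fields (spread + new keys)
async function assign<I extends object>(
  input: I,
  fields: Record<string, (input: I) => unknown>
) {
  const entries = await Promise.all(
    Object.entries(fields).map(async ([k, fn]) => [k, await fn(input)] as const)
  );
  return { ...input, ...Object.fromEntries(entries) };
}
```

Contrast with a plain map step, which would replace the input entirely and lose `{text}` for later steps.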
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

llm = ChatOpenAI(model="gpt-4o-mini")

# Sequential: prompt | model | parser
chain = ChatPromptTemplate.from_template("Summarize: {text}") | llm | StrOutputParser()

# Parallel
analysis = RunnableParallel(
    summary=ChatPromptTemplate.from_template("Summarize: {text}") | llm | StrOutputParser(),
    keywords=ChatPromptTemplate.from_template("Keywords: {text}") | llm | StrOutputParser(),
)

# Passthrough with computed fields (fetch_context is your own retrieval helper)
prompt = ChatPromptTemplate.from_template("Context: {context}\n\nAnswer: {query}")
chain = (
    RunnablePassthrough.assign(context=lambda x: fetch_context(x["query"]))
    | prompt | llm | StrOutputParser()
)
```
| Error | Cause | Fix |
|---|---|---|
| `Missing value for input` | Template variable not provided | Check `inputVariables` on your prompt |
| `Expected mapping type` | Passing a string instead of an object | Use `{ input: "text" }`, not `"text"` |
| `OutputParserException` | LLM output doesn't match the schema | Use `.withStructuredOutput()` instead of manual parsing |
| Parallel timeout | One branch hangs | Add a timeout to the model config |
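For the first two rows, a cheap guard before invoking catches both problems early: verify the input is an object, then compare its keys against the template's required variables. A sketch (the guard itself is illustrative; `inputVariables` is the real property to read them from):

```typescript
// Fail fast with the same two errors the table describes
function checkInput(
  inputVariables: string[],
  input: unknown
): asserts input is Record<string, unknown> {
  if (typeof input !== "object" || input === null) {
    throw new Error('Expected mapping type: pass { input: "text" }, not a bare string');
  }
  const missing = inputVariables.filter((v) => !(v in input));
  if (missing.length > 0) {
    throw new Error(`Missing value for input: ${missing.join(", ")}`);
  }
}
```

Calling this with `prompt.inputVariables` before `chain.invoke(...)` turns a mid-chain failure into an immediate, named error.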
Proceed to langchain-core-workflow-b for agents and tool calling.