From langchain-pack

Provides minimal TypeScript examples of LangChain LCEL chains for prompts, models, output parsers, and Zod-structured outputs. Use for new integrations, setup testing, or learning pipe syntax.

Install:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-pack
```

Minimal working examples demonstrating LCEL (LangChain Expression Language) -- the `.pipe()` chain syntax that is the foundation of all LangChain applications.
After completing langchain-install-auth setup, start with a basic chain:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Three components: prompt -> model -> parser
const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();

// LCEL: chain them with .pipe()
const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({ topic: "TypeScript" });
console.log(result);
// "Why do TypeScript developers wear glasses? Because they can't C#!"
```
Multi-message prompts with a system persona use `fromMessages`:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a {persona}. Keep answers under 50 words."],
  ["human", "{question}"],
]);

const chain = prompt
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
  .pipe(new StringOutputParser());

const answer = await chain.invoke({
  persona: "senior DevOps engineer",
  question: "What is the most important Kubernetes concept?",
});
console.log(answer);
```
Structured output with a Zod schema:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const ReviewSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  confidence: z.number().min(0).max(1),
  summary: z.string().describe("One-sentence summary"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const structuredModel = model.withStructuredOutput(ReviewSchema);

const prompt = ChatPromptTemplate.fromTemplate(
  "Analyze the sentiment of this review:\n\n{review}"
);

const chain = prompt.pipe(structuredModel);
const result = await chain.invoke({
  review: "LangChain makes building AI apps surprisingly straightforward.",
});
console.log(result);
// { sentiment: "positive", confidence: 0.92, summary: "..." }
```
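What `withStructuredOutput` buys you is that the model's raw text is parsed and validated against the schema before your code ever sees it. A dependency-free sketch of that guarantee, where `parseReview` is a hypothetical stand-in, not LangChain or Zod internals:

```typescript
type Review = {
  sentiment: "positive" | "negative" | "neutral";
  confidence: number;
  summary: string;
};

// Validate raw model text against the schema before returning a typed object.
function parseReview(raw: string): Review {
  const obj = JSON.parse(raw);
  if (!["positive", "negative", "neutral"].includes(obj.sentiment)) {
    throw new Error("invalid sentiment");
  }
  if (typeof obj.confidence !== "number" || obj.confidence < 0 || obj.confidence > 1) {
    throw new Error("invalid confidence");
  }
  if (typeof obj.summary !== "string") {
    throw new Error("invalid summary");
  }
  return obj as Review;
}

const parsed = parseReview(
  '{"sentiment":"positive","confidence":0.92,"summary":"Straightforward."}'
);
console.log(parsed.sentiment); // "positive"
```

Invalid output (a sentiment outside the enum, a confidence above 1) fails loudly at the boundary instead of leaking an untyped object into the rest of the chain.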
Streaming tokens as they arrive:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = ChatPromptTemplate.fromTemplate("Write a haiku about {topic}")
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
  .pipe(new StringOutputParser());

// Stream tokens as they arrive
const stream = await chain.stream({ topic: "coding" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
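The `for await` pattern above works because `stream()` returns an async iterable of chunks. A dependency-free sketch of the same consumption pattern, with a hypothetical `fakeStream` generator standing in for the chain:

```typescript
// Toy async generator (a stand-in for chain.stream(), not LangChain code):
// yields one chunk per word instead of one chunk per model token.
async function* fakeStream(text: string): AsyncGenerator<string> {
  for (const word of text.split(" ")) {
    yield word + " ";
  }
}

let streamed = "";
for await (const chunk of fakeStream("haiku lines arrive slowly")) {
  streamed += chunk; // with a real chain: process.stdout.write(chunk)
}
console.log(streamed.trim()); // "haiku lines arrive slowly"
```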
The equivalent chain in Python:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# LCEL uses the | operator in Python
chain = prompt | model | parser
result = chain.invoke({"topic": "LangChain"})
print(result)
```
Every component in an LCEL chain implements the Runnable interface:
| Method | Purpose |
|---|---|
| `.invoke(input)` | Single input, single output |
| `.batch(inputs)` | Process an array of inputs |
| `.stream(input)` | Yield output chunks |
| `.pipe(next)` | Chain to the next runnable |
The .pipe() method (or | in Python) creates a RunnableSequence where each step's output feeds the next step's input. Every LangChain component -- prompts, models, parsers, retrievers -- is a Runnable.
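To make the contract concrete, here is a toy `Runnable` in plain TypeScript -- an illustration of the interface shape, not LangChain's actual implementation:

```typescript
// Toy Runnable: anything with invoke/batch/stream/pipe composes the same way.
class Runnable<In, Out> {
  constructor(private fn: (input: In) => Out) {}

  invoke(input: In): Out {
    return this.fn(input);
  }

  batch(inputs: In[]): Out[] {
    return inputs.map((i) => this.fn(i));
  }

  async *stream(input: In): AsyncGenerator<Out> {
    // Real models yield many chunks; this toy yields one.
    yield this.fn(input);
  }

  pipe<Next>(next: Runnable<Out, Next>): Runnable<In, Next> {
    // A "RunnableSequence": this step's output feeds the next step's input.
    return new Runnable<In, Next>((input) => next.invoke(this.invoke(input)));
  }
}

const template = new Runnable((topic: string) => `Tell me about ${topic}`);
const fakeModel = new Runnable((prompt: string) => prompt.toUpperCase());
const chain = template.pipe(fakeModel);

console.log(chain.invoke("LCEL")); // "TELL ME ABOUT LCEL"
console.log(chain.batch(["tea", "rust"])); // [ 'TELL ME ABOUT TEA', 'TELL ME ABOUT RUST' ]
```

Because `pipe` returns another `Runnable`, composed chains expose the same four methods as their parts, which is why `prompt.pipe(model).pipe(parser)` can itself be invoked, batched, or streamed.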
| Error | Cause | Fix |
|---|---|---|
| `Missing value for input topic` | Template variable not in invoke args | Match `invoke({})` keys to template `{variables}` |
| `Cannot read properties of undefined` | Chain not awaited | Add `await` before `.invoke()` |
| `Rate limit reached` | Too many API calls | Add a delay or use `gpt-4o-mini` for testing |
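The first error is the one beginners hit most, so it is worth seeing in miniature. A toy formatter (hypothetical, not LangChain's implementation) that fails the same way when an `invoke` key does not match a template variable:

```typescript
// Substitute {variables}; throw when a template variable has no value,
// mirroring the "Missing value for input ..." failure mode.
function formatTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_match, key: string) => {
    if (!(key in values)) {
      throw new Error(`Missing value for input ${key}`);
    }
    return values[key];
  });
}

console.log(formatTemplate("Tell me a joke about {topic}", { topic: "TypeScript" }));
// "Tell me a joke about TypeScript"

try {
  formatTemplate("Tell me a joke about {topic}", { subject: "TypeScript" });
} catch (e) {
  console.log((e as Error).message); // "Missing value for input topic"
}
```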
Proceed to langchain-core-workflow-a for advanced chain composition.