Building LLM-powered React applications with the Hashbrown library. Use when the user asks to (1) Build generative UI where LLMs render React components, (2) Add client-side tool calling for LLM-app interaction, (3) Stream LLM responses in React applications, (4) Execute LLM-generated JavaScript safely in a sandbox, (5) Build browser agents or AI-powered UIs with hashbrown, (6) Control React UI from LLM output, (7) Integrate with LLM providers like OpenAI, Anthropic, Google, Azure, Bedrock, or Ollama in React apps, (8) Create chatbots, form builders, predictive text inputs, or multi-threaded conversations with LLMs, (9) Transform natural language to structured data in TypeScript React applications.
Bundled assets:
- assets/components/chat-with-voice-input/: ChatWithVoiceInput.tsx, README.md
- assets/components/client-side-tool-calling/: ClientTool.tsx, README.md
- assets/components/js-runtime-chart-generator/: ChartGenerator.tsx, README.md
- assets/components/multi-threaded-chat-ui/: MultiThreadedChatUI.tsx, README.md, hooks/useThreads.ts
- assets/components/predictive-text-input/: PredictiveTextInput.tsx, README.md
- assets/components/simple-chat/: SimpleChat.tsx, README.md
- assets/components/streaming-chat-ui/: StreamingChatUI.tsx, README.md
- assets/components/structured-data-form/: StructuredDataForm.tsx, schema.ts, README.md
- assets/components/ui-chat-with-components/: UIChat.tsx, README.md
Hashbrown is a React library for building LLM-powered applications with generative UI, client-side tool calling, streaming, and sandboxed JavaScript execution. It provides React hooks (useChat, useUiChat, etc.) that connect to a Node.js backend adapter, which securely communicates with LLM providers (OpenAI, Anthropic, Google, Azure, Bedrock, Ollama).
Architecture: React frontend (using Hashbrown hooks) + Node.js backend adapter (proxies LLM API requests)
| Hook | Multi-turn Chat | Single Input | Structured Output | Tool Calling | Generate UI |
|---|---|---|---|---|---|
| `useChat` | ✅ | ❌ | ❌ | ✅ | ❌ |
| `useStructuredChat` | ✅ | ❌ | ✅ | ✅ | ❌ |
| `useCompletion` | ❌ | ✅ | ❌ | ✅ | ❌ |
| `useStructuredCompletion` | ❌ | ✅ | ✅ | ✅ | ❌ |
| `useUiChat` | ✅ | ❌ | ✅ | ✅ | ✅ |
Use the scripts to scaffold components and servers:
# List available templates
python scripts/list-templates.py
# Generate a component
python scripts/generate-component.py simple-chat ./src/components
# Generate a backend server
python scripts/generate-server.py basic-chat-server ./backend
Available component templates:
- simple-chat - Basic text-only chat with useChat
- ui-chat-with-components - Generative UI with useUiChat and exposeComponent
- client-side-tool-calling - Tool calling with useTool
- js-runtime-chart-generator - Sandboxed JS execution for data visualization
- structured-data-form - Form generation from natural language with Skillet schemas
- streaming-chat-ui - Streaming responses with loading states
- multi-threaded-chat-ui - Multi-conversation management with threads
- predictive-text-input - Autocomplete/suggestions powered by LLM
- chat-with-voice-input - Voice input with speech recognition
Available server templates:
- basic-chat-server - Simple Express server with OpenAI adapter
- streaming-chat-server - Streaming support for real-time responses
- chat-server-with-data - Server with database/context injection
- chat-server-with-threads - Persistent conversation threads
- server-with-authentication - Auth-protected endpoints
Load reference documentation as needed for deep implementation details: core-concepts.md, structured-data.md, platform-integration.md.
Expose React components to the LLM so it can render your UI dynamically.
import { useUiChat, exposeComponent } from '@hashbrownai/react'
import { s } from '@hashbrownai/core'
import { MyCard } from './MyCard'
const exposedCard = exposeComponent(MyCard, {
name: 'MyCard',
description: 'A card to display information',
props: { title: s.string('The title'), content: s.string('The body') },
})
const { messages } = useUiChat({
components: [exposedCard],
model: 'gpt-4',
system: 'Render cards to show information to the user.',
})
// messages[1].ui will contain <MyCard title="..." content="..." />
When to use: Build browser agents, dynamic dashboards, form builders, or any UI that should adapt based on LLM decisions.
Reference: See core-concepts.md for detailed component exposure patterns.
Allow the LLM to call client-side functions to access app state or perform actions.
import { useChat, useTool } from '@hashbrownai/react'
const getUserTool = useTool({
name: 'getUser',
description: 'Get current user information',
handler: async () => ({ name: 'Jane Doe' }),
deps: [],
})
const { messages } = useChat({
tools: [getUserTool],
model: 'gpt-4',
})
When to use: Let the LLM access application state, trigger actions, or interact with external APIs.
Reference: See core-concepts.md for tool definition patterns.
Use Skillet schemas to get type-safe, validated JSON from the LLM.
import { useStructuredCompletion } from '@hashbrownai/react'
import { s } from '@hashbrownai/core'
const schema = s.object('Response', {
name: s.string('User name'),
age: s.number('User age'),
interests: s.array('List of interests', s.string('An interest')),
})
const { data } = useStructuredCompletion({
schema,
model: 'gpt-4',
prompt: 'Extract user info: John is 30 and likes hiking and reading.',
})
// data will be typed as { name: string; age: number; interests: string[] }
When to use: Extract structured information, build forms from natural language, parse documents, or transform unstructured text to JSON.
Reference: See structured-data.md for Skillet schema language details.
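To make the type-safety claim concrete, here is a hand-written sketch of the type and runtime check that the schema above implies. This is plain TypeScript for illustration only, not Hashbrown's API: the library derives the type and validates the model's JSON automatically.

```typescript
// Hand-written equivalent of the type Hashbrown infers from the schema above.
interface ExtractedUser {
  name: string;
  age: number;
  interests: string[];
}

// A minimal runtime check of the same shape, sketching the kind of
// validation Skillet performs on the LLM's JSON output before `data`
// is handed back to your component.
function isExtractedUser(value: unknown): value is ExtractedUser {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === 'string' &&
    typeof v.age === 'number' &&
    Array.isArray(v.interests) &&
    v.interests.every((i) => typeof i === 'string')
  );
}
```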
Execute LLM-generated JavaScript safely in a WASM-based QuickJS sandbox.
import { useRuntime, useToolJavaScript, useChat } from '@hashbrownai/react'
const runtime = useRuntime({ functions: [] })
const jsTool = useToolJavaScript({ runtime })
const chat = useChat({ tools: [jsTool], model: 'gpt-4' })
When to use: Let the LLM perform complex calculations, data transformations, or generate visualizations using code.
Reference: See core-concepts.md for runtime configuration.
Connect to any LLM provider via backend adapters:
// Backend server (Node.js)
import { HashbrownOpenAI } from '@hashbrownai/openai'
const stream = HashbrownOpenAI.stream.text({
apiKey: process.env.OPENAI_API_KEY,
request,
})
Supported platforms: OpenAI, Anthropic, Google, Azure, Bedrock, Ollama, Writer, and custom adapters.
Reference: See platform-integration.md for all adapter configurations.
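The adapter's job is essentially to relay the provider's stream to the browser chunk by chunk. Below is a minimal, framework-free sketch of that shape using Node's built-in http module, with a stand-in async iterable in place of the real adapter stream; the actual frame format and wiring belong to Hashbrown, so treat this as an illustration only.

```typescript
import http from 'node:http';

// Stand-in for the adapter's stream; in a real server this would come
// from e.g. HashbrownOpenAI.stream.text({ apiKey, request }).
async function* adapterStream(): AsyncGenerator<string> {
  yield 'Hello, ';
  yield 'world!';
}

const server = http.createServer(async (_req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
  for await (const chunk of adapterStream()) {
    res.write(chunk); // flush each frame to the client as it arrives
  }
  res.end();
});
```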
Quick reference:
- Scaffold a basic chat component: python scripts/generate-component.py simple-chat ./src
- Scaffold a backend server: python scripts/generate-server.py basic-chat-server ./backend
- Connect the frontend to the backend: <HashbrownProvider url="/api/chat">
- Scaffold generative UI: python scripts/generate-component.py ui-chat-with-components ./src
- Expose a component to the LLM: exposeComponent(Component, { name, description, props })
- Render LLM-chosen components: useUiChat({ components: [...] })
- Define a client-side tool: useTool({ name, description, handler })
- Attach tools to a chat: useChat({ tools: [...] })
- Stream structured output incrementally: the s.streaming modifier in a Skillet schema:
const schema = s.object('Response', {
items: s.streaming.array('Items', s.string('Item')),
})
- Limit sandbox execution time: pass an AbortSignal with a timeout to the JS runtime
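For the timeout item above, the standard AbortSignal.timeout() helper (available in Node 18+ and modern browsers) is the simplest way to produce such a signal. How the signal is handed to Hashbrown's runtime is an assumption here; check core-concepts.md for the actual option name.

```typescript
// A signal that aborts automatically after 5 seconds.
const signal = AbortSignal.timeout(5_000);

// Hypothetical wiring (option name is an assumption, not Hashbrown's
// documented API): give the signal to the sandboxed runtime so that
// runaway LLM-generated code is cancelled instead of hanging the page.
// const runtime = useRuntime({ functions: [], signal });
```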