```shell
npx claudepluginhub bitsky-tech/amphiloop --plugin AmphiLoop
```

This skill uses the workspace's default tool permissions.
Model-neutral LLM integration with protocol-driven capability declaration.
| Package | BaseLlm | StructuredOutput | ToolSelection |
|---|---|---|---|
| bridgic-llms-openai | yes | yes | yes |
| bridgic-llms-openai-like | yes | no | no |
| bridgic-llms-vllm | yes | yes | yes |
| python-dotenv | — | — | — |
Install only the LLM provider package you need. python-dotenv is required for loading .env configuration.
Installation: Run the install script to set up all dependencies:

```shell
bash "skills/bridgic-llms/scripts/install-deps.sh" "$PWD" [PROVIDER]
```
Supported providers: openai (default), openai-like, vllm. The script checks uv availability, initializes a uv project if needed, installs any missing packages via uv add, and runs uv sync to finalize the environment. When it exits successfully the project is fully initialized and ready to use — no manual uv add / uv sync follow-up is required.
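For reference, a `.env` file matching the environment variables read by the configuration example might look like the following (the variable names come from the code below; the values are placeholders, not real credentials):

```
LLM_API_KEY=sk-your-key-here
LLM_API_BASE=https://api.openai.com/v1
LLM_MODEL=gpt-4o
```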
```python
import os

from dotenv import load_dotenv

from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration

load_dotenv()

llm = OpenAILlm(
    api_key=os.environ.get("LLM_API_KEY"),
    api_base=os.environ.get("LLM_API_BASE"),
    configuration=OpenAIConfiguration(
        model=os.environ.get("LLM_MODEL", "gpt-4o"),
        temperature=0.0,
        max_tokens=16384,
    ),
    timeout=180.0,
)
```
| Provider | When to Use |
|---|---|
| OpenAILlm | Production use, need structured output or tool calling. Works with OpenAI API. |
| OpenAILikeLlm | Third-party OpenAI-compatible APIs (DashScope, etc.), only need basic chat/stream. |
| VllmServerLlm | Self-hosted vLLM inference server, full capability. |
Common pitfall: Do NOT use OpenAILikeLlm when you need structured output or tool selection — it does not implement those protocols. Use OpenAILlm instead.
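One way to enforce this pitfall check at runtime is a small guard keyed off the capability matrix above. The mapping below is transcribed by hand from that table for illustration; it is not an API exposed by the packages:

```python
# Capability matrix transcribed from the table above (illustrative only).
CAPABILITIES = {
    "OpenAILlm": {"StructuredOutput": True, "ToolSelection": True},
    "OpenAILikeLlm": {"StructuredOutput": False, "ToolSelection": False},
    "VllmServerLlm": {"StructuredOutput": True, "ToolSelection": True},
}


def require_capability(provider: str, capability: str) -> None:
    """Raise early if the chosen provider class lacks a needed protocol."""
    if not CAPABILITIES.get(provider, {}).get(capability, False):
        raise ValueError(
            f"{provider} does not implement {capability}; "
            f"use OpenAILlm or VllmServerLlm instead"
        )
```

Calling `require_capability("OpenAILikeLlm", "StructuredOutput")` then fails before any network request is made, rather than at parse time.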
All providers implement BaseLlm:
```python
from bridgic.core.model.types import Message, Role

messages = [
    Message.from_text("You are a helpful assistant.", role=Role.SYSTEM),
    Message.from_text("Hello!", role=Role.USER),
]

# Chat — complete response
response = llm.chat(messages=messages, model="gpt-4o", temperature=0.7)
print(response.message.content)

# Stream — real-time chunks
for chunk in llm.stream(messages=messages, model="gpt-4o"):
    print(chunk.delta, end="", flush=True)
```
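When the full text of a streamed reply is needed after the loop, the deltas can simply be accumulated. The sketch below assumes only that each chunk exposes a `.delta` string, as in the streaming loop above; the `FakeChunk` type is a stand-in for demonstration:

```python
from dataclasses import dataclass
from typing import Any, Iterable


def collect_stream(chunks: Iterable[Any]) -> str:
    """Join streaming deltas into the complete response text."""
    return "".join(chunk.delta for chunk in chunks)


# Stand-in chunk type; real chunks come from llm.stream(...).
@dataclass
class FakeChunk:
    delta: str


text = collect_stream([FakeChunk("Hel"), FakeChunk("lo"), FakeChunk("!")])
```

In practice you would pass `llm.stream(messages=messages)` directly to `collect_stream` when you want both live output and the final string.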
See references/llm-integration.md for:
- StructuredOutput — generate Pydantic model instances or JSON schema conformant output
- ToolSelection — function/tool calling with Tool definitions

| Scenario | Load |
|---|---|
| Full API details, all providers, advanced protocols | llm-integration.md |