litellm: activates when calling LLM APIs from Python, connecting to llamafile or other local LLM servers, switching between OpenAI, Anthropic, and local providers, implementing retry/fallback logic for LLM calls, or when code imports litellm or uses its completion() patterns.
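The completion() pattern those triggers refer to looks roughly like the sketch below: a manual fallback loop over litellm's unified completion() call. The model names, the llamafile port, and the fallback order are illustrative assumptions, and the hosted providers expect their usual API-key environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY) to be set.

```python
# Minimal sketch of the call patterns this skill targets; model names,
# the llamafile endpoint, and the fallback order are illustrative.
from litellm import completion

MESSAGES = [{"role": "user", "content": "Summarize this diff."}]

# Candidate providers, tried in order: hosted first, local last.
CANDIDATES = [
    {"model": "gpt-4o-mini"},                  # OpenAI (needs OPENAI_API_KEY)
    {"model": "claude-3-5-haiku-20241022"},    # Anthropic (needs ANTHROPIC_API_KEY)
    {"model": "openai/llamafile",              # local llamafile, OpenAI-compatible
     "api_base": "http://localhost:8080/v1"},  # llamafile's default port
]

def complete_with_fallback(messages, candidates=CANDIDATES):
    """Try each provider in turn; return the first successful response."""
    last_err = None
    for cfg in candidates:
        try:
            return completion(messages=messages, **cfg)
        except Exception as err:  # litellm maps provider errors to OpenAI-style exceptions
            last_err = err
    raise RuntimeError("all providers failed") from last_err

resp = complete_with_fallback(MESSAGES)
print(resp.choices[0].message.content)
```

Because litellm normalizes every provider behind one call signature, switching providers is just a different model string (plus an api_base for local servers), so a single fallback loop covers hosted and local backends alike.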
To install the skill:

```
/plugin marketplace add Jamie-BitFlight/claude_skills
/plugin install litellm@jamie-bitflight-skills
```

Expert guidance for Next.js Cache Components and Partial Prerendering (PPR). Proactively activates in projects with cacheComponents enabled.
Adds educational insights about implementation choices and codebase patterns (mimics the deprecated Explanatory output style)
Easily create hooks to prevent unwanted behaviors by analyzing conversation patterns
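For a sense of what such a hook looks like, here is a minimal sketch assuming Claude Code's documented PreToolUse hook protocol (the event arrives as JSON on stdin; exit code 2 blocks the tool call and routes stderr back to Claude). The rm -rf check is purely illustrative of the kind of unwanted behavior a generated hook might guard against.

```python
#!/usr/bin/env python3
# Sketch of a generated PreToolUse hook: read the event from stdin,
# block the call with exit code 2 if it matches an unwanted pattern.
import json
import sys

event = json.load(sys.stdin)

# Block one unwanted behavior: destructive recursive deletes via Bash.
if event.get("tool_name") == "Bash":
    command = event.get("tool_input", {}).get("command", "")
    if "rm -rf" in command:
        print("Blocked: recursive delete detected.", file=sys.stderr)
        sys.exit(2)  # exit code 2 = block the tool call

sys.exit(0)  # everything else proceeds normally
```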
Frontend design skill for UI/UX implementation