Build accurate mental models of AI behavior: understand context windows, probabilistic generation, hallucination causes, and predict failure modes before they occur.
/plugin marketplace add leobessa/claude-plugins-ai-fluency
/plugin install leobessa-ai-fluency@leobessa/claude-plugins-ai-fluency

This skill inherits all available tools. When active, it can use any tool Claude has access to.
AI System Literacy is Layer 1 of AI fluency: building accurate mental models of how AI systems actually work. This isn't about knowing model architectures; it's about understanding the behavior patterns that affect your work.
Core Principle: Predict failure modes before they occur.
Fluency Signal: Can anticipate likely errors before running a prompt.
Critical distinction: AI does not reason. It predicts likely next tokens based on training patterns.
Implications:
Test: Ask AI to solve a problem it couldn't have seen. Check the reasoning, not just the answer.
What it is: The maximum text AI can "see" at once (e.g., 100K-200K tokens).
Implications:
Best practices:
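To make the window limit concrete, here is a minimal sketch of budgeting context before a call. It assumes the tiktoken library's `cl100k_base` encoding as a stand-in tokenizer, and the `fit_to_budget` helper and its inputs are hypothetical; real token counts and window sizes vary by model.

```python
import tiktoken  # assumption: tiktoken's cl100k_base as a stand-in tokenizer

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Rough token count for a piece of context."""
    return len(enc.encode(text))

def fit_to_budget(chunks: list[str], budget: int = 100_000) -> list[str]:
    """Keep the highest-priority chunks (listed first) until the budget is spent.
    Anything dropped here is simply invisible to the model."""
    kept, used = [], 0
    for chunk in chunks:
        cost = count_tokens(chunk)
        if used + cost > budget:
            break  # everything past this point falls outside the window
        kept.append(chunk)
        used += cost
    return kept

# Usage: put the most important material first, since trailing chunks get cut.
context = fit_to_budget(["task instructions...", "key source document...", "older chat history..."])
```

The point of the sketch is the ordering decision: because the window is finite, curation (what you keep and where you place it) matters more than raw volume.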
What it is: Each token is sampled from a probability distribution; output is non-deterministic.
Implications:
Best practices:
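A toy sketch of why outputs differ run to run: each next token is drawn from a probability distribution rather than chosen deterministically. The vocabulary size, logits, and temperature values below are illustrative, not taken from any real model.

```python
import numpy as np

rng = np.random.default_rng()  # no fixed seed, so repeated runs can diverge

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample one token index from a softmax over logits.
    Lower temperature sharpens the distribution; it reduces variance
    but does not guarantee identical output across runs."""
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Illustrative logits over a 5-token vocabulary: the top token is merely *likely*.
logits = np.array([2.0, 1.5, 0.3, 0.1, -1.0])
print([sample_next_token(logits, temperature=0.8) for _ in range(5)])
```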
Why AI hallucinates:
| Cause | Description | Example |
|---|---|---|
| Overgeneralization | Applies patterns where they don't fit | Making up citations that "sound right" |
| Confidence inheritance | Training on confident text produces confident output | Stating false facts authoritatively |
| Gap-filling | Completes patterns even without information | Inventing plausible details |
| Recency bias | Overweights recent context | Contradicting earlier information |
| Anchoring | First framing persists even when wrong | Accepting user's false premise |
High-risk scenarios:
The model = the AI's pattern-matching capability.
The tool = the interface, context, and instructions wrapped around it.
Implications:
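A hedged sketch of the model/tool distinction: the same underlying model call behaves very differently depending on the instructions and context the tool wraps around it. `call_model` and all prompt text below are hypothetical placeholders, not a real API.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a raw model call (pattern completion only)."""
    ...

def answer_with_tooling(question: str, sources: list[str]) -> str:
    """The 'tool': wraps the same model with instructions and curated context.
    The model's capability is unchanged; the wrapper shapes its behavior."""
    prompt = (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        + "\n\n".join(sources)
        + f"\n\nQuestion: {question}"
    )
    return call_model(prompt)
```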
| Situation | Likely Error Type |
|---|---|
| Specific facts (dates, numbers) | Hallucination |
| Long-form reasoning | Logic gaps in middle |
| Multi-step tasks | Dropped steps |
| Ambiguous instructions | Assumption-filling |
| Requests for completeness | False exhaustiveness |
| Edge cases | Overgeneralization |
| Recent events | Training cutoff errors |
| Obscure topics | Confident fabrication |
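As a rough illustration of anticipating these failure modes before running a prompt, here is a sketch of a pre-flight check that flags likely error types from surface features of the prompt text. The keyword heuristics are invented for illustration and would need tuning for real use.

```python
import re

# Illustrative surface-feature heuristics mapped to the error types above.
RISK_CHECKS = [
    (r"\b(\d{4}|statistics?|figures?|dates?)\b", "Specific facts: hallucination risk, verify externally"),
    (r"\b(all|every|complete list|exhaustive)\b", "Completeness request: expect false exhaustiveness"),
    (r"\b(then|after that|step \d+)\b", "Multi-step task: watch for dropped steps"),
    (r"\b(latest|current|recent|this year)\b", "Recent events: training-cutoff errors likely"),
]

def preflight(prompt: str) -> list[str]:
    """Return predicted failure modes for a prompt before it is run."""
    return [warning for pattern, warning in RISK_CHECKS
            if re.search(pattern, prompt, flags=re.IGNORECASE)]

print(preflight("List every statistic on current market share, then summarize."))
```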
Before running a prompt, ask:
Layer 1 Complete When:
Reality: AI predicts likely completions. It doesn't "understand" intent—it follows patterns. Clarity in prompts isn't optional.
Reality: More context means more noise. AI attention is limited. Curated, structured context beats raw volume.
Reality: Default behavior is confident completion. Uncertainty acknowledgment requires explicit prompting—and even then isn't reliable.