From magic-powers
Use when productizing AI features for end users — UX patterns for AI, streaming, loading states, error handling, fallback design, reliability, and responsible AI disclosure.
Install:

```shell
npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers
```

This skill uses the workspace's default tool permissions.
Use when:

- Designing UX for a chat, copilot, or AI-assisted feature
| Pattern | When to use |
|---|---|
| Progressive disclosure | Show AI suggestion; user confirms before applying |
| Streaming response | Show tokens as they arrive (< 2s to first token) |
| Skeleton loaders | For non-streaming calls (show structure while waiting) |
| Confidence indicators | Show when the AI is uncertain (avoid false confidence) |
| Thumbs up/down | Inline feedback for quality signal + fine-tuning data |
| Edit in place | Let user correct AI output (reduces friction) |
| Regenerate | Always offer retry with optional guidance |
| Suggested follow-ups | Guide users toward productive next steps |
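Several of these patterns (progressive disclosure, edit in place, regenerate) reduce to a small state machine around one suggestion. A minimal sketch as a plain reducer; the action names and state shape are illustrative, not from any library:

```javascript
// Sketch: suggest → confirm flow as a reducer (names are assumptions).
function suggestionReducer(state, action) {
  switch (action.type) {
    case 'SUGGEST':    // AI produced a suggestion; wait for the user
      return { status: 'suggested', text: action.text, edited: false };
    case 'EDIT':       // user corrects the output in place
      return { ...state, text: action.text, edited: true };
    case 'ACCEPT':     // only now is the suggestion actually applied
      return { ...state, status: 'accepted' };
    case 'REJECT':
      return { status: 'idle', text: '', edited: false };
    case 'REGENERATE': // retry, optionally carrying user guidance
      return { status: 'loading', text: '', edited: false, guidance: action.guidance };
    default:
      return state;
  }
}
```

Keeping this as a pure function makes the confirm-before-apply rule testable independently of the UI framework.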
```jsx
// Frontend streaming pattern (React)
const [output, setOutput] = useState('');
const [isStreaming, setIsStreaming] = useState(false);

async function callAI(prompt) {
  setIsStreaming(true);
  setOutput('');
  try {
    const response = await fetch('/api/ai', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });
    if (!response.ok) throw new Error(`AI request failed: ${response.status}`);
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      // stream: true keeps multi-byte characters split across chunks intact
      setOutput(prev => prev + decoder.decode(value, { stream: true }));
    }
  } finally {
    setIsStreaming(false); // always clear the streaming state, even on error
  }
}
```
| AI error type | User-facing response |
|---|---|
| Rate limit | "High demand right now. Try in 30 seconds." + auto-retry |
| Model timeout | Show partial response + "Continue?" option |
| Safety refusal | "I can't help with that. Try rephrasing or [alternative]." |
| Low confidence | Show output with disclaimer + "Verify this information" |
| Total AI failure | Graceful degradation to manual workflow |
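The auto-retry in the rate-limit row is typically exponential backoff. A sketch under stated assumptions: the error carries a `status` field, and all names here are illustrative:

```javascript
// Hypothetical backoff schedule: 1s, 2s, 4s, ... capped at 30s.
function backoffDelays(attempts, baseMs = 1000) {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * 2 ** i, 30000));
}

// Retry only rate-limit errors; surface everything else immediately.
async function callWithRetry(fn, {
  attempts = 3,
  baseMs = 1000,
  sleep = ms => new Promise(r => setTimeout(r, ms)),
} = {}) {
  const delays = backoffDelays(attempts, baseMs);
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (err.status !== 429) throw err; // not a rate limit: don't retry
      await sleep(delays[i]);
    }
  }
  throw lastError;
}
```

Injecting `sleep` keeps the helper testable and lets the UI show the "Try in 30 seconds" countdown from the same schedule.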
Required disclosures (per EU AI Act + emerging standards):

User-facing:
- Task completion rate (with vs without AI)
- Time-to-complete (AI should save time)
- Acceptance rate (how often users keep AI output)
- Edit rate (how much users modify AI output)
- Opt-out rate (users who disable AI feature)

Quality:
- Thumbs up / down ratio
- Regeneration rate (proxy for quality)
- Error rate by type (timeout, refusal, safety)
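These metrics can be aggregated from a client-side event log. An illustrative sketch; the event shape (`{ type }`) and the event names are assumptions:

```javascript
// Assumed event types: shown, accepted, edited, regenerated, thumbs_up, thumbs_down.
function aiMetrics(events) {
  const count = type => events.filter(e => e.type === type).length;
  const shown = count('shown');
  const rate = n => (shown ? n / shown : 0); // guard against divide-by-zero
  const up = count('thumbs_up');
  const down = count('thumbs_down');
  return {
    acceptanceRate: rate(count('accepted')),
    editRate: rate(count('edited')),
    regenerationRate: rate(count('regenerated')),
    thumbsUpShare: up + down ? up / (up + down) : null, // null until any votes exist
  };
}
```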
Showing agent work to users requires different UX than single-call AI:
Displaying agent progress:
```
┌─────────────────────────────────────────┐
│ 🤖 Researching your question...         │
│                                         │
│ ✅ Searched documentation (0.8s)        │
│ ✅ Found 3 relevant sections (0.3s)     │
│ ⏳ Analyzing and synthesizing...        │
│                                         │
│ Step 3 of 4 — ~10s remaining            │
└─────────────────────────────────────────┘
```
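The footer line of the progress panel can be derived from the step list. A sketch with an assumed step shape (`{ label, status }`) and a rough per-step time estimate:

```javascript
// statuses assumed: 'done' | 'running' | 'pending'
function progressLine(steps, avgStepSeconds = 5) {
  const done = steps.filter(s => s.status === 'done').length;
  const current = Math.min(done + 1, steps.length); // the step now in flight
  const remaining = (steps.length - done) * avgStepSeconds;
  return `Step ${current} of ${steps.length} — ~${remaining}s remaining`;
}
```

In practice the estimate improves if you replace `avgStepSeconds` with a running average of completed step durations.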
Agent UX principles:
Displaying tool use transparently:
```
Agent used: 🔍 web_search("Q3 revenue report 2024")
Agent used: 📄 read_file("annual_report.pdf")
Agent used: 🧮 calculate(formula="revenue * 0.15")
```
Collapsible by default — show on hover/expand for curious users.
For RAG-generated content, attribution builds trust:
```
Answer: The product launch is scheduled for Q2 2025.

Sources used:
[1] Product Roadmap 2025.pdf — p.3: "Q2 2025 launch target"
[2] Engineering Timeline.xlsx — Sheet: Milestones
```
Implementation:

```javascript
// Track citations during RAG generation
const response = await generateWithCitations(query, retrievedChunks);
// response: { answer: "...", citations: [{ source, page, quote }] }

// Render with inline citation markers
renderWithCitations(response.answer, response.citations);
// → "The launch is scheduled for Q2 [1]" with expandable citation [1]
```
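The source-list half of this is straightforward to sketch as a pure formatter. The citation field names (`source`, `page`, `quote`) are assumptions carried over from the comment above:

```javascript
// Produces lines like: [1] Product Roadmap 2025.pdf — p.3: "Q2 2025 launch target"
function formatSources(citations) {
  return citations.map((c, i) => {
    let line = `[${i + 1}] ${c.source}`;
    if (c.page != null) line += ` — p.${c.page}`;
    if (c.quote) line += `: "${c.quote}"`;
    return line;
  });
}
```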
When to show citations:
Accessibility for AI-specific UI elements:
```jsx
// Streaming text — announce completion to screen readers
<div
  role="log"          // live region for screen readers
  aria-live="polite"  // don't interrupt; announce when idle
  aria-label="AI response"
>
  {streamingText}
</div>

// Loading state — meaningful label, not just a spinner
<div role="status" aria-label="AI is generating a response, please wait">
  <Spinner />
  <span className="sr-only">Generating response...</span>
</div>

// Confidence indicator — explain what the percentage means
<span
  title="AI confidence: 85% - This answer is likely accurate but verify for important decisions"
  aria-label="85% confidence"
>
  ●●●●○
</span>
```
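The dot scale itself is a one-line mapping from a 0–1 confidence score. A minimal sketch:

```javascript
// Maps e.g. 0.85 to "●●●●○" on a five-dot scale.
function confidenceDots(confidence, dots = 5) {
  const clamped = Math.min(1, Math.max(0, confidence)); // defend against bad scores
  const filled = Math.round(clamped * dots);
  return '●'.repeat(filled) + '○'.repeat(dots - filled);
}
```

Keep the numeric percentage in `aria-label`/`title` as shown above; the dots alone are not accessible.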
Mobile patterns:
Internationalization of AI responses:
```jsx
// AI error messages need translation
const AI_ERROR_MESSAGES = {
  rate_limit: t('ai.error.rateLimitMessage'), // localized
  timeout: t('ai.error.timeoutMessage'),
  safety_refusal: t('ai.error.safetyMessage'),
};

// RTL support for streaming text
<div dir="auto" lang={userLocale}> {/* auto-detects RTL */}
  {streamingText}
</div>
```
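A lookup like `t()` should fall back to English, then to the raw key, so a missing translation never renders `undefined` in an error state. A sketch; the catalog structure and the German string are illustrative, not from any real locale file:

```javascript
// Hypothetical per-locale message catalogs.
const MESSAGES = {
  en: { 'ai.error.rateLimitMessage': 'High demand right now. Try in 30 seconds.' },
  de: { 'ai.error.rateLimitMessage': 'Hohe Auslastung. Versuchen Sie es in 30 Sekunden erneut.' },
};

function t(key, locale = 'en') {
  const catalog = MESSAGES[locale] || {};
  // fallback chain: requested locale → English → the key itself
  return catalog[key] ?? MESSAGES.en[key] ?? key;
}
```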
Related skills:

- ai-safety-guardrails: output safety before showing to the user
- llm-observability: monitor the metrics defined above
- prompt-engineering: improve acceptance rate and reduce regenerations