From looplia-writer
Generates structured ContentSummary JSON with 16 fields from media-reviewer analysis. Use after content review to produce complete documentation output.
Install:
npx claudepluginhub memorysaver/looplia-core --plugin looplia-writer
This skill uses the workspace's default tool permissions.
Transforms media-reviewer analysis into structured JSON documentation.
Takes your media-reviewer analysis and produces ContentSummary JSON with all 16 required fields.
You should have already used the media-reviewer skill, which provides:
contentId (string)
headline (string, 10-200 chars)
tldr (string, 20-500 chars)
bullets (string[], 1-10 items)
tags (string[], 1-20 items)
sentiment ("positive" | "neutral" | "negative")
category (string)
score.relevanceToUser (number, 0-1)
overview (string, min 50 chars)
keyThemes (string[], 3-7 items)
detailedAnalysis (string, min 100 chars)
narrativeFlow (string, min 50 chars)
coreIdeas (CoreIdea[], 1-10 items). Each item has: concept, explanation, and examples (see the example output below)
importantQuotes (Quote[], 0-20 items). Each item has: text, timestamp (for video/audio), and context (see the example output below)
context (string, min 20 chars)
relatedConcepts (string[], 0-15 items)
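The field list above maps to types like the following. This is a sketch, not the skill's own source: the CoreIdea and Quote item fields are inferred from the example output below, and making timestamp optional is an assumption.

```typescript
// Sketch of the ContentSummary schema inferred from the field list.
// Constraints that TypeScript cannot express are noted in comments.
interface CoreIdea {
  concept: string;
  explanation: string;
  examples: string[];
}

interface Quote {
  text: string;
  timestamp?: string; // e.g. "12:34"; for video/audio content (assumed optional)
  context: string;
}

interface ContentSummary {
  contentId: string;
  headline: string;                    // 10-200 chars
  tldr: string;                        // 20-500 chars
  bullets: string[];                   // 1-10 items
  tags: string[];                      // 1-20 items
  sentiment: "positive" | "neutral" | "negative";
  category: string;
  score: { relevanceToUser: number };  // 0-1
  overview: string;                    // min 50 chars
  keyThemes: string[];                 // 3-7 items
  detailedAnalysis: string;            // min 100 chars
  narrativeFlow: string;               // min 50 chars
  coreIdeas: CoreIdea[];               // 1-10 items
  importantQuotes: Quote[];            // 0-20 items
  context: string;                     // min 20 chars
  relatedConcepts: string[];           // 0-15 items
}

// The 16 required top-level fields, handy for a quick runtime presence check.
const REQUIRED_FIELDS: (keyof ContentSummary)[] = [
  "contentId", "headline", "tldr", "bullets", "tags", "sentiment",
  "category", "score", "overview", "keyThemes", "detailedAnalysis",
  "narrativeFlow", "coreIdeas", "importantQuotes", "context", "relatedConcepts",
];
```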
For video/audio content, use timestamps in these formats:
0:30 - 30 seconds
2:45 - 2 minutes 45 seconds
1:30:00 - 1 hour 30 minutes

Example output:

{
  "contentId": "abc123",
  "headline": "Constitutional AI introduces a novel approach to aligning language models through self-critique",
  "tldr": "This video explains Constitutional AI, Anthropic's method for training helpful and harmless AI assistants. The approach uses a set of principles (a 'constitution') to guide the model's self-improvement, reducing the need for human feedback while maintaining safety.",
  "bullets": [
    "Constitutional AI uses self-critique guided by explicit principles",
    "The method reduces reliance on human feedback for safety training",
    "Models learn to identify and correct their own harmful outputs"
  ],
  "tags": ["ai", "safety", "alignment", "constitutional-ai", "anthropic", "rlhf"],
  "sentiment": "positive",
  "category": "video",
  "score": { "relevanceToUser": 0.85 },
  "overview": "This comprehensive video from Anthropic introduces Constitutional AI...",
  "keyThemes": [
    "AI Safety and Alignment",
    "Self-supervised learning for safety",
    "Reducing human feedback requirements",
    "Explicit principles for AI behavior"
  ],
  "detailedAnalysis": "The video opens with a clear problem statement...",
  "narrativeFlow": "The presentation follows a classic problem-solution structure...",
  "coreIdeas": [
    {
      "concept": "Constitutional AI",
      "explanation": "An alignment approach where AI models critique and revise their own outputs based on explicit principles",
      "examples": ["A model generating a harmful response, then self-critiquing"]
    }
  ],
  "importantQuotes": [
    {
      "text": "The key insight is that we can use AI to supervise AI",
      "timestamp": "12:34",
      "context": "Explaining the core mechanism that makes Constitutional AI scalable"
    }
  ],
  "context": "This video builds on prior work in RLHF...",
  "relatedConcepts": ["RLHF", "red teaming", "scalable oversight", "AI alignment"]
}
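Timestamps like "12:34" above follow a M:SS / H:MM:SS convention. A small helper that produces this format from a duration in seconds might look like the following (a hypothetical utility for illustration, not part of the skill):

```typescript
// Format a duration in seconds in the documented timestamp style:
// under one hour -> M:SS, one hour or more -> H:MM:SS.
function formatTimestamp(totalSeconds: number): string {
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  const pad = (n: number) => String(n).padStart(2, "0");
  return hours > 0
    ? `${hours}:${pad(minutes)}:${pad(seconds)}`
    : `${minutes}:${pad(seconds)}`;
}
```

For example, formatTimestamp(30), formatTimestamp(165), and formatTimestamp(5400) produce the three documented forms: "0:30", "2:45", and "1:30:00".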
Before outputting, verify:
All 16 fields are present
String lengths and array item counts are within the bounds listed above
sentiment is exactly one of "positive", "neutral", or "negative"
The output is valid JSON
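A minimal runtime check for some of the documented constraints could be sketched as follows. The thresholds come from the field list above; the function itself is illustrative and not part of the skill.

```typescript
// Sketch: check a few documented ContentSummary constraints.
// Returns a list of violations; an empty list means the checks passed.
function checkConstraints(s: any): string[] {
  const errors: string[] = [];
  const within = (v: string, lo: number, hi: number) =>
    v.length >= lo && v.length <= hi;
  if (typeof s.headline !== "string" || !within(s.headline, 10, 200))
    errors.push("headline must be 10-200 chars");
  if (typeof s.tldr !== "string" || !within(s.tldr, 20, 500))
    errors.push("tldr must be 20-500 chars");
  if (!Array.isArray(s.bullets) || s.bullets.length < 1 || s.bullets.length > 10)
    errors.push("bullets must have 1-10 items");
  if (!["positive", "neutral", "negative"].includes(s.sentiment))
    errors.push('sentiment must be "positive", "neutral", or "negative"');
  if (!Array.isArray(s.keyThemes) || s.keyThemes.length < 3 || s.keyThemes.length > 7)
    errors.push("keyThemes must have 3-7 items");
  return errors;
}
```

Running checkConstraints on the example output above should return an empty list.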