Design and implement formative, summative, and developmental evaluations using logic models and mixed methods
Designs and implements program evaluations using logic models and mixed methods to assess effectiveness.
Design and implement rigorous evaluations of social programs and policy interventions using established frameworks.
The Program Evaluation skill enables the design and implementation of formative, summative, and developmental evaluations using logic models, theory of change frameworks, and mixed-methods approaches to assess program effectiveness and inform improvement.
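The logic-model framework mentioned above links a program's resources to its intended results through a chain of stages (inputs, activities, outputs, outcomes, impact). As a minimal sketch, assuming hypothetical field names and example data, such a model could be represented as a simple data structure:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic-model sketch: each stage links resources to intended results."""
    inputs: list = field(default_factory=list)      # resources invested in the program
    activities: list = field(default_factory=list)  # what the program does with those resources
    outputs: list = field(default_factory=list)     # direct, countable products of activities
    outcomes: list = field(default_factory=list)    # short- and medium-term changes
    impact: list = field(default_factory=list)      # long-term change the program aims for

# Hypothetical tutoring program used purely for illustration
model = LogicModel(
    inputs=["funding", "trained tutors"],
    activities=["weekly tutoring sessions"],
    outputs=["120 students tutored per term"],
    outcomes=["improved reading scores"],
    impact=["higher graduation rates"],
)
print(model.outputs[0])
```

A formative evaluation would typically probe the inputs-to-outputs links (is the program being delivered as designed?), while a summative evaluation focuses on the outcomes and impact stages.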
Activates when the user asks about AI prompts, needs prompt templates, wants to search for prompts, or mentions prompts.chat. Use for discovering, retrieving, and improving prompts.
Search, retrieve, and install Agent Skills from the prompts.chat registry using MCP tools. Use when the user asks to find skills, browse skill catalogs, install a skill for Claude, or extend Claude's capabilities with reusable AI agent components.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks, with a focus on the advanced prompt-based hooks API.
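A hook configuration of the kind described above pairs an event name with a matcher and a command to run. The following is a sketch only, under the assumption of a hypothetical validation script shipped inside the plugin (the script path and matcher are illustrative, not part of any real plugin):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/validate-command.sh"
          }
        ]
      }
    ]
  }
}
```

Here `${CLAUDE_PLUGIN_ROOT}` lets the hook reference files relative to the plugin's install location, so the configuration works regardless of where the plugin is installed.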