From auto-claude-skills
Query PostHog metrics, synthesize outcome report, create follow-up Jira work (gated)
`npx claudepluginhub damianpapadopoulos/auto-claude-skills`

This skill uses the workspace's default tool permissions.
Query analytics for a shipped feature, synthesize an outcome report, and optionally create follow-up Jira work. Entered independently after shipping — days or weeks later.
Check which MCP tools are available:
Tier 1 — PostHog MCP:
If you have access to `query-run`, `get-experiment`, `list-experiments`, `get-feature-flag`, or `create-annotation` as MCP tools, use Tier 1.
Tier 2 — Manual Metrics: If no PostHog MCP tools are available, ask the user to provide metrics directly:
"I don't have PostHog MCP access. Please share any of the following:
- Dashboard screenshots or metric summaries
- Adoption numbers, funnel data, or error rates
- Experiment results if applicable
- Any specific concerns about the shipped feature"
Check for a baseline in `~/.claude/.skill-learn-baselines/`:
- Read the baseline's `shipped_at`, `ship_method`, `hypotheses`, and `jira_ticket` fields.
- If `ship_method` is `"pull_request"`, verify the PR was actually merged before proceeding (check `pr_url` via `gh pr view`).
- If the feature cannot be identified from a baseline, ask: "Which feature should I review? Please provide the feature name, branch name, or Jira ticket ID."
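As a minimal sketch of this check (the feature name, file contents, and values below are hypothetical; the `gh pr view` invocation is the one named above):

```python
import subprocess

def pr_merged(pr_url: str) -> bool:
    """Ask the GitHub CLI whether the PR behind this ship was actually merged."""
    out = subprocess.run(
        ["gh", "pr", "view", pr_url, "--json", "state", "--jq", ".state"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip() == "MERGED"

def needs_merge_check(baseline: dict) -> bool:
    # Only pull_request ships require gh verification before the review proceeds.
    return baseline.get("ship_method") == "pull_request" and bool(baseline.get("pr_url"))

# Hypothetical baseline contents, mirroring the fields named above.
baseline = {
    "shipped_at": "2024-05-01",
    "ship_method": "pull_request",
    "pr_url": "https://github.com/example/repo/pull/123",
    "hypotheses": None,
    "jira_ticket": "PROJ-42",
}
```

If `needs_merge_check(baseline)` is true, call `pr_merged(baseline["pr_url"])` and abort the review when it returns false.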
Tier 1 (PostHog MCP available):
- Use `query-run` with HogQL to query adoption events since `shipped_at`.
- If the baseline has `hypotheses`, use each hypothesis's `metric` field to target specific events/properties instead of generic adoption queries.
- Use `list-experiments` to find experiments linked to the feature.
- Use `get-experiment` for results, significance, and variant performance.
- Use `get-feature-flag` for rollout percentage and targeting rules.
- Use `query-run` for error events associated with the feature.
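As an illustrative sketch, a Tier 1 adoption query could be built like this before being passed to `query-run` (the event name and ship date are hypothetical, and the exact HogQL shape is an assumption, not the skill's prescribed query):

```python
# Hypothetical values; in practice these come from the baseline file.
shipped_at = "2024-05-01"
event_name = "checkout_v2_used"

# HogQL string handed to the query-run tool: daily usage since ship.
hogql = f"""
SELECT toDate(timestamp) AS day,
       count() AS events,
       count(DISTINCT person_id) AS users
FROM events
WHERE event = '{event_name}'
  AND timestamp >= toDateTime('{shipped_at} 00:00:00')
GROUP BY day
ORDER BY day
"""
```

Comparing the first and last rows of the result gives the trend direction reported below.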
Tier 2 (Manual):
- If the baseline has `hypotheses`, present each hypothesis and its `metric` to the user: "For H1 ([description]), I need the current value of [metric]. What is it?"

Present a structured report:
Feature: [name] | Shipped: [date] | Branch: [name]
Adoption: [metrics summary — event counts, trend direction, comparison to pre-ship baseline]
Quality: [error rates, regression indicators]
Experiments: [results if applicable — significance, winning variant, effect size]
Assessment: One of:
Hypothesis Validation (when baseline has non-null hypotheses):
| ID | Hypothesis | Metric | Baseline | Target | Actual | Status |
|---|---|---|---|---|---|---|
| H1 | [description] | [metric] | [baseline] | [target] | [measured value] | [status] |
Status values:
- Confirmed — actual meets or exceeds target
- Not confirmed — actual does not meet target
- Inconclusive — insufficient data, or the validation window has not elapsed
- Partially confirmed — directionally correct but below the target threshold

When `hypotheses` is null in the baseline (or no baseline was found): skip this section entirely. Fall back to the existing generic metrics flow with no behavioral change.
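The status rules can be sketched as a small classifier, assuming a higher-is-better metric (invert the comparisons for metrics like error rate where lower is better):

```python
from typing import Optional

def hypothesis_status(baseline: float, target: float,
                      actual: Optional[float],
                      window_elapsed: bool = True) -> str:
    """Map a measured value onto the status values listed above."""
    if actual is None or not window_elapsed:
        return "Inconclusive"
    if actual >= target:
        return "Confirmed"
    if actual > baseline:
        # Moved in the right direction but short of the target threshold.
        return "Partially confirmed"
    return "Not confirmed"
```

For example, a hypothesis with baseline 10, target 20, and a measured value of 15 is partially confirmed.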
Recommendations: Specific next actions based on the assessment.
Present the report and ask:
"Based on this outcome review, would you like me to:
- Close the loop — no follow-up needed
- Create follow-up Jira tickets — I'll draft tickets for the recommended actions (requires your approval before creation)
- Investigate further — dig deeper into a specific metric or regression"
Wait for the user's choice.
If "Create follow-up tickets" (and Atlassian MCP available):
- Use `createJiraIssue` to create each ticket.
- Use `addCommentToJiraIssue` on the original ticket with the outcome summary.

If Atlassian MCP unavailable:
"I don't have Atlassian MCP access. Here are the recommended follow-up tickets — please create them manually: [formatted ticket descriptions]"
If follow-up work was identified:
"If follow-up work is needed, invoke Skill(auto-claude-skills:product-discovery) or Skill(superpowers:brainstorming) to begin the next cycle."
If the loop is closed:
"Outcome review complete. The feature loop is closed."