# LinkedIn Analytics (from the Copywriter plugin)

Use this skill when the user asks to 'analyze my LinkedIn performance', asks 'what content is working', or mentions 'LinkedIn metrics'.

Install: `npx claudepluginhub jamon8888/cc-suite --plugin Copywriter`

This skill uses the workspace's default tool permissions.
Numbers tell a story. This skill translates "Likes" and "Views" into actionable strategy. It answers the question: "Is my content strategy actually working?"
┌─────────────────────────────────────────────────────────────────┐
│ STANDALONE (always works)                                       │
│ ✓ Engagement Rate Calculation                                   │
│ ✓ Format Performance Analysis (Carousel vs Text vs Video)       │
│ ✓ Topic Resonance Mapping                                       │
├─────────────────────────────────────────────────────────────────┤
│ SUPERCHARGED (connect puppeteer-mcp / linkedin-mcp)             │
│ + Browser Investigation: Scrapes public profile stats.          │
│ + Competitor Spy: Analyzes top-performing posts of rivals.      │
│ + Real-Time Fetch: Grabs latest metrics without API keys.       │
└─────────────────────────────────────────────────────────────────┘
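The standalone analyses need nothing beyond the post data you feed them. A minimal sketch of the engagement-rate and format-performance calculations in Python, assuming posts arrive as plain dicts (the field names and sample records are illustrative, not a fixed schema):

```python
from collections import defaultdict

# Illustrative post records; field names are assumptions, not a fixed schema.
posts = [
    {"title": "How we cut churn 30%", "format": "text", "views": 15200, "likes": 610, "comments": 74},
    {"title": "SaaS pricing teardown", "format": "carousel", "views": 4300, "likes": 180, "comments": 22},
]

def engagement_rate(post: dict) -> float:
    """Engagement Rate = (Likes + Comments) / Views."""
    return (post["likes"] + post["comments"]) / post["views"]

# Format Performance Analysis: mean engagement per format, best first.
by_format = defaultdict(list)
for post in posts:
    by_format[post["format"]].append(engagement_rate(post))

for fmt, rates in sorted(by_format.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{fmt}: {sum(rates) / len(rates):.2%} avg engagement ({len(rates)} posts)")
```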
Data sources:
- `${CLAUDE_PLUGIN_ROOT}/data/2-Domaines/business-profile.json` (Goals)
- `${CLAUDE_PLUGIN_ROOT}/data/2-Domaines/analytics-history.json` (Past Performance)

MCP boosts:
- If puppeteer-mcp is active: use `browser_navigate` to fetch real-time data from the profile URL.
- If linkedin-mcp is active: use the API to fetch stats.

Core metrics:
- Engagement Rate = (Likes + Comments) / Views
- Click-Through Rate (CTR) = Clicks / Views

Hand weak posts to antislop-expert to punch them up.

The report template:

# 📈 Performance Intelligence Report
**Period**: Last 30 Days
**Overall Grade**: B+
## 🏆 The Winners (Re-purpose these!)
1. **[Post Title]**
- 👁️ Views: 15,200
- 💬 Engagement: 4.5%
- 💡 _Why it won_: The "How-to" structure combined with a personal failure story.
## 📉 The Losers (Learn from these)
1. **[Post Title]**
- 👁️ Views: 800
- 💡 _Why it failed_: The hook was vague ("Thoughts on AI"). No clear value prop.
## 🧭 Strategic Directives (Next 30 Days)
### START (New Experiments)
- Try "Carousel" format for technical guides.
### STOP (Resource Drains)
- Stop posting "Generic Motivational Quotes".
### CONTINUE (Double Down)
- Keep writing "Contrarian Takes" on SaaS Pricing.
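Continuing the standalone sketch above (same hypothetical `posts` list and `engagement_rate` helper), the Winners and Losers sections can be filled by ranking on engagement; the top-3 / bottom-3 cutoffs are an assumption:

```python
# Continues the standalone sketch above: reuses `posts` and `engagement_rate`.
ranked = sorted(posts, key=engagement_rate, reverse=True)
winners, losers = ranked[:3], ranked[-3:]  # cutoffs are illustrative

for post in winners:
    print(f"🏆 {post['title']}: {post['views']:,} views, {engagement_rate(post):.1%} engagement")
for post in losers:
    print(f"📉 {post['title']}: {post['views']:,} views, {engagement_rate(post):.1%} engagement")
```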
import os

SENTINEL_ROOT = os.path.join(os.environ["CLAUDE_PLUGIN_ROOT"], "..", "sentinel-v8")
sentinel_installed = os.path.isfile(os.path.join(SENTINEL_ROOT, ".claude-plugin", "plugin.json"))
Skip this section entirely if sentinel_installed is False. LinkedIn Analytics works identically without it.
Two activation points — pre-publish and post-analysis. Both are opt-in; user must explicitly enable prediction tracking.
### Pre-publish — calibration prediction (opt-in)
When a post is finalized in linkedin-post or linkedin-scheduler, optionally prompt:
"Track your prediction? State expected impressions and confidence (e.g. '500 impressions, 60% confident'). Takes 5 seconds, builds your content intuition over time."
If user agrees, record via calibration-coach:
{
"decision": "LinkedIn post: [first 8 words of post]",
"prediction": "[N] impressions within 7 days",
"confidence": [stated_confidence],
"review_date": "[publish_date + 7 days]",
"resolved": false,
"context": { "format": "[post format]", "topic": "[topic]", "day": "[day of week]" }
}
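A sketch of what recording that entry could look like, assuming the ledger is the flat JSON array read by calibration.py below (the `record_prediction` helper and the array layout are assumptions):

```python
import json
from datetime import date, timedelta
from pathlib import Path

LEDGER = Path("../sentinel-v8/data/decision-ledger.json")

def record_prediction(post_text: str, impressions: int, confidence: float,
                      fmt: str, topic: str, publish_date: date) -> None:
    """Append one calibration entry shaped like the template above."""
    entry = {
        "decision": f"LinkedIn post: {' '.join(post_text.split()[:8])}",
        "prediction": f"{impressions} impressions within 7 days",
        "confidence": confidence,
        "review_date": (publish_date + timedelta(days=7)).isoformat(),  # publish_date + 7 days
        "resolved": False,
        "context": {"format": fmt, "topic": topic, "day": publish_date.strftime("%A")},
    }
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else []
    ledger.append(entry)
    LEDGER.write_text(json.dumps(ledger, indent=2))
```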
### Post-analysis — SM2 check (attribution)
Load the SM2 bias entry from ../sentinel-v8/domains/strategy-marketing/biases.yaml.
When linkedin-analytics produces a Start / Stop / Continue recommendation, apply one check before finalizing each recommendation:
"What else changed in the [period] that could explain this result?"
Require one alternative explanation per major conclusion. Example: if carousel posts outperformed during a month when posting frequency also doubled, rule out frequency before crediting the format. Flag any recommendation that attributes performance to a single variable without ruling out confounders.
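One way to mechanize the check, assuming biases.yaml maps bias IDs to entries with a `check` field (the YAML shape and the `vet` helper are assumptions):

```python
import yaml  # PyYAML

with open("../sentinel-v8/domains/strategy-marketing/biases.yaml") as f:
    biases = yaml.safe_load(f)

# Assumed shape: {"SM2": {"check": "What else changed in the [period] ..."}}.
sm2_check = biases["SM2"]["check"]

def vet(recommendation: str, alternative: str, ruled_out: bool) -> dict:
    """Attach the SM2 confounder check to one Start/Stop/Continue recommendation."""
    return {
        "recommendation": recommendation,
        "check": sm2_check,
        "alternative_explanation": alternative,
        "confidence": "High" if ruled_out else "Medium",  # per the attribution rule above
    }
```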
### Calibration review (when 10+ predictions resolved)
Run calibration-coach analysis:
python3 ../sentinel-v8/skills/decision-hygiene/scripts/calibration.py --ledger ../sentinel-v8/data/decision-ledger.json --filter linkedin
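The 10-prediction gate can be enforced before shelling out to the script; this sketch reuses the assumed ledger layout from the recording example above:

```python
import json
import subprocess
from pathlib import Path

ledger_path = Path("../sentinel-v8/data/decision-ledger.json")
entries = json.loads(ledger_path.read_text())
resolved = [e for e in entries
            if e.get("resolved") and e.get("decision", "").startswith("LinkedIn post:")]

if len(resolved) >= 10:  # the 10-prediction threshold from above
    subprocess.run(
        ["python3", "../sentinel-v8/skills/decision-hygiene/scripts/calibration.py",
         "--ledger", str(ledger_path), "--filter", "linkedin"],
        check=True,
    )
```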
Output: if predictions are tracked, append the following to the analytics report:
## Prediction Accuracy (Sentinel)
Posts tracked: [N] | Resolved: [N]
[If enough data]:
Your accuracy pattern:
- [Format/topic] posts: you predict [X], typically get [Y] ([over/under] by [Z%])
- Best-calibrated: [format] posts
Recommendation: [one specific calibration adjustment for next month]
Attribution check on top recommendation:
"[Keep/Stop/Continue recommendation]"
→ Alternative explanation considered: [what else could explain this]
→ Confidence in recommendation: [High if alternatives ruled out / Medium if not]
If Sentinel is not installed: standard performance analysis with Start / Stop / Continue recommendations and 30-day strategic directives. No change.