Monitors brand reputation, sentiment, and mentions across Twitter/X, Reddit, Instagram, YouTube, LinkedIn via anysite MCP tools. Tracks conversations, detects crises, measures health. Use for social listening tasks.
npx claudepluginhub anysiteio/agent-skills --plugin anysite-cli
This skill uses the workspace's default tool permissions.
Monitor and protect your brand reputation across social media platforms. Track mentions, analyze sentiment, and identify issues before they escalate.
Coverage: 65%. Pivoted from review platforms to social media monitoring; strong for Twitter, Reddit, Instagram, YouTube, and LinkedIn.
All data fetching uses the anysite MCP v2 universal meta-tools:
- execute(source, category, endpoint, params) — fetch data from any source. Returns the first page plus a cache_key.
- get_page(cache_key, offset, limit) — paginate through results when next_offset is returned.
- query_cache(cache_key, conditions, sort_by, aggregate, group_by) — filter, sort, or aggregate cached data without new API calls.
- export_data(cache_key, format) — export the full dataset as CSV, JSON, or JSONL for reports.
Always call discover(source, category) first if unsure about endpoint names or params.
v2 responses may include llm_hint fields with guidance on how to fix errors (e.g., wrong URN format, missing params). Always check llm_hint in error responses before retrying.
Step 1: Set Up Monitoring
Define:
- Brand keywords, handles, and hashtags to track (e.g., "brand name", @brandhandle, #brandname)
- Platforms to monitor (Twitter/X, Reddit, Instagram, YouTube, LinkedIn)
- Alert thresholds for mention volume and negative sentiment
Step 2: Search for Mentions
Platform searches:
Twitter: execute("twitter", "search", "search_posts", {"query": "brand name", "count": 100})
Reddit: execute("reddit", "search", "search_posts", {"query": "brand name", "count": 100})
Instagram: execute("instagram", "search", "search_posts", {"query": "#brandname", "count": 100})
LinkedIn: execute("linkedin", "post", "search_posts", {"keywords": "brand name", "count": 50})
Each call returns a cache_key — use it for pagination, filtering, and export.
Step 3: Analyze Sentiment
For each mention:
Use query_cache(cache_key, conditions=[...], sort_by=...) to filter high-engagement or negative mentions without re-fetching.
Step 4: Take Action
Based on findings:
Scenario: Monitor brand mentions across all platforms daily
Steps:
# Twitter (real-time pulse)
execute("twitter", "search", "search_posts", {"query": "brand name OR @brandhandle", "count": 100})
→ Returns cache_key_twitter; filter last 24h with from_date param (timestamp)
# Reddit (detailed discussions)
execute("reddit", "search", "search_posts", {"query": "brand name", "count": 50, "time_filter": "day"})
→ Returns cache_key_reddit
# Instagram (visual mentions)
execute("instagram", "search", "search_posts", {"query": "#brandname OR brand name", "count": 50})
→ Returns cache_key_instagram
# LinkedIn (professional mentions)
execute("linkedin", "post", "search_posts", {"keywords": "brand name", "count": 20})
→ Returns cache_key_linkedin
# YouTube (video coverage)
execute("youtube", "search", "search_videos", {"query": "brand name review OR brand name unboxing", "count": 20})
→ Returns cache_key_youtube
If any result includes next_offset, fetch more with:
get_page(cache_key, offset=next_offset, limit=50)
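The pagination pattern above can be sketched as a loop that follows next_offset until it is absent. Here a stubbed get_page simulates the real MCP tool's paged responses (the 120-item cache is an illustrative assumption) so the loop logic is clear.

```python
# Stub standing in for the anysite MCP get_page tool: pretend the cache
# holds 120 items and serve them in pages of `limit`.
def get_page(cache_key, offset, limit):
    total = 120
    items = list(range(offset, min(offset + limit, total)))
    next_offset = offset + limit if offset + limit < total else None
    return {"items": items, "next_offset": next_offset}

def fetch_all(cache_key, first_page):
    """Collect every page by following next_offset until it is absent."""
    results = list(first_page["items"])
    next_offset = first_page.get("next_offset")
    while next_offset is not None:
        page = get_page(cache_key, offset=next_offset, limit=50)
        results.extend(page["items"])
        next_offset = page.get("next_offset")
    return results

first = get_page("cache_key_twitter", offset=0, limit=50)
all_items = fetch_all("cache_key_twitter", first)
print(len(all_items))  # 120
```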
Use query_cache to sort and filter cached results:
# Find high-engagement mentions across platforms
query_cache(cache_key_twitter, sort_by=[{"field": "favorite_count", "order": "desc"}])
query_cache(cache_key_reddit, sort_by=[{"field": "vote_count", "order": "desc"}])
For each mention:
Sentiment:
- Positive: Praise, recommendation, satisfaction
- Negative: Complaint, criticism, problem
- Neutral: Question, general mention, factual
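A minimal sketch of the three-bucket classification above as a keyword tagger. The keyword lists are assumptions for illustration; a real pass would use richer lexicons or a model.

```python
# Illustrative keyword sets; extend with a real sentiment lexicon.
POSITIVE = {"love", "great", "recommend", "amazing", "excellent", "happy"}
NEGATIVE = {"broken", "terrible", "complaint", "refund", "worst", "scam"}

def classify_sentiment(text):
    """Tag a mention as positive, negative, or neutral by keyword overlap."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_sentiment("I love this brand, great support"))    # positive
print(classify_sentiment("Worst purchase ever, want a refund"))  # negative
print(classify_sentiment("Does anyone know their pricing?"))     # neutral
```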
Category:
- Product feedback
- Customer service issue
- Feature request
- General discussion
- Competitor comparison
High Priority:
- Negative + High reach (viral potential)
- Multiple complaints about same issue
- Influencer negative mention
- Legal/safety concerns
Medium Priority:
- Individual complaints
- Feature requests
- Questions
- General feedback
Low Priority:
- Positive mentions
- Neutral discussions
- General brand awareness
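The priority rules above can be sketched as a triage function. The reach threshold and repeat-complaint count are illustrative assumptions, as are the field names on the mention dict.

```python
def priority(mention):
    """Map a classified mention to high/medium/low per the rules above."""
    if mention["sentiment"] == "negative":
        if (mention.get("reach", 0) > 10_000           # viral potential
                or mention.get("from_influencer")      # influencer mention
                or mention.get("legal_or_safety")      # legal/safety concern
                or mention.get("same_issue_count", 0) >= 3):  # repeat issue
            return "high"
        return "medium"                                # individual complaint
    if mention.get("category") in {"feature_request", "question"}:
        return "medium"
    return "low"                                       # positive / neutral

print(priority({"sentiment": "negative", "reach": 50_000}))  # high
print(priority({"sentiment": "negative", "reach": 200}))     # medium
print(priority({"sentiment": "positive"}))                   # low
```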
Export data for reporting:
export_data(cache_key_twitter, "csv")
export_data(cache_key_reddit, "csv")
→ Returns download URLs for each dataset
Summary:
- Total mentions (by platform)
- Sentiment breakdown (% positive/negative/neutral)
- Top issues identified
- Viral/trending mentions
- Recommended actions
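The summary fields above can be rolled up from classified mentions with a short aggregation pass. The mention dicts and their field names are assumptions about the shape of the data after classification.

```python
from collections import Counter

def summarize(mentions):
    """Compute total mentions, per-platform counts, and sentiment percentages."""
    by_platform = Counter(m["platform"] for m in mentions)
    sentiment = Counter(m["sentiment"] for m in mentions)
    total = len(mentions)
    pct = {k: round(100 * v / total, 1) for k, v in sentiment.items()}
    return {"total": total,
            "by_platform": dict(by_platform),
            "sentiment_pct": pct}

mentions = [
    {"platform": "twitter", "sentiment": "positive"},
    {"platform": "twitter", "sentiment": "negative"},
    {"platform": "reddit",  "sentiment": "neutral"},
    {"platform": "reddit",  "sentiment": "positive"},
]
print(summarize(mentions))
```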
Expected Output:
Scenario: Identify and track potential PR crises
Steps:
Track baseline:
- Average mentions per day
- Average sentiment score
- Typical engagement levels
Alert triggers:
- Mentions >2x baseline
- Negative sentiment >50%
- Viral negative content (high engagement)
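The alert triggers above reduce to three comparisons. Baseline values would come from the tracked history; the viral-engagement threshold here is an assumption.

```python
def should_alert(today_mentions, negative_share, top_engagement,
                 baseline_mentions, viral_threshold=5_000):
    """Fire an alert if any of the three trigger conditions is met."""
    if today_mentions > 2 * baseline_mentions:   # mentions >2x baseline
        return True
    if negative_share > 0.5:                     # negative sentiment >50%
        return True
    if top_engagement > viral_threshold:         # viral negative content
        return True
    return False

print(should_alert(250, 0.2, 800, baseline_mentions=100))  # True (volume)
print(should_alert(90, 0.6, 800, baseline_mentions=100))   # True (sentiment)
print(should_alert(90, 0.2, 800, baseline_mentions=100))   # False
```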
When alert triggered:
execute("twitter", "search", "search_posts", {"query": "brand name", "count": 500})
→ Identify what's driving spike; use get_page() to load all results
execute("reddit", "search", "search_posts", {"query": "brand name", "count": 200})
→ Check community discussions
For viral posts:
# Get specific Reddit post details and comments
execute("reddit", "posts", "posts", {"post_url": "<reddit_post_url>"})
execute("reddit", "posts", "posts_comments", {"post_url": "<reddit_post_url>"})
→ Analyze reach and engagement, read comments for context
# For viral tweets, scrape the tweet URL directly
execute("webparser", "parse", "parse", {"url": "<tweet_url>"})
→ Get tweet details and engagement metrics
Severity factors:
- Volume (how many mentions)
- Velocity (how fast growing)
- Reach (influencer involvement, media coverage)
- Sentiment (how negative)
- Validity (legitimate issue vs. misunderstanding)
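One way to combine the five severity factors is a weighted score on a 0-100 scale. The equal weights and the 0.0-1.0 factor scales are assumptions; tune them against real incidents.

```python
# Illustrative equal weighting of the five severity factors.
WEIGHTS = {"volume": 0.2, "velocity": 0.2, "reach": 0.2,
           "sentiment": 0.2, "validity": 0.2}

def severity(factors):
    """factors: dict scoring each of the five factors from 0.0 to 1.0."""
    return round(100 * sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 1)

print(severity({"volume": 0.8, "velocity": 0.9, "reach": 0.5,
                "sentiment": 0.7, "validity": 1.0}))  # 78.0
```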
Use query_cache to analyze cached data:
query_cache(cache_key, aggregate=[{"function": "count"}])
→ Total mention count without re-fetching
Hourly monitoring:
- Mention volume trend
- Sentiment shifts
- Platform spread
- Media pickup
- Official response impact
Track until:
- Volume returns to baseline
- Sentiment improves
- No new negative mentions for 24-48h
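The exit conditions above can be checked mechanically. The 10% tolerance around baseline and the 48-hour quiet window are assumptions standing in for your own thresholds.

```python
from datetime import datetime, timedelta

def crisis_resolved(current_volume, baseline_volume, sentiment_score,
                    baseline_sentiment, last_negative_at, now,
                    quiet_hours=48):
    """All three track-until conditions must hold before standing down."""
    volume_ok = current_volume <= baseline_volume * 1.1   # back near baseline
    sentiment_ok = sentiment_score >= baseline_sentiment  # sentiment improved
    quiet_ok = now - last_negative_at >= timedelta(hours=quiet_hours)
    return volume_ok and sentiment_ok and quiet_ok

print(crisis_resolved(105, 100, 25, 20,
                      last_negative_at=datetime(2024, 6, 8, 11, 0),
                      now=datetime(2024, 6, 10, 12, 0)))  # True
```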
Expected Output:
Scenario: Compare brand sentiment vs. competitors
Steps:
List 3-5 main competitors
For brand + each competitor:
execute("twitter", "search", "search_posts", {"query": "<brand>", "count": 200})
execute("reddit", "search", "search_posts", {"query": "<brand>", "count": 100})
execute("linkedin", "post", "search_posts", {"keywords": "<brand>", "count": 50})
Each returns a cache_key — use get_page() if next_offset indicates more data.
For each brand:
Use query_cache to aggregate metrics from cached results:
query_cache(cache_key, aggregate=[{"function": "count"}])
→ Total mention volume
query_cache(cache_key, aggregate=[{"function": "avg", "field": "favorite_count"}])
→ Average engagement per mention
Mention Volume: Total mentions
Sentiment Score: (Positive - Negative) / Total
Engagement Rate: Avg engagement per mention
Share of Voice: Your mentions / Total category mentions
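The benchmark formulas above translate directly to code; the inputs would come from the query_cache aggregates shown earlier.

```python
def sentiment_score(pos, neg, total):
    """(Positive - Negative) / Total, scaled to -100..+100."""
    return round(100 * (pos - neg) / total, 1)

def share_of_voice(brand_mentions, category_mentions):
    """Brand mentions as a percentage of all category mentions."""
    return round(100 * brand_mentions / category_mentions, 1)

def engagement_rate(total_engagement, mentions):
    """Average engagement per mention."""
    return round(total_engagement / mentions, 2)

print(sentiment_score(120, 30, 200))  # 45.0
print(share_of_voice(200, 800))       # 25.0
print(engagement_rate(5400, 200))     # 27.0
```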
Compare:
- What are competitors praised for?
- What are competitors criticized for?
- Where do we excel?
- Where do we fall short?
Look for:
- Unmet customer needs (complaints about competitors)
- Messaging gaps (what they're not saying)
- Product differentiation opportunities
- Customer service advantages
Expected Output:
execute("twitter", "search", "search_posts", {"query": ..., "count": N}) — Find brand mentions. Supports from_date, to_date, min_likes, min_retweets, language filters.
execute("twitter", "user", "user", {"user": ...}) — Check brand profile stats
execute("twitter", "user", "user_posts", {"user": ..., "count": N}) — Monitor brand account posts
execute("reddit", "search", "search_posts", {"query": ..., "count": N}) — Find discussions. Supports sort (relevance/hot/top/new) and time_filter (day/week/month/year).
execute("reddit", "posts", "posts", {"post_url": ...}) — Get post details and sentiment
execute("reddit", "posts", "posts_comments", {"post_url": ...}) — Deep dive on discussions
execute("instagram", "search", "search_posts", {"query": ..., "count": N}) — Find visual mentions
execute("instagram", "post", "post", {"post": ...}) — Analyze mention engagement
execute("instagram", "post", "post_comments", {"post": ..., "count": N}) — Read feedback
execute("youtube", "search", "search_videos", {"query": ..., "count": N}) — Find video mentions
execute("youtube", "video", "video", {"video": ...}) — Get video details
execute("youtube", "video", "video_comments", {"video": ..., "count": N}) — Analyze sentiment
execute("linkedin", "post", "search_posts", {"keywords": ..., "count": N}) — Professional mentions
execute("linkedin", "company", "company_posts", {"urn": {"type": "company", "value": "<id>"}, "count": N}) — Monitor own company posts. Requires company URN from execute("linkedin", "company", "company", {"company": "<alias>"}).
get_page(cache_key, offset, limit) — Load next page of results from any execute() call
query_cache(cache_key, conditions, sort_by, aggregate, group_by) — Filter/sort/aggregate cached data without new API calls
export_data(cache_key, "csv"|"json"|"jsonl") — Export full dataset as downloadable file
Manual Sentiment Classification:
Positive Indicators: praise, recommendations, expressions of satisfaction
Negative Indicators: complaints, criticism, reported problems
Neutral Indicators: questions, factual or general mentions
Sentiment Score:
Score = (Positive mentions - Negative mentions) / Total mentions x 100
+50 to +100: Excellent
+20 to +49: Good
-19 to +19: Neutral/Mixed
-20 to -49: Poor
-50 to -100: Critical
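The rating bands above can be sketched as a simple lookup from score to label.

```python
def rating(score):
    """Map a -100..+100 sentiment score to the rating bands above."""
    if score >= 50:
        return "Excellent"
    if score >= 20:
        return "Good"
    if score >= -19:
        return "Neutral/Mixed"
    if score >= -49:
        return "Poor"
    return "Critical"

print(rating(45))   # Good
print(rating(-60))  # Critical
```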
Volume Metrics: total mentions, mentions by platform, mentions per day vs. baseline
Sentiment Metrics: sentiment score, % positive/negative/neutral, sentiment shifts over time
Engagement Metrics: average engagement per mention, viral/trending mentions
Issue Tracking: top issues identified, repeat complaints about the same issue
Chat Summary:
CSV Export (via export_data(cache_key, "csv")):
JSON Export (via export_data(cache_key, "json")):
Ready to monitor your brand? Ask Claude to help you track mentions, analyze sentiment, or identify reputation risks across social platforms!