Gathers competitive intelligence via web scraping, LinkedIn profiles, GitHub activity, Glassdoor sentiment, social media, and community insights. Profiles companies, founders, and strategies for analysis, comparisons, and threat assessment.
Systematic framework for gathering and analyzing competitive intelligence using Anysite MCP v2 tools.
All data fetching uses the unified v2 meta-tools:
- execute(source, category, endpoint, params): Fetch data. Returns the first 10 items plus a cache_key. If next_offset is present, use get_page() to load more.
- get_page(cache_key, offset, limit): Paginate through cached results from a previous execute(). Data is cached for 7 days.
- query_cache(cache_key, conditions, sort_by, aggregate, group_by): Filter, sort, count, or aggregate already-fetched data without new API calls.
- export_data(cache_key, format): Export the full dataset as CSV, JSON, or JSONL. Returns a download URL.
- discover(source, category): Inspect available endpoints and params before calling execute().

Error handling: If execute() returns an error with llm_hint, follow the hint to fix the call. Common issues: wrong URN format, alias not found (search first), and fsd_company vs company: prefix mismatch.
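The execute()/get_page() pagination contract can be sketched in Python. The stubs below only simulate the cached-page behavior (the real calls go through the MCP server; fake_execute and fake_get_page are stand-ins, not actual tool APIs):

```python
# Stand-in stubs simulating the meta-tools' pagination contract:
# execute() returns the first page plus a cache_key and next_offset;
# get_page() returns later slices of the same cached result set.
_CACHE = {"demo": [{"id": i} for i in range(25)]}

def fake_execute(source, category, endpoint, params, page_size=10):
    items = _CACHE["demo"]
    return {
        "items": items[:page_size],
        "cache_key": "demo",
        "next_offset": page_size if len(items) > page_size else None,
    }

def fake_get_page(cache_key, offset, limit):
    items = _CACHE[cache_key]
    end = offset + limit
    return {
        "items": items[offset:end],
        "next_offset": end if end < len(items) else None,
    }

def fetch_all(source, category, endpoint, params):
    """Drain every page into one list, mirroring the documented loop."""
    resp = fake_execute(source, category, endpoint, params)
    results = list(resp["items"])
    offset = resp["next_offset"]
    while offset is not None:
        page = fake_get_page(resp["cache_key"], offset, limit=10)
        results.extend(page["items"])
        offset = page["next_offset"]
    return results

all_items = fetch_all("twitter", "user_tweets", "get", {"username": "x"})
print(len(all_items))  # 25
```

The key point is that only the first call hits the API; every later page is served from the 7-day cache via the cache_key.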
Trigger this skill when users ask to:
For single competitor analysis:
# 1. Generate analysis template
python scripts/analyze_competitor.py "Competitor Name" "https://competitor.com"
# 2. Use Anysite v2 tools to gather data (see workflow below)
# 3. Fill in the JSON template with findings
# 4. Generate final report
python scripts/analyze_competitor.py "Competitor Name" "https://competitor.com" | \
python -c "import sys, json; from scripts.analyze_competitor import format_markdown_report; print(format_markdown_report(json.load(sys.stdin)))" \
> /mnt/user-data/outputs/competitor_report.md
Step 1: Initialize Analysis Structure
Run the analysis script to create structured template:
python scripts/analyze_competitor.py "Competitor Name" "https://competitor.com" > /tmp/analysis.json
Step 2: Web Presence Reconnaissance
Scrape key pages to understand positioning:
# Homepage - core messaging
execute("webparser", "parse", "parse", {
"url": "https://competitor.com",
"only_main_content": true,
"strip_all_tags": true
})
# Pricing - cost structure
execute("webparser", "parse", "parse", {
"url": "https://competitor.com/pricing",
"only_main_content": true
})
# About - company background
execute("webparser", "parse", "parse", {
"url": "https://competitor.com/about",
"only_main_content": true,
"extract_contacts": true
})
Extract from homepage:
Extract from pricing:
Extract from about:
Step 3: Find Company Profile
# Search for company
execute("linkedin", "search", "search_companies", {
"keywords": "competitor name",
"count": 5
})
# Get detailed profile using slug from search results
execute("linkedin", "company", "company", {
"company": "company-slug-from-search"
})
Extract:
- follower_count → Online presence indicator
- employee_count → Company size
- description → Self-positioning
- headquarters → Location
- specialties → Keywords they emphasize

Step 4: Analyze Team & Growth
# Check employee growth signals (use search_users with current_company filter)
execute("linkedin", "search", "search_users", {
"current_company": "company-slug",
"keywords": "engineer developer",
"count": 50
})
# Find leadership
execute("linkedin", "search", "search_users", {
"current_company": "company-slug",
"title": "CEO founder",
"count": 10
})
# Get employee stats breakdown (functions, seniority, growth trends)
# First get company URN from company profile, convert fsd_company to company: prefix
execute("linkedin", "company", "company_employee_stats", {
"urn": {"type": "company", "value": "COMPANY_ID_FROM_URN"}
})
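The fsd_company-to-company conversion noted above can be wrapped in a small helper. This is a sketch assuming URNs follow the usual urn:li:fsd_company:<id> string form; the helper name is illustrative, not part of the skill:

```python
def company_urn_param(fsd_urn: str) -> dict:
    """Convert an fsd_company URN string into the {"type": "company",
    "value": ...} shape that execute() expects for company endpoints.
    e.g. "urn:li:fsd_company:12345" -> {"type": "company", "value": "12345"}
    """
    company_id = fsd_urn.rsplit(":", 1)[-1]
    return {"type": "company", "value": company_id}

print(company_urn_param("urn:li:fsd_company:12345"))
```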
Use findings to assess:
Tip: Use query_cache() on the employee search results to filter by title or sort by relevance without re-fetching:
query_cache(cache_key, conditions=[{"field": "headline", "operator": "contains", "value": "engineer"}])
Step 5: Content Strategy
# Analyze posting activity
# Use company: prefix URN from the company profile (convert fsd_company:{id} to company:{id})
execute("linkedin", "company", "company_posts", {
"urn": {"type": "company", "value": "COMPANY_ID_FROM_URN"},
"count": 20
})
Analyze posts for:
Tip: Use query_cache() to aggregate engagement metrics across fetched posts:
query_cache(cache_key, aggregate=[{"field": "comment_count", "function": "avg"}])
Step 6: Twitter Deep Dive
A. Company Account Analysis
# Get profile stats
execute("twitter", "user", "get", {
"username": "competitor_handle"
})
# Recent activity (analyze more posts)
execute("twitter", "user_tweets", "get", {
"username": "competitor_handle",
"count": 100
})
# If next_offset returned, use get_page() to load more:
# get_page(cache_key, offset=next_offset, limit=50)
Extract from company account:
Tip: Use query_cache() to find top-performing tweets:
query_cache(cache_key, sort_by=[{"field": "favorite_count", "order": "desc"}])
B. Founder/Executive Twitter Presence
# Find and analyze founder accounts
execute("twitter", "user", "get", {
"username": "founder_handle"
})
execute("twitter", "user_tweets", "get", {
"username": "founder_handle",
"count": 100
})
Leadership Twitter signals:
C. Brand Mentions & Sentiment
# Comprehensive mention search
execute("twitter", "search", "search_posts", {
"query": "competitor_name OR @handle OR #competitor_hashtag",
"count": 200
})
# Problem/complaint mentions
execute("twitter", "search", "search_posts", {
"query": "competitor_name (problem OR issue OR bug OR slow OR expensive)",
"count": 100
})
# Positive sentiment
execute("twitter", "search", "search_posts", {
"query": "competitor_name (love OR great OR amazing OR best OR solved)",
"count": 100
})
# Competitive mentions
execute("twitter", "search", "search_posts", {
"query": "\"competitor_name vs\" OR \"competitor_name alternative\" OR \"switching from competitor_name\"",
"count": 100
})
Sentiment scoring:
For each mention batch, calculate:
- Positive mentions: praise, recommendations, success stories
- Negative mentions: complaints, frustrations, churn signals
- Neutral mentions: questions, feature discussions
- Competitive mentions: comparisons with alternatives
Sentiment Score = (Positive - Negative) / Total
Range: -1.0 (very negative) to +1.0 (very positive)
Tip: Use query_cache() to filter cached mentions by sentiment keywords without re-fetching:
query_cache(cache_key, conditions=[{"field": "text", "operator": "contains", "value": "love"}])
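Once mentions are bucketed into the four categories, the score formula above is a one-liner; a minimal sketch (the example counts are made up):

```python
def sentiment_score(positive: int, negative: int, neutral: int, competitive: int) -> float:
    """Sentiment Score = (Positive - Negative) / Total, ranging -1.0 to +1.0."""
    total = positive + negative + neutral + competitive
    if total == 0:
        return 0.0  # no mentions: treat as neutral rather than divide by zero
    return (positive - negative) / total

# e.g. 40 positive, 10 negative, 45 neutral, 5 competitive mentions
print(sentiment_score(40, 10, 45, 5))  # 0.3
```

Note that neutral and competitive mentions still count toward the denominator, so a brand with heavy but mixed chatter scores closer to zero than one with a few glowing reviews.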
D. Customer Voice Analysis
# Find actual users
execute("twitter", "search", "search_posts", {
"query": "using competitor_name OR tried competitor_name",
"count": 100
})
# Power users
execute("twitter", "search", "search_posts", {
"query": "@handle thanks OR @handle helped OR @handle support",
"count": 50
})
Extract:
Step 7: Reddit Deep Community Intelligence
A. Brand Presence Mapping
# General mentions across Reddit
execute("reddit", "search", "search_posts", {
"query": "competitor_name",
"count": 100
})
# Industry-specific searches (combine with subreddit keywords)
relevant_topics = [
"SaaS", "startups", "Entrepreneur", # Business
"webdev", "programming", "devops", # Tech
"nocode", "automation", # No-code
"datascience", "analytics" # Data
]
for topic in relevant_topics:
execute("reddit", "search", "search_posts", {
"query": f"competitor_name {topic}",
"count": 50
})
B. Competitive Discussions
# Direct comparisons
execute("reddit", "search", "search_posts", {
"query": "competitor_name vs",
"count": 100
})
# Alternative searches
execute("reddit", "search", "search_posts", {
"query": "alternative to competitor_name",
"count": 100
})
execute("reddit", "search", "search_posts", {
"query": "better than competitor_name",
"count": 50
})
# Problem space
execute("reddit", "search", "search_posts", {
"query": "[problem they solve] tools OR solutions",
"count": 100
})
C. Deep Thread Analysis
For high-engagement threads, get comments:
# Get specific post details
execute("reddit", "posts", "posts", {
"post_url": "reddit.com/r/subreddit/comments/..."
})
# Get all comments
execute("reddit", "posts", "posts_comments", {
"post_url": "reddit.com/r/subreddit/comments/..."
})
Analyze thread comments for:
Tip: Use query_cache() on fetched comments to sort by score and find the most upvoted opinions:
query_cache(cache_key, sort_by=[{"field": "score", "order": "desc"}])
D. Sentiment & Voice Analysis
Positive signals:
Negative signals:
Neutral/informational:
E. Community Size & Engagement
Calculate metrics:
Brand Awareness Score:
- Total unique mentions (last 30 days)
- Number of different subreddits mentioned in
- Average upvotes per mention
- Comment volume per mention
Community Health:
- Positive/Negative mention ratio
- Response rate to questions
- Problem resolution in comments
- Community helping each other
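The awareness metrics above can be computed directly from the fetched mention records. A sketch, assuming each mention carries subreddit, upvotes, and num_comments fields (the field names are illustrative; match them to whatever the Reddit endpoint actually returns):

```python
from collections import Counter

def brand_awareness(mentions):
    """Summarize brand-awareness metrics from a list of mention dicts."""
    if not mentions:
        return {"total": 0, "subreddits": 0, "avg_upvotes": 0.0, "avg_comments": 0.0}
    subs = Counter(m["subreddit"] for m in mentions)
    return {
        "total": len(mentions),                 # total unique mentions
        "subreddits": len(subs),                # breadth across communities
        "avg_upvotes": sum(m["upvotes"] for m in mentions) / len(mentions),
        "avg_comments": sum(m["num_comments"] for m in mentions) / len(mentions),
    }

sample = [
    {"subreddit": "SaaS", "upvotes": 12, "num_comments": 4},
    {"subreddit": "SaaS", "upvotes": 8, "num_comments": 2},
    {"subreddit": "webdev", "upvotes": 20, "num_comments": 10},
]
print(brand_awareness(sample))
```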
Step 7.5: Cross-Platform Insight Synthesis
Compare Twitter vs Reddit:
Twitter typically shows:
Reddit typically reveals:
Look for disconnects:
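One way to operationalize the disconnect check is to compare per-platform sentiment scores and flag large gaps. A sketch; the 0.4 threshold is an arbitrary illustration, not a calibrated value:

```python
def find_disconnects(platform_scores: dict, threshold: float = 0.4):
    """Flag platform pairs whose sentiment scores (each in [-1, 1]) diverge
    by more than `threshold` -- e.g. polished Twitter vs candid Reddit."""
    platforms = sorted(platform_scores)
    flags = []
    for i, a in enumerate(platforms):
        for b in platforms[i + 1:]:
            gap = abs(platform_scores[a] - platform_scores[b])
            if gap > threshold:
                flags.append((a, b, round(gap, 2)))
    return flags

print(find_disconnects({"twitter": 0.6, "reddit": -0.1}))  # [('reddit', 'twitter', 0.7)]
```

A large gap usually means the company's own channel is setting the tone on one platform while unfiltered users dominate the other.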
Step 8: Identify Key Leaders
# Find founders and C-level
execute("linkedin", "search", "search_users", {
"company_keywords": "competitor-name",
"title": "founder OR CEO OR CTO OR CPO",
"count": 10
})
# Get detailed profiles
execute("linkedin", "user", "user", {
"user": "founder-linkedin-username",
"with_experience": true,
"with_education": true,
"with_skills": true
})
Extract for each leader:
Step 9: Analyze Leadership Activity
# Get personal posts (use fsd_profile URN from user profile)
execute("linkedin", "user", "user_posts", {
"urn": {"type": "fsd_profile", "value": "USER_URN_VALUE"},
"count": 50
})
# Check comments on others' posts
execute("linkedin", "user", "user_comments", {
"urn": {"type": "fsd_profile", "value": "USER_URN_VALUE"},
"count": 30
})
# See what they're engaging with
execute("linkedin", "user", "user_reactions", {
"urn": {"type": "fsd_profile", "value": "USER_URN_VALUE"},
"count": 50
})
Analyze for:
Tip: Use query_cache() on leadership posts to aggregate engagement:
query_cache(cache_key, aggregate=[
{"field": "comment_count", "function": "avg"},
{"field": "comment_count", "function": "sum"}
])
Step 10: Twitter Leadership Presence
# Founder Twitter activity
execute("twitter", "user", "get", {
"username": "founder_handle"
})
execute("twitter", "user_tweets", "get", {
"username": "founder_handle",
"count": 100
})
Leadership indicators:
Step 11: Documentation Quality
# Scrape docs homepage
execute("webparser", "parse", "parse", {
"url": "https://competitor.com/docs",
"only_main_content": true
})
# Check API reference
execute("webparser", "parse", "parse", {
"url": "https://competitor.com/api",
"only_main_content": true
})
Assess:
Step 12: GitHub Presence (if applicable)
# Parse GitHub profile page
execute("webparser", "parse", "parse", {
"url": "https://github.com/competitor-org",
"only_main_content": true
})
# Check main repository
execute("webparser", "parse", "parse", {
"url": "https://github.com/competitor-org/main-repo",
"only_main_content": true
})
Extract:
Step 13: Alternative Data Sources
# Glassdoor reviews (if company page exists)
execute("webparser", "parse", "parse", {
"url": "https://www.glassdoor.com/Reviews/competitor-name",
"only_main_content": true
})
What to extract:
Step 14: Integration Ecosystem
# Get sitemap to find all pages
execute("webparser", "sitemap", "sitemap", {
"url": "https://competitor.com",
"count": 50
})
# Parse integrations page
execute("webparser", "parse", "parse", {
"url": "https://competitor.com/integrations",
"only_main_content": true
})
Extract:
Step 15: Competitive Analysis
Compare findings against your own product:
Strengths (what they do well):
Weaknesses (where they struggle):
Opportunities (what you can exploit):
Threats (what you need to watch):
Step 16: Strategic Insights
Synthesize everything into:
Key Takeaways (3-5 bullets):
Competitive Threats (2-3 bullets):
Opportunities to Exploit (3-5 bullets):
Watch Areas:
Step 17: Generate Final Report
Update the JSON template with all findings, then generate markdown:
import json
from scripts.analyze_competitor import save_analysis
# Load populated template
with open('/tmp/analysis.json', 'r') as f:
data = json.load(f)
# Generate reports
json_path, md_path = save_analysis(data)
print(f"Reports saved:\n JSON: {json_path}\n Markdown: {md_path}")
Move final files to outputs:
cp /tmp/analysis.json /mnt/user-data/outputs/
cp /tmp/analysis.md /mnt/user-data/outputs/
Tip: Use export_data() to export any collected dataset for the final report:
export_data(cache_key, "csv") # Returns download URL for spreadsheet import
export_data(cache_key, "json") # Returns download URL for programmatic use
For analyzing 3-5 competitors simultaneously:
Tip: Export each competitor's data and use query_cache() to compare metrics:
# After fetching data for each competitor, aggregate and compare
query_cache(cache_key, aggregate=[{"field": "follower_count", "function": "sum"}])
export_data(cache_key, "csv")
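For multi-competitor work, the per-competitor metrics can be merged into one comparison table before export. A sketch; the metric names and sample figures are illustrative placeholders:

```python
import csv
import io

def comparison_csv(competitors: dict) -> str:
    """Write one CSV row per competitor from a dict of metric dicts."""
    fields = ["name", "followers", "employees", "sentiment"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for name, metrics in sorted(competitors.items()):
        writer.writerow({"name": name, **metrics})
    return buf.getvalue()

data = {
    "AcmeCo": {"followers": 12000, "employees": 85, "sentiment": 0.3},
    "BetaInc": {"followers": 4500, "employees": 30, "sentiment": -0.1},
}
print(comparison_csv(data))
```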
For quarterly updates (not full re-analysis):
Quick check (30 min):
# 1. Re-scrape pricing
execute("webparser", "parse", "parse", {"url": "https://competitor.com/pricing"})
# 2. Check recent posts (use company: prefix URN)
execute("linkedin", "company", "company_posts", {
"urn": {"type": "company", "value": "COMPANY_ID"},
"count": 10
})
# 3. Employee growth
execute("linkedin", "search", "search_users", {
"current_company": "company-slug",
"count": 20
})
# 4. Recent mentions
execute("twitter", "search", "search_posts", {
"query": "competitor",
"count": 50
})
Update only changed sections in JSON template.
For sales team quick reference:
Focus on:
Keep to 1-2 pages maximum.
When you need detailed guidance:
Data collection methodology: See data_collection.md
Analysis frameworks: See analysis_frameworks.md
For quick competitive scan:
For comprehensive analysis:
For pricing-specific analysis:
For founder/team intelligence:
Use when:
Be Systematic:
Think Strategically:
Verify Claims:
Focus on Actionable Insights:
Document Data Freshness:
Use v2 Efficiency Features:
- get_page() instead of re-executing with higher counts
- query_cache() to filter/sort/aggregate without new API calls
- export_data() to generate shareable CSV/JSON files
- llm_hint in error responses for fix guidance

Good competitive analysis includes:
Avoid:
"Can't find LinkedIn company":
"Pricing page missing/unclear":
"No social media presence":
"Too much data, overwhelmed":
"execute() returned error with llm_hint":
"Need more than 10 results from execute()":
Check the response for next_offset, then call get_page(cache_key, offset=next_offset, limit=50) to load more.