From anysite-skills
Conducts market research using Y Combinator data, SEC filings, social media insights from Reddit/Twitter/LinkedIn, and web scraping. Analyzes tech markets, startups, public companies, sentiments, and opportunities.
npx claudepluginhub anysiteio/agent-skills --plugin anysite-cli

This skill uses the workspace's default tool permissions.
Comprehensive market research using Y Combinator, SEC, social media, and web data through anysite MCP. Analyze tech markets, research startups, and study competitive landscapes.
Runs market research, competitive analysis, investor due diligence, and industry/tech scans, producing sourced findings, contrarian views, and decision recommendations. Use for market sizing, competitor comparisons, fund research, or tech scans.
Gathers competitive intelligence via anysite MCP server from LinkedIn, social media, Y Combinator, and web. Tracks competitors, hiring patterns, content strategies, and market positioning for strategic analysis.
Coverage: 70% - Excellent for tech/startup markets; pivoted from local business to tech focus
All data fetching uses the universal execute() meta-tool. Always call discover(source, category) first if you need to verify endpoint names or parameters.
Core workflow:
execute(source, category, endpoint, params) -- fetch data (returns first page + cache_key)
get_page(cache_key, offset, limit) -- paginate through remaining results
query_cache(cache_key, conditions, sort_by, aggregate, group_by) -- filter/sort/aggregate cached data without new API calls
export_data(cache_key, format) -- export to CSV, JSON, or JSONL for deliverables

Error handling: check the response for the llm_hint field -- it contains actionable guidance when calls fail or return partial data.
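The fetch-then-paginate half of this workflow can be sketched in Python. The `execute` and `get_page` functions below are local stubs standing in for the MCP tools, and the response shape (a `results` list, a `total` count, and a `cache_key`) is an assumption for illustration:

```python
def execute(source, category, endpoint, params):
    # Stub: the real MCP tool returns the first page of results plus a cache_key.
    return {"cache_key": "abc123", "results": [{"name": "Startup A"}], "total": 3}

def get_page(cache_key, offset, limit):
    # Stub: the real MCP tool pages through results cached by a prior execute().
    data = [{"name": "Startup A"}, {"name": "Startup B"}, {"name": "Startup C"}]
    return {"results": data[offset:offset + limit]}

def fetch_all(source, category, endpoint, params, page_size=50):
    """Fetch the first page, then paginate until all results are collected."""
    first = execute(source, category, endpoint, params)
    results = list(first["results"])
    while len(results) < first["total"]:
        page = get_page(first["cache_key"], offset=len(results), limit=page_size)
        if not page["results"]:
            break  # defensive: stop if the cache returns nothing more
        results.extend(page["results"])
    return first["cache_key"], results

cache_key, rows = fetch_all("yc", "search", "search", {"query": "fintech"})
```

Keeping the `cache_key` around matters: every later `query_cache()` and `export_data()` call reuses it instead of re-fetching.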
Step 1: Define Research Scope
Choose focus:
execute("yc", "search", "search", {"query": ...})
execute("sec", "search", "search", {"query": ...})
execute("reddit", "search", "search", {"query": ...})
execute("twitter", "search", "search_users", {"query": ...})
execute("linkedin", "search", "search_companies", {...})
Step 2: Gather Data
Execute searches:
# Startup research
execute("yc", "search", "search", {"query": "fintech", "batch": "W24,S23"})
# Public company research
execute("sec", "search", "search", {"query": "tech company"})
# Market sentiment
execute("reddit", "search", "search", {"query": "fintech trends"})
→ use get_page(cache_key, offset, limit) to collect up to 100 results
Step 3: Analyze Results
Use query_cache() to slice data without re-fetching:
# Count startups by category
query_cache(cache_key, aggregate={"field": "category", "function": "count"})
# Filter high-engagement posts
query_cache(cache_key, conditions=[{"field": "score", "operator": ">", "value": 50}], sort_by={"field": "score", "order": "desc"})
Extract insights:
Step 4: Synthesize Findings
Use export_data(cache_key, "csv") or export_data(cache_key, "json") to deliver:
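For a sense of what the CSV deliverable looks like, here is a local sketch that builds the same kind of table `export_data(cache_key, "csv")` would produce. The rows are made-up examples of what a YC search might cache:

```python
import csv
import io

# Hypothetical rows, shaped like cached results from a YC search.
rows = [
    {"name": "Startup A", "category": "payments", "team_size": 12},
    {"name": "Startup B", "category": "lending", "team_size": 5},
]

# Build a CSV table with an explicit header row, as a spreadsheet deliverable.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "category", "team_size"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

In practice you would hand the `export_data` download URL to the user rather than building the file yourself; this just shows the expected structure.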
Scenario: Analyze fintech startup landscape
Steps:
execute("yc", "search", "search", {
"query": "fintech",
"batch": "W24,S23,W23,S22"
})
→ use get_page(cache_key, offset, limit) to paginate through all results
For each startup:
execute("yc", "company", "get", {"slug": company_slug})
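The per-startup detail pass is just a loop over the slugs collected from the search step. `execute` is stubbed below, and both the example slugs and the profile fields are assumptions for illustration:

```python
def execute(source, category, endpoint, params):
    # Stub: the real yc/company/get call returns a full company profile.
    return {"slug": params["slug"], "category": "payments", "team_size": 8}

# Hypothetical slugs taken from the earlier yc search results.
slugs = ["stripe-like-co", "lend-fast"]

# One detail call per startup; profiles land in a list ready for grouping.
profiles = [execute("yc", "company", "get", {"slug": s}) for s in slugs]
```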
Group by:
- Payments
- Lending
- Investment/Trading
- Banking
- Insurance
- B2B fintech tools
Or use query_cache to group:
query_cache(cache_key, group_by="category")
Identify:
- Hot subcategories (most startups)
- Team size distribution
- Geographic concentration
- Common tech stacks (from job postings)
Use query_cache for aggregation:
query_cache(cache_key, aggregate={"field": "team_size", "function": "avg"})
For promising startups:
execute("linkedin", "search", "search_companies", {"keywords": startup_name})
→ Check employee growth
execute("twitter", "search", "search_users", {"query": startup_name})
→ Check social presence and buzz
execute("webparser", "parse", "parse", {"url": startup_website})
→ Check positioning and features
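These three checks fit naturally into one enrichment helper per startup. `execute` is again a stub, and the return shape is assumed; the point is fanning one company out across LinkedIn, Twitter, and its own website:

```python
def execute(source, category, endpoint, params):
    # Stub: each source returns a small summary; real calls go to the MCP server.
    return {"source": source, "ok": True}

def enrich(startup_name, website):
    """Pull LinkedIn, Twitter, and website signals for one promising startup."""
    return {
        "linkedin": execute("linkedin", "search", "search_companies",
                            {"keywords": startup_name}),
        "twitter": execute("twitter", "search", "search_users",
                           {"query": startup_name}),
        "site": execute("webparser", "parse", "parse", {"url": website}),
    }

signals = enrich("Lend Fast", "https://example.invalid")
```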
Compare:
- Overcrowded categories
- Underserved segments
- Emerging opportunities
- Geographic gaps
Expected Output:
Use export_data(cache_key, "csv") to deliver the startup list as a spreadsheet.
Scenario: Research public competitors in cloud infrastructure
Steps:
execute("sec", "search", "search", {
"query": "cloud"
})
→ use get_page(cache_key, offset, limit) to collect up to 50 results
For each company:
execute("sec", "document", "get", {"url": document_url})
Extract:
- Revenue and growth
- Operating margins
- R&D spending
- Geographic breakdown
- Risk factors mentioned
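A simple way to make filing text comparable across companies is keyword-mention counting. The filing snippet below is made up; real text comes back from sec/document/get:

```python
import re

# Hypothetical snippet standing in for text returned by sec/document/get.
filing_text = """
Revenue increased 18% year over year. Research and development expenses
grew as we expanded our cloud infrastructure. Risk factors include
competition in cloud services and reliance on third-party data centers.
"""

def count_mentions(text, terms):
    """Case-insensitive whole-word counts per term, for cross-filing comparison."""
    return {t: len(re.findall(rf"\b{re.escape(t)}\b", text, re.IGNORECASE))
            for t in terms}

mentions = count_mentions(filing_text, ["revenue", "cloud", "risk"])
```

The same counts run against last year's 10-K give a crude but sourced signal of shifting focus.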
From 10-K filings:
- Business model
- Target markets
- Competitive advantages
- Growth initiatives
- Challenges and risks
Compare year-over-year:
- Revenue growth trends
- Market focus shifts
- New initiatives
- Risk factor changes
execute("linkedin", "search", "search_companies", {"keywords": company_name})
→ Employee count, hiring patterns
execute("linkedin", "company", "get", {"company": company_urn})
→ Company details and strategic messaging
execute("reddit", "search", "search", {"query": company_name})
→ Customer sentiment
Use query_cache to filter sentiment:
query_cache(cache_key, conditions=[{"field": "text", "operator": "contains", "value": "review"}])
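Beyond filtering, a naive keyword tally gives a first-pass sentiment split over cached posts. The posts and word lists below are illustrative only; a real pass would lean on `query_cache` conditions plus manual reading of top posts:

```python
# Hypothetical posts, shaped like cached reddit search results.
posts = [
    {"text": "Great product, my review is very positive", "score": 120},
    {"text": "Terrible support, would not recommend", "score": 80},
    {"text": "Mixed feelings after a month of use", "score": 15},
]

POSITIVE = {"great", "positive", "recommend", "love"}
NEGATIVE = {"terrible", "bad", "hate", "not"}

def naive_sentiment(text):
    """Label a post by which keyword set it overlaps more."""
    words = set(text.lower().replace(",", " ").split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

tally = {}
for p in posts:
    label = naive_sentiment(p["text"])
    tally[label] = tally.get(label, 0) + 1
```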
Expected Output:
Use export_data(cache_key, "json") for structured competitive data.
Scenario: Understand AI/ML market evolution
Steps:
execute("yc", "search", "search", {
"query": "AI OR machine learning OR artificial intelligence"
})
→ use get_page(cache_key, offset, limit) to collect up to 200 results
Group by batch to see:
- Trend over time
- Focus area shifts
- Team size changes
query_cache(cache_key, group_by="batch", aggregate={"field": "id", "function": "count"})
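Once you have per-batch counts, the trend is simple arithmetic. The counts below are invented, and the batch-to-count mapping is an assumed shape for a `group_by="batch"` result:

```python
# Assumed shape of a query_cache(..., group_by="batch") result: batch -> count.
batch_counts = {"S22": 40, "W23": 55, "S23": 70, "W24": 95}

# YC batch order, oldest first, so growth reads chronologically.
order = ["S22", "W23", "S23", "W24"]
counts = [batch_counts[b] for b in order]

# Batch-over-batch growth in percent, rounded to one decimal.
growth = [round((b - a) / a * 100, 1) for a, b in zip(counts, counts[1:])]
```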
execute("sec", "search", "search", {
"query": "artificial intelligence"
})
→ use get_page(cache_key, offset, limit) to collect up to 50 results
Check 10-K mentions of:
- "AI" or "machine learning" frequency
- AI-related investments
- AI revenue segments
execute("reddit", "search", "search", {
"query": "AI trends 2026"
})
→ use get_page(cache_key, offset, limit) to collect up to 100 results
Analyze for:
- Excitement vs. concern
- Adoption barriers
- Use case validation
- Technology maturity
query_cache(cache_key, sort_by={"field": "score", "order": "desc"})
execute("linkedin", "post", "search_posts", {
"keywords": "artificial intelligence"
})
Check:
- Industry adoption
- Job market signals
- Skill requirements
- Thought leader opinions
For key AI companies:
execute("webparser", "parse", "parse", {"url": website + "/blog"})
→ Technology updates, product launches
Expected Output:
Use export_data(cache_key, "csv") for trend data tables.
execute(source, category, endpoint, params) -- Universal data fetcher; always returns cache_key
get_page(cache_key, offset, limit) -- Load additional pages from a previous execute()
query_cache(cache_key, conditions, sort_by, aggregate, group_by) -- Filter, sort, and aggregate cached data
export_data(cache_key, format) -- Export to CSV, JSON, or JSONL; returns download URL
execute("yc", "search", "search", {"query": ...}) -- Find startups by industry, batch, filters
execute("yc", "company", "get", {"slug": ...}) -- Get detailed company profile
execute("sec", "search", "search", {"query": ...}) -- Find public companies and filings
execute("sec", "document", "get", {"url": ...}) -- Get full document content
execute("reddit", "search", "search", {"query": ...}) -- Community insights and sentiment
execute("twitter", "search", "search_users", {"query": ...}) -- Real-time market pulse
execute("linkedin", "post", "search_posts", {"keywords": ...}) -- Professional trends
execute("linkedin", "search", "search_companies", {"keywords": ...}) -- Find companies
execute("linkedin", "company", "get", {"company": ...}) -- Company details
execute("webparser", "parse", "parse", {"url": ...}) -- Scrape any URL for website and market data
discover(source, category) -- Explore available endpoints for any source

Note: Crunchbase endpoints are disabled in v2. Use LinkedIn company search and Y Combinator data as alternatives for company research.
TAM/SAM/SOM Analysis:
Total Addressable Market (TAM):
- Count of YC companies in category × average market size
- SEC filing market size mentions
- Industry reports via execute("webparser", "parse", "parse", {"url": report_url})
Serviceable Addressable Market (SAM):
- Filter by geography, segment using query_cache()
- LinkedIn company search by ICP
- YC companies by batch/stage
Serviceable Obtainable Market (SOM):
- Realistic capture based on competition
- Competitive analysis via LinkedIn/social
- Market share indicators
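The TAM → SAM → SOM funnel is, mechanically, two successive haircuts on the top-line estimate. All numbers below are invented for illustration; the real inputs come from the YC, SEC, and LinkedIn steps above:

```python
# Illustrative funnel: every figure here is a made-up assumption.
tam = 120 * 50_000_000   # 120 companies in category x $50M avg market slice
sam = tam * 0.30         # serviceable share after geography/segment filters
som = sam * 0.10         # realistic obtainable share given competition

funnel = {"TAM": tam, "SAM": sam, "SOM": som}
```

The multipliers (0.30, 0.10) are where the research earns its keep: they should be backed by the query_cache filters and competitive counts, not guessed.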
Porter's Five Forces:
Using anysite v2 data:
1. Competitive Rivalry:
- YC startups in space
- LinkedIn company counts
- Social mention volume
2. Threat of New Entrants:
- Recent YC batches
- Funding announcements
- Talent movement (LinkedIn)
3. Supplier Power:
- Technology dependencies
- Integration partners
4. Buyer Power:
- Customer reviews (Reddit)
- Pricing transparency
- Switching costs mentioned
5. Threat of Substitutes:
- Alternative solutions
- Adjacent markets
Chat Summary:
CSV Export (via export_data(cache_key, "csv")):
JSON Export (via export_data(cache_key, "json")):
Ready for market research? Ask Claude to help you analyze markets, research startups, or study competitive landscapes using this skill!