Mercury Strategy collection stage 1 — company identity, benchmark check, situational awareness, and website inventory. Use when the consultant runs /ms-brief. This stage collects only. It produces no findings, no evaluations, and no recommendations.
npx claudepluginhub mb-uc/mercury --plugin mercury

This skill uses the workspace's default tool permissions.
Collection stage 1 of 3. Produces a structured evidence manifest covering company identity, benchmark data, situational awareness, and a high-level website inventory. No analysis. No findings.
Read this file in full before starting. Then:
Establish and record the following. All fields are required before proceeding.
| Field | Source |
|---|---|
| Legal name | Company website, regulatory filings |
| Trading name (if different) | Website, news coverage |
| Stock ticker and exchange | IR landing page, Bloomberg, LSE/NYSE |
| Listing status | Listed / private / recently listed / delisted |
| Sector and sub-sector | Company description, SIC code |
| Headquarters | About or contact page — verify against a second source (e.g. Companies House, Bloomberg, annual report) before recording. HQ is the most trust-sensitive field; an error here undermines the entire brief. |
| Primary domain | Confirmed live URL |
| Subdomains (if any) | Note any careers / IR / sustainability subdomains |
| Geographic scope | Global / regional / single-market |
| Revenue scale (approximate) | Annual report, press coverage — record if available, note if not |
Record in the evidence manifest as company_identity.
Query the Connect.IQ benchmark dataset for this company. The dataset covers 747 companies including the FTSE 100, FTSE 250, S&P 500, and Euro STOXX 50.
Run three queries:
Query 1 — Company scores
SELECT company, overall, company_narrative, content_mix, channel_mix,
optimization, reach, about_us, ir, media, csr, careers,
reputational_resilience, index_name, dataset_year
FROM sector_intelligence.iq_benchmarks
WHERE LOWER(company) LIKE LOWER('%{company}%')
LIMIT 5
Query 2 — Index statistics
SELECT
AVG(overall) AS mean_score,
APPROX_QUANTILES(overall, 4)[OFFSET(2)] AS median_score,
APPROX_QUANTILES(overall, 4)[OFFSET(3)] AS p75_score,
COUNT(*) AS company_count
FROM sector_intelligence.iq_benchmarks
WHERE index_name = '{index_from_query_1}'
Query 3 — Rank within index
SELECT company, overall,
RANK() OVER (ORDER BY overall DESC) AS rank,
COUNT(*) OVER () AS total
FROM sector_intelligence.iq_benchmarks
WHERE index_name = '{index_from_query_1}'
ORDER BY overall DESC
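Query 2's APPROX_QUANTILES(overall, 4)[OFFSET(n)] indexing is easy to misread: the call returns five quartile boundaries (min, p25, median, p75, max), so OFFSET(2) is the median and OFFSET(3) the 75th percentile. A minimal Python sketch of the same statistics — the function name is illustrative, and the nearest-rank quantile here only approximates BigQuery's APPROX_QUANTILES behaviour:

```python
def index_stats(scores):
    """Compute Query 2's statistics from a list of overall scores.

    Uses a nearest-rank quantile, which approximates the discrete
    boundaries APPROX_QUANTILES(overall, 4) returns.
    """
    s = sorted(scores)
    n = len(s)

    def quantile(q):
        # Index into the sorted scores at fraction q of the range.
        return s[min(n - 1, round(q * (n - 1)))]

    return {
        "mean_score": sum(s) / n,
        "median_score": quantile(0.5),   # OFFSET(2) in the SQL
        "p75_score": quantile(0.75),     # OFFSET(3) in the SQL
        "company_count": n,
    }
```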
Record in the evidence manifest as benchmark:
{
"overall": 0,
"index_name": "",
"sector_median": 0,
"sector_p75": 0,
"rank": 0,
"total_in_index": 0,
"dataset_year": ""
}
If the company is not found in the dataset, record in the evidence manifest instead:
{
"benchmark": {
"status": "not_in_dataset",
"note": "Company not found in Connect.IQ benchmark dataset. No comparative scoring available."
}
}
Do not attempt to estimate a score. Do not search for alternative benchmark sources at this stage.
Fallback context: When a company is not in the dataset, query general index statistics (median, P25, P75) for the company's listing index (e.g. FTSE 100, FTSE 250) and record them in benchmark.index_context. This gives the consultant sector framing even without a company-specific score.
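The found/not-found branch can be sketched as a hypothetical helper, assuming Query 1 results arrive as a list of row dicts; field names mirror the manifest schemas above, and the helper name is not part of the skill:

```python
def benchmark_entry(rows, index_stats=None):
    """Build the benchmark manifest entry from Query 1 rows (may be empty)."""
    if not rows:
        entry = {
            "status": "not_in_dataset",
            "note": "Company not found in Connect.IQ benchmark dataset. "
                    "No comparative scoring available.",
        }
        if index_stats:
            # Fallback context: general statistics for the listing index.
            entry["index_context"] = index_stats
        return entry
    row = rows[0]  # Query 1 is LIMIT 5; take the best match
    return {
        "overall": row["overall"],
        "index_name": row["index_name"],
        "dataset_year": row["dataset_year"],
    }
```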
Gather recent intelligence on the company. Scope: last six months only. Do not surface older material unless it is a structural fact (founding date, listing history).
Run web searches covering:
Do not search for: share price, analyst ratings, or market cap data. These are not used in the findings stage.
Apply references/MATERIAL_EVENTS_CHECKLIST.md to the search results. For each event type in the checklist, record:
Record in the evidence manifest as situational_awareness:
{
"search_date": "",
"material_events": [],
"recent_news_summary": "",
"sources": []
}
Fetch the company's news or newsroom section directly — do not rely on search results alone for recent news. Use firecrawl_scrape on the newsroom landing page and note:
Establish the primary domain's structure using firecrawl_map. Check references/CRAWL_CONFIG.md for any domain-specific configuration before running the map.
Default call:
firecrawl_map(url: "{primary_domain}")
If a CRAWL_CONFIG entry exists for this domain, use the specified parameters instead.
From the map output:
- Classify the top-level sections using references/CLASSIFICATION_RULES.md (Priority 2)
- Fetch robots.txt at {domain}/robots.txt and record any significant Disallow rules

If firecrawl_map returns fewer than 10 URLs, the site may be blocking the crawler. Record firecrawl_map: blocked and fall back to sitemap.xml.
Sitemap fallback:
Fetch {domain}/sitemap.xml. If it returns a sitemap index, fetch each child sitemap. Extract <loc> URLs and count them.
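The <loc> extraction can be sketched with the standard library; the helper name is illustrative. Note that sitemaps declare the http://www.sitemaps.org/schemas/sitemap/0.9 namespace, so searching for an un-namespaced <loc> tag finds nothing:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def extract_loc_urls(sitemap_xml: str) -> list[str]:
    """Pull every <loc> URL out of a sitemap or sitemap index document."""
    root = ET.fromstring(sitemap_xml)
    # iter() walks the whole tree, so this works for both <urlset>
    # and <sitemapindex> documents.
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text]
```

For a sitemap index, run the same helper on each child sitemap and sum the counts.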
Record in the evidence manifest as site_discovery:
{
"firecrawl_map_status": "complete | partial | blocked",
"sitemap_status": "present | present_incomplete | not_found",
"total_urls_discovered": 0,
"top_level_sections": [],
"subdomains_found": [],
"robots_disallow": []
}
Scrape the homepage and four to six key section landing pages. Priority order:
For each page, use firecrawl_scrape with onlyMainContent: true (or CRAWL_CONFIG override if applicable).
Apply URL classification from references/CLASSIFICATION_RULES.md to assign each page a section_key and playbook_page_type.
Apply presence quality classification from references/CLASSIFICATION_RULES.md (Presence quality classification section) to each scraped page:
| Quality | Criteria |
|---|---|
| present | 400+ words, structured headings, content addresses the expected concept |
| present_thin | Page exists but fewer than 200 words or generic boilerplate |
| present_stale | Content not updated in 18+ months |
| present_documents_only | Page consists only of PDF download links |
| present_external | Content served via external platform |
| present_generic | Page exists but not configured for expected audience or purpose |
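The table can be expressed as a small classifier. This is a sketch, not the canonical logic in CLASSIFICATION_RULES.md — the check ordering and the treatment of the 200–399-word gap are assumptions:

```python
def classify_presence(word_count, months_since_update=None,
                      pdf_links_only=False, external_platform=False,
                      generic=False):
    """Map scraped-page signals to a presence-quality label.

    Thresholds (400/200 words, 18 months) come from the table above;
    the precedence of the checks is an assumption.
    """
    if external_platform:
        return "present_external"
    if pdf_links_only:
        return "present_documents_only"
    if generic:
        return "present_generic"
    if word_count < 200:
        return "present_thin"
    if months_since_update is not None and months_since_update >= 18:
        return "present_stale"
    if word_count >= 400:
        return "present"
    # 200-399 words sits between the table's thresholds; treated as thin here.
    return "present_thin"
```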
Record in the evidence manifest as section_inventory — one entry per page:
{
"url": "",
"section_key": "",
"playbook_page_type": "",
"classification_rule": "priority_1_exact | priority_2_segment | priority_3_deep | ambiguous",
"classification_confidence": "high | medium",
"presence_quality": "",
"word_count": 0,
"page_title": "",
"content_summary": ""
}
Probe for downloadable documents linked from the pages crawled in Step 5. Look for PDF, PPTX, XLSX links in the scraped content. Do not fetch document contents at this stage — record URLs and apparent document types only.
Classify by document type using references/CLASSIFICATION_RULES.md Document sub-classification table:
Priority document types (most valuable for the findings stage):
Present the document list to the consultant before extracting any documents. Show:
Wait for confirmation before proceeding. The consultant may choose to:
Do not extract document contents without explicit confirmation.
For approved documents, use firecrawl_scrape on the document URL. Record:
Read references/PEER_RESEARCH_GUIDE.md before beginning this step.
Propose 4–5 sector peers using the three primary filters from PEER_RESEARCH_GUIDE.md:
After applying the primary filters, use the secondary considerations to choose between eligible candidates (direct competitive overlap, acknowledged benchmarks, digital maturity contrast, avoid related parties).
Present the proposed peer set to the consultant before running any research. The peer table must be the last thing you output before stopping — do not append menus, follow-up options, or other text after the table. This ensures the table is visible on screen when the consultant reads it.
Proposed peer set for [Client]:
| # | Company | Sector | Index | Market cap | Rationale |
|---|---------|--------|-------|------------|-----------|
| 1 | [Name] | [sector] | [index] | [cap] | [1 sentence] |
| 2 | [Name] | [sector] | [index] | [cap] | [1 sentence] |
...
Reply "go" to confirm, or suggest replacements.
Wait for confirmation. Do not begin peer research until the peer set is locked. Do not show a choice menu or any other UI element alongside the peer table — the table itself is the prompt.
For each confirmed peer, run the research scope defined in PEER_RESEARCH_GUIDE.md:
Per peer (credit budget: ~7 credits)
Run firecrawl_map on the root domain. Record total URL count and subdomains. Classify sections present using references/CLASSIFICATION_RULES.md.
Scrape 6–10 targeted pages: homepage, IR landing, sustainability landing, careers landing, newsroom, and any sections relevant to the specific archetypes being investigated (check which archetypes are most likely given the company profile from Step 1).
Run the abbreviated document check: annual report, sustainability report, results presentation, TCFD report, and any documents directly relevant to the engagement focus. Do not run the full DOCUMENT_CHECKLIST at this stage — that is the client's domain (ms-crawl).
Note any features or capabilities that are notably stronger or weaker than the client.
For each peer, complete the feature matrix using the standard feature set from PEER_RESEARCH_GUIDE.md (F01–F39). Record each feature as present, present_thin, present_external, absent, or not_assessed.
Feature categories:
If the total peer research budget would exceed 35 credits, reduce pages scraped per peer rather than reducing peer count. Minimum viable pass per peer: homepage + IR landing + sustainability landing + careers landing (3 targeted pages, ~4 credits).
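The budget trade-off can be sketched numerically. The one-credit-per-call cost model is an assumption consistent with the ~7-credits-per-peer figure (one map plus roughly six page scrapes):

```python
def pages_per_peer(peer_count, budget=35, map_cost=1, min_pages=3):
    """Fit the per-peer scrape depth under the total credit budget.

    Reduces pages per peer rather than peer count, clamped between
    the minimum viable pass (3 pages) and the 10-page ceiling.
    """
    per_peer_credits = budget // peer_count
    pages = per_peer_credits - map_cost  # remaining credits buy page scrapes
    return max(min_pages, min(10, pages))
```

For example, five peers under a 35-credit budget leaves six targeted pages each after the map call.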
If a peer site blocks the crawl, record all feature matrix entries as not_assessed and note the limitation.
Where Connect.IQ benchmark data is available for peers, pull it to supplement the feature matrix:
SELECT company, overall, ir, csr, careers, media, about_us
FROM sector_intelligence.iq_benchmarks
WHERE LOWER(company) IN ([peer names lower-cased])
AND dataset_year = 2024
If Connect.IQ data is not available for a peer (company not in the benchmark universe), note this and rely solely on the feature matrix. Do not estimate IQ scores for peers.
Record peer research findings in the evidence manifest as peer_context:
{
"peer_context": {
"confirmed_peers": [
{
"name": "",
"domain": "",
"index": "",
"sector": "",
"market_cap_approx": "",
"pages_discovered": 0,
"subdomains": [],
"iq_scores": {},
"notable_features": [],
"notable_absences": [],
"research_limitations": []
}
],
"feature_matrix": {
"client": "",
"peers": [],
"research_date": "",
"features": [
{
"feature_id": "F01",
"category": "",
"feature": "",
"client": "present | present_thin | present_external | absent | not_assessed",
"peers": {},
"notes": ""
}
]
},
"peer_patterns": [
{
"pattern": "",
"client_position": "aligned | ahead | behind | not_assessed"
}
]
}
}
peer_patterns: after completing the feature matrix, identify patterns across the peer set — features present across most or all peers, features the client has that peers lack, and gaps shared across the entire peer set. These patterns feed directly into benchmark framing in ms-findings.
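Pattern derivation can be sketched as a hypothetical helper over the completed feature-matrix entries; the "all peers" / "only client" thresholds are one reasonable reading of the pattern definitions above, not logic the skill prescribes:

```python
PRESENCE = frozenset({"present", "present_thin", "present_external"})

def derive_peer_patterns(features):
    """Identify cross-peer patterns from feature-matrix entries.

    A feature counts as held when its status is one of the PRESENCE
    values; absent and not_assessed do not count.
    """
    patterns = []
    for f in features:
        peer_states = list(f["peers"].values())
        peers_with = sum(s in PRESENCE for s in peer_states)
        client_has = f["client"] in PRESENCE
        if peer_states and peers_with == len(peer_states) and not client_has:
            patterns.append({"pattern": f"All peers offer {f['feature']}",
                             "client_position": "behind"})
        elif client_has and peers_with == 0:
            patterns.append({"pattern": f"Only the client offers {f['feature']}",
                             "client_position": "ahead"})
    return patterns
```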
Save the complete evidence manifest as {company}-ms-brief-evidence.json.
Manifest structure:
{
"stage": "ms-brief",
"company": "",
"domain": "",
"collected_at": "",
"company_identity": {},
"benchmark": {},
"situational_awareness": {},
"site_discovery": {},
"section_inventory": [],
"document_inventory": [],
"document_extraction": [],
"peer_context": {},
"evidence_gaps": [],
"consultant_notes": []
}
evidence_gaps: record any sections that could not be accessed, pages that returned errors, or data points that could not be confirmed. These are inputs to the ms-findings limitations section.
consultant_notes: record any observations that require consultant judgement — ambiguous company structures, recently changed domains, apparent site migrations in progress, etc.
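Before saving, the manifest can be checked against the structure above; a minimal sketch (the helper name is illustrative):

```python
REQUIRED_KEYS = [
    "stage", "company", "domain", "collected_at", "company_identity",
    "benchmark", "situational_awareness", "site_discovery",
    "section_inventory", "document_inventory", "document_extraction",
    "peer_context", "evidence_gaps", "consultant_notes",
]

def validate_manifest(manifest: dict) -> list[str]:
    """Return the required top-level keys missing from the manifest."""
    return [k for k in REQUIRED_KEYS if k not in manifest]
```

Any key this returns should be explained in evidence_gaps rather than silently omitted.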
After saving the evidence manifest, store the brief results in BigQuery if the bigquery connector is available (mcp__bigquery__run_query in Cowork). This is a best-effort step — if the tool is unavailable, skip silently.
SELECT GENERATE_UUID() AS analysis_id
Then insert:
INSERT INTO sector_intelligence.ms_analyses
(analysis_id, company, domain, generated_at, analysis_type,
executive_summary, coverage_confidence, iq_score, index_name,
sector, listing_status, pages_loaded, sections_assessed,
evidence_gaps, limitations, loaded_at)
VALUES (
'{analysis_id}',
'{company}',
'{domain}',
CURRENT_TIMESTAMP(),
'ms_brief',
NULL,
NULL,
{benchmark.iq_score or NULL},
'{benchmark.index_name or NULL}',
'{sector or NULL}',
'{listing_status}',
0,
['{top_level_section_1}', '{top_level_section_2}', ...],
['{evidence_gap_1}', ...],
[],
CURRENT_TIMESTAMP()
)
The analysis_type is 'ms_brief'. Fields like executive_summary and coverage_confidence are NULL — they are populated when ms-findings runs and updates this row. The artefact_json column can optionally hold the full manifest JSON.
Do not block: If the INSERT fails, proceed to Stage completion.
After saving the manifest, show a clean summary to the consultant:
Show:
Do not show: raw JSON, criterion observations, findings, or recommendations.
Offer:
/ms-crawl (recommended next step)

If document extraction is still pending consultant approval, surface the document list now and wait for instruction before offering to continue.