Analyze a company's visual design language across all marketing asset types — blog covers, changelogs, integrations, customer stories, testimonials, logo walls, stat cards, landing pages, and product UI presentation. Produces a structured evidence file with measured proportions, design system observations, and quality ratings. Use when asked to study, benchmark, or analyze another company's design, brand identity, or marketing graphics.
From the gtm plugin (inkeep/team-skills). This skill uses the workspace's default tool permissions.
references/analysis-rubric.md
references/output-template.md
Systematically analyze how a B2B company designs its marketing graphics. Produces a structured evidence file per company with measured proportions, visual analysis across every design facet, and actionable takeaways.
This skill is designed to be invoked via /nest-claude for parallelism — each session analyzes 1-4 companies with its own full context window. Multiple sessions run concurrently, each writing per-company evidence files to a shared output directory. An orchestrator session then synthesizes across all dissections.
$ARGUMENTS contains one or more company domains and an optional output directory:
/dissect-brand resend.com decagon.ai neon.com --output ~/reports/visual-playbook/dissections
Defaults:
If --output is not specified, deliver findings in the conversation (not to a file).
If --output IS specified, write one file per company: {output-dir}/{domain-slug}.md (e.g., resend-com.md).
Before starting any work, create a task for each step using TaskCreate with addBlockedBy to enforce ordering. Derive descriptions and completion criteria from each step's own workflow text.
Mark each task in_progress when starting and completed when its step's exit criteria are met. On re-entry, check TaskList first and resume from the first non-completed task.
Create directories for image downloads and persistent evidence. Use the company domain slug for namespacing:
# Temp workspace for raw downloads
mkdir -p /tmp/dissect/thumbs
# Persistent evidence directory (alongside the dissection .md files)
# If --output is ~/reports/visual-playbook/dissections, create:
# ~/reports/visual-playbook/dissections/images/{company-slug}/
# This preserves the actual images for human review and orchestrator spot-checking
For each company being analyzed, create its image evidence directory:
mkdir -p {output-dir}/images/{company-slug}
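Concretely, the slug is the domain with dots replaced by hyphens. A minimal sketch (the domain and the /tmp demo directory are examples, not part of the skill):

```shell
# Hypothetical example: derive the company slug used for namespacing
domain="resend.com"
slug="$(printf '%s' "$domain" | tr '.' '-')"   # resend.com -> resend-com
mkdir -p "/tmp/dissect-demo/images/$slug"      # stands in for {output-dir}/images/{company-slug}
echo "$slug"
```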
Why save images persistently: The dissection markdown describes what you saw, but the orchestrator and humans need to LOOK at the images to verify claims, spot-check proportional measurements, and make their own judgments. Text descriptions of visual design are inherently lossy — the image is the ground truth.
Visit the company's website and locate pages for each asset type. Not every company has all types — skip what doesn't exist, note it as "Not found."
| Asset type | Where to look | What you're looking for |
|---|---|---|
| Blog covers | /blog, /news, /posts | Thumbnail/hero images on blog listing + individual posts |
| Changelog graphics | /changelog, /updates, /releases, /whats-new | Per-entry graphics, template system |
| Integration cards | /integrations, /partners, /ecosystem, or integrations section on homepage | Logo pairings, integration showcase cards |
| Customer stories | /customers, /case-studies, /stories | Hero images, card thumbnails, metric displays |
| Testimonial quotes | Homepage, /customers, landing pages | Quote cards, avatar+quote formatting |
| Logo wall | Homepage "trusted by" section, /customers | Logo grid, treatment (mono vs color), density |
| Stat/metric cards | Blog thumbnails, case studies, landing page sections | Standalone stat graphics, data callouts |
| Landing page hero | Homepage (/) | Main product hero visual, above-the-fold graphic |
| Product UI presentation | Blog posts, landing pages, feature pages | How they show their product: raw screenshots, stylized mockups, abstract representations, or never shown |
Use WebFetch to load each page. Extract image URLs from og:image meta tags, img src attributes, or CSS background-image properties.
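When WebFetch's extraction misses an image, the og:image URL can also be pulled from raw HTML with grep and sed. A rough sketch (the HTML snippet is a stand-in, not a real page):

```shell
# Hypothetical page fragment containing an og:image meta tag
html='<meta property="og:image" content="https://example.com/cover.png">'

# Isolate the content="..." attribute, then strip the wrapper
echo "$html" \
  | grep -oE 'content="[^"]+"' \
  | sed 's/content="//; s/"//'
```

This is a fallback for simple pages; it will not find images injected by client-side JavaScript.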
For each asset type, download 2-4 representative images. Save to BOTH temp (for analysis) and evidence directory (for persistence).
Naming convention: {type}-{slug}.png where type is one of: blog, changelog, integration, customer, quote, logo-wall, stat, hero, product-ui
# Download original
curl -sL -o /tmp/dissect/{company}-{type}-{slug}.{ext} "IMAGE_URL"
# Record source dimensions (ALWAYS do this — critical data)
sips -g pixelWidth -g pixelHeight /tmp/dissect/{company}-{type}-{slug}.{ext}
# Resize for analysis (800px max — prevents image reading crashes)
sips -s format png -Z 800 /tmp/dissect/{company}-{type}-{slug}.{ext} \
--out /tmp/dissect/thumbs/{company}-{type}-{slug}.png
# Save to persistent evidence directory (resized copy for human review)
cp /tmp/dissect/thumbs/{company}-{type}-{slug}.png \
{output-dir}/images/{company-slug}/{type}-{slug}.png
Convert non-PNG formats (avif, webp) during resize:
sips -s format png -Z 800 source.avif --out /tmp/dissect/thumbs/output.png
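The thumbnail path can be derived from any source filename by stripping its extension, so one pipeline handles png, avif, and webp alike. A sketch (the filename is hypothetical; sips itself is macOS-only):

```shell
# Hypothetical source file downloaded in the previous step
src="/tmp/dissect/resend-blog-launch.avif"

# Drop the extension, keep the basename, re-root under thumbs/ as .png
thumb="/tmp/dissect/thumbs/$(basename "${src%.*}").png"
echo "$thumb"
# On macOS: sips -s format png -Z 800 "$src" --out "$thumb"
```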
What gets saved persistently: the resized copies at {output-dir}/images/{company-slug}/{type}-{slug}.png — small enough to browse, large enough to see detail.
What stays in /tmp only: the full-size originals used for dimension measurement.
⛔ Read images ONE AT A TIME. After reading each image, write your complete analysis before reading the next. Reading multiple large images in one turn causes crashes.
Load: references/analysis-rubric.md for the complete list of facets to evaluate per image.
For each image, work through the rubric systematically. The proportional measurements (margins, heading size, content coverage) are the MOST valuable output — measure quantitatively, not qualitatively. Say "heading is ~14% of canvas height at 1920px source" not "large heading."
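Proportions should be computed from the recorded pixel dimensions, not eyeballed. A minimal sketch (the pixel values are hypothetical, matching a 1920×1005 OG-ratio canvas):

```shell
# Hypothetical measurements: heading cap-height vs. source canvas height
heading_px=140
canvas_px=1005

# Express the heading as a percentage of canvas height, one decimal place
awk -v h="$heading_px" -v c="$canvas_px" 'BEGIN { printf "%.1f%%\n", 100 * h / c }'
```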
Look ACROSS all the images you've analyzed for this company:
Rate the company on each dimension (1-5 scale):
| Dimension | What it measures |
|---|---|
| Consistency | Do all assets feel like one brand? |
| Craft quality | Typography, spacing, color, shadow, alignment — attention to detail |
| Creativity/distinctiveness | Unique or generic? Would you remember this? |
| Product showcase | How effectively do they show what the product actually does? |
| Small-size readability | Does the thumbnail work at ~300px wide (card size)? |
| Brand system strength | How well-defined is the locked-vs-variable system? |
| Emotional coherence | Does the visual register match the product positioning? |
Write 3 key takeaways — the most important things another company could learn from studying this brand.
Write the evidence file in the format specified in references/output-template.md.
If --output was specified, write to {output-dir}/{domain-slug}.md. If not, deliver in conversation.
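The fan-out is one file per domain under the output directory. A sketch of the resulting paths (the domains and directory are the examples from above, not fixed values):

```shell
# Hypothetical invocation: /dissect-brand resend.com decagon.ai neon.com --output reports/visual-playbook/dissections
out="reports/visual-playbook/dissections"
for domain in resend.com decagon.ai neon.com; do
  slug="$(printf '%s' "$domain" | tr '.' '-')"
  echo "$out/$slug.md"    # one evidence file per company
done
```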
The goal is NOT a spreadsheet of dimensions. For each image, ask:
Not every company is good at every asset type. If their case study graphics are mediocre, say so — and explain WHY compared to their own blog covers or compared to other companies. The contrast between strong and weak work is instructive.
The best design is often defined by restraint. For each company, actively study:
Bad: "The heading is large and bold"
Good: "The heading is ~14% of canvas height (~140px at 1920w source), set in what appears to be a Didone serif at weight ~700. It occupies the left 40% of the frame. The weight contrast between heading and subtitle is approximately 5:1."
Must have:
Should have: