Reverse-engineer and clone one or more websites in one shot — extracts assets, CSS, and content section-by-section and proactively dispatches parallel builder agents in worktrees as it goes. Use this whenever the user wants to clone, replicate, rebuild, reverse-engineer, or copy any website. Also triggers on phrases like "make a copy of this site", "rebuild this page", "pixel-perfect clone". Provide one or more target URLs as arguments.
Install: `npx claudepluginhub tsrun/tsrun-toolkit --plugin tsrun-toolkit`

This skill uses the workspace's default tool permissions.
You are about to reverse-engineer and rebuild **$ARGUMENTS** as pixel-perfect clones.
When multiple URLs are provided, process them independently and in parallel where possible, while keeping each site's extraction artifacts isolated in dedicated folders (for example, docs/research/<hostname>/).
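One way to keep those artifacts isolated is to derive each site's folder from its hostname. A minimal sketch, assuming the `docs/research/<hostname>/` convention above (the function name is illustrative; `new URL()` throwing on bad input doubles as the validation step):

```javascript
// Hypothetical helper: map a target URL to its per-site research folder.
// Invalid URLs throw, which surfaces them before any extraction starts.
function researchDir(rawUrl) {
  const { hostname } = new URL(rawUrl);
  return `docs/research/${hostname}/`;
}

// researchDir('https://example.com/pricing') -> 'docs/research/example.com/'
```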
This is not a two-phase process (inspect then build). You are a foreman walking the job site — as you inspect each section of the page, you write a detailed specification to a file, then hand that file to a specialist builder agent with everything they need. Extraction and construction happen in parallel, but extraction is meticulous and produces auditable artifacts.
The target is whatever page $ARGUMENTS resolves to. Clone exactly what's visible at that URL. Unless the user specifies otherwise, use these defaults:
If the user provides additional instructions (specific fidelity level, customizations, extra context), honor those over the defaults.
- Treat $ARGUMENTS as one or more URLs. Normalize and validate each URL; if any are invalid, ask the user to correct them before proceeding. For each valid URL, verify it is accessible via your browser MCP tool.
- Confirm the project builds with `npm run build`. The Next.js + shadcn/ui + Tailwind v4 scaffold should already be in place. If not, tell the user to set it up first.
- Create the working folders: `docs/research/`, `docs/research/components/`, `docs/design-references/`, `scripts/`. For multiple clones, also prepare per-site folders like `docs/research/<hostname>/` and `docs/design-references/<hostname>/`.

These are the truths that separate a successful clone from a "close enough" mess. Internalize them — they should inform every decision you make.
Every builder agent must receive everything it needs to do its job perfectly: screenshot, exact CSS values, downloaded assets with local paths, real text content, component structure. If a builder has to guess anything — a color, a font size, a padding value — you have failed at extraction. Take the extra minute to extract one more property rather than shipping an incomplete brief.
When an agent gets "build the entire features section," it glosses over details — it approximates spacing, guesses font sizes, and produces something "close enough" but clearly wrong. When it gets a single focused component with exact CSS values, it nails it every time.
Look at each section and judge its complexity. A simple banner with a heading and a button? One agent. A complex section with 3 different card variants, each with unique hover states and internal layouts? One agent per card variant plus one for the section wrapper. When in doubt, make it smaller.
Complexity budget rule: If a builder prompt exceeds ~150 lines of spec content, the section is too complex for one agent. Break it into smaller pieces. This is a mechanical check — don't override it with "but it's all related."
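The budget check really can be mechanical. A sketch using the ~150-line threshold above (the helper name is illustrative):

```javascript
// Mechanical complexity check: a spec exceeding the line budget means the
// section must be split into smaller pieces before dispatching one builder.
function needsSplit(specText, budgetLines = 150) {
  return specText.split('\n').length > budgetLines;
}
```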
Extract the actual text, images, videos, and SVGs from the live site. This is a clone, not a mockup. Use element.textContent, download every <img> and <video>, extract inline <svg> elements as React components. The only time you generate content is when something is clearly server-generated and unique per session.
Layered assets matter. A section that looks like one image is often multiple layers — a background watercolor/gradient, a foreground UI mockup PNG, an overlay icon. Inspect each container's full DOM tree and enumerate ALL <img> elements and background images within it, including absolutely-positioned overlays. Missing an overlay image makes the clone look empty even if the background is correct.
Nothing can be built until the foundation exists: global CSS with the target site's design tokens (colors, fonts, spacing), TypeScript types for the content structures, and global assets (fonts, favicons). This is sequential and non-negotiable. Everything after this can be parallel.
A website is not a screenshot — it's a living thing. Elements move, change, appear, and disappear in response to scrolling, hovering, clicking, resizing, and time. If you only extract the static CSS of each element, your clone will look right in a screenshot but feel dead when someone actually uses it.
For every element, extract its appearance (exact computed CSS via getComputedStyle()) AND its behavior (what changes, what triggers the change, and how the transition happens). Not "it looks like 16px" — extract the actual computed value. Not "the nav changes on scroll" — document the exact trigger (scroll position, IntersectionObserver threshold, viewport intersection), the before and after states (both sets of CSS values), and the transition (duration, easing, CSS transition vs. JS-driven vs. CSS animation-timeline).
Examples of behaviors to watch for — these are illustrative, not exhaustive. The page may do things not on this list, and you must catch those too:
- CSS scroll snapping (`scroll-snap-type`)
- Smooth scroll libraries (a `.lenis` class or scroll container wrappers)

This is the single most expensive mistake in cloning: building a click-based UI when the original is scroll-driven, or vice versa. Before writing any builder prompt for an interactive section, you must definitively answer: Is this section driven by clicks, scrolls, hovers, time, or some combination?
How to determine this:
- Inspect the section for `IntersectionObserver`, `scroll-snap`, `position: sticky`, `animation-timeline`, or JS scroll listeners.

A section with a sticky sidebar and scrolling content panels is fundamentally different from a tabbed interface where clicking switches content. Getting this wrong means a complete rewrite, not a CSS tweak.
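A quick probe run via browser MCP can surface scroll-driven machinery before you commit to an interaction model. A sketch (the function name and the library class selectors are illustrative, drawn from the checks described in this document):

```javascript
// Run via browser MCP: count scroll-driven signals in the live page.
function probeInteractionModel() {
  const all = [...document.querySelectorAll('*')];
  return {
    // Containers using CSS scroll snapping
    snapContainers: all.filter(el => getComputedStyle(el).scrollSnapType !== 'none').length,
    // Elements pinned during scroll
    stickyOrFixed: all.filter(el => ['sticky', 'fixed'].includes(getComputedStyle(el).position)).length,
    // Smooth scroll library markers (Lenis, Locomotive Scroll)
    smoothScrollLib: !!document.querySelector('.lenis, .locomotive-scroll'),
  };
}
// In the MCP session: JSON.stringify(probeInteractionModel())
```

Nonzero snap or sticky counts are a strong hint the section is scroll-driven, not click-driven.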
Many components have multiple visual states — a tab bar shows different cards per tab, a header looks different at scroll position 0 vs 100, a card has hover effects. You must extract ALL states, not just whatever is visible on page load.
For tabbed/stateful content: click each tab or trigger via browser MCP, then extract the full content and computed styles for every state, not just the one visible on load.
For scroll-dependent elements: capture computed styles at several scroll positions, then diff the captures to recover the before and after states.
Every component gets a specification file in docs/research/components/ BEFORE any builder is dispatched. This file is the contract between your extraction work and the builder agent. The builder receives the spec file contents inline in its prompt — the file also persists as an auditable artifact that the user (or you) can review if something looks wrong.
The spec file is not optional. It is not a nice-to-have. If you dispatch a builder without first writing a spec file, you are shipping incomplete instructions based on whatever you can remember from a browser MCP session, and the builder will guess to fill gaps.
Every builder agent must verify npx tsc --noEmit passes before finishing. After merging worktrees, you verify npm run build passes. A broken build is never acceptable, even temporarily.
Navigate to the target URL with browser MCP.
- Take full-page and per-section screenshots, saved to `docs/design-references/` with descriptive names

Extract these from the page before doing anything else:
Fonts — Inspect <link> tags for Google Fonts or self-hosted fonts. Check computed font-family on key elements (headings, body, code, labels). Document every family, weight, and style actually used. Configure them in src/app/layout.tsx using next/font/google or next/font/local.
Colors — Extract the site's color palette from computed styles across the page. Update src/app/globals.css with the target's actual colors in the :root and .dark CSS variable blocks. Map them to shadcn's token names (background, foreground, primary, muted, etc.) where they fit. Add custom properties for colors that don't map to shadcn tokens.
Favicons & Meta — Download favicons, apple-touch-icons, OG images, webmanifest to public/seo/. Update layout.tsx metadata.
Global UI patterns — Identify any site-wide CSS or JS: custom scrollbar hiding, scroll-snap on the page container, global keyframe animations, backdrop filters, gradients used as overlays, smooth scroll libraries (Lenis, Locomotive Scroll — check for .lenis, .locomotive-scroll, or custom scroll container classes). Add these to globals.css and note any libraries that need to be installed.
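Before editing globals.css, it helps to enumerate the colors actually in use. A sketch to run via browser MCP (the function name is an assumption; the transparent-color filter mirrors the extraction script later in this document):

```javascript
// Run via browser MCP: collect the distinct computed colors on the page,
// ready to map onto the :root / .dark CSS variables in globals.css.
function collectPalette() {
  const colors = new Set();
  for (const el of document.querySelectorAll('*')) {
    const cs = getComputedStyle(el);
    for (const prop of ['color', 'backgroundColor', 'borderTopColor']) {
      const v = cs[prop];
      if (v && v !== 'rgba(0, 0, 0, 0)') colors.add(v); // skip fully transparent
    }
  }
  return [...colors].sort();
}
```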
This is a dedicated pass AFTER screenshots and BEFORE anything else. Its purpose is to discover every behavior on the page — many of which are invisible in a static screenshot.
Scroll sweep: Scroll the page slowly from top to bottom via browser MCP. At each section, pause and observe:
Click sweep: Click every element that looks interactive:
Hover sweep: Hover over every element that might have hover states:
Responsive sweep: Test at 3 viewport widths via browser MCP: desktop (1440px), tablet (768px), and mobile (390px).
Save all findings to docs/research/BEHAVIORS.md. This is your behavior bible — reference it when writing every component spec.
Map out every distinct section of the page from top to bottom. Give each a working name. Document:
Save this as docs/research/PAGE_TOPOLOGY.md — it becomes your assembly blueprint.
This is sequential. Do it yourself (not delegated to an agent) since it touches many files:
- Update `layout.tsx` to match the target site's actual fonts
- Define TypeScript types in `src/types/` for the content structures you've observed
- Extract all inline `<svg>` elements on the page, deduplicate them, and save them as named React components in `src/components/icons.tsx`. Name them by visual function (e.g., SearchIcon, ArrowRightIcon, LogoIcon).
- Write a script (`scripts/download-assets.mjs`) that downloads all images, videos, and other binary assets from the page to `public/`. Preserve meaningful directory structure.
- Verify `npm run build` passes

Use browser MCP to enumerate all assets on the page:
// Run this via browser MCP to discover all assets
JSON.stringify({
images: [...document.querySelectorAll('img')].map(img => ({
src: img.src || img.currentSrc,
alt: img.alt,
width: img.naturalWidth,
height: img.naturalHeight,
parentClasses: img.parentElement?.className,
siblings: img.parentElement ? [...img.parentElement.querySelectorAll('img')].length : 0,
position: getComputedStyle(img).position,
zIndex: getComputedStyle(img).zIndex
})),
videos: [...document.querySelectorAll('video')].map(v => ({
src: v.src || v.querySelector('source')?.src,
poster: v.poster,
autoplay: v.autoplay,
loop: v.loop,
muted: v.muted
})),
backgroundImages: [...document.querySelectorAll('*')].filter(el => {
const bg = getComputedStyle(el).backgroundImage;
return bg && bg !== 'none';
}).map(el => ({
url: getComputedStyle(el).backgroundImage,
    element: el.tagName + '.' + (typeof el.className === 'string' ? el.className.split(' ')[0] : '') // className is an object on SVG elements
})),
svgCount: document.querySelectorAll('svg').length,
fonts: [...new Set([...document.querySelectorAll('*')].slice(0, 200).map(el => getComputedStyle(el).fontFamily))],
favicons: [...document.querySelectorAll('link[rel*="icon"]')].map(l => ({ href: l.href, sizes: l.sizes?.toString() }))
});
Then write a download script that fetches everything to public/. Use batched parallel downloads (4 at a time) with proper error handling.
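A sketch of what that script might look like (the scripts/download-assets.mjs name comes from the setup steps; the URL list is assumed to be the output of the enumeration script above, and the helper names are illustrative):

```javascript
// Sketch of scripts/download-assets.mjs: batched parallel downloads
// (4 at a time) with per-file error handling.

// Split the URL list into fixed-size batches.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Mirror the URL's path under public/ to preserve directory structure.
function localPath(url) {
  return 'public' + new URL(url).pathname;
}

async function downloadOne(url) {
  const { mkdir, writeFile } = await import('node:fs/promises');
  const { dirname } = await import('node:path');
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${url}`);
  const dest = localPath(url);
  await mkdir(dirname(dest), { recursive: true });
  await writeFile(dest, Buffer.from(await res.arrayBuffer()));
  return dest;
}

async function downloadAll(urls) {
  const failures = [];
  for (const batch of chunk(urls, 4)) {
    // 4 downloads in parallel per batch; record failures instead of aborting.
    const results = await Promise.allSettled(batch.map(downloadOne));
    results.forEach((r, i) => {
      if (r.status === 'rejected') failures.push({ url: batch[i], reason: String(r.reason) });
    });
  }
  return failures; // report these, don't silently swallow them
}
```

Returning the failure list rather than throwing lets a re-run retry only what is missing.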
This is the core loop. For each section in your page topology (top to bottom), you do THREE things: extract, write the spec file, then dispatch builders.
For each section, use browser MCP to extract everything:
Screenshot the section in isolation (scroll to it, screenshot the viewport). Save to docs/design-references/.
Extract CSS for every element in the section. Use the extraction script below — don't hand-measure individual properties. Run it once per component container and capture the full output:
// Per-component extraction — run via browser MCP
// Replace SELECTOR with the actual CSS selector for the component
(function(selector) {
const el = document.querySelector(selector);
if (!el) return JSON.stringify({ error: 'Element not found: ' + selector });
const props = [
'fontSize','fontWeight','fontFamily','lineHeight','letterSpacing','color',
'textTransform','textDecoration','backgroundColor','background',
'padding','paddingTop','paddingRight','paddingBottom','paddingLeft',
'margin','marginTop','marginRight','marginBottom','marginLeft',
'width','height','maxWidth','minWidth','maxHeight','minHeight',
'display','flexDirection','justifyContent','alignItems','gap',
'gridTemplateColumns','gridTemplateRows',
'borderRadius','border','borderTop','borderBottom','borderLeft','borderRight',
'boxShadow','overflow','overflowX','overflowY',
'position','top','right','bottom','left','zIndex',
'opacity','transform','transition','cursor',
'objectFit','objectPosition','mixBlendMode','filter','backdropFilter',
'whiteSpace','textOverflow','WebkitLineClamp'
];
function extractStyles(element) {
const cs = getComputedStyle(element);
const styles = {};
props.forEach(p => { const v = cs[p]; if (v && v !== 'none' && v !== 'normal' && v !== 'auto' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)') styles[p] = v; });
return styles;
}
function walk(element, depth) {
if (depth > 4) return null;
const children = [...element.children];
return {
tag: element.tagName.toLowerCase(),
classes: element.className?.toString().split(' ').slice(0, 5).join(' '),
text: element.childNodes.length === 1 && element.childNodes[0].nodeType === 3 ? element.textContent.trim().slice(0, 200) : null,
styles: extractStyles(element),
images: element.tagName === 'IMG' ? { src: element.src, alt: element.alt, naturalWidth: element.naturalWidth, naturalHeight: element.naturalHeight } : null,
childCount: children.length,
children: children.slice(0, 20).map(c => walk(c, depth + 1)).filter(Boolean)
};
}
return JSON.stringify(walk(el, 0), null, 2);
})('SELECTOR');
// State A: capture styles at current state (e.g., scroll position 0)
// Then trigger the state change (scroll, click, hover via browser MCP)
// State B: re-run the extraction script on the same element
// The diff between A and B IS the behavior specification
Record the diff explicitly: "Property X changes from VALUE_A to VALUE_B, triggered by TRIGGER, with transition: TRANSITION_CSS."
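Computing that diff can itself be mechanical. A minimal sketch that takes two captures of the extraction script's styles output (the function name is illustrative):

```javascript
// Diff two captured style objects into { prop: { before, after } } records.
function diffStyles(stateA, stateB) {
  const changes = {};
  for (const prop of new Set([...Object.keys(stateA), ...Object.keys(stateB)])) {
    if (stateA[prop] !== stateB[prop]) {
      changes[prop] = { before: stateA[prop], after: stateB[prop] };
    }
  }
  return changes;
}

// Example: header captured at scroll 0 (state A) vs scroll 100 (state B).
const diff = diffStyles(
  { maxWidth: '100vw', boxShadow: 'none', color: 'rgb(0, 0, 0)' },
  { maxWidth: '1200px', boxShadow: '0 4px 20px rgba(0,0,0,0.1)', color: 'rgb(0, 0, 0)' }
);
// diff lists maxWidth and boxShadow only; color is unchanged.
```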
Extract real content — all text, alt attributes, aria labels, placeholder text. Use element.textContent for each text node. For tabbed/stateful content, click each tab and extract content per state.
Identify assets this section uses — which downloaded images/videos from public/, which icon components from icons.tsx. Check for layered images (multiple <img> or background-images stacked in the same container).
Assess complexity — how many distinct sub-components does this section contain? A distinct sub-component is an element with its own unique styling, structure, and behavior (e.g., a card, a nav item, a search panel).
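The verbatim text extraction described above can be sketched as a walker that records each element's own text nodes, so the spec can map every string back to its element (run via browser MCP; the function name is illustrative):

```javascript
// Collect each element's own text (not its descendants') with a DOM path.
function collectText(root) {
  const out = [];
  const walkNodes = (el, path) => {
    const own = [...el.childNodes]
      .filter(n => n.nodeType === 3)          // direct text nodes only
      .map(n => n.textContent.trim())
      .filter(Boolean)
      .join(' ');
    if (own) out.push({ path, text: own });
    [...el.children].forEach((c, i) => walkNodes(c, `${path}>${c.tagName.toLowerCase()}[${i}]`));
  };
  walkNodes(root, root.tagName.toLowerCase());
  return out;
}
```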
For each section (or sub-component, if you're breaking it up), create a spec file in docs/research/components/. This is NOT optional — every builder must have a corresponding spec file.
File path: docs/research/components/<component-name>.spec.md
Template:
# <ComponentName> Specification
## Overview
- **Target file:** `src/components/<ComponentName>.tsx`
- **Screenshot:** `docs/design-references/<screenshot-name>.png`
- **Interaction model:** <static | click-driven | scroll-driven | time-driven>
## DOM Structure
<Describe the element hierarchy — what contains what>
## Computed Styles (exact values from getComputedStyle)
### Container
- display: ...
- padding: ...
- maxWidth: ...
- (every relevant property with exact values)
### <Child element 1>
- fontSize: ...
- color: ...
- (every relevant property)
### <Child element N>
...
## States & Behaviors
### <Behavior name, e.g., "Scroll-triggered floating mode">
- **Trigger:** <exact mechanism — scroll position 50px, IntersectionObserver rootMargin "-30% 0px", click on .tab-button, hover>
- **State A (before):** maxWidth: 100vw, boxShadow: none, borderRadius: 0
- **State B (after):** maxWidth: 1200px, boxShadow: 0 4px 20px rgba(0,0,0,0.1), borderRadius: 16px
- **Transition:** transition: all 0.3s ease
- **Implementation approach:** <CSS transition + scroll listener | IntersectionObserver | CSS animation-timeline | etc.>
### Hover states
- **<Element>:** <property>: <before> -> <after>, transition: <value>
## Per-State Content (if applicable)
### State: "Featured"
- Title: "..."
- Subtitle: "..."
- Cards: [{ title, description, image, link }, ...]
### State: "Productivity"
- Title: "..."
- Cards: [...]
## Assets
- Background image: `public/images/<file>.webp`
- Overlay image: `public/images/<file>.png`
- Icons used: <ArrowIcon>, <SearchIcon> from icons.tsx
## Text Content (verbatim)
<All text content, copy-pasted from the live site>
## Responsive Behavior
- **Desktop (1440px):** <layout description>
- **Tablet (768px):** <what changes — e.g., "maintains 2-column, gap reduces to 16px">
- **Mobile (390px):** <what changes — e.g., "stacks to single column, images full-width">
- **Breakpoint:** layout switches at ~<N>px
Fill every section. If a section doesn't apply (e.g., no states for a static footer), write "N/A" — but think twice before marking States & Behaviors as N/A. Even a footer might have hover states on links.
Based on complexity, dispatch builder agent(s) in worktree(s):
Simple section (1-2 sub-components): One builder agent gets the entire section.
Complex section (3+ distinct sub-components): Break it up. One agent per sub-component, plus one agent for the section wrapper that imports them. Sub-component builders go first since the wrapper depends on them.
What every builder agent receives:
- The full spec file contents, inlined in the prompt
- The section screenshot path in `docs/design-references/`
- The shared foundation available to it (`icons.tsx`, `cn()`, shadcn primitives)
- The exact target file path (e.g., `src/components/HeroSection.tsx`)
- The requirement to verify `npx tsc --noEmit` before finishing

Don't wait. As soon as you've dispatched the builder(s) for one section, move to extracting the next section. Builders work in parallel in their worktrees while you continue extraction.
As builder agents complete their work:
- Merge each worktree and verify `npm run build` passes

The extract -> spec -> dispatch -> merge cycle continues until all sections are built.
After all sections are built and merged, wire everything together in src/app/page.tsx:
- Verify `npm run build` passes clean

After assembly, do NOT declare the clone complete. Take side-by-side comparison screenshots:
Only after this visual QA pass is the clone complete.
Before dispatching ANY builder agent, verify you can check every box. If you can't, go back and extract more.
- A spec file exists at `docs/research/components/<name>.spec.md` with ALL sections filled
- Every CSS value in the spec came from `getComputedStyle()`, not estimated

These are lessons from previous failed clones — each one cost hours of rework:
- Layered images: a section that looks like one image is often several. Enumerate ALL `<img>` elements and positioned overlays.
- Check whether a section is actually a `<video>`, Lottie, or canvas before building elaborate HTML mockups of what the video shows.
- "It's roughly text-lg" is wrong if the computed value is 18px and text-lg is 18px/28px but the actual line-height is 24px. Extract exact values.
- Check for smooth scroll libraries: Lenis (a `.lenis` class), Locomotive Scroll, or similar. Default browser scrolling feels noticeably different and the user will spot it immediately.

When done, report:
- Build status (final `npm run build` result)