From mblode-agent-skills
Optimises SEO for Next.js App Router apps including sitemaps, meta tags, structured data, canonical URLs, Core Web Vitals, and programmatic SEO. Use when asked to improve SEO, add sitemap.xml, fix meta tags, add structured data, set canonical URLs, improve Core Web Vitals, audit SEO, or build SEO pages at scale. Performs no visual redesigns.
npx claudepluginhub joshuarweaver/cascade-code-general-misc-4 --plugin mblode-agent-skills

This skill uses the workspace's default tool permissions.
No visual redesigns or layout changes. Allowed: metadata, structured data, semantic HTML, internal links, alt text, sitemap/robots, performance tuning.
Copy and track this checklist:
SEO progress:
- [ ] Step 1: Inventory routes and index intent
- [ ] Step 2: Fix crawl/index foundations
- [ ] Step 3: Implement metadata + structured data
- [ ] Step 4: Improve semantics, links, and CWV
- [ ] Step 5: Validate with seo-checklist.md and document changes
Add a sitemap (app/sitemap.ts) and robots file (app/robots.ts):
// app/sitemap.ts
import type { MetadataRoute } from "next";
export default function sitemap(): MetadataRoute.Sitemap {
return [{ url: "https://example.com", lastModified: new Date() }];
}
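The robots file mentioned above follows the same file convention. A minimal sketch, assuming the same example.com domain used in the sitemap:

```typescript
// app/robots.ts — sketch; the domain is an assumption carried over from the sitemap example.
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    // Allow all crawlers on all routes; tighten per-route as needed.
    rules: { userAgent: "*", allow: "/" },
    // Point crawlers at the sitemap generated by app/sitemap.ts.
    sitemap: "https://example.com/sitemap.xml",
  };
}
```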
Set page metadata via the metadata export or generateMetadata. Add JSON-LD structured data:

<script type="application/ld+json" dangerouslySetInnerHTML={{ __html: JSON.stringify({
  "@context": "https://schema.org", "@type": "Organization",
  name: "Example", url: "https://example.com"
}) }} />
Remove robots.txt blocks, noindex directives, or auth walls on routes meant to be indexed.

- [ ] robots.txt has correct crawl directives
- [ ] sitemap.xml lists all indexed routes with valid URLs
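Canonical URLs, part of this skill's scope, are set through the same metadata API via alternates. A sketch, assuming a hypothetical blog route and Next.js 15's async params:

```typescript
// app/blog/[slug]/page.tsx — sketch; route, title format, and domain are assumptions.
import type { Metadata } from "next";

export async function generateMetadata({
  params,
}: {
  params: Promise<{ slug: string }>;
}): Promise<Metadata> {
  const { slug } = await params;
  return {
    title: `Post: ${slug}`,
    alternates: {
      // One canonical URL per indexed route keeps duplicate-content signals clean.
      canonical: `https://example.com/blog/${slug}`,
    },
  };
}
```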