```
npx claudepluginhub kasempiternal/claude-agent-system --plugin cas
```

This skill uses the workspace's default tool permissions.
```
██╗      ██████╗  ██████╗ 
██║      ╚════██╗██╔═████╗
██║       █████╔╝██║██╔██║
██║       ╚═══██╗████╔╝██║
███████╗ ██████╔╝╚██████╔╝
╚══════╝ ╚═════╝  ╚═════╝ 

Topic Research • Last 30 Days
CAS v7.20.0
```
MANDATORY: Output the banner above verbatim as your very first message to the user, before any tool calls or other output.
You are entering L30 RESEARCH MODE. You deploy a parallel agent swarm that scrapes 5 free sources (Reddit, Hacker News, DuckDuckGo, Lobsters, GitHub) using Scrapling, scores and ranks the results, then generates a self-contained HTML dashboard.
$ARGUMENTS

Use Glob("**/skills/l30/templates/dashboard.html") to find the dashboard template. Extract the parent directory path (everything before /templates/). Store as L30_SKILL_DIR.
Set:
VENV = /Users/izotz.cristobal/Multiverse/Izotz/l30/.venv/bin/python
Run Bash("test -f /Users/izotz.cristobal/Multiverse/Izotz/l30/.venv/bin/python && echo OK").
If the check does not print OK, tell the user:

l30 Python environment not found. Install it:

```
cd ~/Multiverse/Izotz/l30
python3 -m venv .venv
.venv/bin/pip install -e .
```

Do NOT proceed.

Extract the research topic from $ARGUMENTS. If $ARGUMENTS is empty or missing, use AskUserQuestion to ask: "What topic would you like to research from the last 30 days?" Store the answer as QUERY.

QUERY_SLUG = lowercase QUERY, spaces → underscores, remove non-alphanumeric except -_, truncate to 50 chars
DATE_PREFIX = YYYYMMDD_HHMMSS (current time)
RUN_DIR = /tmp/l30-${QUERY_SLUG}-$(date +%s)
OUTPUT_DIR = ~/Documents/l30/dashboards
OUTPUT_FILE = ${OUTPUT_DIR}/${DATE_PREFIX}_${QUERY_SLUG}.html
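The QUERY_SLUG derivation above can be sketched in Python (a minimal illustration; the `slugify` helper name is ours, not part of l30):

```python
import re

def slugify(query: str) -> str:
    """Derive QUERY_SLUG: lowercase, spaces to underscores,
    drop anything but alphanumerics, '-' and '_', cap at 50 chars."""
    s = query.lower().replace(' ', '_')
    s = re.sub(r'[^a-z0-9_-]', '', s)
    return s[:50]

print(slugify("Rust async runtimes!"))  # → rust_async_runtimes
```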
Run Bash("mkdir -p ${RUN_DIR} ${OUTPUT_DIR}").
Display: Researching: "${QUERY}" across 5 sources...
Use TeamCreate with:
- team_name: "l30-${QUERY_SLUG}"
- description: "L30 research swarm for: ${QUERY}"

Use TaskCreate for each task. Store the returned task IDs.
| # | Subject | activeForm |
|---|---|---|
| 1 | Reddit scraping for "${QUERY}" | Scraping Reddit |
| 2 | HN scraping for "${QUERY}" | Scraping Hacker News |
| 3 | DDG scraping for "${QUERY}" | Scraping DuckDuckGo |
| 4 | Lobsters scraping for "${QUERY}" | Scraping Lobsters |
| 5 | GitHub scraping for "${QUERY}" | Scraping GitHub |
| 6 | Intelligence analysis & ranking | Analyzing and ranking results |
| 7 | Dashboard compilation | Building HTML dashboard |
Use TaskUpdate with addBlockedBy:
- Task 6: addBlockedBy: [task1_id, task2_id, task3_id, task4_id, task5_id]
- Task 7: addBlockedBy: [task6_id]

Use TaskUpdate with owner:
- Task 1: owner: "reddit-scraper"
- Task 2: owner: "hn-scraper"
- Task 3: owner: "ddg-scraper"
- Task 4: owner: "lobsters-scraper"
- Task 5: owner: "github-scraper"

Prepend this to EVERY teammate's prompt:
You are {TEAMMATE_NAME} on team `l30-{QUERY_SLUG}`.

Team Protocol — follow these steps exactly:
- Run `TaskList` to find your assigned task (your name appears in the `owner` field)
- Run `TaskGet` with your task ID to confirm your assignment
- Set your task status to `in_progress` via `TaskUpdate`
- Complete the work described below
- Set your task status to `completed` via `TaskUpdate`
- Send a brief summary to the team lead via `SendMessage` (type: "message", recipient: "lead", content: your summary, summary: "Completed [task subject]")

If you encounter issues, message "lead" before proceeding.
Your assignment follows below.
Spawn all 5 teammates IN PARALLEL via Agent with team_name: "l30-{QUERY_SLUG}". Each uses subagent_type: "general-purpose" and model: "sonnet".
All scraper agents run the same pattern: a single Bash command that invokes the l30 Python scraper with Scrapling, then writes JSON results to RUN_DIR.
Each agent's brief follows this pattern (replace {SOURCE_MODULE}, {SOURCE_CLASS}, {SOURCE_NAME}):
Run this exact Bash command (timeout 90s):
```
{VENV} -c "
import asyncio, json
from l30.sources.{SOURCE_MODULE} import {SOURCE_CLASS}

source = {SOURCE_CLASS}()
results = asyncio.run(source.search(query='{QUERY}', days=30, max_results=25))
data = [r.model_dump(mode='json') for r in results]
with open('{RUN_DIR}/{SOURCE_NAME}.json', 'w') as f:
    json.dump(data, f, default=str)
print(json.dumps({'source': '{SOURCE_NAME}', 'count': len(data)}))
"
```
After the command completes:
- If successful: Report the result count
- If error: Report the error message, write an empty array to {RUN_DIR}/{SOURCE_NAME}.json
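The error fallback amounts to writing an empty JSON array so the ranking step sees an empty source rather than a missing file. A minimal sketch (the paths here are hypothetical stand-ins for {RUN_DIR} and {SOURCE_NAME}):

```python
import json, os

run_dir = "/tmp/l30-demo"      # hypothetical RUN_DIR
source_name = "reddit"         # hypothetical SOURCE_NAME

os.makedirs(run_dir, exist_ok=True)
# On scraper failure, persist an empty result set so downstream
# analysis records the source as empty instead of crashing.
with open(os.path.join(run_dir, f"{source_name}.json"), "w") as f:
    json.dump([], f)
```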
| name | SOURCE_MODULE | SOURCE_CLASS | SOURCE_NAME |
|---|---|---|---|
| "reddit-scraper" | reddit | RedditSource | reddit |
| "hn-scraper" | hackernews | HackerNewsSource | hackernews |
| "ddg-scraper" | duckduckgo | DuckDuckGoSource | duckduckgo |
| "lobsters-scraper" | lobsters | LobstersSource | lobsters |
| "github-scraper" | github | GitHubSource | github |

→ Wait for all 5 teammates to send completion messages.
→ After all 5 complete, send shutdown_request (via SendMessage, type: "shutdown_request") to each Wave 1 teammate.
→ If any teammate hangs for 3+ minutes with no message, consider it failed and proceed.
Pre-assign: TaskUpdate(taskId: task6_id, owner: "intelligence-lead")
Spawn: name: "intelligence-lead" | model: "sonnet" | subagent_type: "general-purpose"
Brief: Preamble + the following:
You are the intelligence analyst. Read all source result files and run the scoring/ranking pipeline.
Run this Bash command (timeout 60s):
```
{VENV} -c "
import json, os
from datetime import datetime, timezone
from l30.models import SearchResult, ResearchReport, SourceStatus
from l30.scoring import full_pipeline

all_results = []
statuses = []
run_dir = '{RUN_DIR}'
for source_name in ['reddit', 'hackernews', 'duckduckgo', 'lobsters', 'github']:
    fpath = os.path.join(run_dir, f'{source_name}.json')
    count = 0
    error = ''
    try:
        with open(fpath) as f:
            items = json.load(f)
        results = [SearchResult(**item) for item in items]
        all_results.extend(results)
        count = len(results)
    except FileNotFoundError:
        error = 'Source file not found'
    except Exception as e:
        error = str(e)[:200]
    statuses.append(SourceStatus(
        name=source_name,
        status='done' if count > 0 else ('error' if error else 'done'),
        result_count=count,
        error=error,
    ).model_dump(mode='json'))

total_raw = len(all_results)
ranked = full_pipeline(all_results, '{QUERY}')
report = ResearchReport(
    query='{QUERY}',
    days=30,
    sources_used=[s['name'] for s in statuses if s['status'] == 'done' and s['result_count'] > 0],
    sources_failed=[s['name'] for s in statuses if s['status'] == 'error'],
    results=ranked,
    total_raw=total_raw,
    total_final=len(ranked),
    searched_at=datetime.now(timezone.utc),
).model_dump(mode='json')

output = json.dumps({'report': report, 'statuses': statuses}, default=str)
with open(os.path.join(run_dir, 'ranked.json'), 'w') as f:
    f.write(output)
print(f'Ranked: {len(ranked)} results from {total_raw} raw')
"
```
After the command completes, report the results count.
→ Wait for completion. Send shutdown_request.
Pre-assign: TaskUpdate(taskId: task7_id, owner: "report-compiler")
Spawn: name: "report-compiler" | model: "sonnet" | subagent_type: "general-purpose"
Brief: Preamble + the following:
You are the report compiler. Generate the HTML dashboard from the ranked results.
Steps:
1. Read the ranked data: Read the file at {RUN_DIR}/ranked.json
2. Read the dashboard template: Read the file at {L30_SKILL_DIR}/templates/dashboard.html
3. In the template, find the placeholder: /* REPORT_DATA_PLACEHOLDER */
4. Replace that placeholder with the FULL JSON content from ranked.json
- The result should be: const REPORT_DATA = {"report": {...}, "statuses": [...]};
5. Write the final HTML to: {OUTPUT_FILE}
6. Open the dashboard: Run Bash("open {OUTPUT_FILE}")
7. Report the file path and result count
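Steps 3-4 above amount to a single string substitution. A minimal sketch, using an inline stand-in for the template's script section (the real template is read from {L30_SKILL_DIR}/templates/dashboard.html):

```python
# Stand-in for the template's script section; the real file is read from disk.
template = '<script>const REPORT_DATA = /* REPORT_DATA_PLACEHOLDER */;</script>'
ranked_json = '{"report": {"query": "example"}, "statuses": []}'

# Replace the placeholder with the full ranked JSON, yielding
# a valid `const REPORT_DATA = {...};` assignment in the HTML.
html = template.replace('/* REPORT_DATA_PLACEHOLDER */', ranked_json)
```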
→ Wait for completion. Send shutdown_request.
After Wave 3 completes:
Read {RUN_DIR}/ranked.json to extract summary stats, then display:

```
L30 RESEARCH COMPLETE
Query: "{QUERY}"
Period: Last 30 days
Results: {total_raw} raw -> {total_final} after dedup
Sources:
  {for each status: ✓ or ✗} {name}: {result_count} results
Dashboard: {OUTPUT_FILE}
```
Cleanup and notes:
- Run TeamDelete to clean up the team.
- {RUN_DIR} is left in place — the user can inspect or delete it.
- If dashboard compilation fails, point the user at {RUN_DIR}/ranked.json and show the template path for manual compilation.
- If a teammate ignores a shutdown_request, proceed without its output.
- Always run TeamDelete at the end, even if some waves failed.
- Scrapers use scrapling.fetchers.Fetcher with Chrome impersonation.
- All teammates use subagent_type: "general-purpose" — required for team coordination tools.