```
npx claudepluginhub arogyareddy/alirezarezvani-claude-skills --plugin agenthub
```

This skill uses the workspace's default tool permissions.
Initialize an AgentHub collaboration session. Creates the .agenthub/ directory structure, generates a session ID, and configures evaluation criteria.
```
/hub:init                                       # Interactive mode
/hub:init --task "Optimize API" --agents 3 --eval "pytest bench.py" --metric p50_ms --direction lower
/hub:init --task "Refactor auth" --agents 2     # No eval (LLM judge mode)
```
Pass them to the init script:

```
python {skill_path}/scripts/hub_init.py \
  --task "{task}" --agents {N} \
  [--eval "{eval_cmd}"] [--metric {metric}] [--direction {direction}] \
  [--base-branch {branch}]
```
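As a rough sketch of what `scripts/hub_init.py` does with these arguments (the directory layout, session-ID format, and `state: init` field come from this document; the function body and field names in the YAML are assumptions, and the real script's internals are not shown here):

```python
import datetime
import pathlib

def hub_init(task, agents, eval_cmd=None, metric=None,
             direction=None, base_branch="main", root=".agenthub"):
    # Session IDs look like 20260317-143022 (date-time, per the examples below).
    session_id = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    session_dir = pathlib.Path(root) / "sessions" / session_id
    session_dir.mkdir(parents=True, exist_ok=True)
    # Minimal hand-rolled YAML so this sketch has no third-party dependencies.
    lines = [
        f"task: {task}",
        f"agents: {agents}",
        f"eval: {eval_cmd or 'null'}",        # null => LLM judge mode
        f"metric: {metric or 'null'}",
        f"direction: {direction or 'null'}",
        f"base_branch: {base_branch}",
        "state: init",
    ]
    (session_dir / "config.yaml").write_text("\n".join(lines) + "\n")
    return session_id
```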
Collect any missing parameters from the user before running the script. On success, report the summary:
```
AgentHub session initialized
Session ID: 20260317-143022
Task: Optimize API response time below 100ms
Agents: 3
Eval: pytest bench.py --json
Metric: p50_ms (lower is better)
Base branch: dev
State: init
Next step: Run /hub:spawn to launch 3 agents
```
For content or research tasks (no eval command → LLM judge mode):
```
AgentHub session initialized
Session ID: 20260317-151200
Task: Draft 3 competing taglines for product launch
Agents: 3
Eval: LLM judge (no eval command)
Base branch: dev
State: init
Next step: Run /hub:spawn to launch 3 agents
```
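The two summaries share a template and differ only in the eval-related lines. A hypothetical helper that assembles it (field order and wording are taken from the examples; the function itself is not part of the skill):

```python
def format_summary(session_id, task, agents, eval_cmd=None, metric=None,
                   direction=None, base_branch="main"):
    """Build the init summary shown to the user after hub_init runs."""
    lines = [
        "AgentHub session initialized",
        f"Session ID: {session_id}",
        f"Task: {task}",
        f"Agents: {agents}",
    ]
    if eval_cmd:
        lines.append(f"Eval: {eval_cmd}")
        lines.append(f"Metric: {metric} ({direction} is better)")
    else:
        # No eval command: ranking falls back to LLM judge mode.
        lines.append("Eval: LLM judge (no eval command)")
    lines += [
        f"Base branch: {base_branch}",
        "State: init",
        f"Next step: Run /hub:spawn to launch {agents} agents",
    ]
    return "\n".join(lines)
```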
If `--eval` was provided, capture a baseline measurement after session creation: run the eval command, write `baseline: {value}` to `.agenthub/sessions/{session-id}/config.yaml`, and report `Baseline captured: {metric} = {value}`. This baseline is used by `result_ranker.py --baseline` during evaluation to show deltas. If the eval command fails at this stage, warn the user but continue; the baseline is optional.
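A minimal sketch of this baseline step, assuming the eval command prints a JSON object containing the metric (only `result_ranker.py --baseline`, the config path, and the user-facing messages come from this document; the subprocess handling and JSON parsing are assumptions):

```python
import json
import pathlib
import subprocess

def capture_baseline(eval_cmd, metric, config_path):
    """Run the eval command and append `baseline: {value}` to config.yaml."""
    try:
        out = subprocess.run(eval_cmd, shell=True, capture_output=True,
                             text=True, check=True).stdout
        # Assumption: the eval command emits JSON, e.g. `pytest bench.py --json`.
        value = json.loads(out)[metric]
    except Exception as exc:
        # Failure here is non-fatal: warn the user and continue without a baseline.
        print(f"Warning: baseline eval failed ({exc}); continuing without baseline")
        return None
    cfg = pathlib.Path(config_path)
    cfg.write_text(cfg.read_text() + f"baseline: {value}\n")
    print(f"Baseline captured: {metric} = {value}")
    return value
```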
Tell the user:
- Session ID: `{session-id}`
- Run `/hub:spawn` to launch agents, or `/hub:spawn {session-id}` if multiple sessions exist