From agent-almanac
Executes test scenarios against teams to observe coordination patterns, evaluate acceptance criteria, score rubrics, and generate RESULT.md for validation, comparison, and baselines.
`npx claudepluginhub pjt222/agent-almanac`

This skill uses the workspace's default tool permissions.
---
Spawns and coordinates pre-composed agent teams from teams/ definition files. Resolves agents/skills, verifies entry criteria, preloads skills, and runs peer or sequential workflows for multi-phase tasks.
Creates agent-almanac team composition files defining purpose, members, coordination patterns, task decomposition, and registry integration. Use for multi-agent workflows, complex reviews, or recurring collaborations.
Use when creating or editing commands, orchestrator prompts, or workflow documentation before deployment; applies RED-GREEN-REFACTOR to test instruction clarity by finding real execution failures, creating test scenarios, and verifying fixes with subagents.
Execute a test scenario from tests/scenarios/teams/ against the target
team. Observe coordination pattern behaviors, evaluate acceptance criteria,
score the rubric, and produce a RESULT.md in tests/results/.
**Inputs**:
- Scenario file (e.g. `tests/scenarios/teams/test-opaque-team-cartographers-audit.md`)
- Result directory (`YYYY-MM-DD-<target>-NNN`, auto-generated)

1.1. Read the test scenario file specified in the input.
1.2. Parse YAML frontmatter and extract:
- `target` — the team to test
- `coordination-pattern` — the expected pattern
- `team-size` — number of members to spawn

1.3. Verify the scenario file has all required sections:
Expected: Scenario file loads, parses, and contains all required sections.
On failure: If the file is missing or unparseable, abort with an error message identifying the missing file or malformed section. If optional sections (Rubric, Ground Truth, Variants) are absent, note their absence and continue.
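The frontmatter parse in steps 1.1–1.3 can be sketched as follows. This is a minimal, hypothetical helper (the skill itself does not prescribe an implementation), handling only flat `key: value` pairs; a real scenario file may warrant a full YAML parser.

```python
import re

REQUIRED_KEYS = {"target", "coordination-pattern", "team-size"}  # from step 1.2

def parse_frontmatter(text: str) -> dict:
    """Extract the YAML frontmatter block between leading '---' fences
    and check that the required keys are present (abort otherwise)."""
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if match is None:
        raise ValueError("scenario file has no frontmatter block")
    meta = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    missing = REQUIRED_KEYS - meta.keys()
    if missing:
        raise ValueError(f"frontmatter missing keys: {sorted(missing)}")
    return meta
```

A missing file or malformed frontmatter raises, matching the hard-abort behavior above; absent optional sections would be noted separately.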
2.1. Walk through each pre-condition checkbox in the scenario.
2.2. For file-existence checks, use Glob to verify.
2.3. For registry count checks, parse the relevant _registry.yml and compare total_* against actual file counts on disk.
2.4. For branch/git state checks, run git status --porcelain and git branch --show-current.
Expected: All pre-conditions are satisfied.
On failure: If any pre-condition fails, record it as BLOCKED in the results. Decide whether to proceed (soft pre-condition) or abort (hard pre-condition like missing target team file). Document the decision.
3.1. Read tests/_registry.yml and locate the coordination_patterns entry matching the scenario's coordination-pattern value.
3.2. Extract the key_behaviors list for this pattern.
3.3. These behaviors become the observation checklist — each must be watched for during execution and recorded as observed/not observed.
Expected: Pattern key behaviors loaded and ready for observation.
On failure: If the coordination pattern is not defined in the registry, use the scenario's Expected Behaviors section as the sole observation source. Log a warning.
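Step 3's lookup, including its fallback path, can be sketched like this. The function name is hypothetical; the `coordination_patterns` and `key_behaviors` keys come from steps 3.1–3.2, and the registry is assumed to be already parsed.

```python
def load_observation_checklist(registry: dict, pattern: str,
                               scenario_behaviors: list[str]) -> tuple[list[str], bool]:
    """Return the behaviors to watch for during execution, plus a flag
    indicating whether we fell back to the scenario's Expected Behaviors
    section because the pattern was missing from the registry."""
    entry = registry.get("coordination_patterns", {}).get(pattern)
    if entry and entry.get("key_behaviors"):
        return list(entry["key_behaviors"]), False
    # Pattern not defined in the registry: use scenario behaviors alone
    # and let the caller log a warning.
    return list(scenario_behaviors), True
```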
4.1. Create the result directory: tests/results/YYYY-MM-DD-<target>-NNN/.
4.2. Record T0 (task start timestamp).
4.3. Read the target team's definition from teams/<target>.md, extract the CONFIG block, and activate the team: call TeamCreate with the team name, spawn teammates using each member's subagent_type, and create tasks from the CONFIG tasks list. Use the team-size from the scenario. Pass the Primary Task prompt verbatim from the scenario's Task section.
4.4. Observe the team's execution phases. Record timestamps for:
4.5. If the scenario defines a Scope Change Trigger and skip-scope-change is false:
4.6. Continue observing until the team delivers its output.
4.7. Capture the team's complete output.
Expected: Team executes the task through its coordination pattern phases. Timestamps are recorded for all transitions. Scope change (if applicable) is injected and absorbed.
On failure: If the team fails to produce output, record the failure point and any error messages. If the team stalls, note the last observed phase and timeout. Proceed to evaluation with partial results.
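The timestamp bookkeeping in steps 4.2 and 4.4 could be kept in a small log like the one below; the class and its names are illustrative, not part of the skill.

```python
import time

class PhaseLog:
    """Record T0 and each phase-transition timestamp during team
    execution, measured as seconds since task start."""
    def __init__(self):
        self.t0 = time.monotonic()
        self.transitions = []  # (phase_name, seconds_since_t0)

    def mark(self, phase: str):
        self.transitions.append((phase, time.monotonic() - self.t0))

    def last_phase(self):
        """For the stall case above: report the last observed phase."""
        return self.transitions[-1][0] if self.transitions else None
```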
5.1. For each key behavior from Step 3, determine whether it was observed during execution:
5.2. For each task-specific behavior from the scenario's Expected Behaviors section, apply the same evaluation.
5.3. Record findings in the observation log.
Expected: All or most pattern-specific and task-specific behaviors are observed.
On failure: Unobserved behaviors are findings, not failures of the test procedure. Record them accurately — they indicate the coordination pattern did not fully manifest.
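Step 5 reduces to marking each checklist entry observed or not; a minimal sketch (hypothetical names):

```python
def evaluate_behaviors(checklist: list[str], observed: set[str]) -> dict[str, bool]:
    """Mark each expected behavior as observed/not observed. Unobserved
    entries are findings about the coordination pattern, not test errors."""
    return {behavior: behavior in observed for behavior in checklist}
```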
6.1. Walk through each acceptance criterion from the scenario.
6.2. For each criterion, assign a determination:
6.3. If the scenario includes Ground Truth data, verify reported findings against it:
6.4. If the scenario includes a Scoring Rubric, score each dimension 1-5 with brief justification.
6.5. Calculate summary metrics:
Expected: All acceptance criteria have a determination. Summary metrics are calculated.
On failure: If fewer than half the criteria can be evaluated (too many BLOCKED), the test run is inconclusive. Document why and recommend re-running after fixing pre-conditions.
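The summary metrics and verdict from steps 6.2–6.5 might be tallied as below. The determination labels (PASS/FAIL/BLOCKED) and the "fewer than half evaluable means inconclusive" rule come from the text above; the exact verdict logic is a sketch, not a specification.

```python
def summarize(criteria: dict) -> dict:
    """Tally acceptance-criterion determinations and derive a verdict:
    INCONCLUSIVE if most criteria are BLOCKED, PASS only if every
    evaluable criterion passed, otherwise FAIL."""
    total = len(criteria)
    passed = sum(1 for v in criteria.values() if v == "PASS")
    blocked = sum(1 for v in criteria.values() if v == "BLOCKED")
    if total and blocked > total / 2:
        verdict = "INCONCLUSIVE"
    elif passed == total - blocked and total > blocked:
        verdict = "PASS"
    else:
        verdict = "FAIL"
    return {"total": total, "passed": passed, "blocked": blocked, "verdict": verdict}
```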
7.1. Create tests/results/YYYY-MM-DD-<target>-NNN/RESULT.md using the Recording Template from the scenario's Observation Protocol.
7.2. Populate all sections:
7.3. Include the team's raw output as an appendix or in a separate file (team-output.md) in the same result directory.
7.4. Add a summary verdict at the top:
```
**Verdict**: PASS | FAIL | INCONCLUSIVE
**Score**: X/N criteria (Y/Z rubric points)
**Duration**: Xm
```
Expected: Complete RESULT.md with all sections populated and a clear verdict.
On failure: If result file cannot be written, output the results to stdout as a fallback. The evaluation data should never be lost.
- `review-codebase` — deep codebase review that complements team-level testing
- `review-skill-format` — validates individual skill format (this skill validates team coordination)
- `create-team` — creates team definitions that this skill tests
- `evolve-team` — evolves team definitions based on test findings
- `test-a2a-interop` — similar testing pattern for A2A protocol conformance
- `assess-form` — the morphic assessment that the opaque team lead uses internally