Assess a project's verification infrastructure and assign an autonomy readiness level (L0-L2). Use when the user asks 'is this project ready for autonomous work', 'check autonomy level', 'what verification tools are set up', 'assess project readiness', or when starting work on a new project to understand its quality baseline. Pairs with /autonomy-scaffold to fill gaps.
From famdeck. Install: `npx claudepluginhub ivintik/private-claude-marketplace --plugin famdeck`. This skill uses the workspace's default tool permissions.
Autonomous coding agents need guardrails — tests, linters, CI — to catch mistakes without human review. This skill evaluates what a project has and what it's missing, so you know how much trust to place in autonomous work.
Run the assessment against the current project:

```bash
python -c "
from famdeck.autonomy.report import assess_project
report = assess_project('$PWD')
print(report.summary())
"
```
The tool's output is a starting point, not the final answer. Always verify key findings manually — in particular, check that CI pipelines actually run tests and linting (not just deployments or publishing), and that coverage has a fail_under threshold.
If famdeck is not importable, fall back to manual inspection: check for test configs (pytest.ini, vitest.config.*, jest.config.*), linter configs (.eslintrc*, ruff.toml, .flake8), CI files (.github/workflows/, .gitlab-ci.yml), and coverage settings. Assign levels based on the criteria below.
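A minimal sketch of that fallback, assuming presence of the config files listed above is a reliable signal (the level logic here is a simplification of the L0-L2 criteria below — an L2 verdict still needs a manual check of coverage thresholds and CI behavior):

```python
from pathlib import Path

# Config files that signal each capability (from the list above).
TEST_CONFIGS = ["pytest.ini", "vitest.config.ts", "vitest.config.js", "jest.config.js"]
LINTER_CONFIGS = [".eslintrc.json", ".eslintrc.js", "ruff.toml", ".flake8"]
CI_PATHS = [".github/workflows", ".gitlab-ci.yml"]

def inspect_project(root: str) -> dict:
    """Detect verification tooling by the presence of known config files."""
    base = Path(root)

    def any_present(names):
        return any((base / n).exists() for n in names)

    found = {
        "tests": any_present(TEST_CONFIGS),
        "linter": any_present(LINTER_CONFIGS),
        "ci": any_present(CI_PATHS),
        "e2e": any((base / d).is_dir() for d in ["tests/e2e", "e2e"]),
    }
    # Simplified level logic; L2 additionally needs coverage thresholds
    # and CI that actually runs tests — verify those manually.
    if not (found["tests"] and found["linter"]):
        found["level"] = "L0"
    elif found["ci"] and found["e2e"]:
        found["level"] = "L2 (pending manual CI/coverage check)"
    else:
        found["level"] = "L1"
    return found
```

The exact file names checked (e.g. `vitest.config.ts` vs. `vitest.config.mts`) are illustrative; extend the lists to match the project's ecosystem.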
| Level | What's present | What it means |
|---|---|---|
| L0 | No tests or linter | Cannot work autonomously — every change needs human review |
| L1 | Tests + linter | Basic autonomous work possible, but no safety net for regressions |
| L2 | L1 + CI (that runs tests/lint) + coverage thresholds + E2E validation | Full autonomous work with quality gates catching issues automatically |
Note on CI: a workflow that only publishes or deploys does not count. L2 requires CI that gates merges by running tests, linting, and type checking.
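One way to apply this check without reading every workflow by hand is a crude textual scan. This is a heuristic sketch — the keyword list is an assumption, not the tool's actual logic, and a manual read of the workflow is still the authoritative check:

```python
import re
from pathlib import Path

# Commands that suggest a workflow gates on tests, lint, or types.
# Illustrative, not exhaustive.
GATE_PATTERNS = [
    r"\bpytest\b", r"\bvitest\b", r"\bjest\b", r"npm (run )?test",
    r"\beslint\b", r"\bruff\b", r"\bflake8\b", r"\bmypy\b", r"\btsc\b",
]

def workflow_gates_quality(workflow_text: str) -> bool:
    """True if the workflow appears to run tests or linting,
    rather than only publish/deploy steps."""
    return any(re.search(p, workflow_text) for p in GATE_PATTERNS)

def ci_counts_for_l2(workflows_dir: str) -> bool:
    """Scan every workflow under .github/workflows; at least one
    must run tests/lint for CI to count toward L2."""
    files = Path(workflows_dir).glob("*.y*ml")
    return any(workflow_gates_quality(f.read_text()) for f in files)
```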
Beyond unit tests and linters, assess whether the project has end-to-end tests that verify the product works from the user's perspective. This is a separate criterion from having a test runner:
Look for dedicated E2E test directories (tests/e2e/, e2e/, or similar). A project can be L2 on infrastructure but still risky for autonomous work if its tests only cover internal functions and never exercise the product as a user would.
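A quick directory check for this criterion can look like the following sketch (directory names beyond tests/e2e/ and e2e/ are assumed conventions):

```python
from pathlib import Path

# Common E2E test locations; "cypress" and "playwright" are assumed
# conventions beyond the directories named above.
E2E_DIRS = ["tests/e2e", "e2e", "cypress", "playwright"]

def has_e2e_tests(root: str) -> bool:
    """True if the project has a recognizable, non-empty E2E test directory."""
    for d in E2E_DIRS:
        p = Path(root) / d
        if p.is_dir() and any(p.rglob("*")):
            return True
    return False
```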
Present the level, detected tools, and gaps. Be specific about what's missing and why it matters:
```
Project: /Users/dev/my-app
Languages: TypeScript
Autonomy Level: L1

Detected tools:
  test_runner: vitest
  linter: eslint

Gaps:
  - No CI pipeline that runs tests — required for L2
  - No coverage thresholds — required for L2
  - No E2E tests — cannot verify product works end-to-end

Suggested:
  ci_pipeline: GitHub Actions (with test + lint steps)
  coverage: vitest --coverage with fail_under threshold
  e2e: Consider adding user-journey tests
```
If below L2, recommend specific next steps — usually /autonomy-scaffold to generate missing configs.