From idea-to-code
Monitors running CI builds on GitHub Actions and CircleCI via polling, reports completion status, and diagnoses failures by fetching logs, job summaries, and artifacts.
npx claudepluginhub humansintheloop-dev/humansintheloop-dev-workflow-and-tools --plugin idea-to-code

This skill uses the workspace's default tool permissions.
This skill covers two scenarios: watching a CI build to completion after a push, and diagnosing failures when a build fails. Start with Phase 0 to monitor the build. If it succeeds, you're done. If it fails, continue to Phase 1 to gather evidence and Phase 2 to analyze and fix.
When asked to watch a CI build after a push, actively poll until the build completes.
Check which CI system the project uses:
.github/workflows/*.yml → GitHub Actions
.circleci/config.yml → CircleCI

For CircleCI, extract the org and repo from git remote get-url origin to construct API URLs.
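For the CircleCI case, the org/repo extraction can be sketched in plain shell. The remote URL below is a hypothetical stand-in for the output of `git remote get-url origin`:

```shell
# Hypothetical remote; in a real run: url=$(git remote get-url origin)
url="git@github.com:acme/widget.git"

path="${url#*github.com}"   # strip everything up to the host -> ":acme/widget.git"
path="${path#[:/]}"         # drop the ':' (SSH) or '/' (HTTPS) separator
path="${path%.git}"         # -> "acme/widget"
org="${path%%/*}"
repo="${path##*/}"

echo "https://circleci.com/api/v1.1/project/github/$org/$repo"
```

The same expansions handle both SSH and HTTPS remotes, since only the character after `github.com` differs.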
Poll using the gh CLI (no caching issues):
gh run list --branch <branch> --limit 1 --json status,conclusion,databaseId,name
Then poll the specific run until it completes:
gh run view <run-id> --json status,conclusion,jobs
Repeat every 10-15 seconds until status is completed. Then check conclusion for success or failure.
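The polling loop can be sketched in shell. `poll_status` here is a stub standing in for the real `gh run view` call, so the sketch terminates immediately; the run id is hypothetical:

```shell
# Stub for: gh run view "$1" --json status --jq .status
# (real runs report "queued", "in_progress", or "completed")
poll_status() { echo completed; }

run_id=123456789   # hypothetical; take it from the databaseId in `gh run list`

until [ "$(poll_status "$run_id")" = "completed" ]; do
  sleep 15   # poll every 10-15 seconds, as above
done
echo "run $run_id completed"
```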
IMPORTANT: The WebFetch tool has a 15-minute cache. To get fresh data on each poll, append a unique query parameter (e.g., &_ts=1, &_ts=2, incrementing each time).
WebFetch: https://circleci.com/api/v1.1/project/github/<org>/<repo>?limit=1&branch=<branch>&_ts=1
Extract the build_num.
Then poll the build, incrementing _ts each time:
WebFetch: https://circleci.com/api/v1.1/project/github/<org>/<repo>/<build-num>?_ts=<N>
Ask for the status, outcome, and stop_time fields.
Repeat every 10-15 seconds until status is no longer running.
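The cache-busting pattern can be sketched as a tiny URL builder; the org, repo, and build number below are hypothetical:

```shell
org=acme; repo=widget; build_num=42
ts=0

# Return the status URL with a fresh _ts so the 15-minute WebFetch cache is bypassed
next_poll_url() {
  ts=$((ts + 1))
  echo "https://circleci.com/api/v1.1/project/github/$org/$repo/$build_num?_ts=$ts"
}

next_poll_url   # ...?_ts=1
next_poll_url   # ...?_ts=2
```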
Report the final result: success or failed.
If the build failed, proceed to Phase 1 below.
Collect all available information before drawing any conclusions.
GitHub Actions:
gh run view <run-id> --log-failed 2>&1 | grep -i "FAILED\|error\|Exception" | head -30
This gives you the failing task/test name and a high-level error.
CircleCI: First get the build steps:
WebFetch: https://circleci.com/api/v1.1/project/github/<org>/<repo>/<build-num>?include=steps
Then fetch the output for the failed step index:
WebFetch: https://circleci.com/api/v1.1/project/github/<org>/<repo>/<build-num>/output/<step-index>/0
Check the workflow definition for uploaded artifacts.
GitHub Actions:
Test results are often saved via actions/upload-artifact. If artifacts are available:
gh run download <run-id> --name <artifact-name> --dir test-reports
CircleCI: List artifacts:
WebFetch: https://circleci.com/api/v1.1/project/github/<org>/<repo>/<build-num>/artifacts
Then fetch specific artifact URLs from the response.
Find and read the relevant TEST-*.xml file for the failing test. These files contain the failure message, stack trace, and captured console output for each test case.
Testcontainers-based tests typically configure .withLogConsumer(new Slf4jLogConsumer(logger).withPrefix("SVC <service-name>")), so container logs appear in the XML prefixed with [SVC <service-name>]. Use this prefix to filter for the specific container's output when searching for errors.
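That prefix filtering can be sketched with plain grep; the service name "payments" and the log lines below are hypothetical:

```shell
# Hypothetical slice of a TEST-*.xml with interleaved container logs
logs='[SVC payments] Started PaymentsApplication in 4.2 s
[SVC billing] ERROR Connection refused
[SVC payments] ERROR OutOfMemoryError: Java heap space'

# Keep only the payments container's lines, then narrow to errors
printf '%s\n' "$logs" | grep -F '[SVC payments]' | grep ERROR
```

`grep -F` treats the prefix literally, so the square brackets need no escaping.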
Use the Glob tool to find the relevant test result:
Glob with pattern="**/TEST-*FailingTestName*.xml" path="test-reports"
Use the Grep tool to search within large XML files for the root cause:
Grep with pattern="ERROR|Exception|FATAL|Application run failed" path="test-reports/path/to/TEST-*.xml" output_mode="content"
Do NOT read HTML reports — they are for humans in browsers. The XML files contain the same information in a machine-readable format.
Do NOT download artifacts to /tmp or other directories outside the sandbox — download to a project-relative path like test-reports/.
Only after gathering evidence, analyze it systematically.
Look past the test framework wrappers. A typical failure chain looks like:
TestName > testMethod() FAILED
IllegalStateException ← symptom (Spring context failed)
CompletionException ← symptom (async wrapper)
ContainerLaunchException ← symptom (container didn't start)
THE ACTUAL ERROR ← root cause (e.g., missing bean, bad config, JDK bug)
Read the full chain — the root cause is at the bottom.
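For Java stack traces specifically, the bottom-of-chain rule can be sketched as a grep for the deepest "Caused by:"; the trace below is hypothetical:

```shell
trace='TestName > testMethod() FAILED
java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: java.util.concurrent.CompletionException
Caused by: org.testcontainers.containers.ContainerLaunchException
Caused by: java.lang.IllegalArgumentException: no bean named dataSource'

# The last "Caused by:" in the chain is the root cause
root=$(printf '%s\n' "$trace" | grep '^Caused by:' | tail -n 1)
echo "$root"
```

In practice, pipe the relevant stack trace out of the test XML instead of inlining it.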
FROM-CACHE in a previous run means the task wasn't re-executed then; it may have been broken already.
A FileNotFoundException for a .yml file is different from a missing .class file.
A grep or read of the test XML will give you the answer.
Check the CircleCI orb parameters (java_version_to_install, machine_image). Compare with other upgraded projects that use the same orb.
When fixing issues caused by naming conventions or patterns: