This skill should be used when the user asks to "add an annotation", "upload artifacts from a step", "share data between steps", "upload pipeline dynamically", "request an OIDC token inside a step", "acquire a distributed lock", "get or update a step attribute", "redact a secret from logs", "retrieve a cluster secret at runtime", or "debug environment variables in hooks". Also use when the user mentions buildkite-agent annotate, buildkite-agent artifact upload/download, buildkite-agent meta-data set/get, buildkite-agent pipeline upload, buildkite-agent oidc request-token, buildkite-agent step, buildkite-agent lock, buildkite-agent env, buildkite-agent secret get, buildkite-agent redactor add, buildkite-agent tool sign/verify, or any buildkite-agent subcommand used inside a running job step.
The `buildkite-agent` binary provides subcommands for interacting with Buildkite from within running job steps — creating annotations, uploading artifacts, sharing state between jobs, generating dynamic pipelines, requesting OIDC tokens, and more. This skill covers the command syntax, flags, and patterns for every in-job subcommand.
A step that runs tests, annotates failures, uploads coverage, and stores a result flag for downstream jobs:
```yaml
steps:
  - label: ":test_tube: Tests"
    command: |
      if ! make test 2>&1 | tee test-output.txt; then
        buildkite-agent annotate --style "error" --context "test-failures" < test-output.txt
        buildkite-agent meta-data set "tests-passed" "false"
        exit 1
      fi
      buildkite-agent annotate "All tests passed :white_check_mark:" --style "success" --context "test-results"
      buildkite-agent artifact upload "coverage/**/*"
      buildkite-agent meta-data set "tests-passed" "true"
```
A downstream step reading that state:
```yaml
  - label: ":rocket: Deploy"
    command: |
      PASSED=$(buildkite-agent meta-data get "tests-passed")
      if [[ "$PASSED" != "true" ]]; then
        echo "Tests did not pass, skipping deploy"
        exit 0
      fi
      scripts/deploy.sh
    depends_on: "test-step"
```
Surface build results directly on the build page. Annotations support Markdown and HTML.
```bash
# Simple text annotation
buildkite-agent annotate "Deploy completed successfully" --style "success" --context "deploy"

# Pipe from a file
buildkite-agent annotate --style "error" --context "test-failures" < test-output.md
```
| Flag | Short | Default | Description |
|---|---|---|---|
| `--style` | `-s` | `default` | Visual style: `default`, `info`, `warning`, `error`, `success` |
| `--context` | `-c` | random UUID | Unique ID — reusing a context replaces the annotation |
| `--append` | — | `false` | Append to the existing annotation with the same context instead of replacing it |
| `--priority` | — | `3` | Display priority (1–10); higher numbers appear first |
| `--job` | — | current job | Job ID to annotate (rarely needed) |
`--context` without `--append` replaces the annotation; with `--append`, new content is added below the existing annotation. Use a stable `--context` value so reruns update the same annotation instead of creating duplicates. For pipeline-level `notify:` configuration, see the buildkite-pipelines skill.
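The `--append` flag supports building one annotation incrementally, for example summarising failures as they are found. A sketch of that pattern (the context name and test names here are hypothetical; this must run inside a Buildkite job):

```shell
# Start (or reset) the annotation for this run
buildkite-agent annotate "### Flaky tests detected" --style "warning" --context "flaky-tests"

# Append one line per failure to the same annotation
for t in test_login test_checkout; do
  echo "- ${t}" | buildkite-agent annotate --append --context "flaky-tests"
done
```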
Upload files as build artifacts, download them in later steps or other builds, and search by glob.
```bash
# Upload a single file
buildkite-agent artifact upload "pkg/release.tar.gz"

# Upload with glob pattern
buildkite-agent artifact upload "dist/**/*"

# Download to current directory
buildkite-agent artifact download "pkg/release.tar.gz" .

# Download from a specific step
buildkite-agent artifact download "dist/*" . --step "build-step"

# List matching artifacts
buildkite-agent artifact search "pkg/*.tar.gz" --build "$BUILDKITE_BUILD_ID"
```
For complete flag tables, see references/flag-reference.md.
For the declarative `artifact_paths:` YAML key, see the buildkite-pipelines skill. For `bk artifact` CLI commands, see the buildkite-cli skill.
A build-wide key-value store for sharing state between jobs. Set a value in one job, read it in any other job in the same build.
```bash
buildkite-agent meta-data set "release-version" "1.4.2"
VERSION=$(buildkite-agent meta-data get "release-version")
```
Use `--default` to return a fallback value instead of a non-zero exit when the key is missing: `buildkite-agent meta-data get "deploy-env" --default "staging"`.
```bash
# Returns exit code 0 if the key exists, 100 if not
if buildkite-agent meta-data exists "release-version"; then
  echo "Version already set"
fi
```
Block step field values are stored automatically as meta-data. Retrieve them by field key:
```bash
# After a block step with fields: [{key: "release-name", text: "Release Name"}]
RELEASE_NAME=$(buildkite-agent meta-data get "release-name")
```
Dynamically add steps to a running build. The core mechanism behind dynamic pipelines — generate YAML at runtime and upload it.
```bash
# Upload a specific file
buildkite-agent pipeline upload .buildkite/deploy-steps.yml

# Pipe generated YAML from stdin
./scripts/generate-pipeline.sh | buildkite-agent pipeline upload
```
By default, uploaded steps are inserted after the current step. Use `--replace` to replace the entire remaining pipeline:
```bash
# Replace all remaining steps with the uploaded ones
buildkite-agent pipeline upload --replace .buildkite/new-pipeline.yml
```
| Flag | Default | Description |
|---|---|---|
| `--replace` | `false` | Replace the remaining pipeline steps instead of appending |
| `--no-interpolation` | `false` | Skip environment variable interpolation in the uploaded YAML |
| `--dry-run` | `false` | Validate and output the pipeline without uploading |
For pipeline YAML syntax and step types, see the buildkite-pipelines skill.
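The generator script piped into `pipeline upload` above can be an ordinary shell script that prints YAML to stdout. A minimal sketch, assuming a hypothetical layout of services under `services/` (the names and `make` targets are illustrative):

```shell
#!/bin/bash
set -euo pipefail

# Emit one test step per component name given as an argument.
generate_pipeline() {
  echo "steps:"
  local name
  for name in "$@"; do
    printf '  - label: ":package: Test %s"\n' "$name"
    printf '    command: "make -C services/%s test"\n' "$name"
  done
}

# In a real job you would pipe this into the agent:
#   generate_pipeline api web | buildkite-agent pipeline upload
generate_pipeline api web
```

Because the script only prints YAML, it can be tested locally with `--dry-run` before wiring it into a build.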
Request short-lived OpenID Connect tokens from within a job step for authenticating to external services (cloud providers, package registries) without static credentials.
```bash
# Request a token for a specific audience
TOKEN=$(buildkite-agent oidc request-token --audience "https://packages.buildkite.com/my-org/my-registry")

# AWS — request token with STS audience
TOKEN=$(buildkite-agent oidc request-token --audience "sts.amazonaws.com")
```
| Flag | Default | Description |
|---|---|---|
| `--audience` | Buildkite endpoint | Target service URL — must match the OIDC provider's audience configuration |
| `--lifetime` | `600` | Token lifetime in seconds |
| `--claim` | — | Comma-separated optional claims to include (e.g., `organization_id,pipeline_id`) |
| `--aws-session-tag` | — | Comma-separated claims to map as AWS session tags |
For end-to-end OIDC auth flows, cloud provider setup, and token claim details, see the buildkite-secure-delivery skill.
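A typical AWS flow exchanges the OIDC token for temporary credentials via STS. A sketch, assuming a hypothetical IAM role whose trust policy accepts Buildkite's OIDC provider (the role ARN is illustrative; this must run inside a Buildkite job):

```shell
# Request a token scoped to AWS STS
TOKEN=$(buildkite-agent oidc request-token --audience "sts.amazonaws.com" --lifetime 3600)

# Exchange it for temporary credentials (role ARN is hypothetical)
aws sts assume-role-with-web-identity \
  --role-arn "arn:aws:iam::123456789012:role/buildkite-deploy" \
  --role-session-name "buildkite-job-${BUILDKITE_JOB_ID}" \
  --web-identity-token "$TOKEN"
```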
Read or modify step attributes at runtime. Useful for conditional logic within steps and build automation.
```bash
# Get current step's label
LABEL=$(buildkite-agent step get "label")

# Get another step's attribute by key
STATE=$(buildkite-agent step get "state" --step "deploy-step")

# Get the outcome of a step
OUTCOME=$(buildkite-agent step get "outcome" --step "test-step")

# Update current step's label dynamically
buildkite-agent step update "label" ":rocket: Deploying v${VERSION}"

# Update another step
buildkite-agent step update "label" ":hourglass: Waiting..." --step "pending-step"

# Cancel a specific step by key
buildkite-agent step cancel --step "optional-step"
```
| Flag | Default | Description |
|---|---|---|
| `--step` | current step | Step key or UUID to target |
| `--build` | current build | Build UUID (for cross-build operations) |
| `--format` | `string` | Output format for `get` |
| Attribute | Readable | Writable | Description |
|---|---|---|---|
| `label` | yes | yes | Step label displayed in the UI |
| `state` | yes | no | Current state (`running`, `passed`, `failed`, etc.) |
| `outcome` | yes | no | Final outcome of the step |
| `key` | yes | no | Step key identifier |
Coordinate parallel jobs within a build using distributed mutex locks. Prevents race conditions when multiple jobs access shared resources.
```bash
#!/bin/bash
set -euo pipefail

# Acquire lock — blocks until available, returns a token
token=$(buildkite-agent lock acquire "database-migration")
trap 'buildkite-agent lock release "database-migration" "${token}"' EXIT

# Critical section — only one job runs this at a time
bundle exec rails db:migrate
```
Run a setup task exactly once across all parallel jobs:
```bash
#!/bin/bash
echo "+++ Setting up shared test environment"
if [[ $(buildkite-agent lock do "test-env-setup") == "do" ]]; then
  echo "Downloading test assets..."
  curl -o /tmp/test-data.zip https://releases.example.com/data.zip
  unzip /tmp/test-data.zip -d /tmp/shared-test-files/
  buildkite-agent lock done "test-env-setup"
else
  echo "Assets already prepared by another job"
fi

# All jobs continue here
run-tests.sh
```
| Subcommand | Flags | Description |
|---|---|---|
| `lock acquire <name>` | `--timeout` | Maximum wait time in seconds (`0` = wait forever) |
| `lock release <name> <token>` | — | Release with the token returned by `acquire` |
| `lock do <name>` | — | Returns `do` if the lock was acquired, `done` if already completed |
| `lock done <name>` | — | Mark a `do` lock as completed |
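To avoid blocking forever on a contended lock, `--timeout` bounds the wait. A sketch of failing fast when the lock cannot be acquired (the lock name and deploy script are hypothetical; this must run inside a Buildkite job):

```shell
# Wait at most 60 seconds for the lock, then fail the step
if ! token=$(buildkite-agent lock acquire "deploy-lock" --timeout 60); then
  echo "Could not acquire deploy-lock within 60s" >&2
  exit 1
fi
trap 'buildkite-agent lock release "deploy-lock" "${token}"' EXIT

scripts/deploy.sh
```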
Inspect and modify the job's environment variables. Primarily useful for debugging lifecycle hooks and understanding what environment changes hooks made.
```bash
# Dump all environment variables as JSON
buildkite-agent env dump | jq .

# Get a specific variable
buildkite-agent env get "BUILDKITE_BRANCH"

# Set a variable for subsequent hooks and the command
buildkite-agent env set "DEPLOY_TARGET" "production"
```
| Subcommand | Default | Description |
|---|---|---|
| `env dump` | JSON to stdout | Dump all environment variables |
| `env get <keys...>` | — | Get one or more specific variables |
| `env set <key> <value>` | — | Set a variable for subsequent phases |
| `env unset <key>` | — | Remove a variable from subsequent phases |
The `env dump` command is particularly useful in lifecycle hooks to see what prior hooks changed:
```bash
#!/bin/bash
# .buildkite/hooks/pre-command
echo "--- Environment after environment hook:"
buildkite-agent env dump | jq 'keys'
```
For agent lifecycle hooks and `buildkite-agent.cfg` configuration, see the buildkite-agent-infrastructure skill.
Retrieve cluster secrets at runtime from within job steps. Secrets retrieved this way are automatically added to the log redactor.
```bash
# Get a secret value
SECRET_VAR=$(buildkite-agent secret get "deploy-key")

# Pass directly to a tool
cli-tool --token "$(buildkite-agent secret get "api-token")"
```
| Flag | Default | Description |
|---|---|---|
| `--format` | `string` | Output format: `string` (single secret) or `env` (multiple, `KEY="value"` pairs) |
| `--skip-redaction` | `false` | Do not add the secret value to the log redactor |
| `--job` | current job | Job ID context |
By default, `secret get` automatically registers retrieved values with the log redactor, masking them as `[REDACTED]` in subsequent output.
For setting up cluster secrets, see the buildkite-agent-infrastructure skill. For the declarative `secrets:` pipeline YAML key, see the buildkite-pipelines skill.
Add values to the build log redactor at runtime so they are masked in all subsequent output. Use this for dynamically retrieved secrets that were not declared via `secrets:` or fetched with `buildkite-agent secret get`.
```bash
# Fetch a token from an external source
DYNAMIC_TOKEN=$(curl -s https://vault.example.com/token)

# Register it with the redactor before using it
echo "$DYNAMIC_TOKEN" | buildkite-agent redactor add

# Now any log output containing the token value shows [REDACTED]
echo "Using token: $DYNAMIC_TOKEN"
# Output: Using token: [REDACTED]

# Redact multiple values
echo "$SECRET1" | buildkite-agent redactor add
echo "$SECRET2" | buildkite-agent redactor add
```
| Scenario | Use |
|---|---|
| Secret stored in Buildkite cluster secrets | `buildkite-agent secret get` (auto-redacts) |
| Secret from an external vault (HashiCorp Vault, AWS SSM, etc.) | Fetch externally, then `buildkite-agent redactor add` |
| Computed sensitive value (temporary token, derived key) | `buildkite-agent redactor add` |
Sign and verify pipeline step definitions for integrity checking. Ensures steps have not been tampered with between definition and execution.
```bash
# Sign step configuration using a JWKS key
buildkite-agent tool sign --jwks-file /etc/buildkite-agent/signing-key.json \
  --step "command=make test" \
  --step "plugins=docker#v5.12.0"

# Verify step signature
buildkite-agent tool verify --jwks-file /etc/buildkite-agent/verification-key.json \
  --step "command=make test"
```
| Flag | Default | Description |
|---|---|---|
| `--jwks-file` | — | Path to the JWKS key file for signing or verification |
| `--jwks-key-id` | — | Key ID to use from the JWKS file |
| `--step` | — | Step attributes to sign/verify (repeatable) |
For pipeline signing configuration and rollout strategy, see the buildkite-secure-delivery skill.
| Mistake | What happens | Fix |
|---|---|---|
| Missing `--context` on `annotate` | Each call creates a new annotation instead of updating | Always pass `--context` with a stable identifier |
| Using `--append` without a matching `--context` | The append has no effect — a new annotation is created | Ensure `--context` matches the annotation to append to |
| Forgetting to quote artifact glob patterns | The shell expands globs before `buildkite-agent` sees them | Always quote: `"dist/**/*"`, not `dist/**/*` |
| Reading `meta-data get` before the writing job completes | The key does not exist and the command fails with a non-zero exit | Use `depends_on` or `wait` to enforce ordering, or pass `--default` |
| Using `pipeline upload --replace` unintentionally | Removes all remaining steps in the build | Only use `--replace` when intentionally rebuilding the entire pipeline |
| Not releasing locks on script failure | The lock is held indefinitely, blocking other jobs | Use `trap ... EXIT` to release locks on any exit |
| Passing an `--audience` that doesn't match the OIDC provider config | The token is rejected by the target service | The audience must exactly match the provider's configured audience URL |
| Using `--skip-redaction` with actual secrets | Secret values appear in plain text in build logs | Only use `--skip-redaction` for non-sensitive configuration values |
| Calling `env set` expecting it to affect the current shell | The variable is set for subsequent hooks/phases, not the current script | Use `export VAR=value` for current-script variables; `env set` for cross-phase |
| Passing large values via environment variables | OS-level env size limits cause silent truncation or job failure | Switch to file-based approaches (artifacts) for payloads larger than a few KB |
| Uploading pipeline YAML containing literal `$` with interpolation enabled | Variables are interpolated unexpectedly, producing malformed YAML | Use `--no-interpolation` when the YAML contains literal `$` characters |
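The meta-data ordering pitfall above can be avoided by combining `depends_on` with `--default`. A minimal sketch (the step keys, meta-data key, and values are hypothetical):

```yaml
steps:
  - label: ":pencil: Write version"
    key: "writer"
    command: buildkite-agent meta-data set "app-version" "1.4.2"

  - label: ":mag: Read version"
    depends_on: "writer"
    command: |
      # Falls back to 0.0.0 if the key was never written
      VERSION=$(buildkite-agent meta-data get "app-version" --default "0.0.0")
      echo "Deploying ${VERSION}"
```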
- `references/flag-reference.md` — Complete flag tables for all subcommands, including upload, download, search, shasum, annotate, meta-data, pipeline upload, oidc, step, lock, env, secret, redactor, and tool
- `references/patterns-and-recipes.md` — Advanced multi-subcommand patterns: test failure annotation pipelines, cross-job state machines, OIDC-authenticated Docker push, parallel job coordination with locks, environment debugging