This skill guides integrating the 1Password CLI (`op`) for secure secret management in development workflows. Use it when loading secrets for infrastructure, deployments, or local development.
Install via the plugin marketplace:

/plugin marketplace add majesticlabs-dev/majestic-marketplace
/plugin install majestic-devops@majestic-marketplace

This skill is limited to using a specific set of tools, and bundles resources/multiple-accounts.md for multi-account patterns.

The 1Password CLI (op) provides secure secret injection into development workflows without exposing credentials in code, environment files, or shell history.
op://<vault>/<item>/<field>
Examples:
op://Development/AWS/access_key_id
op://Production/Database/password
op://Shared/Stripe/secret_key
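To confirm a reference resolves before wiring it into tooling, it can be read directly; the vault, item, and field names below are the examples above.

```bash
# Prints the referenced field's value to stdout; avoid piping it into logs
op read "op://Development/AWS/access_key_id"
```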
Use {environment}-{service} format for item names:
| Pattern | Example | Notes |
|---|---|---|
| `{env}-{service}` | `production-rails` | Primary app secrets |
| `{env}-{provider}` | `production-dockerhub` | External service credentials |
| `{env}-{provider}-{resource}` | `production-hetzner-s3` | Provider with multiple resources |
DO:
- Prefix item names with the environment (production-, staging-, development-)

DON'T:
- Use spaces or capitalized names (Production Rails → production-rails)
- Use generic names that omit the service (API Key → production-stripe)

Use semantic field names that describe the credential type:
| Good | Bad | Why |
|---|---|---|
| `access_token` | `value` | Self-documenting |
| `master_key` | `secret` | Specific purpose clear |
| `secret_access_key` | `key` | Matches AWS naming |
| `api_token` | `token` | Distinguishes from other tokens |
Field naming rules:
- Match the provider's own naming where possible (access_key_id, secret_access_key)
- Be specific: database_password, not just password, when an item holds multiple credentials

Create .op.env in the project root:
# AWS credentials
AWS_ACCESS_KEY_ID=op://Infrastructure/AWS/access_key_id
AWS_SECRET_ACCESS_KEY=op://Infrastructure/AWS/secret_access_key
AWS_REGION=op://Infrastructure/AWS/region
# DigitalOcean
DIGITALOCEAN_TOKEN=op://Infrastructure/DigitalOcean/api_token
# Database
DATABASE_URL=op://Production/PostgreSQL/connection_string
# API Keys
STRIPE_SECRET_KEY=op://Production/Stripe/secret_key
OPENAI_API_KEY=op://Development/OpenAI/api_key
Critical: Add to .gitignore:
# 1Password - NEVER commit
.op.env
*.op.env
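As a quick sanity check that the ignore rules took effect, plain git can confirm the file is excluded (no 1Password involvement needed):

```bash
# Prints the matching ignore rule when .op.env is excluded; no output means it could be committed
git check-ignore -v .op.env
```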
# Single command
op run --env-file=.op.env -- terraform plan
# Run an application server with secrets from the env file
op run --env-file=.op.env -- rails server

# Resolve op:// references already present in the environment
op run -- printenv DATABASE_URL
OP ?= op
OP_ENV_FILE ?= .op.env
# Prefix for all commands needing secrets
CMD = $(OP) run --env-file=$(OP_ENV_FILE) --
deploy:
$(CMD) kamal deploy
console:
$(CMD) rails console
migrate:
$(CMD) rails db:migrate
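With that Makefile in place, the usual targets pick up secrets at run time:

```bash
make deploy    # expands to: op run --env-file=.op.env -- kamal deploy
make console   # expands to: op run --env-file=.op.env -- rails console
```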
# docker-compose.yml
services:
app:
build: .
environment:
- DATABASE_URL=${DATABASE_URL}
- REDIS_URL=${REDIS_URL}
# Run with secrets injected
op run --env-file=.op.env -- docker compose up
# config/deploy.yml
env:
secret:
- RAILS_MASTER_KEY
- DATABASE_URL
- REDIS_URL
# .kamal/secrets (loaded by Kamal)
RAILS_MASTER_KEY=$(op read "op://Production/Rails/master_key")
DATABASE_URL=$(op read "op://Production/PostgreSQL/url")
REDIS_URL=$(op read "op://Production/Redis/url")
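Before deploying, it can be worth spot-checking that a reference in .kamal/secrets actually resolves; a minimal sketch using only op read and the paths above:

```bash
# Discard the value so nothing secret lands in the terminal or shell history
op read "op://Production/Rails/master_key" > /dev/null && echo "master_key: OK"
```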
# .github/workflows/deploy.yml
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: 1password/load-secrets-action@v2
with:
export-env: true
env:
OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
AWS_ACCESS_KEY_ID: op://CI/AWS/access_key_id
AWS_SECRET_ACCESS_KEY: op://CI/AWS/secret_access_key
- run: terraform apply -auto-approve
# Read single field
op read "op://Vault/Item/field"
# Read with output format
op read "op://Vault/Item/field" --format json
# Read to file (for certificates, keys)
op read "op://Vault/TLS/private_key" > /tmp/key.pem
chmod 600 /tmp/key.pem
# Single secret inline
DATABASE_URL=$(op read "op://Production/DB/url") rails db:migrate
# Multiple secrets via env file
op run --env-file=.op.env -- ./deploy.sh
# With account specification
op run --account my-team --env-file=.op.env -- terraform apply
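In deployment scripts it helps to fail fast when a read does not succeed; a sketch around the same op read call (deploy.sh stands in for whatever the script runs next):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Abort immediately if the secret cannot be read (wrong vault, expired session, etc.)
DATABASE_URL=$(op read "op://Production/DB/url")
export DATABASE_URL

exec ./deploy.sh
```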
# List vaults
op vault list
# List items in vault
op item list --vault Infrastructure
# Get item details
op item get "AWS" --vault Infrastructure
# Create item
op item create \
--category login \
--vault Infrastructure \
--title "New Service" \
--field username=admin \
--field password=secret123
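Passing a literal password on the command line leaves it in shell history; op item create can generate one instead. The recipe string below is an assumption about the CLI's recipe syntax, so verify it against `op item create --help`:

```bash
op item create \
  --category login \
  --vault Infrastructure \
  --title "New Service" \
  --field username=admin \
  --generate-password='letters,digits,symbols,32'   # assumed recipe syntax; check your CLI version
```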
# Sign in (creates session)
op signin
# Verify access
op vault list
# Create project env file
cat > .op.env << 'EOF'
# Infrastructure secrets
AWS_ACCESS_KEY_ID=op://Infrastructure/AWS/access_key_id
AWS_SECRET_ACCESS_KEY=op://Infrastructure/AWS/secret_access_key
# Application secrets
DATABASE_URL=op://Production/Database/url
REDIS_URL=op://Production/Redis/url
EOF
# Test secret loading
op run --env-file=.op.env -- env | grep -E '^(AWS|DATABASE|REDIS)'
Create items with placeholder values upfront, populate with real credentials later:
# 1. Create item with placeholder values
op item create \
--vault myproject \
--category login \
--title "production-rails" \
--field master_key="PLACEHOLDER_UPDATE_BEFORE_DEPLOY"
# 2. Create .kamal/secrets referencing the item
cat > .kamal/secrets << 'EOF'
RAILS_MASTER_KEY=$(op read "op://myproject/production-rails/master_key")
EOF
# 3. Update deployment docs to match
# docs/DEPLOYMENT.md should reference same paths
# 4. Later: Update with real value
op item edit "production-rails" \
--vault myproject \
master_key="actual_secret_value_here"
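A pre-deploy guard can catch items that were never updated from their placeholders; a sketch based on the PLACEHOLDER marker used above:

```bash
# Fail fast if the field still holds the placeholder marker
if op read "op://myproject/production-rails/master_key" | grep -q "PLACEHOLDER"; then
  echo "production-rails/master_key is still a placeholder" >&2
  exit 1
fi
```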
Benefits: the secret paths are fixed from the start, so deployment config and documentation can be written and reviewed before the real credentials exist.
Documentation Sync:
Keep .kamal/secrets (or equivalent) and deployment docs in sync:
<!-- docs/DEPLOYMENT.md -->
## Required Secrets
| Secret | 1Password Path | Purpose |
|--------|----------------|---------|
| `RAILS_MASTER_KEY` | `op://myproject/production-rails/master_key` | Decrypt credentials |
| `DOCKERHUB_TOKEN` | `op://myproject/production-dockerhub/access_token` | Pull images |
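A small script can keep the two from drifting; this sketch assumes the file locations used in this section (.kamal/secrets and docs/DEPLOYMENT.md):

```bash
# Report any op:// path used by Kamal that is not mentioned in the deployment docs
for ref in $(grep -o 'op://[^" ]*' .kamal/secrets); do
  grep -q "$ref" docs/DEPLOYMENT.md || echo "Not documented: $ref"
done
```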
Single-Vault Approach (Simpler)
Use one vault with naming conventions for environment separation:
Vault: myproject
Items:
- production-rails
- production-dockerhub
- production-hetzner-s3
- staging-rails
- staging-dockerhub
- development-rails
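The shared prefix also makes it easy to review a single environment's items (output columns vary by CLI version):

```bash
# All production items in the shared vault
op item list --vault myproject | grep 'production-'
```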
Benefits: one vault to grant access to, and the environment is always visible in the item name.
Multi-Vault Approach (Team Scale)
Separate vaults when you need different access controls:
| Vault | Purpose | Access |
|---|---|---|
| Infrastructure | Cloud provider credentials | DevOps team |
| Production | Production app secrets | Deploy systems |
| Staging | Staging environment | Dev team |
| Development | Local dev secrets | Individual devs |
| Shared | Cross-team API keys | All teams |
When to Use Which: start with a single vault for solo projects and small teams; split into multiple vaults once different groups need different access controls.
DO:
- Use .op.env files for project-specific secret mapping
- Add all .op.env variants to .gitignore

DON'T:
- Commit .op.env files
- Expose op read output in logs or echo statements

# Check recent access events
op events-api
# Specific vault events
op audit-events list --vault Production
# Re-authenticate
op signin
# Check current session
op whoami
# Verify vault access
op vault list
# Search for item
op item list --vault Infrastructure | grep -i aws
# Check exact field names
op item get "AWS" --vault Infrastructure --format json | jq '.fields[].label'
# Check account permissions
op vault list
# Verify specific vault access
op vault get Infrastructure
For managing multiple 1Password accounts (personal + work), use --account flag or OP_ACCOUNT env var:
# Specify account per command
op vault list --account acme.1password.com
# Set default for shell session
export OP_ACCOUNT=acme.1password.com
# With op run
op run --account acme.1password.com --env-file=.op.env -- ./deploy.sh
Key rule: Always specify account in automation scripts - never rely on "last signed in".
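In a script, that means pinning the account up front rather than relying on session state; a minimal sketch using the account host from the examples above:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Pin the account explicitly so the script never depends on whoever signed in last
export OP_ACCOUNT=acme.1password.com

op run --env-file=.op.env -- ./deploy.sh
```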
See resources/multiple-accounts.md for detailed patterns including cross-account workflows and Makefile integration.
# .op.env.production
DATABASE_URL=op://Production/Database/url
REDIS_URL=op://Production/Redis/url
# .op.env.staging
DATABASE_URL=op://Staging/Database/url
REDIS_URL=op://Staging/Redis/url
# .op.env.development
DATABASE_URL=op://Development/Database/url
REDIS_URL=op://Development/Redis/url
ENV ?= development
OP_ENV_FILE = .op.env.$(ENV)
deploy:
op run --env-file=$(OP_ENV_FILE) -- kamal deploy
# Usage: make deploy ENV=production