# Dotfiles
AI agent orchestration infrastructure for 100x throughput. Parallelize agents across any harness (Claude Code, Amazon Q, Codex), enforce principles through reproducible config, and self-heal your development stack.
## Dotfiles Philosophy
Our dotfiles repository follows three core principles that guide our approach to configuration management:
### The Spilled Coffee Principle
The "spilled coffee principle" states that anyone should be able to destroy their machine and be fully operational again that afternoon. This principle emphasizes:
- All configuration changes should be reproducible across machines
- Setup scripts should handle file operations instead of manual commands
- Installation scripts should detect and create required directories
- Symlinks should be managed by setup scripts rather than manual linking
- Dependencies and installation steps should be well-documented
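The bullets above can be sketched as a minimal, idempotent setup script. This is an illustrative sketch, not the repo's actual `setup.sh`: the `dotfiles-demo` directory and the placeholder `mcp.json` file are assumptions made so the snippet runs standalone, while the `~/ppv/pillars` and `~/.bash_secrets` paths come from the examples later in this section.

```shell
#!/usr/bin/env bash
# Hypothetical setup.sh sketch: every state change is scripted, and
# every step is safe to re-run on a fresh or existing machine.
set -euo pipefail

# Assumption: stand-in for a cloned dotfiles repo, so the sketch is
# self-contained (a real script would use its own repo root).
DOTFILES_DIR="$HOME/dotfiles-demo"
mkdir -p "$DOTFILES_DIR/mcp"
touch "$DOTFILES_DIR/mcp/mcp.json"

# Detect and create required directories; -p makes re-runs safe
mkdir -p "$HOME/ppv/pillars"

# Symlinks managed by the script, not by hand; -f keeps it idempotent
ln -sf "$DOTFILES_DIR/mcp/mcp.json" "$DOTFILES_DIR/.mcp.json"

# Secrets file locked down automatically rather than from memory
touch "$HOME/.bash_secrets"
chmod 600 "$HOME/.bash_secrets"

echo "setup complete"
```

Because every operation is idempotent, running the script twice leaves the machine in the same state as running it once, which is what makes recovery "that afternoon" realistic.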
**❌ Common Violations - Manual Terminal Heroics:**
Like Brent from The Phoenix Project, we often become the constraint by being the "go-to hero" who fixes things manually. These commands are perfectly valid IN SCRIPTS, but become anti-patterns when typed directly in terminal:
```shell
# IN TERMINAL (BAD - Makes you Brent, the bottleneck hero):
dotfiles (main) $ ln -s mcp/mcp.json .mcp.json       # Works today, forgotten tomorrow
dotfiles (main) $ mv .bashrc .bashrc.backup          # Your knowledge, lost when you leave
dotfiles (main) $ chmod 600 ~/.bash_secrets          # New teammate: "Why doesn't this work?"
dotfiles (main) $ mkdir -p ~/ppv/pillars             # "It worked on my machine..."
dotfiles (main) $ echo "alias q='q'" >> ~/.bashrc    # Snowflake environment alert!
dotfiles (main) $ curl -o tool.tar.gz https://...    # Downloaded where? What version?

# The exact violation that inspired this documentation:
dotfiles (feature/vendor-agnostic-mcp-692) $ ln -s mcp/mcp.json .mcp.json
# ↑ I actually did this! Then immediately undid it and wrote a script instead.
```
**The Brent Test**: If you get hit by a bus (or take vacation), can someone else recreate what you did? If it's only in your terminal history, you're being Brent.
**✅ The Same Commands in Scripts (GOOD - No More Brent!):**
```shell
# IN SCRIPTS (GOOD - Knowledge is codified, not tribal):

# setup-vendor-agnostic-mcp.sh
ln -s mcp/mcp.json "$REPO_ROOT/.mcp.json"    # Reproducible by anyone

# setup.sh
mkdir -p "$HOME/ppv/pillars"                 # Self-documenting
chmod 600 ~/.bash_secrets                    # Security automated

# install-tool.sh
download_and_install_tool() {
  curl -o "$TEMP_DIR/tool.tar.gz" https://...  # Version controlled
}
```
**The Phoenix Principle**: Move from "Brent did it" to "the system does it." Every terminal command that changes state should become code, removing key-person dependencies.
**The Litmus Test**: Can you destroy your laptop, get a new one, run `git clone && ./setup.sh`, and be back to exactly where you were? If not, you've been a hero instead of a steward.
This principle ensures resilience and quick recovery from system failures or when setting up new environments.
### The Snowball Method
See Snowball Method - compound returns through stacking daily wins. This principle ensures that our development environment continuously improves by getting 1% better every day.
## Agent Orchestration Infrastructure
This system enables macro-level agent management instead of micro-level file editing - you manage tasks and projects, not lines of code within an IDE. The core infrastructure:
- **Harness-Agnostic Configuration**: A single `.agent-config.yml` defines user preferences, agent settings, and paths - it works across Claude Code, Amazon Q, and Codex without duplication (see config-architecture.md)
- **Reproducible Agent Procedures**: Slash commands in the `commands/` directory (`/close-issue`, `/create-issue`, `/extract-best-frame`, `/retro`) enforce consistent workflows across all AI harnesses
- **Telemetry and Feedback**: An OpenTelemetry observability stack with Grafana dashboards provides real-time performance monitoring and continuous-improvement insights (see observability/README.md)
- **Parallel Execution**: Using tmux + git worktrees, you manage multiple AI agents simultaneously across parallel tasks
- **Principle Enforcement**: `knowledge/procedures/` and `knowledge/principles/` are automatically loaded into agent context to maintain consistency
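The "Parallel Execution" item can be sketched with git worktrees: each task gets its own branch and working directory, so agents never clobber each other's checkouts. This is a hedged sketch, not the repo's actual tooling - a throwaway repo is initialized so the snippet runs standalone, the `agents-demo` path and `task-a`/`task-b` names are invented, and the tmux launch is shown as a comment.

```shell
#!/usr/bin/env bash
# Sketch: one git worktree per task, so parallel agents stay isolated.
set -euo pipefail

# Assumption: throwaway repo under the temp dir, re-runnable
REPO="${TMPDIR:-/tmp}/agents-demo"
rm -rf "$REPO" "$REPO"-task-*
git init -q "$REPO"
git -C "$REPO" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

for task in task-a task-b; do
  worktree="${REPO}-${task}"
  # One branch + working directory per task keeps agents isolated
  git -C "$REPO" worktree add -q -b "$task" "$worktree"
  # In a real run, each worktree would get its own tmux window, e.g.:
  # tmux new-window -n "$task" -c "$worktree"
done

git -C "$REPO" worktree list
```

With each agent rooted in its own worktree (and, in practice, its own tmux window), you review and merge results per task rather than serializing all work through one checkout.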
The goal: 100x-1000x developer productivity through AI agent management capability. See throughput definition.
## Modular Shell Configuration
This repository uses a modular approach to shell configuration: