Atomic
Ship complex features with AI agents that actually understand your codebase. Research, spec, implement — then wake up to completed code ready for review.
Key Principle
You own the decisions. Agents own the execution.
- Review specs before implementation (architecture decisions)
- Review code after each feature (quality gate)
- The 40-60% rule: agents get you most of the way, you provide the polish
- Experiment with the agents and use them as your Swiss Army knife
Video Overview

What Engineers Use Atomic For
Ship Complex Features End-to-End
Not just bug fixes — scoped, multi-file features that require architectural understanding:
- Database migrations across large codebases
- Entire new services (building a complete GraphRAG service from scratch)
- Features spanning dozens of files that need to understand existing patterns first
- Trying different implementation approaches — spec it out, try one framework, revert, try another
The workflow: /research-codebase → review → /create-spec → review → /create-feature-list → review → /implement-feature (run manually one by one, or let Ralph run overnight). Wake up to completed features ready for review.
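Sketched as shell, one feature end to end might look like the following. The prompts are illustrative placeholders, and it is an assumption here that /create-spec, /create-feature-list, and /implement-feature accept a freeform prompt the same way /research-codebase does in the examples below:

```shell
# One feature, end to end. Review each step's output before running
# the next -- that is where the spec and quality gates apply.
atomic run claude "/research-codebase How is rate limiting implemented \
today, and which files would a per-tenant limit touch?"
atomic run claude "/create-spec Per-tenant rate limiting, based on the \
research document above"
atomic run claude "/create-feature-list Break the per-tenant rate \
limiting spec into independently implementable features"
atomic run claude "/implement-feature Implement the first feature from \
the feature list"
```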
Works on macOS, Linux, and Windows.
Deep Codebase Research & Root Cause Analysis
You know the pain:
- Hours lost hunting through unfamiliar code manually
- Agents missing key files even when you know they're relevant
- Repeating yourself — mentioning the same file over and over, only for the agent to ignore it
- Context window blown before you've even started the real work
- Files too large to paste — so you just... can't share the context you need
The /research-codebase command dispatches specialized sub-agents to do the hunting for you:
- Understand how authentication flows work in an unfamiliar codebase
- Track down root causes by analyzing code paths across dozens of files
- Search through docs, READMEs, and inline documentation in your repo
- Get up to speed on a new project in minutes instead of hours
This is the fastest path to value — install, run one command, get answers.
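For instance, a single research run looks like this (the question itself is illustrative; /research-codebase takes any freeform prompt):

```shell
# Dispatch sub-agents to trace a flow through an unfamiliar codebase
atomic run claude "/research-codebase How does authentication work in \
this service? Trace a login request from the HTTP handler to the \
session store and list every file involved."
```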
Explore Multiple Implementation Approaches
When you're evaluating libraries, exploring implementation approaches, or need best practices before building, Atomic's research phase pulls in external knowledge — not just your codebase — to inform the spec and implementation plan.
Example: Researching three GraphRAG implementation approaches in parallel
```shell
# Run 3 parallel research sessions in separate terminals
atomic run claude "/research-codebase Research implementing GraphRAG using \
LangChain's graph retrieval patterns. Look up langchain-ai/langchain for \
graph store integrations, chunking strategies, and retrieval patterns. \
Document how this would integrate with our existing vector store."

atomic run claude "/research-codebase Research implementing GraphRAG using \
Microsoft's GraphRAG library. Look up microsoft/graphrag for their \
community detection, entity extraction, and summarization pipeline. \
Document the infrastructure requirements and how it fits our data model."

atomic run claude "/research-codebase Research implementing GraphRAG using \
LlamaIndex's property graph index. Look up run-llama/llama_index for \
their KnowledgeGraphIndex and property graph patterns. Document trade-offs \
vs our current RAG implementation."
```
What happens: Each agent spawns codebase-online-researcher sub-agents that query DeepWiki for the specified repos, pull external documentation, and cross-reference with your existing codebase patterns. You get three research documents.
From there: Run /create-spec and /create-feature-list on each research doc in parallel terminals. Then spin up three git worktrees and run /ralph:ralph-loop in each. Wake up to three complete implementations on separate branches — review, benchmark, and choose the winner.
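The worktree setup can be sketched as follows. Branch and directory names are illustrative, and the sketch uses a throwaway repo so it runs anywhere; in practice you would run the worktree commands at the root of your own repository:

```shell
set -e
base=$(mktemp -d)    # throwaway area standing in for your project's parent dir
git init -q "$base/repo"
cd "$base/repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

for approach in langchain graphrag llamaindex; do
  # One worktree + branch per candidate implementation
  git worktree add -q "$base/atomic-$approach" -b "graphrag-$approach"
  # Then kick off the overnight loop in each, e.g.:
  # (cd "$base/atomic-$approach" && atomic run claude "/ralph:ralph-loop")
done

git worktree list    # main checkout plus the three new worktrees
```

Because each implementation lives on its own branch in its own directory, the three overnight runs cannot step on each other, and comparing them afterward is an ordinary diff and benchmark between branches.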
Note: This workflow works identically with atomic run opencode and atomic run copilot — just substitute the CLI command.
Table of Contents
Set up Atomic