# Oracle
Strategic technical advisor with two modes. Use for second opinions, architecture decisions, debugging, security analysis, and research. REPO MODE explores your codebase autonomously (finds gaps, reviews code, traces bugs). WEB MODE researches external info via @steipete/oracle CLI (current best practices, library comparisons, docs). Run both in parallel when comparing your implementation against current standards.
Install:

```shell
npx claudepluginhub andreasasprou/agent-skills --plugin oracle
```
Two complementary modes for different types of questions.
Ask: "Is the truth external, or is it in our code?"
| Truth Location | Mode | Model |
|---|---|---|
| In our code | Repo | gpt-5.2 xhigh via Codex SDK |
| External (docs, standards, comparisons) | Web | 5.2 Thinking + Heavy (default) |
| Complex research needing web synthesis | Web | gpt-5.2-pro (escalation) |
| Both (compare impl vs standards) | Parallel | Run both modes |
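The routing rule in the table above can be sketched as a small helper. This is illustrative only; the mode and model strings are copied from the table, and the category names (`code`, `external`, `research`, `both`) are invented labels:

```shell
# Illustrative routing helper mirroring the table above.
pick_mode() {
  case "$1" in
    code)     echo "repo: gpt-5.2 xhigh via Codex SDK" ;;
    external) echo "web: 5.2 Thinking + Heavy" ;;
    research) echo "web: gpt-5.2-pro" ;;
    both)     echo "parallel: run both modes" ;;
  esac
}

pick_mode code      # -> repo: gpt-5.2 xhigh via Codex SDK
pick_mode external  # -> web: 5.2 Thinking + Heavy
```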
Script root: ${CLAUDE_PLUGIN_ROOT}/skills/oracle/scripts
```shell
bun ${CLAUDE_PLUGIN_ROOT}/skills/oracle/scripts/oracle.ts "question"
```
Capabilities: Explores files, runs commands, searches web (read-only sandbox).
Web mode (`@steipete/oracle` CLI): the CLI bundles your prompt and selected files into one "one-shot" request so another model can answer with real repo context (via API or browser automation). Treat outputs as advisory: verify against the codebase and tests.
Main workflow: --engine browser with GPT-5.2 Pro in ChatGPT. This is the "human in the loop" path — it can take ~10 minutes to ~1 hour; expect a stored session you can reattach to.
```shell
# Default: 5.2 Thinking with Heavy reasoning (fast questions)
npx -y @steipete/oracle --engine browser --model "5.2 Thinking" --browser-thinking-time heavy -p "question"

# Escalation: gpt-5.2-pro (Deep Research: complex multi-source research)
npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p "question"

# Include repo context (curated files)
npx -y @steipete/oracle --engine browser --model "5.2 Thinking" --browser-thinking-time heavy \
  --file "src/auth/**/*.ts" \
  --file "!**/*.test.ts" \
  -p "question with context"
```
Always preview before large runs to check token budget and file selection.
```shell
# Summary preview
npx -y @steipete/oracle --dry-run summary -p "<task>" --file "src/**" --file "!**/*.test.*"

# Full preview
npx -y @steipete/oracle --dry-run full -p "<task>" --file "src/**"

# Token/cost sanity check
npx -y @steipete/oracle --dry-run summary --files-report -p "<task>" --file "src/**"
```
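Between dry-runs, a local bytes/4 heuristic (a common rough approximation, not the CLI's actual tokenizer) can flag obvious budget blowouts before you invoke the tool at all:

```shell
# Rough token estimate: ~4 bytes per token (heuristic, not the real tokenizer).
estimate_tokens() {
  local bytes
  bytes=$(cat "$@" | wc -c)
  echo $(( bytes / 4 ))
}

printf 'hello world, this is about forty characters\n' > /tmp/sample.txt
estimate_tokens /tmp/sample.txt  # -> 11
```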
File selection (`--file`): accepts files, directories, and globs. Pass the flag multiple times; entries can be comma-separated.
- Examples: `--file "src/**"`, `--file src/index.ts`, `--file docs --file README.md`
- Exclusions (prefix `!`): `--file "src/**" --file "!src/**/*.test.ts" --file "!**/*.snap"`
- Always ignored: `node_modules`, `dist`, `coverage`, `.git`, `.turbo`, `.next`, `build`, `tmp`
- Respects `.gitignore` when expanding globs; include hidden paths explicitly when needed (e.g. `--file ".github/**"`)
- Budget target: keep total input under ~196k tokens. Use `--files-report` to spot token hogs before spending.
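As a local illustration of the always-ignored directories (this mimics the behaviour described above; it is not the CLI's real matcher, which also honors `.gitignore`):

```shell
# Sketch of the default ignore list (illustration only).
is_default_ignored() {
  case "$1" in
    node_modules/*|dist/*|coverage/*|.git/*|.turbo/*|.next/*|build/*|tmp/*) echo yes ;;
    *) echo no ;;
  esac
}

is_default_ignored "node_modules/react/index.js"  # -> yes
is_default_ignored "src/index.ts"                 # -> no
```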
- Default engine: `api` when `OPENAI_API_KEY` is set, otherwise `browser`.
- Use `--engine api` for Claude/Grok/Codex or multi-model runs.
- `--browser-attachments auto|never|always` (`auto` pastes inline up to ~60k chars, then uploads).
- Runs may detach or take a long time (browser + GPT-5.2 Pro often does). If the CLI times out, don't re-run; reattach.
```shell
# List recent sessions
oracle status --hours 72

# Reattach to a session
oracle session <id> --render
```
- Sessions live in `~/.oracle/sessions` (override with `ORACLE_HOME_DIR`).
- Use `--slug "<3-5 words>"` to keep session IDs readable.
- Use `--force` only when you truly want a fresh run.

When browser automation isn't available, assemble the bundle and copy it to the clipboard:
```shell
npx -y @steipete/oracle --render --copy -p "<task>" --file "src/**"
```
Note: `--copy` is a hidden alias for `--copy-markdown`.
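A hypothetical helper for the `--slug` tip above: derive a short, readable slug from the question text. The transformation is an assumption for illustration, not part of the CLI:

```shell
# Hypothetical: turn a question into a ~4-word slug for --slug.
make_slug() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' | cut -d- -f1-4 | sed 's/-$//'
}

make_slug "Audit auth token refresh logic"  # -> audit-auth-token-refresh
```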
Run both with `run_in_background=true`, poll with `TaskOutput`, synthesize results.
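Outside the agent harness, `run_in_background=true` corresponds roughly to backgrounding both commands in a shell. A sketch, with placeholder commands standing in for the real repo-oracle and web-oracle invocations:

```shell
# Placeholders stand in for the repo-oracle and web-oracle commands.
( echo "repo result" ) > /tmp/repo.log 2>&1 &
( echo "web result" )  > /tmp/web.log  2>&1 &
wait  # synthesize once both finish

cat /tmp/repo.log /tmp/web.log
```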
Questions where the answer is in the codebase:
Questions needing external knowledge with quick turnaround:
Complex research requiring multi-source synthesis (expect longer runtimes):
When comparing your code against current standards:
If the question type isn't obvious, use AskUserQuestion:
What kind of help do you need?
- Find issues in the current implementation (Repo)
- Research best practices and patterns (Web)
- Compare our implementation against current standards (Both)
Oracle starts with zero project knowledge. The model cannot infer your stack, build tooling, conventions, or "obvious" paths. Include:
When you know this will be a deep investigation, write a self-contained prompt:
If you need to reproduce the same context later, re-run with the same prompt + --file set (Oracle runs are one-shot; the model doesn't remember prior runs).
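A sketch of a self-contained prompt as a heredoc. The stack details, conventions, and file path below are invented placeholders; substitute your own:

```shell
# Everything the model can't infer goes into the prompt itself.
PROMPT=$(cat <<'EOF'
Stack: TypeScript monorepo, pnpm workspaces, Node 20.
Conventions: no default exports; Zod for runtime validation.
Question: does src/auth/refresh.ts handle clock skew correctly?
Constraints: do not suggest new dependencies.
EOF
)

printf '%s\n' "$PROMPT" | head -1  # -> Stack: TypeScript monorepo, pnpm workspaces, Node 20.
```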
Best practice from developer research: review diffs and tests, not raw code.
1. Generate changes (primary agent writes code)
2. Package context for review (diff + key files + test results)
3. Review with oracle (critique: bugs, edge cases, missing tests)
4. Apply fixes + run tests
5. Repeat until critique converges
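The convergence loop above, sketched with a fake critique that converges after two rounds (placeholders only; a real run calls your agent and oracle at each step):

```shell
rounds=0
critique="2 issues found"
while [ "$critique" != "no issues" ] && [ "$rounds" -lt 5 ]; do
  rounds=$((rounds + 1))
  # 1-2. generate changes and package the diff (placeholder)
  # 3-4. get a critique and apply fixes; fake convergence after round 2
  if [ "$rounds" -ge 2 ]; then critique="no issues"; fi
done
echo "converged after $rounds rounds"  # -> converged after 2 rounds
```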
Effective review questions:
Both modes return structured responses:
For deep analysis, run in background:
```shell
# Repo Oracle (use run_in_background=true)
bun ${CLAUDE_PLUGIN_ROOT}/skills/oracle/scripts/oracle.ts "Audit this codebase for security issues"

# Web Oracle (use run_in_background=true)
npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p "Audit auth patterns" --file "src/auth/**"

# Poll
TaskOutput with block=false

# Get result (avoid context flooding)
tail -100 /path/to/output
```
Be careful with sensitive content (`.env`, key files, auth tokens). Redact aggressively; share only what's required.

| Mode | Model | Use When |
|---|---|---|
| Repo | gpt-5.2 xhigh | Codebase questions, finding gaps, code review |
| Web (default) | 5.2 Thinking + Heavy | External research, best practices, comparisons |
| Web (escalation) | gpt-5.2-pro | Complex multi-source research, deep synthesis |