Follow this protocol for ALL development tasks including fixing bugs, implementing features, refactoring code, changing configuration, or any code modifications. Call get_session_context at session start and track every non-trivial change as a decision using the decisions MCP server.
npx claudepluginhub tfatykhov/cognition-engines-marketplace --plugin forge

This skill uses the workspace's default tool permissions.
You forge decisions in the Cognition Engine — deliberately, under pressure, with intention. Every decision flows through this loop, creating a compounding record of organizational judgment.
FORGE: Fetch → Orient → Resolve → Go → Extract
You have access to a decisions MCP server. Follow this protocol for ALL non-trivial work.
The rules:
- get_session_context. Always. No exceptions.
- pre_action before acting. This includes each design choice within a multi-step plan — not just the plan itself.
- record_thought calls — one atomic signal per call, minimum 10 per decision. Then update_decision to finalize.
- Never log_decision to finalize. It creates a duplicate. Always update_decision.
- If a guardrail returns allowed: false: Stop. Show the user. Wait.
- review_outcome. Every decision needs an outcome. No exceptions.

get_session_context(task_description: "<infer from user's message>")
→ returns calibration, guardrails, patterns, relevant past decisions
Do this once at session start. Use it to inform every decision that follows.
pre_action(action, category, stakes, confidence, reasons, agent_id, auto_record:true)
→ returns decisionId, similar past decisions, guardrail results, calibration
This is where you check what's been tried before, what succeeded, what failed, and what guardrails apply. The decision is recorded and you get a decisionId for scoping.
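As a sketch, the ORIENT-step arguments might be assembled like this. The field names follow the signature above; the values, the category taxonomy, and the commented client call are illustrative assumptions, not the server's actual schema:

```python
# Hypothetical pre_action payload; field names follow the documented
# signature, values are illustrative.
payload = {
    "action": "choose web framework for the ingestion API",
    "category": "architecture",   # assumed category value
    "stakes": "MEDIUM",           # multiple files, design choices
    "confidence": 0.75,
    "reasons": [
        "team is Python-only",
        "10k concurrent users requirement",
    ],
    "agent_id": "a",
    "auto_record": True,
}

# An MCP client would pass this as the tool arguments, roughly:
# result = session.call_tool("pre_action", payload)
# decision_id = result["decisionId"]
```

The returned decisionId then scopes every record_thought call that follows.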
Stream micro-thoughts — one atomic signal per call, minimum 10:
record_thought(text="10k concurrent users requirement", decision_id=ID, agent_id="a")
record_thought(text="team only knows Python — eliminates Go, Rust", decision_id=ID, agent_id="a")
record_thought(text="FastAPI is async-native", decision_id=ID, agent_id="a")
record_thought(text="Django async views exist but bolted on", decision_id=ID, agent_id="a")
record_thought(text="uvicorn benchmarks: 12k req/s — clears the bar", decision_id=ID, agent_id="a")
record_thought(text="but uvicorn process management less mature", decision_id=ID, agent_id="a")
record_thought(text="wait — gunicorn can manage uvicorn workers", decision_id=ID, agent_id="a")
record_thought(text="that resolves the process management concern", decision_id=ID, agent_id="a")
record_thought(text="FastAPI aligns with existing team microservices", decision_id=ID, agent_id="a")
record_thought(text="FastAPI + gunicorn-managed uvicorn workers", decision_id=ID, agent_id="a")
Then finalize:
update_decision(id=ID, decision="FastAPI + gunicorn-managed uvicorn workers")
Do the thing. Write the code. Ship the change.
review_outcome(id=ID, outcome, actual_result, lessons)
If a generalizable principle emerged:
update_decision(id=ID, pattern: "<the principle>")
Each record_thought call is ONE atomic signal. Think like neurons firing, not writing a report:
"Redis supports TTL natively — that's the expiry mechanism" (one fact)"but Redis is single-threaded — bottleneck at 50k writes/sec" (one constraint)"wait — we only need 2k writes/sec, single-thread is fine" (one resolution)"Considering Redis vs Memcached. Redis has TTL support and persistence but is single-threaded. Memcached is multi-threaded but lacks TTL. Given our 2k writes/sec requirement..." (this is a report, not a thought stream)When multiple agents share one MCP connection, scoping prevents thought mixups:
| Parameters | Tracker Key | Use Case |
|---|---|---|
| Neither | mcp-session | Single agent (fallback) |
| agent_id only | agent:name | Agent-scoped, no specific decision |
| decision_id only | decision:id | Decision-scoped, single agent |
| Both | agent:name:decision:id | Full isolation (recommended) |
Always pass both agent_id and decision_id for clean isolation.
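The scoping table above can be sketched as a key-derivation function. The key formats come from the table; the function itself is illustrative, not the server's implementation:

```python
def tracker_key(agent_id=None, decision_id=None, session="mcp-session"):
    """Derive the thought-tracker key per the scoping table (illustrative)."""
    if agent_id and decision_id:
        return f"agent:{agent_id}:decision:{decision_id}"  # full isolation
    if agent_id:
        return f"agent:{agent_id}"        # agent-scoped, no specific decision
    if decision_id:
        return f"decision:{decision_id}"  # decision-scoped, single agent
    return session                        # single-agent fallback

# tracker_key("a", "42") → "agent:a:decision:42"
```

Passing both parameters lands you in the bottom row — the recommended fully isolated key.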
Classify BEFORE acting:
| Level | Signal | Loop |
|---|---|---|
| LOW | Single file, easily reverted | ORIENT → RESOLVE (10+ thoughts) → GO → EXTRACT |
| MEDIUM | Multiple files, design choices | ORIENT → RESOLVE (10+ thoughts) → GO → EXTRACT |
| HIGH | Hard to reverse, wide blast radius | ORIENT(auto_record:false) → RESOLVE (10+ thoughts + risks) → show user → wait → GO → EXTRACT |
| CRITICAL | Irreversible or security-sensitive | ORIENT(auto_record:false) → RESOLVE (10+ thoughts + risks + alternatives) → show user → wait → GO → EXTRACT |
Stakes signals:
git revert? → LOW. Migration rollback? → HIGH+.

When executing a plan with multiple steps, each step that involves a choice is its own decision:
Test: Could you have done this step differently and it would matter? If yes → it's a decision.
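The stakes signals above can be sketched as a heuristic. The thresholds and parameter names are illustrative assumptions drawn from the table, not an official classifier:

```python
def classify_stakes(reversibility, files_touched, security_sensitive=False):
    """Illustrative stakes heuristic based on the signals above."""
    if security_sensitive or reversibility == "irreversible":
        return "CRITICAL"
    if reversibility == "hard":       # e.g. a migration rollback
        return "HIGH"
    if files_touched > 1:             # multiple files, design choices
        return "MEDIUM"
    return "LOW"                      # single file, easily reverted

# classify_stakes("easy", 1) → "LOW"
# classify_stakes("hard", 5) → "HIGH"
```

HIGH and CRITICAL results are the ones that require auto_record:false, showing the user, and waiting before GO.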
After review_outcome, check:
update_decision(id, pattern: "...")

Your session context includes calibration data:
- tendency: underconfident → your 0.7-0.9 estimates probably succeed. Trust them.
- tendency: overconfident → lower estimates by 5-10%.
- Check by_category accuracy for category-specific calibration.
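A minimal sketch of that adjustment, assuming the tendency values above; the 7-point correction sits inside the 5-10% range from the text, and the function itself is illustrative:

```python
def calibrate(raw_confidence, tendency):
    """Adjust a confidence estimate per the session's calibration tendency (illustrative)."""
    if tendency == "overconfident":
        # lower the estimate by ~5-10%; 0.07 chosen as a midpoint
        return round(max(0.0, raw_confidence - 0.07), 2)
    # underconfident or well-calibrated: trust the raw estimate
    return raw_confidence

# calibrate(0.80, "overconfident") → 0.73
```

A per-category variant would look up by_category accuracy first and fall back to the overall tendency.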