From ritual-dapp-skills
Biases agent to resolve autonomously: search skills/code/docs/context before questioning user. Tracks turn budgets, flags low-confidence decisions, announces routine actions.
Install with `npx claudepluginhub ritual-foundation/ritual-dapp-skills --plugin ritual-dapp-skills`. This skill uses the workspace's default tool permissions.
Steer the agent toward autonomous resolution over interactive questioning. The default mode is: resolve it yourself. The exception mode is: ask the user, through `skills/ritual-meta-human-in-loop/SKILL.md`.
Before asking the user, exhaust these in order:

1. Skill search: keyword lookup across the bundled skills
2. Chain query: read the relevant contract state
3. The user's code: search their files for the pattern in question
4. Context inference: reuse information already in the conversation
Confidence gate: if the self-resolved answer has low confidence (the skill says something "might" work, or the chain query returns ambiguous data), proceed with it but flag it:

> I'm going with [approach] based on [source]. This might not be exactly right for your case — if the result looks off, let me know and I'll adjust.
This is "graduated non-interactivity" — neither fully blocking nor silently wrong.
Every time the agent is about to generate a question, first generate the search that would answer it:
> About to ask: "[question]"
>
> Self-resolution attempt:
> - Skill search: [which skill, what keyword] → [found / not found]
> - Chain query: [what contract/function] → [result / N/A]
> - User's code: [which file, what pattern] → [found / not found]
> - Context inference: [what prior information] → [sufficient / insufficient]
>
> Resolution: [answer found — proceed] or [genuinely unresolvable — ask via human-in-loop]
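The gate above can be sketched as a small dispatcher that tries each resolution source in the skill's fixed order and only escalates when all of them come up empty. The function name, resolver-list shape, and example resolvers are illustrative assumptions, not part of the skill's actual interface:

```python
from typing import Callable, Optional

def self_resolve(question: str,
                 resolvers: list[tuple[str, Callable[[str], Optional[str]]]]
                 ) -> tuple[str, str]:
    """Try each source in order; return ("resolved", answer) or ("ask_user", question)."""
    for source, resolver in resolvers:
        answer = resolver(question)
        if answer is not None:
            return ("resolved", f"{answer} (source: {source})")
    # Genuinely unresolvable: escalate via the human-in-loop skill.
    return ("ask_user", question)

# Hypothetical resolvers, in the skill's order. Only the third finds an answer.
resolvers = [
    ("skill search",      lambda q: None),          # not found
    ("chain query",       lambda q: None),          # N/A
    ("user's code",       lambda q: "uses viem"),   # found in the user's files
    ("context inference", lambda q: None),
]

status, result = self_resolve("Which web3 client library?", resolvers)
```

The point of the sketch is the control flow: asking the user is the fall-through case, never the first branch.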
Token costs are not observable in most harnesses. Turns are. Ask the user early (after the build plan is generated, not before):
> The build plan has roughly [N] steps. At current pace, that's about [M] back-and-forths.
> Does that sound reasonable, or should I aim to be more concise?

- A: That's fine, take your time
- B: Try to be more concise
- C: I have a hard limit of [X] turns/dollars
- D: Don't care — just get it right
Track the ratio: `remaining_turns_estimate / remaining_steps`.
| Ratio | Behavior |
|---|---|
| > 2.0 | Slack — full verification, exploratory tangents OK |
| 1.0–2.0 | Normal — standard verification, stay on-plan |
| 0.5–1.0 | Tight — lint-only verification, skip nice-to-haves |
| < 0.5 | Critical — stop, summarize what's done, hand off cleanly |
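The table maps directly onto a small helper. The function name and tier strings are hypothetical; a ratio of exactly 1.0 falls in two rows of the table, and this sketch resolves it to the more generous tier as an assumption:

```python
def budget_mode(remaining_turns_estimate: float, remaining_steps: int) -> str:
    """Map the turn-budget ratio to a behavior tier (per the table above)."""
    if remaining_steps <= 0:
        return "slack"  # nothing left to do; edge case, not in the table
    ratio = remaining_turns_estimate / remaining_steps
    if ratio > 2.0:
        return "slack"      # full verification, exploratory tangents OK
    if ratio >= 1.0:
        return "normal"     # standard verification, stay on-plan
    if ratio >= 0.5:
        return "tight"      # lint-only verification, skip nice-to-haves
    return "critical"       # stop, summarize what's done, hand off cleanly
```

Recompute this after every user turn, since both the estimate and the remaining step count move as the build progresses.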
| Data | Cache Duration | Rationale |
|---|---|---|
| Executor list from TEEServiceRegistry | 100 blocks (~35s at ~350ms baseline) | Registrations change infrequently |
| RitualWallet balance | 10 blocks (~3.5s at ~350ms baseline) | Deposits/withdrawals are user-initiated |
| Sender lock state | Never cache | Changes with every async tx submit/settle |
| Contract deployment (cast code) | Permanent once confirmed | Contracts don't un-deploy |
| Block number / chain connectivity | 1 block | Health check — always fresh |
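The TTL policy above can be expressed as a block-height-based cache rather than a wall-clock one, since the durations are specified in blocks. The key names, class, and structure here are illustrative assumptions drawn from the table, not the skill's actual implementation:

```python
# TTLs in blocks, per the table above. None = never cache;
# float("inf") = permanent once confirmed (contracts don't un-deploy).
CACHE_TTL_BLOCKS = {
    "executor_list":   100,           # TEEServiceRegistry executor list
    "wallet_balance":  10,            # RitualWallet balance
    "sender_lock":     None,          # changes every async tx submit/settle
    "deployment_code": float("inf"),  # cast code, once confirmed
    "block_number":    1,             # health check, always fresh
}

class BlockCache:
    """Minimal sketch: entries expire after their TTL in blocks elapses."""
    def __init__(self):
        self._store = {}  # key -> (value, cached_at_block)

    def get(self, key, current_block):
        ttl = CACHE_TTL_BLOCKS.get(key)
        if ttl is None:
            return None  # never-cache keys always miss
        entry = self._store.get(key)
        if entry is None:
            return None
        value, cached_at = entry
        if current_block - cached_at >= ttl:
            del self._store[key]
            return None  # stale: force a fresh chain query
        return value

    def put(self, key, value, current_block):
        if CACHE_TTL_BLOCKS.get(key) is not None:
            self._store[key] = (value, current_block)
```

At the ~350ms block baseline from the table, the 100-block executor-list TTL works out to roughly 35 seconds of reuse before the next registry query.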
Between "silently do it" and "ask the user," there's a middle state: announce what you're doing without requesting approval.
> I'm setting up Scheduler chaining to handle your two-step workflow (HTTP fetch → LLM analysis). This is the standard approach since Ritual allows only one short-running async precompile call per transaction.
Use inform-without-asking for routine actions. The user can object ("actually, I'd rather...") but the agent doesn't block waiting for approval.
Despite the non-interactive bias, always ask for the cases defined in `skills/ritual-meta-human-in-loop/SKILL.md` and `skills/ritual-meta-circuit-breaker/SKILL.md`.