    npx claudepluginhub plurigrid/asi --plugin asi

This skill uses the workspace's default tool permissions.
**Trit**: 0 (ZERO)
**Domain**: Reinforcement Learning / World Transitions
**Principle**: Worlds (a-z) as successor worlds with GF(3)-balanced sampling
A maximally snapshotted replay buffer system for storing and retrieving world-transitions with:
REPLAY: WorldState × Action → WorldState' × Observation × Reward
GF3_COLOR: Experience → {-1, 0, +1}
TRIT_TICK: 1 / 141_120_000 seconds ≈ 7.09 nanoseconds
┌─────────────────────────────────────────────────────────┐
│                   World Replay Buffer                   │
├─────────────────────────────────────────────────────────┤
│ replay_buffer.lpy     │ Pure Basilisp in-memory buffer  │
│ replay_buffer.py      │ Python + DuckDB/VSS persistence │
│ replay_orchestrator.py│ Unified orchestrator with DB    │
│ replay_bridge.lpy     │ Basilisp-Python interop bridge  │
└─────────────────────────────────────────────────────────┘
;; Basilisp experience structure
{:world-from "world-a"
 :world-to   "world-b"
 :action     {:play [:move :forward]}
 :obs        {:coplay [:sensor :reading]}
 :reward     1.0
 :timestamp  1711471200.0
 :gf3-color  1}  ; PLUS
Uses SplitMix64 deterministic hashing for reproducible coloring:
def gf3_color(content: str) -> int:
    """GF(3) classification via SplitMix64 hash."""
    h = splitmix64_hash(content)
    return (h % 3) - 1  # {-1, 0, +1}
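The `splitmix64_hash` helper is not shown above; a minimal self-contained sketch is below. The SplitMix64 mixing constants are the standard ones, but seeding the hash from the UTF-8 bytes of the content is an assumption about how the skill derives its seed.

```python
MASK64 = (1 << 64) - 1

def splitmix64_hash(content: str) -> int:
    """Hash a string via SplitMix64: seed from the UTF-8 bytes
    (truncated to 64 bits, an assumption), then apply the standard
    SplitMix64 increment and finalization steps."""
    x = int.from_bytes(content.encode("utf-8"), "big") & MASK64
    z = (x + 0x9E3779B97F4B07C15) & MASK64           # golden-gamma increment
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return z ^ (z >> 31)

def gf3_color(content: str) -> int:
    """GF(3) classification via SplitMix64 hash."""
    return (splitmix64_hash(content) % 3) - 1        # {-1, 0, +1}
```

Because every SplitMix64 step is invertible on 64-bit words, distinct 64-bit seeds map to distinct hashes, so the coloring is deterministic and reproducible across runs.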
Worlds are labeled a-z as successor worlds, NOT todos:
world-a → world-b → world-c → ... → world-z
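Under that labeling, the successor relation can be sketched as a small helper (the name `successor_world` is hypothetical, and the behavior after `world-z` is unspecified here, so the sketch simply stops):

```python
import string

def successor_world(world: str) -> str:
    """Return the next world label: world-a -> world-b -> ... -> world-z."""
    letter = world.rsplit("-", 1)[1]                 # e.g. "world-a" -> "a"
    idx = string.ascii_lowercase.index(letter)
    if idx + 1 >= len(string.ascii_lowercase):
        raise ValueError("world-z has no successor")  # past-z behavior unspecified
    return f"world-{string.ascii_lowercase[idx + 1]}"
```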
Each transition stores the source and destination worlds, the action, the observation, the reward, a nanosecond timestamp, and its GF(3) color.
Balanced sampling across GF(3) classes ensures no class dominates:
(defn sample-balanced-gf3
  "Sample experiences balanced across GF(3) classes."
  [buffer n]
  (let [by-color  (group-by :gf3-color buffer)
        per-class (max 1 (quot n 3))]
    (->> (vals by-color)
         (mapcat #(take per-class (shuffle %)))
         (take n))))
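A hypothetical Python port of the same group/shuffle/take logic, for the DuckDB-backed side (the function and key names are assumptions mirroring the Basilisp version):

```python
import random
from collections import defaultdict

def sample_balanced_gf3(buffer, n, rng=random):
    """Sample up to n experiences, balanced across the three GF(3)
    color classes. Each experience is a dict with a 'gf3_color' key."""
    by_color = defaultdict(list)
    for exp in buffer:
        by_color[exp["gf3_color"]].append(exp)
    per_class = max(1, n // 3)                # at least one per class
    picked = []
    for experiences in by_color.values():
        pool = list(experiences)
        rng.shuffle(pool)                     # random within each class
        picked.extend(pool[:per_class])
    return picked[:n]
```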
CREATE SEQUENCE IF NOT EXISTS exp_id_seq;

CREATE TABLE IF NOT EXISTS experiences (
    id           INTEGER PRIMARY KEY DEFAULT nextval('exp_id_seq'),
    world_from   TEXT NOT NULL,
    world_to     TEXT NOT NULL,
    action_json  TEXT NOT NULL,
    obs_json     TEXT NOT NULL,
    reward       DOUBLE NOT NULL,
    timestamp_ns BIGINT NOT NULL,
    gf3_color    INTEGER NOT NULL,
    content_hash TEXT UNIQUE NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_gf3 ON experiences(gf3_color);
CREATE INDEX IF NOT EXISTS idx_world_from ON experiences(world_from);
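The UNIQUE constraint on `content_hash` is what makes the log deduplicating: re-inserting an identical experience is a no-op. For illustration only, the sketch below exercises that behavior with Python's built-in `sqlite3` rather than DuckDB (so the SEQUENCE/`nextval` default becomes SQLite's implicit rowid), and uses SHA-256 over a canonical JSON payload as a stand-in for the skill's SplitMix64-based content hash; the `store` helper is hypothetical.

```python
import hashlib
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE IF NOT EXISTS experiences (
        id INTEGER PRIMARY KEY,      -- SQLite rowid stands in for nextval()
        world_from TEXT NOT NULL,
        world_to TEXT NOT NULL,
        action_json TEXT NOT NULL,
        obs_json TEXT NOT NULL,
        reward DOUBLE NOT NULL,
        timestamp_ns BIGINT NOT NULL,
        gf3_color INTEGER NOT NULL,
        content_hash TEXT UNIQUE NOT NULL
    )
""")

def store(world_from, world_to, action, obs, reward, ts_ns, color):
    """Insert one experience; identical content is silently deduplicated.
    The hash deliberately excludes the timestamp (an assumption) so that
    replaying the same transition later still counts as a duplicate."""
    payload = json.dumps([world_from, world_to, action, obs, reward],
                         sort_keys=True)
    h = hashlib.sha256(payload.encode()).hexdigest()
    con.execute(
        "INSERT OR IGNORE INTO experiences "
        "(world_from, world_to, action_json, obs_json, reward, "
        "timestamp_ns, gf3_color, content_hash) VALUES (?,?,?,?,?,?,?,?)",
        (world_from, world_to, json.dumps(action), json.dumps(obs),
         reward, ts_ns, color, h),
    )

store("world-a", "world-b", {"type": "move"}, {"type": "sensor"}, 1.0, 0, 1)
store("world-a", "world-b", {"type": "move"}, {"type": "sensor"}, 1.0, 0, 1)
count = con.execute("SELECT COUNT(*) FROM experiences").fetchone()[0]
```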
(ns replay-buffer)

;; Add experience
(def exp {:world-from "world-a"
          :world-to   "world-b"
          :action     {:type :move}
          :obs        {:type :sensor}
          :reward     1.0})

(add-experience! buffer exp)

;; Sample balanced
(sample-balanced-gf3 @buffer 10)
from replay_orchestrator import ReplayOrchestrator

orch = ReplayOrchestrator()
orch.store_experience(
    world_from="world-a",
    world_to="world-b",
    action={"type": "move"},
    observation={"type": "sensor"},
    reward=1.0,
)

samples = orch.sample_balanced(n=10)
(ns replay-bridge
  (:import importlib))

(def orch (get-orchestrator))
(store-experience! orch {:world-from "world-a" ...})
This skill participates in triadic composition:
import time

TRIT_TICK = 1 / 141_120_000  # ~7.09 nanoseconds
timestamp_tritticks = int(time.time() / TRIT_TICK)
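A quick sanity check of the tick arithmetic (the converter names are hypothetical): one trit-tick is 1/141,120,000 s ≈ 7.09 ns, so conversions round-trip exactly at whole seconds.

```python
TICKS_PER_SECOND = 141_120_000        # 1 trit-tick = 1/141_120_000 s

def to_tritticks(seconds: float) -> int:
    """Convert a timestamp in seconds to whole trit-ticks."""
    return int(seconds * TICKS_PER_SECOND)

def to_seconds(ticks: int) -> float:
    """Convert trit-ticks back to seconds."""
    return ticks / TICKS_PER_SECOND
```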
Located in /Users/alice/worlds/:
replay_buffer.lpy - Pure Basilisp implementation
replay_buffer.py - Python with DuckDB
replay_orchestrator.py - Unified orchestrator
replay_bridge.lpy - Basilisp-Python bridge

Skill Name: world-replay-buffer
Type: Reinforcement Learning / Experience Storage
Trit: 0 (ZERO)
GF(3): Conserved in triplet composition
Condition: μ(n) ≠ 0 (Möbius squarefree)
This skill is qualified for non-backtracking geodesic traversal:
Geodesic Invariant:
∀ path P: backtrack(P) = ∅ ⟹ μ(|P|) ≠ 0
World Transition:
world_a →[action]→ world_b →[action]→ world_c
GF(3) Balance:
|{exp : color = -1}| ≈ |{exp : color = 0}| ≈ |{exp : color = +1}|