HtmlGraph session tracking and documentation skill. Activated automatically at session start to ensure proper activity attribution, feature awareness, and documentation habits. Use when working with HtmlGraph-enabled projects, when drift warnings appear, or when the user asks about tracking features or sessions.
/plugin marketplace add Shakes-tzd/htmlgraph
/plugin install htmlgraph@htmlgraph
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Use this skill when HtmlGraph is tracking the session to ensure proper activity attribution and documentation. Activate this skill at session start via the SessionStart hook.
→ READ ../../../AGENTS.md FOR COMPLETE SDK DOCUMENTATION
The root AGENTS.md file contains:
- deploy-all.sh script
This file (SKILL.md) contains Claude Code-specific instructions only.
For SDK usage, deployment, and general agent workflows → USE AGENTS.md
Trigger keywords: htmlgraph, feature tracking, session tracking, drift detection, activity log, work attribution, feature status, session management
IMPORTANT: For Claude Code, use the Python SDK directly instead of MCP tools.
Why SDK over MCP:
The SDK provides access to ALL HtmlGraph operations without adding tool definitions to your context.
ABSOLUTE RULE: DO NOT use Read, Write, or Edit tools on .htmlgraph/ HTML files.
Use the SDK (or API/CLI for special cases) to ensure all HTML is validated through Pydantic + justhtml.
❌ FORBIDDEN:
# NEVER DO THIS
Write('/path/to/.htmlgraph/features/feature-123.html', ...)
Edit('/path/to/.htmlgraph/sessions/session-456.html', ...)
with open('.htmlgraph/features/feature-123.html', 'w') as f:
f.write('<html>...</html>')
✅ REQUIRED - Use SDK (BEST CHOICE FOR AI AGENTS):
from htmlgraph import SDK
sdk = SDK(agent="claude")
# Work with ANY collection (features, bugs, chores, spikes, epics, phases)
sdk.features # Features with builder support
sdk.bugs # Bug reports
sdk.chores # Maintenance tasks
sdk.spikes # Investigation spikes
sdk.epics # Large bodies of work
sdk.phases # Project phases
# Create features (fluent interface)
feature = sdk.features.create("Title") \
.set_priority("high") \
.add_steps(["Step 1", "Step 2"]) \
.save()
# Edit ANY collection (auto-saves)
with sdk.features.edit("feature-123") as f:
f.status = "done"
with sdk.bugs.edit("bug-001") as bug:
bug.status = "in-progress"
bug.priority = "critical"
# Vectorized batch updates (efficient!)
sdk.bugs.batch_update(
["bug-001", "bug-002", "bug-003"],
{"status": "done", "resolution": "fixed"}
)
# Query across collections
high_priority = sdk.features.where(status="todo", priority="high")
in_progress_bugs = sdk.bugs.where(status="in-progress")
# All collections have same interface
sdk.chores.mark_done(["chore-1", "chore-2"])
sdk.spikes.assign(["spike-1"], agent="claude")
Why the SDK is best:
- Every HtmlGraph operation is available without adding tool definitions to your context
- No per-command startup cost (the CLI pays ~400ms per invocation)
✅ ALTERNATIVE - Use CLI (for one-off commands):
# CLI is slower (400ms startup per command) but convenient for one-off queries
uv run htmlgraph feature create/start/complete
uv run htmlgraph status
⚠️ AVOID - API/curl (use only for remote access):
# Requires server + network overhead, only use for remote access
curl -X PATCH localhost:8080/api/features/feat-123 -d '{"status": "done"}'
Why this matters:
- Direct file edits bypass Pydantic + justhtml validation and can corrupt the graph
NO EXCEPTIONS: NEVER read, write, or edit .htmlgraph/ files directly.
Use the SDK for ALL operations including inspection:
# ✅ CORRECT - Inspect sessions/events via SDK
from htmlgraph import SDK
from htmlgraph.session_manager import SessionManager
sdk = SDK(agent="claude-code")
sm = SessionManager()
# Get current session
session = sm.get_active_session(agent="claude-code")
# Get recent events (last 10)
recent = session.get_events(limit=10, offset=session.event_count - 10)
for evt in recent:
print(f"{evt['event_id']}: {evt['tool']} - {evt['summary']}")
# Query events by tool
bash_events = session.query_events(tool='Bash', limit=20)
# Query events by feature
feature_events = session.query_events(feature_id='feat-123')
# Get event statistics
stats = session.event_stats()
print(f"Total: {stats['total_events']}, Tools: {stats['tools_used']}")
❌ FORBIDDEN - Reading files directly:
# NEVER DO THIS
with open('.htmlgraph/events/session-123.jsonl') as f:
events = [json.loads(line) for line in f]
# NEVER DO THIS
tail -10 .htmlgraph/events/session-123.jsonl
Documentation:
- docs/SDK_FOR_AI_AGENTS.md
- docs/SDK_EVENT_INSPECTION.md
- docs/AGENTS.md
Always know which feature(s) are currently in progress:
uv run htmlgraph status
Mark each step complete IMMEDIATELY after finishing it:
ABSOLUTE REQUIREMENT: Track ALL work in HtmlGraph.
HtmlGraph tracking is like Git commits - never do work without tracking it.
Update HtmlGraph immediately after completing each piece of work:
Why this matters:
The hooks track tool usage automatically, but YOU must:
- Start features explicitly (uv run htmlgraph feature start <id>)
- Complete features explicitly (uv run htmlgraph feature complete <id>)
HtmlGraph automatically tracks tool usage. Action items:
- Provide the description parameter when using Bash
For every significant piece of work:
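A minimal sketch of that loop, using only SDK calls shown elsewhere in this skill (the title and priority are placeholder values):
# Minimal per-work-item loop (title/priority are placeholders)
from htmlgraph import SDK

sdk = SDK(agent="claude")

feature = sdk.features.create("Example work item") \
    .set_priority("medium") \
    .save()

with sdk.features.edit(feature.id) as f:
    f.status = "in-progress"

# ... do the work, marking steps complete as you go ...

with sdk.features.edit(feature.id) as f:
    f.status = "done"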
Tracks are high-level containers for multi-feature work (conductor-style planning):
When to create a track:
- Multi-feature work that needs a spec and a phased implementation plan
When to skip tracks:
- Single features or small fixes that don't need conductor-style planning
IMPORTANT: Use the TrackBuilder for deterministic track creation with minimal effort.
The TrackBuilder provides a fluent API that auto-generates IDs, timestamps, file paths, and HTML files.
from htmlgraph import SDK
sdk = SDK(agent="claude")
# Create complete track with spec and plan in one command
track = sdk.tracks.builder() \
.title("User Authentication System") \
.description("Implement OAuth 2.0 authentication with JWT") \
.priority("high") \
.with_spec(
overview="Add secure authentication with OAuth 2.0 support for Google and GitHub",
context="Current system has no authentication. Users need secure login with session management.",
requirements=[
("Implement OAuth 2.0 flow", "must-have"),
("Add JWT token management", "must-have"),
("Create user profile endpoint", "should-have"),
"Add password reset functionality" # Defaults to "must-have"
],
acceptance_criteria=[
("Users can log in with Google/GitHub", "OAuth integration test passes"),
"JWT tokens expire after 1 hour",
"Password reset emails sent within 5 minutes"
]
) \
.with_plan_phases([
("Phase 1: OAuth Setup", [
"Configure OAuth providers (1h)",
"Implement OAuth callback (2h)",
"Add state verification (1h)"
]),
("Phase 2: JWT Integration", [
"Create JWT signing logic (2h)",
"Add token refresh endpoint (1.5h)",
"Implement token validation middleware (2h)"
]),
("Phase 3: User Management", [
"Create user profile endpoint (3h)",
"Add password reset flow (4h)",
"Write integration tests (3h)"
])
]) \
.create()
# Output:
# ✅ Created track: track-20251221-220000
# - Spec with 4 requirements
# - Plan with 3 phases, 9 tasks
# Files created automatically:
# .htmlgraph/tracks/track-20251221-220000/index.html (track metadata)
# .htmlgraph/tracks/track-20251221-220000/spec.html (specification)
# .htmlgraph/tracks/track-20251221-220000/plan.html (implementation plan)
TrackBuilder Features:
- Auto-generates IDs, timestamps, file paths, and HTML files
- Parses time estimates from task descriptions, e.g., "Task (2h)"
- A single .create() call generates everything
After creating a track, link features to it:
from htmlgraph import SDK
sdk = SDK(agent="claude")
# Get the track ID from the track you created
track_id = "track-20251221-220000"
# Create features and link to track
oauth_feature = sdk.features.create("OAuth Integration") \
.set_track(track_id) \
.set_priority("high") \
.add_steps([
"Configure OAuth providers",
"Implement OAuth callback",
"Add state verification"
]) \
.save()
jwt_feature = sdk.features.create("JWT Token Management") \
.set_track(track_id) \
.set_priority("high") \
.add_steps([
"Create JWT signing logic",
"Add token refresh endpoint",
"Implement validation middleware"
]) \
.save()
# Features are now linked to the track
# Query features by track:
track_features = sdk.features.where(track=track_id)
print(f"Track has {len(track_features)} features")
The track_id field:
- Links a feature to its parent track
- Is queryable: sdk.features.where(track=track_id)
Complete workflow from track creation to feature completion:
from htmlgraph import SDK
sdk = SDK(agent="claude")
# 1. Create track with spec and plan
track = sdk.tracks.builder() \
.title("API Rate Limiting") \
.description("Protect API endpoints from abuse") \
.priority("critical") \
.with_spec(
overview="Implement rate limiting to prevent API abuse",
context="Current API has no limits, vulnerable to DoS attacks",
requirements=[
("Implement token bucket algorithm", "must-have"),
("Add Redis for distributed limiting", "must-have"),
("Create rate limit middleware", "must-have")
],
acceptance_criteria=[
("100 requests/minute per API key", "Load test passes"),
"429 status code when limit exceeded"
]
) \
.with_plan_phases([
("Phase 1: Core", ["Token bucket (3h)", "Redis client (1h)"]),
("Phase 2: Integration", ["Middleware (2h)", "Error handling (1h)"]),
("Phase 3: Testing", ["Unit tests (2h)", "Load tests (3h)"])
]) \
.create()
# 2. Create features from plan phases
for phase_idx, (phase_name, tasks) in enumerate([
("Core Implementation", ["Implement token bucket", "Add Redis client"]),
("API Integration", ["Create middleware", "Add error handling"]),
("Testing & Validation", ["Write unit tests", "Run load tests"])
]):
feature = sdk.features.create(phase_name) \
.set_track(track.id) \
.set_priority("critical") \
.add_steps(tasks) \
.save()
print(f"ā Created feature {feature.id} for track {track.id}")
# 3. Work on features
# Start first feature
first_feature = sdk.features.where(track=track.id, status="todo")[0]
with sdk.features.edit(first_feature.id) as f:
f.status = "in-progress"
# ... do the work ...
# Mark steps complete as you finish them
with sdk.features.edit(first_feature.id) as f:
f.steps[0].completed = True
# Complete feature when done
with sdk.features.edit(first_feature.id) as f:
f.status = "done"
# 4. Track progress
track_features = sdk.features.where(track=track.id)
completed = len([f for f in track_features if f.status == "done"])
print(f"Track progress: {completed}/{len(track_features)} features complete")
Methods:
- .title(str) - Set track title (REQUIRED)
- .description(str) - Set description (optional)
- .priority(str) - Set priority: "low", "medium", "high", "critical" (default: "medium")
- .with_spec(...) - Add specification (optional)
  - overview - High-level summary
  - context - Background and current state
  - requirements - List of (description, priority) tuples or strings
  - acceptance_criteria - List of (description, test_case) tuples or strings
- .with_plan_phases(list) - Add plan phases (optional)
  - Format: [(phase_name, [task_descriptions]), ...]
  - Time estimates via (Xh) in description, e.g., "Implement auth (3h)"
- .create() - Execute build and create all files (returns Track object)
Documentation:
- docs/TRACK_BUILDER_QUICK_START.md
- docs/TRACK_WORKFLOW.md
- docs/AGENT_FRIENDLY_SDK.md
NEW: HtmlGraph enforces the workflow via a PreToolUse validation hook that ensures code changes are always tracked.
The validation hook runs BEFORE every tool execution and makes decisions based on your current work item:
VALIDATION RULES:
| Scenario | Tool | Action | Reason |
|---|---|---|---|
| Active Feature | Read | ✅ Allow | Exploration is always allowed |
| Active Feature | Write/Edit/Delete | ✅ Allow | Code changes match active feature |
| Active Spike | Read | ✅ Allow | Spikes permit exploration |
| Active Spike | Write/Edit/Delete | ⚠️ Warn + Allow | Planning spike, code changes not tracked |
| Auto-Spike (session-init) | All | ✅ Allow | Planning phase, don't block |
| No Active Work | Read | ✅ Allow | Exploration without feature is OK |
| No Active Work | Write/Edit/Delete (1 file) | ⚠️ Warn + Allow | Single-file changes often trivial |
| No Active Work | Write/Edit/Delete (3+ files) | ❌ Deny | Requires explicit feature creation |
| SDK Operations | All | ✅ Allow | Creating work items always allowed |
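As a mental model, here is a hypothetical Python sketch of the table above; the function name, arguments, and return strings are illustrative, not the hook's actual API:
# Hypothetical sketch of the validation rules above (not the hook's real API)
def validate(tool: str, active_item: str | None, files_touched: int) -> str:
    if tool == "Read":
        return "allow"          # exploration is always allowed
    if active_item in ("feature", "auto-spike"):
        return "allow"          # code changes match tracked work / planning phase
    if active_item == "spike":
        return "warn-allow"     # planning spike: allowed, but changes not tracked
    if files_touched >= 3:
        return "deny"           # multi-file changes need an explicit feature
    return "warn-allow"         # single-file changes are often trivial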
Validation DENIES code changes (Write/Edit/Delete) when ALL of these are true:
- No active work item (feature or spike) exists
- The tool is Write, Edit, or Delete
- The change touches 3+ files
What you see:
PreToolUse Validation: Cannot proceed without active work item
- Reason: Multi-file changes (5 files) without tracked work item
- Action: Create a feature first with uv run htmlgraph feature create
Resolution: Create a feature using the feature decision framework, then try again.
Validation WARNS BUT ALLOWS when:
- No active work item exists and the change touches a single file, or
- An active spike receives Write/Edit/Delete (code changes in a planning spike)
What you see:
PreToolUse Validation: Warning - activity may not be tracked
- File: src/config.py (1 file)
- Reason: Single-file change without active feature
- Option: Create feature if this is significant work
You can continue - but consider if the work deserves a feature.
Auto-spikes are automatic planning spikes created during session initialization.
When the validation hook detects the start of a new session:
- It auto-creates a planning spike (e.g., spike-session-init-abc123)
Why auto-spikes?
- Session-start exploration is never blocked (planning phase, don't block)
- Early work is still attributed somewhere until a real feature takes over
Example auto-spike lifecycle:
Session Start
  ↓
Auto-spike created: spike-session-init-20251225
  ↓
Investigation/exploration work
  ↓
"This needs to be a feature" → Create feature, link to spike
  ↓
Feature takes primary attribution
  ↓
Spike marked as resolved
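A minimal sketch of inspecting and closing the auto-spike, using the collection interface shown above (the spike ID and status values are placeholders):
# Sketch: find and resolve the session's auto-spike (ID/status are placeholders)
from htmlgraph import SDK

sdk = SDK(agent="claude")

# Query spikes the same way as any other collection
for spike in sdk.spikes.where(status="in-progress"):
    print(spike.id)

# Once a real feature has taken over attribution, close the spike
with sdk.spikes.edit("spike-session-init-20251225") as s:
    s.status = "done"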
Use this framework to decide if you need a feature before making code changes:
User request or idea
├─ Single file, <30 min? → DIRECT CHANGE (validation warns, allows)
├─ 3+ files? → CREATE FEATURE (validation denies without feature)
├─ New tests needed? → CREATE FEATURE (validation blocks)
├─ Multi-component impact? → CREATE FEATURE (validation blocks)
├─ Hard to revert? → CREATE FEATURE (validation blocks)
├─ Needs documentation? → CREATE FEATURE (validation blocks)
└─ Otherwise → DIRECT CHANGE (validation warns, allows)
Key insight: Validation's deny threshold (3+ files) aligns with the feature decision threshold in CLAUDE.md.
Situation: You just started a new session. No features are active.
# Session starts → auto-spike created automatically
# spike-session-init-20251225 is now active (auto-created)
# All of these work WITHOUT creating a feature:
- Read code files (exploration)
- Write to a single file (validation warns but allows)
- Create a feature (SDK operation, always allowed)
- Ask the user what to work on
Flow:
1. Explore under the auto-spike until real work emerges
2. Create the feature: uv run htmlgraph feature create "User Authentication"
3. Continue working under the new feature
Result: Work is properly attributed to the feature, not the throwaway auto-spike.
Situation: User says "Build a user authentication system"
WITHOUT feature:
# Try to edit 5 files without creating a feature
# (attempt any Write/Edit that spans 5 files)
# Validation DENIES:
# ❌ PreToolUse Validation: Cannot proceed without active work item
# Reason: Multi-file changes (5 files) without tracked work item
# Action: Create a feature first
WITH feature:
# Create the feature first
uv run htmlgraph feature create "User Authentication"
# ✅ feat-abc123 created and marked in-progress
# Now implement - all 5 files allowed
# Edit src/auth.py
# Edit src/middleware.py
# Edit src/models.py
# Write tests/test_auth.py
# Update docs/authentication.md
# Validation ALLOWS:
# ✅ All changes attributed to feat-abc123
# ✅ Session shows feature context
# ✅ Work is trackable
Result: Multi-file feature work is tracked and attributed.
Situation: You notice a typo in a docstring.
# Edit a single file without creating a feature
# Edit src/utils.py (fix typo)
# Validation WARNS BUT ALLOWS:
# ⚠️ PreToolUse Validation: Warning - activity may not be tracked
# File: src/utils.py (1 file)
# Reason: Single-file change without active feature
# Option: Create feature if this is significant work
# You can choose:
# - Continue (typo is trivial, doesn't need feature)
# - Cancel and create feature (if it's a bigger fix)
Result: Small fixes don't require features, but validation tracks the decision.
RECOMMENDED: Use the Python SDK for AI agents (cleanest, fastest, most powerful)
The SDK supports ALL collections with a unified interface. Use it for maximum performance and type safety.
from htmlgraph import SDK
# Initialize (auto-discovers .htmlgraph)
sdk = SDK(agent="claude")
# ===== ALL COLLECTIONS SUPPORTED =====
# Features (with builder support)
feature = sdk.features.create("User Authentication") \
.set_priority("high") \
.add_steps([
"Create login endpoint",
"Add JWT middleware",
"Write tests"
]) \
.save()
# Bugs
with sdk.bugs.edit("bug-001") as bug:
bug.status = "in-progress"
bug.priority = "critical"
# Chores, Spikes, Epics - all work the same way
chore = sdk.chores.where(status="todo")[0]
spike_results = sdk.spikes.all()
epic_steps = sdk.epics.get("epic-001").steps
# ===== EFFICIENT BATCH OPERATIONS =====
# Mark multiple items done (vectorized!)
sdk.bugs.mark_done(["bug-001", "bug-002", "bug-003"])
# Assign multiple items to agent
sdk.features.assign(["feat-001", "feat-002"], agent="claude")
# Custom batch updates (any attributes)
sdk.chores.batch_update(
["chore-001", "chore-002"],
{"status": "done", "agent_assigned": "claude"}
)
# ===== CROSS-COLLECTION QUERIES =====
# Find all in-progress work
in_progress = []
for coll_name in ['features', 'bugs', 'chores', 'spikes', 'epics']:
coll = getattr(sdk, coll_name)
in_progress.extend(coll.where(status='in-progress'))
# Find low-lift tasks
for item in in_progress:
if hasattr(item, 'steps'):
for step in item.steps:
if not step.completed and 'document' in step.description.lower():
print(f"š {item.id}: {step.description}")
SDK Performance (vs CLI):
- No ~400ms CLI startup cost per command - SDK calls run in-process
IMPORTANT: Always use uv run when running htmlgraph commands to ensure the correct environment.
ā ļø CLI is slower than SDK (400ms startup per command). Use for quick one-off queries only.
# Check Current Status
uv run htmlgraph status
uv run htmlgraph feature list
# Start Working on a Feature
uv run htmlgraph feature start <feature-id>
# Set Primary Feature (when multiple are active)
uv run htmlgraph feature primary <feature-id>
# Complete a Feature
uv run htmlgraph feature complete <feature-id>
When to use CLI vs SDK:
- CLI: quick one-off queries and status checks
- SDK: everything else, especially batch or repeated operations
NEW: HtmlGraph now provides intelligent analytics to help you make smart decisions about what to work on next.
from htmlgraph import SDK
sdk = SDK(agent="claude")
# Get smart recommendations on what to work on
recs = sdk.recommend_next_work(agent_count=1)
if recs:
best = recs[0]
print(f"š” Work on: {best['title']}")
print(f" Score: {best['score']:.1f}")
print(f" Why: {', '.join(best['reasons'])}")
Identify tasks blocking the most downstream work:
bottlenecks = sdk.find_bottlenecks(top_n=5)
for bn in bottlenecks:
print(f"{bn['title']} blocks {bn['blocks_count']} tasks")
print(f"Impact score: {bn['impact_score']}")
Returns: List of dicts with id, title, status, priority, blocks_count, impact_score, blocked_tasks
Find tasks that can be worked on simultaneously:
parallel = sdk.get_parallel_work(max_agents=5)
print(f"Can work on {parallel['max_parallelism']} tasks at once")
print(f"Ready now: {parallel['ready_now']}")
Returns: Dict with max_parallelism, ready_now, total_ready, level_count, next_level
Get smart recommendations considering priority, dependencies, and impact:
recs = sdk.recommend_next_work(agent_count=3)
for rec in recs:
print(f"{rec['title']} (score: {rec['score']})")
print(f"Reasons: {rec['reasons']}")
print(f"Unlocks: {rec['unlocks_count']} tasks")
Returns: List of dicts with id, title, priority, score, reasons, estimated_hours, unlocks_count, unlocks
Check for dependency-related risks:
risks = sdk.assess_risks()
if risks['high_risk_count'] > 0:
print(f"Warning: {risks['high_risk_count']} high-risk tasks")
for task in risks['high_risk_tasks']:
print(f" {task['title']}: {task['risk_factors']}")
if risks['circular_dependencies']:
print("Circular dependencies detected!")
Returns: Dict with high_risk_count, high_risk_tasks, circular_dependencies, orphaned_count, recommendations
See what completing a task will unlock:
impact = sdk.analyze_impact("feature-001")
print(f"Unlocks {impact['completion_impact']:.1f}% of remaining work")
print(f"Affects {impact['total_impact']} downstream tasks")
Returns: Dict with node_id, direct_dependents, total_impact, completion_impact, unlocks_count, affected_tasks
At the start of each work session:
from htmlgraph import SDK
sdk = SDK(agent="claude")
# 1. Check for bottlenecks
bottlenecks = sdk.find_bottlenecks(top_n=3)
if bottlenecks:
print(f"ā ļø {len(bottlenecks)} bottlenecks found")
# 2. Get recommendations
recs = sdk.recommend_next_work(agent_count=1)
if recs:
best = recs[0]
print(f"\nš” RECOMMENDED: {best['title']}")
print(f" Score: {best['score']:.1f}")
print(f" Reasons: {', '.join(best['reasons'][:2])}")
# 3. Analyze impact
impact = sdk.analyze_impact(best['id'])
print(f" Impact: Unlocks {impact['unlocks_count']} tasks")
# 4. Check for parallel work (if coordinating)
parallel = sdk.get_parallel_work(max_agents=3)
if parallel['total_ready'] > 1:
print(f"\nā” {parallel['total_ready']} tasks available in parallel")
For advanced use cases, access the full analytics engine:
# Access Pydantic models with all fields
analytics = sdk.dep_analytics
bottlenecks = analytics.find_bottlenecks(top_n=5, min_impact=1.0)
parallel = analytics.find_parallelizable_work(status="todo")
recs = analytics.recommend_next_tasks(agent_count=3, lookahead=5)
risk = analytics.assess_dependency_risk(spof_threshold=2)
impact = analytics.impact_analysis("feature-001")
See also: docs/AGENT_STRATEGIC_PLANNING.md for the complete guide
NEW: HtmlGraph now automatically categorizes all work by type to differentiate exploratory work from implementation.
All events are automatically tagged with a work type based on the active feature:
- feature-implementation (active feature)
- spike-investigation (active spike)
- maintenance (active chore)
- documentation
Use Spike model for timeboxed investigation:
from htmlgraph import SDK, SpikeType
sdk = SDK(agent="claude")
# Create a spike with classification
spike = sdk.spikes.create("Investigate OAuth providers") \
.set_spike_type(SpikeType.TECHNICAL) \
.set_timebox_hours(4) \
.add_steps([
"Research OAuth 2.0 flow",
"Compare Google vs GitHub providers",
"Document security considerations"
]) \
.save()
# Update findings after investigation
with sdk.spikes.edit(spike.id) as s:
s.findings = "Google OAuth has better docs but GitHub has simpler integration"
s.decision = "Use GitHub OAuth for MVP, migrate to Google later if needed"
s.status = "done"
Spike Types:
- TECHNICAL - Investigate technical implementation options
- ARCHITECTURAL - Research system design decisions
- RISK - Identify and assess project risks
- GENERAL - Uncategorized investigation
Use Chore model for maintenance tasks:
from htmlgraph import SDK, MaintenanceType
sdk = SDK(agent="claude")
# Create a chore with classification
chore = sdk.chores.create("Refactor authentication module") \
.set_maintenance_type(MaintenanceType.PREVENTIVE) \
.set_technical_debt_score(7) \
.add_steps([
"Extract auth logic to separate module",
"Add unit tests for auth flows",
"Update documentation"
]) \
.save()
Maintenance Types:
- CORRECTIVE - Fix defects and errors
- ADAPTIVE - Adapt to environment changes (OS, dependencies)
- PERFECTIVE - Improve performance, usability, maintainability
- PREVENTIVE - Prevent future problems (refactoring, tech debt)
Query work type distribution for any session:
from htmlgraph import SDK
sdk = SDK(agent="claude")
# Get current session
from htmlgraph.session_manager import SessionManager
sm = SessionManager()
session = sm.get_active_session(agent="claude")
# Calculate work breakdown
breakdown = session.calculate_work_breakdown()
# Returns: {"feature-implementation": 120, "spike-investigation": 45, "maintenance": 30}
# Get primary work type
primary = session.calculate_primary_work_type()
# Returns: "feature-implementation" (most common type)
# Query events by work type
spike_events = [e for e in session.get_events() if e.get("work_type") == "spike-investigation"]
Work type is automatically inferred from feature_id prefix:
# When you start a spike:
sdk.spikes.start("spike-123")
# → All events auto-tagged with work_type="spike-investigation"
# When you start a feature:
sdk.features.start("feat-456")
# → All events auto-tagged with work_type="feature-implementation"
# When you start a chore:
sdk.chores.start("chore-789")
# → All events auto-tagged with work_type="maintenance"
No manual tagging required! The system automatically categorizes your work based on what you're working on.
Work type classification enables you to:
- Distinguish exploratory/research sessions from implementation sessions
- Audit how a session's time was split across work types
Example Session Analysis:
# After a long session, analyze what you did:
session = sm.get_active_session(agent="claude")
breakdown = session.calculate_work_breakdown()
print(f"Primary work type: {session.calculate_primary_work_type()}")
print(f"Work breakdown: {breakdown}")
# Output:
# Primary work type: spike-investigation
# Work breakdown: {
# "spike-investigation": 65,
# "feature-implementation": 30,
# "documentation": 10
# }
# → This was primarily an exploratory/research session
CRITICAL: Use this framework to decide when to create a feature vs implementing directly.
Create a FEATURE if ANY apply:
- Takes more than 30 minutes
- Touches 3+ files
- Needs new tests
- Has multi-component impact
- Is hard to revert
Implement DIRECTLY if ALL apply:
- Single file, under 30 minutes
- No new tests needed
- Easy to revert
User request received
├─ Bug in existing feature? → See Bug Fix Workflow in WORKFLOW.md
├─ >30 minutes? → CREATE FEATURE
├─ 3+ files? → CREATE FEATURE
├─ New tests needed? → CREATE FEATURE
├─ Multi-component impact? → CREATE FEATURE
├─ Hard to revert? → CREATE FEATURE
└─ Otherwise → IMPLEMENT DIRECTLY
✅ CREATE FEATURE: e.g., "Build a user authentication system" (multi-file, tests, docs)
❌ IMPLEMENT DIRECTLY: e.g., fix a typo in a docstring (single file, trivial)
When in doubt, CREATE A FEATURE. Over-tracking is better than losing attribution.
See docs/WORKFLOW.md for the complete decision framework with detailed criteria, thresholds, and edge cases.
MANDATORY: Follow this checklist for EVERY session. No exceptions.
- uv run htmlgraph session start-info - Get comprehensive session context (optimized, 1 call)
- uv run htmlgraph feature start <id>
IMPORTANT: After finishing each step, mark it complete using the SDK:
from htmlgraph import SDK
sdk = SDK(agent="claude")
# Mark step 0 (first step) as complete
with sdk.features.edit("feature-id") as f:
f.steps[0].completed = True
# Mark step 1 (second step) as complete
with sdk.features.edit("feature-id") as f:
f.steps[1].completed = True
# Or mark multiple steps at once
with sdk.features.edit("feature-id") as f:
f.steps[0].completed = True
f.steps[1].completed = True
f.steps[2].completed = True
Step numbering is 0-based (first step = 0, second step = 1, etc.)
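When a feature has many steps, a small loop avoids repeating the edit; a minimal sketch using the same edit context shown above (the feature ID is a placeholder):
# Sketch: mark all remaining steps complete in one edit (0-based indexing)
from htmlgraph import SDK

sdk = SDK(agent="claude")
with sdk.features.edit("feature-id") as f:
    for step in f.steps:
        if not step.completed:
            step.completed = True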
When to mark complete:
- Immediately after finishing each step - not in a batch at the end of the session
Example workflow:
1. uv run htmlgraph feature start feature-123
2. Finish step 1 → with sdk.features.edit("feature-123") as f: f.steps[0].completed = True
3. Finish step 2 → with sdk.features.edit("feature-123") as f: f.steps[1].completed = True
4. uv run htmlgraph feature complete feature-123
Before completing any feature:
- uv run pytest - All tests MUST pass
- uv run htmlgraph feature complete <id>
REMINDER: Completing a feature without doing all of the above means incomplete work. Don't skip steps.
When you see a drift warning like:
Drift detected (0.74): Activity may not align with feature-self-tracking
Consider:
- Use uv run htmlgraph feature primary <id> to change attribution
At the start of each session:
- Run uv run htmlgraph status to see which features are in progress
At the end of each session:
- Review the session log in .htmlgraph/sessions/ and complete or hand off in-progress features
Include feature context:
feat(feature-id): Description of the change
- Details about what was done
- Why this approach was chosen
🤖 Generated with Claude Code
When using Bash tool, always provide a description:
# Good - descriptive
Bash(command="npm install", description="Install dependencies for auth feature")
# Bad - no context
Bash(command="npm install")
When making architectural decisions:
uv run htmlgraph track "Decision" "Chose X over Y because Z"View progress visually:
uv run htmlgraph serve
# Open http://localhost:8080
The dashboard shows:
- Feature status and progress across the graph
- Session activity logs
Data layout:
- .htmlgraph/features/ - Feature HTML files (the graph nodes)
- .htmlgraph/sessions/ - Session HTML files with activity logs
- index.html - Dashboard (open in browser)
HtmlGraph hooks track:
- Tool usage, automatically attributed to the active feature or spike
All data is stored as HTML files - human-readable, git-friendly, browser-viewable.