# Score Decision Outcome

Review a past decision and score how the forecasts performed.

## Option A: Use the CLI (Recommended)

Run the interactive scoring command:

```bash
farness score $ARGUMENTS
```

This will:

1. Show unscored decisions (or find by ID if provided)
2. Display original forecasts for the chosen option
3. Prompt for actual outcomes per KPI
4. Calculate errors and CI coverage
5. Save results and show updated calibration

## Option B: Guided Scoring (if CLI unavailable)

### Step 1: Find the Decision

List unscored decisions:

```bash
farness list --unscored
```
Or show a specific decision:

```bash
farness show <id>
```

### Step 2: Review Original Forecasts

Display the original forecasts for the chosen option, including any confidence intervals, so the user can compare them against the actual outcomes.
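
If you need to pull the forecasts programmatically, a minimal sketch is below. The `forecasts` attribute name and shape are assumptions for illustration only; inspect the stored decision object to confirm where the forecasts actually live.

```python
from farness import DecisionStore

store = DecisionStore()
decision = store.get("<decision_id>")

# "forecasts" is an assumed attribute name, not a confirmed part of the
# farness API; inspect the object (e.g. vars(decision)) to find the real one.
for kpi, forecast in getattr(decision, "forecasts", {}).items():
    print(f"{kpi}: {forecast}")
```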

### Step 3: Collect Actual Outcomes

For each KPI, ask the user: "What was the actual outcome for [KPI]?"
Get specific numbers.

### Step 4: Record the Outcomes

Save the actual outcomes and mark the decision as scored:

```python
from datetime import datetime

from farness import DecisionStore

store = DecisionStore()
decision = store.get("<decision_id>")

# Record the actual value observed for each forecast KPI.
decision.actual_outcomes = {
    "<kpi1>": <actual_value>,
    "<kpi2>": <actual_value>,
}
decision.scored_at = datetime.now()
decision.reflections = "<user reflections>"

store.update(decision)
```
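
Option A computes errors and CI coverage automatically. If you are scoring by hand, the sketch below shows the idea; it is plain Python, not part of the farness API, and the numbers are hypothetical placeholders.

```python
# Illustrative only: judging one KPI's forecast against its actual outcome.
forecast_point = 1200                    # hypothetical original point estimate
forecast_low, forecast_high = 900, 1600  # hypothetical original confidence interval
actual = 1450                            # outcome reported by the user

abs_error = abs(actual - forecast_point)
pct_error = abs_error / abs(actual)                # assumes a nonzero actual
covered = forecast_low <= actual <= forecast_high  # counts toward CI coverage

print(f"absolute error: {abs_error}")
print(f"percent error: {pct_error:.1%}")
print(f"actual inside CI: {covered}")
```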

### Step 5: Show Updated Calibration

Show how the user's forecasts are tracking overall:

```bash
farness calibration
```
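
A rough way to think about what calibration reports: across all scored decisions, how often did actuals fall inside the forecast intervals, compared with how often they should have? The sketch below is a hypothetical illustration of that idea, not the farness implementation.

```python
# Illustrative only: calibration as empirical CI coverage across scored KPIs.
# Each tuple is (low, high, actual) for one scored forecast; values are made up.
scored = [(900, 1600, 1450), (10, 20, 25), (0.02, 0.05, 0.03)]

hits = sum(low <= actual <= high for low, high, actual in scored)
print(f"empirical CI coverage: {hits}/{len(scored)} = {hits / len(scored):.0%}")
```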

After scoring, ask the user for their reflections on how the forecasts compared with the actual outcomes, and record those reflections in the decision.