From kipi-ops
Runs expert persona reviews on generated output (configs, code, reports) against customer deployment profiles to catch operational mismatches before delivery.
Install:

```
npx claudepluginhub assafkip/kipi-system
```

This skill uses the workspace's default tool permissions.
Run expert persona reviews against any generated output, anchored to the customer's actual deployment reality. Catches technically correct but operationally wrong output before it reaches the customer.
Invoke before any generated output (configs, scripts, reports, policies) ships to a customer.
This is not a code review. It does not check whether the code compiles or the tests pass. Those are CI gates. This checks whether the OUTPUT fits the CUSTOMER -- whether it assumes infrastructure they don't have, defaults that would break their environment, or expertise they don't possess.
The skill takes two inputs:

- **Files to review.** Any format: configs, scripts, reports, policies, CSVs, markdown, JSON.
- **Customer profile.** A structured profile with deployment reality fields. Reviews without this profile are prohibited. The SJI Fire incident proved that expert reviews without customer context produce technically correct but operationally wrong results: the review approved a policy that would have locked out firefighters, assumed infrastructure that didn't exist, and claimed coverage on devices with no agent.
```yaml
customer_profile:
  # Identity
  name: ""
  sector: ""                 # fire_department, healthcare, financial, saas, etc.
  employee_count: 0
  security_team_size: 0      # 0 = solo operator

  # Deployment Reality
  managed_device_count: 0    # devices with your agent/tool installed
  unmanaged_device_count: 0  # BYOD, personal devices, contractor machines
  byod_policy: ""            # allowed, restricted, blocked
  licensing_tier: ""         # what tier of your product/platform they have
  infrastructure_present:    # what they ACTUALLY have running
    - ""
  infrastructure_absent:     # what they DON'T have (common assumptions that are wrong)
    - ""
  integrations_configured:   # what's actually connected and sending data
    - ""
  integrations_not_configured:  # known gaps in their setup
    - ""

  # Context
  compliance_frameworks: []  # derived from sector
  prior_incidents: []        # relevant history
  contact_background: ""     # their expertise level
  contact_constraints: ""    # time, budget, team size limitations
  primary_risk_surface: ""   # where attacks most likely enter their environment
```
Before any reviews, validate the customer profile is complete. If any deployment reality field is empty, STOP and ask. Do not review with assumptions.
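Here is what that gate could look like, as a minimal Python sketch. The field names follow the schema above; the function name and the sample profile are hypothetical, and this is not the skill's actual implementation:

```python
# Minimal sketch of the profile gate. Assumes the customer_profile YAML has
# already been parsed into a dict (e.g. with PyYAML's yaml.safe_load).

DEPLOYMENT_REALITY_FIELDS = [
    "managed_device_count",
    "unmanaged_device_count",
    "byod_policy",
    "licensing_tier",
    "infrastructure_present",
    "infrastructure_absent",
    "integrations_configured",
    "integrations_not_configured",
]

def missing_deployment_fields(profile: dict) -> list[str]:
    """Return deployment reality fields that are absent or still empty."""
    missing = []
    for field in DEPLOYMENT_REALITY_FIELDS:
        value = profile.get(field)
        # A missing key, empty string, empty list, or a list holding only
        # an empty placeholder all count as "not filled in".
        if value is None or value == "" or value == [] or value == [""]:
            missing.append(field)
    return missing

# Hypothetical, partially filled profile: licensing_tier and both integration
# fields are still empty, so the gate must stop and ask.
profile = {
    "name": "Example Fire District",
    "sector": "fire_department",
    "managed_device_count": 40,
    "unmanaged_device_count": 25,
    "byod_policy": "allowed",
    "licensing_tier": "",
    "infrastructure_present": ["EDR on station workstations"],
    "infrastructure_absent": ["SIEM"],
}

gaps = missing_deployment_fields(profile)
if gaps:
    raise SystemExit(f"Incomplete customer profile -- STOP and ask for: {gaps}")
```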
Spawn one agent per persona. Each agent receives the files under review, the validated customer profile, and its persona definition. Collect all findings, apply fixes, then re-validate the affected files.
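The fan-out/fan-in shape of that step, sketched with a thread pool. `run_persona_review` is a hypothetical stand-in for spawning a review agent and the persona list is illustrative, so treat this as a shape sketch rather than the skill's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

PERSONAS = ["Domain Expert", "The Customer"]  # illustrative set

def run_persona_review(persona: str, files: list[str], profile: dict) -> dict:
    # Placeholder: the real skill spawns an agent here, handing it the files
    # under review, the validated customer profile, and its persona definition.
    return {"persona": persona, "verdict": "PASS", "issues": [], "fixes": []}

def review_all(files: list[str], profile: dict) -> list[dict]:
    # Fan out: one agent per persona, all running in parallel.
    with ThreadPoolExecutor(max_workers=len(PERSONAS)) as pool:
        futures = [pool.submit(run_persona_review, p, files, profile)
                   for p in PERSONAS]
        # Fan in: collect every finding before applying any fixes.
        return [f.result() for f in futures]

findings = review_all(["policy.yaml"], {"name": "Example Fire District"})
```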
Every persona also runs a shared set of checks. Some of these are tripwires that trigger automatically during review: when one fires, the persona STOPS and flags it.
Personas are domain-specific. Define them based on the output being reviewed. Common set:
- **Domain Expert Personas** (technical review with customer context)
- **The Customer Persona** (non-negotiable, always included)
Each review uses this template:

```markdown
## [Persona Name] Review

Pass/Fail: [PASS | FAIL | PASS WITH NOTES]
Customer-fit: [Does this help THIS SPECIFIC customer?]
Issues: [numbered list with severity]
Fixes: [numbered list with specific file/line changes]
Confidence: [Would deploy as-is | Needs minor edits | Needs rework]
```
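Downstream, the scorecard can gate on those verdicts. A sketch under assumed field names (they mirror the template above; the sample findings are invented for illustration):

```python
# Hypothetical scorecard gate over the collected reviews.
reviews = [
    {"persona": "Domain Expert", "verdict": "PASS", "fixes": []},
    {"persona": "The Customer", "verdict": "FAIL",
     "fixes": ["1. Drop the step that assumes infrastructure the customer lacks"]},
]

failed = [r for r in reviews if r["verdict"] == "FAIL"]
if failed:
    # Any FAIL blocks delivery: apply the listed fixes, re-validate the
    # affected files, and re-run the failing personas.
    for review in failed:
        print(f"{review['persona']} FAILED:")
        for fix in review["fixes"]:
            print(f"  {fix}")
```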
Outputs:

```
[output-dir]/
  persona-reviews.md      # All reviews with scorecard
  customer_profile.yaml   # Profile used for reviews
  fixes-applied.md        # List of fixes applied after reviews
```
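A closing sketch of persisting those artifacts: the three filenames come from the listing above, while the directory name and the content variables are illustrative stand-ins for material assembled during the review:

```python
from pathlib import Path

# Hypothetical persistence step; only the filenames are prescribed above.
output_dir = Path("review-output")
output_dir.mkdir(parents=True, exist_ok=True)

persona_reviews_md = "## Domain Expert Review\n\nPass/Fail: PASS\n"
customer_profile_yaml = 'customer_profile:\n  name: "Example Fire District"\n'
fixes_applied_md = "1. Dropped the step that assumed absent infrastructure.\n"

(output_dir / "persona-reviews.md").write_text(persona_reviews_md)
(output_dir / "customer_profile.yaml").write_text(customer_profile_yaml)
(output_dir / "fixes-applied.md").write_text(fixes_applied_md)
```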