This skill should be used when the user says "spike", "arn spike", "validate risks", "technical validation", "proof of concept", "validate architecture", "risk spike", "test this risk", "will this work", "technical spike", "validate the stack", or wants to validate critical technical risks from the architecture vision by creating minimal proof-of-concept code and testing whether the chosen technologies work as expected.
Install: `npx claudepluginhub appsvortex/arness --plugin arn-spark`

This skill uses the workspace's default tool permissions.
Validate critical technical risks from the architecture vision through minimal proof-of-concept implementations, aided by the arn-spark-spike-runner agent for POC creation and execution. This is a conversational skill that runs in normal conversation (NOT plan mode). The primary artifacts are spike POC code in isolated directories and a spike results document.
This skill addresses the question: "Will the chosen technologies actually work for our use case?" It does not implement features or build the application -- it runs targeted experiments to validate or invalidate specific technical assumptions before committing to them.
An architecture vision document should exist. Check in order:
1. Check CLAUDE.md for a `## Arness` section. If found, check the configured Vision directory for `architecture-vision.md`.
2. If no `## Arness` section is found, check `.arness/vision/architecture-vision.md` at the project root.

If an architecture vision is found: read it and extract the "Known Risks & Mitigations" section.
If no architecture vision is found: Inform the user:
"No architecture vision document found. I can still run spikes if you describe the specific technical risks to validate. For a comprehensive risk assessment, run /arn-spark-arch-vision first."
If the user provides risks directly, proceed with those.
The project should ideally be scaffolded (via /arn-spark-scaffold) so the spike runner can leverage the existing project setup. If not scaffolded, spikes will need to set up their own dependencies, which the spike runner handles.
Determine the spike workspace:
1. Read CLAUDE.md and check for a `## Arness` section.
2. If no `## Arness` section exists or Arness Spark fields are missing, inform the user: "Arness Spark is not configured for this project yet. Run /arn-brainstorming to get started — it will set everything up automatically." Do not proceed without it.

Load the architecture vision and extract all risks from the "Known Risks & Mitigations" section. Parse each risk to identify:
If additional validation points were noted in the tech evaluator's recommendations during the architecture vision phase, include those as well.
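Risk extraction might look like the following sketch. The section heading comes from the document above; the bullet format (`- **Severity**: Title -- detail`) is an assumption about how the vision document lists risks.

```python
import re


def extract_risks(vision_md: str) -> list[dict]:
    """Pull risk entries out of the 'Known Risks & Mitigations' section.

    Assumes risks are markdown bullets like:
      - **Critical**: WebRTC in WKWebView -- data channels may be unavailable
    """
    # Grab everything between the section heading and the next same-level heading.
    match = re.search(
        r"## Known Risks & Mitigations\n(.*?)(?=\n## |\Z)", vision_md, re.S
    )
    if not match:
        return []
    risks = []
    for line in match.group(1).splitlines():
        m = re.match(r"-\s*\*\*(\w+)\*\*:\s*(.+?)\s*--\s*(.+)", line.strip())
        if m:
            risks.append(
                {"severity": m.group(1), "title": m.group(2), "detail": m.group(3)}
            )
    return risks
```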
Present the risk list to the user:
"I found [N] risks in your architecture vision. Here they are by priority:
Critical:
1. [Risk title] -- [brief description]
2. [Risk title] -- [brief description]

Important:
3. [Risk title] -- [brief description]

Monitor:
4. [Risk title] -- [brief description]
Ask (using AskUserQuestion) with multiSelect: true:
"Which risks would you like to spike? (select multiple)"
Options: [List each risk as a numbered option with its title and brief description, e.g.:]
If the user adds custom risks not from the architecture vision, include those.
For each selected risk, propose a minimal POC approach and clear validation criteria:
"For [Risk Title]:
POC approach: [What we will build to test this. 1-2 sentences describing the minimal experiment.]
Validation criteria:
Spike directory: <configured Spikes directory>/spike-[NNN]-[descriptive-name]/
Does this approach look right, or would you test it differently?"
Wait for user approval or adjustments before running each spike. The user may want to modify the approach, change criteria, or skip the risk.
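The `spike-[NNN]-[descriptive-name]` directory convention could be generated with a small helper. The slugging rules (lowercase, hyphen-separated, three-digit index) are an assumption inferred from the example paths in this document.

```python
import re


def spike_dir_name(index: int, title: str) -> str:
    """Build a directory name like 'spike-001-webrtc-wkwebview'."""
    # Lowercase the title and collapse anything non-alphanumeric into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"spike-{index:03d}-{slug}"
```

For example, `spike_dir_name(1, "WebRTC in WKWebView")` yields `"spike-001-webrtc-in-wkwebview"`.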
IMPORTANT: Run spikes sequentially, one at a time. Do NOT launch multiple spike-runner agents in parallel or in the background. The spike-runner agent needs Bash and Write tool access, which requires user permission approval. Parallel or background agents cannot surface permission prompts to the user, causing all tool calls to be denied. Wait for each spike to fully complete before starting the next one.
For each approved spike, in order:
Invoke the arn-spark-spike-runner agent (foreground, not background) with:
- The risk details and validation criteria
- The spike workspace path (e.g., `<Spikes directory>/spike-001-webrtc-wkwebview/`)

Wait for the agent to complete fully before proceeding.
Present the spike runner's results to the user:
If a spike was deferred due to an environment limitation: "The spike code is in [spike directory] -- you can run this spike manually on the required platform. This should be validated when [condition]."

If a spike failed:
Ask (using AskUserQuestion):
"Risk [Risk Title] failed validation. How would you like to proceed?"
Options:
Proceed to the next spike only after presenting results and resolving any failures.
After all spikes have been run:
Read the spike report template:
`${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-spike/references/spike-report-template.md`
Populate the template with all spike results
Determine the output directory:
1. Read CLAUDE.md and check for a `## Arness` section.
2. If no `## Arness` section exists or Arness Spark fields are missing, inform the user: "Arness Spark is not configured for this project yet. Run /arn-brainstorming to get started — it will set everything up automatically." Do not proceed without it.

Write the document as `spike-results.md` in the configured output directory.
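Populating the template could be simple placeholder substitution. The `[PLACEHOLDER]` token format is an assumption about how spike-report-template.md marks its fill-in fields.

```python
def populate_template(template: str, values: dict[str, str]) -> str:
    """Replace [PLACEHOLDER]-style tokens with collected spike results."""
    out = template
    for key, value in values.items():
        out = out.replace(f"[{key}]", value)
    return out
```

Any token without a corresponding value is left in place, which makes unfilled fields easy to spot in the written document.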
Present a summary:
"Spike results saved to [path]/spike-results.md.
Summary:
| Risk | Result |
|---|---|
| [Risk 1] | Validated |
| [Risk 2] | Failed -- chose Alternative A |
| [Risk 3] | Deferred -- needs macOS |
[N] of [M] risks validated. [Note any architecture changes decided.]"
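The summary table above could be rendered from the collected results like this (the result wording is taken from the examples; the `(risk, result)` pair shape is an assumption):

```python
def summary_table(results: list[tuple[str, str]]) -> str:
    """Render spike results as a markdown table matching the format above."""
    lines = ["| Risk | Result |", "|---|---|"]
    for risk, result in results:
        lines.append(f"| {risk} | {result} |")
    return "\n".join(lines)
```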
If any spikes failed and the user chose an alternative approach:
"Based on the spike results, the following sections of architecture-vision.md should be updated:
Ask (using AskUserQuestion):
Should I update the architecture vision now?
- Yes — Update the affected sections
- No — Skip for now, I will update manually later"
If the user chooses Yes, make the targeted updates to the architecture vision document. Only change the specific sections affected by failed spikes -- do not rewrite the entire document.
"All spikes complete. Recommended next steps:
- /arn-spark-style-explore to define the look and feel
- /arn-spark-static-prototype to validate visual fidelity"

Adapt based on results. If critical risks failed and alternatives were chosen, emphasize the architecture changes. If risks were deferred, note when they should be revisited.
| Situation | Action |
|---|---|
| Run an approved spike (Step 3) | Invoke arn-spark-spike-runner sequentially (foreground, not background) with risk details, criteria, and workspace path. Wait for completion before starting the next spike. |
| Spike runner agent denied permissions | The agent likely failed because it ran in the background or in parallel. Re-run it in the foreground sequentially. If permissions are still denied, run the spike directly in the main conversation instead of delegating to the agent. |
| User asks about technology alternatives | Answer from the architecture vision context or invoke arn-spark-tech-evaluator if deep comparison needed |
| User wants to add a custom risk | Record it and proceed to Step 2 for that risk |
| User asks about features or screens | Defer: "Feature work comes after risk validation. Next step after spikes is /arn-spark-style-explore or /arn-spark-static-prototype." |
| Spike runner reports environment limitation | Record as deferred, continue to next spike |