From vthink-agent-toolkit
Generate BDD Gherkin feature files from requirements — even vague ones. Asks clarifying questions to surface edge cases and acceptance criteria, then produces Given/When/Then scenarios. Use when asked for BDD requirements, Gherkin scenarios, .feature files, or acceptance criteria.
`npx claudepluginhub vthinkdeveloper/vthink-agent-toolkit --plugin vthink-toolkit`

This skill uses the workspace's default tool permissions.
This skill turns requirements — even rough, vague, or incomplete ones — into comprehensive BDD Gherkin scenarios. Instead of analyzing code, it **collaborates with the user** through focused questions to surface the full picture before writing scenarios.
The key insight: most requirements start vague. "Users should be able to reset their password" sounds simple until you ask about email delivery failures, expired tokens, and locked accounts. This skill's job is to ask those questions early, then write scenarios that capture the answers.
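For example, the password-reset requirement above might expand into scenarios like these once those questions are answered (a sketch — the exact messages and the one-hour token lifetime are illustrative assumptions, not confirmed requirements):

```gherkin
Feature: Password reset

  Scenario: User resets password with a valid link
    Given I am a registered user with email "jane.smith@company.com"
    When I request a password reset
    And I open the reset link within 1 hour
    And I set a new password
    Then I should see the message "Your password has been updated"

  Scenario: Reset link has expired
    Given I requested a password reset more than 1 hour ago
    When I open the reset link
    Then I should see the message "This link has expired. Please request a new one."

  Scenario: Account is locked
    Given my account is locked after too many failed login attempts
    When I request a password reset
    Then I should see instructions for unlocking my account first
```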
The user provides a requirement. It could be anything from a one-liner to a detailed spec. Accept whatever they give without complaint — your job is to work with it.
Examples of inputs you should handle well:
- A one-liner: "Users should be able to export reports"
- A vague idea: "We need some kind of notification thing"
- A detailed user story with acceptance criteria already attached
This is the most important step. Before writing any Gherkin, have a short conversation to fill in the gaps. Your goal is to ask just enough questions to write meaningful scenarios — not to interrogate the user into exhaustion.
How to ask questions:
Use the AskUserQuestion tool to present focused, structured questions. Group related questions together. Offer concrete options where possible so the user can pick rather than think from scratch.
What to probe for:
| Gap to fill | Example questions |
|---|---|
| Who — the actor(s) | "Who performs this action? Admin only, any logged-in user, or unauthenticated users too?" |
| What — the core action | "When you say 'export', what formats? PDF, Excel, CSV, all of them?" |
| When — triggers and preconditions | "Can they do this anytime, or only when certain conditions are met?" |
| Where — the UI context | "Is this on a dedicated page, a modal, or part of an existing screen?" |
| What if — failure and edge cases | "What should happen if the export fails? Retry? Error message? Silent failure?" |
| Who else — impact on others | "Should other users be able to see/access the exported file?" |
| Boundaries — limits and constraints | "Is there a max number of rows? A timeout? Size limit?" |
Rules for questioning:
When to skip questions entirely:
If the requirement is detailed enough (e.g., a well-written user story with acceptance criteria), state your interpretation briefly and go straight to scenarios. You can always refine after the user reviews.
Before writing scenarios, briefly list the assumptions you're working with. This gives the user a chance to correct course before you invest in writing.
Format:
Based on our discussion, here's what I'm working with:
- Actor: Admin users only
- The export runs in the background (not blocking)
- Supported formats: PDF and Excel
- Error handling: toast notification with retry option
- No file size limit for now
I'll write scenarios covering: happy path, format selection, error handling, and permissions. Let me know if I'm missing anything, otherwise I'll proceed.
Keep it short. This is a checkpoint, not a document.
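As an illustration, the export assumptions above could yield scenarios like these (a sketch — the toast message wording is a placeholder the user would confirm):

```gherkin
Feature: Background report export

  Scenario: Admin exports a report as PDF
    Given I am logged in as an admin
    And I am viewing the "Q4 Sales Report"
    When I choose "Export" and select "PDF"
    Then the export should run in the background
    And I should be able to keep working while it runs

  Scenario: Export fails
    Given I am logged in as an admin
    And I am viewing the "Q4 Sales Report"
    When I choose "Export" and the export fails
    Then I should see the toast notification "Export failed. Please try again."
    And the notification should offer a "Retry" option
```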
Feature organization — break into separate Features when:
Scenario coverage checklist: For each feature, consider these categories (not all will apply):
Don't force coverage. Only write scenarios for categories that are relevant. A simple toggle setting doesn't need 10 categories of scenarios.
After presenting the scenarios, explicitly invite feedback.
Be ready to add, remove, or rewrite scenarios based on feedback. This is collaborative — the first pass is a draft, not a final product.
Default: Show scenarios directly in the conversation, organized by Feature.
If the user asks for files: Create .feature files and save them to the project's docs or test directory.
Always include a brief coverage summary at the end showing what categories of behavior are covered.
Write for the user, not the developer. Say "I click the Save button" not "I send a PUT request to /api/v1/settings". Say "I should see an error message" not "the response status should be 500".
Be specific about outcomes. Don't say "an error is displayed" — say "I should see the message 'Export failed. Please try again.'" If you don't know the exact message, use a realistic placeholder and mark it with a comment.
Use Background for shared setup. If every scenario starts with "Given I am logged in as an admin", put it in Background. But don't overload it.
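A minimal sketch of the Background pattern (the feature and steps are illustrative):

```gherkin
Feature: Report export

  Background:
    Given I am logged in as an admin
    And I am viewing the "Q4 Sales Report"

  Scenario: Export as PDF
    When I choose "Export" and select "PDF"
    Then the download should start

  Scenario: Export as Excel
    When I choose "Export" and select "Excel"
    Then the download should start
```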
Scenario Outlines for data-driven variations. When the same flow works with different inputs, use Scenario Outline with Examples tables rather than duplicating scenarios.
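For instance, the two format scenarios above could collapse into one Scenario Outline (an illustrative sketch):

```gherkin
Scenario Outline: Admin exports a report in a supported format
  Given I am logged in as an admin
  When I export the "Q4 Sales Report" as "<format>"
  Then a "<format>" file should be downloaded

  Examples:
    | format |
    | PDF    |
    | Excel  |
```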
One behavior per scenario. Each scenario tests exactly one thing. If you're writing "And then also..." you probably need two scenarios.
Name scenarios descriptively. The name should tell you what's being tested without reading the steps. "Admin exports report as PDF" beats "Test export".
Use realistic data. Don't use "test123" or "foo" — use "Q4 Sales Report" and "jane.smith@company.com". Realistic data makes scenarios readable and catches assumptions.
Tag strategically. Use tags like @smoke, @admin, @regression, @wip to help organize and filter scenarios.
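Tags sit on the line above a Feature or Scenario, and a scenario inherits its Feature's tags. A short sketch:

```gherkin
@admin
Feature: Report export

  @smoke
  Scenario: Admin exports a report as PDF
    Given I am logged in as an admin
    When I export the "Q4 Sales Report" as "PDF"
    Then a PDF file should be downloaded
```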