Analyze a sales rep's call recordings alongside CRM data to produce a comprehensive, evidence-backed performance report with grades, real transcript quotes, and actionable coaching recommendations.
npx claudepluginhub naveedharri/benai-skills --plugin sales

This skill uses the workspace's default tool permissions.
A professional .docx report covering:
The report reads like something a VP of Sales or sales coach would write after shadowing the rep for a month — grounded in evidence, not generic advice.
Before pulling any data, use AskUserQuestion to collect the information needed to do this right. Missing any of these leads to a shallow analysis. Combine into 2-3 AskUserQuestion calls (max 4 questions per call).
Question 1 — The Business: "Tell me a bit about your business: What do you sell, who's your ideal customer (ICP), and what does a sales-qualified meeting look like for you?"
Question 2 — The Rep & Targets: "Who is the sales rep being analyzed? What are their targets or quotas (e.g., deals per month, revenue targets, meeting-to-close ratio)? If you don't have formal targets, that's fine — just let me know."
Question 3 — Call Selection: "Do you want me to analyze ALL of this rep's sales calls, or a specific set? If specific, which ones?"
Question 4 — Deal Outcomes: "How should I determine which deals were won vs. lost? Options: (a) You tell me manually which prospects closed, (b) I cross-check against your CRM automatically, or (c) Both — you tell me what you know and I verify against CRM."
Question 5 — Scoring Framework: "Do you have your own scoring framework for evaluating sales calls, or would you like to use an established one? Common options include BANT (Budget, Authority, Need, Timeline), MEDDIC (Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, Champion), or a custom framework I can build based on your sales process."
Question 6 — CRM Access: "Which CRM are you using (Attio, HubSpot, Salesforce, Pipedrive, etc.)? I'll cross-reference deal stages, contact records, and activity history with the call transcripts."
After collecting all answers, summarize your understanding back to the user in a few sentences and get confirmation before pulling any data.
Check which transcription MCP tools are available in your environment:
- Fireflies: fireflies_search, fireflies_get_transcripts, fireflies_get_transcript, fireflies_get_summary
- Attio: search-call-recordings-by-metadata, get-call-recording

If no transcription tool is connected, stop and tell the user: "I don't see a connection to a meeting transcription tool. Could you connect Fireflies, Fathom, or similar through your integrations?"
Based on the user's scope preference from Phase 0:

If "all calls": AskUserQuestion: "Here are [N] calls I found. Which ones should I include in the analysis?" Let them remove irrelevant calls (internal meetings, non-sales calls, training sessions, etc.)

If "specific calls":

If "date range":
This step is critical to the quality of the entire analysis. You need the complete word-for-word transcript of every call — not summaries, not overviews, not bullet points. The difference matters enormously: summaries strip out the exact language the rep and prospect used, the hesitations, the specific objections, the pricing discussions, the moments where rapport builds or breaks. A summary might say "discussed pricing" — but the transcript reveals whether the rep anchored high, folded at the first pushback, or confidently tied price to value. Without full transcripts, the grades in this report would be based on secondhand accounts rather than direct observation.
When using Fireflies:

- Use fireflies_get_summary only for metadata (date, participants, duration) and a quick overview of what the call covered
- Use fireflies_get_transcript to get the full conversation with speaker attribution — this is the primary data source for all analysis
- If fireflies_get_transcript fails or returns empty for any call, explicitly note it in your analysis and in the methodology section. Do not silently fall back to summaries.

When using Attio call recordings:

- Use get-call-recording, which returns the full transcript with speaker attribution

Verification step: After pulling transcripts, confirm that what you received contains actual dialogue (speaker-attributed sentences), not a condensed summary. If a "transcript" looks like bullet points or a paragraph summary, it's not the real transcript — dig deeper or flag it.
Be aware of rate limits. If some transcripts fail, note which ones and move forward with what you have. Report the gap in the methodology section of the final report.
Batch processing strategy: If there are more than 10 calls, use parallel Task subagents to pull transcripts in batches of ~10-15 each. This dramatically speeds up the data collection phase.
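The batching itself is a simple chunking step. A sketch (the batch size of 12 and the call-ID shape are illustrative assumptions; dispatching each batch to a Task subagent is environment-specific):

```javascript
// Sketch: split call IDs into batches of ~10-15 for parallel transcript pulls.
// chunk() is a generic helper; each resulting batch would go to its own subagent.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// e.g. 34 calls split into batches of 12 -> three batches (12, 12, 10)
const callIds = Array.from({ length: 34 }, (_, i) => `call-${i + 1}`);
const batches = chunk(callIds, 12);
```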
If the user opted for CRM cross-referencing (Phase 0, Question 4):
Before pulling any prospect data, you need to understand how the CRM is organized. Different teams structure their CRM very differently — some have a single pipeline list, others have separate lists for different stages, and the column names vary widely (one team's "Deal Value" is another's "Contract Amount" or "Budget"). Skipping this discovery step leads to missed data and incorrect deal outcomes.
Step 4a — Discover All Lists & Pipelines:
Use list-lists (or the CRM equivalent) to retrieve every list in the workspace. Present these to yourself and identify which ones are relevant to the sales analysis. Look for lists with names containing "pipeline," "deals," "opportunities," "prospects," "sales," or similar. There may also be lists for "lost deals," "churned," "onboarding," or "closed-won" that contain valuable outcome data.
Step 4b — Map All Columns & Attributes:
For each relevant list, use list-list-attribute-definitions to pull the complete set of columns/attributes. Paginate through ALL results (many CRMs have 20-40+ attributes per list, and the default page size is often 10). Keep pulling with increasing offset until you've seen every attribute.
Also pull the object-level attributes using list-attribute-definitions for the parent object (e.g., "companies" or "people"). These often contain critical fields like company size, industry, email addresses, and domains that don't appear on the list-level attributes.
Build a reference map of every available field — you'll use this throughout the analysis. Pay special attention to: stage/status fields (what are the possible values?), monetary fields (deal value, budget, forecast), date fields (created, stage change dates), and any custom fields the team uses for tracking deal progress.
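The pagination loop from Step 4b can be sketched as follows. Note that fetchAttributePage is a hypothetical stand-in for whatever wrapper your environment puts around the CRM's list-list-attribute-definitions tool, and the page size of 10 mirrors the common default mentioned above:

```javascript
// Sketch: paginate through ALL attribute definitions rather than stopping at
// page one. fetchAttributePage is a hypothetical wrapper around the CRM's
// list-list-attribute-definitions call; swap in the real tool invocation.
async function fetchAllAttributes(fetchAttributePage, pageSize = 10) {
  const all = [];
  let offset = 0;
  while (true) {
    const page = await fetchAttributePage({ offset, limit: pageSize });
    all.push(...page);
    if (page.length < pageSize) break; // a short page means we've seen everything
    offset += pageSize;
  }
  return all;
}
```

With a CRM exposing, say, 27 attributes and a default page size of 10, this pulls three pages (10, 10, 7) instead of silently stopping at the first 10.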
Step 4c — Pull Prospect Records & List Entries: For each prospect identified from the call list:

- Use search-records by name, email, or company domain
- Use list-records-in-list to find the prospect's entry in each relevant pipeline/list. Don't rely solely on filters — if a filter returns no results, try pulling a broader set and matching manually, since CRM data can be inconsistent (name variations, missing fields, etc.)

Step 4d — Build the CRM Context Map: Compile a merged dataset per prospect containing: current pipeline stage, deal value (if any), all relevant dates (created, stage changes), budget/forecast fields, owner/assignee, and any notes or activity counts. This becomes the backbone for cross-referencing against transcript evidence.
If CRM access isn't available, rely on user-provided win/loss data.
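The merged per-prospect dataset from Step 4d can be sketched as a plain object. The field names here are illustrative placeholders only — the real fields come out of the Step 4b discovery, and may live under custom names:

```javascript
// Sketch: merge a prospect's object-level record and its pipeline list entry
// into one CRM context map entry. All field names are illustrative.
function buildCrmContext(record, listEntry) {
  return {
    prospect: record.name,
    stage: listEntry?.stage ?? null,         // current pipeline stage, if any
    dealValue: listEntry?.dealValue ?? null, // may live in a custom field
    createdAt: record.createdAt ?? null,
    owner: listEntry?.owner ?? null,
    noteCount: record.noteCount ?? 0,
  };
}

const ctx = buildCrmContext(
  { name: "Acme Co", createdAt: "2025-01-03", noteCount: 4 },
  { stage: "Negotiation", dealValue: 12000, owner: "sam" }
);
```

Missing list entries (a prospect on a call but absent from the pipeline) collapse to nulls rather than throwing, which itself is a finding worth noting in the report.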
Email data is often the most reliable source for verifying deal outcomes and understanding what happened between calls. Contracts get signed via email, proposals get sent via email, and "we've decided to go with someone else" arrives via email. Skipping this step means relying solely on CRM stages (which may be stale) and transcript inferences (which can be ambiguous).
Step 5a — Search by Domain & Email: For each prospect/company identified from the calls:

- Use search-emails-by-metadata with the company domain to find all email correspondence

Step 5b — Semantic Search for Deal Signals: Use semantic-search-emails with queries designed to surface deal-critical communications:
Step 5c — Pull Full Email Content:
For emails that look deal-relevant based on subject/snippet, use get-email-content to read the full body. Look for:
Step 5d — Build an Email Evidence Log: For each prospect, compile a timeline of email evidence alongside the call recordings. Note which emails provide definitive deal outcome evidence vs. which are ambiguous. This log feeds directly into the deal outcome verification in Phase 2.
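The evidence log from Step 5d is essentially a merged, date-sorted timeline. A sketch — the keyword heuristic for "definitive" here is a crude placeholder for real judgment about what an email proves:

```javascript
// Sketch: interleave call and email evidence into one per-prospect timeline,
// tagging each email as definitive or ambiguous deal-outcome evidence.
// The regex "definitive" test is a placeholder heuristic, not a real rule.
function buildEvidenceLog(calls, emails) {
  const entries = [
    ...calls.map((c) => ({ date: c.date, kind: "call", detail: c.title })),
    ...emails.map((e) => ({
      date: e.date,
      kind: "email",
      detail: e.subject,
      definitive: /contract|invoice|signed|decided/i.test(e.subject),
    })),
  ];
  // ISO date strings sort correctly with plain string comparison
  return entries.sort((a, b) => a.date.localeCompare(b.date));
}

const log = buildEvidenceLog(
  [{ date: "2025-01-10", title: "Discovery call" }],
  [{ date: "2025-01-15", subject: "Signed contract attached" }]
);
```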
If email tools aren't available, note this limitation and rely on CRM + transcript data for deal outcomes.
This is where the real value is created. The analysis must go beyond surface-level observation — it should feel like a seasoned sales coach watched every call.
Before grading anything, establish the ground truth on what actually happened with each deal. Wrong deal outcomes lead to wrong grades — if you think a deal was lost when it actually closed, the entire analysis of "what went wrong" becomes fiction.
Cross-reference at least three sources for each prospect's outcome:
When sources conflict, flag the discrepancy explicitly in the report. For example: "CRM shows 'Lost' but the last transcript has the prospect agreeing to move forward — this may need manual verification." Present any uncertain outcomes to the user for confirmation before finalizing the analysis.
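The reconciliation logic can be sketched as follows. The three-valued signals ("won"/"lost"/"unknown") are a simplification for illustration; the point is that disagreement produces an explicit conflict flag, never a guess:

```javascript
// Sketch: reconcile a deal's outcome across CRM stage, transcript inference,
// and email evidence. Signal values are simplified to won/lost/unknown.
function resolveOutcome({ crm, transcript, email }) {
  const signals = [crm, transcript, email].filter((s) => s && s !== "unknown");
  const unique = [...new Set(signals)];
  if (unique.length === 0) return { outcome: "unknown", confirmed: false };
  if (unique.length === 1) {
    // Agreement; two or more concurring sources count as confirmed
    return { outcome: unique[0], confirmed: signals.length >= 2 };
  }
  // Conflicting sources: flag for manual verification instead of guessing
  return {
    outcome: "conflict",
    confirmed: false,
    note: `sources disagree: crm=${crm}, transcript=${transcript}, email=${email}`,
  };
}
```

A CRM stage of "Lost" against a transcript where the prospect agrees to move forward would surface here as a conflict to put in front of the user, exactly as described above.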
For each prospect, build a complete picture:
Use the scoring framework from Phase 0. If no framework was specified, use this default set of dimensions. Each dimension should be graded on a letter scale (A+ through F) with supporting evidence from transcripts.
Default dimensions:
Discovery & Qualification — Does the rep uncover pain, budget, authority, timeline? Do they ask open-ended questions? Do they dig deeper or accept surface-level answers? Are they qualifying or just collecting information?
Demo & Value Articulation — Does the rep tailor the demo to the prospect's stated pain? Do they connect features to business outcomes? Or do they run the same generic demo every time?
Objection Handling — When the prospect pushes back (price, timing, competition, internal resistance), does the rep address it head-on with evidence? Do they acknowledge and redirect? Or do they fold/ignore?
Pricing & Negotiation — How does the rep introduce pricing? Do they anchor high? Do they give away discounts unprompted? Do they tie price to value? How do they handle "that's too expensive"?
Closing Technique — Does the rep actually ask for the business? Do they use trial closes throughout? Do they create next steps with specific dates and commitments? Or do conversations just... end?
Urgency & Scarcity Creation — Does the rep create legitimate reasons to act now? End-of-month incentives, implementation timelines, competitive pressure? Or is there no urgency at all?
Follow-up & Next Steps — Does the rep set concrete next steps on every call? Calendar invites sent live? Clear action items for both sides? Or vague "I'll send you some info"?
Adaptability & Rapport — Does the rep read the room? Do they adjust their approach for technical vs. business buyers? Do they handle unexpected situations well?
Case Study & Social Proof Usage — Does the rep reference relevant customer stories? Do the examples match the prospect's industry/size/pain?
Multi-stakeholder Management — When multiple decision-makers are on the call, does the rep engage all of them? Do they identify the champion vs. the blocker?
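A grade entry for any of the dimensions above can be modeled as data that refuses to exist without evidence. A sketch (the entry shape is an illustrative assumption; the letter scale follows the A+ through F scale stated above):

```javascript
// Sketch: a dimension grade only counts if it carries transcript evidence.
// Letter scale A+ through F, per the default framework above.
const LETTER_GRADES = [
  "A+", "A", "A-", "B+", "B", "B-",
  "C+", "C", "C-", "D+", "D", "D-", "F",
];

function validateGrade(entry) {
  const hasGrade = LETTER_GRADES.includes(entry.grade);
  const hasEvidence = Array.isArray(entry.evidence) && entry.evidence.length > 0;
  return hasGrade && hasEvidence; // an opinion without a quote doesn't count
}

const ok = validateGrade({
  dimension: "Objection Handling",
  grade: "B+",
  evidence: [{ call: "Acme, Jan 15", quote: "Totally fair concern. Here's how two similar teams handled it..." }],
});
```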
Every grade must be backed by specific transcript evidence. The standard:
After grading individual dimensions, look for patterns:
Generate a professional .docx report using the docx npm package. Read the docx skill for document formatting best practices if available.
The report should follow this structure:
Full transcripts, not summaries. This is the single most important data quality decision in the entire workflow. Summaries lose the nuance — the exact words a prospect uses when they're about to close, the silence after a pricing reveal, the specific objection that went unaddressed, the way a rep introduces pricing. A summary that says "discussed pricing and next steps" tells you almost nothing about whether the rep handled that moment well. The full transcript of that same moment might reveal the rep said "our price is $500/month but we can do a discount" (anchoring low and volunteering a discount unprompted) vs. "based on the ROI we just discussed, most teams at your stage invest $500-800/month" (anchoring to value). That distinction is the difference between a C and an A in pricing technique — and it's invisible in summaries.
Understand the CRM before querying it. Every CRM is structured differently. Before pulling a single prospect record, map out the available lists, pipelines, and columns. A 5-minute discovery step prevents the entire analysis from missing critical data — like deal values sitting in a custom "Contract Amount" field instead of the default "Value" field, or deal outcomes living in a separate "Closed Deals" list rather than a stage on the main pipeline. Paginate through all attribute definitions; don't stop at the first page.
Verify deal outcomes from multiple sources. No single data source is reliable enough on its own. CRM stages go stale. Transcript inferences can be ambiguous ("we'll move forward" doesn't always mean the deal closed). Email evidence — payment confirmations, signed contracts, rejection notices — is often the strongest signal. When sources conflict, flag the discrepancy rather than guessing. Present uncertain outcomes to the user for confirmation.
Evidence over opinion. Every claim in the report should be traceable to a specific call or CRM record. "The rep struggles with urgency" is an opinion. "In 8 of 12 calls, the rep ended without establishing a timeline or reason to act. For example, on the Bettercommerce call (Jan 15), the prospect said 'we'll think about it' and the rep responded 'sounds good, take your time' instead of proposing a specific next step" is evidence.
Context matters. A 14% close rate on enterprise deals is different from 14% on SMB leads. A rep who closes 10/69 calls in their first quarter selling a new product is performing differently than one selling an established product. The ICP, product maturity, and market all factor into fair grading.
Be constructive, not punitive. The goal is to help the rep improve, not to prove they're bad. Frame weaknesses as opportunities with specific, actionable advice. "Instead of X, try Y — here's an example of when something similar worked on the Boostability call."
Don't silently skip data sources. If email tools, CRM tools, or full transcripts aren't available, explicitly note what's missing in the methodology section and how it limits the analysis. The user deserves to know what the report is based on and what it's missing.