Extract issues from documents and import them into Linear
From linear-issue-importer: `npx claudepluginhub jamesckemp/claude-plugins --plugin linear-issue-importer`

This skill uses the workspace's default tool permissions.
If plan mode is active, call ExitPlanMode immediately with a single-line plan. Do not create a detailed plan — this skill defines its own workflow. All import commands run in execute mode.
Important: This skill definition is the authoritative process. Do not reference previous session outputs to learn how extraction was done before — those may reflect an older version of this skill or a different approach entirely. Always follow the steps defined here.
Extract issues from any document and import them into Linear.
- /import-issues <path/to/file> — full workflow: extract, deduplicate, configure, create in Linear
- /preview-issues <path/to/file> — dry run: extract, deduplicate, configure only; nothing created in Linear

For /preview-issues, run Steps 0–6 only, then stop. Do not create any issues.
Read these files before starting the workflow:
- references/extraction-patterns.md — document type detection and issue extraction rules
- references/issue-template.md — required description templates (Bug Report, Action Item, Feature Request)
- references/chunk-agent-prompt.md — prompt template for chunk analysis agents (only needed for the chunked pipeline)

These files are NOT auto-loaded. You must read them explicitly. Issue descriptions that don't follow the templates will be rejected by the user.
Before doing ANY work on the source file, complete these steps in order:
Document type: [type]
Utterance/line count: [N]
Routing: [Chunked pipeline (Steps 1c–1e) / Single-pass transcript-first / Single-pass structured]
[If chunked] Estimated chunks: ~N (5-min windows, 2-min overlap)
Do NOT begin extraction until the routing decision has been reported.
Interactive steps that require user input — do not skip:
Run this step once at the start. It produces metadata and a reusable attribution block embedded in every created issue.
After reading the file, extract:
- Meeting title (JSON title, Obsidian frontmatter title, or filename)
- Meeting date (JSON created_at, Obsidian frontmatter created_at, or file modification date)
- Source URL (Granola URL, e.g. https://notes.granola.ai/d/{id}; Obsidian frontmatter granola_url; or file path if no URL available)

Use AskUserQuestion to ask:
Do you have any links to include in every issue? (e.g., summary post, video recording, related docs)
Options:
If the user provides links, store them as supplementary_links[] for use in issue descriptions.
Build a reusable attribution block following the format in references/issue-template.md. This block is appended to every issue description:
---
Source: [Meeting Title](url) — YYYY-MM-DD
[Summary post](url)
[Video recording](url)
Supplementary link lines are only included if the user provided them in Step 0b. If no links were provided, just the Source line appears.
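The attribution logic above can be sketched as a small helper. This is a minimal sketch, not part of the skill's defined interface; the field names (`meetingTitle`, `sourceUrl`, `supplementaryLinks`) are illustrative assumptions.

```typescript
// Sketch: assemble the Step 0c attribution block appended to every issue.
interface SourceMeta {
  meetingTitle: string;
  sourceUrl: string;
  dateISO: string; // YYYY-MM-DD
  supplementaryLinks?: { label: string; url: string }[];
}

function buildAttributionBlock(meta: SourceMeta): string {
  const lines = [
    "---",
    `Source: [${meta.meetingTitle}](${meta.sourceUrl}) — ${meta.dateISO}`,
  ];
  // Supplementary lines appear only if the user provided links in Step 0b.
  for (const link of meta.supplementaryLinks ?? []) {
    lines.push(`[${link.label}](${link.url})`);
  }
  return lines.join("\n");
}
```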
If the file is already in context (e.g. via an @ reference), use what's there — but do NOT summarize, extract, or analyze ahead. Just proceed to type detection.

Type signals:
- Granola JSON: id, title, notes_markdown, transcript[] fields
- Granola Obsidian: frontmatter with granola_id, title, granola_url, tags; body uses ### headings + bullet points
- Markdown Bug List: ## category headings, ### individual bugs, prefixes like "Bug:", "UX:", "Task:"

Routing:
Granola JSON with 200+ utterances → Chunked pipeline (Steps 1c–1e)
Granola JSON with <200 utterances → Single-pass transcript-first (Step 1f)
Raw Transcript with 300+ lines → Chunked pipeline (Steps 1c–1e)
Raw Transcript with <300 lines → Single-pass transcript-first (Step 1f)
Granola Obsidian / Markdown Bug List / General Doc → Single-pass structured (Step 1f)
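The routing table above can be expressed as a small decision function. A sketch under the stated thresholds (200 utterances for Granola JSON, 300 lines for raw transcripts); the type and route names here are shorthand, not identifiers the skill defines.

```typescript
// Sketch of the routing decision from the table above.
type DocType =
  | "granola-json"
  | "raw-transcript"
  | "granola-obsidian"
  | "markdown-bug-list"
  | "general-doc";

type Route = "chunked" | "single-pass-transcript" | "single-pass-structured";

// unitCount: utterances for Granola JSON, lines for raw transcripts.
function route(docType: DocType, unitCount: number): Route {
  switch (docType) {
    case "granola-json":
      return unitCount >= 200 ? "chunked" : "single-pass-transcript";
    case "raw-transcript":
      return unitCount >= 300 ? "chunked" : "single-pass-transcript";
    default:
      // Structured documents always go single-pass.
      return "single-pass-structured";
  }
}
```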
Routing confirmation (required): After determining the route, output the routing decision to the user (see Pre-Flight Check above). Do not proceed to extraction until this output has been presented.
Timestamped transcripts (Granola JSON):
Non-timestamped transcripts (Raw Transcript):
If timestamps are present (e.g. [00:12:34]), prefer time-based chunking.

Record metadata per chunk: start time, end time, overlap boundaries, utterance count, position in sequence.
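Time-based chunking with the 5-minute windows and 2-minute overlap from the pre-flight estimate can be sketched as follows. The utterance representation (an array of start offsets in seconds) is an assumption for illustration; the real pipeline carries full utterance objects.

```typescript
// Sketch: compute chunk boundaries for 5-min windows with 2-min overlap.
// Consecutive chunk starts are windowSec - overlapSec apart (180s here).
interface Chunk {
  startSec: number;
  endSec: number;
  utteranceCount: number;
  index: number;
}

function chunkByTime(
  utteranceStarts: number[], // seconds from recording start, sorted
  windowSec = 300,
  overlapSec = 120
): Chunk[] {
  if (utteranceStarts.length === 0) return [];
  const first = utteranceStarts[0];
  const last = utteranceStarts[utteranceStarts.length - 1];
  const stride = windowSec - overlapSec;
  const chunks: Chunk[] = [];
  for (let s = first, i = 0; s <= last; s += stride, i++) {
    const e = s + windowSec;
    const count = utteranceStarts.filter((t) => t >= s && t < e).length;
    chunks.push({ startSec: s, endSec: e, utteranceCount: count, index: i });
  }
  return chunks;
}
```

Each utterance near a boundary lands in two chunks, which is what lets adjacent agents catch issues that straddle a window edge.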
Read references/chunk-agent-prompt.md for the prompt template. Use the Task tool to spawn agents in batches of 3–4. Each agent receives:
Also calculate the video time offset for this batch: video_time = utterance.start_timestamp - first_utterance.start_timestamp, formatted as H:MM:SS. Pass this to agents so they can include video-referenceable timestamps.
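The offset calculation above, as a minimal sketch (inputs assumed to be timestamps in seconds, e.g. epoch seconds from the transcript):

```typescript
// Sketch: video_time = utterance.start_timestamp - first_utterance.start_timestamp,
// formatted as H:MM:SS per the step above.
function formatVideoTime(startTs: number, firstTs: number): string {
  const total = Math.max(0, Math.round(startTs - firstTs));
  const h = Math.floor(total / 3600);
  const m = Math.floor((total % 3600) / 60);
  const s = total % 60;
  const pad = (n: number) => String(n).padStart(2, "0");
  // Hours are not zero-padded, matching the H:MM:SS format.
  return `${h}:${pad(m)}:${pad(s)}`;
}
```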
Task tool config:
subagent_type: "general-purpose"
mode: "bypassPermissions"
After all agents complete:
notes_markdown: apply the extraction rules (references/extraction-patterns.md), then cross-reference against notes for enrichment.

For each extracted issue, capture:
- Source quote, with its location (e.g. "at 1:37:05 in recording")
- Video timestamp: H:MM:SS offset from recording start (for timestamped transcripts)
- Description formatted with the matching template (references/issue-template.md)
- [Potential visual issue] tag if identified through Visual Gap Indicators

Before building the working document, compare extracted issues against each other. Look for:
If overlapping pairs are found, present them to the user:
These extracted issues may overlap:
A) "Fix checkout timeout on large carts"
B) "Checkout fails when cart has 50+ items"
These appear to describe the same underlying issue.
Options:
- Merge into one issue (pick which title to keep)
- Keep both as separate issues
- Remove one (pick which to remove)
Resolve all internal overlaps before proceeding.
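A cheap first-pass screen for overlap candidates can be sketched as a title token-overlap heuristic. This is purely illustrative and not something the skill prescribes: it only surfaces pairs worth presenting, and it will miss semantic overlaps like the checkout example above, where judgment (or the user) has to decide.

```typescript
// Illustrative heuristic: flag title pairs sharing a high fraction of tokens.
function tokenSet(title: string): Set<string> {
  return new Set(title.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function likelyOverlap(a: string, b: string, threshold = 0.5): boolean {
  const ta = tokenSet(a);
  const tb = tokenSet(b);
  const shared = [...ta].filter((t) => tb.has(t)).length;
  // Normalize by the smaller set so a short title contained in a
  // longer one still scores highly.
  const smaller = Math.min(ta.size, tb.size);
  return smaller > 0 && shared / smaller >= threshold;
}
```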
Present extracted issues as a numbered list:
## Extracted Issues (N found)
### 1. [Title]
- Type: Bug / Task / Feature Request
- Priority: Urgent / High / Medium / Low
- Category: [functional area]
- Source: "[exact quote]"
If no issues were found, use AskUserQuestion to ask what to look for:
REQUIRED: Format each issue's description using the matching template from references/issue-template.md:
Omit template sections that have no meaningful content. Never use freeform descriptions.
Use AskUserQuestion to collect Linear settings. Fetch real data from Linear MCP to populate choices.
- Team: call mcp__claude_ai_Linear__list_teams and present options
- Labels: call mcp__claude_ai_Linear__list_issue_labels for the chosen team, then ask which labels to apply to all issues
- Project: call mcp__claude_ai_Linear__list_projects for the chosen team and ask which project to assign issues to (or "None")
- Assignee: call mcp__claude_ai_Linear__list_users
- Status: call mcp__claude_ai_Linear__list_issue_statuses for the chosen team if the user wants a non-default status

Present defaults clearly. Allow the user to set per-issue overrides (e.g., different labels or parent for specific issues).
Important — UUID tracking: Linear's triage automation can move issues between teams after creation, which changes the human-readable identifier (e.g., WOOPRD-1234 becomes WOOPLUG-567). The UUID never changes. Always track created issues by their UUID, not their identifier.
For each confirmed issue, search for duplicates using multiple strategies. Run them in order and stop when plausible matches are found:
Strategy 1 — Parent sub-issue check (only when parentId is set):
list_issues({ parentId: "<parent-uuid>", limit: 50 })
Compare all existing sub-issues of the parent against the new issue title and description. This is the highest-confidence signal — if the parent already has a sub-issue covering the same topic, it's almost certainly a duplicate.
Strategy 2 — Team + query search with 3 variants (always runs if Strategy 1 found nothing):
Run up to 3 query variants, each with limit: 30:
Variant 1 — Title noun phrases:
list_issues({ team: "<team-key>", query: "<key noun phrases from title>", limit: 30 })
Extract meaningful noun phrases from the title — not just 2-3 random words. For example, for "Fix checkout timeout when processing large carts", search "checkout timeout large carts".
Variant 2 — Description key terms:
list_issues({ team: "<team-key>", query: "<synonyms/related terms from description>", limit: 30 })
Use synonyms or related terms from the description. For example, if the description mentions "payment processing hangs", search "payment processing hang".
Variant 3 — User-visible symptom:
list_issues({ team: "<team-key>", query: "<how a user would describe the problem>", limit: 30 })
Describe the problem from the user's perspective — how they'd report it. For example, "spinning loader checkout" or "can't complete purchase". This catches issues titled differently but describing the same observable behavior.
Early-stop: If any variant returns a HIGH confidence match (same behavior described, active issue), skip remaining variants for that issue.
Deduplicate results across all query variants before presenting.
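The cross-variant dedup can be sketched as a merge keyed on the UUID (id) field, which stays stable even when triage automation rewrites the human-readable identifier. The Candidate shape here is a simplified assumption, not the full list_issues response.

```typescript
// Sketch: merge candidates from the three query variants, dropping
// duplicates by UUID before presenting them to the user.
interface Candidate {
  id: string; // UUID — stable across triage moves
  identifier: string; // e.g. "WOOPRD-1234" — may change
  title: string;
}

function mergeCandidates(...variants: Candidate[][]): Candidate[] {
  const seen = new Set<string>();
  const merged: Candidate[] = [];
  for (const list of variants) {
    for (const c of list) {
      if (!seen.has(c.id)) {
        seen.add(c.id);
        merged.push(c);
      }
    }
  }
  return merged;
}
```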
Strategy 3 — Cross-team search (only if Strategies 1-2 found nothing):
list_issues({ query: "<key noun phrases>", limit: 20 })
Omit the team parameter to search across all teams. This catches issues that were filed under a different team or moved by triage automation.
After collecting all candidates from the search strategies, assess each match and assign a confidence label:
Confidence is based on:
Order candidates HIGH → MEDIUM → LOW when presenting.
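The presentation order can be sketched as a simple rank sort. The confidence labels come from the step above; attaching them as a `confidence` field on each candidate is an assumption for illustration.

```typescript
// Sketch: order duplicate candidates HIGH → MEDIUM → LOW for presentation.
type Confidence = "HIGH" | "MEDIUM" | "LOW";

const rank: Record<Confidence, number> = { HIGH: 0, MEDIUM: 1, LOW: 2 };

function orderByConfidence<T extends { confidence: Confidence }>(candidates: T[]): T[] {
  // Copy before sorting so the caller's array is untouched.
  return [...candidates].sort((a, b) => rank[a.confidence] - rank[b.confidence]);
}
```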
When potential duplicates are found, present them with full context and confidence labels:
Issue "Fix checkout timeout" may already exist:
1. WOOPRD-1234: "Checkout times out on large carts"
Team: WooProduct | Status: In Progress | Priority: High
Assignee: Jane Smith | Updated: 3 days ago
Match confidence: HIGH — same behavior, active issue
2. WOOPLUG-567: "Payment timeout handling"
Team: WooPlugins | Status: Done | Priority: Medium
Assignee: — | Updated: 2 weeks ago
Match confidence: LOW — related topic, already resolved
Options:
a) Create anyway (no relation)
b) Create and link as related to [pick which existing issue]
c) Mark as duplicate — skip creation
d) Skip entirely
For option (b), use the relatedTo parameter in create_issue to link the new issue to the existing one in a single call.
Present the final list with all Linear configuration and duplicate resolution status applied:
## Ready to Import (N issues)
### 1. [Title]
- Team: [team name]
- Labels: [label1, label2, ...]
- Parent: [parent identifier] (if set)
- Assignee: [name] (if set)
- Duplicate status: No matches found / Linked to TEAM-XXXX / Create anyway
- Description preview: [first 2 lines]
### 2. [Title]
...
Ask user to confirm. They can:
Do not proceed until the user explicitly confirms.
For 10+ issues, suggest using parallel agents grouped by category to speed up creation. When parallelizing:
For issues that pass duplicate checking:
- Create each issue with mcp__claude_ai_Linear__create_issue. REQUIRED: Each issue description MUST use the template from references/issue-template.md matching its Type (Bug Report, Action Item, or Feature Request). Include the source attribution block from Step 0c. Do not use freeform descriptions.
- After each create_issue call, store the UUID (id field) from the response — not just the identifier. The UUID is the only stable reference once triage automation runs.
- If create_issue returned a UUID, the issue exists. Searching by the old identifier after triage routing has moved it will return nothing, which is not a failure — do not re-create the issue.

Before generating the summary, re-fetch every created issue by UUID using mcp__claude_ai_Linear__get_issue({ id: "<uuid>" }). This is critical because triage automation may have:
Use the current identifier and URL from the get_issue response in the summary, not the values from the original create_issue response.
Create a markdown summary with:
## Linear Import Summary
**Source:** [file path]
**Team:** [team name] (at time of creation — triage may have moved issues)
**Date:** [today's date]
**Total extracted:** N | **Created:** N | **Skipped:** N | **Removed:** N
### Created Issues
| # | ID | Title | URL |
|---|-----|-------|-----|
| 1 | TEAM-XXXX | Title here | [link](url) |
### Skipped (Duplicates)
| # | Title | Existing Issue | Reason |
|---|-------|---------------|--------|
| 1 | Title here | TEAM-XXXX | Near duplicate |
### Removed by User
- Title of removed issue 1
- Title of removed issue 2
Present this summary to the user. Offer to save it as linear-import-YYYY-MM-DD-{topic}.md in the current directory, where {topic} is 2-3 lowercase hyphenated words summarising the import (e.g., linear-import-2026-02-20-checkout-bugs.md, linear-import-2026-02-20-onboarding-tasks.md).
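The suggested filename can be sketched as a small slug helper. The slugging rules (lowercase, hyphenated, at most 3 words) follow the instruction above; the fallback slug is an assumption for the edge case of an empty topic.

```typescript
// Sketch: build the suggested summary filename,
// e.g. linear-import-2026-02-20-checkout-bugs.md
function summaryFilename(dateISO: string, topic: string): string {
  const slug =
    topic
      .toLowerCase()
      .match(/[a-z0-9]+/g)
      ?.slice(0, 3)
      .join("-") ?? "import"; // fallback if topic has no usable words
  return `linear-import-${dateISO}-${slug}.md`;
}
```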
- references/extraction-patterns.md — how to parse each document type
- references/issue-template.md — description templates for bugs, tasks, and feature requests
- references/linear-tools.md — Linear MCP tool quick reference
- references/chunk-agent-prompt.md — prompt template for chunk analysis agents (used in Step 1d)