Augments Trailmark code graphs with SARIF static-analysis results (Semgrep/CodeQL) and weAudit findings: maps findings to graph nodes by file and line, creates severity subgraphs, and cross-references them with blast-radius and taint data.
Install:

```bash
npx claudepluginhub trailofbits/skills --plugin trailmark
```

This skill uses the workspace's default tool permissions.
Projects findings from external tools (SARIF) and human auditors (weAudit) onto Trailmark code graphs as annotations and subgraphs.
Use alongside the trailmark skill (to build the graph first) and the diagramming-code skill (after augmenting).

| Rationalization | Why It's Wrong | Required Action |
|---|---|---|
| "The user only asked about SARIF, skip pre-analysis" | Without pre-analysis, you can't cross-reference findings with blast radius or taint | Always run engine.preanalysis() before augmenting |
| "Unmatched findings don't matter" | Unmatched findings may indicate parsing gaps or out-of-scope files | Report unmatched count and investigate if high |
| "One severity subgraph is enough" | Different severities need different triage workflows | Query all severity subgraphs, not just error |
| "SARIF results speak for themselves" | Findings without graph context lack blast radius and taint reachability | Cross-reference with pre-analysis subgraphs |
| "weAudit and SARIF overlap, pick one" | Human auditors and tools find different things | Import both when available |
| "Tool isn't installed, I'll do it manually" | Manual analysis misses what tooling catches | Install trailmark first |
MANDATORY: If `uv run trailmark` fails, install trailmark first:

```bash
uv pip install trailmark
```
```bash
# Augment with SARIF
uv run trailmark augment {targetDir} --sarif results.sarif

# Augment with weAudit
uv run trailmark augment {targetDir} --weaudit .vscode/alice.weaudit

# Both at once, output JSON
uv run trailmark augment {targetDir} \
  --sarif results.sarif \
  --weaudit .vscode/alice.weaudit \
  --json
```
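For scripted runs, the same invocation can be assembled programmatically. A minimal sketch — the `build_augment_command` helper is hypothetical, not part of trailmark:

```python
import shlex

def build_augment_command(target_dir, sarif=None, weaudit=None, as_json=False):
    """Assemble the `uv run trailmark augment` argument list (hypothetical helper)."""
    cmd = ["uv", "run", "trailmark", "augment", target_dir]
    if sarif:
        cmd += ["--sarif", sarif]
    if weaudit:
        cmd += ["--weaudit", weaudit]
    if as_json:
        cmd.append("--json")
    return cmd

cmd = build_augment_command("src/", sarif="results.sarif",
                            weaudit=".vscode/alice.weaudit", as_json=True)
print(shlex.join(cmd))
# hand the list to subprocess.run(cmd, check=True) to execute
```

Building an argument list (rather than a shell string) avoids quoting bugs when `{targetDir}` contains spaces.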
```python
from trailmark.query.api import QueryEngine

engine = QueryEngine.from_directory("{targetDir}", language="python")

# Run pre-analysis first for cross-referencing
engine.preanalysis()

# Augment with SARIF
result = engine.augment_sarif("results.sarif")
# result: {matched_findings: 12, unmatched_findings: 3, subgraphs_created: [...]}

# Augment with weAudit
result = engine.augment_weaudit(".vscode/alice.weaudit")

# Query findings
engine.findings()                       # All findings
engine.subgraph("sarif:error")          # High-severity SARIF
engine.subgraph("weaudit:high")         # High-severity weAudit
engine.subgraph("sarif:semgrep")        # By tool name
engine.annotations_of("function_name")  # Per-node lookup
```
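Tool-generated annotation text uses the `[SEVERITY] rule-id: message` format. A sketch of splitting such a label into its parts — the helper and regex are illustrative, not trailmark APIs:

```python
import re

# Assumed label shape from this skill's annotation format: "[SEVERITY] rule-id: message"
LABEL_RE = re.compile(r"^\[(?P<severity>[A-Z]+)\]\s+(?P<rule>[^:]+):\s*(?P<message>.*)$")

def parse_finding_label(label):
    """Split an annotation label into (severity, rule id, message); None if it doesn't match."""
    m = LABEL_RE.match(label)
    if m is None:
        return None
    return m.group("severity"), m.group("rule").strip(), m.group("message")

print(parse_finding_label(
    "[ERROR] python.lang.security.audit.eval-detected: eval() on user input"))
```

The rule id is taken as everything up to the first colon, so dotted Semgrep rule ids survive intact.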
Augmentation Progress:
- [ ] Step 1: Build graph and run pre-analysis
- [ ] Step 2: Locate SARIF/weAudit files
- [ ] Step 3: Run augmentation
- [ ] Step 4: Inspect results and subgraphs
- [ ] Step 5: Cross-reference with pre-analysis
Step 1: Build the graph and run pre-analysis for blast radius and taint context:
```python
engine = QueryEngine.from_directory("{targetDir}", language="{lang}")
engine.preanalysis()
```
Step 2: Locate input files:

- SARIF: generate with `semgrep --sarif -o results.sarif` or `codeql database analyze --format=sarif-latest`
- weAudit: `.vscode/<username>.weaudit` within the workspace

Step 3: Run augmentation via `engine.augment_sarif()` or `engine.augment_weaudit()`. Check `unmatched_findings` in the result — these are findings whose file/line locations didn't overlap any parsed code unit.
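The unmatched count can be checked mechanically. A sketch assuming the result dict shape shown earlier (`matched_findings`, `unmatched_findings`), with an illustrative 20% threshold that is not from the docs:

```python
def unmatched_ratio(result):
    """Fraction of findings that failed to map to any graph node (sketch;
    assumes the augment result dict shape shown in this skill's examples)."""
    matched = result["matched_findings"]
    unmatched = result["unmatched_findings"]
    total = matched + unmatched
    return unmatched / total if total else 0.0

result = {"matched_findings": 12, "unmatched_findings": 3, "subgraphs_created": []}
ratio = unmatched_ratio(result)
if ratio > 0.2:  # illustrative threshold, tune per audit
    print(f"{ratio:.0%} of findings unmatched; check root_path and file scope")
```

A high ratio usually means the SARIF paths were produced from a different root than the graph's, or the flagged files were never parsed.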
Step 4: Query findings and subgraphs. Use `engine.findings()` to list all
annotated nodes. Use `engine.subgraph_names()` to see available subgraphs.
Step 5: Cross-reference with pre-analysis data to prioritize:

- `sarif:error` nodes that also appear in the `tainted` subgraph
- `high_blast_radius`
- `privilege_boundary`

Findings are stored as standard Trailmark annotations:

- Type: `finding` (tool-generated) or `audit_note` (human notes)
- Source: `sarif:<tool_name>` or `weaudit:<author>`
- Text: `[SEVERITY] rule-id: message` (tool)

| Subgraph | Contents |
|---|---|
| `sarif:error` | Nodes with SARIF error-level findings |
| `sarif:warning` | Nodes with SARIF warning-level findings |
| `sarif:note` | Nodes with SARIF note-level findings |
| `sarif:<tool>` | Nodes flagged by a specific tool |
| `weaudit:high` | Nodes with high-severity weAudit findings |
| `weaudit:medium` | Nodes with medium-severity weAudit findings |
| `weaudit:low` | Nodes with low-severity weAudit findings |
| `weaudit:findings` | All weAudit findings (entryType=0) |
| `weaudit:notes` | All weAudit notes (entryType=1) |
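Cross-referencing these subgraphs with pre-analysis subgraphs reduces to set intersection over node membership. A sketch with hypothetical node ids, modeling each subgraph as a plain set:

```python
# Hypothetical subgraph memberships, modeled as sets of node ids
subgraphs = {
    "sarif:error": {"parse_input", "render_page", "load_config"},
    "tainted": {"parse_input", "render_page"},
    "high_blast_radius": {"load_config", "render_page"},
}

def prioritize(subgraphs, severity, *context):
    """Nodes in the severity subgraph that also appear in every context subgraph."""
    nodes = set(subgraphs[severity])
    for name in context:
        nodes &= subgraphs.get(name, set())
    return sorted(nodes)

print(prioritize(subgraphs, "sarif:error", "tainted"))
print(prioritize(subgraphs, "sarif:error", "tainted", "high_blast_radius"))
```

Each added context subgraph narrows the triage queue, so error-level findings on tainted, high-blast-radius nodes surface first.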
Findings are matched to graph nodes by file path and line range overlap: the
finding's path is resolved against the graph's `root_path`, and nodes whose
`location.file_path` matches and whose line range overlaps the finding's are
selected. SARIF paths may be relative, absolute, or `file://` URIs; all are
handled. weAudit uses 0-indexed lines, which are converted to 1-indexed
automatically.
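The matching rule can be sketched end to end. The helper names are hypothetical and the path handling is a simplification of what trailmark actually implements:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def normalize_path(raw, root_path):
    """Resolve a SARIF location path (relative, absolute, or file:// URI)
    to a path relative to the graph root (simplified sketch)."""
    if raw.startswith("file://"):
        raw = urlparse(raw).path
    p = PurePosixPath(raw)
    if p.is_absolute():
        try:
            return str(p.relative_to(root_path))
        except ValueError:
            return str(p)  # outside the root; left as-is
    return str(p)

def overlaps(node_range, finding_range):
    """Inclusive 1-indexed line ranges overlap test."""
    (a1, a2), (b1, b2) = node_range, finding_range
    return a1 <= b2 and b1 <= a2

def weaudit_to_1_indexed(start, end):
    """weAudit stores 0-indexed lines; convert to 1-indexed."""
    return start + 1, end + 1

print(normalize_path("file:///repo/src/app.py", "/repo"))
print(overlaps((10, 20), (18, 25)))
print(weaudit_to_1_indexed(0, 4))
```

A node spanning lines 10-20 matches a finding on lines 18-25 because the inclusive ranges share lines 18-20; a finding entirely below or above the node would fail both inequality checks.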