# Supply chain security model for the marketplace plugin ecosystem
This skill documents the marketplace plugin security model, including how plugins are verified, sandboxed, scored, and audited.
The security module (src/security/trust-engine.ts) provides four interlocking components:
```
  .cpkg Bundle
        |
        v
+-------------------+
| SignatureVerifier |  Integrity check (SHA-512)
+-------------------+
        |
        v
+-------------------+
| SecurityAuditor   |  Static code analysis
+-------------------+
        |
        v
+-------------------+
| PermissionSandbox |  Permission boundary enforcement
+-------------------+
        |
        v
+-------------------+
| TrustScorer       |  Composite trust score (0-100)
+-------------------+
        |
        v
   Trust Report
```
Every `.cpkg` bundle can include a `__signature__.json` containing:

```json
{
  "algorithm": "sha512",
  "checksum": "a1b2c3d4e5f6...",
  "author": {
    "identity": "user@example.com",
    "provider": "github",
    "verified": true
  },
  "timestamp": "2026-02-20T14:30:00Z",
  "transparencyLogEntry": "24f9a8b3-1234-4abc-9def-567890abcdef"
}
```
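The checksum step of verification amounts to hashing the raw bundle bytes and comparing against the recorded digest. A minimal sketch using Node's built-in `crypto` module (the helper name `matchesChecksum` is ours, not part of trust-engine):

```typescript
import { createHash } from 'node:crypto';

// Compute the SHA-512 hex digest of the bundle bytes and compare it
// to the checksum recorded in __signature__.json.
function matchesChecksum(bundle: Buffer, expectedHex: string): boolean {
  const actual = createHash('sha512').update(bundle).digest('hex');
  return actual === expectedHex;
}

const bundle = Buffer.from('example bundle contents');
const digest = createHash('sha512').update(bundle).digest('hex');

console.log(matchesChecksum(bundle, digest)); // true
console.log(matchesChecksum(bundle, 'a1b2c3d4')); // false
```

A constant-length hex comparison like this is sufficient for integrity; authenticating the author additionally requires checking the identity block and timestamp, as the status table below describes.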
The verifier is invoked as follows:

```typescript
import { SignatureVerifier } from './trust-engine';

const verifier = new SignatureVerifier();
const result = await verifier.verify(bundleBuffer, signatureInfo);

if (result.valid) {
  console.log('Bundle verified:', result.details);
} else {
  console.error('Verification failed:', result.status, result.errors);
}
```
| Status | Meaning |
|---|---|
| `verified` | Checksum matches, author valid, timestamp in range |
| `tampered` | Checksum mismatch or invalid algorithm |
| `unsigned` | No signature metadata present |
| `expired` | Signature older than max age (default 2 years) |
| `unknown-signer` | Identity provider not in allowed list |
Plugins declare their resource requirements in the manifest:

```json
{
  "permissions": {
    "filesystem": ["read:./src", "write:./dist"],
    "network": ["api.github.com", "*.npmjs.org"],
    "exec": ["npm", "docker"],
    "env": ["AWS_REGION", "NODE_ENV"]
  }
}
```
Permission string conventions:

- Filesystem: `<access>:<path>`, where access is `read` or `write`. Write implies read. Paths are relative to the plugin root.
- Network: hostnames, with wildcard support (`*.example.com` matches any subdomain).
- Exec: names of executables the plugin may invoke.
- Env: variables exposed from `process.env`.

The sandbox statically analyzes hook scripts to detect undeclared resource access:
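The filesystem and network matching conventions can be sketched as two small predicates (illustrative helpers, not the trust-engine API):

```typescript
// Does a declared filesystem permission cover a requested access?
// "write:./dist" covers both read and write requests under ./dist,
// since write implies read.
function coversFs(declared: string, access: 'read' | 'write', path: string): boolean {
  const [declAccess, declPath] = declared.split(':');
  if (!path.startsWith(declPath)) return false;
  return declAccess === access || (declAccess === 'write' && access === 'read');
}

// Does a declared network pattern cover a host? "*.npmjs.org" matches
// any subdomain; plain hostnames match literally.
function coversHost(declared: string, host: string): boolean {
  if (declared.startsWith('*.')) return host.endsWith(declared.slice(1));
  return declared === host;
}

console.log(coversFs('write:./dist', 'read', './dist/out.js')); // true
console.log(coversHost('*.npmjs.org', 'registry.npmjs.org')); // true
console.log(coversHost('api.github.com', 'evil.com')); // false
```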
```typescript
import { PermissionSandbox } from './trust-engine';

const sandbox = new PermissionSandbox(
  { filesystem: ['read:./src'], network: ['api.github.com'], exec: ['npm'], env: ['NODE_ENV'] },
  '/path/to/plugin'
);

const result = sandbox.validateScript(hookScriptContent);
if (!result.allowed) {
  for (const v of result.violations) {
    console.warn(`Line ${v.line}: undeclared ${v.category} access to ${v.resource}`);
  }
}
```
For runtime enforcement, the sandbox generates restricted shell wrappers:

```typescript
const wrapper = sandbox.generateWrapper(originalScript);
// wrapper.script contains the restricted bash script
// wrapper.allowedEnv lists exposed environment variables
// wrapper.allowedPaths lists accessible filesystem paths
```
The wrapper exposes only the declared environment variables and filesystem paths to the original script.
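The environment-restriction half of that idea can be illustrated by filtering the parent environment down to the declared allow-list before the hook runs (a sketch; `buildRestrictedEnv` is a hypothetical helper, not part of trust-engine):

```typescript
// Keep only explicitly declared variables; everything else is dropped
// so the hook script cannot read undeclared secrets.
function buildRestrictedEnv(
  parentEnv: Record<string, string | undefined>,
  allowedEnv: string[],
): Record<string, string> {
  const restricted: Record<string, string> = {};
  for (const name of allowedEnv) {
    const value = parentEnv[name];
    if (value !== undefined) restricted[name] = value;
  }
  return restricted;
}

const env = buildRestrictedEnv(
  { NODE_ENV: 'production', AWS_SECRET_ACCESS_KEY: 'hunter2' },
  ['NODE_ENV'],
);
console.log(env); // { NODE_ENV: 'production' }
```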
The trust score is a weighted linear combination of five factors:

```
overall = signed       * 0.30
        + reputation   * 0.20
        + codeAnalysis * 0.25
        + community    * 0.15
        + freshness    * 0.10
```
Each factor produces a 0-100 sub-score. The overall score maps to a letter grade:
| Grade | Range | Meaning |
|---|---|---|
| A | 90-100 | Fully trusted |
| B | 80-89 | Good, minor concerns |
| C | 60-79 | Fair, review before installing |
| D | 40-59 | Poor, proceed with caution |
| F | 0-39 | Failing, do not install |
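The weighted combination and the grade mapping follow directly from the formula and table above. A minimal sketch (illustrative only; the real `TrustScorer` derives its sub-scores from richer inputs):

```typescript
interface SubScores {
  signed: number; // each sub-score in 0-100
  reputation: number;
  codeAnalysis: number;
  community: number;
  freshness: number;
}

// Weighted linear combination, rounded to an integer 0-100.
function overallScore(s: SubScores): number {
  return Math.round(
    s.signed * 0.30 +
    s.reputation * 0.20 +
    s.codeAnalysis * 0.25 +
    s.community * 0.15 +
    s.freshness * 0.10,
  );
}

// Map the overall score onto the letter-grade bands.
function grade(overall: number): 'A' | 'B' | 'C' | 'D' | 'F' {
  if (overall >= 90) return 'A';
  if (overall >= 80) return 'B';
  if (overall >= 60) return 'C';
  if (overall >= 40) return 'D';
  return 'F';
}

const s = { signed: 100, reputation: 70, codeAnalysis: 90, community: 60, freshness: 80 };
console.log(overallScore(s), grade(overallScore(s))); // 84 B
```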
- Signed & Verified (30%)
- Author Reputation (20%)
- Code Analysis (25%)
- Community Signals (15%)
- Freshness (10%)
```typescript
import { TrustScorer } from './trust-engine';

const scorer = new TrustScorer();
const score = scorer.score({
  verification: verifyResult,
  author: { publishedPluginCount: 5, accountCreated: '2024-01-01', identityVerified: true },
  audit: auditResult,
  community: { installCount: 1200, maxInstallCount: 50000, issueResolutionRate: 0.85, stars: 45 },
  freshness: { lastUpdated: '2026-02-10', dependencyCurrency: 0.92 },
});

console.log(`Score: ${score.overall}/100 (${score.grade})`);
for (const [name, factor] of Object.entries(score.factors)) {
  console.log(`  ${name}: ${factor.score}/100 [${factor.weight * 100}%] -- ${factor.details}`);
}
```
The auditor scans all source files (`.ts`, `.js`, `.sh`, `.py`, `.json`, `.yaml`, etc.) for:

Critical patterns:
- `eval()` and `new Function()` -- code injection vectors
- `vm.runInContext()` -- unsafe VM execution

High patterns:
- `spawn()` with `shell: true`
- `__proto__` access -- prototype pollution

Medium patterns:
- `require()` with variable paths
- Access to sensitive system paths (`/etc`, `/usr`, etc.)
- `process.exit()` in plugins
- Environment mutation (`process.env[...] = ...`)
- `constructor.prototype` manipulation

Low patterns:
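At its core, this kind of scan runs each pattern's regex over every source line and records the line numbers of matches. A minimal sketch with a reduced pattern set (the shipped `DANGEROUS_PATTERNS` list is richer):

```typescript
interface Finding {
  line: number;
  id: string;
  severity: string;
}

// Two of the critical patterns from the list above, for illustration.
const patterns = [
  { id: 'eval-call', severity: 'critical', regex: /\beval\s*\(/ },
  { id: 'new-function', severity: 'critical', regex: /new\s+Function\s*\(/ },
];

function scan(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split('\n').forEach((text, i) => {
    if (text.trim().startsWith('//')) return; // skip comment lines
    for (const p of patterns) {
      if (p.regex.test(text)) {
        findings.push({ line: i + 1, id: p.id, severity: p.severity });
      }
    }
  });
  return findings;
}

const findings = scan('const x = 1;\n// eval("commented out, ignored")\neval("2+2");');
console.log(findings); // one finding: eval-call at line 3
```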
```typescript
import { SecurityAuditor } from './trust-engine';

const auditor = new SecurityAuditor();
const report = await auditor.audit('my-plugin', '/path/to/plugin', declaredPermissions);

console.log(`Audit ${report.passed ? 'PASSED' : 'FAILED'}`);
console.log(`Findings: ${report.findings.length}`);

// Permission gap analysis
if (report.permissionAnalysis.undeclared.network?.length) {
  console.warn('Undeclared network access:', report.permissionAnalysis.undeclared.network);
}
```
- Skip directories: `node_modules`, `.git`, `dist`, `build`, `.next`, `coverage`, `__pycache__`
- Max file size: 512 KB (minified bundles are skipped)
- Comment lines are excluded to reduce false positives.
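The skip rules above can be sketched as a predicate applied before reading each file (illustrative; the names `shouldScan`, `SKIP_DIRS`, and `MAX_FILE_BYTES` are ours):

```typescript
const SKIP_DIRS = new Set([
  'node_modules', '.git', 'dist', 'build', '.next', 'coverage', '__pycache__',
]);
const MAX_FILE_BYTES = 512 * 1024;

// Decide whether a file should be scanned, based on its path segments
// and its size in bytes.
function shouldScan(filePath: string, sizeBytes: number): boolean {
  if (sizeBytes > MAX_FILE_BYTES) return false; // likely a minified bundle
  return !filePath.split('/').some((seg) => SKIP_DIRS.has(seg));
}

console.log(shouldScan('src/index.ts', 2048)); // true
console.log(shouldScan('node_modules/x/index.js', 2048)); // false
console.log(shouldScan('src/vendor.min.js', 900 * 1024)); // false
```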
You can extend the scanner with custom patterns:

```typescript
import { SecurityAuditor, DANGEROUS_PATTERNS } from './trust-engine';
import type { DangerousPattern } from './types';

const customPattern: DangerousPattern = {
  id: 'custom-check',
  name: 'Custom security check',
  pattern: /dangerousFunction\s*\(/,
  severity: 'high',
  category: 'custom',
  description: 'Usage of dangerousFunction() detected',
  recommendation: 'Replace with safeAlternative()',
};

const auditor = new SecurityAuditor([...DANGEROUS_PATTERNS, customPattern]);
```
For typical usage, use `createSecurityPipeline()` to get all components wired together:

```typescript
import { createSecurityPipeline } from './trust-engine';

const manifest = JSON.parse(await readFile('plugin.json', 'utf-8'));
const pipeline = createSecurityPipeline('/path/to/plugin', manifest);

// All components ready:
// pipeline.verifier    -- SignatureVerifier
// pipeline.sandbox     -- PermissionSandbox
// pipeline.scorer      -- TrustScorer
// pipeline.auditor     -- SecurityAuditor
// pipeline.permissions -- parsed PluginPermissions
```
| Command | Description |
|---|---|
| `/mp:trust <plugin>` | Full trust score and security audit |
| `/mp:trust <plugin> --audit-only` | Security audit without scoring |
| `/mp:trust <plugin> --score-only` | Trust score summary |
| `/mp:verify <target>` | Verify `.cpkg` bundle or plugin signature |
| File | Purpose |
|---|---|
| `src/security/types.ts` | All TypeScript interfaces and types |
| `src/security/trust-engine.ts` | Core engine implementation (4 classes) |
| `commands/trust.md` | `/mp:trust` command definition |
| `commands/verify.md` | `/mp:verify` command definition |
| `skills/security/SKILL.md` | This documentation |