> **Owner**: Analyst (vuln-analyst)
> **Applicable roles**: vuln-analyst, mock-env-preparer
> **Version**: v2.0 (originally named `l3-deep-analysis.md`; renamed to fit the v2.0 architecture)

Analyzes code vulnerabilities by tracing complete call chains across all defense layers.
Code analysis must follow the deep-analysis rules below. Never draw a conclusion from a single point in the code.

Problem case: VUL-002 was judged an authorization bypass because the route layer lacked the `ensureLoggedIn` middleware, when in fact the underlying function `canEditQueue()` performs a complete permission check.
Rule: authorization vulnerabilities must be traced through every layer:
1. Route layer → middleware checks
2. Controller layer → parameter handling
3. Service layer → business-logic permission checks
4. Data layer → data access control

A vulnerability may be confirmed only when the check is missing along the entire chain.
Request entry
↓
Route middleware (layer 1)
↓
Controller permission check (layer 2)
↓
Service-layer business validation (layer 3)
↓
Data-layer access control (layer 4)

An effective check at any layer = the vulnerability does not hold.
Must check:
- [ ] Does the route have authentication middleware?
- [ ] Does the controller call a permission-check function?
- [ ] Does the service layer verify resource ownership?
- [ ] Is there a global permission interceptor? (see the sketch below)
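For the last item: a global interceptor is registered once, far from the route definitions, so a route that looks unprotected may still be covered. A minimal Express sketch; the app setup and the `req.uid` convention are assumptions for illustration, not code from any target project:

```js
const express = require('express');
const app = express();

// Hypothetical global gate: every /api route passes through this check
// before any router-level middleware runs. If something like this exists,
// a route without its own ensureLoggedIn may still require authentication.
app.use('/api', (req, res, next) => {
    if (!req.uid) { // assumed: session middleware sets req.uid upstream
        return res.status(401).json({ error: 'not-authorised' });
    }
    next();
});
```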
Misjudgment patterns:
- Concluding from a missing route middleware alone → wrong
- Failing to trace down to service-layer checks → wrong

Correct approach:
- Trace the complete call chain
- Find every permission checkpoint
- Verify that each checkpoint is effective
Example: the correct analysis of VUL-002 (lines marked illustrative fill gaps the original excerpt elides):

```js
// Route layer: appears to lack ensureLoggedIn
router.put('/queue/:id', controllers.editQueuedContent);

// ...but the controller calls a permission check
async function editQueuedContent(req, res) {
    const editData = { id: req.params.id, ...req.body }; // shape illustrative
    const canEdit = await Posts.canEditQueue(req.uid, editData, 'edit');
    if (!canEdit) {
        return helpers.formatApiResponse(403, res); // ← the check is here!
    }
    // ... perform the edit ...
}

// ...and the service layer has a complete permission check
Posts.canEditQueue = async function (uid, editData, action) {
    // Is the caller an admin or global moderator?
    const isAdminOrGlobalMod = await user.isAdminOrGlobalMod(uid);
    // Load the queued entry (elided in the original excerpt); `data` is the
    // stored queue item, so data.uid is the author of the queued post.
    const data = await getQueuedPost(editData.id); // helper name illustrative
    // Is the caller the author?
    const selfPost = parseInt(uid, 10) === parseInt(data.uid, 10);
    // ...
};
```
Concurrency / race-condition vulnerabilities must check:
- [ ] The scope of the lock mechanism (process-level vs. distributed)
- [ ] The actual deployment scenario (single instance vs. multiple instances)
- [ ] Whether the database layer offers atomic operations
- [ ] Whether the operation is protected by a transaction

Misjudgment patterns:
- Seeing an in-memory lock and declaring multi-instance deployments broken → evaluate the deployment scenario first
- Ignoring database-level constraints → wrong

Correct approach:
- Evaluate the typical deployment scenario
- Check database-layer protection (see the sketch below)
- Consider real-world exploitability
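As a sketch of what database-layer protection can look like, assuming PostgreSQL via the `pg` package (the `coupons` table and its columns are invented for illustration): a conditional UPDATE folds check and write into one atomic statement, so it stays safe even when an in-memory lock does not span instances.

```js
const { Pool } = require('pg'); // node-postgres
const pool = new Pool();

// Racy pattern: read, check in JS, then write; another app instance can
// interleave between the two statements. Atomic alternative below: the
// WHERE clause performs the check inside the database itself.
async function redeemCoupon(couponId, uid) {
    const result = await pool.query(
        `UPDATE coupons
            SET redeemed_by = $2, redeemed_at = now()
          WHERE id = $1 AND redeemed_by IS NULL`,
        [couponId, uid]
    );
    return result.rowCount === 1; // false: another request won the race
}
```

If the code under review relies on such a statement (or on a unique constraint or transaction), an in-memory lock's limited scope is not, by itself, an exploitable race.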
Rate-limiting vulnerabilities must check:
- [ ] Whether another layer enforces rate limits (e.g., Nginx/WAF; see the sketch below)
- [ ] Whether the feature is enabled by default
- [ ] The actual impact once the limit is bypassed
- [ ] Whether other mitigations exist

Misjudgment patterns:
- Only noting that the code layer lacks rate limiting → the infrastructure layer may provide it
- Not checking whether the feature is even enabled → wrong
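It helps to know what each layer's limiter looks like so you can search for it before concluding one is absent. A code-layer sketch using the real `express-rate-limit` package (the `/login` route, handler, and numbers are illustrative); the infrastructure-layer equivalent would be an Nginx `limit_req` directive or a WAF rule:

```js
const { rateLimit } = require('express-rate-limit');

// Application-layer limiter. Its absence in the code proves nothing if a
// reverse proxy or WAF in front of the app enforces the same policy.
const loginLimiter = rateLimit({
    windowMs: 15 * 60 * 1000, // 15-minute window
    max: 10,                  // at most 10 requests per IP per window
});
router.post('/login', loginLimiter, controllers.login);
```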
Information-disclosure vulnerabilities must check:
- [ ] How sensitive the leaked information is
- [ ] Whether authentication is required to access it (probe sketch below)
- [ ] Whether the information is public anyway
- [ ] Its actual value to an attacker

Misjudgment patterns:
- Treating public information as sensitive → wrong
- Not assessing the actual impact → wrong
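A quick way to test the second item is to request the endpoint without credentials and inspect the response. A sketch using Node's built-in `fetch` (Node 18+; the URL is a placeholder, not a real endpoint):

```js
(async () => {
    // Probe: does the endpoint return the data without authentication?
    const res = await fetch('https://target.example/api/v3/users/1/emails');
    console.log(res.status); // 401/403: auth enforced; 200: inspect what leaked
    if (res.ok) console.log(await res.text());
})();
```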
Code location named in the vulnerability report
↓
Confirm the code actually exists
↓
Understand the code logic

Trace upward from the vulnerable point:
↓
Who calls this function?
↓
What checks run before the call?
↓
Keep tracing until the request entry point (a grep helper is sketched below)
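Upward tracing is mostly search work. A throwaway Node helper like the one below (purely illustrative tooling; `grep -rn` or ripgrep from a shell does the same) lists candidate call sites to walk through one by one:

```js
const { execFileSync } = require('child_process');

// List candidate call sites of a function across a repository.
function findCallers(fnName, repoDir) {
    try {
        const out = execFileSync(
            'grep',
            ['-rn', '--include=*.js', `${fnName}(`, repoDir],
            { encoding: 'utf8' }
        );
        return out.split('\n').filter(Boolean);
    } catch {
        return []; // grep exits non-zero when there are no matches
    }
}

findCallers('canEditQueue', './src').forEach(line => console.log(line));
```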
Trace downward from the vulnerable point:
↓
Where does the data flow?
↓
What validation happens afterwards?
↓
What is the final impact?
Verdict checklist:
- [ ] The complete call chain has been traced
- [ ] All defense layers have been checked
- [ ] The actual deployment scenario has been considered
- [ ] Exploitability has been assessed

Only when every item passes may the finding be marked CONFIRMED.
Every L3 analysis report must contain:

## Call-chain analysis
### Request entry
- Route: `POST /api/v3/xxx`
- Middleware: [list all middleware]
### Controller layer
- Function: `controllers.xxx`
- Permission check: [present/absent; name the function]
### Service layer
- Function: `Service.xxx`
- Business validation: [present/absent; describe the logic]
### Data layer
- Operation: `db.xxx`
- Access control: [present/absent]

## Defense-layer summary
| Layer | Checkpoint | Status |
|------|--------|------|
| Route | ensureLoggedIn | ❌ missing |
| Controller | canEditQueue | ✅ present |
| Service | permission validation | ✅ present |
| Data | - | - |

## Verdict
Based on the complete call-chain analysis: although the route layer lacks middleware, the controller layer performs a complete permission check, so the vulnerability is **not confirmed**.
Summary of misjudgment patterns:

| Misjudgment | Symptom | Correct approach |
|---|---|---|
| Single-point analysis | Looks at one code location only | Trace the complete call chain |
| Ignoring layered defenses | Checks the route layer only | Check every defense layer |
| Ignoring the deployment scenario | Assumes the worst case | Evaluate the typical deployment |
| Over-inference | "The code might have a problem" | Verify the actual behavior |