# datadog-log-analyst
Use this skill when the user asks to check, analyse, or query Datadog logs for a specific client instance or environment — e.g. "check RH99", "any errors in QR99 last hour", "look at QR01 warnings", "what's happening in RH98", or any request that includes a client name or environment code alongside a log-related question. Also triggers on: "check the Datadog logs", "analyse Prismatic logs", "send me a log summary via Slack", "any monitors alerting", "what events happened". Understands natural language — the client code can be embedded anywhere in the message. Also use this skill when the user asks about Datadog monitors, events, or metrics related to Prismatic or client environments.
Install:

```
npx claudepluginhub p3nj/p3nj-market --plugin datadog-log-analyst
```

This skill uses the workspace's default tool permissions.
This skill orchestrates the full log analysis pipeline. It parses the user request, resolves the client instance, then delegates to sub-skills for fetching, analysis, and reporting.
The plugin bundles its own Datadog MCP server with these tools:
| Tool | Purpose |
|---|---|
| datadog_query_logs | Search logs with full attribute passthrough. Supports cursor-based pagination for millions of logs. |
| datadog_aggregate_logs | Count logs grouped by any facet (status, service, flow, step). Fast — no individual logs returned. |
| datadog_list_log_facets | Sample logs to discover available attributes. |
These tools use @-prefixed Datadog query syntax (e.g. @service:Prismatic).
Always pass response_format: "json".
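To make the syntax concrete, here is a sketch of the arguments for a datadog_aggregate_logs call that counts one instance's logs by severity (the instanceId value abc123 is hypothetical; the parameter names follow the table and notes above):

```json
{
  "query": "@service:Prismatic @instanceId:abc123",
  "from_time": "now-1h",
  "to_time": "now",
  "group_by": "status",
  "response_format": "json"
}
```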
Every Prismatic log message is prefixed with a tag identifying its source. Use this table when categorising and interpreting log entries:
| Tag | Source | Level | What it means |
|---|---|---|---|
| FLOW | Coded Flows (all) | Info/Error/Debug/Warning | General flow execution steps and errors |
| Http-Error | Common client.ts | Error | HTTP request failures |
| OBZ-ErrorHandler | Common functions.ts | Error | No error loggers configured / failed to log error |
| OBZ-Entity | Common functions.ts | Error | Entity error record logging failures |
| OBZ-LogEntityError | Common functions.ts | Error | Entity error logging failure |
| INVOKE_ERROR | Common invokeWithErrorHandling.ts | Error | Flow invocation failure |
| DATA-Map | Common shared.ts / actions.ts | Error/Debug | Data mapping/transform failures |
| Validation | Common dataConverter.ts | Warning | Data validation issues |
| SAP-WARN | SAP Connector errorHandling.ts | Warning | SAP returned error messages (code "E") |
| SAP-Error | SAP Connector errorHandling.ts | Debug/Warning | Failed to extract SAP messages |
| SAP-DATA | SAP Connector sapUtilities.ts | Debug | Raw SAP response data |
| SAP-Filter | SAP Connector utilities.ts | Debug | SAP OData filter string |
Entity errors are always secondary: OBZ-Entity and OBZ-LogEntityError are never
the root cause. They correspond to an earlier error in the same execution. Group them
with their parent error, never count independently.
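When counting root causes, the secondary tags can also be excluded at query time. A sketch using Datadog term exclusion (whether the quoted phrase matches depends on how the tag prefix appears in the log message):

```
@service:Prismatic status:error -"OBZ-Entity" -"OBZ-LogEntityError"
```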
Only 4 log levels exist: error, warn, info, debug. There is no critical or emergency.
```
Step 1 ──→ Step 1.5 ──→ dd-fetch ──→ dd-analyse-core ──→ dd-analyse-sap ──→ dd-report
Parse      Resolve      Fetch &      Build analysis      SAP-specific       Format &
request    instanceId   accumulate   object from         extension          deliver
                        (Phase 1+2)  summaries           (SAP only)
```
Each sub-skill reads its own SKILL.md when invoked. The orchestrator controls which sub-skills run and in what order.
Extract from the user message:
- CLIENT_CODE — the instance/environment code (e.g. RH98, QR99, EM01, RH99). Ask with AskUserQuestion if missing.
- Time range — convert to from_time / to_time:
  - "last 2 hours" → from_time="now-2h"
  - "last 30 minutes" → from_time="now-30m"
  - Absolute dates → ISO 8601 timestamps (e.g. 2026-03-28T14:00:00Z for midnight 29 Mar AEST)
  - "last day" → from_time="now-24h"
  - Default when no range is given: from_time="now-1h", to_time="now"
- Filters (translate to Datadog query syntax):
  - Flow names, e.g. "Time Confirmation", "Assignments"
  - Severity: status:error or status:(error OR warn)
  - Error types, e.g. Http-Error, timeout

Next comes Step 1.5: resolving the instanceId. This step is a mandatory gate. It must complete before any fetching begins. It's a single quick call — not a bottleneck.
```
Tool: datadog_query_logs
  query: "@service:Prismatic <CLIENT_CODE>"
  from_time: <resolved>
  to_time: <resolved>
  limit: 10
  response_format: "json"
```
Wait for the response. Inspect each log entry:

- Extract the instanceId from the log attributes (it's a top-level field in the flattened response since the MCP server preserves all nested attributes).
- Determine INTEGRATION_TYPE from the instance or integration attribute:
  - SAP → SAP
  - AMT → AMT
  - Maximo → Maximo
- Record both values.

Build INSTANCE_FILTER:

- instanceId found → @service:Prismatic @instanceId:<instanceId>
- instanceId not found → fall back to @service:Prismatic <CLIENT_CODE> — flag in summary

Then route based on the user's intent:
For general health checks ("check RH99", "how's QR01 looking?", "give me a summary"):
- dd-fetch/SKILL.md → run the full fetch pipeline (Phase 1 + Phase 2 + volume counts)
- dd-analyse-core/SKILL.md → build the analysis object from accumulated summaries
- dd-analyse-sap/SKILL.md → extend with SAP fields
- dd-report/SKILL.md → format and deliver

For targeted questions ("did QR01 have failed Time Confirmation?"):

- One datadog_query_logs call with filters built from instanceId

For volume/distribution questions ("how many errors vs warnings?"):

- datadog_aggregate_logs with group_by="status" — fast, no individual logs

For monitor status ("any monitors alerting?"):

- mcp__datadog__datadog_list_monitors if available

For discovery ("what attributes do these logs have?"):

- datadog_list_log_facets to sample and inspect

When reading a sub-skill, pass these variables (carry them through the pipeline):
| Variable | Value |
|---|---|
| CLIENT_CODE | From Step 1 |
| INSTANCE_FILTER | From Step 1.5 |
| INTEGRATION_TYPE | From Step 1.5 (SAP, AMT, Maximo, generic) |
| from_time | Resolved time range start |
| to_time | Resolved time range end |
| DELIVERY_INTENT | Where to send results (chat, Slack, Notion, email, docx) |
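Carried through the pipeline, the variable bundle for a health check might look like this (values illustrative; the instanceId abc123 is hypothetical):

```json
{
  "CLIENT_CODE": "RH99",
  "INSTANCE_FILTER": "@service:Prismatic @instanceId:abc123",
  "INTEGRATION_TYPE": "SAP",
  "from_time": "now-2h",
  "to_time": "now",
  "DELIVERY_INTENT": "Slack"
}
```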