This skill should be used when the user wants to create a Datadog monitoring dashboard as Terraform HCL. Handles metric discovery via Datadog MCP, documentation, or integrations-core CSV, widget layout design with grouped widgets, template variables, team assignment, and terraform fmt validation. Activates on: "create dashboard", "datadog dashboard", "new dashboard", "add dashboard", "dashboard for", "monitor dashboard", "build monitoring dashboard", "generate dashboard HCL", "Terraform dashboard".
`npx claudepluginhub fabn/claude-plugins --plugin datadog`

This skill uses the workspace's default tool permissions.
Create production-quality Datadog dashboards as Terraform HCL. This skill discovers metrics, designs grouped widget layouts, and generates formatted Terraform code ready to plan and apply.
Scope: This skill handles dashboard creation only. It does not manage Datadog monitors, SLOs, or synthetic tests — those are separate Terraform resources.
Reference files: Consult reference/widget-patterns.md for HCL code patterns and reference/metric-discovery.md for the full metric research workflow.
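The end product is a single `datadog_dashboard` resource. A minimal sketch of its overall shape (the title, team handle, and variable defaults below are placeholders, not prescribed by the skill):

```hcl
resource "datadog_dashboard" "service_overview" {
  title       = "Service Overview" # placeholder title
  layout_type = "ordered"
  reflow_type = "fixed"
  tags        = ["team:platform"] # team handle is an assumption

  # Filters surfaced at the top of the dashboard; queries reference $env.
  template_variable {
    name     = "env"
    prefix   = "env"
    defaults = ["production"]
  }

  # All widgets live inside group_definition blocks (see design guidelines).
  widget {
    group_definition {
      title            = "Overview KPIs"
      show_title       = true
      layout_type      = "ordered"
      background_color = "vivid_blue"
      # ... query_value / timeseries widgets go here ...
    }
  }
}
```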
Parse the user's request to determine:
If the request is vague, ask:
"What specific aspects of [topic] do you want to monitor? For example: throughput, errors, latency, resource usage, queue depth?"
Datadog dashboards only support `team:xxx` tags. Assign the team directly on the dashboard via `tags = ["team:<handle>"]` (not via `datadog_team` resources).

Run all applicable discovery strategies in parallel for comprehensive coverage. See reference/metric-discovery.md for full details.
Strategy A - Datadog MCP (preferred):
ToolSearch("datadog")
Use the available tools to list metrics matching the integration prefix, and inspect their tags and units.
Strategy B - Datadog Documentation:
WebFetch: https://docs.datadoghq.com/integrations/<integration>/?tab=host
Extract the "Data Collected" > "Metrics" section.
Strategy C - integrations-core CSV:
WebFetch: https://raw.githubusercontent.com/DataDog/integrations-core/master/<integration>/metadata.csv
Parse CSV for metric_name, unit_name, per_unit_name, description, orientation.
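If the project already runs Terraform, Strategy C can also be reproduced in HCL for quick inspection. A sketch using the hashicorp/http data source and `csvdecode()`; the `redisdb` integration is only an example:

```hcl
# Illustration only: fetch an integration's metadata.csv and decode it.
data "http" "integration_metadata" {
  url = "https://raw.githubusercontent.com/DataDog/integrations-core/master/redisdb/metadata.csv"
}

locals {
  # Each row exposes metric_name, unit_name, per_unit_name, description, ...
  metrics = [
    for row in csvdecode(data.http.integration_metadata.response_body) : {
      name = row.metric_name
      unit = row.unit_name
    }
  ]
}

output "discovered_metrics" {
  value = local.metrics
}
```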
After discovery, present the findings to the user.
Design dashboard filters based on discovered metric tags:
- `env` with a production-like default (check project conventions: prod or production)
- `kube_namespace`, `kube_cluster_name`
- `service`
- integration-specific tags (e.g. `queue` for Sidekiq, `db` for Redis)

Organize widgets following the design guidelines below:
Use `group_definition` blocks with colored backgrounds. Example layout:

Group: "Overview KPIs" (vivid_blue)
- [query_value 3x2] Total Requests
- [query_value 3x2] Error Rate
- [query_value 3x2] Avg Latency
Group: "Traffic" (vivid_green)
- [timeseries 12x4] Requests Over Time
Group: "Errors" (vivid_pink)
- [timeseries 8x4] Errors by Type
- [sunburst 4x4] Error Breakdown
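The sketch above maps to HCL roughly as follows. Metric names and the threshold are illustrative assumptions; see reference/widget-patterns.md for the authoritative patterns:

```hcl
widget {
  group_definition {
    title            = "Overview KPIs"
    show_title       = true
    layout_type      = "ordered"
    background_color = "vivid_blue"

    widget {
      query_value_definition {
        title     = "Error Rate"
        autoscale = true
        precision = 2

        request {
          formula {
            formula_expression = "(errors / total) * 100"
            # query_value widgets MUST carry conditional_formats.
            conditional_formats {
              comparator = ">"
              value      = 5
              palette    = "white_on_red"
            }
          }
          query {
            metric_query {
              name  = "errors"
              query = "sum:myapp.requests.errors{$env}.as_count()" # hypothetical metric
            }
          }
          query {
            metric_query {
              name  = "total"
              query = "sum:myapp.requests.total{$env}.as_count()" # hypothetical metric
            }
          }
        }
      }
      widget_layout {
        x      = 0
        y      = 0
        width  = 3
        height = 2
      }
    }
  }
}
```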
Generate the HCL following reference/widget-patterns.md:

- Output `dashboard_id` and `dashboard_url`
- Run `terraform fmt` on generated files (or use `/terraform-plan datadog`)
- Declare `template_variable` blocks for the filters designed above
- `layout_type = "ordered"`, `reflow_type = "fixed"` (always)
- ALL widgets MUST be organized in `group_definition` blocks:
Each group needs `show_title = true`, a descriptive title, and a `background_color` from the semantic palette:

| Color | Use For |
|---|---|
| `vivid_blue` | Overview, KPIs, general information |
| `vivid_green` | Health, success, availability |
| `vivid_orange` | Warnings, degradation |
| `vivid_pink` | Errors, failures |
| `vivid_purple` | Performance, latency |
| `vivid_yellow` | Alerts, attention items |
| `gray` | Logs, events, auxiliary data |
| `white` | Neutral, documentation notes |
| Metric Category | Widget Type | Key Config |
|---|---|---|
| Single KPI value | query_value | conditional_formats (MUST), timeseries_background, autoscale |
| Trend over time | timeseries | show_legend = true (MUST for multi-series), legend_columns, formula alias |
| Category breakdown | sunburst | legend_table, grouped by tags |
| Top N ranking | toplist | conditional_formats, limit |
| Tabular data | query_table | cell_display_mode, multiple formulas |
| Distribution | heatmap | For latency, size distributions |
| Period comparison | change | week_before(), increase_good |
| Log stream | list_stream | data_source = "logs_stream", columns |
| Event timeline | event_timeline | tags_execution = "and" |
| Health check | check_status | grouping = "cluster" |
| Monitor overview | manage_status | display_format = "countsAndList" |
| SLO summary | slo_list | request_type = "slo_list" |
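As one example from the table above, a multi-series timeseries with the required legend settings; the metric and tag names are assumptions:

```hcl
widget {
  timeseries_definition {
    title          = "Requests Over Time"
    show_legend    = true # required for multi-series widgets
    legend_layout  = "auto"
    legend_columns = ["avg", "max", "value"]

    request {
      display_type = "line"
      formula {
        formula_expression = "query1"
        alias              = "req/s" # human-readable series alias
      }
      query {
        metric_query {
          name  = "query1"
          query = "sum:myapp.requests.total{$env} by {service}.as_rate()" # hypothetical metric
        }
      }
      style {
        palette = "dog_classic"
      }
    }
  }
}
```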
| Widget Type | Size (WxH) | Notes |
|---|---|---|
| query_value | 2x2, 3x2, 4x2 | KPI row at top of groups |
| timeseries | 6x3, 6x4, 12x4 | Half or full width |
| sunburst | 3x3, 4x4 | Alongside timeseries |
| query_table | 6x3, 12x4 | Tabular breakdowns |
| toplist | 3x3, 4x4 | Ranking panels |
| change | 2x4, 2x5 | Narrow vertical column |
| list_stream | 5x3, 6x4 | Log panels |
| check_status | 3x3 | Health indicators |
See reference/widget-patterns.md for detailed configuration patterns including conditional formats, timeseries backgrounds, legend configuration, display types, and style palettes.
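With `reflow_type = "fixed"`, each widget carries an explicit `widget_layout` on the 12-column grid. A sketch of a 3x2 KPI slot above a full-width 12x4 panel (note widgets stand in for real definitions):

```hcl
widget {
  note_definition {
    content = "3x2 KPI slot"
  }
  widget_layout {
    x      = 0
    y      = 0
    width  = 3
    height = 2
  }
}

widget {
  note_definition {
    content = "12x4 full-width timeseries slot"
  }
  widget_layout {
    x      = 0
    y      = 2
    width  = 12
    height = 4
  }
}
```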
ALL metric widgets MUST display correct units. Derive from (in priority order):
1. The `metadata.csv` `unit_name` column
2. Metric name suffixes (`.seconds` -> second, `.bytes` -> byte, `.count` -> autoscale)

Template variables:

- `env` filter with a production default
- `prefix` matching the tag key
- `defaults = ["*"]` for optional filters, a specific value for required ones
- Reference variables in queries as `$variable_name`

When a dashboard exceeds ~20 widgets or covers too many unrelated concerns:

- Split it into multiple dashboards and use `datadog_dashboard_list` to group related dashboards
- Apply `team:xxx` tags to each: `tags = ["team:<handle>"]`

Reference files:

- reference/widget-patterns.md - Complete HCL code patterns for every widget type with inline comments
- reference/metric-discovery.md - Detailed metric research workflow using MCP, docs, and CSV sources

| Situation | Action |
|---|---|
| Datadog MCP not available | Warn user, proceed with docs/CSV discovery only |
| No metrics found for integration | Ask user to provide metric names manually |
| Integration not in integrations-core | Try integrations-extras repo, then fall back to docs |
| Terraform not initialized | Suggest terraform init -backend=false in module directory |
| Dashboard >20 widgets | Warn and suggest splitting strategy |
| Template variable mismatch | Cross-check all $var references against template_variable blocks |
| `terraform fmt` fails | Fix formatting issues before presenting to user |