From cost-analyst
Investigates sudden cost increases, spikes, or anomalies by comparing recent spending to baseline periods and identifying which services, accounts, or resources are responsible for unexpected changes in cloud spending
npx claudepluginhub cloudzero/cloudzero-claude-marketplace --plugin cost-analyst
This skill uses the workspace's default tool permissions.
This skill helps identify and explain sudden increases in cloud costs by comparing recent spending patterns to historical baselines and pinpointing the specific resources, services, or dimensions responsible for the spike.
This skill builds on the understand-cloudzero-organization skill.
Before applying this procedure:
Query total costs for both periods:
# Recent period (where spike occurred)
get_cost_data(
date_range="2024-01-15 to 2024-01-21",
granularity=None,
cost_type="real_cost"
)
# Baseline period (for comparison)
get_cost_data(
date_range="2024-01-08 to 2024-01-14",
granularity=None,
cost_type="real_cost"
)
Calculate:
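The period-over-period comparison reduces to simple arithmetic on the two totals. A minimal sketch (the numbers and the returned dict shape are illustrative assumptions, not the actual `get_cost_data` output format):

```python
def spike_summary(recent_total, baseline_total):
    """Compute absolute and percentage change between two period totals."""
    delta = recent_total - baseline_total
    pct = (delta / baseline_total * 100) if baseline_total else float("inf")
    return {"delta": round(delta, 2), "pct_change": round(pct, 1)}

# Example: baseline week cost $10,000, spike week cost $14,500
print(spike_summary(14500.0, 10000.0))  # {'delta': 4500.0, 'pct_change': 45.0}
```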
Systematically check common cost drivers to identify where the spike originated:
Check by Cloud Provider:
get_cost_data(
group_by=["CZ:CloudProvider"],
date_range="<spike_period>",
limit=10
)
Check by Service:
get_cost_data(
group_by=["CZ:Service"],
date_range="<spike_period>",
limit=20
)
Check by Account:
get_cost_data(
group_by=["CZ:Account"],
date_range="<spike_period>",
limit=20
)
Compare results from spike period vs. baseline period to identify:
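One way to rank which dimension values moved most, assuming each period's grouped results can be reduced to a `{dimension_value: cost}` mapping (that shape, and the sample service costs, are assumptions for illustration, not the tool's actual return format):

```python
def rank_cost_changes(spike, baseline):
    """Sort dimension values by largest absolute cost increase.

    `spike` and `baseline` each map a dimension value to its total
    cost for that period; values missing from a period count as 0.
    """
    keys = set(spike) | set(baseline)
    changes = {k: spike.get(k, 0.0) - baseline.get(k, 0.0) for k in keys}
    return sorted(changes.items(), key=lambda kv: kv[1], reverse=True)

spike = {"Amazon EC2": 9000.0, "Amazon S3": 1200.0, "AWS Lambda": 300.0}
baseline = {"Amazon EC2": 4000.0, "Amazon S3": 1100.0}
print(rank_cost_changes(spike, baseline))
# [('Amazon EC2', 5000.0), ('AWS Lambda', 300.0), ('Amazon S3', 100.0)]
```

The top entry is the dimension value to drill into next.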
Once you identify the primary dimension responsible, drill deeper:
# Example: If EC2 showed the spike, break down by account and region
get_cost_data(
group_by=["CZ:Account", "CZ:Region"],
filters={"CZ:Service": ["Amazon Elastic Compute Cloud - Compute"]},
date_range="<spike_period>",
limit=50
)
Show how costs evolved during the spike period:
get_cost_data(
group_by=["CZ:Service"],
granularity="daily",
date_range="<extended_period_including_spike>",
limit=5
)
This reveals:
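Given the daily series from the query above, the spike onset can be spotted by scanning for the first day whose cost jumps sharply over the previous day. A sketch (the 50% threshold and the sample series are illustrative assumptions):

```python
def find_spike_onset(daily_costs, threshold_pct=50.0):
    """Return the index of the first day whose cost rises more than
    threshold_pct over the previous day, or None if no such jump exists."""
    for i in range(1, len(daily_costs)):
        prev = daily_costs[i - 1]
        if prev > 0 and (daily_costs[i] - prev) / prev * 100 > threshold_pct:
            return i
    return None

# Illustrative series: steady ~$100/day, then a jump on the fourth day
costs = [100.0, 105.0, 98.0, 250.0, 260.0]
print(find_spike_onset(costs))  # 3
```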
Identify if new resources were provisioned:
# Compare dimension values between periods
get_dimension_values(
dimension="CZ:Resource",
date_range="<spike_period>"
)
get_dimension_values(
dimension="CZ:Resource",
date_range="<baseline_period>"
)
Use organization-specific dimensions to attribute costs:
# Discover available custom dimensions
get_available_dimensions(filter="User:Defined")
# Query by relevant custom dimensions
get_cost_data(
group_by=["User:Defined:Team", "CZ:Service"],
date_range="<spike_period>",
limit=30
)
Check tag-based attribution:
get_available_dimensions(filter="Tag")
get_cost_data(
group_by=["CZ:Tag:Environment", "CZ:Service"],
date_range="<spike_period>",
limit=30
)
Provide a clear, structured investigation report:
For general cost analysis best practices, see ${CLAUDE_PLUGIN_ROOT}/references/best-practices.md
${CLAUDE_PLUGIN_ROOT}/references/best-practices.md - Universal cost analysis best practices
${CLAUDE_PLUGIN_ROOT}/references/cloudzero-tools-reference.md - Complete tool documentation
${CLAUDE_PLUGIN_ROOT}/references/error-handling.md - Troubleshooting and common errors
${CLAUDE_PLUGIN_ROOT}/references/dimensions-reference.md - Dimension types and FQDIDs
${CLAUDE_PLUGIN_ROOT}/references/cost-types-reference.md - When to use each cost type