From lc-advanced-skills
Generates multi-tenant security and operational reports from LimaCharlie: billing summaries, usage roll-ups, detection trends, sensor health monitoring, configuration audits across organizations.
Install: npx claudepluginhub refractionpoint/lc-ai --plugin lc-advanced-skills
---
Prerequisites: Run /init-lc to initialize LimaCharlie context.
All LimaCharlie operations use the limacharlie CLI directly:
limacharlie <noun> <verb> --oid <oid> --output yaml [flags]
For command help and discovery: limacharlie <command> --ai-help
| Rule | Wrong | Right |
|---|---|---|
| CLI Access | Call MCP tools or spawn api-executor | Use Bash("limacharlie ...") directly |
| Output Format | --output json | --output yaml (more token-efficient) |
| Filter Output | Pipe to jq/yq | Use --filter JMESPATH to select fields |
| LCQL Queries | Write query syntax manually | Use limacharlie ai generate-query first |
| Timestamps | Calculate epoch values | Use date +%s or date -d '7 days ago' +%s |
| OID | Use org name | Use UUID (call limacharlie org list if needed) |
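When driving these CLI calls from code, a thin wrapper keeps the command shape consistent with the rules above. A minimal Python sketch (the `lc_argv` and `run_lc` helpers are hypothetical names; it assumes the `limacharlie` CLI is on PATH):

```python
import subprocess

def lc_argv(noun, verb, oid, *, output="yaml", filter_expr=None, extra=()):
    """Build argv for: limacharlie <noun> <verb> --oid <oid> --output yaml [flags]."""
    argv = ["limacharlie", noun, verb, "--oid", oid, "--output", output]
    if filter_expr:
        argv += ["--filter", filter_expr]  # JMESPath expression to trim output
    argv += list(extra)
    return argv

def run_lc(argv, timeout=120):
    """Run the CLI call; returns stdout text, raises on non-zero exit."""
    return subprocess.run(argv, capture_output=True, text=True,
                          check=True, timeout=timeout).stdout
```

Building argv as a list (rather than a shell string) avoids quoting issues with JMESPath filter expressions.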
This skill enables AI-assisted generation of comprehensive security and operational reports across LimaCharlie organizations. It provides structured access to billing data, usage statistics, detection summaries, sensor health, and configuration audits. Supports both per-tenant detailed reports and cross-tenant aggregated roll-ups.
Core Philosophy: Accuracy over completeness. This skill prioritizes data accuracy with strict guardrails that make fabricated metrics impossible. Reports clearly document what data is available, what failed, and what limits were applied.
Use this skill when you need to:
This skill supports structured JSON templates that define input schemas, output schemas, and data sources for each report type. Templates are located in skills/reporting/templates/.
| Template | Description | Scope |
|---|---|---|
| billing-report.json | Invoice-focused billing data with SKU breakdown | single / all |
| mssp-executive-report.json | High-level fleet health for MSSP leadership | single / all |
| customer-health-report.json | Comprehensive customer success tracking | single / all |
| detection-analytics-report.json | Detection volume, categories, and trends | single / all |
Each template defines:
Templates ensure consistency across reports and enable:
Ensure you are authenticated to LimaCharlie with access to target organizations:
⚠️ CRITICAL: Organization ID (OID) is a UUID (like c7e8f940-1234-5678-abcd-1234567890ab), NOT the organization name.
Use limacharlie org list to get the OID from an organization name.

⚠️ MANDATORY: Prompt User for Time Range
Before generating any report that requires detection or event data, you MUST ask the user to confirm or specify the time range using the AskUserQuestion tool:
AskUserQuestion(
questions=[{
"question": "What time range should I use for this report?",
"header": "Time Range",
"options": [
{"label": "Last 24 hours", "description": "Most recent day of data"},
{"label": "Last 7 days", "description": "Past week of activity"},
{"label": "Last 30 days", "description": "Past month of activity"},
{"label": "Custom range", "description": "I'll specify exact dates"}
],
"multiSelect": false
}]
)
If user selects "Custom range", follow up to get specific start/end dates.
Core Requirements:
⚠️ CRITICAL: Dynamic Timestamp Calculation
NEVER use hardcoded epoch values from examples or documentation!
ALWAYS calculate timestamps dynamically using bash before making API calls:
# Get current Unix timestamp
NOW=$(date +%s)
# Calculate relative time ranges
HOURS_24_AGO=$((NOW - 86400)) # 24 hours = 86400 seconds
DAYS_7_AGO=$((NOW - 604800)) # 7 days = 604800 seconds
DAYS_30_AGO=$((NOW - 2592000)) # 30 days = 2592000 seconds
DAYS_90_AGO=$((NOW - 7776000)) # 90 days = 7776000 seconds
# For specific date ranges (user-provided)
START=$(date -d "2025-11-01 00:00:00 UTC" +%s)
END=$(date -d "2025-11-30 23:59:59 UTC" +%s)
# Display human-readable for confirmation
echo "Time range: $(date -d @$START) to $(date -d @$END)"
Why This Matters:
The detections API (get_historic_detections) uses Unix epoch timestamps in SECONDS.

Validation Before API Call:
# Verify timestamps are reasonable
if [ $START -gt $END ]; then
echo "ERROR: Start time is after end time"
exit 1
fi
if [ $END -gt $NOW ]; then
echo "WARNING: End time is in the future, using current time"
END=$NOW
fi
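The same calculate-then-validate flow can be wrapped in one Python helper (a sketch; `resolve_window` is a hypothetical name):

```python
import time

def resolve_window(days=None, start=None, end=None, now=None):
    """Return a (start, end) epoch-seconds window, computed dynamically.

    Rejects inverted ranges and clamps a future end to the current time,
    mirroring the bash validation above.
    """
    now = int(now if now is not None else time.time())
    if days is not None:
        start, end = now - days * 86400, now
    if start is None or end is None:
        raise ValueError("provide days, or both start and end")
    start, end = int(start), int(end)
    if start > end:
        raise ValueError("start time is after end time")
    if end > now:
        end = now  # end time was in the future; use current time
    return start, end
```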
Absolute Rules:
Always:
Default Limit: 5,000 detections per organization
Required Workflow:
1. Query with limit=5000
2. Track retrieved_count
3. Check: limit_reached = (retrieved_count >= 5000)
4. If limit_reached:
⚠️ DISPLAY PROMINENT WARNING
"DETECTION LIMIT REACHED
Retrieved: 5,000 detections
Actual count: May be significantly higher
For complete data:
- Narrow time range
- Query specific date ranges
- Filter by category or sensor"
Never Say:
Always Say:
Absolute Rule: ZERO Cost Calculations
What You CAN Show:
What You CANNOT Do:
Even if user provides rates:
MANDATORY in Every Report:
Header (always visible):
Generated: 2025-11-20 14:45:30 UTC
Time Window: 2025-11-01 00:00:00 UTC to 2025-11-30 23:59:59 UTC (30 days)
Organizations: 45 of 50 processed successfully
Per Section:
── Usage Statistics ──
Data Retrieved: 2025-11-20 14:45:35 UTC
Coverage Period: Nov 1-30, 2025 (30 days)
Source: get-usage-stats API
Data Freshness: Daily updates (24hr delay typical)
Partial Reports Are Acceptable:
Error Documentation Template:
⚠️ FAILED ORGANIZATIONS (3 of 50)
Client ABC (oid: c7e8f940-...)
Status: ❌ Failed
Error: 403 Forbidden
Endpoint: get-billing-details
Reason: Insufficient permissions
Impact: Billing data unavailable
Action: Grant billing:read permission
Timestamp: 2025-11-20 14:32:15 UTC
CLI: limacharlie org list
Response Structure:
{
"orgs": [
{
"oid": "c7e8f940-1234-5678-abcd-1234567890ab",
"name": "Client ABC Production",
"role": "owner"
},
{
"oid": "c7e8f940-5678-1234-dcba-0987654321ab",
"name": "Client XYZ Security",
"role": "admin"
}
],
"total": 2
}
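This response can be checked in code before any further calls. A sketch of that check (`validate_org_list` is a hypothetical helper; field names follow the example above):

```python
import uuid

def validate_org_list(resp):
    """Check org-list output: non-empty orgs array, each oid a real UUID.

    Returns a {name: oid} map; raises ValueError on bad structure so the
    report stops at discovery instead of failing later with a bad OID.
    """
    orgs = resp.get("orgs") or []
    if not orgs:
        raise ValueError("orgs array missing or empty")
    result = {}
    for org in orgs:
        oid = org.get("oid", "")
        try:
            uuid.UUID(oid)  # OIDs are UUIDs, never org names
        except ValueError:
            raise ValueError(f"invalid OID (org name instead of UUID?): {oid!r}") from None
        result[org["name"]] = oid
    return result
```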
Validation:
- orgs array exists and is not empty

CLI: limacharlie org info
Response Structure:
{
"oid": "c7e8f940-...",
"name": "Client ABC",
"created": 1672531200,
"creator": "user@example.com"
}
CLI: limacharlie org stats
Response Structure:
{
"usage": {
"2025-11-06": {
"sensor_events": 131206,
"output_bytes_tx": 500123456,
"replay_num_evals": 435847,
"peak_sensors": 4
},
"2025-11-07": {
"sensor_events": 145821,
"output_bytes_tx": 523456789,
"replay_num_evals": 478932,
"peak_sensors": 4
}
}
}
Field Definitions:
- sensor_events: Total events ingested from sensors
- output_bytes_tx: Data transmitted to outputs (in bytes)
- replay_num_evals: D&R rule evaluations performed
- peak_sensors: Maximum concurrent sensors online

Critical Notes:
- output_bytes_tx is in BYTES - convert to GB: divide by 1,073,741,824

Aggregation Rules:
For time range Nov 1-30, 2025:
1. Filter usage dict to only dates in range
2. Sum daily values:
total_events = sum(usage[date]['sensor_events'] for date in range)
3. Document calculation:
"Total Events: 1,250,432,100
Calculation: Sum of daily sensor_events from Nov 1-30, 2025
Source: get-usage-stats"
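The aggregation rules above can be sketched as Python helpers (hypothetical function names; field names follow the `org stats` response shown earlier):

```python
def aggregate_usage(usage, start_date, end_date):
    """Sum daily usage metrics over an inclusive ISO-date range.

    `usage` is the date-keyed dict returned by `limacharlie org stats`.
    ISO dates sort lexicographically, so string comparison filters correctly.
    """
    days = {d: v for d, v in usage.items() if start_date <= d <= end_date}
    return {
        "sensor_events": sum(v.get("sensor_events", 0) for v in days.values()),
        "output_bytes_tx": sum(v.get("output_bytes_tx", 0) for v in days.values()),
        "replay_num_evals": sum(v.get("replay_num_evals", 0) for v in days.values()),
        "peak_sensors": max((v.get("peak_sensors", 0) for v in days.values()), default=0),
        "days_covered": len(days),
    }

def fmt_bytes(b):
    """Dual display: both GB (1 GB = 1,073,741,824 bytes) and raw bytes."""
    return f"{b / 1_073_741_824:,.1f} GB ({b:,} bytes)"
```

Note `peak_sensors` takes the maximum rather than the sum within one org, since it is a concurrent-sensor high-water mark.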
CLI: limacharlie billing details
Response Structure:
{
"plan": "enterprise",
"status": "active",
"billing_email": "billing@example.com",
"payment_method": "card",
"last_four": "4242",
"next_billing_date": 1672531200,
"auto_renew": true
}
Common Error: 403 Forbidden (insufficient permissions)
CLI: limacharlie billing invoice-url
Response Structure:
{
"url": "https://billing.limacharlie.io/invoice/..."
}
Usage in Reports:
For billing details and charges:
→ View Invoice: https://billing.limacharlie.io/invoice/...
CLI: limacharlie sensor list
Note: large responses may be returned as a resource_link when the payload exceeds 100KB.

Response Structure (normal):
{
"sensors": {
"sensor-id-1": {
"sid": "sensor-id-1",
"hostname": "SERVER01",
"plat": 268435456,
"arch": 1,
"enroll": "2024-01-15T10:30:00Z",
"alive": "2024-11-20 14:22:13",
"int_ip": "10.0.1.50",
"ext_ip": "203.0.113.45",
"oid": "c7e8f940-..."
}
},
"continuation_token": ""
}
Large Result Handling:
For large result sets, pipe CLI output to a file:
# Save large results to file
limacharlie sensor list --oid <oid> --output yaml > /tmp/sensors.yaml
# Or use --filter to extract needed fields directly
limacharlie sensor list --oid <oid> --filter "length(sensors)" --output yaml # Count
limacharlie sensor list --oid <oid> --filter "sensors.*.hostname" --output yaml # Hostnames
Field Validation - CRITICAL:
CORRECT Fields to Use:
- alive: "2025-11-20 14:22:13" (datetime string for last seen)
- plat: Platform code (int or string)
- hostname: Sensor hostname
- sid: Sensor ID
- int_ip: Internal IP address
- ext_ip: External IP address

INCORRECT Fields (Common Mistakes):
- last_seen: Often 0 or missing - DO NOT USE; use the alive field instead for offline detection

Offline Sensor Detection:
# Parse alive field (datetime string format: "YYYY-MM-DD HH:MM:SS")
from datetime import datetime, timezone
alive_str = sensor_info.get('alive', '')
if alive_str:
# Parse: "2025-10-01 17:08:10"
alive_dt = datetime.strptime(alive_str, '%Y-%m-%d %H:%M:%S')
alive_dt = alive_dt.replace(tzinfo=timezone.utc)
last_seen_timestamp = alive_dt.timestamp()
current_time = datetime.now(timezone.utc).timestamp()
hours_offline = (current_time - last_seen_timestamp) / 3600
# Categorize with explicit thresholds:
if hours_offline < 24:
category = "Recently offline (< 24 hours)"
elif hours_offline < 168: # 7 days
category = "Offline short term (1-7 days)"
elif hours_offline < 720: # 30 days
category = "Offline medium term (7-30 days)"
else:
category = "Offline long term (30+ days)"
Platform Code Translation:
Traditional OS platforms (strings):
Numeric platform codes (extensions/adapters):
Example:
Platform: LimaCharlie Extensions (code: 2415919104)
Sample hostnames: ext-strelka-01, ext-hayabusa-02, ext-secureannex-01
Sensor count: 30
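Platform translation can be implemented as a simple lookup with a fallback. A sketch (the code-to-name mapping is illustrative: only 2415919104 is labeled in this document; extend the table from your own fleet data):

```python
# Illustrative mapping only -- verify codes against your own fleet.
PLATFORM_NAMES = {
    2415919104: "LimaCharlie Extensions",
}

def platform_label(plat, hostnames=()):
    """Format a platform line with name (or Unknown), raw code, and sample hosts."""
    name = PLATFORM_NAMES.get(plat, "Unknown")
    line = f"Platform: {name} (code: {plat})"
    if hostnames:
        line += "\nSample hostnames: " + ", ".join(sorted(hostnames)[:3])
    return line
```

Keeping the raw code in the output preserves the audit trail even when a code is unmapped.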
CLI: limacharlie sensor list --online
Response Structure:
{
"sensors": [
"sensor-id-1",
"sensor-id-2",
"sensor-id-3"
]
}
Usage:
# Convert to set for O(1) lookup
online_sids = set(response['sensors'])
# Check if sensor is online
is_online = sensor_id in online_sids
# Calculate offline count
total_sensors = 2500
online_count = len(online_sids)
offline_count = total_sensors - online_count
CLI: limacharlie detection list
Query Parameters:
- start: Unix epoch timestamp (seconds)
- end: Unix epoch timestamp (seconds)
- limit: Maximum detections to retrieve (default: 5000)
- sid: Filter by sensor ID (optional)
- cat: Filter by category (optional)

Response Structure:
{
"detects": [
{
"detect_id": "detect-uuid-123",
"cat": "suspicious_process",
"source_rule": "general.encoded-powershell",
"namespace": "general",
"ts": 1732108934567,
"sid": "sensor-xyz-123",
"detect": {
"event": {
"TIMESTAMP": 1732108934567,
"COMMAND_LINE": "powershell.exe -encodedCommand ...",
"FILE_PATH": "C:\\Windows\\System32\\..."
},
"routing": {
"sid": "sensor-xyz-123",
"hostname": "SERVER01"
}
}
}
],
"next_cursor": ""
}
Field Validation - CRITICAL:
CORRECT Fields:
- source_rule: "namespace.rule-name" (actual rule identifier)
- cat: Category name
- ts: Timestamp (MAY be seconds or milliseconds - normalize!)
- sid: Sensor ID (may be "N/A" for some detections)
- detect_id: Unique detection identifier

INCORRECT Fields (Common Mistakes):
- rule_name: Doesn't exist - use source_rule instead
- severity: NOT in detection records (only in D&R rule config)

Timestamp Normalization (MANDATORY):
import time
from datetime import datetime, timezone

ts = detection.get('ts', 0)
# Check magnitude to determine units
if ts > 10000000000:
# Milliseconds - convert to seconds
ts = ts / 1000
# Sanity check result
if ts < 1577836800: # Before 2020-01-01
# Invalid timestamp
display = "Invalid timestamp"
elif ts > time.time() + 86400: # More than 1 day in future
# Invalid timestamp
display = "Invalid timestamp"
else:
# Valid - format for display
display = datetime.fromtimestamp(ts, tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')
Detection Limit Tracking (MANDATORY):
retrieved_count = len(detections)
limit_reached = (retrieved_count >= query_limit)
if limit_reached:
# MUST display prominent warning
warning = f"""
⚠️ DETECTION LIMIT REACHED
Retrieved: {retrieved_count:,} detections
Actual count: May be significantly higher
This organization has more detections than retrieved.
For complete data:
- Narrow time range (currently: {days} days)
- Query specific date ranges separately
- Filter by category or sensor
"""
CLI: limacharlie dr list
Response Structure:
{
"custom-rule-1": {
"name": "custom-rule-1",
"namespace": "general",
"detect": {
"event": "NEW_PROCESS",
"op": "contains",
"path": "event/COMMAND_LINE",
"value": "powershell"
},
"respond": [
{
"action": "report",
"name": "suspicious_powershell"
}
],
"is_enabled": true
}
}
Usage:
CLI: limacharlie output list
Usage in Reports:
This skill uses a parallel subagent architecture for efficient multi-tenant data collection:
┌─────────────────────────────────────────────────────────────┐
│ reporting (this skill) │
│ ├─ Phase 1: Discovery (list orgs via CLI) │
│ ├─ Phase 2: Time range validation │
│ ├─ Phase 3: Spawn parallel agents ────────────────────┐ │
│ ├─ Phase 4: Aggregate results │ │
│ └─ Phase 5: Generate report │ │
└────────────────────────────────────────────────────────┼────┘
│
┌────────────────────────────────────────────────────┘
│
│ Spawns ONE agent per organization (in parallel)
│
▼
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│org- │ │org- │ │org- │ │org- │
│reporter │ │reporter │ │reporter │ │reporter │
│ Org 1 │ │ Org 2 │ │ Org 3 │ │ Org N │
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘
│ │ │ │
│ Each agent collects ALL data for its org:
│ - org info, usage, billing, sensors,
│ - detections, rules, outputs
│ │ │ │
└──────────────┴──────┬───────┴──────────────┘
│
▼
Structured JSON results
returned to parent skill
Benefits of this architecture:
Default Output: Console (Formatted Markdown)
By default, ALL report data MUST be displayed directly in the console as formatted markdown tables and text. This includes:
Console Output Characteristics:
[GREEN], [YELLOW], [RED]HTML Output: Only When Explicitly Requested
HTML visualization should ONLY be generated when the user explicitly requests it using phrases like:
If user requests HTML output:
1. Use the html-renderer agent to create the HTML file
2. Save to /tmp/{report-name}-{date}.html

NEVER automatically generate HTML - console output is always the default.
Template Reference: templates/mssp-executive-report.json
User Request Examples:
Step-by-Step Execution:
┌─ PHASE 1: DISCOVERY ──────────────────────────────┐
│ 1. Use CLI to get org list: │
│ limacharlie org list --output yaml │
│ │
│ 2. Validation: │
│ ✓ Check orgs array exists and not empty │
│ ✓ Validate each OID is UUID format │
│ ✓ Count total organizations │
│ │
│ 3. User Confirmation (if >20 orgs): │
│ "Found 50 organizations. Generate report for │
│ all 50? This may take a few minutes." │
│ │
│ Options to present: │
│ - Yes, process all 50 │
│ - No, let me filter first │
│ - Show me the organization list │
└────────────────────────────────────────────────────┘
┌─ PHASE 2: TIME RANGE - ASK USER & CALCULATE ──────┐
│ │
│ 4. ⚠️ MANDATORY: Ask user for time range: │
│ │
│ Use AskUserQuestion tool: │
│ - "Last 24 hours" │
│ - "Last 7 days" │
│ - "Last 30 days" │
│ - "Custom range" │
│ │
│ If "Custom range", ask for specific dates. │
│ NEVER assume or default without asking! │
│ │
│ 5. ⚠️ CRITICAL: Calculate timestamps dynamically: │
│ │
│ ```bash │
│ NOW=$(date +%s) │
│ # Based on user selection: │
│ # 24h: START=$((NOW - 86400)) │
│ # 7d: START=$((NOW - 604800)) │
│ # 30d: START=$((NOW - 2592000)) │
│ END=$NOW │
│ ``` │
│ │
│ NEVER use hardcoded epoch values! │
│ Stale timestamps = NO DATA returned! │
│ │
│ 6. Validation Checks: │
│ ✓ Start timestamp < End timestamp │
│ ✓ End timestamp <= Current time │
│ ✓ Range is reasonable (<= 90 days) │
│ ✓ Timestamps are Unix epoch in SECONDS │
│ │
│ 7. Display for user confirmation: │
│ "Time Range: │
│ - Start: [calculated date] UTC │
│ - End: [calculated date] UTC │
│ - Duration: X days │
│ - Unix: [start_epoch] to [end_epoch]" │
└────────────────────────────────────────────────────┘
┌─ PHASE 3: SPAWN PARALLEL AGENTS ──────────────────┐
│ 7. Spawn org-reporter agents IN PARALLEL: │
│ │
│ CRITICAL: Send ALL Task calls in a SINGLE │
│ message to achieve true parallelism: │
│ │
│ Task( │
│ subagent_type="lc-essentials:org-reporter", │
│ prompt="Collect reporting data for org │
│ 'Client ABC' (OID: uuid-1) │
│ Time Range: │
│ - Start: 1730419200 │
│ - End: 1733011199 │
│ Detection Limit: 5000" │
│ ) │
│ Task( │
│ subagent_type="lc-essentials:org-reporter", │
│ prompt="Collect reporting data for org │
│ 'Client XYZ' (OID: uuid-2)..." │
│ ) │
│ ... (one Task per organization) │
│ │
│ 8. Each agent returns structured JSON: │
│ { │
│ "org_name": "...", │
│ "oid": "...", │
│ "status": "success|partial|failed", │
│ "data": { usage, billing, sensors, ... }, │
│ "errors": [...], │
│ "warnings": [...] │
│ } │
│ │
│ 9. Wait for all agents to complete │
└────────────────────────────────────────────────────┘
┌─ PHASE 4: AGGREGATE RESULTS ──────────────────────┐
│ 10. Categorize agent results: │
│ success_orgs = [] (status == "success") │
│ partial_orgs = [] (status == "partial") │
│ failed_orgs = [] (status == "failed") │
│ │
│ 11. Multi-Org Aggregation: │
│ Aggregate across SUCCESS + PARTIAL orgs: │
│ - Sum total_events (from usage) │
│ - Sum total_output_bytes (convert to GB) │
│ - Sum total_evaluations │
│ - Sum peak_sensors │
│ - Count total sensors │
│ - Count total detections (track limits) │
│ │
│ Document for each aggregate: │
│ - Formula used │
│ - Orgs included count │
│ - Orgs excluded (and why) │
│ - Time range covered │
└────────────────────────────────────────────────────┘
┌─ PHASE 5: REPORT GENERATION (CONSOLE OUTPUT) ─────┐
│ ⚠️ DEFAULT: Display ALL data in console as │
│ formatted markdown. HTML only if requested. │
│ │
│ 17. Console Report Structure: │
│ │
│ A. HEADER (mandatory metadata) │
│ ═══════════════════════════════════════ │
│ MSSP Comprehensive Report │
│ Generated: 2025-11-20 14:45:30 UTC │
│ Time Window: Nov 1-30, 2025 (30 days) │
│ Organizations: 45 of 50 successful │
│ ═══════════════════════════════════════ │
│ │
│ B. EXECUTIVE SUMMARY │
│ - High-level metrics (successful orgs) │
│ - Critical warnings and alerts │
│ - Failed organization count │
│ - Detection limit warnings │
│ │
│ C. AGGREGATE METRICS │
│ Total Across 45 Organizations │
│ (Excluded: 5 orgs - see failures section) │
│ │
│ - Total Sensor Events: 1,250,432,100 │
│ Calculation: Sum of daily sensor_events │
│ from Nov 1-30, 2025 │
│ │
│ - Total Data Output: 3,847 GB │
│ (4,128,394,752,000 bytes) │
│ Calculation: Sum of daily output_bytes_tx│
│ ÷ 1,073,741,824 │
│ │
│ - Peak Sensors: 12,450 │
│ Calculation: Sum of max peak_sensors │
│ │
│ D. PER-ORGANIZATION DETAILS │
│ For each successful organization: │
│ │
│ ── Client ABC ───────────────────────── │
│ OID: c7e8f940-1234-5678-abcd-... │
│ Data Retrieved: 2025-11-20 14:45:35 UTC │
│ │
│ Usage Statistics (Nov 1-30, 2025): │
│ - Sensor Events: 42,150,000 │
│ - Data Output: 125 GB (134,217,728,000 B)│
│ - D&R Evaluations: 1,200,450 │
│ - Peak Sensors: 250 │
│ │
│ Sensor Inventory: │
│ - Total Sensors: 250 │
│ - Online: 245 (98%) │
│ - Offline: 5 (2%) │
│ - Platforms: Windows (150), Linux (100) │
│ │
│ Detection Summary: │
│ Retrieved: 5,000 detections │
│ ⚠️ LIMIT REACHED - actual count higher │
│ Top Categories: │
│ - suspicious_process: 1,250 │
│ - network_threat: 890 │
│ - malware: 450 │
│ │
│ Billing Status: │
│ - Plan: Enterprise │
│ - Status: Active ✓ │
│ - Next Billing: Dec 1, 2025 │
│ - Invoice: [URL] │
│ │
│ E. FAILED ORGANIZATIONS SECTION │
│ ⚠️ FAILED ORGANIZATIONS (5 of 50) │
│ │
│ Client XYZ (oid: c7e8f940-...) │
│ Status: ❌ Failed │
│ Error: 403 Forbidden │
│ Endpoint: get-billing-details │
│ Reason: Insufficient billing permissions │
│ Impact: Billing data unavailable │
│ Available: Usage stats, sensor inventory │
│ Action: Grant billing:read permission │
│ Timestamp: 2025-11-20 14:32:15 UTC │
│ │
│ F. DETECTION LIMIT WARNINGS │
│ ⚠️ Organizations at Detection Limit: │
│ │
│ 15 of 45 organizations exceeded the 5,000 │
│ detection limit. Actual counts are higher. │
│ │
│ Organizations affected: │
│ - Client A: 5,000 retrieved ⚠️ │
│ - Client B: 5,000 retrieved ⚠️ │
│ - [... 13 more] │
│ │
│ Recommendation: For complete detection │
│ data, narrow time ranges or query specific │
│ date ranges for these organizations. │
│ │
│ G. METHODOLOGY SECTION │
│ Data Sources: │
│ - limacharlie org list: Organization discovery │
│ - limacharlie org stats: Daily metrics │
│ - limacharlie billing details: Sub info │
│ - limacharlie sensor list: Endpoint inv │
│ - limacharlie sensor list --online: Live │
│ - limacharlie detection list: Security │
│ - limacharlie dr list: Custom rules │
│ │
│ Query Parameters: │
│ - Detection limit: 5,000 per org │
│ - Time range: Nov 1-30, 2025 │
│ - Date filtering: Applied to usage stats │
│ │
│ Calculations: │
│ - Bytes to GB: value ÷ 1,073,741,824 │
│ - Aggregations: Sum across successful │
│ organizations only │
│ - Timestamps: Normalized from mixed │
│ seconds/milliseconds format │
│ │
│ Data Freshness: │
│ - Usage stats: Daily (24hr delay) │
│ - Detections: Near real-time (5min) │
│ - Sensor status: Real-time │
│ - Billing: Updated on changes (~1hr) │
│ │
│ H. FOOTER │
│ ═══════════════════════════════════════ │
│ Report completed: 2025-11-20 14:50:15 UTC │
│ Execution time: 4 minutes 45 seconds │
│ │
│ For questions or issues: │
│ Contact: support@limacharlie.io │
│ │
│ Disclaimer: Usage metrics shown are from │
│ LimaCharlie APIs. For billing and pricing, │
│ refer to individual organization invoices. │
│ ═══════════════════════════════════════ │
└────────────────────────────────────────────────────┘
Progress Reporting During Execution:
Display progress as orgs are processed:
"Generating MSSP Report for 50 Organizations...
[1/50] Client ABC... ✓ Success (2.3s)
[2/50] Client XYZ... ✓ Success (1.8s)
[3/50] Client PDQ... ⚠️ Billing permission denied
[4/50] Client RST... ✓ Success (2.1s)
[5/50] Client MNO... ❌ Failed: 500 Server Error
...
[50/50] Client ZZZ... ✓ Success (1.9s)
Collection Complete:
✓ Successful: 45 organizations
⚠️ Partial: 2 organizations (some data unavailable)
❌ Failed: 3 organizations
Generating report structure..."
IMPORTANT: Console Output is Complete
The above console output displays ALL collected data. Do NOT automatically proceed to HTML generation.
Only generate HTML if user explicitly requests it (e.g., "export as HTML", "create dashboard"). When HTML is requested:
- Spawn html-renderer to create the visual dashboard

Template Reference: templates/billing-report.json
User Request Examples:
Workflow:
1. Read billing-report.json template for input/output schemas
2. Validate inputs against template:
- Required: year (integer), month (1-12)
- Optional: scope (single/all), oid (UUID), format (json/markdown)
3. Get orgs: limacharlie org list --output yaml
4. Spawn org-reporter agents in parallel (same as Pattern 1)
- Agents collect ALL data (billing focus is in report generation)
5. Aggregate results focusing on billing data:
- Extract only: usage, billing, invoice_url from each agent result
- Skip: detections, rules, detailed sensor data
6. Format output per template schema:
- metadata: generated_at, period, scope, tenant_count
- data.tenants: per-tenant billing with SKUs
- data.rollup: aggregate totals (when scope=all)
- warnings/errors arrays
7. Generate Billing-Focused Report:
- Usage metrics per org (NO cost calculations)
- Subscription status
- Invoice links
- Billing permission issues flagged
Template Reference: templates/customer-health-report.json (with scope=single)
User Request Examples:
Workflow:
1. Read customer-health-report.json template for structure
2. Get orgs: limacharlie org list --output yaml
- Filter to find OID for the specified org name
3. Spawn ONE org-reporter agent for that organization:
Task(
subagent_type="lc-essentials:org-reporter",
prompt="Collect reporting data for org 'Client ABC' (OID: uuid)
Time Range: [start] to [end]
Detection Limit: 5000"
)
4. Generate Detailed Single-Org Report per template:
- Full usage breakdown
- Complete sensor inventory with platforms
- Detection breakdown by category
- All D&R rules listed
- Output configurations
- Any errors or warnings from collection
- Attention items requiring follow-up
Template Reference: templates/detection-analytics-report.json
User Request Examples:
Workflow:
1. Read detection-analytics-report.json template for schemas
2. Ask user for time range (7/14/30 days)
3. Get orgs: limacharlie org list --output yaml
4. Spawn org-reporter agents in parallel:
- Each agent collects detection data for its org
5. Aggregate detection data:
- Extract: detection_count, categories, severities, top_hosts
- Build: top_categories, top_tenants rankings
- Track: limit_reached flags
6. Format output per template:
- metadata: time_window with calculated timestamps
- data.tenants: per-tenant detection breakdown
- data.rollup: aggregate totals
- data.top_categories: cross-tenant category ranking
- data.top_tenants: volume ranking with limit flags
7. Prominently display limit warnings for affected orgs
AI must validate at these critical points:
✓ Organization list retrieved successfully
✓ Each OID is valid UUID format (not org name)
✓ USER WAS ASKED to confirm time range (MANDATORY)
✓ Timestamps calculated DYNAMICALLY using $(date +%s)
✓ Timestamps are reasonable (start < end, not future)
✓ Limit values are positive integers
✓ Date ranges <= 90 days (warn if larger)
✓ User confirmed processing for large org counts (>20)
✓ Time range displayed to user before proceeding
✓ Each agent returned valid JSON structure
✓ Check agent status: "success", "partial", or "failed"
✓ Required data fields present in successful results
✓ Errors array populated for any failures
✓ Warnings array checked for detection limits, etc.
✓ Data types correct (numbers are numbers)
✓ Values within expected ranges
✓ No division by zero
✓ No negative values where impossible
✓ Timestamps pass sanity checks
✓ All numbers formatted with thousand separators
✓ Units clearly labeled (GB, bytes, count, etc.)
✓ Timestamps in consistent format (UTC)
✓ Warnings included where required
✓ Metadata sections complete
Based on previous implementation learnings:
WRONG: detection['rule_name']
RIGHT: detection.get('source_rule', detection.get('cat', 'unknown'))
WRONG: detection['severity']
RIGHT: Severity not in detection records - only in D&R rule config
WRONG: Using hardcoded epoch values from documentation examples
RIGHT: ALWAYS calculate dynamically: NOW=$(date +%s); START=$((NOW - 604800))
WRONG: Assuming a default time range without asking user
RIGHT: Use AskUserQuestion to confirm time range before querying
WRONG: Using timestamps from skill examples (e.g., 1730419200)
RIGHT: Calculate current time and subtract: $(($(date +%s) - 86400))
WRONG: Assuming all timestamps are seconds
RIGHT: Check magnitude, normalize if > 10000000000 (milliseconds)
WRONG: Ignoring invalid timestamps
RIGHT: Validate range (2020-01-01 to now+1day), flag invalid
WRONG: Not confirming calculated timestamps with user
RIGHT: Display "Time range: [date] to [date]" before running queries
WRONG: last_seen field
RIGHT: alive field (datetime string format)
WRONG: Estimating offline duration
RIGHT: Parse alive timestamp, calculate exact hours/days
WRONG: Showing raw numeric codes without context
RIGHT: Pattern analysis + sample hostnames
Example:
WRONG: "Platform: 2415919104 (30 sensors)"
RIGHT: "Platform: LimaCharlie Extensions (code: 2415919104)
Sample hostnames: ext-strelka-01, ext-hayabusa-02
Sensor count: 30"
WRONG: Using all 90 days from API without filtering
RIGHT: Filter to requested time range only
WRONG: Showing only GB
RIGHT: Show both - "450 GB (483,183,820,800 bytes)"
WRONG: Calculating costs
RIGHT: Show usage only, link to invoice for costs
WRONG: Including failed orgs in totals
RIGHT: Sum successful orgs only, document exclusions
WRONG: "Total detections: 75,000"
RIGHT: "Total retrieved: 75,000 (15 orgs hit 5K limit - actual higher)"
CRITICAL (Stop entire report):
HIGH (Skip org, document prominently):
MEDIUM (Use fallback, note in report):
LOW (Note in methodology):
{
"org_oid": "c7e8f940-...",
"org_name": "Client ABC",
"endpoint": "get-billing-details",
"error_type": "403 Forbidden",
"error_message": "Insufficient permissions",
"timestamp": "2025-11-20T14:45:30Z",
"impact": "Billing details unavailable",
"remediation": "Grant billing:read permission",
"severity": "HIGH",
"retry_attempted": false,
"partial_data_available": true,
"partial_data_sections": ["usage", "sensors", "detections"]
}
Before presenting any report, verify:
═══════════════════════════════════════════════════════════
MSSP Comprehensive Report - November 2025
Generated: 2025-11-20 14:45:30 UTC
Time Window: 2025-11-01 00:00:00 UTC to 2025-11-30 23:59:59 UTC
Duration: 30 days
Organizations Processed: 45 of 50 (90% success rate)
═══════════════════════════════════════════════════════════
EXECUTIVE SUMMARY
Fleet Overview:
• Total Sensors: 12,450 (across 45 successful organizations)
• Online: 11,823 (95%)
• Offline: 627 (5%)
Security Activity:
• Detections Retrieved: 127,450
• ⚠️ 15 organizations hit detection limit (actual counts higher)
• Top Categories: suspicious_process (32%), network_threat (28%)
Usage Metrics (45 organizations):
• Total Events: 1,250,432,100
• Data Output: 3,847 GB
• D&R Evaluations: 45,230,890
Issues Requiring Attention:
⚠️ 5 organizations failed (see Failures section)
⚠️ 15 organizations exceeded detection limits
⚠️ 627 sensors offline (5% of fleet)
⚠️ FAILED ORGANIZATIONS (5 of 50)
─────────────────────────────────────────────────────────
Organization: Client XYZ Security Operations
OID: c7e8f940-aaaa-bbbb-cccc-ddddeeeeffff
Status: ❌ Partial Failure
─────────────────────────────────────────────────────────
Failed Endpoint: get-billing-details
Error Code: 403 Forbidden
Error Message: Insufficient permissions to access billing data
Timestamp: 2025-11-20 14:32:15 UTC
Impact:
✗ Billing details unavailable
✗ Subscription status unknown
✗ Invoice link unavailable
✗ Next billing date unknown
Available Data:
✓ Organization metadata
✓ Usage statistics (Nov 1-30, 2025)
✓ Sensor inventory (125 sensors)
✓ Detection summary (1,847 detections)
✓ D&R rules configuration
Action Required:
Grant the following permission to this organization:
• Permission: billing:read
• Scope: Organization level
• Required Role: Admin or Owner
Without this permission, billing data will remain unavailable
in future reports for this organization.
Data Inclusion:
✓ Usage metrics INCLUDED in aggregate totals
✗ Billing status EXCLUDED from billing summaries
─────────────────────────────────────────────────────────
⚠️ DETECTION LIMIT WARNINGS
The following organizations exceeded the 5,000 detection retrieval
limit. Actual detection counts are higher than shown.
Organizations Affected (15 of 45):
Client ABC Corp
Retrieved: 5,000 detections ⚠️ LIMIT REACHED
Time Range: Nov 1-30, 2025 (30 days)
Actual Count: Unknown (exceeds 5,000)
Recommendation: Query in 7-day increments for complete data
Client XYZ Industries
Retrieved: 5,000 detections ⚠️ LIMIT REACHED
Time Range: Nov 1-30, 2025 (30 days)
Actual Count: Unknown (exceeds 5,000)
Recommendation: Query in 7-day increments for complete data
[... 13 more organizations]
Impact on Report:
• Detection counts shown are MINIMUM values
• Actual totals across all organizations are higher
• Category distributions may be skewed (sample bias)
• Aggregate detection count is underreported
Recommendations:
1. For complete data, narrow time ranges:
- Instead of 30 days, query 7-day periods
- Aggregate results manually
2. Filter by category for targeted analysis:
- Query specific categories separately
- Combine results for complete picture
3. Consider increasing limit (max: 50,000):
- Higher limits increase API response time
- May hit other infrastructure limits
Issue: "Too many organizations to process"
Issue: "Detection limit hit for all orgs"
Issue: "Billing permission errors"
Issue: "Large sensor list timing out"
Use --filter to extract summary fields, or pipe CLI output to a file.

For issues with this skill:
This skill should be updated when:
Last Updated: 2025-11-20 Version: 1.0.0