Identify non-obvious signals, hidden patterns, and clever correlations in datasets using investigative data analysis techniques. Use when analyzing social media exports, user data, behavioral datasets, or any structured data where deeper insights are desired. Pairs with personality-profiler for enhanced signal extraction. Triggers on requests like "what patterns do you see", "find hidden signals", "correlate these datasets", "what am I missing in this data", "analyze across datasets", "find non-obvious insights", or when users want to go beyond surface-level analysis. Also use proactively when you notice interesting anomalies or correlations during any data analysis task.
Install: `npx claudepluginhub petekp/agent-skills --plugin literate-guide`

This skill uses the workspace's default tool permissions.
Advanced signal detection and correlation analysis for extracting non-obvious insights from datasets.
This skill transforms Claude into an investigative data analyst, applying techniques from data journalism, forensic accounting, and OSINT investigation to find patterns others miss. It pairs naturally with personality-profiler to enhance signal extraction from social media data, but works with any structured dataset.
Adopt these cognitive stances from elite data journalists and investigators:
- Healthy Skepticism — "There is no such thing as clean or dirty data, just data you don't understand." Challenge every assumption.
- Harm-Centered Pattern Recognition — Study anomalies not as noise to remove, but as potential signals revealing system cracks.
- Naivete as Asset — Remain naive enough to spot what domain experts miss due to habituation.
- Evidence Over Assumption — Build confidence through evidence; never trust preconceived notions.
CRITICAL: Before any analysis, use AskUserQuestion to interview the user about potential analyses. Present proactively formulated options based on the data structure.
When data is provided:
Use AskUserQuestion with proactively formulated analysis options. Structure questions around these categories:
Template for interview questions:
AskUserQuestion with options like:
- "Temporal anomaly detection" — Find unusual patterns in when things happen
- "Behavioral clustering" — Group similar patterns to find outlier behaviors
- "Cross-field correlation" — Discover unexpected relationships between fields
- "Absence analysis" — Identify what's NOT in the data that should be
- "Custom analysis" — [Free text option for user-specified direction]
Always include a free-text option so the user can specify a direction you did not anticipate.
Example interview for social media data:
Header: "Analysis Focus"
Question: "What patterns are you most interested in discovering?"
Options:
- "Engagement anomalies" — Posts that performed unusually well/poorly vs your baseline
- "Topic evolution" — How your interests shifted over time
- "Social network signals" — Who you engage with most and patterns in those interactions
- "Behavioral fingerprint" — Your unique timing, vocabulary, and stylistic signatures
Apply the signal detection techniques from the reference guide based on user selection.
For each insight, report the supporting evidence, a confidence level, and the implication.
For comprehensive technique descriptions, see references/signal-detection.md.
| Technique | What It Finds | When to Use |
|---|---|---|
| Temporal Fingerprinting | Activity rhythms, scheduling patterns | Any timestamped data |
| Ratio Analysis | Unusual proportions that suggest hidden behavior | Engagement metrics, financial data |
| Absence Detection | What's missing that should exist | Any dataset with expected patterns |
| Cross-Dataset Triangulation | Corroboration or contradiction across sources | Multiple data exports |
| Outlier Contextualization | Whether anomalies are errors or signals | After initial statistical analysis |
| Linguistic Forensics | Vocabulary shifts, tone changes over time | Text-heavy datasets |
| Network Topology | Connection patterns and clustering | Social/relationship data |
| Behavioral Segmentation | Distinct modes of operation | Activity logs, engagement data |
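As a concrete instance of the first technique, here is a minimal temporal-fingerprinting sketch in Python. The two-standard-deviation threshold is an illustrative assumption, not a fixed rule:

```python
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

def temporal_fingerprint(timestamps):
    """Bucket ISO-8601 timestamps by hour of day and flag hours whose
    activity count exceeds the mean by more than two standard deviations."""
    hours = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    counts = [hours.get(h, 0) for h in range(24)]
    mu, sigma = mean(counts), stdev(counts)
    return [h for h in range(24) if sigma and counts[h] > mu + 2 * sigma]

# Toy sample: four posts clustered around 3am, one daytime post.
posts = ["2024-05-01T03:14:00", "2024-05-02T03:40:00", "2024-05-03T03:05:00",
         "2024-05-04T03:55:00", "2024-05-05T14:30:00"]
print(temporal_fingerprint(posts))  # → [3]
```

The anomalous hour is the raw signal; whether it means a second time zone, scheduled automation, or insomnia is the investigative question that follows.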
When analyzing multiple datasets together:
For each finding in Dataset A, check whether Dataset B corroborates it, contradicts it, or is silent on it.
Use this format:

```
CORRELATION: [brief title]
Source A: [dataset] — [specific finding]
Source B: [dataset] — [supporting/contradicting evidence]
Confidence: [high/medium/low]
Implication: [what this combined insight suggests]
```
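The triangulation check above can be sketched in Python. The field names (`topic`, `trend`) are hypothetical placeholders for whatever shared key your datasets actually expose:

```python
def triangulate(findings_a, findings_b, key="topic"):
    """Classify each finding in dataset A as corroborated, contradicted,
    or unsupported, based on dataset B. Field names are illustrative."""
    b_index = {f[key]: f for f in findings_b}
    results = []
    for fa in findings_a:
        fb = b_index.get(fa[key])
        if fb is None:
            verdict = "unsupported"    # B is silent: note the absence
        elif fb["trend"] == fa["trend"]:
            verdict = "corroborated"   # both sources agree
        else:
            verdict = "contradicted"   # sources disagree: dig deeper
        results.append({key: fa[key], "verdict": verdict})
    return results

a = [{"topic": "night posting", "trend": "rising"},
     {"topic": "reply rate", "trend": "falling"}]
b = [{"topic": "night posting", "trend": "rising"}]
print(triangulate(a, b))
```

An "unsupported" verdict lowers confidence but is itself a finding: absence in a second source is exactly the kind of signal the absence-analysis technique looks for.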
When paired with personality-profiler, cross-reference its profile output with these findings to sharpen signal extraction.
Deliver findings in two parts:

1. Narrative summary: 2-3 paragraphs highlighting the most significant non-obvious findings.
2. Structured JSON report:
```json
{
  "analysis_type": "data-sleuth",
  "datasets_analyzed": ["list of sources"],
  "findings": [
    {
      "title": "Finding title",
      "category": "temporal|behavioral|linguistic|network|correlation",
      "confidence": 0.0-1.0,
      "description": "What was found",
      "evidence": ["specific data points", "quotes", "timestamps"],
      "implication": "What this suggests",
      "follow_up": "Suggested deeper analysis if warranted"
    }
  ],
  "cross_correlations": [
    {
      "datasets": ["A", "B"],
      "finding": "What the correlation reveals",
      "confidence": 0.0-1.0
    }
  ],
  "methodology_notes": "How the analysis was conducted"
}
```
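A quick structural check, sketched in Python, can catch malformed reports before delivery; the field names and ranges mirror the schema above, and the sample report is invented purely for illustration:

```python
def validate_report(report):
    """Minimal structural validation of the data-sleuth output schema."""
    categories = {"temporal", "behavioral", "linguistic", "network", "correlation"}
    assert report["analysis_type"] == "data-sleuth"
    assert isinstance(report["datasets_analyzed"], list)
    for f in report["findings"]:
        assert f["category"] in categories
        assert 0.0 <= f["confidence"] <= 1.0
        assert isinstance(f["evidence"], list)
    for c in report.get("cross_correlations", []):
        assert 0.0 <= c["confidence"] <= 1.0
    return True

sample = {
    "analysis_type": "data-sleuth",
    "datasets_analyzed": ["social-media-export"],
    "findings": [{
        "title": "Late-night posting cluster",
        "category": "temporal",
        "confidence": 0.8,
        "description": "Activity concentrates around 3am",
        "evidence": ["2024-05-01T03:14:00"],
        "implication": "Possible second time zone or automation",
        "follow_up": "Compare against location-tagged data"
    }],
    "cross_correlations": [],
    "methodology_notes": "Hour-of-day bucketing with outlier flagging"
}
print(validate_report(sample))  # → True
```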
Use this skill without being asked when you notice interesting anomalies or correlations during any other data analysis task.
Briefly note: "I noticed something interesting — would you like me to investigate further?"