From ad-migration
Use when profiling a single table, view, or materialized view for migration and the next step depends on persisted classification, keying, watermark, foreign-key typing, PII handling, or stg-vs-mart view classification.
npx claudepluginhub accelerate-data/migration-utility

This skill uses the workspace's default tool permissions.
Persist a fresh profile for one table, view, or materialized view. Treat any existing `profile` section as non-authoritative and recompute from current catalog evidence.
$ARGUMENTS must be the fully-qualified object name. Ask the user if it is missing.
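The fully-qualified-name requirement can be screened with a small heuristic before any catalog work starts. This is a sketch only: it assumes dotted names (for example `db.schema.object`) and is not a substitute for catalog validation.

```shell
# Heuristic check that the argument looks fully qualified.
# Empty or undotted names are reported so the user can be asked.
check_fqn() {
  case "$1" in
    "")  echo "missing: ask the user for the object name" ;;
    *.*) echo "ok" ;;
    *)   echo "not fully qualified" ;;
  esac
}

check_fqn "sales.dbo.orders"   # → ok
check_fqn "orders"             # → not fully qualified
```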
Check readiness:
uv run --project "${CLAUDE_PLUGIN_ROOT}/packages/ad-migration-internal" migrate-util ready profile --object <fqn>
If ready is false, report the failing code and reason and stop.
Do not run profile context, classify, or write commands after a failed readiness check.
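The readiness gate above can be sketched as a helper. This assumes the `ready profile` command emits a JSON object with `ready`, `code`, and `reason` fields; the real output shape may differ, so treat the parsing as illustrative.

```shell
# Gate on the readiness payload: report the failing code and reason and
# stop (non-zero exit) when ready is false.
check_ready() {
  python3 - "$1" <<'PY'
import json, sys

payload = json.loads(sys.argv[1])
if not payload.get("ready", False):
    # Surface the failing code and reason, then stop.
    print("not ready: %s - %s" % (payload.get("code"), payload.get("reason")))
    raise SystemExit(1)
print("ready")
PY
}

# Stubbed payload instead of a live CLI call:
check_ready '{"ready": false, "code": "MISSING_OBJECT", "reason": "fqn not in catalog"}' || true
```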
Detect object type:
catalog/views/<fqn>.json exists → use the View Pipeline; otherwise use the Table Pipeline below.

View Pipeline: build the profile from current evidence, write it with profile write, then report the persisted result.
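The object-type detection above reduces to a file-existence check. A minimal sketch (the catalog path comes from the rule above; the fallback branch follows the table flow described later in this skill):

```shell
# Route one object by checking for its view catalog entry.
route_object() {
  if [ -f "catalog/views/$1.json" ]; then
    echo "view pipeline"
  else
    echo "table pipeline"   # no view entry: fall back to the table flow
  fi
}

route_object "sales.dbo.v_orders"   # table pipeline unless the catalog file exists
```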
Shared rules for both pipelines:
Canonical /profile statuses and codes live in ../../lib/shared/profile_error_codes.md.
Every warnings or errors entry must include code, severity, and message.
If profile write rejects the payload, fix the JSON and retry.
Do not set status yourself; profile write derives it.

Fetch the view context:
uv run --project "${CLAUDE_PLUGIN_ROOT}/packages/ad-migration-internal" profile view-context \
  --view <view_fqn>
If the command exits non-zero, stop and report the error.
Classify the view as stg vs mart: read the context JSON and apply references/view-classification-signals.md.
Use this order:
Check references.views.in_scope. For each dependency view, run:
uv run --project "${CLAUDE_PLUGIN_ROOT}/packages/ad-migration-internal" discover show --name <dependency_view_fqn>
If any dependency view is already classified mart, inherit mart.
Use sql_elements next: aggregation, group_by, or window_function signals push toward mart; plain select/rename pass-through signals push toward stg.
Use logic_summary only as a tiebreaker when structural signals are sparse.
For materialized views, aggregation still implies mart; lookup/pass-through still implies stg.
If signals conflict, default to mart.
Write a 1-2 sentence rationale naming the signals that drove the decision.
If view parsing continued with limitations, carry those diagnostics into warnings as canonical /profile entries. Normalize continued parse-limit warnings to DDL_PARSE_ERROR and preserve the original detail in message.
If classification is still ambiguous after dependency checks, sql_elements, and logic_summary, do not guess. Report the ambiguity and stop.
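The decision order above can be condensed into a sketch. The signal values here are illustrative placeholders; real signals come from the view-context JSON and references/view-classification-signals.md.

```shell
# Precedence: inherited mart > structural sql_elements > logic_summary tiebreak.
classify_view() {
  dep_mart="$1"     # "true" if any dependency view is already classified mart
  sql_signal="$2"   # structural read of sql_elements: mart | stg | conflict | none
  logic_hint="$3"   # logic_summary tiebreaker: mart | stg | ""
  if [ "$dep_mart" = "true" ]; then echo mart; return; fi
  case "$sql_signal" in
    mart|stg) echo "$sql_signal"; return ;;
    conflict) echo mart; return ;;   # conflicting signals default to mart
  esac
  case "$logic_hint" in
    mart|stg) echo "$logic_hint"; return ;;
  esac
  echo ambiguous                     # report the ambiguity and stop; do not guess
}

classify_view false conflict ""   # → mart
classify_view false none stg      # → stg
```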
Create the temp file first, then persist it to the catalog:
mkdir -p .staging
cat > .staging/view_profile.json <<'EOF'
<profile JSON>
EOF
uv run --project "${CLAUDE_PLUGIN_ROOT}/packages/ad-migration-internal" profile write \
--table <view_fqn> \
--profile-file .staging/view_profile.json && rm -rf .staging
profile write reads .staging/view_profile.json and persists that profile into the catalog.
Required payload fields:
classification: stg or mart
rationale
source: llm

After a successful write, report the persisted result.

Table Pipeline: fetch the profiling context:
uv run --project "${CLAUDE_PLUGIN_ROOT}/packages/ad-migration-internal" profile context \
--table <table>
If the command exits non-zero, stop and report the error.
Use selected_writer_ddl_slice as the table SQL when present. Otherwise use proc_body. If both are empty, stop with a context error.
Read the context JSON and apply references/profiling-signals.md.
Answer the six profiling questions defined there. If the signals tentatively indicate:
fact_accumulating_snapshot → also read references/accumulating-snapshot-classification.md
fact_periodic_snapshot → also read references/periodic-snapshot-classification.md
Follow the signal tables and pattern-matching rules in those references completely. Do not abbreviate them or replace them with a lighter heuristic.
If the profile is partial, add PARTIAL_PROFILE with severity: "warning".
If EXEC routing is also reducing confidence, also add PARSE_ERROR with severity: "warning".
If profile context surfaces parse or routing diagnostics, copy the relevant detail into warnings.
If you cannot produce a defensible profile, emit PROFILING_FAILED, explain why, and stop instead of writing guessed output.
Required partial-friendly warning behavior:
Partial profile → include PARTIAL_PROFILE
Partial profile with EXEC routing → include both PARTIAL_PROFILE and PARSE_ERROR
severity must be exactly "warning"
Create the temp file first, then persist it to the catalog:
mkdir -p .staging
cat > .staging/profile.json <<'EOF'
<profile JSON>
EOF
uv run --project "${CLAUDE_PLUGIN_ROOT}/packages/ad-migration-internal" profile write \
--table <table> \
--profile-file .staging/profile.json && rm -rf .staging
profile write reads .staging/profile.json and persists that profile into the catalog.
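The partial-profile warning rules above can be sketched as a small helper. The codes are the canonical ones named in this skill; the message texts here are illustrative placeholders.

```shell
# Emit the canonical warning entries for a partial profile, one JSON
# object per line. Pass "true" when EXEC routing also reduced confidence.
partial_warnings() {
  printf '{"code": "PARTIAL_PROFILE", "severity": "warning", "message": "profile built from partial evidence"}\n'
  if [ "$1" = "true" ]; then
    printf '{"code": "PARSE_ERROR", "severity": "warning", "message": "EXEC routing reduced parse confidence"}\n'
  fi
}

partial_warnings true   # two entries: PARTIAL_PROFILE and PARSE_ERROR
```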
Payload rules:
writer is required.
Every persisted decision must include a short rationale, including classification, primary_key, natural_key, watermark, each foreign_keys[] entry, and each pii_actions[] entry you emit.
Use this payload shape:
{
"writer": "<writer fqn>",
"classification": {"resolved_kind": "...", "source": "...", "rationale": "..."},
"primary_key": {"columns": ["..."], "primary_key_type": "...", "source": "...", "rationale": "..."},
"natural_key": {"columns": ["..."], "source": "...", "rationale": "..."},
"watermark": {"column": "...", "source": "...", "rationale": "..."},
"foreign_keys": [{"column": "...", "fk_type": "...", "source": "...", "rationale": "..."}],
"pii_actions": [{"column": "...", "suggested_action": "...", "source": "...", "rationale": "..."}],
"warnings": [{"code": "...", "severity": "...", "message": "..."}],
"errors": [{"code": "...", "severity": "...", "message": "..."}]
}
Omit unresolved sections entirely. Do not emit null placeholders.
foreign_keys[] entries are per local FK column and use this shape: {"column": "<local column>", "fk_type": "...", "source": "...", "rationale": "..."}.
Do not replace column with columns. Do not emit reference or nested references objects in the profile payload.
Include every section you can support; omit unresolved sections instead of inventing values.
Use canonical warnings and errors to explain omissions or reduced confidence.
If profile write fails validation, correct the payload shape and retry before presenting results.
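A local pre-flight check can catch the most common shape mistakes before calling profile write. This is a sketch only: the authoritative validation is done by profile write itself, and the sample field values (writer name, fk_type) are hypothetical.

```shell
# Check the payload rules stated above: writer is required, FK entries use
# a singular "column" with no nested references, and unresolved sections
# are omitted rather than set to null.
check_profile_payload() {
  python3 - "$1" <<'PY'
import json, sys

p = json.loads(sys.argv[1])
assert "writer" in p, "writer is required"
for fk in p.get("foreign_keys", []):
    assert "column" in fk and "columns" not in fk, "FK entries use singular 'column'"
    assert "references" not in fk, "do not emit nested references objects"
for key in ("classification", "primary_key", "natural_key", "watermark",
            "foreign_keys", "pii_actions"):
    assert key not in p or p[key] is not None, "omit %s instead of null" % key
print("payload shape ok")
PY
}

# Hypothetical payload fragment:
check_profile_payload '{"writer": "dbo.usp_load_orders", "foreign_keys": [{"column": "customer_id", "fk_type": "dim", "source": "llm", "rationale": "joins a customer dimension"}]}'
```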
Do not set status. Do not stop after reasoning. Persist the profile first, then summarize the persisted result.
After a successful write, report the persisted result.

References:
references/view-classification-signals.md — stg vs mart rules
../../lib/shared/profile_error_codes.md — canonical /profile statuses and codes