From jaganpro-sf-skills-7
Orchestrates Salesforce Data Cloud multi-phase pipelines (connect→prepare→harmonize→segment→act), manages data spaces/kits, and troubleshoots cross-phase workflows via sf data360 CLI.
Install:

```shell
npx claudepluginhub jaganpro/sf-skills
```

This skill uses the workspace's default tool permissions.
Use this skill when the user needs **product-level Data Cloud workflow guidance** rather than a single isolated command family: pipeline setup, cross-phase troubleshooting, data spaces, data kits, or deciding whether a task belongs in Connect, Prepare, Harmonize, Segment, Act, or Retrieve.
This skill intentionally follows sf-skills house style while using the external sf data360 command surface as the runtime. The plugin is not vendored into this repo.
Use sf-datacloud when the work involves:
- data spaces (`sf data360 data-space *`)
- data kits (`sf data360 data-kit *`)
- cross-phase diagnostics (`sf data360 doctor`)

Delegate to a phase-specific skill when the user is focused on one area:
| Phase | Use this skill | Typical scope |
|---|---|---|
| Connect | sf-datacloud-connect | connections, connectors, source discovery |
| Prepare | sf-datacloud-prepare | data streams, DLOs, transforms, DocAI |
| Harmonize | sf-datacloud-harmonize | DMOs, mappings, identity resolution, data graphs |
| Segment | sf-datacloud-segment | segments, calculated insights |
| Act | sf-datacloud-act | activations, activation targets, data actions |
| Retrieve | sf-datacloud-retrieve | SQL, search indexes, vector search, async query |
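For scripted handoffs, the routing table above can be expressed as a small lookup. This is an illustrative sketch (the function name is invented); the phase and skill names come from the table, with this skill itself as the fallback for cross-phase work:

```shell
# Map a Data Cloud phase to the skill that owns it (names from the table above).
phase_skill() {
  case "$1" in
    connect)   echo "sf-datacloud-connect" ;;
    prepare)   echo "sf-datacloud-prepare" ;;
    harmonize) echo "sf-datacloud-harmonize" ;;
    segment)   echo "sf-datacloud-segment" ;;
    act)       echo "sf-datacloud-act" ;;
    retrieve)  echo "sf-datacloud-retrieve" ;;
    *)         echo "sf-datacloud" ;;  # cross-phase work stays with this skill
  esac
}
```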
Delegate outside the family when the user is working primarily in another skill's domain (see the cross-skill delegation table at the end).
Ask for or infer the target org alias and which phase the task touches.

If plugin availability or org readiness is uncertain, start with:
- `scripts/verify-plugin.sh`
- `scripts/diagnose-org.mjs`
- `scripts/bootstrap-plugin.sh`

Constraints:

- Use the external `sf data360` plugin runtime; do not reimplement or vendor the command layer.
- Prefer `scripts/diagnose-org.mjs` over guessing from one failing command.
- For `sf data360` commands, suppress linked-plugin warning noise with `2>/dev/null` unless the stderr output is needed for debugging.
- Do not treat `sf data360 doctor` as a full-product readiness check; the current upstream command only checks the search-index surface.
- Do not use `query describe` as a universal tenant probe; only use it with a known DMO/DLO table after broader readiness is confirmed.

Confirm:
- `sf` is installed

Recommended checks:
```shell
sf data360 man
sf org display -o <alias>
bash ~/.claude/skills/sf-datacloud/scripts/verify-plugin.sh <alias>
```
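These checks can be wrapped in a small gate before any `data360` call. The sketch below assumes the plugin shows up in `sf plugins` output with a name containing `data360`; that name is an assumption, and `verify-plugin.sh` remains the canonical check:

```shell
# Gate: is the sf CLI present, and does an installed plugin look like data360?
# The "data360" substring match is an assumption about the plugin's listed name.
have_data360() {
  command -v sf >/dev/null 2>&1 || { echo "sf missing"; return 1; }
  if sf plugins 2>/dev/null | grep -qi "data360"; then
    echo "plugin found"
  else
    echo "plugin missing"
  fi
}
```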
Treat `sf data360 doctor` as a broad health signal, not the sole gate. On partially provisioned orgs it can fail even when read-only command families like connectors, DMOs, or segments still work.
Run the shared classifier first:
```shell
node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --json
```
Only use a query-plane probe after you know the table name is real:
```shell
node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --phase retrieve --describe-table MyDMO__dlm --json
```
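When scripting against the classifier, route on its verdict rather than on a single failing command. This sketch assumes the `--json` output carries a top-level `readiness` field with the states used in the status report shape below; adjust the field name to the script's real output:

```shell
# Route on the classifier verdict read from stdin. The "readiness" field name
# is an assumption about diagnose-org.mjs --json output, not a documented schema.
route_readiness() {
  local readiness
  readiness=$(grep -o '"readiness": *"[a-z_]*"' | cut -d'"' -f4)
  case "$readiness" in
    ready|ready_empty)     echo "proceed" ;;
    partial|feature_gated) echo "investigate" ;;
    *)                     echo "blocked" ;;
  esac
}
```

Usage: `node .../diagnose-org.mjs -o <org> --json | route_readiness`.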
Use the classifier to distinguish the readiness states reported below: ready, ready_empty, partial, feature_gated, or blocked.
Use targeted inspection after classification:
```shell
sf data360 doctor -o <org> 2>/dev/null
sf data360 data-space list -o <org> 2>/dev/null
sf data360 data-stream list -o <org> 2>/dev/null
sf data360 dmo list -o <org> 2>/dev/null
sf data360 identity-resolution list -o <org> 2>/dev/null
sf data360 segment list -o <org> 2>/dev/null
sf data360 activation platforms -o <org> 2>/dev/null
```
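The targeted read-only checks above can be run as a batch and tallied. A sketch, assuming an authenticated alias and suppressing plugin warning noise as recommended:

```shell
# Run each read-only command family once and report pass/fail per family.
# Command names come from this skill's runbook; all stderr noise is suppressed.
smoke_check() {
  local org="$1" cmd
  for cmd in "data-space list" "data-stream list" "dmo list" \
             "identity-resolution list" "segment list" "activation platforms"; do
    if sf data360 $cmd -o "$org" >/dev/null 2>&1; then
      echo "PASS $cmd"
    else
      echo "FAIL $cmd"
    fi
  done
}
```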
Route the task to the matching phase skill from the table above.
Prefer JSON definition files and repeatable scripts over one-off manual steps. Generic templates live in:
- `assets/definitions/data-stream.template.json`
- `assets/definitions/dmo.template.json`
- `assets/definitions/mapping.template.json`
- `assets/definitions/relationship.template.json`
- `assets/definitions/identity-resolution.template.json`
- `assets/definitions/data-graph.template.json`
- `assets/definitions/calculated-insight.template.json`
- `assets/definitions/segment.template.json`
- `assets/definitions/activation-target.template.json`
- `assets/definitions/activation.template.json`
- `assets/definitions/data-action-target.template.json`
- `assets/definitions/data-action.template.json`
- `assets/definitions/search-index.template.json`

Typical verification:
- `connection list` requires `--connector-type`.
- `dmo list --all` is useful when you need the full catalog, but first-page `dmo list` is often enough for readiness checks and much faster.
- `--api-version 64.0`.
- `segment members` returns opaque IDs; use SQL joins for human-readable details.
- `sf data360 doctor` can fail on partially provisioned orgs even when some read-only commands still work; fall back to targeted smoke checks.
- `query describe` errors such as `Couldn't find CDP tenant ID` or `DataModelEntity ... not found` are query-plane clues, not automatic proof that the whole product is disabled.

When finishing, report in this order:
Suggested shape:
```
Data Cloud task: <setup / inspect / troubleshoot / migrate>
Runtime: <plugin ready / missing / partially verified>
Readiness: <ready / ready_empty / partial / feature_gated / blocked>
Phases: <connect / prepare / harmonize / segment / act / retrieve>
Artifacts: <json files, commands, scripts>
Verification: <passed / partial / blocked>
Next step: <next phase, setup guidance, or cross-skill handoff>
```
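As an illustration of the definition-first approach, a filled-in definition derived from one of the templates might look like the following. Every field name here is a hypothetical example, not the actual template schema; always start from `assets/definitions/segment.template.json`:

```json
{
  "name": "HighValueCustomers",
  "dataSpace": "default",
  "targetDmo": "UnifiedIndividual__dlm",
  "filter": "LifetimeValue__c > 1000"
}
```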
| Need | Delegate to | Reason |
|---|---|---|
| load or clean CRM source data | sf-data | seed or fix source records before ingestion |
| create missing CRM schema | sf-metadata | Data Cloud expects existing objects/fields |
| deploy permissions or bundles | sf-deploy | environment preparation |
| write Apex against Data Cloud outputs | sf-apex | code implementation |
| Flow automation after segmentation/activation | sf-flow | declarative orchestration |
| session tracing / STDM / parquet analysis | sf-ai-agentforce-observability | different Data Cloud use case |