Salesforce Data Cloud integration patterns and architecture (2025)
Salesforce Data Cloud is a real-time customer data platform (CDP) that unifies data from any source to create a complete, actionable view of every customer. It powers AI, automation, and analytics across the entire Customer 360 platform.
Key Capabilities:
┌────────────────────────────────────────────────────────────┐
│                        Data Sources                        │
├────────────────────────────────────────────────────────────┤
│  Salesforce CRM │ External Apps │ Data Warehouses │ APIs   │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│            Data Cloud Connectors & Ingestion               │
│  ├─ Real-time Streaming (Change Data Capture)              │
│  ├─ Batch Import (scheduled/on-demand)                     │
│  └─ Zero Copy (Snowflake, Databricks, BigQuery)            │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│               Data Model & Harmonization                   │
│  ├─ Map to Common Data Model (DMO objects)                 │
│  ├─ Identity Resolution (match & merge)                    │
│  └─ Data Transformation (calculated insights)              │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│           Unified Customer Profile (360° View)             │
│  ├─ Demographics, Transactions, Behavior, Events           │
│  └─ Real-time Profile API for instant access               │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│                  Activation & Actions                      │
│  ├─ Salesforce Flow (real-time automation)                 │
│  ├─ Marketing Cloud (segmentation/journeys)                │
│  ├─ Agentforce (AI agents)                                 │
│  ├─ Einstein AI (predictions/recommendations)              │
│  └─ External Systems (reverse ETL)                         │
└────────────────────────────────────────────────────────────┘
Use Case: Keep Data Cloud synchronized with Salesforce objects in real-time
// Enable Change Data Capture for objects
// Setup → Change Data Capture → Select: Account, Contact, Opportunity
// Data Cloud automatically subscribes to CDC channels
// No code needed - configure in Data Cloud UI
// Optional: Custom streaming logic
public class DataCloudStreamHandler {
    public static void publishCustomEvent(Id recordId, String changeType) {
        // Publish custom platform event for Data Cloud
        DataCloudChangeEvent__e event = new DataCloudChangeEvent__e(
            RecordId__c = recordId,
            ObjectType__c = recordId.getSObjectType().getDescribe().getName(),
            ChangeType__c = changeType,
            Timestamp__c = System.now(),
            PayloadJson__c = JSON.serialize(getRecordData(recordId))
        );
        EventBus.publish(event);
    }

    private static Map<String, Object> getRecordData(Id recordId) {
        // Retrieve and return record data
        // Note: FIELDS(ALL) is not supported in Apex SOQL; FIELDS(STANDARD) is
        String objectType = recordId.getSObjectType().getDescribe().getName();
        String query = 'SELECT FIELDS(STANDARD) FROM ' + objectType +
                       ' WHERE Id = :recordId LIMIT 1';
        SObject record = Database.query(query);
        return (Map<String, Object>) JSON.deserializeUntyped(JSON.serialize(record));
    }
}
Use Case: Import data from ERP, e-commerce, or other business systems
Data Cloud Configuration:
1. Create Data Source (Setup → Data Cloud → Data Sources)
- Type: Amazon S3, SFTP, Azure Blob, Google Cloud Storage
- Authentication: API key, OAuth, IAM role
- Schedule: Hourly, Daily, Weekly
2. Map to Data Model Objects (DMO)
- Source Field → DMO Field mapping
- Data type conversions
- Formula fields and transformations
3. Configure Identity Resolution
- Match rules (email, customer ID, phone)
- Reconciliation rules (which source wins)
API-Based Batch Import:
# Python example: Push data to Data Cloud via the Bulk API
import requests

def upload_to_data_cloud(csv_file, object_name, access_token, instance_url):
    """Upload a CSV via the Bulk API 2.0 ingest endpoints."""
    headers = {"Authorization": f"Bearer {access_token}"}

    # Step 1: Create ingestion job
    job_url = f"{instance_url}/services/data/v62.0/jobs/ingest"
    job_payload = {
        "object": object_name,
        "operation": "upsert",
        "externalIdFieldName": "ExternalId__c",
    }
    response = requests.post(
        job_url,
        headers={**headers, "Content-Type": "application/json"},
        json=job_payload,
    )
    response.raise_for_status()
    job_id = response.json()["id"]

    # Step 2: Upload CSV data
    with open(csv_file, "rb") as f:
        csv_data = f.read()
    upload_url = f"{job_url}/{job_id}/batches"
    requests.put(
        upload_url,
        headers={**headers, "Content-Type": "text/csv"},
        data=csv_data,
    ).raise_for_status()

    # Step 3: Close the job so Salesforce starts processing it
    requests.patch(
        f"{job_url}/{job_id}",
        headers={**headers, "Content-Type": "application/json"},
        json={"state": "UploadComplete"},
    ).raise_for_status()
    return job_id
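Closing the job only queues it for processing; the caller still has to poll the job record until it finishes. A minimal polling sketch, assuming the standard Bulk API 2.0 job-status endpoint and `state` field:

```python
import time

# Bulk API 2.0 terminal job states (assumption based on the standard job lifecycle)
TERMINAL_STATES = {"JobComplete", "Failed", "Aborted"}

def is_terminal(state):
    """True once the job can no longer make progress."""
    return state in TERMINAL_STATES

def wait_for_job(job_id, access_token, instance_url, timeout_s=300, poll_s=10):
    """Poll the ingest job until it reaches a terminal state (sketch)."""
    import requests  # local import keeps the pure helper above dependency-free
    url = f"{instance_url}/services/data/v62.0/jobs/ingest/{job_id}"
    headers = {"Authorization": f"Bearer {access_token}"}
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = requests.get(url, headers=headers).json()["state"]
        if is_terminal(state):
            return state
        time.sleep(poll_s)
    raise TimeoutError(f"Job {job_id} still running after {timeout_s}s")
```

A caller would typically treat `Failed`/`Aborted` as errors and fetch the job's failed-records endpoint for diagnostics.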
Use Case: Access data warehouse data without copying to Salesforce
Snowflake Zero Copy Setup:
-- In Snowflake: Grant access to Salesforce
GRANT USAGE ON DATABASE customer_data TO ROLE salesforce_role;
GRANT USAGE ON SCHEMA customer_data.public TO ROLE salesforce_role;
GRANT SELECT ON TABLE customer_data.public.orders TO ROLE salesforce_role;

-- Create secure share (the share needs its own schema/table grants)
CREATE SHARE salesforce_data_share;
GRANT USAGE ON DATABASE customer_data TO SHARE salesforce_data_share;
GRANT USAGE ON SCHEMA customer_data.public TO SHARE salesforce_data_share;
GRANT SELECT ON TABLE customer_data.public.orders TO SHARE salesforce_data_share;
ALTER SHARE salesforce_data_share ADD ACCOUNTS = salesforce_org.salesforce_account;  -- consumer account identifier
Data Cloud Configuration:
1. Add Zero Copy Connector (Data Cloud → Data Sources)
- Type: Snowflake Zero Copy
- Connection: Account URL, username, private key
- Database/Schema selection
2. Create Data Stream (virtual tables)
- Select Snowflake tables to expose
- Map to DMO or keep as is
- Configure refresh (real-time or scheduled)
3. Query in Salesforce
- Use SOQL-like syntax to query Snowflake data
- Join with Salesforce data
- No data movement required
Query Zero Copy Data:
// Query Snowflake data from Apex (via Data Cloud)
public class DataCloudZeroCopyQuery {
    public static List<Map<String, Object>> querySnowflakeOrders(String customerId) {
        // Escape input to avoid injection in the dynamic query
        String safeId = String.escapeSingleQuotes(customerId);
        String query = 'SELECT order_id, total_amount, order_date ' +
                       'FROM snowflake_orders ' +
                       'WHERE customer_id = \'' + safeId + '\' ' +
                       'ORDER BY order_date DESC LIMIT 10';

        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:DataCloud/v1/query');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, String>{ 'query' => query }));

        Http http = new Http();
        HttpResponse res = http.send(req);
        List<Map<String, Object>> rows = new List<Map<String, Object>>();
        if (res.getStatusCode() == 200) {
            Map<String, Object> result = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
            // Copy element-by-element: List<Object> can't be cast to List<Map<String, Object>>
            for (Object row : (List<Object>) result.get('data')) {
                rows.add((Map<String, Object>) row);
            }
        }
        return rows;
    }
}
Configure identity resolution to create unified profiles:
Match Rules Configuration:
├─ Primary Match (exact match on email)
│    └─ IF email matches THEN merge profiles
├─ Secondary Match (fuzzy match on name + phone)
│    └─ IF firstName + lastName similar AND phone matches THEN merge
└─ Tertiary Match (external ID)
     └─ IF ExternalCustomerId matches THEN merge

Reconciliation Rules (conflict resolution):
├─ Most Recent: Use most recently updated value
├─ Source Priority: Salesforce > ERP > Website
└─ Field-Level Rules: Email from Salesforce, Revenue from ERP
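The reconciliation rules above can be sketched as a small merge function. The record shape (`source`, `value`, `updated`) is illustrative; the source names and priority order come from the rules listed:

```python
# Higher number wins under the "Source Priority" rule (Salesforce > ERP > Website)
SOURCE_PRIORITY = {"Salesforce": 3, "ERP": 2, "Website": 1}

def reconcile(candidates, policy):
    """Pick the winning value for one profile field.

    candidates: list of {"source": str, "value": ..., "updated": comparable}
    policy: "most_recent" or "source_priority"
    """
    candidates = [c for c in candidates if c["value"] is not None]
    if not candidates:
        return None
    if policy == "most_recent":
        # Most recently updated value wins
        return max(candidates, key=lambda c: c["updated"])["value"]
    # Otherwise fall back to source priority
    return max(candidates, key=lambda c: SOURCE_PRIORITY.get(c["source"], 0))["value"]
```

Field-level rules are then just a mapping from field name to policy (e.g. `{"email": "source_priority", "revenue": "most_recent"}`).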
Custom Matching Logic:
// Custom matching for complex scenarios
public class DataCloudMatchingService {
    public static Boolean shouldMatch(Map<String, Object> profile1,
                                      Map<String, Object> profile2) {
        // Custom matching logic beyond standard rules
        String email1 = (String) profile1.get('email');
        String email2 = (String) profile2.get('email');

        // Exact email match
        if (email1 != null && email2 != null && email1.equalsIgnoreCase(email2)) {
            return true;
        }

        // Fuzzy name + address match
        String name1 = (String) profile1.get('fullName');
        String name2 = (String) profile2.get('fullName');
        String address1 = (String) profile1.get('address');
        String address2 = (String) profile2.get('address');
        return isNameSimilar(name1, name2) && isSameAddress(address1, address2);
    }

    private static Boolean isNameSimilar(String name1, String name2) {
        // Edit-distance similarity; swap in phonetic matching if needed
        return name1 != null && name2 != null &&
               calculateSimilarity(name1.toLowerCase(), name2.toLowerCase()) > 0.85;
    }

    private static Boolean isSameAddress(String address1, String address2) {
        // Placeholder: normalize (case, whitespace, abbreviations) before comparing
        return address1 != null && address2 != null &&
               address1.trim().equalsIgnoreCase(address2.trim());
    }

    private static Decimal calculateSimilarity(String s1, String s2) {
        // Normalized similarity from Apex's built-in Levenshtein distance
        Integer longest = Math.max(s1.length(), s2.length());
        if (longest == 0) { return 1; }
        return 1 - ((Decimal) s1.getLevenshteinDistance(s2)) / longest;
    }
}
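The Levenshtein-based similarity used above is easy to verify outside the platform. A minimal Python sketch of the same idea (edit distance normalized by the longer string, with a 0.85 threshold):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def name_similarity(n1, n2):
    """1.0 for identical names, approaching 0.0 as they diverge."""
    n1, n2 = n1.lower().strip(), n2.lower().strip()
    longest = max(len(n1), len(n2)) or 1
    return 1.0 - levenshtein(n1, n2) / longest
```

With the 0.85 cutoff, "John Smith" vs "Jon Smith" matches while unrelated names do not.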
Use Case: Trigger Flow when customer behavior detected in Data Cloud
Data Cloud Calculated Insight: "High-Value Customer at Risk"
  - Logic: Purchase frequency decreased by 50% in last 30 days
  - Trigger: When insight calculated
        ↓
Platform Event: HighValueCustomerRisk__e
        ↓
Salesforce Flow: "Retain High-Value Customer"
  - Create Task for Account Manager
  - Send personalized offer via Marketing Cloud
  - Add to "At-Risk" campaign
  - Log activity timeline
Apex Implementation:
// Subscribe to Data Cloud insights
trigger DataCloudInsightTrigger on HighValueCustomerRisk__e (after insert) {
    List<Task> tasks = new List<Task>();
    Map<String, Decimal> riskByCustomer = new Map<String, Decimal>();

    for (HighValueCustomerRisk__e event : Trigger.new) {
        // Create retention task
        tasks.add(new Task(
            Subject = 'Urgent: High-value customer at risk',
            Description = 'Customer ' + event.CustomerName__c +
                          ' shows declining engagement. Take action.',
            WhatId = event.AccountId__c,
            Priority = 'High',
            Status = 'Open',
            ActivityDate = Date.today().addDays(1)
        ));
        riskByCustomer.put(event.CustomerId__c, event.RiskScore__c);
    }

    if (!tasks.isEmpty()) {
        insert tasks;
    }
    // Trigger retention campaign in bulk (one call per batch, not per event)
    RetentionCampaignService.addToRetentionCampaign(riskByCustomer);
}
Use Case: AI agent uses Data Cloud for complete customer context
// Agentforce action: Get unified customer view
public class AgentforceDataCloudActions {
    @InvocableMethod(label='Get Customer 360 Profile')
    public static List<CustomerProfile> getCustomer360(List<String> customerIds) {
        List<CustomerProfile> profiles = new List<CustomerProfile>();
        for (String customerId : customerIds) {
            // Query Data Cloud unified profile
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:DataCloud/v1/profile/' + customerId);
            req.setMethod('GET');
            Http http = new Http();
            HttpResponse res = http.send(req);
            if (res.getStatusCode() == 200) {
                Map<String, Object> data = (Map<String, Object>)
                    JSON.deserializeUntyped(res.getBody());
                CustomerProfile profile = new CustomerProfile();
                profile.customerId = customerId;
                // Demographics
                profile.name = (String) data.get('name');
                profile.email = (String) data.get('email');
                profile.segment = (String) data.get('segment');
                // Behavioral
                profile.totalPurchases = (Decimal) data.get('total_purchases');
                profile.avgOrderValue = (Decimal) data.get('avg_order_value');
                profile.lastPurchaseDate = Date.valueOf((String) data.get('last_purchase_date'));
                profile.preferredChannel = (String) data.get('preferred_channel');
                // Engagement
                profile.emailEngagement = (Decimal) data.get('email_engagement_score');
                profile.websiteVisits = (Integer) data.get('website_visits_30d');
                profile.supportCases = (Integer) data.get('support_cases_90d');
                // Predictive
                profile.churnRisk = (Decimal) data.get('churn_risk_score');
                profile.lifetimeValue = (Decimal) data.get('predicted_lifetime_value');
                profile.nextBestAction = (String) data.get('next_best_action');
                profiles.add(profile);
            }
        }
        return profiles;
    }

    public class CustomerProfile {
        @InvocableVariable public String customerId;
        @InvocableVariable public String name;
        @InvocableVariable public String email;
        @InvocableVariable public String segment;
        @InvocableVariable public Decimal totalPurchases;
        @InvocableVariable public Decimal avgOrderValue;
        @InvocableVariable public Date lastPurchaseDate;
        @InvocableVariable public String preferredChannel;
        @InvocableVariable public Decimal emailEngagement;
        @InvocableVariable public Integer websiteVisits;
        @InvocableVariable public Integer supportCases;
        @InvocableVariable public Decimal churnRisk;
        @InvocableVariable public Decimal lifetimeValue;
        @InvocableVariable public String nextBestAction;
    }
}
Use Case: Push enriched Data Cloud data back to external systems
Configuration:
Data Cloud → Data Actions → Create Data Action
- Target: External API endpoint
- Trigger: Segment membership change, insight calculated
- Payload: Customer profile fields
- Authentication: Named Credential
- Schedule: Real-time or batch
Apex Outbound Sync:
public class DataCloudReverseETL {
    @InvocableMethod(label='Sync Enriched Profile to External System')
    public static void syncToExternalSystem(List<String> customerIds) {
        List<DataCloudSyncLog__c> logs = new List<DataCloudSyncLog__c>();

        for (String customerId : customerIds) {
            // Get enriched profile from Data Cloud
            Map<String, Object> profile = DataCloudService.getProfile(customerId);

            // Transform for external system
            Map<String, Object> payload = new Map<String, Object>{
                'customer_id' => customerId,
                'segment' => profile.get('segment'),
                'lifetime_value' => profile.get('ltv'),
                'churn_risk' => profile.get('churn_risk'),
                'next_best_product' => profile.get('next_best_product')
            };

            // Send to external system
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:ExternalCRM/api/customers/' + customerId);
            req.setMethod('PUT');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(payload));
            HttpResponse res = new Http().send(req);

            logs.add(new DataCloudSyncLog__c(
                CustomerId__c = customerId,
                Direction__c = 'Outbound',
                Success__c = res.getStatusCode() == 200,
                Timestamp__c = System.now()
            ));
        }

        // Insert logs only after all callouts: DML before a callout in the
        // same transaction throws "uncommitted work pending"
        if (!logs.isEmpty()) {
            insert logs;
        }
    }
}
Use Case: Define metrics and KPIs on unified data
-- Example: Customer Lifetime Value
CREATE CALCULATED INSIGHT customer_lifetime_value AS
SELECT
customer_id,
SUM(order_total) as total_revenue,
COUNT(order_id) as total_orders,
AVG(order_total) as avg_order_value,
DATEDIFF(day, first_order_date, CURRENT_DATE) as customer_age_days,
SUM(order_total) / NULLIF(DATEDIFF(day, first_order_date, CURRENT_DATE), 0) * 365 as annual_revenue,
(SUM(order_total) / NULLIF(DATEDIFF(day, first_order_date, CURRENT_DATE), 0) * 365) * 5 as predicted_ltv_5yr
FROM unified_orders
GROUP BY customer_id, first_order_date
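The LTV arithmetic in that insight (annualize revenue over customer tenure, then project five years) can be checked with a few lines of Python; the `NULLIF` guard becomes an explicit zero check:

```python
def predicted_ltv_5yr(total_revenue, customer_age_days):
    """Mirror the SQL: annualize revenue over tenure, project 5 years."""
    if customer_age_days <= 0:
        return None  # mirrors NULLIF(DATEDIFF(...), 0) in the SQL
    annual_revenue = total_revenue / customer_age_days * 365
    return annual_revenue * 5
```

A customer with $1,000 of revenue over exactly one year projects to a $5,000 five-year LTV.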
Use Case: Create segments that update in real-time
-- Segment: High-Value Active Customers
CREATE SEGMENT high_value_active_customers AS
SELECT customer_id
FROM customer_360_profile
WHERE
predicted_ltv_5yr > 10000
AND last_purchase_date >= CURRENT_DATE - INTERVAL '30' DAY
AND email_engagement_score > 0.7
AND churn_risk_score < 0.3
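The same four criteria can be applied to a profile record locally, which is handy for unit-testing segment logic before deploying it; the dict keys mirror the SQL column names and are illustrative:

```python
from datetime import date, timedelta

def in_high_value_active_segment(profile, today):
    """Apply the segment's four criteria to one profile dict."""
    return (
        profile["predicted_ltv_5yr"] > 10000
        and profile["last_purchase_date"] >= today - timedelta(days=30)
        and profile["email_engagement_score"] > 0.7
        and profile["churn_risk_score"] < 0.3
    )
```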
Use in Salesforce:
// Query segment membership
List<Contact> highValueContacts = [
SELECT Id, Name, Email
FROM Contact
WHERE Id IN (
SELECT ContactId__c
FROM DataCloudSegmentMember__c
WHERE SegmentName__c = 'high_value_active_customers'
)
];
Data Cloud Vector Database ingests, stores, unifies, indexes, and allows semantic queries of unstructured data using generative AI techniques. It creates embeddings that enable semantic querying and seamless integration with structured data in the Einstein platform.
Supported Unstructured Data:
┌────────────────────────────────────────────────────────────┐
│                 Unstructured Data Sources                  │
│   Emails │ Documents │ Transcripts │ Social │ Knowledge    │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│                Text Embedding Generation                   │
│   Uses LLM to convert text → vector embeddings             │
│   (768-dimensional numeric representations)                │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│            Vector Database Storage & Indexing              │
│   Stores embeddings with metadata                          │
│   Creates high-performance vector index                    │
└────────────────────────────┬───────────────────────────────┘
                             │
┌────────────────────────────▼───────────────────────────────┐
│                 Semantic Search Queries                    │
│   Natural language query → embedding → similarity          │
│   Returns most semantically similar content                │
└────────────────────────────────────────────────────────────┘
Semantic search understands the meaning and intent of queries, going beyond keyword matching:
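The mechanics are: embed the query, embed each document, rank by cosine similarity. The sketch below uses a toy bag-of-words "embedding" purely to make the ranking step concrete; Data Cloud's actual embeddings come from an LLM, not word counts:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts (real systems use LLM vectors)."""
    return Counter(text.lower().split())

def cosine(v1, v2):
    """Cosine similarity between two sparse vectors."""
    dot = sum(v1[t] * v2[t] for t in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

def semantic_rank(query, docs, top_k=3):
    """Return the top_k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]
```

Swapping `embed` for a real embedding model turns this from keyword overlap into true semantic matching; the ranking logic stays the same.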
Step 1: Configure Unstructured Data Sources
Setup → Data Cloud → Data Sources → Create
- Source Type: Unstructured Data
- Options:
  ├─ Salesforce Knowledge
  ├─ EmailMessage object
  ├─ External documents (S3, Azure Blob, Google Drive)
  ├─ API-based ingestion
  └─ ContentDocument/File objects
Step 2: Enable Vector Indexing
// API to index unstructured content. The method is async (@future with
// callout=true) because triggers cannot make synchronous callouts; metadata
// travels as a JSON string since @future params must be primitives.
public class VectorDatabaseService {
    @future(callout=true)
    public static void indexDocument(String documentId, String content, String metadataJson) {
        // Create vector embedding request
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:DataCloud/v1/vector/index');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');

        Map<String, Object> payload = new Map<String, Object>{
            'documentId' => documentId,
            'content' => content,
            'metadata' => (Map<String, Object>) JSON.deserializeUntyped(metadataJson),
            'source' => 'Salesforce',
            'timestamp' => System.now().getTime()
        };
        req.setBody(JSON.serialize(payload));

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() == 200) {
            System.debug('Document indexed: ' + documentId);
        } else {
            System.debug('Indexing failed: ' + res.getBody());
        }
    }
}

// Trigger to auto-index Knowledge articles
trigger KnowledgeArticleTrigger on Knowledge__kav (after insert, after update) {
    for (Knowledge__kav article : Trigger.new) {
        if (article.PublishStatus == 'Online') {
            Map<String, Object> metadata = new Map<String, Object>{
                'articleNumber' => article.ArticleNumber,
                'title' => article.Title,
                'category' => article.Category__c,
                'language' => article.Language
            };
            VectorDatabaseService.indexDocument(
                article.Id,
                article.Body__c,
                JSON.serialize(metadata)
            );
        }
    }
}
Step 3: Perform Semantic Search
public class SemanticSearchService {
    @InvocableMethod(label='Semantic Search' description='Search unstructured data semantically')
    public static List<SearchResult> semanticSearch(List<SearchRequest> requests) {
        List<SearchResult> results = new List<SearchResult>();
        for (SearchRequest req : requests) {
            HttpRequest httpReq = new HttpRequest();
            httpReq.setEndpoint('callout:DataCloud/v1/vector/search');
            httpReq.setMethod('POST');
            httpReq.setHeader('Content-Type', 'application/json');

            Map<String, Object> payload = new Map<String, Object>{
                'query' => req.query,
                'topK' => req.maxResults,
                'filters' => String.isBlank(req.filtersJson)
                    ? new Map<String, Object>()
                    : (Map<String, Object>) JSON.deserializeUntyped(req.filtersJson),
                'includeMetadata' => true
            };
            httpReq.setBody(JSON.serialize(payload));

            HttpResponse httpRes = new Http().send(httpReq);
            if (httpRes.getStatusCode() == 200) {
                Map<String, Object> response = (Map<String, Object>)
                    JSON.deserializeUntyped(httpRes.getBody());
                List<Object> hits = (List<Object>) response.get('results');

                SearchResult result = new SearchResult();
                result.query = req.query;
                result.matches = new List<String>();
                for (Object hit : hits) {
                    Map<String, Object> doc = (Map<String, Object>) hit;
                    result.matches.add((String) doc.get('content'));
                }
                results.add(result);
            }
        }
        return results;
    }

    public class SearchRequest {
        @InvocableVariable(required=true)
        public String query;
        @InvocableVariable
        public Integer maxResults = 10;
        // Invocable variables can't be Maps, so filters travel as a JSON string
        @InvocableVariable
        public String filtersJson;
    }

    public class SearchResult {
        @InvocableVariable
        public String query;
        @InvocableVariable
        public List<String> matches;
    }
}
Hybrid search combines semantic search with traditional keyword search for improved accuracy:
Benefits:
Use Case Example:
Service agent searches: "customer wants refund for SKU-12345"
Semantic Search finds:
- Return policy documents
- Refund procedures
- Customer satisfaction articles
Keyword Search finds:
- Specific SKU-12345 product documentation
- Previous cases mentioning SKU-12345
- Product-specific return windows
Hybrid Search combines both:
- Return procedures specifically for SKU-12345
- Previous refund cases for this product
- Product warranty terms
Implementation:
public class HybridSearchService {
    public static List<Map<String, Object>> hybridSearch(String query, Map<String, Object> filters) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:DataCloud/v1/search/hybrid');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');

        Map<String, Object> payload = new Map<String, Object>{
            'query' => query,
            'semantic' => new Map<String, Object>{
                'enabled' => true,
                'weight' => 0.6   // 60% semantic
            },
            'keyword' => new Map<String, Object>{
                'enabled' => true,
                'weight' => 0.4   // 40% keyword
            },
            'filters' => filters,
            'topK' => 20
        };
        req.setBody(JSON.serialize(payload));

        HttpResponse res = new Http().send(req);
        List<Map<String, Object>> rows = new List<Map<String, Object>>();
        if (res.getStatusCode() == 200) {
            Map<String, Object> response = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
            // Copy element-by-element: List<Object> can't be cast to List<Map<String, Object>>
            for (Object row : (List<Object>) response.get('results')) {
                rows.add((Map<String, Object>) row);
            }
        }
        return rows;
    }
}
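Under the hood, hybrid ranking boils down to blending two normalized score lists with the configured weights. A sketch of that fusion step, using the same 60/40 split as the payload above (the scoring function itself is illustrative, not Data Cloud's actual ranker):

```python
def hybrid_score(semantic, keyword, w_sem=0.6, w_kw=0.4):
    """Blend normalized semantic and keyword scores (60/40 by default)."""
    return w_sem * semantic + w_kw * keyword

def fuse(sem_hits, kw_hits, top_k=20):
    """Rank documents by blended score.

    sem_hits / kw_hits: dicts of doc_id -> normalized score in [0, 1];
    a document missing from one list scores 0.0 on that side.
    """
    ids = set(sem_hits) | set(kw_hits)
    ranked = sorted(
        ids,
        key=lambda i: hybrid_score(sem_hits.get(i, 0.0), kw_hits.get(i, 0.0)),
        reverse=True,
    )
    return ranked[:top_k]
```

Tuning the weights shifts results toward conceptual matches (higher `w_sem`) or exact-term matches like SKU numbers (higher `w_kw`).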
Vector database supports cross-language semantic search:
Configuration:
Data Cloud → Vector Database → Settings
- Enable multi-language support
- Supported languages: 100+ languages via multilingual embeddings
- Automatic language detection
- Cross-language similarity matching
1. Customer Service Knowledge Retrieval
// Agentforce action: Find relevant knowledge articles
public class KnowledgeRetrievalService {
    @InvocableMethod(label='Find Relevant Articles')
    public static List<String> findRelevantArticles(List<String> customerQueries) {
        List<String> articles = new List<String>();
        for (String query : customerQueries) {
            SemanticSearchService.SearchRequest sr = new SemanticSearchService.SearchRequest();
            sr.query = query;
            sr.maxResults = 5;

            // Semantic search finds conceptually similar articles
            List<SemanticSearchService.SearchResult> results =
                SemanticSearchService.semanticSearch(
                    new List<SemanticSearchService.SearchRequest>{ sr });
            if (!results.isEmpty()) {
                articles.addAll(results[0].matches);
            }
        }
        return articles;
    }
}
2. Case Similarity Detection
// Find similar past cases to suggest solutions
public class CaseSimilarityService {
    public static List<Case> findSimilarCases(String caseDescription) {
        SemanticSearchService.SearchRequest sr = new SemanticSearchService.SearchRequest();
        sr.query = caseDescription;
        sr.maxResults = 10;

        // Semantic search in past cases
        List<SemanticSearchService.SearchResult> results =
            SemanticSearchService.semanticSearch(
                new List<SemanticSearchService.SearchRequest>{ sr });

        // Extract case IDs from result metadata
        Set<Id> caseIds = new Set<Id>();
        // ... extract IDs from results

        return [SELECT Id, Subject, Description, Status, Resolution__c
                FROM Case
                WHERE Id IN :caseIds
                AND Status = 'Closed'
                ORDER BY ClosedDate DESC];
    }
}
3. Lead Scoring from Unstructured Data
// Analyze email content and web behavior for lead scoring
public class LeadScoringService {
    public static Decimal scoreLeadFromContent(Id leadId) {
        // Get all email interactions
        List<EmailMessage> emails = [SELECT Id, TextBody
                                     FROM EmailMessage
                                     WHERE RelatedToId = :leadId];

        // Concatenate bodies (String.join avoids quadratic concatenation)
        List<String> bodies = new List<String>();
        for (EmailMessage email : emails) {
            bodies.add(email.TextBody);
        }
        String allContent = String.join(bodies, ' ');

        // Check semantic similarity to high-intent phrases
        List<String> intentPhrases = new List<String>{
            'ready to purchase',
            'need pricing quote',
            'schedule demo',
            'implementation timeline'
        };

        Decimal score = 0;
        for (String phrase : intentPhrases) {
            score += calculateSemanticSimilarity(allContent, phrase) * 10;
        }
        return score;
    }

    private static Decimal calculateSemanticSimilarity(String content, String phrase) {
        // Placeholder: call the Data Cloud vector search API (or another
        // similarity service) and return a score in the range 0-1
        return content.containsIgnoreCase(phrase) ? 1 : 0;
    }
}
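The intent-scoring loop above is easy to prototype off-platform before wiring in a real similarity API. The sketch below uses token-overlap (Jaccard) similarity as a deterministic stand-in for a semantic score; phrase list and weighting follow the Apex example:

```python
# High-intent phrases from the lead-scoring example above
INTENT_PHRASES = [
    "ready to purchase",
    "need pricing quote",
    "schedule demo",
    "implementation timeline",
]

def jaccard(text, phrase):
    """Token-overlap similarity: a stand-in for a real semantic score."""
    a, b = set(text.lower().split()), set(phrase.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def score_lead(email_text):
    """Sum weighted similarities against every intent phrase."""
    return sum(jaccard(email_text, p) * 10 for p in INTENT_PHRASES)
```

Replacing `jaccard` with an embedding-based similarity call gives the semantic version without changing the scoring loop.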
Query Data Cloud using standard SQL:
-- Complex analytical query across multiple sources
SELECT
c.customer_id,
c.name,
c.segment,
COUNT(DISTINCT o.order_id) as total_orders,
SUM(o.order_total) as revenue,
AVG(s.satisfaction_score) as avg_satisfaction,
MAX(o.order_date) as last_order_date
FROM
unified_customer c
INNER JOIN unified_orders o ON c.customer_id = o.customer_id
LEFT JOIN support_interactions s ON c.customer_id = s.customer_id
WHERE
o.order_date >= CURRENT_DATE - INTERVAL '90' DAY
GROUP BY
c.customer_id, c.name, c.segment
HAVING
COUNT(DISTINCT o.order_id) >= 3
ORDER BY
revenue DESC
LIMIT 100
# External system → Data Cloud authentication
import jwt   # PyJWT
import time
import requests

def get_data_cloud_access_token(client_id, private_key, username,
                                login_url="https://login.salesforce.com"):
    """Get an access token via the OAuth 2.0 JWT bearer flow."""
    # Create JWT; note 'aud' must be the login host
    # (login.salesforce.com, or test.salesforce.com for sandboxes),
    # not your instance URL
    payload = {
        'iss': client_id,
        'sub': username,
        'aud': login_url,
        'exp': int(time.time()) + 180  # 3 minutes
    }
    encoded_jwt = jwt.encode(payload, private_key, algorithm='RS256')

    # Exchange JWT for access token
    response = requests.post(f"{login_url}/services/oauth2/token", data={
        'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',
        'assertion': encoded_jwt
    })
    response.raise_for_status()
    return response.json()['access_token']