Automates SAP Datasphere administration via CLI for bulk user/space provisioning, connection management, certificate rotation, and CI/CD pipeline integration.
npx claudepluginhub mariodefelipe/sap-datasphere-plugin-for-claude-cowork

This skill uses the workspace's default tool permissions.
The SAP Datasphere CLI (Command-Line Interface) is the power user's gateway to programmatic administration and automation. While the Datasphere UI excels at interactive tasks, the CLI is your tool of choice when you need to:
- Provision users and spaces in bulk
- Manage connections across many systems
- Rotate certificates on a schedule
- Integrate administration into CI/CD pipelines

The table below shows where each interface shines:
| Scenario | CLI | GUI |
|---|---|---|
| Creating 500 users with attributes | ✓ | ✗ |
| Exploring data model visually | ✗ | ✓ |
| One-time user creation | Possible | ✓ |
| Batch certificate rotation | ✓ | ✗ |
| Setting up connection for testing | ✗ | ✓ |
| Deploying 10 connections across 5 systems | ✓ | ✗ |
| Configuring space capacity | ✓ | ✓ |
| Validating connection credentials | ✓ | ✓ |
Before using the CLI, configure authentication to your Datasphere instance.
Service keys are non-human identities ideal for automation, CI/CD, and scheduled tasks.
1. Create a service key in Datasphere.
2. Configure the CLI with the service key:
datasphere config init \
--client-id "your-client-id" \
--client-secret "your-client-secret" \
--instance-url "https://your-datasphere-instance.com" \
--auth-method service-key
Alternatively, store the credentials in environment variables (for CI/CD pipelines):
export DATASPHERE_CLIENT_ID="your-client-id"
export DATASPHERE_CLIENT_SECRET="your-client-secret"
export DATASPHERE_INSTANCE_URL="https://your-datasphere-instance.com"
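With these variables exported, the CLI can be configured non-interactively. A minimal sketch, assuming the flags shown above accept standard shell variable expansion:
datasphere config init \
  --client-id "$DATASPHERE_CLIENT_ID" \
  --client-secret "$DATASPHERE_CLIENT_SECRET" \
  --instance-url "$DATASPHERE_INSTANCE_URL" \
  --auth-method service-key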
For personal workstations with interactive CLI use:
datasphere config init --auth-method oauth
# Opens browser for authentication
datasphere config validate
# Output: Configuration valid. Connected to datasphere.acme.com
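In scripts, it is worth failing fast when the CLI is not configured. A sketch that assumes config validate returns a non-zero exit code on failure (conventional for CLIs, but not confirmed here):
# Abort early if authentication is not set up
datasphere config validate || { echo "Datasphere CLI not configured" >&2; exit 1; }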
Spaces are the foundational containers in Datasphere. Manage them programmatically for consistent environments.
datasphere spaces create \
--name "SALES_ANALYTICS" \
--description "Sales and Revenue Analytics Space" \
--ram-allocation 100 \
--disk-allocation 500 \
--priority standard
Create a spaces.json file for reusable space configurations:
{
"spaces": [
{
"name": "SALES_ANALYTICS",
"description": "Sales and Revenue Analytics",
"configuration": {
"memory": {
"allocated_gb": 100,
"reserved_gb": 50
},
"disk": {
"allocated_gb": 500
},
"priority": "standard",
"network": {
"enable_public_access": false,
"data_isolation_level": "tenant"
}
},
"owner": "sales-admin@company.com",
"tags": ["production", "analytics"]
},
{
"name": "FINANCE_REPORTING",
"description": "Finance and Accounting Reporting",
"configuration": {
"memory": {
"allocated_gb": 150,
"reserved_gb": 75
},
"disk": {
"allocated_gb": 1000
},
"priority": "high",
"network": {
"enable_public_access": false,
"data_isolation_level": "tenant"
}
},
"owner": "finance-admin@company.com",
"tags": ["production", "finance"]
}
]
}
datasphere spaces create-bulk --file spaces.json --validate --dry-run
# Review output before committing
datasphere spaces create-bulk --file spaces.json --confirm
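In automation, chaining the two invocations ensures nothing is committed unless the dry run succeeds. A sketch assuming the CLI exits non-zero when validation fails:
# Only commit if the dry run passes
datasphere spaces create-bulk --file spaces.json --validate --dry-run \
  && datasphere spaces create-bulk --file spaces.json --confirm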
Control memory, disk, and processing priority during creation:
datasphere spaces create \
--name "HIGH_PERFORMANCE_SPACE" \
--ram-allocation 200 \
--disk-allocation 2000 \
--priority high \
--reserved-memory 100 \
--network-isolation strict
Resource Allocation Guide:
- low — shared resources
- standard — default
- high — guaranteed resources

Duplicate an existing space configuration as a template:
datasphere spaces clone \
--source "PROD_TEMPLATE" \
--target "NEW_ENVIRONMENT" \
--copy-connections true \
--copy-users false
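Cloning also lends itself to stamping out per-environment copies from one template. A sketch using only the flags shown above (the SALES_ naming scheme is illustrative):
# Create DEV/TEST/PROD spaces from a shared template
for env in DEV TEST PROD; do
  datasphere spaces clone \
    --source "PROD_TEMPLATE" \
    --target "SALES_${env}" \
    --copy-connections true \
    --copy-users false
done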
Modify existing space settings:
datasphere spaces update SALES_ANALYTICS \
--new-ram-allocation 150 \
--new-disk-allocation 750 \
--new-description "Updated: Sales and Revenue Analytics (upgraded)"
Efficiently provision and manage users at scale.
Create a users.json file:
{
"users": [
{
"email": "alice.johnson@company.com",
"first_name": "Alice",
"last_name": "Johnson",
"roles": [
{
"role": "datasphere.admin",
"scope": "global"
}
],
"space_assignments": [
{
"space_name": "SALES_ANALYTICS",
"role": "space_admin"
}
],
"attributes": {
"department": "Sales",
"cost_center": "CC-1001",
"manager": "bob.smith@company.com"
},
"status": "active"
},
{
"email": "charlie.brown@company.com",
"first_name": "Charlie",
"last_name": "Brown",
"roles": [
{
"role": "datasphere.analyst",
"scope": "global"
}
],
"space_assignments": [
{
"space_name": "SALES_ANALYTICS",
"role": "viewer"
},
{
"space_name": "FINANCE_REPORTING",
"role": "editor"
}
],
"attributes": {
"department": "Finance",
"cost_center": "CC-2001",
"manager": "alice.johnson@company.com"
},
"status": "active"
}
]
}
datasphere users create-bulk \
--file users.json \
--validate \
--dry-run
# Review output
datasphere users create-bulk \
--file users.json \
--send-invitations true \
--confirm
Output: lists success or failure per user and generates a report with invitation links.
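Assuming users create-bulk supports the same --output json flag used later in this guide for spaces, failed rows can be pulled out for follow-up with jq:
# Capture structured results and list only the failures
datasphere users create-bulk --file users.json --confirm --output json > user_results.json
jq '.results[] | select(.status == "FAILED")' user_results.json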
Scoped Roles attach users to specific spaces with granular permissions:
datasphere users assign-role \
--email "alice.johnson@company.com" \
--role "datasphere.space_admin" \
--space "SALES_ANALYTICS" \
--effective-date "2024-02-01"
Available Scoped Roles:
- datasphere.space_admin — Full space administration
- datasphere.space_editor — Create and modify objects
- datasphere.space_viewer — Read-only access

Store custom metadata on users for governance and integration:
datasphere users set-attribute \
--email "alice.johnson@company.com" \
--attribute "department" \
--value "Sales" \
--attribute "cost_center" \
--value "CC-1001"
Bulk update attributes:
datasphere users batch-attributes \
--file user_attributes.json
Where user_attributes.json contains:
{
"updates": [
{
"email": "alice.johnson@company.com",
"attributes": {
"department": "Sales",
"cost_center": "CC-1001",
"manager": "bob.smith@company.com"
}
}
]
}
Soft Deprovisioning (disable access without deleting):
datasphere users disable \
--email "alice.johnson@company.com" \
--reason "Employee departure" \
--effective-date "2024-03-15"
Hard Deprovisioning (permanent deletion):
datasphere users delete \
--email "alice.johnson@company.com" \
--transfer-owned-objects-to "admin@company.com" \
--confirm
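The two commands combine into a simple offboarding routine: disable access immediately, then delete after a verification window. A minimal sketch using only the flags shown above (the offboard.sh helper name and grace-period policy are illustrative, not product defaults):
#!/bin/bash
# offboard.sh <email> — hypothetical helper: disable now, print the delete step
set -e
EMAIL="$1"
datasphere users disable \
  --email "$EMAIL" \
  --reason "Employee departure" \
  --effective-date "$(date +%F)"
# Hard deletion is left as a deliberate second step after verification
echo "Disabled $EMAIL. After the verification window, run:"
echo "  datasphere users delete --email $EMAIL --transfer-owned-objects-to admin@company.com --confirm"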
Datasphere connections link to source and target systems. Manage them at scale via JSON templates.
Create a connections.json file:
{
"connections": [
{
"name": "PROD_SAP_S4",
"type": "sap_s4hana",
"description": "Production SAP S/4HANA System",
"technical_user": "DATASPHERE_USER",
"connection_details": {
"host": "s4h-prod.company.com",
"port": 50013,
"client": "100",
"use_ssl": true,
"tls_version": "1.2"
},
"authentication": {
"method": "basic",
"username_variable": "SAP_USER",
"password_variable": "SAP_PASS"
},
"test_table": "MARA",
"retry_policy": {
"max_attempts": 3,
"backoff_seconds": 5
},
"owner": "integration-admin@company.com"
},
{
"name": "SNOWFLAKE_WAREHOUSE",
"type": "snowflake",
"description": "Snowflake Data Warehouse",
"connection_details": {
"account_identifier": "xy12345.us-east-1",
"warehouse": "COMPUTE_WH",
"database": "DATASPHERE_DB",
"schema": "STAGING"
},
"authentication": {
"method": "oauth",
"client_id_variable": "SF_CLIENT_ID",
"client_secret_variable": "SF_CLIENT_SECRET",
"token_endpoint": "https://xy12345.us-east-1.snowflakecomputing.com/oauth/authorize"
},
"test_query": "SELECT 1",
"owner": "data-team@company.com"
}
]
}
datasphere connections create-bulk \
--file connections.json \
--validate-credentials \
--dry-run
# Verify output
datasphere connections create-bulk \
--file connections.json \
--confirm
Verify connectivity before deployment:
datasphere connections test \
--name "PROD_SAP_S4" \
--verbose
# Output: Connection test successful. Response time: 145ms
Batch test multiple connections:
datasphere connections test-batch \
--file connections.json \
--generate-report test_results.html
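In a pipeline, the batch test doubles as a deployment gate. A sketch assuming test-batch exits non-zero when any connection fails (conventional, but not confirmed here):
# Block the release if any connection test fails
if ! datasphere connections test-batch --file connections.json --generate-report test_results.html; then
  echo "Connection tests failed; see test_results.html" >&2
  exit 1
fi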
Manage TLS server certificates for secure connections to external systems.
List Current Certificates:
datasphere configuration certificates list \
--show-expiry \
--sort-by "expiry_date"
Upload New Certificate:
datasphere configuration certificates upload \
--name "PROD_SAP_S4_CERT" \
--certificate-file "/path/to/certificate.pem" \
--key-file "/path/to/private.key" \
--description "Production S/4HANA TLS Certificate"
Certificate Rotation Workflow:
# 1. Upload new certificate
datasphere configuration certificates upload \
--name "PROD_SAP_S4_CERT_NEW" \
--certificate-file "/path/to/new_cert.pem" \
--key-file "/path/to/new_key.pem" \
--scheduled-activation "2024-02-15T00:00:00Z"
# 2. Activate new certificate (automatic at scheduled time or manual)
datasphere configuration certificates activate \
--name "PROD_SAP_S4_CERT_NEW"
# 3. Deactivate old certificate
datasphere configuration certificates deactivate \
--name "PROD_SAP_S4_CERT"
# 4. Clean up (optional, after verification period)
datasphere configuration certificates delete \
--name "PROD_SAP_S4_CERT" \
--force
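The four steps can be wrapped in a reusable script. A minimal sketch that parameterizes the certificate names and pauses for verification between activation and cleanup; it uses only the commands shown above (the rotate_cert.sh name is illustrative):
#!/bin/bash
# rotate_cert.sh OLD_NAME NEW_NAME CERT_FILE KEY_FILE
set -e
OLD="$1"; NEW="$2"; CERT="$3"; KEY="$4"
datasphere configuration certificates upload \
  --name "$NEW" --certificate-file "$CERT" --key-file "$KEY"
datasphere configuration certificates activate --name "$NEW"
datasphere configuration certificates deactivate --name "$OLD"
read -p "Verify connections, then press Enter to delete $OLD..." _
datasphere configuration certificates delete --name "$OLD" --force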
Create an automated monitoring script:
#!/bin/bash
# cert_expiry_check.sh - Monitor certificate expiry
export ALERT_DAYS=30  # exported so the embedded Python below can read it
RECIPIENTS="security-team@company.com"  # informational; mailing is handled by the cron entry below
datasphere configuration certificates list --json > certs.json
python3 << 'EOF'
import json
import os
from datetime import datetime, timedelta

alert_days = int(os.environ.get("ALERT_DAYS", "30"))
with open('certs.json') as f:
    certs = json.load(f)

alert_threshold = datetime.utcnow() + timedelta(days=alert_days)
for cert in certs['certificates']:
    expiry = datetime.fromisoformat(cert['expiry_date'])
    if expiry < alert_threshold:
        print(f"ALERT: {cert['name']} expires on {cert['expiry_date']}")
EOF
Schedule in cron:
# Run daily at 6 AM; send mail only when the script reports expiring certificates
0 6 * * * out=$(/opt/datasphere/cert_expiry_check.sh); [ -n "$out" ] && echo "$out" | mail -s "Datasphere Certificate Expiry Alert" security-team@company.com
Execute administrative tasks on a schedule:
# Daily space quota report
0 2 * * * datasphere spaces report --format json > /var/reports/space_quota_$(date +\%Y\%m\%d).json
# Weekly user access review
0 3 * * 0 datasphere users list --inactive-days 30 > /var/reports/inactive_users.txt
# Monthly certificate expiry check
0 4 1 * * /opt/datasphere/cert_expiry_check.sh
Create .github/workflows/datasphere-deploy.yml:
name: Deploy Datasphere Configuration
on:
push:
branches: [main]
paths: ['datasphere/**']
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Datasphere CLI
run: |
curl -sL https://datasphere-cli.company.com/install.sh | bash
datasphere --version
- name: Configure CLI
env:
DATASPHERE_CLIENT_ID: ${{ secrets.DATASPHERE_CLIENT_ID }}
DATASPHERE_CLIENT_SECRET: ${{ secrets.DATASPHERE_CLIENT_SECRET }}
DATASPHERE_INSTANCE_URL: ${{ secrets.DATASPHERE_INSTANCE_URL }}
run: datasphere config init --auth-method service-key
- name: Validate Configuration
run: |
datasphere spaces create-bulk --file datasphere/spaces.json --validate --dry-run
datasphere users create-bulk --file datasphere/users.json --validate --dry-run
datasphere connections create-bulk --file datasphere/connections.json --validate --dry-run
- name: Deploy Changes
run: |
datasphere spaces create-bulk --file datasphere/spaces.json --confirm
datasphere users create-bulk --file datasphere/users.json --send-invitations true --confirm
datasphere connections create-bulk --file datasphere/connections.json --confirm
- name: Run Post-Deployment Tests
run: |
datasphere connections test-batch --file datasphere/connections.json --generate-report deployment_test.html
- name: Archive Reports
if: always()
uses: actions/upload-artifact@v3
with:
name: deployment-reports
path: deployment_test.html
Create datasphere-pipeline.yml:
trigger:
branches:
include:
- main
paths:
include:
- datasphere/*
pool:
vmImage: 'ubuntu-latest'
stages:
- stage: Validate
jobs:
- job: ValidateConfiguration
steps:
- task: Bash@3
inputs:
targetType: 'inline'
script: |
curl -sL https://datasphere-cli.company.com/install.sh | bash
export DATASPHERE_CLIENT_ID=$(DATASPHERE_CLIENT_ID)
export DATASPHERE_CLIENT_SECRET=$(DATASPHERE_CLIENT_SECRET)
export DATASPHERE_INSTANCE_URL=$(DATASPHERE_INSTANCE_URL)
datasphere config init --auth-method service-key
datasphere spaces create-bulk --file datasphere/spaces.json --validate --dry-run
datasphere users create-bulk --file datasphere/users.json --validate --dry-run
- stage: Deploy
condition: succeeded()
jobs:
- deployment: DeployDatasphere
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: Bash@3
inputs:
targetType: 'inline'
script: |
  # The deploy job runs on a fresh agent, so the CLI must be
  # installed and configured again before deploying
  curl -sL https://datasphere-cli.company.com/install.sh | bash
  export DATASPHERE_CLIENT_ID=$(DATASPHERE_CLIENT_ID)
  export DATASPHERE_CLIENT_SECRET=$(DATASPHERE_CLIENT_SECRET)
  export DATASPHERE_INSTANCE_URL=$(DATASPHERE_INSTANCE_URL)
  datasphere config init --auth-method service-key
  datasphere spaces create-bulk --file datasphere/spaces.json --confirm
  datasphere users create-bulk --file datasphere/users.json --send-invitations true --confirm
  datasphere connections create-bulk --file datasphere/connections.json --confirm
Version all Datasphere configuration in Git:
datasphere-config/
├── spaces/
│ ├── sales.json
│ ├── finance.json
│ └── marketing.json
├── users/
│ ├── bulk_onboarding.json
│ └── role_assignments.json
├── connections/
│ ├── sap_systems.json
│ └── data_warehouses.json
├── certificates/
│ └── certificates.json
└── deploy.sh
deploy.sh orchestrates all deployments:
#!/bin/bash
set -e
echo "Deploying Datasphere Configuration..."
# Validate everything first. --file is assumed to take one file per
# invocation, so each directory is looped over explicitly.
for f in spaces/*.json; do
  echo "Validating $f..."
  datasphere spaces create-bulk --file "$f" --validate --dry-run
done
for f in users/*.json; do
  echo "Validating $f..."
  datasphere users create-bulk --file "$f" --validate --dry-run
done
for f in connections/*.json; do
  echo "Validating $f..."
  datasphere connections create-bulk --file "$f" --validate --dry-run
done
# Deploy
for f in spaces/*.json; do
  echo "Deploying $f..."
  datasphere spaces create-bulk --file "$f" --confirm
done
for f in users/*.json; do
  echo "Deploying $f..."
  datasphere users create-bulk --file "$f" --send-invitations true --confirm
done
for f in connections/*.json; do
  echo "Deploying $f..."
  datasphere connections create-bulk --file "$f" --confirm
done
echo "Deployment complete!"
For troubleshooting, raise the log level to capture detailed request/response traffic:
datasphere --log-level debug spaces create-bulk --file spaces.json
# Output includes detailed request/response logs
datasphere spaces create-bulk --file spaces.json --output json > deployment_log.json
# Parse with jq for post-processing
jq '.results[] | select(.status == "FAILED")' deployment_log.json
Common error codes and resolutions:

| Code | Error | Resolution |
|---|---|---|
| 401 | Authentication failed | Verify service key credentials and expiry |
| 403 | Permission denied | Check role assignments and space membership |
| 409 | Conflict (object exists) | Use --force-overwrite or change name |
| 422 | Invalid configuration | Validate JSON schema and retry |
| 503 | Service unavailable | Retry with exponential backoff |
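Transient failures such as 503 can be handled by wrapping CLI calls in a retry helper with exponential backoff, as in the sketch below: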
#!/bin/bash
retry_with_backoff() {
local max_attempts=5
local attempt=1
local delay=2
while [ $attempt -le $max_attempts ]; do
if "$@"; then
return 0
fi
if [ $attempt -lt $max_attempts ]; then
echo "Attempt $attempt failed. Retrying in ${delay}s..."
sleep $delay
delay=$((delay * 2))
fi
attempt=$((attempt + 1))
done
return 1
}
retry_with_backoff datasphere spaces create-bulk --file spaces.json --confirm
This skill leverages these MCP (Model Context Protocol) tools for enhanced automation:
- list_spaces — List all spaces with metadata
- get_space_info — Retrieve detailed space configuration
- list_database_users — Query user records from the Datasphere database
- create_database_user — Create users programmatically (advanced)
- test_connection — Validate a connection before deployment

Use these tools in conjunction with CLI commands for end-to-end automation workflows.
Always run with --dry-run before committing changes.

See references/cli-reference.md for complete command syntax, JSON schemas, and troubleshooting guides.