Creates and executes temporary Python, Node.js, shell, Ruby, or Go scripts for API integrations, data processing with libraries, and custom tools during workflows.
```shell
npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-1 --plugin mbruhler-claude-orchestration
```

This skill uses the workspace's default tool permissions.
I help you create and execute temporary scripts during workflow execution. Perfect for API integrations, data processing with specialized libraries, and creating temporary tools that execute and return results.
I automatically activate when you:
Temporary scripts are code files that:
Supported Languages: Python, Node.js, shell, Ruby, and Go.
```flow
# 1. Ask for API credentials
AskUserQuestion:"Reddit API key needed":api_key ->
# 2. Create Python script with embedded credentials
general-purpose:"Create Python script: reddit_client.py with {api_key}":script_path ->
# 3. Execute script and capture output
Bash:"python3 {script_path}":reddit_data ->
# 4. Process results in workflow
general-purpose:"Analyze {reddit_data} and create summary":analysis ->
# 5. Cleanup happens automatically
```
See script-lifecycle.md for complete details.
Overview:

1. Creation: write the script to /tmp/workflow-scripts/
2. Preparation: set permissions (chmod +x) and install dependencies if needed
3. Execution: run via the Bash tool, capturing stdout/stderr
4. Data Return: parse the output (JSON, CSV, text) and pass it to the next workflow step
5. Cleanup: remove the script files and clean up temp directories
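The five lifecycle stages can be sketched in plain Python. This is an illustrative sketch only, not the skill's actual implementation; `run_temp_script` is a name invented for this example:

```python
import os
import shutil
import stat
import subprocess
import sys
import tempfile

def run_temp_script(source: str, args=None, script_name="script.py"):
    """Create, execute, and clean up a temporary script (lifecycle sketch)."""
    # 1. Creation: write the script into its own temp directory
    workdir = tempfile.mkdtemp(prefix="workflow-scripts-")
    path = os.path.join(workdir, script_name)
    with open(path, "w") as f:
        f.write(source)
    try:
        # 2. Preparation: restrict permissions to the owner (chmod 700)
        os.chmod(path, stat.S_IRWXU)
        # 3. Execution: run via the interpreter, capturing stdout/stderr
        result = subprocess.run(
            [sys.executable, path, *(args or [])],
            capture_output=True, text=True, timeout=30,
        )
        # 4. Data return: hand stdout (plus any error text) back to the caller
        return result.stdout, result.stderr
    finally:
        # 5. Cleanup: remove the script directory unconditionally
        shutil.rmtree(workdir, ignore_errors=True)

out, err = run_temp_script("import sys; print(sys.argv[1].upper())", ["hello"])
```

The `finally` block guarantees cleanup even when the script fails or times out.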
Reddit API Client:
```python
# /tmp/workflow-scripts/reddit_client.py
import requests
import json
import sys

api_key = sys.argv[1]
subreddit = sys.argv[2]

headers = {'Authorization': f'Bearer {api_key}'}
response = requests.get(
    f'https://oauth.reddit.com/r/{subreddit}/hot.json',
    headers=headers
)
print(json.dumps(response.json(), indent=2))
```
In workflow:
```flow
$script-creator:"Create reddit_client.py":script ->
Bash:"python3 {script} {api_key} programming":posts ->
general-purpose:"Parse {posts} and extract top 10 titles"
```
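The "extract top 10 titles" step above can be approximated in plain Python. The sketch below assumes the nested `data.children[].data.title` shape that Reddit listing endpoints return; `top_titles` and the sample payload are invented for illustration:

```python
import json

# Minimal stand-in for the JSON the reddit_client.py script would print.
sample = json.dumps({"data": {"children": [
    {"data": {"title": "First post"}},
    {"data": {"title": "Second post"}},
]}})

def top_titles(raw: str, limit: int = 10):
    """Extract up to `limit` post titles from a Reddit-style listing."""
    listing = json.loads(raw)
    return [child["data"]["title"] for child in listing["data"]["children"]][:limit]

titles = top_titles(sample)
```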
CSV Analysis:
```python
# /tmp/workflow-scripts/analyze_data.py
import pandas as pd
import sys

df = pd.read_csv(sys.argv[1])
summary = df.describe().to_json()
print(summary)
```
In workflow:
```flow
general-purpose:"Create analyze_data.py script":script ->
Bash:"pip install pandas && python3 {script} data.csv":analysis ->
general-purpose:"Interpret {analysis} and create report"
```
Article Scraper:
```javascript
// /tmp/workflow-scripts/scraper.js
const axios = require('axios');
const cheerio = require('cheerio');

async function scrapeArticles(url) {
  const {data} = await axios.get(url);
  const $ = cheerio.load(data);
  const articles = [];
  $('.article').each((i, el) => {
    articles.push({
      title: $(el).find('.title').text(),
      url: $(el).find('a').attr('href')
    });
  });
  console.log(JSON.stringify(articles));
}

scrapeArticles(process.argv[2]);
```
In workflow:
```flow
general-purpose:"Create scraper.js":script ->
Bash:"npm install axios cheerio && node {script} https://news.site":articles ->
general-purpose:"Process {articles}"
```
See script-templates.md for complete library.
Quick templates:
See security.md for comprehensive guide.
Quick checklist:
✅ Credentials Management:
✅ File Permissions:
```shell
chmod 700 /tmp/workflow-scripts/script.py  # Owner only
```
✅ Output Sanitization:
✅ Dependency Management:
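As one concrete illustration of output sanitization, a minimal sketch that masks credential-looking substrings before output is passed to the next step (the patterns and the `sanitize_output` helper are hypothetical, not part of the skill):

```python
import re

# Hypothetical sanitizer: mask anything that looks like a bearer token
# or API key in captured script output.
SECRET_PATTERNS = [
    re.compile(r"(Bearer\s+)[A-Za-z0-9._\-]+"),
    re.compile(r"(api[_-]?key\s*[=:]\s*)\S+", re.IGNORECASE),
]

def sanitize_output(text: str) -> str:
    """Replace likely secrets with a redaction marker, keeping the prefix."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(r"\1[REDACTED]", text)
    return text

clean = sanitize_output("request sent with Authorization: Bearer abc123.tok")
```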
See integration-patterns.md for detailed patterns.
```flow
general-purpose:"Create script.py that fetches data":script ->
Bash:"python3 {script}":data ->
general-purpose:"Process {data}"
```
```flow
AskUserQuestion:"API credentials needed":creds ->
general-purpose:"Create api_client.py with {creds}":script ->
Bash:"python3 {script}":results ->
general-purpose:"Analyze {results}"
```
```flow
general-purpose:"Create multiple API clients":scripts ->
[
  Bash:"python3 {scripts.reddit}":reddit_data ||
  Bash:"python3 {scripts.twitter}":twitter_data ||
  Bash:"python3 {scripts.github}":github_data
] ->
general-purpose:"Merge all data sources"
```
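The parallel pattern above can be approximated in plain Python with a thread pool. The three commands below are stand-ins for the generated client scripts, and `run_script` is a name invented for this sketch:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_script(cmd):
    """Run one script and return its stdout, stripped."""
    return subprocess.run(cmd, capture_output=True, text=True, timeout=60).stdout.strip()

# Stand-ins for the reddit/twitter/github client scripts in the flow above.
commands = {
    "reddit": [sys.executable, "-c", "print('reddit data')"],
    "twitter": [sys.executable, "-c", "print('twitter data')"],
    "github": [sys.executable, "-c", "print('github data')"],
}

# Run all three concurrently; map preserves input order, so zip is safe.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(zip(commands, pool.map(run_script, commands.values())))
```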
```flow
@process_batch ->
general-purpose:"Create batch_processor.py":script ->
Bash:"python3 {script} batch_{n}":results ->
(if results.has_more)~> @process_batch ~>
(if results.complete)~> general-purpose:"Finalize"
```
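The batch loop above corresponds to a simple while-loop in Python. This is a sketch; `process_batches` and the fake batch data are invented for illustration:

```python
def process_batches(fetch_batch):
    """Loop sketch mirroring @process_batch: keep fetching until complete."""
    n, merged = 0, []
    while True:
        results = fetch_batch(n)        # e.g. run batch_processor.py batch_{n}
        merged.extend(results["items"])
        if not results["has_more"]:     # the (if results.complete) branch
            return merged
        n += 1                          # the (if results.has_more) branch

# Stand-in for the batch_processor.py script: two batches, then done.
batches = [{"items": [1, 2], "has_more": True}, {"items": [3], "has_more": False}]
merged = process_batches(lambda n: batches[n])
```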
```
/tmp/workflow-scripts/
├── {workflow-id}/          # Unique per workflow
│   ├── reddit_client.py
│   ├── data_processor.py
│   ├── requirements.txt    # Python dependencies
│   ├── package.json        # Node.js dependencies
│   └── .env                # Environment variables
```
Automatic cleanup after workflow:
```shell
rm -rf /tmp/workflow-scripts/{workflow-id}
```
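A minimal Python equivalent of this layout and cleanup, using one unique directory per workflow run (the helper names are invented for illustration):

```python
import os
import shutil
import tempfile
import uuid

# Base directory mirroring /tmp/workflow-scripts/ in the layout above.
BASE = os.path.join(tempfile.gettempdir(), "workflow-scripts")

def create_workflow_dir() -> str:
    """Create a unique {workflow-id} directory for one workflow run."""
    workflow_id = uuid.uuid4().hex[:8]
    path = os.path.join(BASE, workflow_id)
    os.makedirs(path, exist_ok=False)   # fail loudly on a collision
    return path

def cleanup_workflow_dir(path: str) -> None:
    """Equivalent of: rm -rf /tmp/workflow-scripts/{workflow-id}"""
    shutil.rmtree(path, ignore_errors=True)

d = create_workflow_dir()
existed = os.path.isdir(d)
cleanup_workflow_dir(d)
```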
### Method 1: Inline Creation

````flow
general-purpose:"Create Python script:
```python
import requests
import sys

api_key = sys.argv[1]
response = requests.get(
    'https://api.example.com/data',
    headers={'Authorization': f'Bearer {api_key}'}
)
print(response.text)
```
Save to /tmp/workflow-scripts/api_client.py":script_path ->
Bash:"python3 {script_path} {api_key}":data
````
### Method 2: Template-Based Creation
```flow
general-purpose:"Use template: api-rest-client
- Language: Python
- API: Reddit
- Auth: Bearer token
- Output: JSON
Create script in /tmp/workflow-scripts/":script ->
Bash:"python3 {script}":data
```
### Method 3: Script Packages

```flow
general-purpose:"Create script package:
- main.py (entry point)
- utils.py (helper functions)
- requirements.txt (dependencies)
Save to /tmp/workflow-scripts/package/":package_path ->
Bash:"cd {package_path} && pip install -r requirements.txt && python3 main.py":data
```
Python dependencies:

```flow
general-purpose:"Create requirements.txt:
requests==2.31.0
pandas==2.0.0
Save to /tmp/workflow-scripts/":deps ->
general-purpose:"Create script.py":script ->
Bash:"pip install -r {deps} && python3 {script}":data
```
Node.js dependencies:

```flow
general-purpose:"Create package.json with dependencies":package ->
general-purpose:"Create script.js":script ->
Bash:"cd /tmp/workflow-scripts && npm install && node {script}":data
```
Python virtual environment:

```flow
Bash:"python3 -m venv /tmp/workflow-scripts/venv":venv ->
general-purpose:"Create script in venv":script ->
Bash:"source /tmp/workflow-scripts/venv/bin/activate && python3 {script}":data
```
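The virtual-environment pattern can also be sketched with the stdlib `venv` module. This is illustrative only; `with_pip=False` is used here so the sketch works offline, whereas a real workflow would bootstrap pip to install dependencies:

```python
import os
import tempfile
import venv

# Create an isolated environment per workflow so scripts never touch
# global site-packages.
env_dir = os.path.join(tempfile.mkdtemp(prefix="workflow-scripts-"), "venv")
venv.create(env_dir, with_pip=False)  # with_pip=True would also bootstrap pip

# This interpreter is what the Bash step invokes after "activation".
bin_dir = "Scripts" if os.name == "nt" else "bin"
env_python = os.path.join(env_dir, bin_dir, "python")
```

Invoking the environment's interpreter directly is equivalent to sourcing the activate script first.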
Error detection:

```flow
Bash:"python3 {script} 2>&1":output ->
(if output.contains('Error'))~>
general-purpose:"Parse error: {output}":error ->
@review-error:"Script failed: {error}" ~>
(if output.success)~>
general-purpose:"Process {output}"
```
Retry on failure:

```flow
@retry ->
Bash:"python3 {script}":result ->
(if result.failed)~>
general-purpose:"Wait 5 seconds" ->
@retry ~>
(if result.success)~>
general-purpose:"Process {result}"
```
Scripts can return data in various formats:
JSON (preferred):

```python
import json

result = {"data": [...], "status": "success"}
print(json.dumps(result))
```
CSV:

```python
import csv
import sys

writer = csv.writer(sys.stdout)
writer.writerows(data)
```
Plain text:

```python
for item in results:
    print(f"{item['title']}: {item['url']}")
```
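On the consuming side, a workflow step has to turn each of these formats back into structured data. A sketch, assuming a hypothetical `parse_script_output` helper:

```python
import csv
import io
import json

def parse_script_output(stdout: str, fmt: str):
    """Turn raw script stdout into structured data for the next step."""
    if fmt == "json":
        return json.loads(stdout)
    if fmt == "csv":
        return list(csv.reader(io.StringIO(stdout)))
    # plain text: one record per non-empty line
    return [line for line in stdout.splitlines() if line]

as_json = parse_script_output('{"status": "success"}', "json")
as_csv = parse_script_output("title,url\nPost,https://example.com\n", "csv")
as_text = parse_script_output("Post: https://example.com\n", "text")
```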
Cleanup happens automatically after workflow completion:
```flow
# At end of workflow execution:
general-purpose:"Remove all scripts in /tmp/workflow-scripts/{workflow-id}"
```
For long-running workflows:
```flow
general-purpose:"Create and execute script":result ->
general-purpose:"Process {result}":output ->
Bash:"rm -rf /tmp/workflow-scripts/{script-dir}":cleanup
```
✅ Use unique workflow IDs for script directories
✅ Pass credentials as arguments, not hardcoded
✅ Validate and sanitize script output
✅ Use virtual environments for Python
✅ Specify exact dependency versions
✅ Return structured data (JSON preferred)
✅ Clean up after workflow completion
✅ Set restrictive file permissions
✅ Use timeouts for script execution
✅ Log script output for debugging
❌ Hardcode API keys in scripts
❌ Execute untrusted code
❌ Store sensitive data in script files
❌ Leave scripts after workflow
❌ Use global Python/Node packages
❌ Ignore script errors
❌ Return massive outputs (>1MB)
❌ Use system-wide directories
1. Use Descriptive Names: `reddit_api_client.py`, not `script.py`
2. Return Structured Data: always use JSON when possible
3. Error Messages: include detailed error messages in output
4. Logging: add logging for debugging:

   ```python
   import logging

   logging.basicConfig(level=logging.INFO)
   logging.info(f"Fetching data from {url}")
   ```

5. Validation: validate inputs before execution
6. Timeouts: set execution timeouts:

   ```flow
   Bash:"timeout 30 python3 {script}":data
   ```
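A Python equivalent of the `timeout` wrapper, using `subprocess.run`'s built-in timeout, which kills the child process when the limit is exceeded (`run_with_timeout` is a name invented for this sketch):

```python
import subprocess
import sys

def run_with_timeout(cmd, seconds):
    """Equivalent of `timeout 30 python3 script.py`: kill runaway scripts."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=seconds)
        return result.stdout, False       # (output, timed_out)
    except subprocess.TimeoutExpired:
        return "", True

# Demonstration: a deliberately slow script against a short limit.
out, timed_out = run_with_timeout(
    [sys.executable, "-c", "import time; time.sleep(10)"], seconds=0.5
)
```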
See detail files for:
Ready to use temporary scripts? Just describe what API or processing you need!