From aws-data-analytics
Resolves data lake and lakehouse asset references across Glue Data Catalog, S3, S3 Tables, and Redshift using table names, keywords, columns, or S3 paths.
npx claudepluginhub aws/agent-toolkit-for-aws --plugin aws-data-analytics

Invoke with a table name, keyword, column name, or S3 path: [table-name|keyword|column-name|s3://path]

This skill uses the workspace's default tool permissions.
Resolves data lake asset references to concrete catalog entries. Acts as a resolver for other skills and direct user requests. Covers Glue, S3, S3 Tables, and Redshift. Optimized for low token usage — return the answer fast and get out of the way.
Constraints for parameter acquisition:
You MUST execute commands using AWS MCP server tools when connected — they provide validation, sandboxed execution, and audit logging. Fall back to AWS CLI only if MCP is unavailable. You MUST explain each step before executing.
Check for required tools and AWS access before searching.
Constraints:
Verify that the AWS MCP tools (e.g. aws___call_aws) are available; fall back to AWS CLI if not. Confirm AWS access:

aws sts get-caller-identity

Determine the mode:
You SHOULD default to Resolve mode when ambiguous.
Parse the request into search dimensions:
Search sources in order. Stop at the first layer that returns a high-confidence match. Do NOT search all layers every time.
You MUST track which layers were searched and which were skipped. Report this in the output (see Step 6).
Layer 1: Glue Data Catalog (always start here)
You SHOULD use SearchTables as the primary API — it searches table
names, column names, and column comments across the entire catalog in
one call. You MUST NOT loop over databases with get-tables unless
you already know the database name. See
search-strategy.md for patterns.
aws glue search-tables --search-text "orders"
aws glue get-tables --database-name sales --expression "order.*"
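The two commands above map directly onto boto3. A minimal sketch of a paginated SearchTables helper — the `search_catalog` name and the injected `glue` client are illustrative, not part of the skill:

```python
def search_catalog(glue, text, max_results=25):
    """Collect (database, table) pairs from Glue SearchTables,
    following NextToken until results are exhausted."""
    matches, token = [], None
    while True:
        kwargs = {"SearchText": text, "MaxResults": max_results}
        if token:
            kwargs["NextToken"] = token
        page = glue.search_tables(**kwargs)
        matches.extend((t["DatabaseName"], t["Name"]) for t in page["TableList"])
        token = page.get("NextToken")
        if not token:
            return matches
```

Injecting the client keeps the helper testable; in real use it would be `boto3.client("glue")`.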
Layer 2: S3 Reverse Lookup (S3 path provided)
When a user provides an S3 path, you SHOULD default to reverse lookup first — they usually want the Glue table, not the file contents.
aws glue search-tables --search-text "<path-keyword>"
aws s3api list-objects-v2 --bucket <bucket-name> --prefix <prefix>
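One way to derive the `<path-keyword>` for that search, sketched as a stdlib-only helper (the function name is hypothetical):

```python
from urllib.parse import urlparse

def s3_path_keywords(path):
    """Split an s3:// path into (bucket, keywords), deepest prefix
    segment first -- the last segment usually matches the table name."""
    parsed = urlparse(path)
    if parsed.scheme != "s3":
        raise ValueError(f"not an S3 path: {path}")
    segments = [s for s in parsed.path.split("/") if s]
    return parsed.netloc, list(reversed(segments))
```

Try the deepest segment against search-tables first, then walk up the prefix if it misses.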
Layer 3: Redshift Catalog (if user mentions Redshift, warehouse, or lakehouse)
SELECT schema_name, table_name, table_type
FROM svv_all_tables
WHERE table_name ILIKE '%orders%';
Redshift Spectrum external tables also appear in Glue. If Layer 1 found the table with a Spectrum SerDe, skip Layer 3.
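When interpolating the user's term into that ILIKE pattern, LIKE metacharacters in the term should be escaped first. A sketch, with an illustrative helper name:

```python
def ilike_pattern(term):
    """Wrap a search term for a substring ILIKE match, escaping
    LIKE metacharacters so 'order_v2' doesn't match 'orderXv2'."""
    escaped = (term.replace("\\", "\\\\")
                   .replace("%", r"\%")
                   .replace("_", r"\_"))
    return f"%{escaped}%"
```

Pass the result as a bound query parameter rather than formatting it into the SQL string.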
When search-tables returns nothing and S3 Tables enumeration also
misses, you MAY need to scan across databases. Do NOT issue separate
CLI calls per database — that burns turns and tokens. Instead, write a
short Python script using boto3 paginators that does the full scan in
one execution. Write the script to a file and run it with python3.
The script MUST:
- Page through get_databases() to collect all database names
- Call get_tables() with an Expression filter matching the search term

```python
import boto3, sys, json

region = sys.argv[1]
term = sys.argv[2]
glue = boto3.client("glue", region_name=region)

matches = []
db_paginator = glue.get_paginator("get_databases")
for db_page in db_paginator.paginate():
    for db in db_page["DatabaseList"]:
        db_name = db["Name"]
        tbl_paginator = glue.get_paginator("get_tables")
        for tbl_page in tbl_paginator.paginate(
            DatabaseName=db_name, Expression=f".*{term}.*"
        ):
            for tbl in tbl_page["TableList"]:
                matches.append({
                    "database": db_name,
                    "table": tbl["Name"],
                    "format": tbl.get("Parameters", {}).get("classification", "unknown"),
                    "location": tbl.get("StorageDescriptor", {}).get("Location", ""),
                })

print(json.dumps(matches, indent=2) if matches else "No matches found.")
```
You MUST only use this fallback after search-tables and S3 Tables
enumeration have already returned nothing. This is a last resort, not
a first choice.
For broader inventory and audit work, see exploring-data-catalog.

For a high-confidence resolve, return a structured reference. Always include a "Sources searched / skipped" line so the user knows which data stores were checked and which were not.
Table: database_name.table_name
Catalog: default | catalog_name
Format: Parquet | CSV | JSON | ORC | Iceberg
Location: s3://bucket/prefix/
Partition keys: [key1, key2] or none
Sources searched: Glue Data Catalog
Sources skipped: S3, Redshift (stopped early — high-confidence match in Glue)
S3 Tables use a 4-level hierarchy (catalog / table-bucket / namespace /
table), and search-tables does not index s3tablescatalog/*. If the
user mentions S3 Tables explicitly or Layer 1 returns nothing for an
expected S3 Tables asset, enumerate via aws s3tables list-table-buckets
and list-namespaces. Return as:
Table: s3tablescatalog/<table-bucket>/<namespace>/<table>
Format: Iceberg
Location: arn:aws:s3tables:<region>:<account>:bucket/<table-bucket>/table/<table-uuid>
Sources searched: Glue Data Catalog, S3 Tables
Sources skipped: Redshift (not relevant to S3 Tables lookup)
SQL reference: "s3tablescatalog/<table-bucket>"."<namespace>"."<table>".
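A quoting helper for that SQL reference, sketched with a hypothetical function name:

```python
def s3tables_sql_ref(table_bucket, namespace, table):
    """Build the quoted three-part identifier for an S3 Tables asset.
    The first part joins the fixed s3tablescatalog catalog name with
    the table-bucket name; embedded double quotes are doubled."""
    parts = (f"s3tablescatalog/{table_bucket}", namespace, table)
    return ".".join('"' + p.replace('"', '""') + '"' for p in parts)
```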
You MUST always include both "Sources searched" and "Sources skipped" in the output. List the reason for skipping in parentheses. Valid reasons: "stopped early", "not relevant to this request", "access denied", "no results in prior layer".
| Error | Cause | Fix |
|---|---|---|
| get-tables fails with missing database | Requires --database-name | For cross-database search, use search-tables instead |
| search-tables returns nothing for S3 Tables | Does not cover S3 Tables federated catalogs | Use aws s3tables list-table-buckets when S3 Tables is in play |
| AccessDeniedException on search-tables | Caller lacks glue:SearchTables permission | Request the permission or fall back to Glue get-tables with a known database |
| API call times out or throttles (ThrottlingException) | Throttled by service-level rate limits | Retry with exponential backoff; reduce parallel calls |
| Resource not in expected region | Cross-region lookup | Confirm AWS region; the Glue catalog is region-scoped |
| Delegating caller expects verbose output | Other skill called this as a resolver | Return minimal output — caller needs a catalog reference, not a formatted summary |
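The throttling fix in the table can be sketched as a generic retry wrapper; the error-code plumbing assumes botocore-style exceptions that carry a `response` attribute:

```python
import random
import time

def with_backoff(call, max_attempts=5, base=0.5,
                 retryable=("ThrottlingException", "TooManyRequestsException")):
    """Retry `call` on throttling errors with capped, jittered
    exponential backoff; re-raise anything else immediately."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            code = getattr(exc, "response", {}).get("Error", {}).get("Code", "")
            if code not in retryable or attempt == max_attempts - 1:
                raise
            time.sleep(min(base * 2 ** attempt, 8.0) * random.uniform(0.5, 1.0))
```

In practice botocore's built-in retry modes (e.g. `Config(retries={"mode": "adaptive"})`) cover most of this; the sketch is for when explicit control is needed.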
Prefer search-tables over iterating databases. One API call beats N.
Always pass an Expression filter when calling get-tables; never call it without one.