Plan DataHub connectors via blueprints and source research; review PRs and validate code against 22 golden standards; search catalogs, enrich metadata with tags/ownership, trace lineage for impact analysis, and manage data quality assertions/incidents.
npx claudepluginhub datahub-project/datahub-skills --plugin datahub-skills
Add or update metadata in DataHub - descriptions, tags, glossary terms, ownership
Explore lineage, trace data dependencies, and perform impact analysis
Manage data quality — assertions, incidents, and subscriptions
Search the DataHub catalog and answer questions about your data
Set up a DataHub connection, install the CLI, and configure authentication and default scopes
Plan a new DataHub connector - research the source system, map entities, design architecture, and create a planning document
Review DataHub connector code for standards compliance and quality
Read and confirm all golden connector standard files have been loaded.
Use this agent when you need to verify whether a PR author has genuinely addressed previous review comments before re-review. This agent fetches review comments, classifies them by type (code change request vs. discussion vs. question), and checks whether each was substantively addressed — not just marked as resolved. <example> Context: A PR has been updated after review and the author is requesting re-review. user: "Check if the author addressed all review comments on PR #1234" assistant: "I'll use the comment-resolution-checker agent to verify whether all review comments on PR #1234 have been substantively addressed." <commentary> PR re-review readiness check triggers this agent. </commentary> </example> <example> Context: User wants to know what's still outstanding on a PR before approving. user: "What review comments are still unaddressed on this PR?" assistant: "I'll use the comment-resolution-checker agent to analyze the PR's review comments and identify any that haven't been addressed." <commentary> Checking for unaddressed comments triggers this agent. </commentary> </example>
Research source systems for DataHub connector development. Gathers documentation, finds similar connectors, identifies entity mappings, and assesses implementation complexity. Returns structured findings for planning phase. <example> Context: User wants to build a new DataHub connector for a source system. user: "Research Snowplow for a new DataHub connector" assistant: "I'll use the connector-researcher agent to gather comprehensive research on Snowplow including API documentation, similar connectors, and entity mappings." <commentary> New connector research request triggers this agent. </commentary> </example> <example> Context: User is starting connector development and needs background information. user: "I need to build a connector for DuckDB, what do I need to know?" assistant: "I'll use the connector-researcher agent to research DuckDB's metadata APIs, find similar DataHub connectors, and assess implementation complexity." <commentary> Connector development information request triggers this agent. </commentary> </example>
Run provided validation scripts, analyze their output, and report results for DataHub connector verification steps. Handles extraction verification, capability checks, code quality gates, source connectivity, ingestion runs, and CLI verification. <example> Context: Workflow needs to verify that extraction output contains expected entities. user: "Run the verify-extraction script on the output file" assistant: "I'll use the connector-validator agent to run the verification script and analyze the results." <commentary> Extraction verification is a procedural script-running task that triggers this agent. </commentary> </example> <example> Context: Workflow needs to check that declared capabilities produce actual output. user: "Run the capability check on the connector" assistant: "I'll use the connector-validator agent to run the capability check script and report coverage." <commentary> Capability validation is a script-based check that triggers this agent. </commentary> </example>
Execute DataHub search, browse, and lineage operations, retrieve entity metadata, and return structured results. Used by the datahub-search and datahub-lineage skills to delegate catalog queries. <example> Context: User wants to find all Snowflake datasets with PII tags. user: "Search DataHub for Snowflake datasets tagged with PII" assistant: "I'll use the metadata-searcher agent to query DataHub for Snowflake datasets with PII tags." <commentary> The search skill delegates the actual search execution to this agent, which runs the queries and returns structured results. </commentary> </example> <example> Context: User asks who owns the revenue pipeline and needs metadata gathered. user: "Who owns the revenue pipeline?" assistant: "I'll use the metadata-searcher agent to find revenue-related pipelines and retrieve their ownership metadata." <commentary> The search skill delegates multi-step metadata retrieval to this agent, which searches, fetches aspects, and returns evidence for answering the question. </commentary> </example>
Plans new DataHub connectors by classifying the source system, researching it using a dedicated agent or inline research, and generating a _PLANNING.md blueprint with entity mapping and architecture decisions. Use when building a new connector, researching a source system for DataHub, or designing connector architecture. Triggers on: "plan a connector", "new connector for X", "research X for DataHub", "design connector for X", "create planning doc", or any request to plan/research/design a DataHub ingestion source.
Reviews DataHub connector implementations against 22 golden standards for compliance, code quality, silent failures, test coverage, type design, and merge readiness. Use when reviewing connector code, checking a PR, auditing a connector implementation, or verifying connector standards compliance.
Use this skill when the user wants to add or update metadata in DataHub: descriptions, tags, glossary terms, ownership, deprecation, domains, data products, structured properties, documents, or field-level metadata. Triggers on: "add tag to X", "update description for X", "set owner of X", "add glossary term", "deprecate X", "create a domain", "create a glossary term", "add a document", or any request to modify DataHub metadata.
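For reference, a minimal sketch of what a tag update looks like through the DataHub Python SDK (acryl-datahub). The GMS endpoint, platform, dataset name, and `pii` tag below are illustrative, not values the skill requires:

```python
from datahub.emitter.mce_builder import make_dataset_urn, make_tag_urn
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import GlobalTagsClass, TagAssociationClass

# Illustrative values: swap in your GMS endpoint, platform, and dataset name.
emitter = DatahubRestEmitter(gms_server="http://localhost:8080")
dataset_urn = make_dataset_urn(
    platform="snowflake", name="analytics.public.orders", env="PROD"
)

# Attach a single tag. Emitting globalTags this way replaces the whole aspect,
# so include any existing tags you want to keep.
tags = GlobalTagsClass(tags=[TagAssociationClass(tag=make_tag_urn("pii"))])
emitter.emit(MetadataChangeProposalWrapper(entityUrn=dataset_urn, aspect=tags))
```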
Use this skill when the user wants to explore lineage, trace data dependencies, perform impact analysis, find root causes, map data pipelines, or understand how data flows between systems. Triggers on: "what feeds into X", "what depends on X", "show lineage for X", "impact analysis", "trace the pipeline", "root cause", "upstream of X", "downstream of X", or any request involving data lineage and dependency tracking.
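Under the hood, impact analysis of this kind maps to DataHub's lineage search API. A sketch using the Python SDK's GraphQL client, assuming a `searchAcrossLineage` query shape that may vary slightly across DataHub versions (the URN is illustrative):

```python
from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080"))

# searchAcrossLineage walks the lineage graph from a starting URN;
# direction can be UPSTREAM or DOWNSTREAM.
LINEAGE_QUERY = """
query impact($urn: String!) {
  searchAcrossLineage(
    input: { urn: $urn, direction: DOWNSTREAM, query: "*", start: 0, count: 25 }
  ) {
    searchResults {
      degree
      entity { urn type }
    }
  }
}
"""

result = graph.execute_graphql(
    LINEAGE_QUERY,
    variables={
        "urn": "urn:li:dataset:(urn:li:dataPlatform:snowflake,analytics.public.orders,PROD)"
    },
)
# degree = number of hops from the starting entity.
for hit in result["searchAcrossLineage"]["searchResults"]:
    print(hit["degree"], hit["entity"]["urn"])
```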
Configure a DataHub instance to load and display a Micro Frontend (MFE) app. Use when the user wants to register an MFE with DataHub, add an MFE to the nav sidebar, set up MFE config for local dev or production/k8s, or troubleshoot MFE loading issues.
Scaffold a new DataHub Micro Frontend (MFE) app with all boilerplate files. Use when the user wants to create a new micro frontend, MFE, remote app, or Module Federation app for DataHub.
Use this skill when the user wants to manage data quality in DataHub: create or run assertions, check assertion outcomes, raise or resolve incidents, create notification subscriptions, or diagnose health problems across their estate. Triggers on: "create assertion", "run assertion", "check quality", "data quality", "health check", "raise incident", "resolve incident", "subscribe to", "failing assertions", "active incidents", or any request involving data quality, assertions, incidents, or quality notifications.
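As one sketch of a health check, DataHub's GraphQL schema exposes a per-entity `health` rollup covering assertions and incidents. The field names below are assumptions against that schema and may differ by version; the URN is illustrative:

```python
from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080"))

# `health` rolls up assertion and incident status per entity.
HEALTH_QUERY = """
query health($urn: String!) {
  dataset(urn: $urn) {
    health { type status message }
  }
}
"""

result = graph.execute_graphql(
    HEALTH_QUERY,
    variables={
        "urn": "urn:li:dataset:(urn:li:dataPlatform:snowflake,analytics.public.orders,PROD)"
    },
)
for signal in result["dataset"]["health"] or []:
    print(signal["type"], signal["status"], signal.get("message"))
```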
Use this skill when the user wants to search the DataHub catalog, discover entities, answer ad-hoc questions about their data, find datasets, or browse by platform or domain. Triggers on: "search DataHub", "find datasets", "who owns X", "what tables contain PII", "what columns does X have", or any request to search, discover, browse, or answer one-off questions about DataHub metadata. For lineage questions ("what feeds into X"), use `/datahub-lineage`. For systematic audits ("how complete is our metadata"), use `/datahub-audit`.
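Searches like these map to DataHub's `searchAcrossEntities` GraphQL query. A minimal sketch via the Python SDK, assuming a local GMS (the search text is illustrative):

```python
from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080"))

# Full-text search across entity types; the input also accepts optional
# `types` and filter arguments for narrowing by platform, tag, etc.
SEARCH_QUERY = """
query search($text: String!) {
  searchAcrossEntities(input: { query: $text, start: 0, count: 10 }) {
    total
    searchResults {
      entity { urn type }
    }
  }
}
"""

result = graph.execute_graphql(SEARCH_QUERY, variables={"text": "orders"})
print(result["searchAcrossEntities"]["total"], "matches")
for hit in result["searchAcrossEntities"]["searchResults"]:
    print(hit["entity"]["type"], hit["entity"]["urn"])
```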
Use this skill when the user needs to set up a DataHub connection, install the DataHub CLI, configure authentication, verify connectivity, set default scopes, or create agent configuration profiles. Triggers on: "set up DataHub", "connect to DataHub", "install datahub CLI", "configure DataHub", "set default platform", "focus on domain X", "create profile", or any request to establish, configure, or troubleshoot DataHub connectivity.
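A sketch of the connectivity check this skill performs, assuming the standard `DATAHUB_GMS_URL`/`DATAHUB_GMS_TOKEN` environment variables the DataHub CLI reads (the localhost fallback is illustrative):

```python
import os

from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

# Reads the same settings the DataHub CLI uses; the default server
# value here is only a local-dev fallback.
graph = DataHubGraph(
    DatahubClientConfig(
        server=os.environ.get("DATAHUB_GMS_URL", "http://localhost:8080"),
        token=os.environ.get("DATAHUB_GMS_TOKEN"),
    )
)

# Raises if the server is unreachable or the token is rejected.
graph.test_connection()
print("Connected to", graph.config.server)
```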
Loads all 22 DataHub connector golden standards into context. Use before starting connector development or review work to ensure the full set of standards is available for reference. Triggers on: "load standards", "show standards", "what are the connector standards", "load golden standards", "review standards", or any request to load DataHub connector development guidelines.
This skill provides routing guidance for all DataHub interaction skills. It is injected at session start and helps map user intent to the correct skill. Do not invoke this skill directly — it is loaded automatically.
Connect to Knowledge Catalog to discover, manage, monitor, and govern data and AI artifacts across your data platform
Quick insights from dlt pipeline data. Connect to a pipeline, profile tables, plan charts, and assemble marimo dashboards.
Data engineering plugin - warehouse exploration, pipeline authoring, Airflow integration
Curated agent skills collection for dbt workflows, helping AI agents understand and execute data transformation pipelines more effectively.
The most comprehensive SAP Datasphere plugin for Claude. 18 specialized skills covering exploration, data modeling, integration, BW Bridge migration, security architecture, CLI automation, business content activation, catalog governance, performance optimization, and troubleshooting — all through natural language. Powered by 45 MCP tools with enterprise-grade security.