By zenml-io
AI coding agent skills for ZenML MLOps workflows, pipeline setup, and quick wins in ML engineering.
npx claudepluginhub joshuarweaver/cascade-ai-ml-engineering --plugin zenml-io-skills

Author ZenML pipelines: @step/@pipeline decorators, type hints, multi-output steps, dynamic vs static pipelines, artifact data flow, ExternalArtifact, YAML configuration, DockerSettings for remote execution, custom materializers, metadata logging, secrets management, and custom visualizations. Use this skill whenever asked to write a ZenML pipeline, create ZenML steps, make a pipeline work on Kubernetes/Vertex/SageMaker, add Docker settings, write a materializer, create a custom visualization, handle "works locally but fails on cloud" issues, or configure pipeline YAML files. Even if the user doesn't explicitly mention "pipeline authoring", use this skill when they ask to build an ML workflow, data pipeline, or training pipeline with ZenML.
Migrate Apache Airflow DAGs, operators, and workflows to idiomatic ZenML pipelines. Handles concept mapping (DAG→pipeline, operator→step, XCom→artifact), code translation, scheduling, retry config, Docker settings, and flags unsupported patterns (trigger rules, sensors, dynamic task mapping) for human review. Use this skill whenever the user mentions Airflow migration, converting Airflow DAGs, porting workflows from Airflow, replacing Airflow with ZenML, or asks how an Airflow concept maps to ZenML — even if they don't explicitly say "migrate". Also use when they paste Airflow code and ask to make it work with ZenML, or when they describe a workflow using Airflow terminology (DAG, operator, XCom, sensor, task group) in a ZenML context. If the user just asks a quick conceptual question ("what's the ZenML equivalent of XCom?"), answer it directly from the concept map — no need to run the full migration workflow.
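The "answer quick conceptual questions directly from the concept map" behavior described above can be sketched as a plain lookup table. This is a minimal illustration using only the mappings named in this description (the function name and fallback wording are hypothetical):

```python
# Core Airflow -> ZenML concept map, as listed in the skill description.
AIRFLOW_TO_ZENML = {
    "DAG": "pipeline (@pipeline-decorated function)",
    "operator": "step (@step-decorated function)",
    "XCom": "artifact (step return values passed between steps)",
}


def zenml_equivalent(airflow_concept: str) -> str:
    """Answer a quick conceptual question directly from the concept map;
    anything outside the map is flagged for human review."""
    return AIRFLOW_TO_ZENML.get(
        airflow_concept, "no direct mapping; flag for human review"
    )
```

Unsupported patterns such as sensors and trigger rules deliberately fall through to the human-review branch rather than being silently translated.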
Migrate Argo Workflows, WorkflowTemplates, ClusterWorkflowTemplates, and CronWorkflows to idiomatic ZenML pipelines. Handles concept mapping, YAML-to-Python translation, scheduling, retries, Kubernetes-native pattern analysis, and flags unsupported patterns such as status-based depends logic, shared volumes, containerSet, sidecars, synchronization locks, and Argo Events for human review. Use this skill whenever the user mentions Argo migration, converting Argo YAML, replacing Argo with ZenML, mapping an Argo concept to ZenML, or provides workflow YAML using terms like WorkflowTemplate, CronWorkflow, when, withItems, withParam, containerSet, onExit, Sensor, or EventSource. For quick conceptual questions, answer from the concept map without running the full migration workflow.
Migrate Azure Machine Learning SDK v2 pipelines, components, environments, and schedules to idiomatic ZenML pipelines. Handles concept mapping (`@pipeline` -> `@pipeline`, `@command_component` -> `@step`, `Environment(...)` -> `DockerSettings(...)`, AzureML compute -> `AzureMLOrchestratorSettings`), code translation, Azure-aware "keep AzureML" migration paths, and flags unsupported or unsafe patterns (sweep jobs, parallel jobs, managed endpoints, AzureML Registry, Responsible AI dashboard, and unverified control-flow helpers like `if_else` and `do_while`) for human review. Use this skill whenever the user mentions AzureML migration, Azure Machine Learning SDK v2 migration, converting AzureML pipelines or components, porting workflows from AzureML, replacing AzureML authoring with ZenML, or asks how AzureML concepts map to ZenML -- even if they don't explicitly say "migrate". Also use when they paste AzureML SDK v2 code, `mldesigner` components, YAML components, `load_component()` usage, MLTable/data asset definitions, or AzureML scheduling/deployment code and ask to make it work with ZenML. If the user just asks a quick conceptual question ("what's the ZenML equivalent of an AzureML Environment?"), answer it directly from the concept map -- no need to run the full migration workflow.
Migrate Dagster assets, ops, graphs, jobs, and software-defined asset workflows to idiomatic ZenML pipelines. Handles concept mapping (asset->step output, job->pipeline, IOManager->artifact store/materializer + explicit IO steps), asset-boundary planning, code translation, scheduling, retry config, resources/config migration, and flags unsupported patterns (asset selection, partitions/backfills, sensors, declarative automation, freshness policies, observable source assets) for human review. Use this skill whenever the user mentions Dagster migration, converting Dagster assets or jobs, porting workflows from Dagster, replacing Dagster with ZenML, or asks how a Dagster concept maps to ZenML -- even if they do not explicitly say "migrate". Also use when they paste Dagster code and ask to make it work with ZenML, or when they describe a workflow using Dagster terminology (`@asset`, `@multi_asset`, `Definitions`, `IOManager`, `ConfigurableResource`, partitions, sensors, asset checks) in a ZenML context. If the user just asks a quick conceptual question ("what is the ZenML equivalent of an IOManager?" or "how should I think about Dagster assets in ZenML?"), answer it directly from the concept map -- no need to run the full migration workflow.
Migrate Databricks Workflows (Lakeflow Jobs) to idiomatic ZenML pipelines. Handles concept mapping (Job->pipeline, Task->step, task values->artifact), notebook refactoring, code translation for all Databricks task types (notebook_task, python_wheel_task, sql_task, dbt_task, condition_task, for_each_task, run_job_task, spark_jar_task), scheduling, retry config, compute mapping, and flags unsupported patterns (file arrival triggers, run_if semantics, shared cluster state, DBFS paths) for human review. Use this skill whenever the user mentions Databricks migration, converting Databricks Jobs or Workflows, porting workflows from Databricks, replacing Databricks orchestration with ZenML, or asks how a Databricks concept maps to ZenML -- even if they don't explicitly say "migrate". Also use when they paste Databricks job JSON or notebook code and ask to make it work with ZenML, or when they describe a workflow using Databricks terminology (task, job, notebook_task, dbutils, task values, job clusters, condition_task, for_each_task) in a ZenML context. If the user just asks a quick conceptual question ("what's the ZenML equivalent of dbutils.jobs.taskValues?"), answer it directly from the concept map -- no need to run the full migration workflow.
Migrate Flyte workflows, tasks, LaunchPlans, and Flytekit code to idiomatic ZenML pipelines. Handles concept mapping (`@task`->`@step`, `@workflow`->`@pipeline`, `map_task()`->dynamic `.map()`, `conditional()`->dynamic branching, `LaunchPlan`->schedule/config split), code translation, special-type migration (`FlyteFile`, `FlyteDirectory`, `StructuredDataset`, `FlyteSchema`), Docker/image mapping, and flags unsupported patterns (`@eager`, `ContainerTask`, reference entities, checkpointing, interruptible semantics) for human review. Use this skill whenever the user mentions Flyte migration, converting Flyte to ZenML, porting Flyte workflows, replacing Flyte with ZenML, or asks how a Flyte concept maps to ZenML -- even if they do not explicitly say "migrate". Also use when they paste Flytekit code and ask to make it work with ZenML, or when they describe a workflow using Flyte terminology (`@dynamic`, `LaunchPlan`, `map_task`, `conditional`, `ImageSpec`, `FlyteFile`, `StructuredDataset`, `reference_task`, `reference_workflow`) in a ZenML context. If the user just asks a quick conceptual question ("what is the ZenML equivalent of LaunchPlan?" or "how should FlyteFile map?"), answer it directly from the concept map -- no need to run the full migration workflow.
Migrate Kedro pipelines and projects to idiomatic ZenML pipelines. Handles concept mapping (node->step, Pipeline->pipeline, Data Catalog->explicit boundary steps plus artifacts, params:->typed parameters), catalog analysis, code translation, hooks/runners/deployment mapping, and flags unsupported patterns (transcoding, dataset lifecycle hooks, namespace remapping, SharedMemoryDataset, slicing semantics) for human review. Use this skill whenever the user mentions Kedro migration, converting a Kedro project to ZenML, porting Kedro pipelines, replacing Kedro orchestration or deployment plugins with ZenML, or asks how a Kedro concept maps to ZenML -- even if they do not explicitly say "migrate". Also use when the user pastes `catalog.yml`, `parameters.yml`, `pipeline_registry.py`, node code, hook code, or describes a workflow using Kedro terminology such as node, pipeline, Data Catalog, `params:`, namespace, modular pipeline, runner, `MemoryDataset`, or transcoding in a ZenML context. If the user just asks a quick conceptual question ("what is the ZenML equivalent of `MemoryDataset`?"), answer it directly from the concept map -- no need to run the full migration workflow.
Migrate Metaflow flows and Outerbounds-flavored Metaflow projects to idiomatic ZenML pipelines. Handles concept mapping (FlowSpec->pipeline, @step->@step, self.* artifacts->explicit returns and inputs), code translation for Parameters, IncludeFile, Config, self.next transitions, branch/join, foreach, scheduling, retry/resource/dependency decorators, and flags unsupported or high-risk patterns (@catch, merge_artifacts, resume and checkpoint semantics, recursion, event triggers, @batch) for human review. Use this skill whenever the user mentions Metaflow migration, converting FlowSpec code, porting flows from Metaflow or Outerbounds, replacing Metaflow orchestration with ZenML, or asks how a Metaflow concept maps to ZenML -- even if they don't explicitly say "migrate". Also use when they paste FlowSpec code or describe workflows using Metaflow terminology (self.next, foreach, current, Parameter, IncludeFile, Config, @catch, @kubernetes, @batch, Runner, Deployer) in a ZenML context. If the user just asks a quick conceptual question ("what's the ZenML equivalent of merge_artifacts?"), answer it directly from the concept map -- no need to run the full migration workflow.
Migrate Prefect flows, tasks, and deployment patterns to idiomatic ZenML pipelines. Handles concept mapping (`@flow`→`@pipeline`, `@task`→`@step`, result persistence→artifacts), dynamic-execution analysis, code translation, scheduling, retries, Blocks/secrets decomposition, and flags unsupported patterns (`allow_failure()`, `return_state=True`, pause/suspend, global concurrency, task-runner semantics) for human review. Use this skill whenever the user mentions Prefect migration, converting Prefect flows, porting workflows from Prefect, replacing Prefect with ZenML, or asks how a Prefect concept maps to ZenML — even if they do not explicitly say "migrate". Also use when they paste Prefect code and ask to make it work with ZenML, or when they describe a workflow using Prefect terminology (`@flow`, `@task`, `.submit()`, `.map()`, `State`, Blocks, Deployments, work pools, Automations) in a ZenML context. If the user asks a quick conceptual question ("what is the ZenML equivalent of a Prefect Block?"), answer it directly from the concept map — no need to run the full migration workflow.
Implements ZenML quick wins to enhance MLOps workflows. Investigates the codebase and stack configuration, recommends high-priority improvements, and implements metadata logging, experiment tracking, alerts, scheduling, secrets management, tags, git hooks, HTML reports, and Model Control Plane setup. Use when: the user wants to improve their ZenML setup, asks about MLOps best practices, mentions "quick wins", wants to enhance pipelines, or needs help with ZenML features like experiment tracking, alerting, scheduling, or model governance.
Migrate Amazon SageMaker Pipelines and workflow code to idiomatic ZenML pipelines. Handles concept mapping (Pipeline->@pipeline, ProcessingStep/TrainingStep->@step, PropertyFile/JsonGet->artifacts), code translation, SagemakerOrchestratorSettings mapping, scheduling, model-registration strategy, and flags unsupported or high-risk patterns (CallbackStep, LambdaStep handshake semantics, step.properties placeholders, dynamic-pipeline scheduling on SageMaker) for human review. Use this skill whenever the user mentions SageMaker migration, converting SageMaker Pipelines, porting workflow code from SageMaker, replacing SageMaker DSL authoring with ZenML, or asks how a SageMaker Pipelines concept maps to ZenML -- even if they do not explicitly say "migrate". Also use when they paste `sagemaker.workflow.*` code and ask to make it work with ZenML, or when they describe a workflow using SageMaker terms (`ProcessingStep`, `TrainingStep`, `ConditionStep`, `PropertyFile`, `JsonGet`, `ModelStep`, `PipelineSession`) in a ZenML context. If the user just asks a quick conceptual question ("what's the ZenML equivalent of PropertyFile?"), answer it directly from the concept map -- no need to run the full migration workflow.
Scope and decompose ML workflow ideas into realistic ZenML pipeline architectures. Runs an in-depth interview to help users break down ambitious or over-engineered plans into well-composed multi-pipeline setups, identify what belongs in a pipeline vs. what doesn't, define cross-pipeline data flow via the Model Control Plane, and select an MVP to build first. Produces a pipeline_architecture.md specification document. Use this skill whenever a user describes a complex ML workflow, mentions multiple pipelines, talks about end-to-end ML platforms, asks how to structure their ML system in ZenML, or seems to be over-engineering their pipeline design. Also use when the user's pipeline idea sounds like it's trying to do too many things at once, or when they ask about composing pipelines together. Even if the user just says "I want to build an ML pipeline" with a long list of requirements, this skill helps scope it down before the pipeline-authoring skill takes over.
Migrate Vertex AI Pipelines (Kubeflow Pipelines v2 / PipelineJob workflows) to idiomatic ZenML pipelines. Handles concept mapping (`@dsl.pipeline` -> `@pipeline`, `@dsl.component` -> `@step`, `PipelineJob.create_schedule(...)` -> `Schedule(...)`), artifact-contract translation (`Input[Dataset]`, `InputPath`, `.uri`, `.path`), Google Cloud Pipeline Components (GCPC) rewrites, dynamic control flow (`dsl.If`, `dsl.ParallelFor`, `dsl.Collected`), resource/config migration, and flags unsupported patterns (compiled template workflows, `dsl.ExitHandler`, path-coupled artifacts, schedule lifecycle parity) for human review. Use this skill whenever the user mentions Vertex AI Pipelines migration, KFP v2 to ZenML, PipelineJob migration, GCPC migration, or asks how a Vertex/KFP concept maps to ZenML — even if they do not explicitly say "migrate". Also use when they paste KFP DSL code, compiled pipeline YAML/JSON, Vertex submission code, or describe a workflow using Vertex/KFP terminology (`dsl.component`, `dsl.pipeline`, `dsl.If`, `dsl.ParallelFor`, `PipelineJob`, GCPC) in a ZenML context. If the user just asks a quick conceptual question ("what is the ZenML equivalent of `dsl.importer`?"), answer it directly from the concept map — no need to run the full migration workflow.
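The artifact-contract translation described above can be sketched side by side. The KFP v2 original appears in comments for comparison; the ZenML side is shown as a plain typed function standing in for a `@step` (the `split` name and the row-filtering logic are hypothetical, chosen only to show the contract change):

```python
# KFP v2 original (for comparison only, not executed):
#
#   @dsl.component
#   def split(raw: Input[Dataset], train: Output[Dataset]):
#       df = pd.read_csv(raw.path)
#       df[df.split == "train"].to_csv(train.path)
#
# ZenML translation: the step takes and returns typed values directly.
# The artifact store handles serialization, so the .path / .uri plumbing
# disappears from the step body.
def split(raw: list[dict]) -> list[dict]:
    """Keep only the training rows; the return value becomes an artifact."""
    return [row for row in raw if row.get("split") == "train"]
```

This is the core of the `Input[Dataset]` / `InputPath` / `.uri` / `.path` rewrite: path-coupled file handoffs become ordinary typed inputs and outputs.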