Migrate model implementations into the MindSpore ecosystem by first analyzing the source model or repo, then selecting the correct migration route, building the migration, and verifying the result. Use this as the top-level migration entry instead of asking users to choose `hf-transformers`, `hf-diffusers`, or generic PyTorch migration paths up front.
You are a migration agent.
Your job is to analyze the source model or repository, choose the correct migration route, execute the migration with the appropriate route-specific workflow, verify the result, and emit a migration report.
This skill is the top-level migration entry. The user should not need to decide up front whether the case belongs to Hugging Face transformers, Hugging Face diffusers, or a generic PyTorch repository.
Use this skill when the user wants to:
Do not use this skill for:
Environment readiness and dependency repair are owned by readiness-agent.
However, when the user's goal is to run, train, infer, or otherwise make the
migrated result runnable on the current machine, this skill must prepare a
clear handoff to readiness-agent after migration verification.
Run the workflow in this order:
1. `migration-analyzer`
2. `route-selector`
3. `migration-builder`
4. `verification-and-report`
5. `readiness-handoff` (when the user intent includes local execution)

## Stage 1: migration-analyzer

Understand the migration target before choosing a route.
You must identify:
- `mindone.transformers`
- `mindone.diffusers`

Build a MigrationProfile that captures the source type, workspace shape, target direction, migration goal, key evidence, and confidence.
Also classify the end goal:
- `port-only`
- `port-and-run`
- `port-and-train`
- `port-and-infer`

Treat wording such as "run", "train", "infer", "can run", "set up locally", "on this machine", or "make it runnable" as evidence that the user needs local execution readiness in addition to migration.
## Stage 2: route-selector

Choose exactly one migration route:

- `hf-transformers`
- `hf-diffusers`
- `generic-pytorch-repo`

Use these routing priorities:
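One way to sketch the routing decision. The priority order and signal names here are assumptions; the authoritative rules live in `references/migration-routing.md`:

```python
def select_route(deps: set[str],
                 has_diffusers_pipeline: bool = False,
                 has_transformers_model: bool = False) -> str:
    """Pick exactly one migration route from dependency signals (sketch).

    Assumed priority: library-specific Hugging Face routes first,
    generic PyTorch repository as the fallback.
    """
    if "diffusers" in deps or has_diffusers_pipeline:
        return "hf-diffusers"
    if "transformers" in deps or has_transformers_model:
        return "hf-transformers"
    # Anything else that is torch-based falls back to the generic route.
    return "generic-pytorch-repo"
```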
Record:

- `torch`
- `transformers`
- `mindspore`
- `mindone`

## Stage 3: migration-builder

Execute the migration using the selected route.
### hf-transformers route

Use the transformers-specific migration route when this is clearly a transformers-family migration.
For this route, load the dedicated route reference and use its route-specific helper assets:
- `references/hf-transformers.md`
- `references/hf-transformers-guardrails.md`
- `references/hf-transformers-env.md`
- `scripts/hf_transformers_auto_convert.py`
- `scripts/hf_transformers_auto_convert.requirements.txt`

### hf-diffusers route

Use the diffusers-specific migration route when this is clearly a diffusers-family migration.
### generic-pytorch-repo route

Use this route when the source is a standalone or custom PyTorch repository that does not fit the library-specific Hugging Face paths cleanly.
Expected outputs may include:
## Stage 4: verification-and-report

Verify the migration result and produce a concise report.
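A minimal sketch of a numerical-equivalence check, assuming the source and migrated models can both produce flat sequences of output values; the tolerances are illustrative, and `references/verification.md` defines the real criteria:

```python
def outputs_match(source_out, migrated_out, rtol=1e-3, atol=1e-5):
    """Elementwise closeness check between source and migrated outputs.

    Mirrors the usual |a - b| <= atol + rtol * |b| tolerance test.
    """
    if len(source_out) != len(migrated_out):
        return False
    return all(abs(a - b) <= atol + rtol * abs(b)
               for a, b in zip(source_out, migrated_out))
```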
At minimum, verify:
The final report must include:
When the user intent includes local execution, also include a concise ReadinessHandoff that captures:

- `training`
- `inference`

Do not run environment repair logic inside migrate-agent.
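The ReadinessHandoff could be represented as a small structure like the following; the field names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class ReadinessHandoff:
    # "training" or "inference": what the user ultimately wants to do.
    intended_mode: str
    # Source-side stack assumptions, e.g. {"torch": True, "transformers": True}.
    source_stack: dict = field(default_factory=dict)
    # Target-side stack status, e.g. {"mindspore": False, "mindone": False}.
    target_stack: dict = field(default_factory=dict)
    # Free-form notes for readiness-agent; migrate-agent does no env repair itself.
    notes: list = field(default_factory=list)
```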
## Stage 5: readiness-handoff

If the user intent is `port-only`, stop after the migration report. If the user intent includes run, train, infer, or local execution readiness, hand off to readiness-agent after Stage 4.
In that handoff, explicitly tell readiness-agent:
For example, when handling a Hugging Face Transformers migration, the handoff
should at minimum clarify whether the source-side torch/transformers
assumptions and the target-side mindspore/mindone stack are present or need
preparation locally.
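Those presence checks can be probed without importing anything heavyweight, using only the standard library. The helper itself is hypothetical; only the package names come from the section above:

```python
from importlib.util import find_spec

def stack_presence(packages=("torch", "transformers", "mindspore", "mindone")):
    """Report which relevant packages are importable in the current env.

    find_spec() returns None for an absent top-level package instead of
    raising, so this never triggers a real import of the heavy libraries.
    """
    return {name: find_spec(name) is not None for name in packages}
```

migrate-agent would attach this map to the handoff; readiness-agent decides what to install or repair.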
Load these references when needed:
- `references/migration-routing.md`
- `references/verification.md`
- `references/hf-transformers.md`
- `references/hf-transformers-guardrails.md`
- `references/hf-transformers-env.md`
- `references/hf-diffusers.md`
- `references/generic-pytorch.md`

Use these helper scripts when useful:
- `scripts/collect_migration_context.py`
- `scripts/summarize_migration_profile.py`
- `scripts/hf_transformers_auto_convert.py`
- `scripts/hf_transformers_auto_convert.requirements.txt`

Treat migrate-agent and readiness-agent as sequential phases when the user's real goal is "make this model run here", not as unrelated workflows.