```
npx claudepluginhub render-oss/skills --plugin render
```
This skill uses the workspace's default tool permissions.
Migrate from Heroku to Render by reading local project files first, then optionally enriching with live Heroku data via MCP.
Before starting, verify what's available:
- Local project files — the repo contains the files needed for inference (Procfile, app.json, package.json, requirements.txt, Gemfile, go.mod, or similar)
- Render MCP — the `list_services` tool is available. Required for MCP Direct Creation (Step 3B) and automated verification (Step 6). Not required for the Blueprint path — the Render CLI and Dashboard handle generation, validation, and deployment.
- Heroku MCP — the `list_apps` tool is available.

If Render MCP is missing and the user needs it, guide them through setup using the MCP setup guide. If Heroku MCP is missing, note that config var values and add-on plan details will need to be provided manually.
Execute steps in order. Present findings to the user and get confirmation before creating any resources.
Gather app details from local files first, then supplement with Heroku MCP if available.
Read these files from the repo to determine runtime, commands, and dependencies:
| File | What it tells you |
|---|---|
| `Procfile` | Process types and start commands (web, worker, clock, release) |
| `package.json` | Node.js runtime, build scripts, framework deps (Next.js, React, etc.) |
| `requirements.txt` / `Pipfile` / `pyproject.toml` | Python runtime, dependencies (Django, Flask, etc.) |
| `Gemfile` | Ruby runtime, dependencies (Rails, Sidekiq, etc.) |
| `go.mod` | Go runtime |
| `Cargo.toml` | Rust runtime |
| `app.json` | Declared add-ons, env var descriptions, buildpacks |
| `runtime.txt` | Pinned runtime version |
| `static.json` | Static site indicator |
| `yarn.lock` / `pnpm-lock.yaml` | Package manager (affects build command) |
From these files, determine:
- Runtime version — check `runtime.txt`, `.node-version`, or `engines` in `package.json`. If pinned, carry it over as an env var (e.g., `PYTHON_VERSION`, `NODE_VERSION`). If not pinned, do not specify a version — never assume or state what Render's default version is.
- Process types and start commands — `Procfile` entries (web, worker, clock, release)
- Add-ons — the `app.json` `addons` field, or infer from dependency files (e.g., `pg` in `package.json` suggests Postgres, `redis` suggests Key Value)
- Static site — `static.json`, SPA framework deps, or a static buildpack in `app.json`

If the Heroku MCP server is connected, call these tools to fill in details that aren't in the repo. The dyno size and add-on plan slug are critical — they determine which Render plans to use.
- `list_apps` — let the user select which app to migrate (confirms app name)
- `get_app_info` — capture: region, stack, buildpacks, config var names
- `list_addons` — capture the exact add-on plan slug (e.g., `heroku-postgresql:essential-2`, `heroku-redis:premium-0`). The part after the colon maps to a specific Render plan in the service mapping.
- `ps_list` — capture the exact dyno size for each process type (e.g., Standard-2X, Performance-M). Each dyno size maps to a specific Render plan in the service mapping.
- `pg_info` (if Postgres exists) — capture Data Size (actual usage) and the plan's disk allocation. The plan's disk size determines the `diskSizeGB` value in the Blueprint (see the service mapping).

If Heroku MCP is not available, ask the user to provide:
- Dyno sizes (run `heroku ps:type -a <app>` and paste output)
- Add-on plans (run `heroku addons -a <app>` and paste output)
- Postgres details (run `heroku pg:info -a <app>` and paste output — captures plan name, data size, and disk allocation)
- Region (`us` or `eu`)
- Config vars (run `heroku config -a <app> --shell` and paste output)

If the user cannot provide dyno sizes or add-on plans, use the fallback defaults from the service mapping: `starter` for compute, `basic-1gb` for Postgres, `starter` for Key Value.
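For the runtime version-pin check described above, a quick heuristic sketch (assumes the files, when present, sit at the repo root; a missing file simply prints nothing):

```shell
# Heuristic check for pinned runtime versions (these files may not exist).
cat runtime.txt .node-version 2>/dev/null
# Pull a pinned Node range out of package.json "engines", if declared.
grep -o '"node"[[:space:]]*:[[:space:]]*"[^"]*"' package.json 2>/dev/null
```

If all three print nothing, treat the runtime as unpinned and omit the version env var entirely.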
```
App: [name] | Region: [region] | Runtime: [node/python/ruby/etc]
Source: [local files | local files + Heroku MCP]
Build command: [inferred from buildpack/deps]
Processes:
  web:     [command from Procfile] → Render web service ([mapped-plan])
  worker:  [command] → Render background worker ([mapped-plan], Blueprint only)
  clock:   [command] → Render cron job ([mapped-plan])
  release: [command] → Append to build command
Add-ons:
  Heroku Postgres ([plan-slug], [disk-size]) → Render Postgres ([mapped-plan], diskSizeGB: [size])
  Heroku Redis ([plan-slug]) → Render Key Value ([mapped-plan])
Config vars: [count] total (list names, not values)
```
Before creating anything, run through the pre-flight checklist to validate the migration plan. Key checks:
Look up each Heroku dyno size and add-on plan in the service mapping to determine correct Render plans and cost estimates. Present the migration plan table from the pre-flight checklist and wait for user confirmation before creating any resources.
After the user approves the pre-flight plan, apply this decision rule. Default to Blueprint — only use MCP Direct Creation when every condition below is met.
Use Blueprint (the default) when ANY of the following are true:

- The app has a Postgres or Redis/Key Value add-on
- The app has more than one process type (e.g., a worker or clock — background workers are Blueprint-only)

Fall back to MCP Direct Creation ONLY when ALL of the following are true:

- The app is a single web service or static site
- There are no databases or other add-ons to provision
If unsure, use Blueprint. Most Heroku apps have at least a database, so Blueprint applies to the vast majority of migrations.
This step has three mandatory sub-steps. Complete all three in order.
Generate a `render.yaml` file and write it to the repo root. See the Blueprint example for a complete template, the Blueprint docs for usage guidance, and the Blueprint YAML JSON schema for the full field reference.
IMPORTANT: Always use the projects/environments pattern. The YAML must start with a projects: key — never use flat top-level services: or databases: keys. This groups all migrated resources into a single Render project.
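Under that rule, a minimal skeleton might look like this — a hedged sketch only: every name, command, and plan value is a placeholder, and the exact field set should be checked against the Blueprint YAML JSON schema.

```yaml
# Hypothetical skeleton — all values below are placeholders, not defaults.
projects:
  - name: my-app                    # placeholder project name
    environments:
      - name: production
        services:
          - type: web
            name: my-app-web
            runtime: node
            plan: starter           # fallback; map from the Heroku dyno size
            buildCommand: npm ci && npm run build
            startCommand: node server.js
            envVars:
              - key: SECRET_KEY
                sync: false         # user supplies this at Blueprint apply
              - key: DATABASE_URL
                fromDatabase:
                  name: my-app-db
                  property: connectionString
        databases:
          - name: my-app-db
            plan: basic-1gb         # fallback; map from the Heroku add-on plan
            diskSizeGB: 10          # from the Heroku plan's disk allocation
```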
Set the plan: field for each service and database using the mapped Render plan from the service mapping. Look up the Heroku dyno size (from ps_list) and add-on plan slug (from list_addons) to find the correct Render plan. If the Heroku plan is unknown, use the fallback defaults: starter for compute, basic-1gb for Postgres, starter for Key Value.
Generate the YAML following the full template, rules, and patterns in the Blueprint example. Critical rules:
- Use the `projects:`/`environments:` pattern — never flat top-level `services:`
- Set every `plan:` field using the service mapping
- Set `diskSizeGB` on databases from the Heroku disk allocation
- Use `fromDatabase` for `DATABASE_URL` and `fromService` for `REDIS_URL` — never hardcode connection strings
- Mark secrets `sync: false`

This step is mandatory. First, check if the Render CLI is installed:
```
render --version
```
If not installed, offer to install it:
- macOS (Homebrew): `brew install render`
- Other platforms: `curl -fsSL https://raw.githubusercontent.com/render-oss/cli/main/bin/install.sh | sh`

Once the CLI is available, run the validation command and show the output to the user:
```
render blueprints validate render.yaml
```
If validation fails, fix the errors in the YAML and re-validate. Repeat until validation passes. Do not proceed to the next step until the Blueprint validates successfully.
After validation passes:
1. Commit and push the Blueprint: `git add render.yaml && git commit -m "Add Render migration Blueprint" && git push`
2. Get the repo URL with `git remote get-url origin`. If the URL is SSH format (e.g., `git@github.com:user/repo.git`), convert it to HTTPS (`https://github.com/user/repo`). Then construct the deeplink: `https://dashboard.render.com/blueprint/new?repo=<HTTPS_REPO_URL>`
3. Send the user the deeplink, instruct them to fill in the `sync: false` secrets, and click Apply.

Do not skip the deploy URL. The user needs this link to apply the Blueprint on Render.
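The SSH-to-HTTPS conversion and deeplink construction can be scripted — a sketch that assumes a GitHub-style SSH remote:

```shell
# Build the Blueprint deeplink from the origin remote.
# A literal is used here for illustration; in practice use:
#   remote=$(git remote get-url origin)
remote="git@github.com:user/repo.git"
# Rewrite git@host:path → https://host/path and drop the .git suffix.
https_url=$(printf '%s' "$remote" | sed -E 's#^git@([^:]+):#https://\1/#; s#\.git$##')
echo "https://dashboard.render.com/blueprint/new?repo=${https_url}"
```

If the remote is already HTTPS, the `sed` rewrite leaves it unchanged (aside from stripping `.git`).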
Before creating resources via MCP, verify the active workspace:
```
get_selected_workspace()
```
If the workspace is wrong, list available workspaces with list_workspaces() and ask the user to select the correct one. Resources will be created in whichever workspace is active.
For single-service migrations without databases, create via MCP tools:
Call `create_web_service` with:

- `runtime`: from the buildpack mapping
- `buildCommand`: from the buildpack mapping
- `startCommand`: from the `Procfile` `web:` entry
- `repo`: user-provided GitHub/GitLab URL
- `region`: mapped from the Heroku region
- `plan`: mapped from the Heroku dyno size using the service mapping (fallback: `starter`)

Use `create_static_site` instead if a static site was detected.

Present the creation result (service URL, ID) when complete.
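For illustration, the arguments might be shaped like this — a hypothetical sketch where every value is a placeholder; the authoritative signature is the Render MCP tool schema:

```yaml
# Hypothetical create_web_service arguments — all values are placeholders.
name: my-app
runtime: node                        # from the buildpack mapping
buildCommand: npm ci && npm run build
startCommand: node server.js         # from the Procfile web: entry
repo: https://github.com/user/my-app # user-provided
region: oregon                       # mapped from the Heroku region
plan: starter                        # fallback when dyno size is unknown
```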
Use the first available source:
1. `get_app_info` results (Step 1b)
2. User-pasted output of `heroku config -a <app> --shell`
3. `app.json` — var names and descriptions (no values, but useful for `sync: false` entries)

Remove auto-generated and Heroku-specific vars (see the full filter list in the service mapping):
- `DATABASE_URL`, `REDIS_URL`, `REDIS_TLS_URL` (Render generates these)
- `HEROKU_*` vars (e.g., `HEROKU_APP_NAME`, `HEROKU_SLUG_COMMIT`)
- Add-on vendor vars (`PAPERTRAIL_*`, `SENDGRID_*`, etc.)

Present the filtered list to the user — do not write anything without confirmation.
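A minimal filtering sketch, assuming the user's config dump was saved to a file named `heroku-config.env` (the vendor prefixes shown are illustrative — use the full filter list from the service mapping):

```shell
# Drop auto-generated and Heroku-specific vars from a saved dump created with:
#   heroku config -a <app> --shell > heroku-config.env
grep -vE '^(DATABASE_URL|REDIS_URL|REDIS_TLS_URL|HEROKU_|PAPERTRAIL_|SENDGRID_)' heroku-config.env
```

The surviving lines are the candidates to present for confirmation.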
Blueprint path (Step 3A): Env vars are already embedded in the render.yaml on each service (non-secret values inline, secrets marked sync: false for the user to fill in during Blueprint apply). No separate MCP call is needed — skip to Step 5.
MCP path (Step 3B): Call Render update_environment_variables with confirmed vars (supports bulk set, merges by default).
Follow the data migration guide to migrate Postgres and Redis data. The guide covers sub-steps 5a through 5e in detail. Summary of the flow:
- 5a — Prepare: find targets with `list_postgres_instances()` and `list_key_value()`, check the source DB size, and verify the Render CLI (`render --version`), `pg_dump`, and `pg_restore` are installed.
- 5b — Credentials: get source credentials via `pg_credentials` (MCP) or user CLI paste. For Key Value, construct a Dashboard deeplink from the ID.
- 5c — Postgres transfer: small databases pipe through `render psql` (no Render connection string needed); 2-50 GB uses `pg_dump -Fc` + `pg_restore` with an external connection string from the Dashboard (faster, compressed, parallel restore).
- 5d — Redis transfer: `redis-cli` dump/restore with the Dashboard-provided Render URL.
- 5e — Verify: run read-only checks via `query_render_postgres`, and compare against the Heroku source if MCP is available.

After the user confirms database migration is complete, run through each check in order. Stop at the first failure, fix it, and redeploy before continuing.
```
list_deploys(serviceId: "<service-id>", limit: 1)
```
Expect status: "live". If status is failed, inspect build and runtime logs immediately.
Hit the health endpoint (or /) and confirm a 200 response. If there is no health endpoint, verify the app binds to 0.0.0.0:$PORT (not localhost).
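When there is no health endpoint, a quick repo scan can catch hard-coded localhost bindings — a heuristic sketch only; adjust the include globs for the app's runtime:

```shell
# Heuristic: flag source lines that bind to localhost/127.0.0.1 instead of 0.0.0.0
grep -rnE 'listen.*(localhost|127\.0\.0\.1)' --include='*.js' --include='*.py' .
```

Any hit is worth rewriting to bind `0.0.0.0` and read the port from `$PORT`.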
```
list_logs(resource: ["<service-id>"], level: ["error"], limit: 50)
```
Look for clear failure signatures: missing env vars, connection refused, module not found, port binding errors.
Confirm all required env vars are set — especially secrets marked sync: false during Blueprint apply. Ensure the app binds to 0.0.0.0:$PORT.
```
get_metrics(
  resourceId: "<service-id>",
  metricTypes: ["http_request_count", "cpu_usage", "memory_usage"]
)
```
Verify CPU and memory are within expected ranges for the selected plan.
```
query_render_postgres(postgresId: "<postgres-id>", sql: "SELECT count(*) FROM <key_table>")
```
Run a read-only query on a key table to confirm data was restored correctly. Compare row counts against the Heroku source if possible.
Present a health summary after all checks pass.
Instruct user to:
- Verify the migrated app at `[service-name].onrender.com`
- Cut over DNS to the Render service only once everything checks out

If the migration fails at any point:
- Env var problems: call `update_environment_variables` with `replace: true` to overwrite, or fix individual vars.
- Full rollback: once `maintenance_off` is called on Heroku, the original app is fully operational again.

Key principle: Heroku stays fully functional until the user explicitly cuts over DNS. The migration is additive until that final step.
If Heroku authentication fails, run `heroku login` or check `HEROKU_API_KEY`.