From magic-powers
Use when improving Azure Pipelines performance — caching dependencies, parallel job strategies, artifact management between stages, test result publishing, code coverage gates, and reducing pipeline runtime.
npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers

This skill uses the workspace's default tool permissions.
- Pipeline takes more than 15 minutes and needs to run faster
```yaml
# npm — cache node_modules using package-lock.json as the key
- task: Cache@2
  displayName: Cache npm packages
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: 'npm | "$(Agent.OS)"'
    path: $(npm_config_cache)
- script: npm ci  # use ci, not install (respects the lockfile, deterministic)
```
```yaml
# NuGet — cache the global packages folder
- task: Cache@2
  displayName: Cache NuGet packages
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json,!**/bin/**,!**/obj/**'
    restoreKeys: 'nuget | "$(Agent.OS)"'
    path: $(NUGET_PACKAGES)
- task: DotNetCoreCLI@2
  inputs:
    command: restore
    feedsToUse: select
```
```yaml
# pip — cache Python packages
- task: Cache@2
  displayName: Cache pip packages
  inputs:
    key: 'pip | "$(Agent.OS)" | requirements.txt'
    restoreKeys: 'pip | "$(Agent.OS)"'
    path: $(PIP_CACHE_DIR)
- script: pip install -r requirements.txt
```
Cache hits reduce dependency install time from 2-5 minutes to seconds. The key must include the lockfile so the cache is invalidated whenever dependencies change.
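The cache tasks above read their paths from pipeline variables that are not defined automatically. A minimal sketch of defining them — the paths here are conventional choices under the pipeline workspace, not requirements:

```yaml
# Assumed variable definitions for the cache paths used by the Cache@2 tasks
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages
  PIP_CACHE_DIR: $(Pipeline.Workspace)/.pip
```

Keeping the caches under `$(Pipeline.Workspace)` ensures they survive between the restore and save steps on the same agent.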
```yaml
# Run independent validation jobs in parallel within a stage
stages:
- stage: Validate
  jobs:
  - job: Lint          # runs in parallel
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: npm run lint
  - job: TypeCheck     # runs in parallel
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: npm run typecheck
  - job: UnitTests     # runs in parallel
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: npm test
  - job: BuildDocker   # runs in parallel
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: docker build .
- stage: IntegrationTests
  dependsOn: Validate  # waits for ALL Validate jobs to succeed
  jobs:
  - job: IntTests
    steps:
    - script: npm run test:integration
```
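Stages can fan out the same way jobs do: stages that declare the same `dependsOn` run concurrently. A hedged sketch — the stage names and scripts are illustrative, not from the pipeline above:

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo build
# Both stages depend only on Build, so they run in parallel after it succeeds
- stage: Deploy_EU
  dependsOn: Build
  jobs:
  - job: EU
    steps:
    - script: echo deploy eu
- stage: Deploy_US
  dependsOn: Build
  jobs:
  - job: US
    steps:
    - script: echo deploy us
```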
```yaml
# Split a large test suite across multiple agents
jobs:
- job: TestSplit
  strategy:
    parallel: 4  # spin up 4 identical agents
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: |
      # Azure DevOps provides these variables automatically:
      #   System.JobPositionInPhase: 1, 2, 3, or 4
      #   System.TotalJobsInPhase: 4
      # --splits/--group come from the pytest-split plugin (pip install pytest-split)
      python -m pytest tests/ \
        --splits $(System.TotalJobsInPhase) \
        --group $(System.JobPositionInPhase) \
        --junitxml=test-results-$(System.JobPositionInPhase).xml
  - task: PublishTestResults@2
    condition: always()
    inputs:
      testResultsFiles: test-results-$(System.JobPositionInPhase).xml
```
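Position-based sharding is the core idea behind the split above. A minimal round-robin sketch — not the actual pytest-split algorithm, which balances groups using recorded test durations:

```python
def shard(tests, position, total):
    """Return the slice of tests one agent should run.

    position is 1-based, matching System.JobPositionInPhase;
    total matches System.TotalJobsInPhase.
    """
    return [t for i, t in enumerate(tests) if i % total == position - 1]

# Every test lands in exactly one shard across the 4 agents
all_tests = [f"test_{i}" for i in range(10)]
shards = [shard(all_tests, p, 4) for p in range(1, 5)]
```

Because each agent computes its slice from the same ordered list, no coordination between agents is needed at runtime.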
```yaml
# Run tests with coverage collection
- task: DotNetCoreCLI@2
  inputs:
    command: test
    arguments: '--collect:"XPlat Code Coverage" --results-directory $(Agent.TempDirectory)'
# Publish test results — shows in the Tests tab, required for failure visibility
- task: PublishTestResults@2
  condition: always()  # CRITICAL: publish even when tests fail, so failures show up
  inputs:
    testResultsFormat: VSTest
    testResultsFiles: '$(Agent.TempDirectory)/**/*.trx'
# Publish code coverage — shows in the Coverage tab, enables trending over time
- task: PublishCodeCoverageResults@2
  condition: always()
  inputs:
    summaryFileLocation: $(Agent.TempDirectory)/**/coverage.cobertura.xml
    pathToSources: $(Build.SourcesDirectory)/src
# Optional: fail the build if line coverage drops below a threshold
- script: |
    # The report lands under a GUID subfolder of $(Agent.TempDirectory) — locate it first
    REPORT=$(find "$(Agent.TempDirectory)" -name coverage.cobertura.xml | head -1)
    COVERAGE=$(python -c "import xml.etree.ElementTree as ET; \
      print(round(float(ET.parse('$REPORT').getroot().attrib['line-rate'])*100))")
    echo "Coverage: $COVERAGE%"
    [ "$COVERAGE" -ge 80 ] || { echo "Coverage below 80%!"; exit 1; }
```
```yaml
# Build stage: compile and publish the artifact exactly once
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)/app
    - task: PublishPipelineArtifact@1
      inputs:
        artifactName: app-package
        targetPath: $(Build.ArtifactStagingDirectory)/app
# Deploy Dev: download the SAME artifact built above
- stage: Deploy_Dev
  dependsOn: Build
  jobs:
  - deployment: DeployDev
    environment: dev
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: app-package
          - task: AzureWebApp@1
            inputs:
              azureSubscription: my-service-connection  # required: name of your Azure service connection
              appName: myapp-dev
              package: $(Pipeline.Workspace)/app-package
# Deploy Prod: download the SAME artifact — the identical binary that was tested
- stage: Deploy_Prod
  dependsOn: Deploy_Dev
  jobs:
  - deployment: DeployProd
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: app-package  # same artifact as deployed to Dev
          - task: AzureWebApp@1
            inputs:
              azureSubscription: my-service-connection  # required: name of your Azure service connection
              appName: myapp-prod
              package: $(Pipeline.Workspace)/app-package
```
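When deployment lives in a separate pipeline, the same build-once artifact can be consumed through a pipeline resource. A hedged sketch — `MyApp-CI` and the alias `ci` are illustrative names, not from the pipeline above:

```yaml
# Consume the artifact published by another pipeline (names are illustrative)
resources:
  pipelines:
  - pipeline: ci        # local alias used by the download step
    source: MyApp-CI    # name of the pipeline that published app-package
    trigger: true       # run this pipeline when MyApp-CI completes

steps:
- download: ci
  artifact: app-package  # lands in $(Pipeline.Workspace)/ci/app-package
```

Note the download path gains the alias as a subfolder, unlike `download: current`.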
```shell
# Identify slow stages via the Azure DevOps CLI
az pipelines runs list \
  --pipeline-ids 5 \
  --project MyProject \
  --result succeeded \
  --top 10 \
  --output table

# Get the timeline for a specific run (shows per-job durations)
az pipelines runs show \
  --id 12345 \
  --project MyProject \
  --open  # opens in the browser with a visual timeline
```
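The same timeline is also available as JSON from the REST API (`GET .../_apis/build/builds/{id}/timeline`), whose `records` array carries `type`, `name`, `startTime`, and `finishTime` per entry. A sketch of ranking jobs by duration — the sample data is invented, and real timestamps include fractional seconds omitted here for brevity:

```python
from datetime import datetime

def slowest_jobs(records, top=3):
    """Rank Job-type timeline records by wall-clock duration, longest first."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    def seconds(r):
        start = datetime.strptime(r["startTime"], fmt)
        finish = datetime.strptime(r["finishTime"], fmt)
        return (finish - start).total_seconds()
    jobs = [r for r in records if r.get("type") == "Job"]
    return sorted(((r["name"], seconds(r)) for r in jobs),
                  key=lambda kv: kv[1], reverse=True)[:top]
```

Running this over a few recent builds shows which job to split or cache first.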
Rules of thumb:
- Build + unit test stage: target < 10 minutes
- Integration test stage: target < 15 minutes
- Full pipeline (build through dev deploy): target < 25 minutes
Key points:
- `System.JobPositionInPhase` and `System.TotalJobsInPhase` identify each test shard
- `condition: always()` on PublishTestResults, so results are visible in the Tests tab even when tests fail
- `strategy.parallel` fans one job definition out across identical agents

Common pitfalls:
- `npm install` instead of `npm ci` (ignores the lockfile, produces non-deterministic installs)
- Missing `condition: always()` on PublishTestResults (results and coverage data go missing exactly when tests fail — the least useful time)
- Cache keyed on `package.json` rather than the lockfile, so the key misses dependency changes
- `maxParallel` too high — can exceed available parallel job slots and cause queuing

Related skills:
- ado-pipeline-design — the pipeline structure to optimize; parallelism requires well-separated stages
- ado-pipeline-security — do not cache credential files or secrets alongside dependencies