From magic-powers
Use when designing or implementing test automation — choosing the right automation framework (Playwright, pytest, JUnit), Page Object Model, selector strategies, test isolation, managing flaky tests, and CI integration.
```shell
npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers
```

This skill uses the workspace's default tool permissions.
- Starting a new test automation project
Playwright (recommended for web E2E):
✅ Cross-browser (Chromium, Firefox, WebKit)
✅ Auto-wait for elements (no explicit waits needed)
✅ Network interception built-in
✅ TypeScript/JavaScript/Python/Java
✅ Trace viewer for debugging failures
Use for: Web UI automation, API testing
pytest (Python backend/API):
✅ Fixtures for setup/teardown
✅ Parameterization built-in
✅ Rich plugin ecosystem (pytest-cov, pytest-xdist)
Use for: API testing, unit tests, data pipeline testing
JUnit 5 / TestNG (Java):
✅ Native Java ecosystem
✅ Parameterized tests, test lifecycle annotations
Use for: Java application testing
Cypress (alternative web E2E):
✅ Developer-friendly, real-time reload
❌ Limited cross-browser support (Chromium family and Firefox; WebKit is experimental), no multi-tab support
Use for: Component testing, developer-owned tests
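As a sketch of pytest's built-in parameterization mentioned above — `normalize_email` is a made-up function used only to show the table-driven pattern:

```python
import pytest

def normalize_email(raw: str) -> str:
    # Toy implementation, assumed for the example
    return raw.strip().lower()

# One test body, many cases: pytest generates a separate test per row.
@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("User@Test.com", "user@test.com"),
        ("  user@test.com ", "user@test.com"),
        ("USER@TEST.COM", "user@test.com"),
    ],
)
def test_normalize_email(raw: str, expected: str) -> None:
    assert normalize_email(raw) == expected
```

Each row appears as its own test in the report, so a single failing input is pinpointed without duplicating test code.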
```typescript
// pages/LoginPage.ts — Page Object
import { expect, type Page } from '@playwright/test';

export class LoginPage {
  constructor(private page: Page) {}

  // Locators as getters (lazy evaluation)
  get emailInput() { return this.page.getByLabel('Email'); }
  get passwordInput() { return this.page.getByLabel('Password'); }
  get submitButton() { return this.page.getByRole('button', { name: 'Sign in' }); }
  get errorMessage() { return this.page.getByRole('alert'); }

  async login(email: string, password: string) {
    await this.emailInput.fill(email);
    await this.passwordInput.fill(password);
    await this.submitButton.click();
  }

  async expectLoginError(message: string) {
    await expect(this.errorMessage).toContainText(message);
  }
}
```
```typescript
// tests/auth.spec.ts — Test using POM
import { test } from '@playwright/test';
import { LoginPage } from '../pages/LoginPage';

test('invalid password shows error', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await page.goto('/login');
  await loginPage.login('user@test.com', 'wrongpassword');
  await loginPage.expectLoginError('Invalid credentials');
});
```
```typescript
// Priority order for selectors:

// 1. Role-based (most resilient)
page.getByRole('button', { name: 'Submit' });
page.getByRole('textbox', { name: 'Email' });

// 2. Label-based
page.getByLabel('Password');

// 3. Text content
page.getByText('Confirm order');

// 4. Test ID (explicit, stable)
page.getByTestId('checkout-button'); // data-testid="checkout-button"

// 5. CSS/XPath (last resort — brittle)
page.locator('#submit-btn'); // ❌ avoid — breaks on refactor
page.locator('//div[@class="btn"]'); // ❌ avoid
```
```python
# conftest.py — pytest fixtures for test isolation
import pytest
from playwright.sync_api import sync_playwright

@pytest.fixture(scope="session")
def browser():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        yield browser
        browser.close()

@pytest.fixture(scope="function")
def page(browser):
    context = browser.new_context()
    page = context.new_page()
    yield page
    context.close()  # fresh context per test = isolation

@pytest.fixture
def logged_in_page(page, test_user):
    """Pre-authenticated page — reuse auth state"""
    page.goto('/login')
    page.get_by_label('Email').fill(test_user.email)
    page.get_by_label('Password').fill(test_user.password)
    page.get_by_role('button', name='Sign in').click()
    return page

@pytest.fixture
def test_user(db):
    """Create test user, cleanup after test"""
    user = db.create_user(email='test@example.com', password='Test123!')
    yield user
    db.delete_user(user.id)  # cleanup
```
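The yield-based fixtures above work because pytest treats everything after `yield` as teardown. A self-contained sketch of that lifecycle, using a hypothetical in-memory `FakeDB` in place of the real test database:

```python
from types import SimpleNamespace

class FakeDB:
    """In-memory stand-in for the real test database (hypothetical)."""
    def __init__(self):
        self.users = {}
        self._next_id = 1

    def create_user(self, email, password):
        user = SimpleNamespace(id=self._next_id, email=email, password=password)
        self._next_id += 1
        self.users[user.id] = user
        return user

    def delete_user(self, user_id):
        self.users.pop(user_id, None)

def test_user_fixture(db):
    """Same shape as the test_user fixture above (pytest decorator omitted)."""
    user = db.create_user(email='test@example.com', password='Test123!')
    yield user
    db.delete_user(user.id)  # teardown: runs after the test finishes

# Mimic what pytest does: run setup, hand the value to the test, then
# finish the generator to trigger teardown.
db = FakeDB()
gen = test_user_fixture(db)
user = next(gen)            # setup phase: user exists during the test
assert user.id in db.users
next(gen, None)             # teardown phase: code after `yield` runs
assert db.users == {}
```

Because teardown lives next to setup in the same function, each test leaves the database exactly as it found it, which is the isolation property the fixtures above rely on.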
Root causes of flaky tests:
1. Timing issues — hardcoded sleeps, no proper waits
Fix: Use explicit waits (await page.waitForSelector), never time.sleep()
2. Test interdependence — tests share state
Fix: Each test creates its own data, cleans up after itself
3. Environment differences — works locally, fails in CI
Fix: Use Docker for consistent environments, seed data deterministically
4. Random data — tests rely on dynamic content
Fix: Seed with fixed data, mock random number generators
5. Network instability — external API calls
Fix: Mock external APIs in tests, use contract tests for real integration
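Fix #1 above can be sketched as a small polling helper: instead of a fixed sleep, poll the condition and return as soon as it holds. (`wait_until` is a hypothetical helper; real Playwright/pytest code would use the framework's built-in waits instead.)

```python
import time

def wait_until(condition, timeout: float = 5.0, interval: float = 0.05) -> None:
    """Poll `condition` until it holds, instead of sleeping a fixed amount.

    A fixed time.sleep(5) is either too long (slow suite) or too short
    (still flaky); polling returns as soon as the condition is true.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Usage: the condition is already true, so the wait returns immediately.
state = {"ready": False}
state["ready"] = True
wait_until(lambda: state["ready"])
```

This is the same idea behind Playwright's auto-wait: the upper bound (timeout) only matters on failure, so passing tests run at full speed.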
Tracking flaky tests:
- Tag flaky tests: @pytest.mark.flaky(reruns=3) (requires the pytest-rerunfailures plugin)
- Track in CI: collect flaky test metrics over time
- Quarantine: move flaky tests to separate suite, fix before re-enabling
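Fix #5 above (mock external APIs) can be sketched with the standard library's `unittest.mock`. The function and its injected `client` are hypothetical; the point is that the code under test never constructs its own HTTP client, so a test can substitute a canned response:

```python
from unittest import mock

def latest_usd_rate(client) -> float:
    """Code under test: `client` is an injected HTTP wrapper (hypothetical),
    so tests never hit the real external API."""
    payload = client.get("/rates/USD")
    return payload["rate"]

# In a test, replace the client with a Mock returning canned data.
fake_client = mock.Mock()
fake_client.get.return_value = {"rate": 1.08}

assert latest_usd_rate(fake_client) == 1.08
fake_client.get.assert_called_once_with("/rates/USD")
```

The mock removes network instability from the unit test; a separate contract test against the real API then verifies the canned response still matches reality.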
```yaml
# GitHub Actions
- name: Run Playwright tests
  run: npx playwright test --reporter=html
  env:
    BASE_URL: ${{ env.TEST_BASE_URL }}

- name: Upload test results
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: playwright-report
    path: playwright-report/
    retention-days: 7

# Parallel execution with pytest (requires pytest-xdist)
- name: Run pytest in parallel
  run: pytest tests/ -n auto --dist=load
```
Checklist:
- Any time.sleep() / hardcoded waits? (use auto-wait or explicit conditions)

Red flags:
- sleep() everywhere (timing = flaky)
- Tests sharing state (order-dependent)
- CSS selectors hardcoded in test files (brittle)

Quick wins:
- Add data-testid to new components
- Configure test retry for known-flaky tests

Worst antipattern:
- time.sleep(5) as the "fix" for timing issues (makes tests slower AND still flaky)

Related skills:
- qc-test-design — test cases designed there are automated here
- qc-test-data — fixture patterns for test data creation
- ado-pipeline-optimization — publishing automation results in CI
- test-driven-development — TDD and automation complement each other