Write comprehensive pytest tests for PySpark data pipelines using **LIVE DATA** sources: **$ARGUMENTS**
# Write Tests - pytest + PySpark with Live Data

Write comprehensive pytest tests for PySpark data pipelines using **LIVE DATA** sources: **$ARGUMENTS**

## Current Testing Context

- Test framework: !`[ -f pytest.ini ] && echo "pytest configured" || echo "pytest setup needed"`
- Target: $ARGUMENTS (file/layer to test)
- Test location: !`ls -d tests/ test/ 2>/dev/null | head -1 || echo "tests/ (will create)"`
- Live data available: Bronze/Silver/Gold layers with real FVMS, CMS, NicheRMS tables

## Core Principle: TEST WITH LIVE DATA

**ALWAYS use real data from Bronze/Silver/Gold layers**. ...
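The live-data principle above can be sketched as a small pytest module. Everything here is an illustrative assumption rather than part of the original document: the table name `silver.fvms_vehicles`, the column names, and the null-rate tolerance are hypothetical, and plain row dicts stand in for collected DataFrame rows so the sketch runs without a Spark cluster.

```python
# Sketch of a live-data quality check in the pytest style described above.
# Table name, columns, and thresholds are hypothetical examples.

def null_rate(rows, column):
    """Fraction of rows where `column` is None.

    Stands in for a DataFrame aggregation such as
    df.where(F.col(column).isNull()).count() / df.count().
    """
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is None) / len(rows)


def test_vehicle_ids_rarely_null():
    # In a real suite these rows would come from live data, e.g.
    # spark.table("silver.fvms_vehicles").collect()  (hypothetical table).
    rows = [
        {"vehicle_id": "V1", "status": "active"},
        {"vehicle_id": "V2", "status": None},
        {"vehicle_id": None, "status": "retired"},
    ]
    # One null out of three rows; the 0.5 tolerance is illustrative.
    assert null_rate(rows, "vehicle_id") <= 0.5
```

In an actual PySpark suite the same assertion would run against a session-scoped `spark` fixture and real Silver-layer tables, with thresholds agreed per pipeline.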
Systematically add test coverage for all local code changes using specialized review and development agents. Add tests for uncommitted changes (including untracked files); if everything is committed, cover the latest commit instead.
Write comprehensive unit and integration tests with framework-specific best practices: **$ARGUMENTS**
Generate automated test code from requirements, source code, or recorded browser actions