Guides writing expert Rust tests using 'What Could Break?' framework, five transformations from superficial to expert, flake hunting, intent-based assertions, naming, and self-review checklist. Triggers on Rust test writing, design, quality improvement, coverage review.
Write tests that catch real bugs. Every test must guard a specific invariant -- not just prove the code "works."
Before writing any test, answer these four questions:
If you can only answer #1, your test is a happy-path test. Answer all four and you have a regression suite.
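For instance, here is a minimal sketch of a test that guards a specific breakage mode rather than the happy path (the function and names are hypothetical, for illustration only):

```rust
// Hypothetical function under test: clamps a requested retry count into range.
fn clamp_retries(requested: i64) -> u32 {
    requested.clamp(0, 10) as u32
}

// Guards the invariant "negative input never yields a huge retry count".
// What could break: an unchecked `as u32` cast would wrap -1 to 4294967295.
#[test]
fn clamp_retries_maps_negative_input_to_zero() {
    assert_eq!(clamp_retries(-1), 0);
}

#[test]
fn clamp_retries_caps_excessive_values_at_ten() {
    assert_eq!(clamp_retries(1_000), 10);
}
```

Each test name records which breakage mode it guards, so a failure reads as a bug report.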
Each transformation shows a superficial test pattern and its expert replacement.
// BEFORE: proves nothing about the rest of the struct
let result = parse_config(input);
assert!(result.is_ok());
// AFTER: catches any unexpected field change
use pretty_assertions::assert_eq;
let result = parse_config(input)?;
assert_eq!(result, Config {
    name: "default".into(),
    timeout: Duration::from_secs(30),
    retries: 3,
    verbose: false,
});
// BEFORE: one test, one path
#[test]
fn test_parse_config() {
    let cfg = parse("valid input").unwrap();
    assert!(cfg.is_valid());
}
// AFTER: 3-6 tests covering happy, error, edge, platform
#[test]
fn parse_config_returns_defaults_for_minimal_input() { .. }
#[test]
fn parse_config_rejects_negative_timeout() { .. }
#[test]
fn parse_config_preserves_unknown_fields_as_extensions() { .. }
#[test]
fn parse_config_handles_empty_string_gracefully() { .. }
#[cfg(windows)]
#[test]
fn parse_config_normalizes_backslash_paths() { .. }
Sibling _tests.rs file
// BEFORE: tests pollute the production file diff
// foo.rs
pub fn compute() -> u32 { 42 }
#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn it_works() { assert_eq!(compute(), 42); }
}
// AFTER: production code and test code in sibling files
// foo.rs
pub fn compute() -> u32 { 42 }
#[cfg(test)]
#[path = "foo_tests.rs"]
mod tests;
// foo_tests.rs
use super::*;
#[test]
fn compute_returns_expected_value() { assert_eq!(compute(), 42); }
For mod.rs modules, use mod_tests.rs.
// BEFORE: silent breakage when fields change
let input: Config = serde_json::from_str(r#"{"name":"test","timeout":30}"#)?;
// AFTER: compile-time safety for field additions/renames
fn make_config(name: &str, timeout_secs: u64) -> Config {
    Config {
        name: name.to_string(),
        timeout: Duration::from_secs(timeout_secs),
        retries: 0,
        verbose: false,
    }
}
let input = make_config("test", 30);
Factory functions for domain objects let each test construct exactly the fixture it needs. No shared mutable state. No JSON parsing at test time.
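A common extension of the factory pattern (sketched here with illustrative field names) is to return a baseline fixture and let each test override only the fields it cares about via struct-update syntax:

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
struct Config {
    name: String,
    timeout: Duration,
    retries: u32,
    verbose: bool,
}

// Baseline fixture; tests override only what they care about.
fn base_config() -> Config {
    Config {
        name: "default".to_string(),
        timeout: Duration::from_secs(30),
        retries: 3,
        verbose: false,
    }
}

#[test]
fn retry_override_leaves_other_fields_untouched() {
    // `..base_config()` fills every field not named explicitly.
    let cfg = Config { retries: 0, ..base_config() };
    assert_eq!(cfg.retries, 0);
    assert_eq!(cfg.timeout, Duration::from_secs(30));
}
```

Because the factory is a plain function, adding a field to Config forces a compile error in exactly one place instead of every test.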
HashMap in fixtures -> BTreeMap for determinism
// BEFORE: test passes 99% of the time, flakes in CI
let mut map = HashMap::new();
map.insert("b", 2);
map.insert("a", 1);
assert_eq!(format!("{map:?}"), r#"{"a": 1, "b": 2}"#); // order not guaranteed
// AFTER: deterministic iteration order
let mut map = BTreeMap::new();
map.insert("b", 2);
map.insert("a", 1);
assert_eq!(format!("{map:?}"), r#"{"a": 1, "b": 2}"#); // always this order
Use BTreeMap whenever output order affects assertions or snapshots.
Bolin's single most frequent pattern (97+ references across 30+ commits). When a test is flaky, follow this exact protocol:
Never use sleep as a synchronization primitive. Reproduce the flake under concurrency with:
cargo nextest run -p <crate> -j 2 --no-fail-fast --stress-count 50 --status-level leak
// BEFORE (timing-dependent):
tokio::time::sleep(Duration::from_millis(100)).await;
assert_eq!(events.len(), 2);
assert_eq!(events[0].type_name, "item.create");
assert_eq!(events[1].type_name, "audio.delta");
// AFTER (event-driven, order-independent):
wait_for_event(&rx, |e| e.type_name == "item.create").await;
wait_for_event(&rx, |e| e.type_name == "audio.delta").await;
// OR: collect, sort, compare
let mut types: Vec<_> = events.iter().map(|e| &e.type_name).collect();
types.sort();
assert_eq!(types, vec!["audio.delta", "item.create"]);
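The wait_for_event helper above is illustrative. As a std-only sketch of the same event-driven idea, std::sync::mpsc's recv_timeout gives a bounded wait with no blind sleep (all names here are hypothetical):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Wait for an event matching `pred`, with a per-receive deadline instead of
// a fixed sleep. Non-matching events are skipped; a dead channel yields None.
fn wait_for<T>(rx: &mpsc::Receiver<T>, pred: impl Fn(&T) -> bool) -> Option<T> {
    let deadline = Duration::from_secs(2);
    while let Ok(ev) = rx.recv_timeout(deadline) {
        if pred(&ev) {
            return Some(ev);
        }
    }
    None
}

#[test]
fn wait_for_finds_event_regardless_of_arrival_order() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send("audio.delta").unwrap();
        tx.send("item.create").unwrap();
    });
    assert_eq!(wait_for(&rx, |e| *e == "item.create"), Some("item.create"));
}
```

Note the deadline bounds each receive, not the whole wait; a production helper would track a total deadline, but the shape of the fix is the same: block on the event, never on the clock.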
Common flake sources to watch for:
turn/started emitted optimistically before state is actually ready
Replace exact command-string matching with intent-based semantic matching. Check that the test observes the right INTENT (operation + target) rather than a specific command format that varies across platforms or refactors.
// BEFORE: brittle -- breaks if command formatting changes
assert_eq!(cmd.to_string(), "rm -rf /tmp/workspace/build");
// AFTER: intent-based -- asserts the operation and target
assert_eq!(cmd.operation(), Operation::Remove);
assert!(cmd.target().ends_with("workspace/build"));
assert!(cmd.is_recursive());
When exact strings are unavoidable, assert on the semantically meaningful parts (path suffix, flag presence) rather than the full formatted string.
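A std-only sketch of asserting on the meaningful parts of a rendered command string (the command text and helper are illustrative, not a real API):

```rust
// Illustrative helper: pull the final argument (the target path) out of a
// rendered command string.
fn last_arg(cmd: &str) -> &str {
    cmd.split_whitespace().last().unwrap_or("")
}

#[test]
fn remove_command_targets_build_dir_recursively() {
    // The exact prefix and path root may vary by platform or refactor.
    let cmd = "rm -rf /tmp/workspace/build";
    // Assert intent: recursive flag present, target has the expected suffix.
    assert!(cmd.split_whitespace().any(|a| a == "-rf" || a == "-r"));
    assert!(last_arg(cmd).ends_with("workspace/build"));
}
```

The test survives a change from absolute to relative paths, or from `-rf` to `-r -f`-style flag splitting after a small tweak, because it names the parts it actually depends on.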
Pattern: {subject}_{scenario}_{expected_outcome}
A failed test name must be an actionable bug description. When it fails in CI, the name alone tells you what broke. The name should read as a specification: if it fails, you know exactly what invariant was violated.
Exemplary names:
sandbox_detection_requires_keywords
sandbox_detection_ignores_non_sandbox_mode
aggregate_output_rebalances_when_stderr_is_small
parse_config_rejects_negative_timeout
permissions_profiles_reject_writes_outside_workspace_root
permissions_profiles_allow_network_enablement
legacy_sandbox_mode_config_builds_split_policies_without_drift
under_development_features_are_disabled_by_default
usage_limit_reached_error_formats_free_plan
unexpected_status_cloudflare_html_is_simplified
root_write_plus_carveouts_still_requires_platform_sandbox
explicit_unreadable_paths_prevent_auto_approval_for_external_sandbox
denied_hosts_take_priority_over_allowed_hosts_glob
Anti-pattern names: test_parse, it_works, test_config_1, happy_path.
| Scenario | Tool |
|---|---|
| HTTP mocking | wiremock::MockServer |
| Filesystem isolation | TempDir (tempfile crate) |
| Async tests | #[tokio::test] |
| UI / output snapshots | insta::assert_snapshot! |
| Struct comparison | pretty_assertions::assert_eq |
| Enum variant checks | assert_matches! |
| Deterministic collections | BTreeMap over HashMap |
| Flake stress-testing | cargo nextest run --stress-count 50 |
wiremock: stub endpoints with Mock::given().respond_with(). Assert request bodies after the test.
TempDir: isolate filesystem tests; never hardcode /tmp or C:\.
insta: snapshot UI / output with assert_snapshot!.
pretty_assertions: drop-in replacement for assert_eq! calls. Gives colored diffs on failure. Import at the top of every test file.
nextest: rerun with -j 2 --no-fail-fast --stress-count 50 to confirm the fix holds under concurrency.
After writing tests, verify every item. Fix every violation before presenting the tests.
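For enum variant checks without an extra crate, std's matches! macro covers the common case (the assert_matches crate's assert_matches! adds richer failure messages). A sketch with a hypothetical error enum:

```rust
// Hypothetical error type; variants chosen for illustration.
#[derive(Debug)]
enum ConfigError {
    Empty,
    NotANumber,
    NegativeTimeout(i64),
}

fn parse_timeout(raw: &str) -> Result<u64, ConfigError> {
    let t = raw.trim();
    if t.is_empty() {
        return Err(ConfigError::Empty);
    }
    let n: i64 = t.parse().map_err(|_| ConfigError::NotANumber)?;
    if n < 0 {
        return Err(ConfigError::NegativeTimeout(n));
    }
    Ok(n as u64)
}

#[test]
fn parse_timeout_rejects_negative_values() {
    // matches! checks the variant (and payload) without requiring PartialEq
    // on the error type.
    assert!(matches!(
        parse_timeout("-5"),
        Err(ConfigError::NegativeTimeout(-5))
    ));
}
```

This keeps error-path tests precise: the assertion fails if the function starts returning a different variant, not just if it stops erroring.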
[ ] Uses pretty_assertions::assert_eq (not std assert_eq)
[ ] Compares entire objects, not individual fields
[ ] Each test guards a specific invariant (not just "it works")
[ ] Test names follow {subject}_{scenario}_{expected_outcome}
[ ] Test names encode the invariant being guarded
[ ] TempDir for any filesystem tests (no hardcoded paths)
[ ] No process environment mutation (no std::env::set_var)
[ ] Error paths tested (not just happy path)
[ ] At least 3 tests for any non-trivial function
[ ] BTreeMap used where iteration order affects assertions
[ ] Test file is a sibling _tests.rs, not inline mod tests {}
[ ] No timing-dependent assertions (no sleep -> assert)
[ ] Order-independent where event order is non-deterministic
[ ] String assertions use intent matching, not exact format