VCR.py HTTP recording for Python tests. Use when testing Python code making HTTP requests, recording API responses for replay, or creating deterministic tests for external services.
/plugin marketplace add yonatangross/skillforge-claude-plugin
/plugin install skillforge-complete@skillforge

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- checklists/vcr-checklist.md
- templates/vcr-cassette.py

Record and replay HTTP interactions for Python tests.
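For orientation, here is a minimal sketch of VCR.py's core context-manager API outside pytest; the endpoint URL and cassette path are illustrative placeholders:

```python
import vcr
import requests

# First run records the interaction into the cassette file;
# subsequent runs replay it without touching the network.
with vcr.use_cassette("tests/cassettes/example.yaml"):
    response = requests.get("https://api.example.com/users/1")
    print(response.status_code)
```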
```python
# conftest.py
import pytest


@pytest.fixture(scope="module")
def vcr_config():
    return {
        "cassette_library_dir": "tests/cassettes",
        "record_mode": "once",
        "match_on": ["uri", "method"],
        "filter_headers": ["authorization", "x-api-key"],
        "filter_query_parameters": ["api_key", "token"],
    }
```
```python
import pytest
import requests


@pytest.mark.vcr()
def test_fetch_user():
    response = requests.get("https://api.example.com/users/1")
    assert response.status_code == 200
    assert response.json()["name"] == "John Doe"


@pytest.mark.vcr("custom_cassette.yaml")
def test_with_custom_cassette():
    response = requests.get("https://api.example.com/data")
    assert response.status_code == 200
```
```python
import pytest
from httpx import AsyncClient


@pytest.mark.asyncio
@pytest.mark.vcr()
async def test_async_api_call():
    async with AsyncClient() as client:
        response = await client.get("https://api.example.com/data")
        assert response.status_code == 200
        assert "items" in response.json()
```
```python
import os

import pytest


@pytest.fixture(scope="module")
def vcr_config():
    # CI: never record, only replay existing cassettes
    if os.environ.get("CI"):
        record_mode = "none"
    else:
        record_mode = "new_episodes"
    return {"record_mode": record_mode}
```
| Mode | Behavior |
|---|---|
| `once` | Record if missing, then replay |
| `new_episodes` | Record new, replay existing |
| `none` | Never record (CI) |
| `all` | Always record (refresh) |
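These modes map directly onto `vcr.use_cassette` and the `vcr_config` fixture. When a single cassette needs refreshing, the mode can be forced for just that interaction; a sketch with plain vcrpy, with the cassette path and URL as placeholders:

```python
import vcr
import requests

# Force a one-off re-record, e.g. after the upstream API changed.
with vcr.use_cassette("tests/cassettes/users.yaml", record_mode="all"):
    response = requests.get("https://api.example.com/users/1")
```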
```python
import json

import pytest


def filter_request_body(request):
    """Redact sensitive data from the request body before recording."""
    if request.body:
        try:
            body = json.loads(request.body)
            if "password" in body:
                body["password"] = "REDACTED"
            if "api_key" in body:
                body["api_key"] = "REDACTED"
            request.body = json.dumps(body)
        except json.JSONDecodeError:
            pass
    return request


@pytest.fixture(scope="module")
def vcr_config():
    return {
        "filter_headers": ["authorization", "x-api-key"],
        "before_record_request": filter_request_body,
    }
```
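Response bodies can be scrubbed symmetrically with `before_record_response`, which receives the response in its serialized dict form (body text under `body`/`string`, as in the cassette example further below). A sketch, assuming the API echoes back a `token` field:

```python
import json

import pytest


def filter_response_body(response):
    """Redact tokens echoed back in recorded response bodies."""
    # In the serialized response the body text sits under ["body"]["string"].
    raw = response["body"].get("string")
    if raw:
        try:
            body = json.loads(raw)
            if "token" in body:
                body["token"] = "REDACTED"
            response["body"]["string"] = json.dumps(body)
        except (json.JSONDecodeError, UnicodeDecodeError):
            pass
    return response


@pytest.fixture(scope="module")
def vcr_config():
    return {
        "before_record_response": filter_response_body,
    }
```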
```python
import json

import pytest


def llm_request_matcher(r1, r2):
    """Match LLM requests while ignoring dynamic fields in the body."""
    if r1.uri != r2.uri or r1.method != r2.method:
        return False
    body1 = json.loads(r1.body)
    body2 = json.loads(r2.body)
    # Ignore fields that change between runs
    for field in ["request_id", "timestamp"]:
        body1.pop(field, None)
        body2.pop(field, None)
    return body1 == body2


@pytest.fixture(scope="module")
def vcr_config():
    return {
        "custom_matchers": [llm_request_matcher],
    }
```
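If the test plugin in use does not honor a `custom_matchers` key, the stock VCR.py route is to register the matcher by name and list it in `match_on`; a sketch with plain vcrpy, reusing `llm_request_matcher` from above (cassette path illustrative):

```python
import vcr

my_vcr = vcr.VCR(cassette_library_dir="tests/cassettes")
# llm_request_matcher is the function defined above
my_vcr.register_matcher("llm_body", llm_request_matcher)


@my_vcr.use_cassette("llm_completion.yaml", match_on=["method", "uri", "llm_body"])
def test_llm_completion():
    ...
```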
```yaml
# tests/cassettes/test_fetch_user.yaml
interactions:
- request:
    body: null
    headers:
      Content-Type: application/json
    method: GET
    uri: https://api.example.com/users/1
  response:
    body:
      string: '{"id": 1, "name": "John Doe"}'
    status:
      code: 200
version: 1
```
| Decision | Recommendation |
|---|---|
| Record mode | `once` for dev, `none` for CI |
| Cassette format | YAML (readable) |
| Sensitive data | Always filter headers/body |
| Custom matchers | Use for LLM APIs |
Avoid `all` record mode in CI (it makes live calls).

Related skills: msw-mocking (frontend equivalent), integration-testing (API testing patterns), llm-testing (LLM-specific patterns).

Keywords: record HTTP, vcr.use_cassette, record mode, capture HTTP
Keywords: replay, cassette, playback, mock replay
Keywords: async, aiohttp, httpx async, async cassette
Keywords: filter, scrub, redact, sensitive data, before_record
Keywords: matcher, match on, request matching, custom match
Keywords: LLM cassette, OpenAI recording, Anthropic recording