reverse-api-engineer
Reverse engineers web APIs from browser-captured HAR files using Playwright, generating production-ready Python clients. Useful for undocumented APIs, web automation, or traffic analysis.
Install: npx claudepluginhub kalil0321/reverse-api-engineer --plugin reverse-api-engineer
This skill enables you to reverse engineer web APIs via the following workflow:

[User Task] -> [Browser Capture] -> [HAR Analysis] -> [API Client Generation] -> [Testing & Refinement]
This skill provides Python utilities for HAR analysis located at:
Script Directory: plugins/reverse-api-engineer/skills/reverse-engineering-api/scripts/
Available Scripts:
- har_filter.py - Filter HAR files to API endpoints only
- har_analyze.py - Extract structured endpoint information
- har_validate.py - Validate generated code against HAR analysis
- har_utils.py - Shared utility functions

Use these scripts in sequence for optimal code generation:
# 1. Filter HAR to remove noise (static assets, analytics, CDN)
python {SKILL_DIR}/scripts/har_filter.py {har_path} --output filtered.har --stats
# 2. Analyze endpoints and extract patterns
python {SKILL_DIR}/scripts/har_analyze.py filtered.har --output analysis.json
# 3. Read analysis for code generation guidance
cat analysis.json
# 4. Generate API client code based on analysis
# 5. Validate generated code
python {SKILL_DIR}/scripts/har_validate.py api_client.py analysis.json
har_filter.py benefits: strips static assets, analytics, and CDN noise so only API traffic remains.
har_analyze.py benefits: extracts structured endpoint information (methods, URLs, headers, payloads) into analysis.json.
har_validate.py benefits: checks the generated client against the HAR analysis before delivery.
Use TodoWrite to track workflow progress:
Each task has a status of pending, in_progress, or completed, with exactly one task in_progress at a time. Example TodoWrite usage:
TodoWrite([
{"content": "Filter HAR using har_filter.py", "status": "in_progress", "activeForm": "Filtering HAR"},
{"content": "Analyze HAR using har_analyze.py", "status": "pending", "activeForm": "Analyzing endpoints"},
{"content": "Generate API client", "status": "pending", "activeForm": "Generating code"},
{"content": "Validate using har_validate.py", "status": "pending", "activeForm": "Validating code"},
{"content": "Test implementation", "status": "pending", "activeForm": "Testing API client"}
])
CRITICAL: Task tracking ensures complete workflow execution. Never skip tasks or stop early.
When starting a browser session for API capture:
A {run_id} is generated and traffic is recorded to ~/.reverse-api/runs/har/{run_id}/recording.har. Navigate autonomously to trigger the API calls needed.
When the browser closes, note the HAR file location:
HAR file saved to: ~/.reverse-api/runs/har/{run_id}/recording.har
HAR files are JSON with this structure:
{
"log": {
"entries": [
{
"request": {
"method": "GET|POST|PUT|DELETE",
"url": "https://api.example.com/endpoint",
"headers": [...],
"postData": {...}
},
"response": {
"status": 200,
"headers": [...],
"content": {...}
}
}
]
}
}
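Given that structure, walking a HAR with only the standard library takes a few lines; a minimal sketch (har_analyze.py performs a more thorough version of this):

```python
import json

def list_api_entries(har_path: str) -> list[dict]:
    """Return (method, url, status) summaries for every entry in a HAR file."""
    with open(har_path) as f:
        har = json.load(f)
    summaries = []
    for entry in har["log"]["entries"]:
        req, resp = entry["request"], entry["response"]
        summaries.append({
            "method": req["method"],
            "url": req["url"],
            "status": resp["status"],
        })
    return summaries
```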
Filter out noise by excluding:
- Static assets: .js, .css, .png, .jpg, .svg, .woff, .ico
- Analytics: google-analytics, segment, mixpanel, hotjar
- Ads and tracking: doubleclick, adsense, facebook.com/tr
- Infrastructure: cloudflare, cdn., static.

Focus on:
- Paths containing /api/, /v1/, /v2/, /graphql

For each relevant endpoint, extract the request method, URL and query parameters, required headers, request payload, and response structure.
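The rules above reduce to a simple URL predicate; a minimal sketch of the idea (har_filter.py applies this kind of logic, with more care):

```python
from urllib.parse import urlparse

STATIC_EXTENSIONS = (".js", ".css", ".png", ".jpg", ".svg", ".woff", ".ico")
NOISE_PATTERNS = (
    "google-analytics", "segment", "mixpanel", "hotjar",
    "doubleclick", "adsense", "facebook.com/tr",
    "cloudflare", "cdn.", "static.",
)
API_HINTS = ("/api/", "/v1/", "/v2/", "/graphql")

def is_api_request(url: str) -> bool:
    """Heuristic: keep likely API calls, drop assets and trackers."""
    path = urlparse(url).path.lower()
    if path.endswith(STATIC_EXTENSIONS):
        return False  # static asset
    if any(pattern in url.lower() for pattern in NOISE_PATTERNS):
        return False  # analytics / ads / CDN noise
    return any(hint in path for hint in API_HINTS)
```

A substring heuristic like this is deliberately loose; tune the pattern lists per target site.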
Generate a Python module with:
{output_dir}/
api_client.py # Main API client class
README.md # Usage documentation
"""
Auto-generated API client for {domain}
Generated from HAR capture on {date}
"""
import requests
from typing import Optional, Dict, Any, List
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class {ClassName}Client:
"""API client for {domain}."""
def __init__(
self,
base_url: str = "{base_url}",
session: Optional[requests.Session] = None,
):
self.base_url = base_url.rstrip("/")
self.session = session or requests.Session()
self._setup_session()
def _setup_session(self):
"""Configure session with default headers."""
self.session.headers.update({
"User-Agent": "Mozilla/5.0 (compatible)",
"Accept": "application/json",
# Add other required headers
})
def _request(
self,
method: str,
endpoint: str,
**kwargs,
) -> requests.Response:
"""Make an HTTP request with error handling."""
url = f"{self.base_url}{endpoint}"
try:
response = self.session.request(method, url, **kwargs)
response.raise_for_status()
return response
except requests.exceptions.RequestException as e:
logger.error(f"Request failed: {e}")
raise
# Generated endpoint methods go here
def get_example(self, param: str) -> Dict[str, Any]:
"""
Fetch example data.
Args:
param: Description of parameter
Returns:
JSON response data
"""
response = self._request("GET", f"/api/example/{param}")
return response.json()
# Example usage
if __name__ == "__main__":
client = {ClassName}Client()
# Example calls
All generated code must include type hints, docstrings, error handling with logging, and a runnable example usage block.
After generating the client, test it against the live API.
You have up to 5 attempts to fix issues:
Attempt 1: Initial implementation
- What was tried
- What failed (if anything)
- What was changed
Attempt 2: Refinement
...
| Issue | Solution |
|---|---|
| 403 Forbidden | Add missing headers, check authentication |
| Bot detection | Switch to Playwright with stealth mode |
| Rate limiting | Add delays, respect Retry-After headers |
| Session expiry | Implement token refresh logic |
| CORS errors | Browser-only restriction; server-side Python requests are unaffected |
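For the rate-limiting row, one way to honor Retry-After with an exponential-backoff fallback; a sketch, with illustrative names that are not part of the generated template:

```python
import time
from typing import Any

def request_with_retries(
    session: Any,  # a requests.Session, or anything exposing .request()
    method: str,
    url: str,
    max_retries: int = 3,
    backoff: float = 1.0,
    **kwargs: Any,
):
    """Retry on 429/503, preferring the server's Retry-After header."""
    for attempt in range(max_retries + 1):
        response = session.request(method, url, **kwargs)
        if response.status_code not in (429, 503) or attempt == max_retries:
            return response
        retry_after = response.headers.get("Retry-After")
        # Retry-After may be an integer number of seconds; otherwise
        # fall back to exponential backoff.
        if retry_after and retry_after.isdigit():
            delay = float(retry_after)
        else:
            delay = backoff * (2 ** attempt)
        time.sleep(delay)
    return response
```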
Before capture, you may want to map the domain to understand its structure. Run scripts/mapper.py to quickly discover how a site is laid out. This is useful for generalizing your scripts across multi-tenant websites: for example, with an ATS like Ashby or Workday, mapping helps find other companies on the same platform.
python scripts/mapper.py https://example.com
Run scripts/sitemap.py to extract URLs from sitemaps:
python scripts/sitemap.py https://example.com
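The bundled script's internals are not shown here, but sitemap URL extraction boils down to reading <loc> elements; a minimal standard-library sketch, assuming a standard sitemaps.org XML document:

```python
import xml.etree.ElementTree as ET

# Default namespace used by sitemaps.org documents.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def extract_sitemap_urls(xml_text: str) -> list[str]:
    """Pull every <loc> URL from a sitemap or sitemap-index document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text]
```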
HAR captures are stored under ~/.reverse-api/runs/har/{run_id}/ and generated clients under ./{task_name}/.

Example. User: "Create an API client for the Apple Jobs website"
1. [Browser Capture]
Launch browser with HAR recording
Navigate to jobs.apple.com
Perform search, browse listings
Close browser
HAR saved to: ~/.reverse-api/runs/har/{run_id}/recording.har
Note: you can monitor browser requests with the Playwright MCP
2. [HAR Analysis]
Found endpoints:
- GET /api/role/search?query=...
- GET /api/role/{id}
Authentication: None required (public API)
3. [Generate Client]
Create: ./{task_name}/api_client.py
4. [Test]
Ran example usage - Success!
5. [Summary]
Generated Apple Jobs API client with:
- search_roles(query, location, page)
- get_role(role_id)
Files: ./{task_name}/