Research North American mountain peaks and generate comprehensive route beta reports
Research North American mountain peaks and generate comprehensive route beta reports. Use this when users request route information, climbing beta, or trip planning for specific peaks. The skill activates on queries like "research Mt Baker" or "generate route beta for Forbidden Peak," aggregating data from PeakBagger, SummitPost, WTA, AllTrails, weather APIs, and avalanche centers.
/plugin marketplace add dreamiurg/claude-mountaineering-skills
/plugin install mountaineering-skills@mountaineering-skills-marketplace

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- assets/report-template.md
- examples/2025-10-23-mount-si.md
- examples/2025-11-06-mount-adams.md
- examples/2025-11-06-wolf-peak.md
- tools/README.md
- tools/calculate_daylight.py
- tools/cloudscrape.py
- tools/fetch_avalanche.py
- tools/fetch_weather.py
- tools/pyproject.toml
- tools/test_calculate_daylight.py
- tools/test_fetch_avalanche.py
- tools/test_fetch_weather.py
- tools/uv.lock

Research mountain peaks across North America and generate comprehensive route beta reports combining data from multiple sources including PeakBagger, SummitPost, WTA, AllTrails, weather forecasts, avalanche conditions, and trip reports.
Data Sources: This skill aggregates information from specialized mountaineering websites (PeakBagger, SummitPost, Washington Trails Association, AllTrails, The Mountaineers, and regional avalanche centers). The quality of the generated report depends on the availability of information on these sources. If your target peak lacks coverage on these websites, the report may contain limited details. The skill works best for well-documented peaks in North America.
Use this skill when the user requests:
Examples:
Research Progress:
Goal: Identify and validate the specific peak to research.
Extract Peak Name from user message
Search PeakBagger using peakbagger-cli:
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger peak search "{peak_name}" --format json
Handle Multiple Matches:
If multiple peaks found: Use AskUserQuestion to present options
If single match found: Confirm with user
If no matches found:
Extract Peak ID:
- peak_id field
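A minimal sketch of this Phase 1 flow, assuming the CLI's search output is a JSON list whose entries carry peak_id, name, and elevation keys (only peak_id is documented above; the other keys are illustrative):

```python
import json
import subprocess

# Run the documented search command and capture its JSON output.
result = subprocess.run(
    ["uvx", "--from",
     "git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0",
     "peakbagger", "peak", "search", "Forbidden Peak", "--format", "json"],
    capture_output=True, text=True, check=True,
)
matches = json.loads(result.stdout)

if not matches:
    print("No matches -- retry with name variations (see Appendix).")
elif len(matches) == 1:
    peak_id = matches[0]["peak_id"]   # single match: confirm with the user
else:
    for m in matches:                 # multiple matches: present via AskUserQuestion
        print(m.get("peak_id"), m.get("name"), m.get("elevation"))
```

Goal: Get detailed peak information and coordinates needed for location-based data gathering.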
This phase must complete before Phase 3, as coordinates are required for weather, daylight, and avalanche data.
Retrieve detailed peak information using the peak ID from Phase 1:
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger peak show {peak_id} --format json
This returns structured JSON with:
Error Handling:
Once coordinates are obtained from this step, immediately proceed to Phase 3.
Goal: Gather comprehensive route information from all available sources.
Execution Strategy: Execute ALL steps in parallel to minimize total execution time.
All Phase 3 steps run simultaneously. Do not wait for any step to complete before starting others.
Step 1: Search for route descriptions:
WebSearch queries (run in parallel):
1. "{peak_name} route description climbing"
2. "{peak_name} summit post route"
3. "{peak_name} mountain project"
4. "{peak_name} site:mountaineers.org route"
5. "{peak_name} site:alltrails.com"
6. "{peak_name} standard route"
Step 2: Fetch top relevant pages:
Universal Fetching Strategy:
For ANY website, use this two-tier approach:
cd skills/route-researcher/tools
uv run python cloudscrape.py "{url}"
Then parse the returned HTML to extract needed information.
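As a sketch of that two-tier pattern (in the skill itself the first tier is the WebFetch tool; a plain requests call stands in for it here, and only the cloudscrape.py invocation is taken from above):

```python
import subprocess
import requests

def fetch_page(url: str, timeout: int = 30) -> str:
    """Tier 1: plain HTTP GET. Tier 2: fall back to cloudscrape.py when the
    site blocks direct fetches (Cloudflare 403/503, connection errors)."""
    try:
        resp = requests.get(url, timeout=timeout,
                            headers={"User-Agent": "Mozilla/5.0"})
        if resp.status_code == 200:
            return resp.text
    except requests.RequestException:
        pass  # fall through to the scraper
    result = subprocess.run(
        ["uv", "run", "python", "cloudscrape.py", url],
        capture_output=True, text=True, check=True,
        cwd="skills/route-researcher/tools",
    )
    return result.stdout
```

Common sites and their extraction prompts: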
AllTrails (try WebFetch, fallback to cloudscrape.py):
WebFetch Prompt: "Extract route information including:
- Trail name
- Route description and key features
- Difficulty rating
- Distance and elevation gain
- Estimated time
- Route type (loop, out & back, point to point)
- Best season
- Known hazards or warnings
- Current conditions if mentioned in recent reviews"
Save AllTrails URL for Phase 4:
SummitPost, Mountaineers.org, PeakBagger (try WebFetch, fallback to cloudscrape.py):
WebFetch Prompt: "Extract route information including:
- Route name and difficulty rating
- Approach details and trailhead
- Route description and key sections
- Technical difficulty (YDS class, scramble grade, etc.)
- Crux description
- Distance and elevation gain
- Estimated time
- Known hazards and conditions"
Mountain Project, WTA (try WebFetch, fallback to cloudscrape.py):
WebFetch Prompt: "Extract route information including:
- Approach details
- Route description and key sections
- Technical difficulty (YDS class, scramble grade, etc.)
- Crux description
- Distance and elevation gain
- Estimated time
- Known hazards"
Error Handling:
Retrieve ascent data and patterns using the peak ID:
Step 1: Get overall ascent statistics
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger peak stats {peak_id} --format json
This returns:
Step 2: Get detailed ascent list based on activity level
Based on the total count from Step 1, adaptively retrieve ascents:
For popular peaks (>50 ascents total):
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger peak ascents {peak_id} --format json --within 1y
Recent data (1 year) is sufficient for active peaks.
For moderate peaks (10-50 ascents total):
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger peak ascents {peak_id} --format json --within 5y
Expand to 5 years to get meaningful sample size.
For rarely-climbed peaks (<10 ascents total):
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger peak ascents {peak_id} --format json
Get all available ascent data regardless of age.
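The three tiers condense into a small helper; a minimal sketch, assuming total comes from the Step 1 stats output and peak_id from Phase 1:

```python
def ascent_window(total: int) -> list[str]:
    """Map activity level to the --within filter used above."""
    if total > 50:        # popular peak: recent data is sufficient
        return ["--within", "1y"]
    if total >= 10:       # moderate peak: widen for a meaningful sample
        return ["--within", "5y"]
    return []             # rarely climbed: take everything available

cmd = ["uvx", "--from",
       "git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0",
       "peakbagger", "peak", "ascents", str(peak_id),
       "--format", "json", *ascent_window(total)]
# Run cmd with subprocess, as in Phase 1.
```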
Additional filters (apply as needed):
- --with-gpx: Focus on ascents with GPS tracks (useful for route finding)
- --with-tr: Focus on ascents with trip reports (useful for conditions)

Extract and save for Phase 4 (Report Generation):

- Ascent date (date field)
- Climber name (climber.name field)
- Trip report length (trip_report.word_count field)
- GPS track availability (has_gpx field)
- Ascent URL (url field)

Error Handling:
Systematically search for trip report pages across major platforms:
WebSearch queries (run in parallel):
1. "{peak_name} site:wta.org" - WTA hike page with trip reports
2. "{peak_name} site:alltrails.com" - AllTrails page with reviews
3. "{peak_name} site:summitpost.org" - SummitPost route page
4. "{peak_name} site:mountaineers.org" - Mountaineers route information
5. "{peak_name} trip report site:cascadeclimbers.com" - Forum discussions
Extract and save URLs for Phase 4 (Report Generation):
Optional WebFetch for conditions data:
Requires coordinates from Phase 2 (latitude, longitude, elevation):
Gather weather data from multiple sources in parallel:
Source 1: Open-Meteo Weather API (Primary)
Use WebFetch to get detailed mountain weather forecast:
URL: https://api.open-meteo.com/v1/forecast?latitude={lat}&longitude={lon}&elevation={peak_elevation_m}&hourly=temperature_2m,precipitation,freezing_level_height,snow_depth,wind_speed_10m,wind_gusts_10m,weather_code&daily=temperature_2m_max,temperature_2m_min,precipitation_sum,precipitation_probability_max,wind_speed_10m_max&timezone=auto&forecast_days=7
Prompt: "Parse the JSON response and extract:
- Daily weather summary for 6-7 days (date, conditions based on weather_code, temps, precip probability)
- Freezing level height in feet for each day (convert from meters)
- Snow depth if applicable
- Wind speeds and gusts
- Organize by calendar date with day-of-week
- **IMPORTANT:** The timezone parameter is set to 'auto', so dates are in local time. Calculate day-of-week from the actual date strings in the JSON response (YYYY-MM-DD format). Today's date in local time is {current_date}.
- Map weather_code to descriptive conditions (0=clear, 1-3=partly cloudy, 45-48=fog, 51-67=rain, 71-77=snow, 80-82=showers, 95-99=thunderstorms)"
Weather Code to Icon/Description mapping:
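A minimal sketch of that mapping and of the unit and day-of-week handling the prompt asks for (aggregating the hourly freezing_level_height into a daily figure is elided):

```python
from datetime import date

# WMO weather interpretation codes, grouped exactly as in the prompt above.
WMO_CODES: dict[int, str] = {0: "clear"}
WMO_CODES.update({c: "partly cloudy" for c in (1, 2, 3)})
WMO_CODES.update({c: "fog" for c in (45, 48)})
WMO_CODES.update({c: "rain" for c in range(51, 68)})
WMO_CODES.update({c: "snow" for c in range(71, 78)})
WMO_CODES.update({c: "showers" for c in (80, 81, 82)})
WMO_CODES.update({c: "thunderstorms" for c in range(95, 100)})

def describe_day(iso_date: str, code: int, freezing_m: float) -> str:
    """One summary line per forecast day; dates are already local (timezone=auto)."""
    d = date.fromisoformat(iso_date)   # day-of-week computed from the date string
    return (f"{d:%a} {iso_date}: {WMO_CODES.get(code, 'unknown')}, "
            f"freezing level ~{freezing_m * 3.281:,.0f} ft")
```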
Source 2: Open-Meteo Air Quality API
Use WebFetch to get air quality forecast:
URL: https://air-quality-api.open-meteo.com/v1/air-quality?latitude={lat}&longitude={lon}&hourly=pm2_5,pm10,us_aqi&timezone=auto&forecast_days=7
Prompt: "Parse the JSON and determine air quality for the forecast period:
- Check US AQI values: 0-50 (good), 51-100 (moderate), 101-150 (unhealthy for sensitive), 151-200 (unhealthy), 201-300 (very unhealthy), 301+ (hazardous)
- Check PM2.5 and PM10 levels
- Identify any days with AQI >100 (concerning for outdoor activities)
- Return overall assessment and any days to be cautious"
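The AQI banding from that prompt, as a sketch:

```python
# US AQI breakpoints, per the prompt above.
AQI_BANDS = [(50, "good"), (100, "moderate"),
             (150, "unhealthy for sensitive groups"),
             (200, "unhealthy"), (300, "very unhealthy")]

def aqi_category(us_aqi: float) -> str:
    for upper, label in AQI_BANDS:
        if us_aqi <= upper:
            return label
    return "hazardous"

def concerning_days(daily_max_aqi: dict[str, float]) -> list[str]:
    """Dates whose max US AQI exceeds 100 -- caution for outdoor exertion."""
    return [d for d, aqi in daily_max_aqi.items() if aqi > 100]
```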
Source 3: NOAA/NWS Point Forecast (Supplemental)
Use WebFetch for detailed text forecast and warnings:
URL: https://forecast.weather.gov/MapClick.php?textField1={lat}&textField2={lon}
Prompt: "Extract:
- Detailed text forecasts for context
- Any weather warnings or alerts
- Hazardous weather outlook"
Source 4: NWAC Mountain Weather (if applicable)
If in avalanche season (roughly Nov-Apr), check NWAC mountain weather:
WebFetch: https://nwac.us/mountain-weather-forecast/
Prompt: "Extract general mountain weather patterns for the Cascades region including synoptic pattern and multi-day trend"
Data to extract and save for Phase 4:
- https://open-meteo.com/en/docs#latitude={lat}&longitude={lon}&elevation={peak_elevation_m}&hourly=&daily=temperature_2m_max,temperature_2m_min,precipitation_sum&timezone=auto
- https://open-meteo.com/en/docs/air-quality-api#latitude={lat}&longitude={lon}&hourly=&daily=&timezone=auto

Error Handling:
Requires coordinates from Phase 2. Only for peaks with glaciers or avalanche terrain (elevation >6000ft in winter months):
cd skills/route-researcher/tools
uv run python fetch_avalanche.py --region "North Cascades" --coordinates "{lat},{lon}"
Expected Output: JSON with NWAC avalanche forecast
Error Handling:
Requires coordinates from Phase 2 (latitude, longitude):
Use WebFetch to get sunrise/sunset data from Sunrise-Sunset.org API:
URL: https://api.sunrise-sunset.org/json?lat={latitude}&lng={longitude}&date={YYYY-MM-DD}&formatted=0
Prompt: "Extract the following data from the JSON response:
- Sunrise time (convert from UTC to local time if needed)
- Sunset time (convert from UTC to local time if needed)
- Day length (convert seconds to hours and minutes)
- Civil twilight begin/end (useful for alpine starts)
- Solar noon
Format times in a user-friendly way (e.g., '6:45 AM', '8:30 PM')"
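A minimal sketch of the UTC-to-local conversion and friendly formatting (the timezone default is an assumption for Washington peaks; tools/calculate_daylight.py also exists for this purpose):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
import requests

def daylight(lat: float, lon: float, day: str,
             tz: str = "America/Los_Angeles") -> str:
    """Query Sunrise-Sunset.org (formatted=0 -> ISO 8601 UTC) and localize."""
    url = ("https://api.sunrise-sunset.org/json"
           f"?lat={lat}&lng={lon}&date={day}&formatted=0")
    r = requests.get(url, timeout=30).json()["results"]
    local = ZoneInfo(tz)

    def fmt(iso: str) -> str:
        t = datetime.fromisoformat(iso).astimezone(local)
        return t.strftime("%I:%M %p").lstrip("0")   # '6:45 AM' style

    length = timedelta(seconds=int(r["day_length"]))
    return (f"sunrise {fmt(r['sunrise'])}, sunset {fmt(r['sunset'])}, "
            f"civil twilight {fmt(r['civil_twilight_begin'])}-"
            f"{fmt(r['civil_twilight_end'])}, day length {length}")
```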
Data to extract and save for Phase 4:
Error Handling:
WebSearch queries:
1. "{peak_name} trailhead access"
2. "{peak_name} permit requirements"
3. "{peak_name} forest service road conditions"
Extract:
Goal: Identify trip reports across all sources for comprehensive route beta coverage.
This step synthesizes information from:
Selection Strategy:
Gather a representative sample of reports covering different perspectives:
Note: Report length/word count is NOT a quality indicator. A concise 50-word report with specific route beta is more valuable than a 500-word narrative about the drive. Focus on reports that have substantive content regardless of length.
PeakBagger Trip Reports (uses data from Step 3B):
From the ascent data already retrieved in Step 3B:
Identify reports with trip report content:
- trip_report.word_count > 0

Extract for each report:

- Date (date field)
- Climber name (climber.name field)
- Trip report length (trip_report.word_count field)
- Report URL (url field)

Select diverse sample:
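A sketch of that selection using the Step 3B field names; the month-bucket heuristic for seasonal spread is illustrative:

```python
def pick_diverse_reports(ascents: list[dict], n: int = 8) -> list[dict]:
    """Keep ascents with real trip-report content, then spread across seasons."""
    with_tr = [a for a in ascents
               if a.get("trip_report", {}).get("word_count", 0) > 0]
    with_tr.sort(key=lambda a: a["date"], reverse=True)   # ISO dates sort cleanly
    by_month: dict[str, dict] = {}
    for a in with_tr:                   # newest report per calendar month
        by_month.setdefault(a["date"][5:7], a)
    sample = list(by_month.values())[:n]
    for a in with_tr:                   # top up with recents if months are sparse
        if len(sample) >= n:
            break
        if a not in sample:
            sample.append(a)
    return sample
```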
WTA Trip Reports (if WTA URL found in Step 3C):
If WTA URL was found, extract trip reports using the AJAX endpoint:
# Construct endpoint: {wta_url}/@@related_tripreport_listing
cd skills/route-researcher/tools
uv run python cloudscrape.py "{wta_url}/@@related_tripreport_listing"
Parse HTML to extract for each report: date, author, trip report URL. Target 10-15 individual URLs, prioritize recent but include variety of dates.
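A parsing sketch; the CSS selectors are assumptions about WTA's markup and will need adjusting against the HTML cloudscrape.py actually returns:

```python
from bs4 import BeautifulSoup

def parse_wta_listing(html: str, limit: int = 15) -> list[dict]:
    soup = BeautifulSoup(html, "html.parser")
    reports = []
    for item in soup.select(".item"):                     # assumed container class
        link = item.select_one("a[href*='trip_report']")  # assumed link pattern
        when = item.select_one(".elapsed-time")           # assumed date node
        if link:
            reports.append({
                "url": link["href"],
                "author": link.get_text(strip=True),
                "date": when.get_text(strip=True) if when else None,
            })
    return reports[:limit]    # target 10-15 per the step above
```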
Error Handling:
Mountaineers.org Trip Reports (if Mountaineers URL found in Step 3C):
If Mountaineers URL was found, extract trip reports from the trip-reports endpoint:
# Construct endpoint: {mountaineers_url}/trip-reports
cd skills/route-researcher/tools
uv run python cloudscrape.py "{mountaineers_url}/trip-reports"
Parse HTML to extract for each report: date, title, author, trip report URL. Select top 3-5 most recent reports.
Error Handling:
Extract and save for Phase 4 (Report Generation):
High-Quality PeakBagger Reports:
Recent PeakBagger Reports:
WTA Reports:
Mountaineers Reports:
Error Handling:
Goal: Fetch 10-15 trip reports to get representative sample of conditions and experiences (runs after Step 3H identifies candidates).
Selection from Step 3H results:
Fetching:
PeakBagger: Use CLI to fetch full trip report content:
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger ascent show {ascent_id} --format json
WTA/Mountaineers: Use cloudscrape.py to fetch individual trip report pages:
cd skills/route-researcher/tools
uv run python cloudscrape.py "{trip_report_url}"
Extract and organize by theme:
Error Handling:
Phase 3 Execution Summary:
Phase 3 has two execution stages:
Stage 1 - Parallel Execution (Steps 3A through 3H):
Stage 2 - Sequential Execution (Step 3I):
Performance Benefit: Stage 1 total time = max(time(3A), time(3B), ..., time(3H)) instead of summing all step times. Stage 2 runs after Stage 1 completes to use identified reports from Step 3H.
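A sketch of the two stages with a thread pool; step_3a through fetch_trip_reports are hypothetical stand-ins for the steps above:

```python
from concurrent.futures import ThreadPoolExecutor

stage1 = {"3A": step_3a, "3B": step_3b, "3H": step_3h}   # ...through 3H

# Stage 1: launch every independent step at once; wall time ~ the slowest step.
with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(fn) for name, fn in stage1.items()}
    results = {name: f.result() for name, f in futures.items()}

# Stage 2: Step 3I consumes the candidates Step 3H identified.
trip_reports = fetch_trip_reports(results["3H"])
```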
Goal: Analyze gathered data to determine route characteristics and synthesize information.
Based on route descriptions, elevation, and gear mentions, classify as:
Goal: Combine trip reports (Step 3I), route descriptions (Step 3A), and other sources into comprehensive route beta.
Source Priority:
Synthesis Pattern for Route, Crux, and Hazards:
Example (Route Description):
"The standard route follows the East Ridge (Class 3). Multiple trip reports mention a well-cairned use trail branching right at 4,800 ft—this is the correct turn. The use trail climbs through talus (described as 'tedious' and 'ankle-rolling'). In early season, this section may be snow-covered, requiring microspikes."
Apply this pattern to:
Extract Key Information:
From all synthesized data, identify:
Explicitly document what data was not found or unreliable:
Goal: Create comprehensive Markdown document following the template.
Create report in the current working directory: {YYYY-MM-DD}-{peak-name-lowercase-hyphenated}.md
Filename Examples:
- 2025-10-20-mount-baker.md
- 2025-10-20-sahale-peak.md

Location: Reports are generated in the user's current working directory, not in the plugin installation directory.
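A sketch of the filename convention:

```python
import re
from datetime import date

def report_filename(peak_name: str) -> str:
    """{YYYY-MM-DD}-{peak-name-lowercase-hyphenated}.md, created in the CWD."""
    slug = re.sub(r"[^a-z0-9]+", "-", peak_name.lower()).strip("-")
    return f"{date.today():%Y-%m-%d}-{slug}.md"

# report_filename("Sahale Peak") -> e.g. '2025-10-20-sahale-peak.md'
```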
Structure and Formatting:
Read assets/report-template.md and follow it exactly for:
The template is the single source of truth for report formatting. Phase 3 (Data Gathering) specifies what data to extract. This phase (Phase 5) uses the template to determine how to present that data.
Follow these formatting rules to ensure proper Markdown rendering:
Blank lines before lists: ALWAYS add a blank line before starting a bullet or numbered list
✅ CORRECT:
This is a paragraph.

- First bullet
- Second bullet
❌ INCORRECT:
This is a paragraph.
- First bullet (missing blank line)
Blank lines after section headers: Always have a blank line after ## or ### headers
Consistent list formatting:
- Use `-` for unordered lists (not `*` or `+`)

Break up long text blocks:
Bold formatting: Use **text** for emphasis, not for list item headers without bullets
Hazards and Safety: Use bullet lists with sub-items:
**Known Hazards:**

- **Route-finding:** Orange markers help but can be missed
- **Slippery conditions:** Boulders treacherous when wet/icy
- **Weather exposure:** Above treeline sections exposed to elements
Information that continues after colon: Must have blank line before list:
✅ CORRECT:
Winter access adds the following:

- **Additional Distance:** 5.6 miles
- **Additional Elevation:** 1,700 ft
❌ INCORRECT:
Winter access adds the following:
- **Additional Distance:** 5.6 miles (missing blank line)
Use the Write tool to create the file in the current working directory.
Verification:
Goal: Systematically review the generated report for inconsistencies, errors, and quality issues before presenting to the user.
This phase ensures report quality by catching common issues that may occur during automated generation.
Read the complete report file that was just created in Phase 5.
Perform the following checks in order:
1. Factual Consistency:
2. Mathematical Accuracy:
3. Internal Logic:
4. Completeness:
5. Formatting Issues:
6. Source Consistency:
7. Safety & Responsibility:
For each issue found:
Priority for fixes:
Common issues to watch for:
If any issues were found and fixed:
If no issues were found:
Goal: Inform user of completion and next steps.
Report to user:
Example completion message:
Route research complete for Mount Baker!
Report saved to: 2025-10-20-mount-baker.md
Summary: Mount Baker via Coleman-Deming route is a moderate glacier climb (Class 3) with significant crevasse hazards. The route involves 5,000+ ft elevation gain and typically requires an alpine start. Weather and avalanche forecasts are included.
Next steps: Review the report and verify current conditions before your climb. Remember that mountain conditions change rapidly - check recent trip reports and weather forecasts immediately before your trip.
Throughout execution, follow these error handling guidelines:
Every generated report must:
Implemented:
- WTA trip report extraction via AJAX endpoint ({wta_url}/@@related_tripreport_listing)

Pending Implementation:

- fetch_avalanche.py - NWAC avalanche data (currently using WebSearch/WebFetch as fallback)

When Python scripts are not yet implemented:
All commands use --format json for structured output. Run via:
uvx --from git+https://github.com/dreamiurg/peakbagger-cli.git@v1.7.0 peakbagger <command> --format json
Available Commands:
- peak search <query> - Search for peaks by name
- peak show <peak_id> - Get detailed peak information (coordinates, elevation, routes)
- peak stats <peak_id> - Get ascent statistics and temporal patterns
  - --within <period> - Filter by period (e.g., '1y', '5y')
  - --after <YYYY-MM-DD> / --before <YYYY-MM-DD> - Date filters
- peak ascents <peak_id> - List individual ascents with trip report links
  - --within <period> - Filter by period (e.g., '1y', '5y')
  - --with-gpx - Only ascents with GPS tracks
  - --with-tr - Only ascents with trip reports
  - --limit <n> - Max ascents to return (default: 100)
- ascent show <ascent_id> - Get detailed ascent information

Note: For comprehensive command options, run peakbagger --help or peakbagger <command> --help
Common variations to try if initial search fails:
Google Maps (for summit coordinates):
https://www.google.com/maps/search/?api=1&query={latitude},{longitude}
Example: https://www.google.com/maps/search/?api=1&query=48.7768,-121.8144
USGS TopoView (for summit coordinates):
https://ngmdb.usgs.gov/topoview/viewer/#{zoom}/{latitude}/{longitude}
Example: https://ngmdb.usgs.gov/topoview/viewer/#17/48.7768/-121.8144
Note: Use decimal degree format for coordinates. TopoView uses zoom level in URL (15-17 works well for peaks).
If coordinates available (e.g., from Mountaineers.org place information):
https://www.google.com/maps/search/?api=1&query={latitude},{longitude}
Example: https://www.google.com/maps/search/?api=1&query=48.5123,-121.0456
If only trailhead name available:
https://www.google.com/maps/search/?api=1&query={trailhead_name}+{state}
Example: https://www.google.com/maps/search/?api=1&query=Cascade+Pass+Trailhead+WA
Note: Prefer coordinates when available for more precise location.