Detects buildings, cars, ships, solar panels, parking lots, and agriculture fields in geospatial imagery using GeoAI models or GroundedSAM text-prompted segmentation. A GPU is recommended.
You are helping the user run AI object detection on geospatial imagery using geoai.
Input: $@
Follow these steps in order.
Extract:
- `$0` as the model name: buildings, cars, ships, solar-panels, parking-lots, agriculture, or grounded-sam
- `$1` as the input raster path
- `--text PROMPT` for GroundedSAM text-prompted segmentation (required when the model is grounded-sam)
- `--output FILE` for the output vector file (default: `./<model>_detections.gpkg`)

If the model name is not recognized, list the available models and ask the user to pick one.
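The extraction above can be sketched as a small parser. This is an illustrative helper, not part of geoai; the names `parse_args` and `VALID_MODELS` are assumptions:

```python
import shlex

# Valid model arguments, from the list above.
VALID_MODELS = {"buildings", "cars", "ships", "solar-panels",
                "parking-lots", "agriculture", "grounded-sam"}

def parse_args(raw):
    """Split raw input into (model, raster_path, options)."""
    tokens = shlex.split(raw)
    model = tokens[0] if tokens else None
    raster = tokens[1] if len(tokens) > 1 else None
    opts = {"text": None, "output": None}
    i = 2
    while i < len(tokens):
        if tokens[i] == "--text" and i + 1 < len(tokens):
            opts["text"] = tokens[i + 1]
            i += 2
        elif tokens[i] == "--output" and i + 1 < len(tokens):
            opts["output"] = tokens[i + 1]
            i += 2
        else:
            i += 1
    # Apply the documented default output path when none was given.
    if opts["output"] is None and model in VALID_MODELS:
        opts["output"] = f"./{model}_detections.gpkg"
    return model, raster, opts
```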
Model mapping:
| Argument | GeoAI Class |
|---|---|
| buildings | `geoai.BuildingFootprintExtractor` |
| cars | `geoai.CarDetector` |
| ships | `geoai.ShipDetector` |
| solar-panels | `geoai.SolarPanelDetector` |
| parking-lots | `geoai.ParkingSplotDetector` |
| agriculture | `geoai.AgricultureFieldDelineator` |
| grounded-sam | `geoai.GroundedSAM` |
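The table can be mirrored as a lookup, so the class is resolved by name. A sketch only; `MODEL_CLASSES` and `resolve_detector` are hypothetical names, and the class strings come from the table above:

```python
# Argument -> geoai class name, taken from the mapping table.
MODEL_CLASSES = {
    "buildings": "BuildingFootprintExtractor",
    "cars": "CarDetector",
    "ships": "ShipDetector",
    "solar-panels": "SolarPanelDetector",
    "parking-lots": "ParkingSplotDetector",
    "agriculture": "AgricultureFieldDelineator",
    "grounded-sam": "GroundedSAM",
}

def resolve_detector(model_name):
    """Return the geoai class for a model argument, or raise listing valid options."""
    if model_name not in MODEL_CLASSES:
        raise ValueError(
            f"Unknown model {model_name!r}; choose from {sorted(MODEL_CLASSES)}")
    import geoai  # deferred so the mapping itself needs no GPU stack installed
    return getattr(geoai, MODEL_CLASSES[model_name])
```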
Check GPU availability:

```bash
python3 -c "
import torch
if torch.cuda.is_available():
    print(f'GPU: {torch.cuda.get_device_name(0)}')
    print(f'CUDA: {torch.version.cuda}')
    print(f'Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB')
else:
    print('GPU: not available (CPU mode)')
    print('Warning: inference will be significantly slower without a GPU')
"
```
If no GPU is available, warn the user but continue.
If $1 looks like an absolute path, use it directly. Otherwise:
```bash
find "$PWD" -name "$1" -not -path '*/.git/*' 2>/dev/null
```
If no file specified and state exists, check for recently inspected/downloaded files:
```bash
STATE_DIR=""
test -f .geoai-skills/state.json && STATE_DIR=".geoai-skills"
PROJECT_ROOT="$(git rev-parse --show-toplevel 2>/dev/null || echo "$PWD")"
PROJECT_ID="$(echo "$PROJECT_ROOT" | tr '/' '-')"
test -f "$HOME/.geoai-skills/$PROJECT_ID/state.json" && STATE_DIR="$HOME/.geoai-skills/$PROJECT_ID"
```
For the standard models, run:

```bash
python3 -c "
import geoai
detector = geoai.DETECTOR_CLASS()
gdf = detector.predict(
    'INPUT_PATH',
    output_path='OUTPUT_PATH',
)
print(f'Detections: {len(gdf)}')
print(f'Output: OUTPUT_PATH')
print(f'Columns: {list(gdf.columns)}')
if len(gdf) > 0:
    print('---')
    print('Sample (first 5):')
    print(gdf.head().to_string())
"
```
Replace DETECTOR_CLASS with the appropriate class from the mapping table (e.g. BuildingFootprintExtractor).
For grounded-sam, run text-prompted segmentation instead:

```bash
python3 -c "
import geoai
sam = geoai.GroundedSAM()
gdf = sam.predict(
    'INPUT_PATH',
    text_prompt='TEXT_PROMPT',
    output_path='OUTPUT_PATH',
)
print(f'Segments: {len(gdf)}')
print(f'Output: OUTPUT_PATH')
print(f'Columns: {list(gdf.columns)}')
if len(gdf) > 0:
    print('---')
    print('Sample (first 5):')
    print(gdf.head().to_string())
"
```
Replace TEXT_PROMPT with the user's text prompt.
Replace INPUT_PATH and OUTPUT_PATH with actual values before running.
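The substitution can be sketched as plain string replacement. A minimal sketch with hypothetical names (`DETECT_TEMPLATE`, `fill_template`); it assumes paths contain no single quotes, since the values are spliced into `'...'` literals:

```python
# Placeholder script matching the standard-model invocation above.
DETECT_TEMPLATE = """\
import geoai
detector = geoai.DETECTOR_CLASS()
gdf = detector.predict('INPUT_PATH', output_path='OUTPUT_PATH')
print(f'Detections: {len(gdf)}')
"""

def fill_template(detector_class, input_path, output_path):
    """Splice concrete values into the placeholder script.
    Assumes paths contain no single quotes."""
    return (DETECT_TEMPLATE
            .replace("DETECTOR_CLASS", detector_class)
            .replace("INPUT_PATH", input_path)
            .replace("OUTPUT_PATH", output_path))
```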
Summarize the results for the user: the model used, the number of detections, and the output file path.
Then suggest: "Use /geoai-skills:inspect-geo to examine the detection output."
Troubleshooting:
- `import geoai` fails -> delegate to /geoai-skills:install-geoai.
- `import torch` fails -> suggest installing PyTorch: `pip install torch torchvision`.
- GPU runs out of memory -> if the detector supports a `tile_size` parameter, recommend a smaller value.
- Input is a vector file rather than a raster -> suggest running /geoai-skills:process-raster vector-to-raster first.