From aiops-plugin
Fetches Ansible logs via SSH by time range, mode (processed/ignored/all), order, limit, or job number. Stores raw logs locally for incident investigations.
npx claudepluginhub redhat-et/rhdp-rca-plugin
Fetch available logs for further investigation.
[Python] Fetch logs via SSH
cd /path/to/incident-investigator
pip install -r requirements.txt
python -m scripts.cli new --incident-id "inc-001"
python -m scripts.cli run --incident-id "inc-001" --all
python -m scripts.cli status --incident-id "inc-001"
This skill assumes the user has already configured an SSH profile for the Ansible server.
Input: User provides incident context (time window, job IDs, or investigation scope)
Output: Raw log files in local directory (default: ~/etl-logs)
Scripts: scripts/fetch_logs_ssh.py or scripts/fetch_logs_by_job.py
Ensure SSH access is configured:
ssh ${REMOTE_HOST} # Should connect without password prompt
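The same check can be run programmatically. A minimal sketch (the `ssh_ok` helper is an illustration, not part of the shipped scripts): `BatchMode=yes` makes `ssh` exit non-zero instead of prompting for a password, which is exactly the non-interactive behavior the fetch scripts rely on.

```python
import subprocess

def ssh_check_cmd(host: str, timeout: int = 10) -> list[str]:
    """Build an ssh invocation that fails fast instead of prompting.

    BatchMode=yes disables password prompts, so a non-zero exit code
    means key-based access is not set up as the fetch scripts expect.
    """
    return ["ssh", "-o", "BatchMode=yes",
            "-o", f"ConnectTimeout={timeout}", host, "true"]

def ssh_ok(host: str, timeout: int = 10) -> bool:
    """Return True if `host` accepts a non-interactive SSH connection."""
    result = subprocess.run(ssh_check_cmd(host, timeout), capture_output=True)
    return result.returncode == 0
```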
Use this when you want to fetch recent logs or a range of logs based on time.
Determine the scope:
- For recent logs: --mode processed --order desc --limit 10-50
- For a time window: --start-time and/or --end-time to filter by creation date
- To include everything: --mode all (includes both processed and ignored)

Run the fetch script:
# Fetch recent logs with limit
python -m scripts.fetch_logs_ssh \
--mode processed \
--order desc \
--limit 20 \
--local-dir .incidents/<incident-id>/raw_logs
# Fetch logs within a specific time range
python -m scripts.fetch_logs_ssh \
--mode processed \
--start-time "2025-12-09 08:00:00" \
--end-time "2025-12-10 17:00:00" \
--local-dir .incidents/<incident-id>/raw_logs
# Fetch logs from a specific day
python -m scripts.fetch_logs_ssh \
--mode all \
--start-time "2025-12-10" \
--end-time "2025-12-10" \
--local-dir .incidents/<incident-id>/raw_logs
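The time arguments accept progressively shorter forms of `YYYY-MM-DD [HH:MM[:SS]]`. One way to parse that is to try the most specific pattern first, as in this sketch (a hypothetical helper, not necessarily how `fetch_logs_ssh` implements it):

```python
from datetime import datetime

# Accepted patterns, most specific first.
_FORMATS = ["%Y-%m-%d %H:%M:%S", "%Y-%m-%d %H:%M", "%Y-%m-%d"]

def parse_time(value: str) -> datetime:
    """Parse 'YYYY-MM-DD [HH:MM[:SS]]' into a datetime.

    A bare date parses as midnight, so a same-day range such as
    --start-time 2025-12-10 --end-time 2025-12-10 needs the end bound
    extended to the end of the day by the caller.
    """
    for fmt in _FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized time: {value!r}")
```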
Parameters:
- --mode: Select log type (processed, ignored, or all)
- --order: Sort by time (desc = newest first, asc = oldest first)
- --limit: Number of files to fetch (omit for all files)
- --start-time: Filter logs created on or after this time (format: YYYY-MM-DD [HH:MM[:SS]])
- --end-time: Filter logs created on or before this time (format: YYYY-MM-DD [HH:MM[:SS]])
- --local-dir: Where to store logs (default: ~/etl-logs)

Time Filtering Examples:
- "2025-12-10 14:30:45"
- "2025-12-10 14:30"
- "2025-12-10"
- --start-time "2025-12-10 00:00" --limit 10 --order desc

Use this when you know specific job numbers to investigate.
Run the fetch script with job numbers:
python -m scripts.fetch_logs_by_job \
job_1234567 job_1234568 job_1234569 \
--local-dir .incidents/<incident-id>/raw_logs
Or without the 'job_' prefix:
python -m scripts.fetch_logs_by_job \
1234567 1234568 1234569 \
--local-dir .incidents/<incident-id>/raw_logs
Parameters:
- job_numbers: One or more job identifiers (with or without 'job_' prefix)
- --local-dir: Where to store logs (default: ~/etl-logs)

What it fetches:
- All file variants for each job (*.transform-processed, *.transform-ignored, etc.)
- Files named like job_1234567.json.gz.transform-processed

[INFO] Remote host: ${REMOTE_HOST}
[INFO] Remote dir : ${REMOTE_DIR}
[INFO] Local dir : ${LOCAL_DIR}
[INFO] Mode : processed
[INFO] Order : desc
[INFO] Limit : 10
[INFO] Start time : 2025-12-09 08:00:00
[INFO] End time : 2025-12-10 17:00:00
[INFO] Remote cmd : cd ${REMOTE_DIR} && find . -maxdepth 1 -type f -name '*.transform-processed' -newermt '2025-12-09 08:00:00' ! -newermt '2025-12-10 17:00:00' -printf '%T@ %f\n' | sort -rn | cut -d' ' -f2- | head -n 10
[INFO] Running rsync: rsync -avz --progress --files-from=- ${REMOTE_HOST}:${REMOTE_DIR}/ ...
Transfer starting: 10 files
job_1234567.json.gz.transform-processed
456170 100% 2.99MB/s 00:00:00 (xfer#1, to-check=0/10)
...
sent 637 bytes  received 1457000 bytes  1457637.00 bytes/sec
total size is 1470251 speedup is 1.01
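The remote listing command in the `[INFO]` output above follows a fixed shape: `find` filters by name and mtime, `sort`/`cut` order the results, and `head` applies the limit. A simplified Python sketch of how such a command string could be assembled (an assumption about the script's internals, shown for illustration):

```python
import shlex

def build_list_cmd(remote_dir, pattern, start=None, end=None,
                   order="desc", limit=None):
    """Assemble the remote 'find | sort | cut | head' pipeline.

    '%T@ %f\\n' prints '<mtime> <filename>' so the list can be sorted by
    modification time before the timestamps are stripped off again.
    """
    parts = [f"cd {shlex.quote(remote_dir)} &&",
             f"find . -maxdepth 1 -type f -name {shlex.quote(pattern)}"]
    if start:
        parts.append(f"-newermt {shlex.quote(start)}")       # on or after
    if end:
        parts.append(f"! -newermt {shlex.quote(end)}")       # on or before
    parts.append(r"-printf '%T@ %f\n'")
    parts.append("| sort -rn" if order == "desc" else "| sort -n")
    parts.append("| cut -d' ' -f2-")                         # drop mtimes
    if limit:
        parts.append(f"| head -n {limit}")
    return " ".join(parts)
```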
[INFO] Remote host: ${REMOTE_HOST}
[INFO] Remote dir : ${REMOTE_DIR}
[INFO] Local dir : ${LOCAL_DIR}
[INFO] Job numbers: job_1234567, job_1234568
[INFO] Finding files matching job patterns...
[INFO] Found 2 file(s):
- job_1234567.json.gz.transform-processed
- job_1234568.json.gz.transform-ignored
[INFO] Running rsync...
job_1234567.json.gz.transform-processed
456170 100% 2.99MB/s 00:00:00
job_1234568.json.gz.transform-ignored
321450 100% 2.45MB/s 00:00:00
[SUCCESS] Files transferred to ${LOCAL_DIR}
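Job arguments are accepted with or without the `job_` prefix. Normalizing them into glob patterns that match every mode suffix can look like this sketch (assuming the `job_<number>.json.gz.*` naming shown in the output above):

```python
def job_patterns(job_args):
    """Map job identifiers to glob patterns matching every mode suffix.

    '1234567' and 'job_1234567' both normalize to 'job_1234567.json.gz.*',
    which matches .transform-processed, .transform-ignored, etc.
    """
    patterns = []
    for arg in job_args:
        job = arg if arg.startswith("job_") else f"job_{arg}"
        patterns.append(f"{job}.json.gz.*")
    return patterns
```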
Once logs are fetched, proceed to extract errors:
python -m scripts.cli run --incident-id "<id>" --step 2a