Analyze Kubernetes resource lifecycle in Prow CI job artifacts by parsing audit logs and pod logs from GCS, generating interactive HTML reports with timelines
Install via:

/plugin marketplace add openshift-eng/ai-helpers
/plugin install prow-job@ai-helpers

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Skill files:

- CHANGELOG.md
- README.md
- SCRIPTS.md
- create_context_html_files.py
- create_inline_html_files.py
- generate_html_report.py
- generate_report.py
- parse_all_logs.py
- parse_audit_logs.py
- parse_pod_logs.py
- parse_url.py
- prow_job_resource_grep.sh
- report_template.html

This skill analyzes the lifecycle of Kubernetes resources during Prow CI job execution by downloading and parsing artifacts from Google Cloud Storage.
Use this skill when the user wants to:

- Trace the lifecycle of a Kubernetes resource (creation through deletion) during a Prow CI job
- Search audit logs and pod logs for events involving specific resources
- Generate an interactive HTML timeline report of those events
Before starting, verify these prerequisites:
- gcloud CLI installation: check with `which gcloud`
- gcloud authentication (optional): the `test-platform-results` bucket is publicly accessible

The user will provide:
1. Prow job URL: a gcsweb URL containing `test-platform-results/`, e.g.
   https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/30393/pull-ci-openshift-origin-main-okd-scos-e2e-aws-ovn/1978913325970362368/
2. Resource specifications: a comma-delimited list in the format `[namespace:][kind/]name`
   - `pod/etcd-0`: pod named etcd-0 in any namespace
   - `openshift-etcd:pod/etcd-0`: pod in a specific namespace
   - `etcd-0`: any resource named etcd-0 (no kind filter)
   - `pod/etcd-0,configmap/cluster-config`: multiple resources
   - `resource-name-1|resource-name-2`: multiple resources using regex OR
   - `e2e-test-project-api-.*`: all resources matching the pattern
Extract bucket path
- Take everything after `test-platform-results/` in the URL
Extract build_id
- Match `/(\d{10,})/` in the bucket path
Extract prowjob name
- The path segment immediately before the build ID, e.g. `.../pull-ci-openshift-origin-main-okd-scos-e2e-aws-ovn/1978913325970362368/` → `pull-ci-openshift-origin-main-okd-scos-e2e-aws-ovn`
Construct GCS paths
- Bucket: `test-platform-results`
- Base path: `gs://test-platform-results/{bucket-path}/`
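The skill ships `parse_url.py` for this step. As a rough illustration only (the function name and return shape below are assumptions, not that script's actual interface), the extraction could look like:

```python
import re

def parse_gcsweb_url(url: str) -> dict:
    """Extract bucket path, build ID, and prowjob name from a gcsweb URL."""
    # Everything after "test-platform-results/" is the bucket path
    m = re.search(r"test-platform-results/(.+?)/?$", url)
    if not m:
        raise ValueError("URL does not contain test-platform-results/")
    bucket_path = m.group(1)

    # The build ID is a 10-or-more digit path segment
    build = re.search(r"(?:^|/)(\d{10,})(?:/|$)", bucket_path)
    if not build:
        raise ValueError("no build ID found in bucket path")
    build_id = build.group(1)

    # The prowjob name is the path segment immediately before the build ID
    segments = bucket_path.split("/")
    prowjob_name = segments[segments.index(build_id) - 1]
    return {"bucket_path": bucket_path, "build_id": build_id, "prowjob_name": prowjob_name}
```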
For each comma-delimited resource spec:

Parse format `[namespace:][kind/]name`
- Split on `:` to get the namespace (optional)
- Split on `/` to get the kind (optional) and name (required)
- Produce `{namespace, kind, name}`
Validate
- `pod/etcd-0` → `{kind: "pod", name: "etcd-0"}`
- `openshift-etcd:pod/etcd-0` → `{namespace: "openshift-etcd", kind: "pod", name: "etcd-0"}`
- `etcd-0` → `{name: "etcd-0"}`
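A minimal sketch of this spec parsing (a hypothetical helper, shown for clarity):

```python
def parse_resource_spec(spec: str) -> dict:
    """Parse [namespace:][kind/]name into its optional and required parts."""
    namespace = kind = None
    if ":" in spec:
        namespace, spec = spec.split(":", 1)  # optional namespace prefix
    if "/" in spec:
        kind, spec = spec.split("/", 1)       # optional kind prefix
    if not spec:
        raise ValueError("resource name is required")
    return {"namespace": namespace, "kind": kind, "name": spec}

# The examples above:
assert parse_resource_spec("pod/etcd-0") == {"namespace": None, "kind": "pod", "name": "etcd-0"}
assert parse_resource_spec("openshift-etcd:pod/etcd-0")["namespace"] == "openshift-etcd"
assert parse_resource_spec("etcd-0") == {"namespace": None, "kind": None, "name": "etcd-0"}
```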
Check for existing artifacts first
- If the `.work/prow-job-analyze-resource/{build_id}/logs/` directory exists and has content, reuse it
- To force a fresh download: `rm -rf .work/prow-job-analyze-resource/{build_id}/logs/` and `rm -rf .work/prow-job-analyze-resource/{build_id}/tmp/`
Create directory structure
mkdir -p .work/prow-job-analyze-resource/{build_id}/logs
mkdir -p .work/prow-job-analyze-resource/{build_id}/tmp
- Use `.work/prow-job-analyze-resource/` as the base directory (already in .gitignore)
- `logs/` subdirectory for all downloads
- `tmp/` subdirectory for temporary files (intermediate JSON, etc.)
- Everything lives under `.work/prow-job-analyze-resource/{build_id}/`
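A sketch of the cache check and directory setup described above, using only the standard library (`force_refresh` is an assumed knob covering the rm -rf case):

```python
import shutil
from pathlib import Path

def prepare_workdir(build_id: str, force_refresh: bool = False) -> Path:
    """Create or reuse the per-build working directory layout."""
    base = Path(".work/prow-job-analyze-resource") / build_id
    if force_refresh:
        # Equivalent to the rm -rf commands above
        shutil.rmtree(base / "logs", ignore_errors=True)
        shutil.rmtree(base / "tmp", ignore_errors=True)
    (base / "logs").mkdir(parents=True, exist_ok=True)
    (base / "tmp").mkdir(parents=True, exist_ok=True)
    return base

def has_cached_artifacts(build_id: str) -> bool:
    """True if a previous run already downloaded logs for this build."""
    logs = Path(".work/prow-job-analyze-resource") / build_id / "logs"
    return logs.is_dir() and any(logs.iterdir())
```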
Download prowjob.json

gcloud storage cp gs://test-platform-results/{bucket-path}/prowjob.json .work/prow-job-analyze-resource/{build_id}/logs/prowjob.json --no-user-output-enabled
Parse and validate
- Read `.work/prow-job-analyze-resource/{build_id}/logs/prowjob.json`
- Match `--target=([a-zA-Z0-9-]+)` to extract the target name (e.g., `e2e-aws-ovn`)
Construct gather-extra paths
- Source: `gs://test-platform-results/{bucket-path}/artifacts/{target}/gather-extra/`
- Destination: `.work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/`
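A sketch of the target extraction, applying the document's regex to the raw downloaded file (assuming prowjob.json is already at the path above):

```python
import re
from pathlib import Path

def extract_target(build_id: str) -> str:
    """Pull the ci-operator --target argument out of prowjob.json."""
    path = Path(f".work/prow-job-analyze-resource/{build_id}/logs/prowjob.json")
    m = re.search(r"--target=([a-zA-Z0-9-]+)", path.read_text())
    if not m:
        raise ValueError("no --target found; likely not a ci-operator job")
    return m.group(1)  # e.g., "e2e-aws-ovn"
```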
Download audit logs

mkdir -p .work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/audit_logs
gcloud storage cp -r gs://test-platform-results/{bucket-path}/artifacts/{target}/gather-extra/artifacts/audit_logs/ .work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/audit_logs/ --no-user-output-enabled
Use --no-user-output-enabled to suppress progress output.

Download pod logs
mkdir -p .work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/pods
gcloud storage cp -r gs://test-platform-results/{bucket-path}/artifacts/{target}/gather-extra/artifacts/pods/ .work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/pods/ --no-user-output-enabled
Use --no-user-output-enabled to suppress progress output.

IMPORTANT: Use the provided Python script parse_all_logs.py from the skill directory to parse both audit logs and pod logs efficiently.
Usage:
python3 plugins/prow-job/skills/prow-job-analyze-resource/parse_all_logs.py <resource_pattern> \
.work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/audit_logs \
.work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/pods \
> .work/prow-job-analyze-resource/{build_id}/tmp/all_entries.json
Resource Pattern Parameter:
- The `<resource_pattern>` parameter supports regex patterns
- Use `|` (pipe) to search for multiple resources: `resource1|resource2|resource3`
- Use `.*` for wildcards: `e2e-test-project-.*`
- Examples: `my-namespace`, `e2e-test-project-api-pkjxf`, `e2e-test-project-api-pkjxf|e2e-test-project-api-7zdxx`, `e2e-test-project-api-.*`

Note: The script outputs status messages to stderr which will display as progress. The JSON output to stdout is clean and ready to use.
What the script does:
Find all log files
- `.work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/audit_logs/**/*.log`
- `.work/prow-job-analyze-resource/{build_id}/logs/artifacts/{target}/gather-extra/artifacts/pods/**/*.log`
Parse audit log files (JSONL format)
- Each line is a JSON object, referred to as `e` below
Extract fields from each audit log entry
- `e.verb`: action (get, list, create, update, patch, delete, watch)
- `e.user.username`: user making the request
- `e.responseStatus.code`: HTTP response code (integer)
- `e.objectRef.namespace`: namespace (if namespaced)
- `e.objectRef.resource`: lowercase plural kind (e.g., "pods", "configmaps")
- `e.objectRef.name`: resource name
- `e.requestReceivedTimestamp`: ISO 8601 timestamp
Filter matches for each resource spec
- Match the pattern against `e.objectRef.namespace` and `e.objectRef.name`
- `resource1|resource2` matches either resource
- `e2e-test-.*` matches all resources starting with e2e-test-
- `[abc]` matches a, b, or c
For each audit log match, capture
- The HTTP response code from `e.responseStatus.code`
- `e.requestReceivedTimestamp`, converted to a datetime
- A summary line `{verb} {resource}/{name} in {namespace} by {username} → HTTP {code}`, e.g. `create pod/etcd-0 in openshift-etcd by system:serviceaccount:kube-system:deployment-controller → HTTP 201`
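A condensed sketch of what parse_all_logs.py does per audit line (the field paths follow the list above; the exact record shape the real script emits may differ):

```python
import json
import re
from datetime import datetime
from typing import Optional

def parse_audit_line(line: str, pattern: str) -> Optional[dict]:
    """Parse one JSONL audit entry; return a normalized record if it matches."""
    try:
        e = json.loads(line)
    except json.JSONDecodeError:
        return None  # skip malformed lines
    ref = e.get("objectRef") or {}
    name, namespace = ref.get("name", ""), ref.get("namespace", "")
    # The pattern is a regex, so "a|b" and "prefix-.*" both work here
    if not (re.search(pattern, name) or re.search(pattern, namespace)):
        return None
    code = (e.get("responseStatus") or {}).get("code")
    user = (e.get("user") or {}).get("username")
    ts = datetime.fromisoformat(e["requestReceivedTimestamp"].replace("Z", "+00:00"))
    summary = f'{e.get("verb")} {ref.get("resource")}/{name} in {namespace} by {user} → HTTP {code}'
    return {"timestamp": ts, "level": "info", "summary": summary, "content": line}
```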
Parse pod log files (plain text format)
For each pod log match, capture
- The severity from the glog prefix `E0910 11:43:41.153414 ...` (E=error, W=warn, I=info, F=fatal→error)
- The timestamp, normalized to `YYYY-MM-DDTHH:MM:SS.mmmmmmZ`
Combine and sort all entries
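Combining is then just a merge and a chronological sort, so the report interleaves both sources on one timeline; a sketch:

```python
from datetime import datetime, timezone

def combine_entries(audit_entries: list, pod_entries: list) -> list:
    """Interleave audit-log and pod-log records chronologically."""
    return sorted(audit_entries + pod_entries, key=lambda e: e["timestamp"])

# Records from either source sort purely by timestamp:
a = {"timestamp": datetime(2024, 9, 10, 11, 43, 41, tzinfo=timezone.utc), "summary": "pod log"}
b = {"timestamp": datetime(2024, 9, 10, 11, 43, 40, tzinfo=timezone.utc), "summary": "audit"}
assert [e["summary"] for e in combine_entries([b], [a])] == ["audit", "pod log"]
```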
IMPORTANT: Use the provided Python script generate_html_report.py from the skill directory.
Usage:
python3 plugins/prow-job/skills/prow-job-analyze-resource/generate_html_report.py \
.work/prow-job-analyze-resource/{build_id}/tmp/all_entries.json \
"{prowjob_name}" \
"{build_id}" \
"{target}" \
"{resource_pattern}" \
"{gcsweb_url}"
Resource Pattern Parameter:
- `{resource_pattern}` should be the same pattern used in the parse script
- Examples: `e2e-test-project-api-pkjxf`, `e2e-test-project-api-pkjxf|e2e-test-project-api-7zdxx`

Output: The script generates `.work/prow-job-analyze-resource/{build_id}/{first_resource_name}.html`
What the script does:
Determine report filename
- `.work/prow-job-analyze-resource/{build_id}/{resource_name}.html`
Sort all entries by timestamp
Calculate timeline bounds
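The timeline maps each timestamp to a horizontal percentage between the earliest and latest entry; a sketch of the calculation assumed by the `{position}%` attribute in the SVG below:

```python
from datetime import datetime

def timeline_position(ts: datetime, min_ts: datetime, max_ts: datetime) -> float:
    """Map a timestamp to a 0-100 x-position (percent) on the SVG timeline."""
    span = (max_ts - min_ts).total_seconds()
    if span == 0:
        return 50.0  # degenerate one-instant timeline: center the lone event
    return 100.0 * (ts - min_ts).total_seconds() / span
```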
Generate HTML structure
Header Section:
<div class="header">
<h1>Prow Job Resource Lifecycle Analysis</h1>
<div class="metadata">
<p><strong>Prow Job:</strong> {prowjob-name}</p>
<p><strong>Build ID:</strong> {build_id}</p>
<p><strong>gcsweb URL:</strong> <a href="{original-url}">{original-url}</a></p>
<p><strong>Target:</strong> {target}</p>
<p><strong>Resources:</strong> {resource-list}</p>
<p><strong>Total Entries:</strong> {count}</p>
<p><strong>Time Range:</strong> {min_time} to {max_time}</p>
</div>
</div>
Interactive Timeline:
<div class="timeline-container">
<svg id="timeline" width="100%" height="100">
<!-- For each entry, render colored vertical line -->
<line x1="{position}%" y1="0" x2="{position}%" y2="100"
stroke="{color}" stroke-width="2"
class="timeline-event" data-entry-id="{entry-id}"
title="{summary}">
</line>
</svg>
</div>
Log Entries Section:
<div class="entries">
<div class="filters">
<!-- Filter controls: by level, by resource, by time range -->
</div>
<div class="entry" id="entry-{index}">
<div class="entry-header">
<span class="timestamp">{formatted-timestamp}</span>
<span class="level badge-{level}">{level}</span>
<span class="source">{filename}:{line-number}</span>
</div>
<div class="entry-summary">{summary}</div>
<details class="entry-details">
<summary>Show full content</summary>
<pre><code>{content}</code></pre>
</details>
</div>
</div>
CSS Styling:
JavaScript Interactivity:
// Timeline click handler
document.querySelectorAll('.timeline-event').forEach(el => {
el.addEventListener('click', () => {
const entryId = el.dataset.entryId;
document.getElementById(entryId).scrollIntoView({behavior: 'smooth'});
});
});
// Filter controls
// Expand/collapse details
// Search within entries
Write HTML to file
- `.work/prow-job-analyze-resource/{build_id}/{resource_name}.html`

Display summary
Resource Lifecycle Analysis Complete
Prow Job: {prowjob-name}
Build ID: {build_id}
Target: {target}
Resources Analyzed:
- {resource-spec-1}
- {resource-spec-2}
...
Artifacts downloaded to: .work/prow-job-analyze-resource/{build_id}/logs/
Results:
- Audit log entries: {audit-count}
- Pod log entries: {pod-count}
- Total entries: {total-count}
- Time range: {min_time} to {max_time}
Report generated: .work/prow-job-analyze-resource/{build_id}/{resource_name}.html
Open in browser to view interactive timeline and detailed entries.
Open report in browser
- Linux: `xdg-open .work/prow-job-analyze-resource/{build_id}/{resource_name}.html`
- macOS: `open .work/prow-job-analyze-resource/{build_id}/{resource_name}.html`
- Windows: `start .work/prow-job-analyze-resource/{build_id}/{resource_name}.html`
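A portable alternative using only Python's standard library (an assumption; the skill's steps simply shell out to the platform commands above):

```python
import webbrowser
from pathlib import Path

def open_report(report_path: str) -> None:
    """Open the generated HTML report in the default browser on any OS."""
    webbrowser.open(Path(report_path).resolve().as_uri())
```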
Offer next steps
- Artifacts remain cached in `.work/prow-job-analyze-resource/{build_id}/` for faster subsequent searches

Handle these error scenarios gracefully:
- Invalid URL format
- Build ID not found
- gcloud not installed: check with `which gcloud`
- gcloud not authenticated: check with `gcloud auth list`
- No access to bucket
- prowjob.json not found
- Not a ci-operator job
- gather-extra not found
- No matches found
- Timestamp parsing failures
Avoid re-downloading
- Skip downloads when `.work/prow-job-analyze-resource/{build_id}/logs/` already has content
Efficient downloads
- Use `gcloud storage cp -r` for recursive downloads
- Use `--no-user-output-enabled` to suppress verbose output
- Run `mkdir -p` before downloading to avoid gcloud errors
Memory efficiency
- The `parse_all_logs.py` script processes log files incrementally (line by line)
Content length limits
Progress indicators
User: "Analyze e2e-test-project-api-p28m in this Prow job: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-release-master-okd-scos-4.20-e2e-aws-ovn-techpreview/1964725888612306944"
Output:
- Downloads artifacts to: .work/prow-job-analyze-resource/1964725888612306944/logs/
- Finds actual resource name: e2e-test-project-api-p28mx (namespace)
- Parses 382 audit log entries
- Finds 86 pod log mentions
- Creates: .work/prow-job-analyze-resource/1964725888612306944/e2e-test-project-api-p28mx.html
- Shows timeline from creation (18:11:02) to deletion (18:17:32)
User: "Analyze pod/etcd-0 in this Prow job: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/30393/pull-ci-openshift-origin-main-okd-scos-e2e-aws-ovn/1978913325970362368/"
Output:
- Creates: .work/prow-job-analyze-resource/1978913325970362368/etcd-0.html
- Shows timeline of all pod/etcd-0 events across namespaces
User: "Find all resources named cluster-version-operator in job {url}"
Output:
- Searches without kind filter
- Finds deployments, pods, services, etc. all named cluster-version-operator
- Creates: .work/prow-job-analyze-resource/{build_id}/cluster-version-operator.html
User: "Analyze e2e-test-project-api-pkjxf and e2e-test-project-api-7zdxx in job {url}"
Output:
- Uses regex pattern: `e2e-test-project-api-pkjxf|e2e-test-project-api-7zdxx`
- Finds all events for both namespaces in a single pass
- Parses 1,047 total entries (501 for first namespace, 546 for second)
- Passes the same pattern to generate_html_report.py
- HTML displays: "Resources: e2e-test-project-api-7zdxx, e2e-test-project-api-pkjxf"
- Creates: .work/prow-job-analyze-resource/{build_id}/e2e-test-project-api-pkjxf.html
- Timeline shows interleaved events from both namespaces chronologically
- Use the `.work/prow-job-analyze-resource/{build_id}/` directory structure for organization
- Work under `.work/`, which is already in .gitignore
- Cache artifacts in `.work/prow-job-analyze-resource/{build_id}/` to speed up subsequent searches
- Use `resource1|resource2` to search for multiple resources in a single pass
- Use `.*` wildcards to match resource name patterns
- The user may say `e2e-test-project-api-p28m` when the actual resource is `e2e-test-project-api-p28mx`; use `e2e-test-project-api-p28m.*` to find partial matches
- A single pattern can match both namespace and project resources

Resource Name Matching:
- Use pipe (`|`) to search for multiple resources: `resource1|resource2`
- Use wildcards (`.*`) for pattern matching: `e2e-test-.*`

Namespace vs Project:
- In OpenShift, a `project` is essentially a namespace with additional metadata

Target Extraction:
- Extract the `--target` argument from prowjob.json

Working with Scripts:
- `plugins/prow-job/skills/prow-job-analyze-resource/parse_all_logs.py`: parses audit logs and pod logs, outputs JSON
- `generate_html_report.py`: generates an interactive HTML report from that JSON

Pod Log Glog Format Support:
E0910 11:43:41.153414 ...
- `E` = severity (E/F → error, W → warn, I → info)
- `0910` = month/day (MMDD)
- `11:43:41.153414` = time with microseconds
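Glog lines omit the year, so converting the prefix to a full timestamp requires assuming one (e.g., from the job's run date); a sketch:

```python
import re
from datetime import datetime, timezone
from typing import Optional, Tuple

GLOG_RE = re.compile(r"^([EWIF])(\d{2})(\d{2}) (\d{2}):(\d{2}):(\d{2})\.(\d{6})")
LEVELS = {"E": "error", "F": "error", "W": "warn", "I": "info"}

def parse_glog_prefix(line: str, year: int) -> Optional[Tuple[str, datetime]]:
    """Return (level, timestamp) parsed from a glog-formatted pod log line."""
    m = GLOG_RE.match(line)
    if not m:
        return None
    sev, month, day, hh, mm, ss, micros = m.groups()
    ts = datetime(year, int(month), int(day), int(hh), int(mm), int(ss),
                  int(micros), tzinfo=timezone.utc)
    return LEVELS[sev], ts

# parse_glog_prefix("E0910 11:43:41.153414 ...", 2025)
# → ("error", datetime(2025, 9, 10, 11, 43, 41, 153414, tzinfo=timezone.utc))
```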