Generate a Tuned (tuned.openshift.io/v1) profile manifest for the Node Tuning Operator
Generates Tuned profile manifests for the OpenShift Node Tuning Operator with sysctl, bootloader, and node targeting options.
/plugin marketplace add openshift-eng/ai-helpers
/plugin install node-tuning@ai-helpers
/node-tuning:generate-tuned-profile [profile-name] [--summary TEXT] [--include VALUE ...] [--sysctl KEY=VALUE ...] [--match-label KEY[=VALUE] ...] [options]
The node-tuning:generate-tuned-profile command streamlines creation of tuned.openshift.io/v1 manifests for the OpenShift Node Tuning Operator. It captures the desired Tuned profile metadata, tuned daemon configuration blocks (e.g. [sysctl], [variables], [bootloader]), and recommendation rules, then invokes the helper script at plugins/node-tuning/skills/scripts/generate_tuned_profile.py to render a ready-to-apply YAML file.
Use this command whenever you need a ready-to-apply Tuned manifest instead of hand-writing the YAML.

The generated manifest follows the structure expected by the cluster Node Tuning Operator:
```yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: <profile-name>
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=...
      include=...
      ...
    name: <profile-name>
  recommend:
  - machineConfigLabels: {...}
    match:
    - label: ...
      value: ...
    priority: <priority>
    profile: <profile-name>
```
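As a sketch of how the `data` block embedded in `spec.profile[].data` is assembled (a minimal illustration under assumed inputs, not the actual generator code; the function name is hypothetical):

```python
def render_data_block(summary, includes=(), sysctls=None):
    # Assemble the tuned daemon configuration that lands in spec.profile[].data
    lines = ["[main]", f"summary={summary}"]
    if includes:
        # include= chains parent profiles such as openshift-node
        lines.append("include=" + ",".join(includes))
    if sysctls:
        lines += ["", "[sysctl]"]
        lines += [f"{key}={value}" for key, value in sysctls.items()]
    return "\n".join(lines)


print(render_data_block(
    "Increase conntrack table",
    includes=["openshift-node"],
    sysctls={"net.netfilter.nf_conntrack_max": "262144"},
))
```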
**Collect inputs**

- Confirm Python 3 is available (`python3 --version`).
- Gather the tuned daemon sections the profile needs (e.g. `[bootloader]`, `[service]`).
- Decide how to target nodes: `--match-label` entries (node labels) or `--machine-config-label` entries (MachineConfigPool selectors).
- Use the `--list-nodes` and `--label-node` flags when you need to inspect or label nodes prior to manifest generation.
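The `--match-label KEY[=VALUE]` semantics can be sketched as follows (a hypothetical helper, shown only to illustrate how an omitted `=VALUE` becomes an existence-only match):

```python
def parse_match_label(arg):
    # KEY[=VALUE]: with no "=", the recommend match checks label existence only
    key, sep, value = arg.partition("=")
    entry = {"label": key}
    if sep:  # "=" present, even when the value is the empty string
        entry["value"] = value
    return entry


print(parse_match_label("tuned.openshift.io/elasticsearch="))
print(parse_match_label("tuned.openshift.io/custom-net"))
```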
**Build execution workspace**

Create `.work/node-tuning/<profile-name>/` for generated artifacts (the default output lands as `tuned.yaml` inside the workspace), or provide `--output` to override.

**Invoke the generator script**
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name "$PROFILE_NAME" \
  --summary "$SUMMARY" \
  --include openshift-node \
  --sysctl net.core.netdev_max_backlog=16384 \
  --variable isolated_cores=1 \
  --section 'bootloader:cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded}' \
  --machine-config-label machineconfiguration.openshift.io/role=worker-rt \
  --match-label tuned.openshift.io/elasticsearch="" \
  --priority 25 \
  --output ".work/node-tuning/$PROFILE_NAME/tuned.yaml"
```

Note the single quotes around the `--section` value: `${not_isolated_cores_expanded}` is a tuned variable and must reach the script unexpanded by the shell. Pass `--dry-run` to print the manifest to stdout before writing, if desired.

**Validate output**
- Inspect the YAML (`yq e . .work/node-tuning/$PROFILE_NAME/tuned.yaml`, or open it in an editor).
- Run `oc apply --dry-run=server -f .work/node-tuning/$PROFILE_NAME/tuned.yaml` to confirm schema compatibility.
**Apply or distribute**

Apply with `oc apply -f .work/node-tuning/$PROFILE_NAME/tuned.yaml`.

Use the following workflow when enabling huge pages or other kernel boot parameters that require coordinating the Node Tuning Operator with the Machine Config Operator while minimizing reboots.
**Label target nodes**
List candidate nodes first if needed: `python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py --list-nodes --node-selector "node-role.kubernetes.io/worker" --skip-manifest`. Then apply the role label:

```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --label-node ip-10-0-1-23.ec2.internal:node-role.kubernetes.io/worker-hp= \
  --overwrite-labels \
  --skip-manifest
```
Run `oc label node <node> node-role.kubernetes.io/worker-hp=` directly if you prefer the CLI.

**Generate the Tuned manifest**
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name "openshift-node-hugepages" \
  --summary "Boot time configuration for hugepages" \
  --include openshift-node \
  --section bootloader:cmdline_openshift_node_hugepages="hugepagesz=2M hugepages=50" \
  --machine-config-label machineconfiguration.openshift.io/role=worker-hp \
  --priority 30 \
  --output .work/node-tuning/openshift-node-hugepages/hugepages-tuned-boottime.yaml
```
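As a quick sanity check on the kernel arguments above, `hugepagesz=2M hugepages=50` pre-allocates a fixed pool per node at boot:

```python
# Boot-time reservation implied by hugepagesz=2M hugepages=50
page_size_mib = 2
page_count = 50
reserved_mib = page_size_mib * page_count
print(f"{reserved_mib}Mi reserved per node")  # 100Mi, surfaced as hugepages-2Mi allocatable
```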
Review the generated `[bootloader]` section to ensure the kernel arguments match the desired configuration (e.g. kernel-rt, huge pages, additional sysctls).

**Author the MachineConfigPool manifest**
Create `.work/node-tuning/openshift-node-hugepages/hugepages-mcp.yaml` with:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-hp
  labels:
    worker-hp: ""
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values:
      - worker
      - worker-hp
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-hp: ""
```
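The pool's selector picks up MachineConfigs labeled for either the base `worker` role or the custom `worker-hp` role. That logic can be modeled in a few lines (a simplified sketch of the `In` operator, not the actual controller code):

```python
def selector_matches(labels, expressions):
    # Simplified matchExpressions evaluation supporting only operator "In"
    return all(
        labels.get(expr["key"]) in expr["values"]
        for expr in expressions
        if expr["operator"] == "In"
    )


expressions = [{
    "key": "machineconfiguration.openshift.io/role",
    "operator": "In",
    "values": ["worker", "worker-hp"],
}]
print(selector_matches({"machineconfiguration.openshift.io/role": "worker-hp"}, expressions))  # True
print(selector_matches({"machineconfiguration.openshift.io/role": "master"}, expressions))     # False
```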
**Apply manifests (optionally with `--dry-run` first)**
- `oc apply -f .work/node-tuning/openshift-node-hugepages/hugepages-tuned-boottime.yaml`
- `oc apply -f .work/node-tuning/openshift-node-hugepages/hugepages-mcp.yaml`
- Watch the pool roll out: `oc get mcp worker-hp -w`

**Verify results**
- `oc get node <node> -o jsonpath="{.status.allocatable.hugepages-2Mi}"`
- `oc debug node/<node> -q -- chroot /host cat /proc/cmdline`

The script fails fast on invalid input (e.g. a malformed `KEY=VALUE` pair, missing labels, or an unwritable output path).
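That kind of fail-fast input checking can be sketched like this (a hypothetical validator for illustration, not the script's actual code):

```python
def require_kv(arg, flag="--sysctl"):
    # Reject a malformed KEY=VALUE pair before any manifest is written
    key, sep, value = arg.partition("=")
    if not sep or not key:
        raise ValueError(f"{flag}: expected KEY=VALUE, got {arg!r}")
    return key, value


print(require_kv("kernel.pid_max=131072"))  # ('kernel.pid_max', '131072')
```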
**Realtime worker profile targeting worker-rt MCP**

```
/node-tuning:generate-tuned-profile openshift-realtime \
  --summary "Custom realtime tuned profile" \
  --include openshift-node --include realtime \
  --variable isolated_cores=1 \
  --section bootloader:cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded} \
  --machine-config-label machineconfiguration.openshift.io/role=worker-rt \
  --output .work/node-tuning/openshift-realtime/realtime.yaml
```
**Sysctl-only profile matched by node label**

```
/node-tuning:generate-tuned-profile custom-net-tuned \
  --summary "Increase conntrack table" \
  --sysctl net.netfilter.nf_conntrack_max=262144 \
  --match-label tuned.openshift.io/custom-net \
  --priority 18
```
**Preview manifest without writing to disk**

```
/node-tuning:generate-tuned-profile pidmax-test \
  --summary "Raise pid max" \
  --sysctl kernel.pid_max=131072 \
  --match-label tuned.openshift.io/pidmax="" \
  --dry-run
```
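Conceptually, the operator applies the matching `recommend` entry with the highest priority, where a lower `priority` number wins (mirroring tuned's recommend ordering). A simplified model, with illustrative profile names and priority values:

```python
def label_match(node_labels, match):
    # An entry matches when the label exists and, if a value is set, equals it
    if match["label"] not in node_labels:
        return False
    return "value" not in match or node_labels[match["label"]] == match["value"]


def pick_profile(recommend, node_labels):
    # Lower priority number wins; an empty match list always matches
    hits = [r for r in recommend
            if all(label_match(node_labels, m) for m in r.get("match", []))]
    return min(hits, key=lambda r: r["priority"])["profile"] if hits else None


recommend = [
    {"profile": "custom-net-tuned", "priority": 18,
     "match": [{"label": "tuned.openshift.io/custom-net"}]},
    {"profile": "openshift-node", "priority": 40, "match": []},
]
print(pick_profile(recommend, {"tuned.openshift.io/custom-net": ""}))  # custom-net-tuned
```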
**Arguments and options**

- `profile-name` (positional): Name for the Tuned profile and manifest resource.
- `--summary TEXT`: Summary line for the profile's `[main]` section.
- `--include VALUE`: Profile to include from the `[main]` section; repeatable.
- `--variable KEY=VALUE`: Entry for the `[variables]` section; repeatable.
- `--sysctl KEY=VALUE`: Entry for the `[sysctl]` section; repeatable.
- `--section SECTION:KEY=VALUE`: Entry for an arbitrary tuned section.
- `--machine-config-label key=value`: Label applied under `machineConfigLabels` in the recommend entry.
- `--match-label KEY[=VALUE]`: Entry for the `recommend[].match[]` block; omit `=value` to match existence only.
- `--priority N`: Priority for the recommend entry.
- `--output PATH`: Output path (defaults to `<profile-name>.yaml` in the current directory).
- `--dry-run`: Print the manifest to stdout instead of writing it.
- `--list-nodes`: List nodes via `oc get nodes` (works with `--node-selector`).
- `--node-selector SELECTOR`: Label selector applied when `--list-nodes` is used.
- `--label-node NODE:KEY[=VALUE]`: Label a node using `NODE:KEY[=VALUE]` notation; repeatable.
- `--overwrite-labels`: Allow overwriting existing node labels.
- `--skip-manifest`: Perform node operations without generating a manifest.

Other options cover extra `[main]` section key/value pairs (`KEY=VALUE`), the manifest namespace (defaults to `openshift-cluster-node-tuning-operator`), and the path to the `oc` executable (defaults to `$OC_BIN` or `oc`).