Ensure that the --make-iptables-util-chains argument is set to true

LOW

Description

Description:

Allow Kubelet to manage iptables.

Rationale:

Kubelets can automatically manage the required changes to iptables based on the networking options you choose for the pods. It is recommended to let the kubelet manage these changes, which ensures that the iptables configuration stays in sync with the pod networking configuration. Manually configuring iptables while the pod network configuration changes dynamically can hamper communication between pods/containers and with the outside world, and you may end up with iptables rules that are too restrictive or too open.

The kubelet will manage iptables on the system and keep them in sync. If you are using any other iptables management solution, there may be conflicts.

Remediation

Remediation Method 1:

If modifying the Kubelet config file, edit '/etc/kubernetes/kubelet/kubelet-config.json' and set the following parameter to true:

"makeIPTablesUtilChains": true

Ensure that '/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf' does not set the '--make-iptables-util-chains' argument because that would override your Kubelet config file.
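For reference, this is roughly what the setting looks like in context in the Kubelet config file; the 'kind' and 'apiVersion' fields shown are standard for a KubeletConfiguration object, but the other fields in your file will differ by environment:

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "makeIPTablesUtilChains": true
}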

Remediation Method 2:

If using executable arguments, edit the kubelet service drop-in file '/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf' on each worker node and add the following parameter at the end of the 'KUBELET_ARGS' variable string:

--make-iptables-util-chains=true
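As a rough sketch, the drop-in might look like the following after the change; the other arguments in 'KUBELET_ARGS' (the '--node-ip' value here is only an illustrative placeholder) will vary by node and AMI:

[Service]
Environment='KUBELET_ARGS=--node-ip=10.0.0.12 --make-iptables-util-chains=true'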

Remediation Method 3:

If using the API configz endpoint, consider searching for the status of '"makeIPTablesUtilChains": true' by extracting the live configuration from the nodes running kubelet.

See detailed step-by-step ConfigMap procedures in Reconfigure a Node's Kubelet in a Live Cluster, and then rerun the curl statement from the audit process to check for kubelet configuration changes:

kubectl proxy --port=8001 &

export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")

curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
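To confirm the value without reading the whole response, you can filter the configz output; this assumes 'jq' is available and that the payload nests the settings under a 'kubeletconfig' key, which is the typical shape of the configz response:

curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig.makeIPTablesUtilChains'

The command should print 'true' once the remediation is in place.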

For all three remediations:
Based on your system, restart the 'kubelet' service and check its status:

systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
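As an optional final check, you can confirm the argument on the running kubelet process; note that this grep-based sketch only catches the command-line flag form and will not show the setting when it comes from the config file (use the configz query above in that case):

ps -ef | grep '[k]ubelet' | grep -o 'make-iptables-util-chains=[a-z]*'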