5.10.4 Consider GKE Sandbox for running untrusted workloads

Warning! Audit Deprecated

This audit has been deprecated and will be removed in a future update.

Information

Use GKE Sandbox to restrict untrusted workloads as an additional layer of protection when running in a multi-tenant environment.

Rationale:

GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes.

When you enable GKE Sandbox on a Node pool, a sandbox is created for each Pod running on a node in that Node pool. In addition, nodes running sandboxed Pods are prevented from accessing other GCP services or cluster metadata. Each sandbox uses its own userspace kernel.
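
Deploying a workload into the sandbox is done from the Pod specification. The snippet below is a minimal sketch, assuming a cluster that already has a sandboxed Node pool and following the GKE documentation's use of the gvisor RuntimeClass; the Pod name and image are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example      # placeholder name
spec:
  runtimeClassName: gvisor     # requests the GKE Sandbox (gVisor) runtime
  containers:
  - name: app
    image: nginx               # placeholder image
EOF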

Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other organizations that allow their users to upload and run code. A flaw in the container runtime or in the host kernel could allow a process running within a container to 'escape' the container and affect the node's kernel, potentially bringing down the node.

The potential also exists for a malicious tenant to gain access to and exfiltrate another tenant's data in memory or on disk, by exploiting such a defect.

Impact:

Using GKE Sandbox requires the node image to be set to Container-Optimized OS with containerd (cos_containerd).

It is not currently possible to use GKE Sandbox along with the following Kubernetes features:

Accelerators such as GPUs or TPUs

Istio

Monitoring statistics at the level of the Pod or container

HostPath storage

Per-container PID namespace

CPU and memory limits are only applied for Guaranteed Pods and Burstable Pods, and only when CPU and memory limits are specified for all containers running in the Pod

Pods using PodSecurityPolicies that specify host namespaces, such as hostNetwork, hostPID, or hostIPC

Pods using PodSecurityPolicy settings such as privileged mode

VolumeDevices

Port forwarding

Linux kernel security modules such as Seccomp, AppArmor, or SELinux

Sysctl, NoNewPrivileges, bidirectional MountPropagation, FSGroup, or ProcMount

Solution

GKE Sandbox cannot be enabled on an existing node pool; a new node pool must be created with the sandbox enabled. The default node pool (the first node pool in the cluster, created when the cluster is created) cannot use GKE Sandbox.
Using Google Cloud Console

Go to Kubernetes Engine by visiting https://console.cloud.google.com/kubernetes/

Click ADD NODE POOL

Configure the Node pool with the following settings:

For the node version, select v1.12.6-gke.8 or higher

For the node image, select 'Container-Optimized OS with Containerd (cos_containerd) (beta)'

Under 'Security', select 'Enable sandbox with gVisor'

Configure other Node pool settings as required

Click SAVE.

Using Command Line
To enable GKE Sandbox on an existing cluster, a new Node pool must be created.

gcloud container node-pools create [NODE_POOL_NAME] \
--zone=[COMPUTE-ZONE] \
--cluster=[CLUSTER_NAME] \
--image-type=cos_containerd \
--sandbox type=gvisor
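
To confirm that the new Node pool was created with the sandbox enabled, it can be inspected with gcloud. This is a sketch; the field path config.sandboxConfig.type is assumed from the GKE NodePool API and is expected to report gvisor (GVISOR) for a sandboxed Node pool:

gcloud container node-pools describe [NODE_POOL_NAME] \
--zone=[COMPUTE-ZONE] \
--cluster=[CLUSTER_NAME] \
--format="value(config.sandboxConfig.type)"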

Default Value:

By default, GKE Sandbox is disabled.
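
To review the current state across a cluster, the Node pools and their sandbox configuration can be listed. The sketch below makes the same assumption about the config.sandboxConfig.type field; an empty value indicates that GKE Sandbox is not enabled for that Node pool:

gcloud container node-pools list \
--zone=[COMPUTE-ZONE] \
--cluster=[CLUSTER_NAME] \
--format="table(name, config.sandboxConfig.type)"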

See Also

https://workbench.cisecurity.org/files/4135