Kubernetes Node Scanner

Overview for Deploying and Configuring the Kubernetes Node Scanner

Section 4.1 of the CIS Benchmark (version 1.6.0) for Kubernetes includes a set of security checks for Kubernetes clusters that requires access to information on the file system of Kubernetes cluster nodes. To meet this requirement, a thin Rapid7 Node Scanner must be deployed as a daemonset. This scanner is particularly important for manually managed Kubernetes deployments, which are more prone to deployment mistakes than clusters managed by a cloud provider. The guide below walks you through the installation of the scanner, along with the required configurations and steps.

Prerequisites

  • This scanner is relevant for standard Kubernetes clusters and has been tested on AWS' Elastic Kubernetes Service (EKS), GCP's Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS)
    • It is not, however, intended for clusters with fully managed infrastructure, such as AWS' EKS Fargate and GKE Autopilot
  • A Kubernetes cluster that was configured according to the Kubernetes Local Scanner or Kubernetes Remote Scanner documentation
  • Standard clusters (see the first bullet) that are scanned by a local Kubernetes Scanner require the following configuration to be added to the helm command (the default is false); see the example after this list:
    --set Config.HasNodeFsAccess=true
    
  • Save the Node Scanner YAML code as a file named r7-node-scan.yaml and ensure it's available to the cluster you're attempting to configure
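
For illustration only, adding the flag to an existing helm command might look like the following sketch. The release, chart, and namespace names are placeholders and not part of this document; use the exact helm command from the Kubernetes Local Scanner documentation and append the --set flag to it:

  helm upgrade --install <release-name> <rapid7-scanner-chart> \
    --namespace <scanner-namespace> \
    --set Config.HasNodeFsAccess=true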

Node Scanner YAML

📘

Deployment Limitations?

Take into consideration any cluster deployment limitations, such as node affinity, that might require changes to the Node Scanner YAML in order to allow the daemonset pods to run on all nodes.
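
For example, if some of your nodes carry a custom taint, you might add a toleration such as the following to the daemonset's pod spec (the taint key and value below are placeholders, not part of the shipped YAML):

  tolerations:
    - key: example.com/dedicated     # placeholder: taint key used on your nodes
      operator: Equal
      value: special-workloads       # placeholder: taint value used on your nodes
      effect: NoSchedule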

# ****** IMPORTANT: read this ******
# First, prepare a secret in the SAME NAMESPACE in which the daemonset will be deployed:
#   kubectl -n same-namespace-of-daemonset create secret generic r7-node-scanner-key --from-literal=encKey=$(openssl rand -hex 32)
# If the secret is updated, make sure to restart the pods

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: r7-node-scanner
spec:
  selector:
    matchLabels:
      app: r7-node-scanner
  template:
    metadata:
      labels:
        app: r7-node-scanner
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      ### runAsUser is only possible if K8S config files are visible to that user ##
      containers:
        # Do not modify this container's name. r7-k8s-scanner will be looking for it by name.
        - name: r7-node-scanner
          image: alpine
          command: ["/bin/sh", "-c"]
          args:
            - |
              apk add openssl

              cat > /workdir/r7-node-scanner.sh << "EOF"
              #!/bin/sh
              
              # POSIX_LINE_MAX is the minimum value that the POSIX standard requires for LINE_MAX (maximum line length)
              POSIX_LINE_MAX=2048
              OUTPUT_VERSION=v0.0.1
              
              WORK_DIR=/workdir
              RESULTS_FILE=scan.json-lines
              FILE_NAMES=file_names.list
              
              ENC_KEY=$(cat /mnt/enc-key/encKey)
              ENC_IV=$(openssl rand -hex 16)
              if [[ "$ENC_KEY" == "" ]]; then
                echo "ENC_KEY is empty"
                exit 1
              fi
              if [[ "$ENC_IV" == "" ]]; then
                echo "ENC_IV is empty"
                exit 1
              fi
              
              while true; do
                START_TIME=$(date +%s)
              
                mkdir -p $WORK_DIR
                cd $WORK_DIR
                rm -f $RESULTS_FILE
                rm -f $FILE_NAMES

                HOST_FS_ROOT=/mnt/host-fs
              
                # collect kube-related config files from the node's /etc and /var, skipping noisy paths (fluent, logs, per-pod kubelet volumes, CNI state)
                chroot $HOST_FS_ROOT find /etc /var -maxdepth 7 -type f | grep kube | grep -v "fluent\|/var/log\|lib/kubelet/pods\|/var/lib/cni"   >> $FILE_NAMES
              
                for f in $(cat $FILE_NAMES); do
                  # save ps output containing the file name; stream directly to the file to avoid an overly long command line
                  printf "{\"psB64\": \"" >> $RESULTS_FILE
                  chroot $HOST_FS_ROOT ps ax | grep -v grep | grep $f | base64 | tr -d '\n' >> $RESULTS_FILE
                  chroot $HOST_FS_ROOT [ -f $f ] && \
                    chroot $HOST_FS_ROOT stat -c "\", \"file\": \"%n\", \"perms\": %a, \"gid\": %g, \"group\": \"%G\", \"uid\": %u, \"user\": \"%U\" }" "$f" >> $RESULTS_FILE
                done
              
                tar czf $RESULTS_FILE.tar.gz $RESULTS_FILE
                FILES_LINE_COUNT=$(wc -l $RESULTS_FILE | awk '{print $1}')
                rm $RESULTS_FILE
              
                TGZ_SIZE=$(ls -l $RESULTS_FILE.tar.gz | awk '{print $5}')
              
                # encrypt the archive with AES-256-CBC using the shared key and the per-run IV,
                # then base64-encode it so it can be emitted safely through the pod logs
                cat $RESULTS_FILE.tar.gz | \
                   openssl enc -e -aes-256-cbc -K $ENC_KEY -iv $ENC_IV | \
                   base64 -w $POSIX_LINE_MAX > $RESULTS_FILE.tgz.base64
              
                rm $RESULTS_FILE.tar.gz
                BASE64_LINES_COUNT=$(wc -l $RESULTS_FILE.tgz.base64 | awk '{print $1}')
              
                cat $RESULTS_FILE.tgz.base64
                rm $RESULTS_FILE.tgz.base64
              
                END_TIME=$(date +%s)
                TOTAL_EXEC_SEC=$(( END_TIME - START_TIME ))
              
                SUMMARY=$(echo "{
                  'ver': '$OUTPUT_VERSION',
                  'totalSec': $TOTAL_EXEC_SEC,
                  'filesLineCount': $FILES_LINE_COUNT,
                  'tgzSize': $TGZ_SIZE,
                  'ivHex': '$ENC_IV',
                  'base64LineCount': $BASE64_LINES_COUNT,
                  'nodeName': '$NODE_NAME',
                  'podName': '$POD_NAME'
                }" | tr "'" '"')
              
                echo $SUMMARY
              
                touch $LAST_SUCCESS_FLAG_FILE
                sleep $EXECUTION_INTERVAL_SECONDS
              done
              
              EOF
              
              chmod 550 /workdir/r7-node-scanner.sh
              # run with exec to keep the command line clean (without exec, the entire script preparation would be shown in "ps ax")
              exec /workdir/r7-node-scanner.sh
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: LAST_SUCCESS_FLAG_FILE
              value: "/workdir/last-success-flag-file"
            - name: EXECUTION_INTERVAL_SECONDS
              value: "360"
          volumeMounts:
            - name: host-fs
              mountPath: /mnt/host-fs
              readOnly: true
            - name: workdir
              mountPath: /workdir
            - name: r7-node-scanner-key
              mountPath: /mnt/enc-key
      volumes:
        - name: host-fs
          hostPath:
            path: /
        - name: workdir
          emptyDir: {}
        - name: r7-node-scanner-key
          secret:
            secretName: r7-node-scanner-key
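
Each scanner pod archives its per-node results, encrypts the archive with AES-256-CBC using the key from the r7-node-scanner-key secret and a random per-run IV, base64-encodes it, and prints it to stdout followed by a JSON summary line that includes the IV (ivHex). The cluster's configured scanner collects and decrypts this output automatically; purely for illustration, a manual decryption could look like the sketch below. It assumes the base64 payload lines from a pod log were saved to scan.b64 and that IV_HEX was copied from the summary line (these file and variable names are illustrative only):

  # read the shared key from the secret created for the Node Scanner
  ENC_KEY=$(kubectl -n r7-node-scan get secret r7-node-scanner-key \
    -o jsonpath='{.data.encKey}' | base64 -d)
  # decode, decrypt, and unpack the archive to recover scan.json-lines
  base64 -d scan.b64 | \
    openssl enc -d -aes-256-cbc -K "$ENC_KEY" -iv "$IV_HEX" | \
    tar xzf -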

Installation

🚧

Cluster Administrator Required

Due to its required permissions, the installation needs to be performed locally on each cluster by the cluster administrator.

  1. Create a namespace of your choice for the Rapid7 Node Scanner. For this guide, we used r7-node-scan as an example:

    kubectl create namespace r7-node-scan
    
  2. Create a secret to encrypt the scanner data. This secret will be used by the cluster's configured scanner (local or remote) to decrypt and consume the scan results. Anyone with access to this secret will be able to read the scan results. Ensure that the openssl command is installed, then create the secret with the following command:

    kubectl -n r7-node-scan create secret generic r7-node-scanner-key --from-literal=encKey=$(openssl rand -hex 32)
    
  3. After ensuring the Node Scanner YAML file is available to the cluster, deploy the Rapid7 Node Scanner by applying the file using the following command:

🚧

Isolated Environment

The Node Scanner runs apk add openssl on startup, which requires access to the internet. In isolated environments, you can replace the image with your own Alpine image that has openssl pre-installed.
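
For example, an air-gapped variant could be built from a minimal Dockerfile like the one below (the base image tag is a placeholder). Push the resulting image to a registry the cluster can reach, update the image: field in the daemonset YAML, and optionally remove the apk add openssl line from the container args:

  # hypothetical Dockerfile for an air-gapped Node Scanner image
  FROM alpine:3.19
  RUN apk add --no-cache openssl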

kubectl -n r7-node-scan apply -f r7-node-scan.yaml
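
After applying the file, you can verify that the daemonset exists and that its pods are running on every node (the namespace and daemonset name below match the examples in this guide):

kubectl -n r7-node-scan get daemonset r7-node-scanner
kubectl -n r7-node-scan get pods -o wide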

Related Insights

To complement the Rapid7 Kubernetes Node Scanner, several Insights are available to check compliance throughout your cloud environment(s). Once the Node Scanner is up and running, you can use the Ensure that r7-node-scanner is deployed Insight (introduced in InsightCloudSec version 23.7.25) to see which clusters have (or do not have) the Node Scanner deployed. The CIS - Kubernetes 1.6.0 Compliance Pack has also been updated to feature the following Insights (which require the Node Scanner to be deployed):

  • Ensure that the kubelet service file permissions are set to 644 or more restrictive
  • Ensure that the kubelet service file ownership is set to root:root
  • If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive
  • If proxy kubeconfig file exists ensure ownership is set to root:root
  • Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive
  • Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root
  • Ensure that the certificate authorities file permissions are set to 644 or more restrictive
  • Ensure that the client certificate authorities file ownership is set to root:root
  • Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive
  • Ensure that the kubelet --config configuration file ownership is set to root:root
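
The data for these checks comes from the per-file JSON lines collected by the Node Scanner (see the stat format string in the script above). For illustration, a line such as the following (with made-up values) would satisfy both the permission and ownership checks for the kubelet kubeconfig file; the psB64 field carries the base64-encoded ps output that references the file:

  {"psB64": "...", "file": "/etc/kubernetes/kubelet.conf", "perms": 600, "gid": 0, "group": "root", "uid": 0, "user": "root" }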