While Kubernetes’ Role-based access control (RBAC) authorization model is an essential part of securing Kubernetes, managing it has proven to be a significant challenge — especially when dealing with numerous users and pods. Fortunately, KubiScan is here to help address this issue.
KubiScan is our open-source tool for scanning Kubernetes clusters for “risky permissions” that can lead to security issues (e.g., privilege escalation, information leaking, etc.) in the Kubernetes RBAC authorization model.
We recently added a new offline feature to KubiScan that lets security teams scan their environments without running a community open-source tool on a production setup, so we figured this is a great opportunity to dedicate a blog post solely to KubiScan.
While in previous publications we’ve mentioned KubiScan’s applications for detecting risky permissions and detailed some of its features, this post starts fresh and provides a hands-on tutorial for auditing your systems. After you have completed this tutorial, you will be able to identify risky permissions and improve your security posture.
What Can KubiScan Do?
- Find risky entities such as: Roles\ClusterRoles, Subjects (Users, Groups and ServiceAccounts), RoleBindings\ClusterRoleBindings, Pods\Containers
- Dump tokens from pods, either all of them or by namespace
- Get the RoleBindings\ClusterRoleBindings associated with a Role, ClusterRole or Subject (user, group or service account)
- List Subjects of a specific kind (User, Group or ServiceAccount) and the rules of a RoleBinding or ClusterRoleBinding
- Show Pods that have access to secret data through a volume or environment variables
- Get bootstrap tokens for the cluster
- CVE scan
- EKS\AKS\GKE support
A Quick Setup
In the interest of meeting the seven-minute cutoff, we’ve created a Killercoda environment for this tutorial (you will need to log in using your Google, GitLab or GitHub account). This environment produces the same results you’ll see in this tutorial. (Note: we trimmed the output to the relevant lines to keep this article “clean”.)
To install and use KubiScan in your own environment, clone the repository; there are then two ways to run it:
1. Running KubiScan Directly with Python3
Prerequisites:
- Python 3.6 (or newer)
- Pip3
- Kubernetes Python Client
- Prettytable
- openssl (preinstalled on Ubuntu) – used for join tokens
Run the following commands to install all the relevant libraries after Python 3.6+ is installed:
apt-get update
apt-get install -y python3 python3-pip
pip3 install -r requirements.txt
To run KubiScan:
alias kubiscan='python3 path/to/KubiScan.py'
2. Running KubiScan in a container:
- Make sure you are within the project main directory.
- Run the following:
./docker_run.sh path/to/kube_config_file
For example:
./docker_run.sh ~/.kube/config
With our setup installed, we are ready to start scanning our system.
Hunting for Risks
Risky Pods/Containers
A risky pod is a pod that has a highly privileged subject (an entity such as a user, service account or group) connected to one of its containers.
We are going to try and find risky pods using this command:
$ kubiscan -rp
We received one risky pod at the CRITICAL level:
+----------------+
|Risky Containers|
+----------+---------+-----------+---------------+-------------------------+--------------------+
| Priority | PodName | Namespace | ContainerName | ServiceAccountNamespace | ServiceAccountName |
+----------+---------+-----------+---------------+-------------------------+--------------------+
| CRITICAL | mypod   | default   | c2            | default                 | kubiscan-sa        |
+----------+---------+-----------+---------------+-------------------------+--------------------+
Figure 1 Outputting Risky Containers
Now let’s understand why it was marked as risky.
Risky Subjects
The pod “mypod” was marked as CRITICAL because of a service account named “kubiscan-sa”, but we want to understand why this service account is privileged.
First, let’s see another way that KubiScan can help us find “kubiscan-sa” by listing all the privileged users in the “default” namespace by running the following command:
$ kubiscan -rs -ns default
The command lists all the risky subjects that can be a user, group or service account with high-privilege roles.
We received the list in which one of the entries is “kubiscan-sa”:
+-----------+
|Risky Users|
+----------+----------------+-----------+-------------+
| Priority | Kind           | Namespace | Name        |
+----------+----------------+-----------+-------------+
| CRITICAL | ServiceAccount | default   | kubiscan-sa |
+----------+----------------+-----------+-------------+
Figure 2 Outputting Risky Subjects
Now back to why kubiscan-sa is a privileged service account.
Associated Any Roles Subject
We can check which ClusterRole/Role is associated with the service account “kubiscan-sa” through the following command:
$ kubiscan -aars kubiscan-sa -k serviceaccount
We received the rules associated with kubiscan-sa (i.e., the rules of the roles bound to that subject):
Roles associated to Subject 'kubiscan-sa':
+-------------+-----------+----------------------+---------------------------------------------------+
| Kind        | Namespace | Name                 | Rules                                             |
+-------------+-----------+----------------------+---------------------------------------------------+
| ClusterRole | None      | kubiscan-clusterrole | (get,list)->(roles,clusterroles,rolebindings,     |
|             |           |                      | clusterrolebindings,pods,secrets)                 |
+-------------+-----------+----------------------+---------------------------------------------------+
Figure 3 Associated Rules of Kubiscan-sa
We can clearly see that kubiscan-sa is associated with a ClusterRole named kubiscan-clusterrole that has permission to get and list secrets, which is considered a high privilege. This is also the reason it was marked as CRITICAL earlier (Figure 2) when we looked over the risky subjects.
Recapping what we have done so far:
1. Found a risky Pod
2. Understood which Service Account is connected to the Pod
3. Learned the Rules of the Roles assigned to the Service Account
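Putting the recap together, the setup KubiScan flagged could have been created with manifests along these lines. This is a reconstruction from the scan output above, not the exact manifests used in the demo; the container image is a placeholder:

```yaml
# Reconstructed sketch; names come from the scan output, the image is illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubiscan-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubiscan-clusterrole
rules:
- apiGroups: ["*"]
  resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings", "pods", "secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubiscan-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubiscan-clusterrole
subjects:
- kind: ServiceAccount
  name: kubiscan-sa
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  serviceAccountName: kubiscan-sa   # this binding is what makes container c2 risky
  containers:
  - name: c2
    image: nginx   # placeholder image
```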
Now that we have covered why the pod is considered risky, let’s explore some of the standard scanning functionalities that KubiScan offers, and show how different scans could also have revealed that kubiscan-sa is a risky service account.
Exploring Other Useful Capabilities
Risky Roles and ClusterRoles
We will begin by scanning for all the risky Roles through the following command:
$ kubiscan -rar
We received a list of risky roles and cluster roles. One of them is kubiscan-clusterrole, the same role we previously saw associated with kubiscan-sa, indicating that we could find the risky role by using a different scan:
+----------------------------+
|Risky Roles and ClusterRoles|
+----------+-------------+-----------+----------------------+-----------------------------------+
| Priority | Kind        | Namespace | Name                 | Creation Time                     |
+----------+-------------+-----------+----------------------+-----------------------------------+
| CRITICAL | ClusterRole | None      | kubiscan-clusterrole | Mon Oct 23 12:20:24 2023 (3 days) |
+----------+-------------+-----------+----------------------+-----------------------------------+
Figure 4 Risky Roles and Cluster Roles
We now want to find which RoleBinding/ClusterRoleBinding is bound to the kubiscan-sa service account.
Risky RoleBindings and ClusterRoleBindings
First, we scan for risky RoleBindings/ClusterRoleBindings by running the following command:
$ kubiscan -rab
We received a list of all the risky RoleBindings and ClusterRoleBindings:
kubiscan-clusterrolebinding:
+----------+--------------------+-----------+-----------------------------+------------------------------------+
| Priority | Kind               | Namespace | Name                        | Creation Time                      |
+----------+--------------------+-----------+-----------------------------+------------------------------------+
| CRITICAL | ClusterRoleBinding | None      | kubiscan-clusterrolebinding | Mon Oct 23 12:20:24 2023 (3 days)  |
+----------+--------------------+-----------+-----------------------------+------------------------------------+
Figure 5 Risky RoleBindings and ClusterRoleBindings
We can see that kubiscan-clusterrolebinding is indeed risky and prioritized as CRITICAL. The reason is that this role binding references a role with high-privilege rules. This by itself still doesn’t mean it is bound to kubiscan-sa. To find the role binding associated with kubiscan-sa, we will need to use another command.
Associated Any RoleBindings Subject
To find what role bindings are associated to the service account kubiscan-sa, we will run the following command:
$ kubiscan -aarbs kubiscan-sa -k serviceaccount
It shows us one ClusterRoleBinding named kubiscan-clusterrolebinding:
Associated Rolebindings\ClusterRoleBindings to subject "kubiscan-sa":
+--------------------+-----------------------------+-----------+
| Kind               | Name                        | Namespace |
+--------------------+-----------------------------+-----------+
| ClusterRoleBinding | kubiscan-clusterrolebinding | None      |
+--------------------+-----------------------------+-----------+
Figure 6 Associated Rolebindings\ClusterRoleBindings
Let’s recap this part as well:
1. We’ve found a risky Cluster Role named kubiscan-clusterrole
2. We matched its Cluster Role Binding(s): kubiscan-clusterrolebinding
3. We found the service account kubiscan-sa associated to the cluster role binding kubiscan-clusterrolebinding
We have everything we need. We know what parts are risky in the pods (even down to the specific containers), why they’re risky, which service account is causing trouble and the related ClusterRoleBinding and ClusterRole that are troublesome.
All Scans
Alternatively, we can achieve the same results by scanning for risky roles, bindings, users and containers in a single command:
$ kubiscan -a
We received the list of all the risky roles, bindings, users and pods:
+----------------------------+
|Risky Roles and ClusterRoles|
+----------+-------------+-----------+----------------------+------------------------------------+
| Priority | Kind        | Namespace | Name                 | Creation Time                      |
+----------+-------------+-----------+----------------------+------------------------------------+
| CRITICAL | ClusterRole | None      | kubiscan-clusterrole | Mon Oct 23 12:20:24 2023 (3 days)  |
+----------+-------------+-----------+----------------------+------------------------------------+

+------------------------------------------+
|Risky RoleBindings and ClusterRoleBindings|
+----------+--------------------+-----------+-----------------------------+------------------------------------+
| Priority | Kind               | Namespace | Name                        | Creation Time                      |
+----------+--------------------+-----------+-----------------------------+------------------------------------+
| CRITICAL | ClusterRoleBinding | None      | cluster-admin               | Mon Aug 28 11:11:10 2023 (59 days) |
| CRITICAL | ClusterRoleBinding | None      | kubiscan-clusterrolebinding | Mon Oct 23 12:20:24 2023 (3 days)  |
+----------+--------------------+-----------+-----------------------------+------------------------------------+

+-----------+
|Risky Users|
+----------+----------------+-----------+-------------+
| Priority | Kind           | Namespace | Name        |
+----------+----------------+-----------+-------------+
| CRITICAL | ServiceAccount | default   | kubiscan-sa |
+----------+----------------+-----------+-------------+

+----------------+
|Risky Containers|
+----------+---------+-----------+---------------+-------------------------+--------------------+
| Priority | PodName | Namespace | ContainerName | ServiceAccountNamespace | ServiceAccountName |
+----------+---------+-----------+---------------+-------------------------+--------------------+
| CRITICAL | mypod   | default   | c2            | default                 | kubiscan-sa        |
+----------+---------+-----------+---------------+-------------------------+--------------------+
Figure 7 Outputting All Scans
Privileged Pods
Kubernetes allows pods to be given access to the host. KubiScan can also detect those pods by running the following command:
$ kubiscan -pp
We received a list of privileged pods and their containers:
+---------------------+
|Privileged Containers|
+----------------------+-----------+------------------------------+-----------------------+--------------------+
| Pod                  | Namespace | Pod Spec                     | Container             | Container info     |
+----------------------+-----------+------------------------------+-----------------------+--------------------+
| pod-with-host-access | default   | Volumes:                     | host-access-container | SecurityContext:   |
|                      |           |  -name: host-volume          |                       |  privileged: True  |
|                      |           |   host_path:                 |                       |                    |
|                      |           |    path: /path/on/host       |                       |                    |
|                      |           |    type:                     |                       |                    |
|                      |           |   container_path: /host-data |                       |                    |
+----------------------+-----------+------------------------------+-----------------------+--------------------+
Figure 8 Privileged Pods and Their Containers
In this case, the pod pod-with-host-access has a privileged container named host-access-container. It has the privileged flag set to true, which means that whoever has access to this container can (in most cases) mount the whole file system of the host. Additionally, there are volumes mounted from the host into the container (i.e., /host-data), which an attacker with access to the container can also reach. This is only one example of a privileged pod; there can be others, such as a pod with access to the host network.
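A manifest producing a pod like the one flagged above might look like this (a sketch reconstructed from the scan output; the image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-host-access
  namespace: default
spec:
  containers:
  - name: host-access-container
    image: busybox            # placeholder image
    securityContext:
      privileged: true        # grants near-host-level access
    volumeMounts:
    - name: host-volume
      mountPath: /host-data   # host files visible inside the container
  volumes:
  - name: host-volume
    hostPath:
      path: /path/on/host
```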
Pods Secrets Volume and Environment
Pods might have access to secrets through a volume. In such cases, using a secrets manager is highly recommended. We can scan for those pods and show the paths to their secrets volumes and their names by running the following command:
$ kubiscan -psv
We received two mounts: the first is a default token, but the second is a secret named mysecret that we created:
Pods with access to secret data through volumes:
+----------+-----------+----------------+----------------------------------------------------------------+
| Pod Name | Namespace | Container Name | Volume Mounted Secrets                                         |
+----------+-----------+----------------+----------------------------------------------------------------+
| mypod    | default   | c1             | 1. Mounted path: /var/run/secrets/kubernetes.io/serviceaccount |
|          |           |                |    Secret name: kubiscan-sa2-token-5h2x7                       |
|          |           |                | 2. Mounted path: /var/run/secrets/tokens                       |
|          |           |                |    Secret name: mysecret                                       |
+----------+-----------+----------------+----------------------------------------------------------------+
Figure 9 Pods with Secret Data Through Volumes
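The second mount could be produced by a pod spec along these lines (a partial sketch of the volume-related fields only; the image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  containers:
  - name: c1
    image: busybox   # placeholder image
    volumeMounts:
    - name: secret-volume
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret   # the secret KubiScan reported
```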
Similar to the secrets volume scan, there are pods that can access secrets through environment variables instead of volumes. By running the following command, we can find those pods and secrets:
$ kubiscan -pse
We received the list of secrets that were mounted as environment variables inside the container:
+------------+-----------+----------------+-------------------------------------------+
| Pod Name   | Namespace | Container Name | Environment Mounted Secrets               |
+------------+-----------+----------------+-------------------------------------------+
| my-pod-env | default   | my-container   | 1. Environment variable name: MY_USERNAME |
|            |           |                |    Secret name: my-secret                 |
|            |           |                | 2. Environment variable name: MY_PASSWORD |
|            |           |                |    Secret name: my-secret                 |
+------------+-----------+----------------+-------------------------------------------+
Figure 10 Environment Mounted Secrets
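A pod like my-pod-env could define those environment variables with secretKeyRef entries, as in this sketch (the image and the key names inside my-secret are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-env
  namespace: default
spec:
  containers:
  - name: my-container
    image: busybox   # placeholder image
    env:
    - name: MY_USERNAME
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: username   # assumed key name
    - name: MY_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password   # assumed key name
```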
CVE Scans
KubiScan can also look for CVEs by running the following command:
$ kubiscan -cve
The results here may vary based on your Kubernetes cluster version. In our case, we were running Kubernetes version 1.25.0 and received a list of the CVEs affecting our cluster, along with the versions that fix them:
+----------+-----------+---------------+-------------------------------------------------------------------+---------------+
| Severity | CVE Grade | CVE           | Description                                                       | FixedVersions |
+----------+-----------+---------------+-------------------------------------------------------------------+---------------+
| Medium   | 5.1       | CVE-2022-3172 | A security issue was discovered in kube-apiserver that allows an  | 1.25.1        |
|          |           |               | aggregated API server to redirect client traffic to any URL.      | 1.24.5        |
|          |           |               | This could lead to the client performing unexpected actions as    | 1.23.11       |
|          |           |               | well as forwarding the client's API server credentials to third   | 1.22.14       |
|          |           |               | parties.                                                          |               |
+----------+-----------+---------------+-------------------------------------------------------------------+---------------+
Figure 11 CVE Scan
Scanning Offline or “Static” Mode of Operation
As mentioned in the intro, we’ve added a new feature that we hope will allow those of you who would like to use KubiScan in a production environment to do so without running a community open-source project on a production setup.
There are only two steps:
1. Extract a JSON/YAML file from the environment you would like to test:
JSON file creation:
echo "[" > combined.json kubectl get roles --all-namespaces -o json >> combined.json echo "," >> combined.json kubectl get rolebindings --all-namespaces -o json >> combined.json echo "," >> combined.json kubectl get clusterroles -o json >> combined.json echo "," >> combined.json kubectl get clusterrolebindings -o json >> combined.json echo "," >> combined.json kubectl get secrets --all-namespaces -o json >> combined.json echo "," >> combined.json kubectl get pods --all-namespaces -o json >> combined.json echo "]" >> combined.json
Figure 12 JSON File Creation
YAML file creation:
echo "---" > combined.yaml kubectl get roles --all-namespaces -o yaml>> combined.yaml echo "---" >> combined.yaml kubectl get rolebindings --all-namespaces -o yaml>> combined.yaml echo "---" >> combined.yaml kubectl get clusterroles -o yaml>> combined.yaml echo "---" >> combined.yaml kubectl get clusterrolebindings -o yaml>> combined.yaml echo "---" >> combined.yaml kubectl get secrets --all-namespaces -o yaml>> combined.yaml echo "---" >> combined.yaml kubectl get pods --all-namespaces -o yaml>> combined.yaml
Figure 13 YAML File Creation
2. Run KubiScan with the “-f” option pointing at the extracted file (for example, kubiscan -a -f combined.json)
A Note on Risky Roles YAML
One last thing before we part.
Internally, KubiScan uses a file named risky_roles.yaml. This file contains templates for risky roles and their rules (the permissions of the role) and priority (so the highest severity security issues will appear first).
# Risk: Viewing specific secrets
# Verb: get
# Resources: secrets
# Example: kubectl get secrets
items:
- kind: Role
  metadata:
    namespace: default
    name: risky-get-secrets
  priority: CRITICAL
  rules:
  - apiGroups: ["*"]
    resources: ["secrets"]
    verbs: ["get"]
Figure 14 Snippet from risky_roles.yaml
Although the YAML defines the kind of each role as “Role”, these templates are compared against any Role\ClusterRole in the cluster.
KubiScan goes over each Role\ClusterRole in the cluster and checks whether it contains the rules of a risky role template. If it does, it is marked as risky.
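As a rough illustration of that containment check (our own sketch, not KubiScan’s actual code), the idea can be expressed like this:

```python
# A minimal sketch (not KubiScan's implementation) of matching a
# risky-role template against a role's rules.

# Template equivalent to the risky-get-secrets entry shown above.
RISKY_TEMPLATE = {
    "name": "risky-get-secrets",
    "priority": "CRITICAL",
    "rules": [{"apiGroups": ["*"], "resources": ["secrets"], "verbs": ["get"]}],
}

def rule_covers(role_rule, template_rule):
    """True if a role's rule grants at least the template's verbs and resources."""
    verbs = set(role_rule.get("verbs", []))
    resources = set(role_rule.get("resources", []))
    verbs_ok = "*" in verbs or set(template_rule["verbs"]) <= verbs
    resources_ok = "*" in resources or set(template_rule["resources"]) <= resources
    return verbs_ok and resources_ok

def is_risky(role_rules, template=RISKY_TEMPLATE):
    """A role is flagged when every template rule is covered by some role rule."""
    return all(
        any(rule_covers(rule, t) for rule in role_rules)
        for t in template["rules"]
    )

# A role allowing get/list on secrets matches the template;
# one limited to pods does not.
print(is_risky([{"resources": ["secrets"], "verbs": ["get", "list"]}]))  # True
print(is_risky([{"resources": ["pods"], "verbs": ["get"]}]))             # False
```

This also shows why a broader role (e.g., verbs ["*"] on all resources) is still caught: the template asks only whether the role grants at least the risky permissions, not whether it matches them exactly.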
We added all the risky roles we found, but you might have different insights and considerations; you can modify the file by adding roles you consider risky or removing ones you don’t.
Stop the Clock…
RBAC in Kubernetes does an excellent job of securing the cluster, but managing user and pod permissions can be a real headache. That’s where KubiScan steps in to make things easier. The tool will only get better as more people contribute to the project. Since we first mentioned it in 2018, CVE scanning and support for additional Kubernetes environments have been added. You can find the project on our GitHub page.
We would like to encourage you to help us and everyone in the community by contributing to this project.
Natan Tunik is an associate software engineer at CyberArk.