Kubernetes Audit Logs

All Kubernetes activities, whether initiated manually with the kubectl tool or programmatically, result in one or more API calls to the Kubernetes API server. By setting up a Kubernetes audit log integration, you can monitor and record some or all of these API calls and associated metadata with Lacework.

Lacework ingests Kubernetes audit logs to monitor user activity (for example, kubectl exec and port-forward), the deployment of new resources such as workloads, Kubernetes roles, and role bindings, as well as the deletion of resources, authentication issues, and forbidden API calls.

Lacework can ingest millions of logs and surface the most important events, such as the execution of rogue containers, the deployment of misconfigured workloads, the addition of dangerous roles, and manual logins to containers. Lacework provides anomaly detection and the option to run custom policies tailored to your specific use cases.
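For context, each record in a Kubernetes audit log is a structured event emitted by the API server. The following is a simplified sketch of such an event (field names follow the audit.k8s.io/v1 Event schema; the values are illustrative only and real events contain additional fields):

```python
# Simplified sketch of a single Kubernetes audit event (audit.k8s.io/v1).
# Values are illustrative only.
audit_event = {
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "auditID": "3f8c9a2e-example",
    "stage": "ResponseComplete",
    "verb": "create",                               # the API action
    "requestURI": "/apis/batch/v1/namespaces/default/jobs",
    "sourceIPs": ["203.0.113.10"],                  # caller IP address(es)
    "user": {
        "username": "kubernetes-admin",             # Kubernetes username
        "groups": ["system:masters", "system:authenticated"],
    },
    "impersonatedUser": None,                       # set when impersonation is used
    "objectRef": {"resource": "jobs", "namespace": "default", "name": "backup"},
    "responseStatus": {"code": 201},                # HTTP status of the call
    "requestReceivedTimestamp": "2024-01-01T12:00:00Z",
}
```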

Audit Logs Ingestion

Lacework can ingest Kubernetes audit logs, also called control plane logs, from Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). Lacework provides an easy way to ingest logs from a cluster using CloudFormation (for EKS) or Terraform. For more information, see Amazon EKS Audit Log Integration or Kubernetes Audit Logs for GKE.
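For EKS, audit logging must be enabled on the cluster's control plane so that audit events are delivered to CloudWatch Logs. The following is a minimal sketch, separate from the Lacework integration itself, for checking this with boto3; the cluster name and region are hypothetical placeholders:

```python
import boto3

def eks_audit_logging_enabled(cluster_name: str, region: str) -> bool:
    """Return True if the EKS control plane 'audit' log type is enabled."""
    eks = boto3.client("eks", region_name=region)
    cluster = eks.describe_cluster(name=cluster_name)["cluster"]
    for entry in cluster.get("logging", {}).get("clusterLogging", []):
        if "audit" in entry.get("types", []) and entry.get("enabled"):
            return True
    return False

# Hypothetical cluster name and region, for illustration only.
print(eks_audit_logging_enabled("my-cluster", "us-east-1"))
```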

Polygraph for Kubernetes Audit Logs

You can view an overview of Kubernetes audit log activity in the Lacework Console by clicking Workloads > Kubernetes. The Kubernetes dashboard includes the Kubernetes inventory, behavior, health, and related events.

The Behavior section presents a visual depiction of observed events, such as the creation and deletion of resources, as in the following example:

Kubernetes polygraph details

The Polygraph shows the different elements of a Kubernetes API call (a rough field mapping follows the list). From left to right, they are:

  • Source IP address: Internal IP or external IP
  • Kubernetes groups of the user initiating the API call
  • AWS username (when available) initiating the API call
  • Kubernetes username initiating the API call
  • Kubernetes user impersonated (when available)
  • Type of resource targeted: Workload (daemonset, deployment, pod), RBAC (roles or bindings), or Storage (persistent volumes or CSI drivers)
  • Action, such as CreateJob, PatchPod, or DeleteNamespace
  • Status of the API call: Success (200 OK) or Failure
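As a rough illustration of where these elements come from, the sketch below pulls the corresponding fields out of a raw audit event, assuming the simplified event structure shown earlier; the helper name is hypothetical and only illustrates the mapping:

```python
def polygraph_elements(event: dict) -> dict:
    """Map a raw audit event onto the Polygraph elements listed above (illustrative only)."""
    user = event.get("user", {})
    obj = event.get("objectRef") or {}
    status = event.get("responseStatus") or {}
    return {
        "source_ips": event.get("sourceIPs", []),    # internal or external IPs
        "k8s_groups": user.get("groups", []),        # groups of the calling user
        "k8s_username": user.get("username"),        # Kubernetes username
        # The AWS username, when available, comes from the cloud provider's
        # identity mapping and is not part of the standard fields shown here.
        "impersonated_user": (event.get("impersonatedUser") or {}).get("username"),
        "resource": obj.get("resource"),             # e.g. pods, roles, persistentvolumes
        "action": f"{event.get('verb', '')} {obj.get('resource', '')}",  # e.g. "create jobs"
        "success": status.get("code", 0) < 400,      # 2xx/3xx = success
    }
```

Applied to the sample event sketched earlier, this yields the caller's IP, groups, and username, a create action on jobs, and a successful status.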

Hover over each node in the Polygraph to get more information about the element, such as the list of IPs or the list of Kubernetes groups. Click the + icon to expand nodes.

You can search the Polygraph for specific actions (such as DeletePods), namespace names, and resource names.

For example, to view events in which users ran the kubectl exec command, type exec in the Polygraph search box. This narrows the Polygraph to any element that contains exec, including the exec action. If you have resources (e.g., workloads or roles) that include exec in the name, you can filter them out by adding the corresponding -pod or -role to the search.

Kubernetes exec search results

This example shows that a user ran kubectl exec four times. You can hover over the user to get information about that user and over each exec node to see the full command executed in the container.

API Calls and Error Events

You can view the data that underlies the Polygraph in table format by clicking the following links:

  • API calls - Lists relevant audit logs, with some internal Kubernetes activities excluded, such as lease updates.
  • Error events - Lists API calls that resulted in an error.

You can search the data by available fields. For example, a search for delete shows the 120 logs that appeared in the Polygraph search: 60 for deleting jobs and 60 for deleting pods. You can find additional information such as resource details, the exact time of the event, or the user agent.

Kubernetes API call search

You can customize the displayed columns; some are hidden by default.

The Error Events table shows all API calls that resulted in an error, for example, because of authentication failures or insufficient permissions. If the Kubernetes API server is open to the internet, there may be many API calls from unauthenticated users (the system:anonymous Kubernetes user) probing the API server. Most of these are random HTTP requests from public IPs that result in invalid API calls.
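As a rough sketch of the kind of filtering involved, again assuming the simplified event structure shown earlier, error events are API calls whose response status indicates a failure, and anonymous probes appear under the system:anonymous username:

```python
def is_error_event(event: dict) -> bool:
    """True when the API call failed (HTTP status >= 400)."""
    return event.get("responseStatus", {}).get("code", 0) >= 400

def is_anonymous_probe(event: dict) -> bool:
    """True when an unauthenticated caller (system:anonymous) hit the API server."""
    return event.get("user", {}).get("username") == "system:anonymous"
```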

Anomaly Detection for Kubernetes Audit Logs

Lacework continuously monitors the audit logs to detect abnormal behavior, such as new workloads, container registries used for the first time, and overly permissive new roles and cluster roles.

Anomaly detection considers new events across all clusters over the last 90 days. For example, the anomaly that looks for a new workload triggers only one event if the same workload is deployed multiple times in different clusters or in different namespaces.

You can view Kubernetes anomaly policies on the Policies page by selecting Kubernetes as the domain filter and Audit Log Anomalies as the type filter.

You can customize each anomaly to include or exclude information such as namespaces or clusters.

For more information about Kubernetes behavior anomaly policies, see Default Policies.

Kubernetes Alerts

To view alerts related to Kubernetes, go to the Alerts page and choose Kubernetes as the source filter. You can use the Alert category and Alert subcategory filters to further refine your results.