
Project Trinidad: Merging Security with Modern Application Observability – Part 2: Zero Instrumentation for Full Visibility

The complexity and heterogeneous nature of modern workloads make securing and monitoring them in depth a struggle for DevOps and Security teams. In the first post in this series, we looked into why this is the case and introduced Project Trinidad, which helps DevOps and Security teams secure their clusters.

In this post, we dive into the details behind our monitoring and security solution and show how modern workloads can be monitored using our zero-instrumentation approach to data collection.

Zero-Instrumentation Data Collection

Given the heterogeneity and complexity of existing systems, it is easy to see that collecting workload data to get a comprehensive view is problematic. Instrumenting – or even changing – the deployed systems “just” for security is a no-go in most production scenarios. Furthermore, we certainly do not want to introduce additional complex dependencies into an already overly complicated system. Instead, we want a solution that is Kubernetes native and able to deploy and scale with the workload cluster.

To address these issues, Project Trinidad uses a zero-instrumentation approach to data collection: we build on eBPF to capture network traffic on Kubernetes worker nodes. More precisely, we leverage open technologies and standards for accomplishing the task. First, we partner with the folks at Pixie to deploy a lightweight data collector (called Stirling). Second, we use the OpenTelemetry format and tools for encoding and processing data. And last but not least, we orchestrate different tools, such as Jaeger-Tracing and OpenSearch, for data storage, query, and visualization.

By building on eBPF as the foundation of our data collection, we can instrument workloads running on almost all modern Linux kernels without changing a single line of code in the application. Furthermore, leveraging open formats and standards like OpenTelemetry allows us to integrate with new ways of capturing data in the future. Our analysis and processing pipelines are intentionally agnostic of where data comes from, allowing us to integrate with any third-party tool that uses the OpenTelemetry format. This consolidates security and observability behind a single pane of glass.
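
As a concrete illustration of this decoupling, the following is a minimal, generic OpenTelemetry Collector configuration sketch (not Project Trinidad's actual configuration) showing how any OTLP-speaking source can feed the same trace pipeline, regardless of how the data was captured; the backend endpoint is a placeholder:

$ cat otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
processors:
  batch:
exporters:
  otlphttp:
    endpoint: 'https://<analysis-backend>/v1/traces'
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]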

Data Capturing

The Project Trinidad data processing pipeline is split into two separate components: first, the Trinidad Sensor collects, filters, aggregates, and annotates data in a monitored Kubernetes cluster (hosted by the Trinidad user); second, the data is uploaded to the Trinidad Backend, which takes care of traffic analysis, anomaly detection, and data visualization.
The sensor deployment architecture is shown in Figure 1:

Figure 1: Project Trinidad Sensor deployed as DaemonSet in a monitored Kubernetes cluster

We use a Kubernetes DaemonSet to deploy the eBPF-based traffic-capturing components on every single Kubernetes worker node. This allows us to ensure that we automatically capture data on all nodes – even newly provisioned ones (e.g., after horizontally scaling the cluster). Additionally, this means that we can capture and efficiently process data as close to the source as possible (e.g., filter unwanted data without sending it across nodes of the cluster).
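
Once the Sensor is installed (see the next section), a quick way to confirm this behavior is to check that the DaemonSet reports one ready pod per worker node; the output below is illustrative for a three-node cluster:

$ kubectl --namespace trinidad get daemonset trinidad-sniffer
NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
trinidad-sniffer   3         3         3       3            3           <none>          2m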

Installation

Since the Project Trinidad Sensor consists of only the capturing and processing components, installing it is as simple as provisioning a Kubernetes namespace in which the resources will be contained, downloading the Sensor manifests from the SaaS-backend service, and deploying it onto the cluster:

$ curl -o trinidad.yaml \
    'https://<vmware-saas-portal>/api/v1/manifests/sensor-saas/<sensor-uuid>'
$ kubectl create namespace trinidad 
$ kubectl --namespace trinidad apply --filename trinidad.yaml

...
deployment.apps/trinidad-otel-collector created
horizontalpodautoscaler.autoscaling/trinidad-otel-collector created
secret/asap-server-cert created
secret/asap-user-cert created
clusterrole.rbac.authorization.k8s.io/trinidad-otel-collector created
daemonset.apps/... created
daemonset.apps/trinidad-sniffer created

The Sensor uses Mutual TLS (mTLS) to securely authenticate with – and send data to – the cloud backend; the required SSL certificates are stored as secrets within the namespace. Furthermore, we install a ClusterRole to augment network traffic with Kubernetes metadata (such as namespaces or services of the communicating pods). Last, we configure a HorizontalPodAutoscaler (HPA) to ensure that the data processing scales with the size and load of the cluster.
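
All of these resources live alongside the Sensor and can be inspected with standard kubectl tooling, for example (the ClusterRole is cluster-scoped, hence the separate query):

$ kubectl --namespace trinidad get secrets,hpa
$ kubectl get clusterroles | grep trinidad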

Data Capturing Privileges

Anyone familiar with eBPF internals might now be asking: how do a containerized sensor (i.e., a strongly isolated container within a Kubernetes pod) and eBPF go together? And it is indeed something that needs addressing: the Project Trinidad Sensor container running the network traffic capturing process requires elevated permissions to do its work.

To capture network traffic from any process on the Linux system, the Sensor process uses eBPF to install kernel hooks into Linux system calls dealing with networking, such as send and recv. Clearly, this requires elevated privileges that we must grant to the capturing process (and to only that container, nothing else):

$ kubectl --namespace trinidad \
    get daemonset trinidad-sniffer \
    -o jsonpath='{.spec.template.spec.containers[0].securityContext}' | jq
{
  "privileged": true,
  "runAsGroup": 0,
  "runAsNonRoot": false,
  "runAsUser": 0
}

The inserted kernel hooks will be called whenever any Linux process on the node performs network activity. The hook logic, in turn, copies the required data into a buffer shared with the user-mode sniffing component, which will take over further processing: it will reassemble and format the raw network data before sending it to the next component, which runs in a significantly less privileged container within the same pod.
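
For readers who want to see what this looks like on a worker node: assuming node-level access (e.g., a debug shell on the node) and the bpftool utility, the loaded eBPF programs and the maps acting as shared buffers can be listed as sketched below; the exact program and map names depend on the Sensor version and are not shown here:

$ bpftool prog show | grep -i kprobe   # kernel-side probes attached to syscalls
$ bpftool map show                     # maps, including buffers shared with user space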

Another elevated privilege is needed by the sniffing container to identify which kernel it is operating on. Installing kernel hooks requires knowing precisely which kernel version is being hooked, because even minor patches to a kernel – or changes in the kernel compilation flags – can influence the location or offsets of kernel code structures (diving into these nuances is well beyond the scope of this post, but a blog post on BTFGen does an excellent job of covering the problem in more detail).

To fingerprint the running kernel correctly, we require (read-only) access to parts of the host file system of the Kubernetes worker node.
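
Which parts of the host file system are mounted (read-only) into the sniffing container can be listed with a jsonpath query in the same style as the ones above; the exact paths depend on the Sensor version, so no output is shown here:

$ kubectl --namespace trinidad \
    get daemonset trinidad-sniffer \
    -o jsonpath='{.spec.template.spec.volumes[*].hostPath.path}'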

Last, but not least, some types of network communication are tough to handle from the kernel level (such as encrypted traffic or traffic using gRPC – more details on this below). To handle these, the Sensor will inject eBPF user-mode hooks into other processes running on the system. For this to work, the container requires access to the host’s process namespace (in addition to the privileges already described above):

$ kubectl --namespace trinidad \
    get daemonset trinidad-sniffer \
    -o jsonpath='{.spec.template.spec}' | jq
{
  ...
  "hostPID": true,
  ...
}

Handling Encryption and Encoding in User Mode

Some types of network communication are inherently difficult to handle from the context of a kernel hook. Good examples of this are TLS (encrypted) traffic or gRPC connections.

To inspect encrypted traffic from the kernel, we would need to extract encryption key material from the monitored user-mode application and duplicate the state of the encrypted transactions. To avoid this complexity and overhead, we leverage user-mode hooks to intercept plain-text data before it is encrypted within the application, and to capture received data after it has been decrypted. For interested readers: our colleagues at Pixie have written an excellent post about the nitty-gritty details of how this works in Debugging with eBPF Part 3: Tracing SSL/TLS connections.
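
As a rough intuition for what such a user-mode hook targets (this illustrates the general technique, not the Sensor's exact mechanism): for OpenSSL-based applications, functions such as SSL_write and SSL_read see the plain-text payload, and the library providing them can be located through the target process's memory map, where <pid> is a placeholder for a TLS-terminating process on the node:

$ grep -E 'libssl|libcrypto' /proc/<pid>/maps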

Similarly to TLS, understanding gRPC-encoded messages is problematic when we do not have a precise understanding of the protocol specifics, which are typically implemented in user-mode libraries. We use a similar approach here: user-mode hooks are placed within a monitored application to intercept data before it is encoded and after it has been decoded, providing us with structured data that we can analyze. Once more, for the technical details, we refer to the post Pushing the envelope: monitoring gRPC-C with eBPF written by our friends over at Pixie.

Performance Overhead

The Project Trinidad components for capturing network traffic are designed to have minimal overhead on the cluster and the running workload. Nevertheless, as with any system, there is some overhead. This can be divided into three categories – namely, the impact on the workload applications, on the Kubernetes worker node load, and the Kubernetes cluster load overall.

First, the inserted eBPF probes incur a small (typically <1%) slowdown on the application because we must interrupt the application and copy network data for later processing. Second, as part of processing this raw data, we reconstruct the application-layer (layer 7) protocol information. The overhead of this step heavily depends on the type of workload we monitor, but it typically averages around 5% additional load on the Kubernetes worker node. Last, we annotate, batch, and upload data in the OpenTelemetry Collector pod. The HPA scales these pods with the volume of network data, and a single CPU is typically sufficient for most small-to-medium clusters and workloads.
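
To see how the collector is scaled on a given cluster (using the HPA name from the installation output above), the standard HPA view shows the current replica count and utilization against its targets; the output depends on the cluster, so none is shown here:

$ kubectl --namespace trinidad get hpa trinidad-otel-collector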

Data Visualization

The last piece of Project Trinidad’s architecture lives in a cloud-hosted backend. Figure 2 shows this in more detail:

Figure 2: Project Trinidad cloud-hosted backend data processing pipelines

The Sensor uses mTLS to authenticate to the cloud backend and regularly uploads filtered, augmented, and batched data via a Kafka stream. From there, data is processed by our ML pipeline (details will follow in the next post in this series) and stored in an OpenSearch database for later analysis and visualization.

Project Trinidad is currently in tech preview, and data visualization is an area of heavy development (one where we are looking for design partners to actively shape the future direction of the project). Today, the captured data is annotated with the anomalies found during processing and can be visualized and queried using the Jaeger UI's support for OpenTelemetry-formatted messages:

Figure 3: Project Trinidad cloud-hosted backend UI

Wrapping Up

In this post, we showed how Project Trinidad leverages eBPF to monitor modern workloads without changing a single line of application code. This provides DevOps and Security teams with in-depth insights into their clusters – an essential first step to securing production clusters.

In an upcoming third post in this series, we will explain how we can leverage the collected network traffic to automatically learn the expected behavior of applications deployed in our clusters. We can use this behavior not only to get a clear understanding of all the services in our workloads but also to alert us when the observed traffic deviates from the expected behavior in a way that might indicate an ongoing attack against our services.
