As organizations increasingly rely on cloud infrastructure and shared environments, securing data while it is being processed has become a major security concern. The Confidential Containers (CoCo) project addresses this challenge by offering a hardware-enforced security layer that Platform Engineers can automate, simplifying security compliance for developers and reducing the operational burden on SRE teams.

Why We Need to Protect Data-in-Use

Data security is traditionally approached by focusing on two states: data at rest (long-term storage) and data in transit (network movement). We have mature, often out-of-the-box, solutions for these states, such as TLS for transit and storage encryption for rest.

However, the third state—data in use—is often overlooked. At runtime, both the application and the data it processes are loaded into memory on infrastructure the user may not own (such as rented cloud servers). The core threat that confidential computing attempts to neutralize is the memory dump: a malicious or compromised cloud administrator with access to the machine can take a memory dump of a running application and potentially view sensitive data in plaintext, leading to a data breach.

The primary goal of Confidential Computing is to ensure the confidentiality and integrity of data, specifically when it is being processed.

Hardware-Level Isolation with TEEs

Confidential Computing relies on specialized processor technology to solve this problem.

The core technology is the Trusted Execution Environment (TEE). A TEE can be viewed as a “safe space” or “encrypted vault” within the main processor designed to run code and process data in isolation from the rest of the system.

The key feature of the TEE is encrypted memory. Even if a compromised administrator successfully executes a memory dump of the application, they will only see encrypted gibberish, not the actual plaintext data. The application running inside the TEE, however, can process and view the unencrypted data.

While early TEE offerings were process-based, modern TEEs are often available at the VM level. VM-based TEEs are more popular because they make it easier to lift and shift applications across different infrastructure providers.

Attestation: Establishing Cryptographic Trust

TEEs are not blindly trusted; they are attestable. When a TEE is created, the hardware vendor provides an attestation report that contains an initial measurement of the hardware. Application owners must take this report and attest it against an Attestation Service. This provides a cryptographic guarantee that the TEE has not been tampered with. Sensitive code or data is only released into the TEE once this verification passes.

Introducing the Confidential Containers (CoCo) Project

Confidential Containers (CoCo) is a CNCF project providing the necessary software stack to run confidential workloads effectively on Kubernetes. The central approach of CoCo is to encapsulate each Kubernetes Pod inside its own Trusted Execution Environment. This drastically limits the scope of trust required for the workload.

For developers, running a confidential workload requires minimal changes to the existing Kubernetes workflow. The only change needed is specifying the appropriate runtimeClassName in the deployment or pod definition.
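As a sketch, opting a workload into CoCo looks like the following pod definition. The runtime class name shown (kata-qemu-snp) is illustrative: the actual names depend on the underlying hardware and the runtime classes your CoCo installation registers.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app
spec:
  # The only CoCo-specific change: select a confidential runtime class.
  # The exact name (e.g., kata-qemu-snp for AMD SEV-SNP hardware) depends
  # on what your CoCo installation exposes; list them with
  # `kubectl get runtimeclass`.
  runtimeClassName: kata-qemu-snp
  containers:
    - name: app
      image: registry.example.com/sensitive-workload:latest  # placeholder image
```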

How it Works Under the Hood

The CoCo components handle the heavy lifting:

  1. The CoCo Operator: The Confidential Containers Operator reads a custom resource (CcRuntime) and is responsible for making the appropriate runtime classes available based on the underlying hardware. It manages configuration at the container runtime level (e.g., containerd), downloading binaries such as the Kata shim and updating the container runtime configuration.
  2. Kata and MicroVMs: CoCo leverages Kata Containers, an open-source project that runs pods inside microVMs—stripped-down, barebones VMs optimized for containers. In CoCo's case, the microVM is backed by hardware that supports confidential computing, effectively placing the entire pod inside a TEE. Inside the TEE, essential guest components are deployed: the Kata Agent (managing the workload), the Attestation Agent (handling verification), and the Confidential Data Hub (CDH) (handling key and secret management).

Using Public Cloud with The Peer Pod Approach

A common challenge for Platform Engineers is deploying TEEs in public cloud environments where the Kubernetes worker nodes themselves are standard VMs without direct confidential computing support. CoCo solves this using the Peer Pod or Cloud API Adaptor approach.

In this model, the virtualization layer is delegated to the cloud provider. Instead of the Kata runtime talking to a local hypervisor on the worker node, it communicates with a Cloud API Adaptor (provided by CoCo). The Adaptor then provisions the confidential microVM using the cloud provider’s TEE-enabled hardware (e.g., AWS EC2 M6a instances powered by AMD SEV-SNP). The rest of the workflow—including attestation and interactions with the Key Broker Service (KBS)—remains the same.
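Assuming a cluster where the Cloud API Adaptor is installed, selecting the peer-pod path is again just a runtime class. The kata-remote class name is the one used in upstream Cloud API Adaptor documentation; verify it against your installation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-peer-pod
spec:
  # kata-remote routes pod creation through the Cloud API Adaptor, which
  # provisions a TEE-enabled cloud instance (e.g., an AWS m6a VM with
  # SEV-SNP) rather than launching a microVM on the worker node itself.
  runtimeClassName: kata-remote
  containers:
    - name: app
      image: registry.example.com/sensitive-workload:latest  # placeholder image
```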

Lazy Attestation and Key Management

For SREs and security specialists, understanding the workflow for secret retrieval is critical, as it confirms that no data is ever processed without verification.

CoCo utilizes lazy attestation, meaning the attestation process is triggered only when a secret (such as the key for an encrypted container image, or credentials for a database holding sensitive data) is actually needed.

Here is the essential workflow:

  1. Secret Request: The Kata agent or the application requests a secret via the Confidential Data Hub (CDH), which is running inside the guest TEE.
  2. Attestation Trigger: The CDH notifies the Attestation Agent to fire the attestation request.
  3. Report Submission: The Attestation Agent sends the TEE’s attestation report to the Key Broker Service (KBS). The location of the KBS is provided via initialization data in the pod annotation.
  4. Verification: The KBS sends the report to an Attestation Service to confirm that the TEE is genuine and untampered (“verification check passed”).
  5. Key Release: Only upon successful verification does the KBS release the encryption key back to the application or agent inside the TEE.
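As noted above, the KBS location reaches the guest via pod annotations. One commonly documented form passes it through kernel parameters, sketched below; the exact annotation key and value syntax vary between CoCo releases, so treat this as an assumption to check against your version's documentation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app
  annotations:
    # Illustrative: hands the KBS address to the guest's Attestation
    # Agent via kernel parameters. The KBS hostname and port here are
    # placeholders; the annotation format differs across CoCo releases.
    io.katacontainers.config.hypervisor.kernel_params: "agent.aa_kbc_params=cc_kbc::http://kbs.example.com:8080"
spec:
  runtimeClassName: kata-qemu-snp  # illustrative runtime class name
  containers:
    - name: app
      image: registry.example.com/sensitive-workload:latest  # placeholder image
```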

This process ensures that sensitive data is only released to attested secure environments.

Reduced Trusted Compute Base (TCB)

For Enterprise Architects and SREs, the primary security benefit of Confidential Containers is the drastic reduction of the Trusted Compute Base (TCB).

The TCB is defined as the entire set of hardware, software, and firmware components that an application must implicitly trust. In a traditional Kubernetes deployment, the TCB is high; the application trusts the worker node’s Host OS, kernel, Containerd, kubelet, and various host components from different vendors. If any of these components are compromised, the application can be compromised.

With CoCo, the application no longer trusts the worker node on which it is running. The TCB is reduced to the specialized hardware providing the TEE and the minimal software components running inside that TEE (like the CDH and Kata agent). Even the container image is pulled directly inside the trusted execution environment, not on the untrusted node. This shift places the root of trust primarily on the hardware itself, which is significantly harder to compromise than software.

Operational Considerations

While offering immense security benefits, Platform Engineers should note two key performance impacts:

  1. Startup time increases because the microVM must be provisioned and launched before the pod can start running.
  2. Network performance impact varies with the environment, but the overhead is usually below 10%.

Conclusion: Scaling Security, Not Your Team

Confidential Containers are essential for organizations dealing with highly sensitive data or strict regulatory requirements, such as DORA (Digital Operational Resilience Act) compliance for financial entities requiring protection of data in use, or protecting proprietary data used in modern AI/ML workloads.

By implementing and automating the CoCo stack, the platform team provides an incredibly powerful, hardware-enforced security guarantee as a simple, self-service option (a mere runtimeClassName) for developers. You are automating security and compliance at the highest level, enabling developers to innovate rapidly without ever compromising the integrity and confidentiality of data in use. You can deploy confidential workloads on the cloud provider of your choice with the Kubermatic Kubernetes Platform (KKP).

You can learn more about Confidential Containers from my talk at the Container Days 2026 conference in Hamburg, where I showed confidential containers in action.

Akash Gautam

Kubernetes Consultant
