Intel Confidential Containers Guide
Introduction¶
Kubernetes is a popular open-source platform for automating the deployment, scaling, and management of containerized applications (pods). The Confidential Containers (CoCo) open-source project aims to establish a standardized approach to Confidential Computing within Kubernetes pods. It leverages Trusted Execution Environments (TEEs), such as Intel® TDX, to deploy secure containerized applications without requiring in-depth understanding of Confidential Computing technology.
Intended audience¶
This guide is intended for engineers and technical staff from Cloud Service Providers (CSPs), System Integrators (SIs), and on-premises enterprises involved in cloud feature integration, as well as cloud guest users (i.e., end users).
About this guide¶
This guide provides step-by-step instructions on configuring Confidential Containers on an Ubuntu 24.04 system within a Kubernetes environment. Our intention is to give you a quick start guide to deploy Intel TDX-protected applications in a Kubernetes cluster, so that you can work on implementing this technology in your environment.
We assume that you have basic knowledge about Kubernetes concepts and that a Kubernetes cluster is already set up and running. Refer to the Kubernetes documentation for more information on setting up a Kubernetes cluster. We tested the guide on a single-node Kubernetes cluster. There might be some differences in the steps if you are using a multi-node cluster.
This guide also assumes that you have already enabled and configured Intel® TDX on each platform you wish to use as a worker node for your Kubernetes cluster. The master node (also known as the control plane) does not need to have Intel® TDX enabled. Unless specified otherwise, execute all provided steps on the master node.
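Since the guide assumes Intel TDX is already enabled on every worker node, a quick sanity check on each node can save debugging time later. Treat the following as a hedged sketch, since the exact interfaces vary by kernel version: recent host kernels expose a `tdx` parameter on the `kvm_intel` module, and log `virt/tdx: module initialized` in `dmesg` when host-side initialization succeeds.

```shell
# Run on each prospective worker node (not the master).
# Checks whether the kvm_intel module exposes a 'tdx' parameter
# (Y typically indicates Intel TDX is usable by KVM on this host).
if [ -r /sys/module/kvm_intel/parameters/tdx ]; then
  echo "kvm_intel tdx parameter: $(cat /sys/module/kvm_intel/parameters/tdx)"
else
  echo "kvm_intel tdx parameter not found (module not loaded or kernel without TDX support)"
fi
```

A complementary check is `sudo dmesg | grep -i tdx`, which requires root; refer to Canonical's Intel TDX guide for the authoritative verification steps.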
This guide is divided into the following sections:
- Infrastructure Setup: This section provides instructions on setting up the infrastructure in an existing Kubernetes cluster to be able to run Intel TDX-protected applications.
- Demo Workload Deployment: This section provides instructions on deploying a sample Intel TDX-protected application in the configured Kubernetes cluster.
- Troubleshooting: This section provides instructions on troubleshooting common issues that may arise following the steps in this guide.
Scope¶
This guide covers the following operating system:
- Ubuntu 24.04
The guide was tested on the following hardware:
- 4th Gen Intel® Xeon® Scalable processors
- 5th Gen Intel® Xeon® Scalable processors
- 6th Gen Intel® Xeon® Scalable processors
Further reading¶
For more information on the projects mentioned in this guide, refer to the documentation of the respective projects: Kubernetes, Confidential Containers, Kata Containers, Trustee, and Intel® TDX.
Infrastructure Setup¶
On this page, we will set up the infrastructure required to run Confidential Containers with Intel® Trust Domain Extensions (Intel® TDX) in a Kubernetes environment. This chapter is intended for the administrator of the Kubernetes cluster.
In detail, we cover the following tasks:
- Prerequisites

  We introduce the necessary prerequisites that we assume for the infrastructure setup.

- Install Confidential Containers

  We explore how to deploy Kata Containers, a lightweight container runtime, to allow running containers as lightweight VMs or Intel TDX-protected VMs (i.e., TDs). We use Helm charts to manage the Confidential Containers runtime on a Kubernetes cluster.

- Install Attestation Components

  We discuss how to deploy attestation components that ensure that the pods are running the expected workloads, that the pods are protected by Intel TDX on a genuine Intel platform, that the platform is patched to a certain level, and that other security-relevant information is as expected. As an example, we show how to integrate different attestation services into the Confidential Containers Key Broker Service (KBS): Intel® Trust Authority and an Intel® DCAP-based attestation service.

- Cleanup

  We provide commands to remove the deployed components step by step from the Kubernetes cluster.
Prerequisites¶
This section describes the prerequisites that we assume for the following steps regarding installed software and optionally access to an Intel Trust Authority API Key.
Installed Software¶
Ensure that your infrastructure meets the following requirements:
- Kubernetes,
- a Kubernetes cluster with at least one node serving as both master and worker node,
- containerd 1.7.29 or newer,
- Helm 3.8 or newer,
- worker nodes running on registered Intel platforms with the Intel TDX module enabled.
Note: Intel TDX Enabling

The platform registration referred to above does not yet fully cover Ubuntu 24.04. For additional details, refer to Canonical's guide to configuring Intel TDX; in particular, the remote attestation chapter provides details about the configuration of remote attestation.
Intel Trust Authority API Key¶
Note
This step is only required if you want to use Intel Trust Authority as an attestation service.
To enable remote attestation of applications as explained in the following chapter, you need to have access to an Intel Trust Authority API Key (later referred to as `ITA_API_KEY`).
If you do not yet have such a key, you will find instructions on the Intel Trust Authority website. In particular, you will find the option to start a free trial.
Install Confidential Containers¶
In this section, we will deploy all required components to run containers as lightweight Intel TDX-protected VMs (i.e., TDs). In particular, we use Helm charts to deploy and manage the Confidential Containers Runtime on a Kubernetes cluster.
For more details, see the complete instructions in the CoCo Quick Start and CoCo Getting Started guides.
Preparation¶
1. Ensure your cluster's node is labeled:

   ```shell
   kubectl label node $(kubectl get nodes | awk 'NR!=1 { print $1 }') \
     node.kubernetes.io/worker=
   ```

2. Set the environment variable `HELM_CHARTS_RELEASE_VERSION` to the version of the Helm chart that should be used for the Confidential Containers deployment. All available versions can be found on the corresponding GitHub page.

   Note: This guide was tested with version `v0.18.0`.

   ```shell
   export HELM_CHARTS_RELEASE_VERSION=0.18.0
   ```

3. Set the environment variable `HELM_COCO_CHART_NAME` to give a name to the Confidential Containers deployment:

   ```shell
   export HELM_COCO_CHART_NAME=coco
   ```
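As a side note, the `awk 'NR!=1 { print $1 }'` filter in the labeling command simply drops the header row of the `kubectl get nodes` output and keeps the first column (the node names). A quick offline illustration with hypothetical output:

```shell
# Simulated 'kubectl get nodes' output (hypothetical node name);
# awk skips the header row (NR!=1) and prints the first column.
printf 'NAME     STATUS   ROLES    AGE   VERSION\nnode-1   Ready    <none>   10d   v1.30.0\n' \
  | awk 'NR!=1 { print $1 }'
# prints: node-1
```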
Installation¶
1. Create a Helm values file (`tdx-values.yaml`) with the following content to enable all runtimes and shims relevant for Intel TDX:

   ```yaml
   kata-as-coco-runtime:
     shims:
       disableAll: true
       qemu-tdx:
         enabled: true
       qemu-nvidia-gpu-tdx:
         enabled: true
       qemu-dev:
         enabled: true
     defaultShim:
       amd64: qemu-tdx
   ```

2. Install the CoCo runtime using a Helm chart with the created values file:

   ```shell
   helm install ${HELM_COCO_CHART_NAME} oci://ghcr.io/confidential-containers/charts/confidential-containers \
     --version ${HELM_CHARTS_RELEASE_VERSION} \
     -f tdx-values.yaml \
     --namespace coco-system \
     --create-namespace
   ```

   Note: If your network requires the usage of a proxy, you have to configure it in one of two ways:

   - Add the following to your Helm values file (`tdx-values.yaml`):

     ```yaml
     kata-as-coco-runtime:
       shims:
         qemu-tdx:
           agent:
             httpsProxy: "${HTTPS_PROXY}"
             noProxy: "${NO_PROXY}"
     ```

   - Specify an overwrite by adding the following lines to the above `helm install` command:

     ```shell
     --set kata-as-coco-runtime.shims.qemu-tdx.agent.httpsProxy="${HTTPS_PROXY}" \
     --set kata-as-coco-runtime.shims.qemu-tdx.agent.noProxy="${NO_PROXY}"
     ```

   The `HTTPS_PROXY` and `NO_PROXY` environment variables should be set according to the requirements of the machine where the Kubernetes cluster is deployed.

3. Wait until all pods are ready, which can be checked with the following command:

   ```shell
   kubectl -n coco-system wait --for=condition=Ready pods --all --timeout=5m
   ```

   Expected output:

   ```text
   pod/kata-as-coco-runtime-75gbh condition met
   ```

4. Check that the Confidential Containers runtime classes exist:

   ```shell
   kubectl get runtimeclass
   ```

   Expected output:

   ```text
   NAME                            HANDLER                         AGE
   kata-qemu-coco-dev              kata-qemu-coco-dev              19s
   kata-qemu-coco-dev-runtime-rs   kata-qemu-coco-dev-runtime-rs   19s
   kata-qemu-nvidia-gpu-tdx        kata-qemu-nvidia-gpu-tdx        19s
   kata-qemu-tdx                   kata-qemu-tdx                   19s
   ```
Customization¶
Based on your environment, you might want to customize the Confidential Containers installation, e.g., specify which runtimes to enable, configure shims, and set default runtimes. This can be done in the following ways:

- Provide a Helm values file to the `helm install` command, as used in the instructions above.
- Specify value overrides in the `helm install` command.

In the following sub-sections, we provide more details on these options, which can also be combined. Value overrides take precedence over the values in the values file.
Notes
- Node Selectors: When setting node selectors with dots in the key, escape them, e.g., `node-role\.kubernetes\.io/worker`.
- Architecture: The default architecture is `x86_64`. Other architectures must be explicitly specified.
- Comma Escaping: When using `--set` with values containing commas, escape them, i.e., use `\,`.
Configuration via Helm values file¶
For complex configurations, it is recommended to create a Helm values file and pass it to `helm install` using the `-f` option.

To download the latest available configuration options for the chart, use the following command:

```shell
helm show values oci://ghcr.io/confidential-containers/charts/confidential-containers > values.yaml
```
The Confidential Containers project provides a file containing multiple example Helm values configurations.
Configuration via value overrides¶
For ad-hoc configurations, it is recommended to use value overrides in `helm install` using the `--set` option.

For example, to enable only the Kata Containers runtimes with Intel TDX support and disable all other runtimes, you can use the following value overrides:

```shell
--set kata-as-coco-runtime.shims.disableAll=true \
--set kata-as-coco-runtime.shims.qemu-tdx.enabled=true \
--set kata-as-coco-runtime.shims.qemu-nvidia-gpu-tdx.enabled=true \
--set kata-as-coco-runtime.defaultShim.amd64=qemu-tdx
```
More information about available configuration options can be found on the Customization page of the Confidential Containers documentation.
Install Attestation Components¶
In this section, we explore how to deploy attestation components that ensure that the pods are running the expected workloads, that the pods are protected by Intel TDX on a genuine Intel platform, that the platform is patched to a certain level, and that other security-relevant information is as expected.
As an example, we show how to integrate different attestation services into Trustee: Intel® Trust Authority and an Intel® DCAP-based attestation service.

Steps:
1. Clone the Confidential Containers Trustee repository using the following commands:

   Note: This guide was tested with version `v0.17.0`, but newer versions might be available.

   ```shell
   git clone -b v0.17.0 https://github.com/confidential-containers/trustee
   cd trustee/kbs/config/kubernetes/
   ```
2. If you are behind a proxy, update the Trustee deployment configuration (if not, no additional steps are needed):

   Set the following environment variable according to the requirements of the machine where the Kubernetes cluster is deployed:

   - `https_proxy`: set to your proxy URL.

   Run the following command to apply the proxy settings to the Trustee deployment:

   ```shell
   sed -i "s|^\(\s*\)volumes:|\1 env:\n\1 - name: https_proxy\n\1 value: \"$https_proxy\"\n\1volumes:|" base/deployment.yaml
   ```
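The `sed` expression above captures the indentation of the `volumes:` key and splices an `env:` entry carrying `https_proxy` directly above it. The transformation can be tried offline on a throwaway file; the manifest fragment and proxy URL below are hypothetical stand-ins, not the real `base/deployment.yaml`:

```shell
# Hypothetical, minimal stand-in for base/deployment.yaml
cat > /tmp/deployment-demo.yaml <<'EOF'
      containers:
      - name: kbs
      volumes:
      - name: config
EOF
https_proxy=http://proxy.example.com:3128  # hypothetical proxy URL

# Same sed as above: insert an env entry before the 'volumes:' key,
# reusing its captured indentation (GNU sed syntax).
sed -i "s|^\(\s*\)volumes:|\1 env:\n\1 - name: https_proxy\n\1 value: \"$https_proxy\"\n\1volumes:|" /tmp/deployment-demo.yaml

grep -n "https_proxy" /tmp/deployment-demo.yaml
```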
3. Configure Trustee according to the attestation service variant used:

   - Intel Trust Authority: To configure Trustee to use Intel Trust Authority as an attestation service, set the environment variable `DEPLOYMENT_DIR` as follows:

     ```shell
     export DEPLOYMENT_DIR=ita
     ```

     Set your Intel Trust Authority API Key in the Trustee configuration:

     ```shell
     sed -i 's/api_key =.*/api_key = "'${ITA_API_KEY}'"/g' $DEPLOYMENT_DIR/kbs-config.toml
     ```

   - Intel DCAP: To configure Trustee to use Intel DCAP as an attestation service, set the environment variable `DEPLOYMENT_DIR` as follows:

     ```shell
     export DEPLOYMENT_DIR=custom_pccs
     ```
4. Update your secret key that is required during deployment:

   ```shell
   echo "This is my super secret" > overlays/key.bin
   ```

5. Deploy Trustee:

   ```shell
   ./deploy-kbs.sh
   ```

   Validate that the Trustee pod is running:

   ```shell
   kubectl get pods -n coco-tenant
   ```

   Expected output:

   ```text
   NAME                   READY   STATUS    RESTARTS   AGE
   kbs-5f4696986b-64ljx   1/1     Running   0          12s
   ```
6. Retrieve `KBS_ADDRESS` for future use in the pod's YAML file:

   ```shell
   export KBS_ADDRESS=http://$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')
   echo $KBS_ADDRESS
   ```

   Expected output:

   ```text
   <protocol>://<address>:<port>
   ```

   For example: `http://192.168.0.1:32556`

Now you can proceed to the next chapter to deploy your pod. See demo workload deployment.
Cleanup¶
This section provides commands to remove the deployed components step by step from the Kubernetes cluster. First, you should uninstall Trustee, then uninstall Confidential Containers.
Uninstall Trustee¶
Depending on the attestation service you used, uninstall Trustee by following the steps below:

1. Set the `DEPLOYMENT_DIR` variable depending on the attestation service used during deployment:

   ```shell
   # Intel Trust Authority:
   export DEPLOYMENT_DIR=ita
   # Intel DCAP:
   export DEPLOYMENT_DIR=custom_pccs
   ```

2. Delete Trustee:

   ```shell
   kubectl delete -k "$DEPLOYMENT_DIR"
   ```
Uninstall Confidential Containers¶
To uninstall Confidential Containers, you can delete the deployed Helm release using the following commands:
1. List the deployed Confidential Containers Helm chart:

   ```shell
   export HELM_COCO_CHART_NAME=$(helm list -n coco-system --short)
   ```

2. Delete the Confidential Containers Helm chart:

   ```shell
   helm uninstall ${HELM_COCO_CHART_NAME} --namespace coco-system
   ```

3. Delete the Confidential Containers namespace:

   ```shell
   kubectl delete namespace coco-system
   ```
Demo Workload Deployment¶
In this chapter, we present how workloads can be deployed as Kubernetes pods with gradually increasing security levels:
- Regular Kubernetes pod.
- Pod isolated by Kata Containers.
- Pod isolated by Kata Containers and protected by Intel TDX.
- Pod isolated by Kata Containers, protected with Intel TDX, and Quote verified using a KBS with an attestation service.
For now, we use nginx as a workload example. Further workloads might be added later.
Disclaimer
The provided deployment files are for demonstration purposes only and are intended for development environments. You are responsible for properly setting up your production environment.
nginx Deployment in Pods of Increasing Security Levels¶
The following subsections describe how to deploy nginx in pods with the gradually increasing security levels listed in the introduction. Finally, we will provide instructions on how to clean up all pods.
Regular Kubernetes Pod¶
To start nginx in a regular Kubernetes pod and to verify the cluster setup, perform the following steps:
1. Save the provided pod configuration as `nginx.yaml`:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx
   spec:
     containers:
     - name: nginx
       image: nginx:1.27.4
       ports:
       - containerPort: 80
       imagePullPolicy: Always
   ```

2. Start nginx:

   ```shell
   kubectl apply -f nginx.yaml
   ```

3. Check the pod status:

   ```shell
   kubectl get pods
   ```

   Expected output:

   ```text
   NAME    READY   STATUS    RESTARTS   AGE
   nginx   1/1     Running   (...)
   ```
Pod isolated by Kata Containers¶
To isolate nginx in a Kata Containers pod and to verify that the Kata Containers runtime is working, perform the following steps:

1. Save the provided pod configuration as `nginx-vm.yaml`:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx-vm
   spec:
     runtimeClassName: kata-qemu
     containers:
     - name: nginx
       image: nginx:1.27.4
       ports:
       - containerPort: 80
       imagePullPolicy: Always
   ```

   Compared to the last security level, the only differences in the pod configuration are the pod name and the usage of `kata-qemu` as the runtime class.

2. Start nginx:

   ```shell
   kubectl apply -f nginx-vm.yaml
   ```

3. Check the pod status:

   ```shell
   kubectl get pods
   ```

   Expected output:

   ```text
   NAME       READY   STATUS    RESTARTS   AGE
   nginx-vm   1/1     Running   (...)
   ```
Pod Isolated by Kata Containers and Protected by Intel TDX¶
To isolate nginx using Kata Containers and to protect it using Intel TDX, perform the following steps:

1. Save the provided pod configuration as `nginx-td.yaml`:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx-td
     annotations:
       io.containerd.cri.runtime-handler: kata-qemu-tdx
   spec:
     runtimeClassName: kata-qemu-tdx
     containers:
     - name: nginx
       image: nginx:1.27.4
       ports:
       - containerPort: 80
       imagePullPolicy: Always
   ```

   Compared to the last security level, the differences in the pod configuration are the pod name, the additional `io.containerd.cri.runtime-handler` annotation, and the usage of `kata-qemu-tdx` as the runtime class.

2. Start nginx:

   ```shell
   kubectl apply -f nginx-td.yaml
   ```

3. Check the pod status:

   ```shell
   kubectl get pods
   ```

   Expected output for success:

   ```text
   NAME       READY   STATUS    RESTARTS   AGE
   nginx-td   1/1     Running   (...)
   ```

   In case the pod is not in the `Running` state, refer to the Troubleshooting section.
Pod Isolated by Kata Containers, Protected with Intel TDX, and Quote Verified using a KBS with an attestation service¶
Finally, we explore how to isolate nginx using Kata Containers, how to protect nginx using Intel TDX, and how to verify the nginx deployment using attestation - for which we use the attestation service integration into the Confidential Containers KBS.
To deploy and verify a protected nginx, follow the steps below:
1. Create the pod's YAML file `nginx-td-attestation.yaml` for this setup:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: nginx-td-attestation
     annotations:
       io.containerd.cri.runtime-handler: kata-qemu-tdx
       io.katacontainers.config.hypervisor.kernel_params: "agent.guest_components_rest_api=all agent.aa_kbc_params=cc_kbc::${KBS_ADDRESS}"
   spec:
     runtimeClassName: kata-qemu-tdx
     initContainers:
     - name: init-attestation
       image: storytel/alpine-bash-curl:latest
       command: ["/bin/sh","-c"]
       args:
         - |
           echo starting;
           (curl http://127.0.0.1:8006/aa/token\?token_type\=kbs | grep -iv "get token failed" | grep -iv "error" | grep -i token && echo "ATTESTATION COMPLETED SUCCESSFULLY") || (echo "ATTESTATION FAILED" && exit 1);
     containers:
     - name: nginx
       image: nginx:1.27.4
       ports:
       - containerPort: 80
       imagePullPolicy: Always
   ```

   Compared to the last security level, the differences in the pod configuration are:

   - The name of the pod.
   - An additional annotation to enable the attestation component in Kata Containers.
   - An additional `init` container to trigger the attestation and ensure that the nginx container is started only if the attestation is successful.

   For details about the used and other available parameters, refer to the Kata Containers and Confidential Containers documentation.
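The success check in the `init-attestation` container is plain shell: it filters the token endpoint's response through a few `grep` stages and only prints the success marker if a line mentioning a token survives the error filters. The same pipeline can be exercised offline with a canned response (the token values below are hypothetical placeholders):

```shell
# Canned, hypothetical response standing in for the curl call to the
# attestation-agent token endpoint inside the init container.
response='{"token":"<TOKEN>","tee_keypair":"<TEE_KEYPAIR>"}'

# Same filter chain as in the pod spec: drop failure/error lines,
# require a line mentioning 'token', then report success or failure.
(echo "$response" | grep -iv "get token failed" | grep -iv "error" | grep -i token \
  && echo "ATTESTATION COMPLETED SUCCESSFULLY") \
  || (echo "ATTESTATION FAILED" && exit 1)
```

With this input the token line passes every filter, so the pipeline prints the response followed by `ATTESTATION COMPLETED SUCCESSFULLY`; a response containing "error" or "get token failed" would instead trigger the failure branch and a non-zero exit, which is what stops the nginx container from starting.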
2. Set the KBS address to be used during deployment:

   ```shell
   export KBS_ADDRESS=http://$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')
   ```

3. Start nginx using the exported `KBS_ADDRESS`:

   ```shell
   envsubst < nginx-td-attestation.yaml | kubectl apply -f -
   ```
Check the pod status:
kubectl get pods-
If the output reports the pod in the
Runningstate, it means that Intel TDX attestation completed successfully:NAME READY STATUS RESTARTS AGE nginx-td-attestation 1/1 Running (...) -
If the pod is not in
Runningstate for a few minutes, you can review the attestation logs to identify the issue:kubectl logs pod/nginx-td-attestation -c init-attestation -
Expected output for success:
starting (...) {"token":"<TOKEN>","tee_keypair":"<TEE_KEYPAIR>"} ATTESTATION COMPLETED SUCCESSFULLYIn case of attestation failure, refer to the troubleshooting section.
-
Cleanup All Pods¶
Warning
If necessary, back up your work before proceeding with the cleanup.
To remove the deployed pods from the Kubernetes cluster, execute the following commands:

```shell
kubectl delete -f nginx.yaml
kubectl delete -f nginx-vm.yaml
kubectl delete -f nginx-td.yaml
kubectl delete -f nginx-td-attestation.yaml
```
Additional features¶
Refer to the features section in the official Confidential Containers guide for additional features such as authenticated registries, encrypted images, and more.
Troubleshooting¶
This section provides instructions on troubleshooting common issues that may arise during the deployment of workload applications in a Kubernetes cluster, protected with Intel TDX and verified using attestation.
If the guidance below does not resolve your issue, refer to the Confidential Containers Troubleshooting Guide for more information.
Pods Failed to Start¶
This section provides guidance on how to resolve the issue of pods failing to start due to a missing parent snapshot. Such a problem might occur when containerd's Nydus Snapshotter plugin fails to clean up images correctly.
To see if your pod is affected by this issue, run the following command:

```shell
kubectl describe pod <pod name>
```

Problems with containerd's Nydus Snapshotter plugin are indicated by error messages like the following:

```text
failed to create containerd container: create snapshot: missing parent \"k8s.io/2/sha256:961e...\" bucket: not found
```

```text
failed to create containerd container: error unpacking image: failed to extract layer sha256:<hash1>: failed to get reader from content store: content digest sha256:<hash2>: not found
```

```text
Error: failed to create containerd container: error unpacking image: failed to extract layer sha256:<SHA>: failed to get reader from content store: content digest sha256:<SHA>: not found
```
To resolve the issue, try the following procedure:
1. Remove your pod:

   ```shell
   kubectl delete pod <pod name>
   ```

2. Clear the Kubernetes image cache:

   ```shell
   # Remove the cache for the image causing problems
   sudo crictl rmi <image name with tag>
   # Remove all unused cached images
   sudo crictl rmi --prune
   ```

3. Remove all data collected by containerd's Nydus Snapshotter plugin:

   ```shell
   sudo ctr -n k8s.io images rm $(sudo ctr -n k8s.io images ls -q)
   sudo ctr -n k8s.io content rm $(sudo ctr -n k8s.io content ls -q)
   sudo ctr -n k8s.io snapshots rm $(sudo ctr -n k8s.io snapshots --snapshotter nydus ls | awk 'NR>1 {print $1}')
   ```

4. Disable the optimized disk usage feature in containerd:

   ```shell
   sudo sed -i 's/discard_unpacked_layers = true/discard_unpacked_layers = false/' /etc/containerd/config.toml
   sudo grep discard_unpacked_layers /etc/containerd/config.toml
   sudo systemctl restart containerd
   ```

5. Re-deploy the Confidential Containers runtime classes using simplified commands based on the Install Confidential Containers instructions:

   ```shell
   helm uninstall coco --namespace coco-system
   helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
     --namespace coco-system \
     --create-namespace
   ```

6. Re-deploy your pod:

   ```shell
   kubectl apply -f <pod yaml>
   ```
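The `discard_unpacked_layers` change above is a plain text substitution in containerd's config. Before touching the real `/etc/containerd/config.toml`, the effect can be previewed on a throwaway copy; the fragment below is a hypothetical, minimal stand-in for the actual file:

```shell
# Hypothetical, minimal fragment of a containerd config.toml
cat > /tmp/containerd-config-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd]
  discard_unpacked_layers = true
EOF

# Same substitution as in the troubleshooting step, on the demo copy:
sed -i 's/discard_unpacked_layers = true/discard_unpacked_layers = false/' /tmp/containerd-config-demo.toml
grep discard_unpacked_layers /tmp/containerd-config-demo.toml
# prints:   discard_unpacked_layers = false
```

Setting the flag to `false` keeps unpacked image layers in containerd's content store, which avoids the "content digest ... not found" errors at the cost of extra disk usage.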
Attestation Failure¶
This section pinpoints the most common reasons for attestation failure and provides guidance on how to resolve them.
An attestation failure is indicated by the pod being in the `Init:Error` state and an `ATTESTATION FAILED` message being present in the pod's logs.
Note
The example outputs presented below might differ from your output due to different pod/deployment names or different IP addresses.
To identify if you encounter an attestation failure, follow the steps below:
1. Retrieve the status of the `nginx-td-attestation` pod:

   ```shell
   kubectl get pods
   ```

   Sample output with the nginx-td-attestation pod in the `Init:Error` state:

   ```text
   NAME                   READY   STATUS       RESTARTS   AGE
   nginx-td-attestation   0/1     Init:Error   0          1m
   ```

2. Get the logs of the `init-attestation` container in the `nginx-td-attestation` pod:

   ```shell
   kubectl logs pod/nginx-td-attestation -c init-attestation
   ```

   Sample output containing the `ATTESTATION FAILED` message:

   ```text
   starting
   (...)
   ATTESTATION FAILED
   ```
In case of attestation failure, follow the steps below to troubleshoot the issue:
1. If you have configured KBS with Intel® Trust Authority, check that the API key is correct and that KBS was deployed with this value:

   ```shell
   kubectl describe configmap kbs-config -n coco-tenant | grep -i api_key
   ```

   Expected output:

   ```text
   api_key = "<YOUR_ITA_API_KEY>"
   ```

2. Check that the KBS pod is running and accessible:

   ```shell
   echo $(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')
   ```

   Expected output:

   ```text
   <address>:<port>
   ```

3. Check the KBS logs for any errors:

   ```shell
   kubectl logs deploy/kbs -n coco-tenant
   ```

   An `HTTP 400 Bad Request` error might suggest that the platform is not registered correctly. Refer to the platform registration section of the Intel TDX Enabling Guide for details.

4. Check for errors in the Intel PCCS service:

   ```shell
   systemctl status pccs
   ```

   Use the following command to get more logs:

   ```shell
   sudo journalctl -u pccs
   ```

5. Check for errors in the Intel TDX Quote Generation Service:

   ```shell
   systemctl status qgsd
   ```

   Use the following command to get more logs:

   ```shell
   sudo journalctl -u qgsd
   ```

   The following error occurs if the platform is not registered correctly:

   ```text
   [QPL] No certificate data for this platform.
   ```

   Refer to the platform registration section of the Intel TDX Enabling Guide for details.