Intel Confidential Containers Guide
Introduction¶
Kubernetes is a popular open-source platform for automating deployment, scaling, and managing containerized applications (pods). The Confidential Containers (CoCo) open-source project aims to establish a standardized approach to Confidential Computing within Kubernetes pods. It utilizes the power of Trusted Execution Environments (TEEs), such as Intel® TDX, to deploy secure containerized applications without requiring in-depth understanding of Confidential Computing technology.
Intended audience¶
This guide is intended for engineers and technical staff from Cloud Service Providers (CSPs), System Integrators (SIs), and on-premises enterprises involved in cloud feature integration, as well as cloud guest users (i.e., end users).
About this guide¶
This guide provides step-by-step instructions on configuring Confidential Containers on an Ubuntu 24.04 system within a Kubernetes environment. It is intended as a quick start for deploying Intel TDX-protected applications in a Kubernetes cluster, so that you can implement this technology in your own environment.
We assume that you have basic knowledge about Kubernetes concepts and that a Kubernetes cluster is already set up and running. Refer to the Kubernetes documentation for more information on setting up a Kubernetes cluster. We tested the guide on a single-node Kubernetes cluster. There might be some differences in the steps if you are using a multi-node cluster.
This guide also assumes that you have already enabled and configured Intel® TDX on each platform you wish to use as a worker node for your Kubernetes cluster. The master node (also known as the control plane) does not need to have Intel® TDX enabled. All provided steps should be executed on the master node unless specified otherwise.
Intel TDX Enabling
The Intel TDX Enabling Guide referred to above does not yet fully cover Ubuntu 24.04. For additional details, refer to Canonical's guide to configure Intel TDX. In particular, the remote attestation chapter provides details about configuring remote attestation.
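As a quick, hedged sanity check on each worker node (it does not replace the enabling guides above), you can look for Intel TDX initialization messages from the host kernel; the exact message text and parameter paths vary with the kernel version:

# Host-side check that the kernel initialized the TDX module (wording varies by kernel):
sudo dmesg | grep -i tdx
# On kernels with TDX support in KVM, this parameter is typically exposed:
cat /sys/module/kvm_intel/parameters/tdx 2>/dev/null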
This guide is divided into the following sections:
- Infrastructure Setup: This section provides instructions on setting up the infrastructure in an existing Kubernetes cluster to be able to run Intel TDX-protected applications.
- Demo Workload Deployment: This section provides instructions on deploying a sample Intel TDX-protected application in the configured Kubernetes cluster.
- Troubleshooting: This section provides instructions on troubleshooting common issues that may arise following the steps in this guide.
Scope¶
This guide covers the following operating system:
- Ubuntu 24.04
The guide was tested on the following hardware:
- 4th Gen Intel® Xeon® Scalable processors
- 5th Gen Intel® Xeon® Scalable processors
Further reading¶
For more information, refer to the documentation of the projects mentioned in this guide, in particular Confidential Containers, Kata Containers, Intel® TDX, and Intel Trust Authority.
Infrastructure Setup¶
On this page, we will set up the infrastructure required to run Confidential Containers with Intel® Trust Domain Extensions (Intel® TDX) in a Kubernetes environment. This chapter is intended for the administrator of the Kubernetes cluster.
In detail, we cover the following tasks:
- Prerequisites: We introduce the necessary prerequisites that we assume for the infrastructure setup.
- Install Confidential Containers Operator: We explore how to deploy Kata Containers, a lightweight container runtime, to allow running containers as lightweight VMs, or as VMs with Intel TDX protection (i.e., TDs). We achieve this by installing the Confidential Containers operator, which provides a means to deploy and manage the Confidential Containers Runtime on a Kubernetes cluster.
- Install Attestation Components: We discuss how to deploy attestation components that ensure that the pods are running the expected workloads, that the pods are protected by Intel TDX on a genuine Intel platform, that the platform is patched to a certain level, and that certain other security-relevant information is as expected. As an example, we show how to integrate Intel® Trust Authority capabilities into the Confidential Containers Key Broker Service (KBS).
- Cleanup: We provide commands to remove the deployed components step by step from the Kubernetes cluster.
Prerequisites¶
This section describes the prerequisites that we assume for the following steps regarding installed software and access to an Intel Trust Authority API Key.
Installed Software¶
Ensure that your infrastructure meets the following requirements; commands to verify them are shown below:
- Kubernetes 1.30.3 or newer.
- A Kubernetes cluster with at least one node, serving as both master and worker node.
- containerd 1.7.12 or newer.
- Worker nodes configured on registered Intel platforms with Intel TDX Module version 1.5.

Intel TDX Enabling

The platform registration referred to above is not yet fully covered for Ubuntu 24.04. For additional details, refer to Canonical's guide to configure Intel TDX. In particular, the remote attestation chapter provides details about configuring remote attestation.
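The software prerequisites listed above can be checked with standard commands; this is a minimal sketch and the output format may differ between releases:

kubectl version                 # Kubernetes client and server versions
containerd --version            # containerd version on the node
kubectl get nodes -o wide       # node status and configured container runtime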
Intel Trust Authority API Key¶
To enable remote attestation of applications as explained in the following chapter, you need to have access to an Intel Trust Authority API Key (later referred to as ITA_API_KEY).
If you do not yet have such a key, you will find instructions on the Intel Trust Authority website. In particular, you will find the option to start a free trial.
Install Confidential Containers Operator¶
In this section, we will deploy all required components to run containers as lightweight VMs with Intel TDX protection (i.e., TDs). In particular, we install the Confidential Containers operator, which is used to deploy and manage the Confidential Containers Runtime on Kubernetes clusters. For more details, see the complete instructions in the CoCo Operator Quick Start.
Steps:
- Ensure your cluster's node is labeled:

kubectl label node $(kubectl get nodes | awk 'NR!=1 { print $1 }') node.kubernetes.io/worker=
- Set the environment variable OPERATOR_RELEASE_VERSION to the version of the Confidential Containers operator that you want to use. All available versions can be found on the corresponding GitHub page. Note that we tested this guide with version v0.10.0.

export OPERATOR_RELEASE_VERSION=v0.10.0
- Deploy the Confidential Containers operator:

kubectl apply -k github.com/confidential-containers/operator/config/release?ref=$OPERATOR_RELEASE_VERSION
- Create Confidential Containers related runtime classes:

kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=$OPERATOR_RELEASE_VERSION
If your cluster is behind a proxy, set the following environment variables and create the runtime classes with a patched overlay instead:
- https_proxy: set to your proxy URL.
- no_proxy: set to exclude traffic from using the proxy.

mkdir -p /tmp/proxy-overlay; \
pushd /tmp/proxy-overlay
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=$OPERATOR_RELEASE_VERSION
patches:
- patch: |-
    - op: add
      path: /spec/config/environmentVariables/-
      value:
        name: AGENT_HTTPS_PROXY
        value: ${https_proxy}
    - op: add
      path: /spec/config/environmentVariables/-
      value:
        name: AGENT_NO_PROXY
        value: ${no_proxy}
  target:
    kind: CcRuntime
    name: ccruntime-sample
EOF
popd
kubectl apply -k /tmp/proxy-overlay
rm -rf /tmp/proxy-overlay
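If you applied the proxy overlay, you can optionally confirm that the proxy variables were added to the CcRuntime resource. This is a hedged check under the assumption that the operator registers the resource kind as ccruntime and that the sample is named ccruntime-sample, as in the overlay above:

kubectl get ccruntime ccruntime-sample -o yaml | grep -A 1 "AGENT_"
# Expect to see AGENT_HTTPS_PROXY and AGENT_NO_PROXY with the values you configured.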
- Wait until Confidential Containers operator pods are ready:

kubectl -n confidential-containers-system wait --for=condition=Ready pods --all --timeout=5m
Expected output:
pod/cc-operator-controller-manager-b6dcb65fb-7lmz8 condition met
pod/cc-operator-daemon-install-2n6sq condition met
pod/cc-operator-pre-install-daemon-9xvzf condition met
- Check that the Confidential Containers runtime classes exist:

kubectl get runtimeclass | grep -i kata
Expected output:
kata                 kata-qemu            12s
kata-clh             kata-clh             12s
kata-qemu            kata-qemu            12s
kata-qemu-coco-dev   kata-qemu-coco-dev   12s
kata-qemu-sev        kata-qemu-sev        12s
kata-qemu-snp        kata-qemu-snp        12s
kata-qemu-tdx        kata-qemu-tdx        12s
Install Attestation Components¶
In this section, we explore how to deploy attestation components that ensure that the pods are running the expected workloads, that the pods are protected by Intel TDX on a genuine Intel platform, that the platform is patched to a certain level, and that certain other security relevant information is as expected. As an example, we show how to integrate Intel® Trust Authority capabilities into the Confidential Containers Key Broker Service (KBS). Note that the Confidential Containers KBS also works with other verification backends, e.g., Intel DCAP.
Steps:
- Clone the Confidential Containers Trustee repository using the following commands. Note that this guide was tested with version v0.10.1, but newer versions might be available.

git clone -b v0.10.1 https://github.com/confidential-containers/trustee
cd trustee/kbs/config/kubernetes/
- Configure Key Broker Service (KBS):

  - To configure the Key Broker Service to use Intel Trust Authority as an attestation service, set the environment variable DEPLOYMENT_DIR to ita:

export DEPLOYMENT_DIR=ita
  - Set your Intel Trust Authority (ITA) API Key in the KBS configuration:

sed -i 's/api_key =.*/api_key = "'${ITA_API_KEY}'"/g' $DEPLOYMENT_DIR/kbs-config.toml
  - Update your secret key that is required during deployment:

echo "This is my super secret" > overlays/$(uname -m)/key.bin
  Configure KBS behind a proxy

  If your network requires the usage of a proxy to access the Intel® Trust Authority service, you may need to set the https_proxy environment variable in the KBS deployment. This can be done with the following command:

sed -i 's/^\(\s*\)volumes:/\1  env:\n\1  - name: https_proxy\n\1    value: "'"$https_proxy"'"\n\1volumes:/' base/deployment.yaml
- Deploy Key Broker Service:

./deploy-kbs.sh
Validate that the KBS pod is running:

kubectl get pods -n coco-tenant

Expected output:

NAME                   READY   STATUS    RESTARTS   AGE
kbs-5f4696986b-64ljx   1/1     Running   0          12s
- Retrieve KBS_ADDRESS for future use in the pod's yaml file:

export KBS_ADDRESS=http://$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')
echo $KBS_ADDRESS

Expected output:

<protocol>://<address>:<port>

For example:

http://192.168.0.1:32556
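Optionally, you can verify that the KBS NodePort is reachable before moving on. This is a hedged check; the HTTP status code returned for the root path may differ between Trustee versions, but any HTTP response indicates that the service is reachable:

curl -s -o /dev/null -w "%{http_code}\n" "$KBS_ADDRESS"
# A connection error or timeout indicates a networking problem rather than an attestation issue.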
Now you can proceed to the next chapter to deploy your pod; see Demo Workload Deployment.
Cleanup¶
This section provides commands to remove the deployed components step by step from the Kubernetes cluster. First uninstall the Key Broker Service, then uninstall the Confidential Containers Operator.
Uninstall Key Broker Service¶
Run the following command from the trustee/kbs/config/kubernetes/ directory used during installation, with the DEPLOYMENT_DIR environment variable still set:

kubectl delete -k "$DEPLOYMENT_DIR"
Uninstall Confidential Containers Operator¶
- Set the environment variable OPERATOR_RELEASE_VERSION to the installed operator version:

export OPERATOR_RELEASE_VERSION=v0.10.0
- Delete Confidential Containers-related runtime classes:

kubectl delete -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=$OPERATOR_RELEASE_VERSION
- Delete the Confidential Containers operator:

kubectl delete -k github.com/confidential-containers/operator/config/release?ref=$OPERATOR_RELEASE_VERSION
Demo Workload Deployment¶
In this chapter, we will present how workloads can be deployed as a Kubernetes pod with gradually increasing security levels:
- Regular Kubernetes pod.
- Pod isolated by Kata Containers.
- Pod isolated by Kata Containers and protected by Intel TDX.
- Pod isolated by Kata Containers, protected with Intel TDX, and Quote verified using Intel Trust Authority.
For now, we use nginx as a workload example. Further workloads might be added later.
Disclaimer
The provided deployment files are for demonstration purposes only and are intended for use in a development environment. You are responsible for properly setting up your production environment.
nginx Deployment in Pods of Increasing Security Levels¶
The following subsections describe how to deploy nginx in pods with the gradually increasing security levels listed in the introduction. Finally, we will provide instructions on how to clean up all pods.
Regular Kubernetes Pod¶
To start nginx in a regular Kubernetes pod and to verify the cluster setup, perform the following steps:
- Save the provided pod configuration as nginx.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.27.0
    ports:
    - containerPort: 80
- Start nginx:

kubectl apply -f nginx.yaml
- Check the pod status:

kubectl get pods

Expected output:

NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   (...)
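As an optional extra check (not part of the original steps), you can port-forward to the pod and fetch the nginx welcome page to confirm the workload is serving traffic:

kubectl port-forward pod/nginx 8080:80 &     # forward local port 8080 to the pod in the background
sleep 2
curl -s http://localhost:8080 | head -n 5    # should print the beginning of the nginx welcome page
kill %1                                      # stop the port-forward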
Pod isolated by Kata Containers¶
To isolate nginx using a Kata Containers pod and to verify that the Kata Containers runtime is working, perform the following steps:
- Save the provided pod configuration as nginx-vm.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-vm
spec:
  runtimeClassName: kata-qemu
  containers:
  - name: nginx
    image: nginx:1.27.0
    ports:
    - containerPort: 80
Compared to the last security level, the only differences in the pod configuration are the pod name and the use of kata-qemu as the runtime class.

- Start nginx:

kubectl apply -f nginx-vm.yaml
- Check the pod status:

kubectl get pods

Expected output:

NAME       READY   STATUS    RESTARTS   AGE
nginx-vm   1/1     Running   (...)
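Optionally, you can confirm the VM-level isolation by comparing kernel versions: with the kata-qemu runtime class, the pod runs on the Kata guest kernel, which normally differs from the host kernel. This is a hedged check run on the worker node:

uname -r                              # kernel of the worker node (host)
kubectl exec nginx-vm -- uname -r     # kernel inside the Kata VM; usually a different version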
Pod Isolated by Kata Containers and Protected by Intel TDX¶
To isolate nginx using Kata Containers and to protect it using Intel TDX, perform the following steps:
- Save the provided pod configuration as nginx-td.yaml for this setup:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-td
spec:
  runtimeClassName: kata-qemu-tdx
  containers:
  - name: nginx
    image: nginx:1.27.0
    ports:
    - containerPort: 80
Compared to the last security level, the only differences in the pod configuration are the pod name and the use of kata-qemu-tdx as the runtime class.

- Start nginx:

kubectl apply -f nginx-td.yaml
- Check the pod status:

kubectl get pods

Expected output for success:

NAME       READY   STATUS    RESTARTS   AGE
nginx-td   1/1     Running   (...)

In case the pods are not in the Running state, refer to the Troubleshooting section.
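Optionally, you can check from inside the pod that it really runs in a TD. On recent guest kernels the tdx_guest CPU flag is listed in /proc/cpuinfo; this is a hedged check, as flag naming may differ between kernel versions:

kubectl exec nginx-td -- grep -m1 -o tdx_guest /proc/cpuinfo \
  && echo "pod is running inside an Intel TDX guest" \
  || echo "tdx_guest flag not found"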
Pod Isolated by Kata Containers, Protected with Intel TDX, and Quote Verified using Intel Trust Authority¶
Finally, we explore how to isolate nginx using Kata Containers, how to protect nginx using Intel TDX, and how to verify the nginx deployment using attestation - for which we use the Intel® Trust Authority integration into the Confidential Containers KBS.
To deploy and verify a protected nginx, follow the steps below:
- Set the KBS address to be used in the pod's yaml file:

export KBS_ADDRESS=http://$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')

- Create the pod's yaml file nginx-td-attestation.yaml for this setup:

cat <<EOF > nginx-td-attestation.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-td-attestation
  annotations:
    io.katacontainers.config.hypervisor.kernel_params: "agent.guest_components_rest_api=all agent.aa_kbc_params=cc_kbc::${KBS_ADDRESS}"
spec:
  runtimeClassName: kata-qemu-tdx
  initContainers:
  - name: init-attestation
    image: storytel/alpine-bash-curl:latest
    command: ["/bin/sh","-c"]
    args:
    - |
      echo starting;
      (curl http://127.0.0.1:8006/aa/token\?token_type\=kbs | grep -iv "get token failed" | grep -iv "error" | grep -i token && echo "ATTESTATION COMPLETED SUCCESSFULLY") || (echo "ATTESTATION FAILED" && exit 1);
  containers:
  - name: nginx
    image: nginx:1.27.0
    ports:
    - containerPort: 80
EOF
Compared to the last security level, the differences in the pod configuration are:
- The name of the pod.
- An additional annotation to enable the attestation component in Kata Containers.
- An additional init container to trigger the attestation and to ensure that the nginx container is started only if the attestation is successful.

For details about the used and available parameters, refer to the Kata Containers and Confidential Containers documentation.
- Start nginx:

kubectl apply -f nginx-td-attestation.yaml
- Check the pod status:

kubectl get pods
- If the output reports the pod in the Running state, it means that Intel TDX attestation completed successfully:

NAME                   READY   STATUS    RESTARTS   AGE
nginx-td-attestation   1/1     Running   (...)

- If the pod is not in the Running state after a few minutes, you can review the attestation logs to identify the issue:

kubectl logs pod/nginx-td-attestation -c init-attestation
Expected output for success:

starting
(...)
{"token":"<TOKEN>","tee_keypair":"<TEE_KEYPAIR>"}
ATTESTATION COMPLETED SUCCESSFULLY

In case of attestation failure, refer to the Troubleshooting section.
Cleanup All Pods¶
Warning
If necessary, back up your work before proceeding with the cleanup.
To remove the deployed components from the Kubernetes cluster, execute the following commands to remove the pods:
kubectl delete -f nginx.yaml
kubectl delete -f nginx-vm.yaml
kubectl delete -f nginx-td.yaml
kubectl delete -f nginx-td-attestation.yaml
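Alternatively, the pods can be removed with a single command; the --ignore-not-found flag avoids errors for pods you did not deploy:

kubectl delete --ignore-not-found -f nginx.yaml -f nginx-vm.yaml -f nginx-td.yaml -f nginx-td-attestation.yaml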
Troubleshooting¶
This section provides instructions on troubleshooting common issues that may arise during the deployment of workload applications in a Kubernetes cluster, protected with Intel TDX and verified using attestation.
Pods Failed to Start¶
This section provides guidance on how to resolve the issue when pods fail to start due to a missing parent snapshot. Such a problem might occur when containerd's Nydus Snapshotter plugin fails to clean up images correctly.
To see if your pod is affected by this issue, run the following command:
kubectl describe pod nginx-td-attestation
An error with containerd's Nydus Snapshotter plugin is indicated by an error message like the following:
failed to create containerd container: create snapshot: missing parent \"k8s.io/2/sha256:961e...\" bucket: not found
To resolve the issue, try the following procedure:
- Uninstall Confidential Containers Operator as described in the uninstall Confidential Containers Operator section.
- Remove all data collected by containerd's Nydus Snapshotter plugin:

sudo ctr -n k8s.io images rm $(sudo ctr -n k8s.io images ls -q)
sudo ctr -n k8s.io content rm $(sudo ctr -n k8s.io content ls -q)
sudo ctr -n k8s.io snapshots rm $(sudo ctr -n k8s.io snapshots ls | awk 'NR>1 {print $1}')

- Re-install Confidential Containers Operator using the instructions provided in the install Confidential Containers Operator section.
- Re-deploy your workloads.
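If pods still fail to start after re-deployment, restarting containerd so that the snapshotter state is reloaded may help. This is an optional extra step, not part of the original procedure:

sudo systemctl restart containerd
sudo systemctl status containerd --no-pager   # confirm the service is active again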
Attestation Failure¶
This section pinpoints the most common reasons for attestation failure and provides guidance on how to resolve them.
An attestation failure is indicated by the pod being in the Init:Error state and the ATTESTATION FAILED message being present in the pod's logs.
Note
The example outputs presented below might differ from your output because of different pod/deployment names or different IP addresses.
To identify if you encounter an attestation failure, follow the steps below:
- Retrieve the status of the nginx-td-attestation pod:

kubectl get pods

Sample output with the nginx-td-attestation pod in the Init:Error state:

NAME                   READY   STATUS       RESTARTS   AGE
nginx-td-attestation   0/1     Init:Error   0          1m

- Get the logs of the init-attestation container in the nginx-td-attestation pod:

kubectl logs pod/nginx-td-attestation -c init-attestation

Sample output indicating the ATTESTATION FAILED message:

starting
(...)
ATTESTATION FAILED
In case of attestation failure, follow the steps below to troubleshoot the issue:
- Check if the Intel® Trust Authority API key is correct and KBS was deployed with this value:

kubectl describe configmap kbs-config -n coco-tenant | grep -i api_key
Expected output:
api_key = "<YOUR_ITA_API_KEY>"
- Check if the KBS pod is running and accessible:

echo $(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')

Expected output:

<address>:<port>
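You can additionally probe the address with curl; this is a hedged check, as the exact HTTP status code may vary, but any HTTP response shows that the KBS service is reachable over the network:

curl -s -o /dev/null -w "%{http_code}\n" http://$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get svc kbs -n coco-tenant -o jsonpath='{.spec.ports[0].nodePort}')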
- Check KBS logs for any errors:

kubectl logs pod/kbs-85b8548d76-k7pcj -n coco-tenant

An HTTP 400 Bad Request error might suggest that the platform is not registered correctly. Refer to the platform registration section of the Intel TDX Enabling Guide for details.

- Check for errors in the Intel PCCS service:
systemctl status pccs
Use the following command to get more logs:
sudo journalctl -u pccs
- Check for errors in the Intel TDX Quote Generation Service:

systemctl status qgsd
Use the following command to get more logs:
sudo journalctl -u qgsd
The following error occurs if the platform is not registered correctly.
[QPL] No certificate data for this platform.
Refer to the platform registration section of the Intel TDX Enabling Guide for details.
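If qgsd reports missing certificate data, it can also be worth confirming that the quote generation stack points at your PCCS instance. The Quote Provider Library configuration typically lives in /etc/sgx_default_qcnl.conf, although the location and key names may differ between DCAP versions:

grep -i pccs_url /etc/sgx_default_qcnl.conf   # should show the URL of your PCCS, e.g. https://localhost:8081/sgx/certification/v4/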