docs: add release v2.6.0

derpsteb 2023-03-08 15:05:59 +00:00 committed by Thomas Tendyck
parent 8c87bba755
commit 02694c0648
58 changed files with 7275 additions and 0 deletions

@@ -0,0 +1,43 @@
# Feature status of clouds
What works on which cloud? Currently, Confidential VMs (CVMs) are available on the different clouds and software stacks with varying quality.
For Constellation, the ideal environment provides the following:
1. Ability to run arbitrary software and images inside CVMs
2. CVMs based on AMD SEV-SNP (available in EPYC CPUs since the Milan generation) or, in the future, Intel TDX (available in Xeon CPUs from the Sapphire Rapids generation onward)
3. Ability for CVM guests to obtain raw attestation statements directly from the CPU, ideally via a TPM-like interface (see the sketch after the table below)
4. Reviewable, open-source firmware inside CVMs
(1) is a functional must-have. (2)--(4) are required for remote attestation that fully keeps the infrastructure/cloud out. Constellation can work without them or with approximations, but then can't protect against certain privileged attackers.
The following table summarizes the state of features for different infrastructures as of September 2022.
| **Feature** | **Azure** | **GCP** | **AWS** | **OpenStack (Yoga)** |
|-------------------------------|-----------|---------|---------|----------------------|
| **1. Custom images** | Yes | Yes | No | Yes |
| **2. SEV-SNP or TDX** | Yes | No | No | Depends on kernel/HV |
| **3. Raw guest attestation** | Yes | No | No | Depends on kernel/HV |
| **4. Reviewable firmware** | No* | No | No | Depends on kernel/HV |
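To illustrate requirement (3), the following is a minimal sketch of how a guest could read a boot measurement through a TPM-like interface. It's not Constellation's attestation code: it assumes a Linux CVM that exposes a (v)TPM at `/dev/tpmrm0` and uses the go-tpm library's legacy `tpm2` package (located at `github.com/google/go-tpm/tpm2` in older releases).

```go
package main

import (
	"fmt"

	// Assumption: go-tpm with the legacy package layout; older releases
	// import "github.com/google/go-tpm/tpm2" instead.
	"github.com/google/go-tpm/legacy/tpm2"
)

func main() {
	// Open the TPM through the Linux kernel's resource manager.
	rwc, err := tpm2.OpenTPM("/dev/tpmrm0")
	if err != nil {
		panic(err)
	}
	defer rwc.Close()

	// Read one PCR holding a boot-time measurement (index 4 is illustrative).
	pcr, err := tpm2.ReadPCR(rwc, 4, tpm2.AlgSHA256)
	if err != nil {
		panic(err)
	}
	fmt.Printf("PCR[4] = %x\n", pcr)
}
```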
## Microsoft Azure
With its [CVM offering](https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview), Azure provides the best foundations for Constellation. Regarding (3), Azure provides direct access to remote-attestation statements. However, regarding (4), the standard CVMs still include closed-source firmware running in VM Privilege Level (VMPL) 0. This firmware is signed by Azure. The signature is reflected in the remote-attestation statements of CVMs. Thus, the Azure closed-source firmware becomes part of Constellation's trusted computing base (TCB).
\* Recently, Azure [announced](https://techcommunity.microsoft.com/t5/azure-confidential-computing/azure-confidential-vms-using-sev-snp-dcasv5-ecasv5-are-now/ba-p/3573747) the *limited preview* of CVMs with customizable firmware. With this CVM type, (4) switches from *No* to *Yes*. Constellation will support customizable firmware on Azure in the future.
## Google Cloud Platform (GCP)
The [CVMs available in GCP](https://cloud.google.com/compute/confidential-vm/docs/create-confidential-vm-instance) are based on AMD SEV but don't have SNP features enabled. This impacts attestation capabilities. Currently, GCP doesn't offer CVM-based attestation at all. Instead, GCP provides attestation statements based on its regular [vTPM](https://cloud.google.com/blog/products/identity-security/virtual-trusted-platform-module-for-shielded-vms-security-in-plaintext), which is managed by the hypervisor. On GCP, the hypervisor is thus currently part of Constellation's TCB.
## Amazon Web Services (AWS)
AWS currently doesn't offer CVMs. AWS's proprietary Nitro Enclaves offer some related features but [are explicitly not designed to keep AWS itself out](https://aws.amazon.com/blogs/security/confidential-computing-an-aws-perspective/). Besides, they aren't suitable for running entire Kubernetes nodes inside them. Therefore, Constellation uses regular EC2 instances on AWS [Nitro](https://aws.amazon.com/ec2/nitro/) without runtime encryption. Attestation is based on the [NitroTPM](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nitrotpm.html), which is a vTPM managed by the Nitro hypervisor. Hence, the hypervisor is currently part of Constellation's TCB.
## OpenStack
OpenStack is an open-source cloud and infrastructure management software. It's used by many smaller CSPs and datacenters. In the latest *Yoga* version, OpenStack has basic support for CVMs. However, much depends on the employed kernel and hypervisor. Features (2)--(4) are likely to be a *Yes* with Linux kernel version 6.2. Thus, going forward, OpenStack on corresponding AMD or Intel hardware will be a viable underpinning for Constellation.
## Conclusion
The different clouds and software like the Linux kernel and OpenStack are in the process of building out their support for state-of-the-art CVMs. Azure already has most features in place. For Constellation, the status quo means that the TCB has different shapes on different infrastructures. With broad SEV-SNP support coming to the Linux kernel, we expect features to normalize across infrastructures soon.

@@ -0,0 +1,42 @@
# Confidential Kubernetes
We use the term *Confidential Kubernetes* to refer to the concept of using confidential-computing technology to shield entire Kubernetes clusters from the infrastructure. The three defining properties of this concept are:
1. **Workload shielding**: the confidentiality and integrity of all workload-related data and code are enforced.
2. **Control plane shielding**: the confidentiality and integrity of the cluster's control plane, state, and workload configuration are enforced.
3. **Attestation and verifiability**: the two properties above can be verified remotely based on hardware-rooted cryptographic certificates.
Each of the above properties is equally important. Only with all three in conjunction can an entire cluster be shielded without gaps.
## Constellation security features
Constellation implements the Confidential Kubernetes concept with the following security features.
* **Runtime encryption**: Constellation runs all Kubernetes nodes inside Confidential VMs (CVMs). This gives runtime encryption for the entire cluster.
* **Network and storage encryption**: Constellation augments this with transparent encryption of the [network](../architecture/networking.md) and [persistent storage](../architecture/encrypted-storage.md). Thus, workloads and control plane are truly end-to-end encrypted: at rest, in transit, and at runtime.
* **Transparent key management**: Constellation manages the corresponding [cryptographic keys](../architecture/keys.md) inside CVMs.
* **Node attestation and verification**: Constellation verifies the integrity of each new CVM-based node using [remote attestation](../architecture/attestation.md). Only "good" nodes receive the cryptographic keys required to access the network and storage of a cluster.
* **Confidential computing-optimized images**: A node is "good" if it's running a signed Constellation [node image](../architecture/images.md) inside a CVM and is in the expected state. (Node images are hardware-measured during boot. The measurements are reflected in the attestation statements that are produced by nodes and verified by Constellation.) A simplified sketch of this check follows the list.
* **"Whole cluster" attestation**: Towards the DevOps engineer, Constellation provides a single hardware-rooted certificate from which all of the above can be verified.
With the above, Constellation wraps an entire cluster into one coherent and verifiable *confidential context*. The concept is depicted in the following diagram.
![Confidential Kubernetes](../_media/concept-constellation.svg)
## Contrast: Managed Kubernetes with CVMs
In contrast, managed Kubernetes with CVMs, as offered, for example, in [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/) and [GKE](https://cloud.google.com/kubernetes-engine), only provides runtime encryption for certain worker nodes. Here, each worker node is a separate (and typically unverified) confidential context. This provides only limited security benefits, as it only prevents direct access to a worker node's memory. The large majority of potential attacks through the infrastructure remain unaffected, including attacks through the control plane, access to external key management, and the corruption of worker node images. This leaves many problems unsolved. For instance, *Node A* has no means to verify if *Node B* is "good" and if it's OK to share data with it. Consequently, this approach leaves a large attack surface, as depicted in the following diagram.
![Concept: Managed Kubernetes plus CVMs](../_media/concept-managed.svg)
The following table highlights the key differences in terms of features.
| | Managed Kubernetes with CVMs | Confidential Kubernetes (Constellation✨) |
|-------------------------------------|------------------------------|--------------------------------------------|
| Runtime encryption | Partial (data plane only) | **Yes** |
| Node image verification | No | **Yes** |
| Full cluster attestation | No | **Yes** |
| Transparent network encryption | No | **Yes** |
| Transparent storage encryption | No | **Yes** |
| Confidential key management | No | **Yes** |
| Cloud agnostic / multi-cloud | No | **Yes** |

@@ -0,0 +1,23 @@
# License
## Source code
Constellation's source code is available on [GitHub](https://github.com/edgelesssys/constellation) under the [GNU Affero General Public License v3.0](https://github.com/edgelesssys/constellation/blob/main/LICENSE).
## Binaries
Edgeless Systems provides ready-to-use and [signed](../architecture/attestation.md#chain-of-trust) binaries of Constellation. This includes the CLI and the [node images](../architecture/images.md).
These binaries may be used free of charge within the bounds of Constellation's [**Community License**](#community-license). An [**Enterprise License**](#enterprise-license) can be purchased from Edgeless Systems.
The Constellation CLI displays relevant license information when you initialize your cluster. You are responsible for staying within the bounds of your respective license. Constellation doesn't enforce any limits so as not to endanger your cluster's availability.
### Community License
You are free to use the Constellation binaries provided by Edgeless Systems to create services for internal consumption, evaluation purposes, or non-commercial use. You must not use the Constellation binaries to provide commercial hosted services to third parties. Edgeless Systems gives no warranties and offers no support.
### Enterprise License
Enterprise Licenses don't have the above limitations and come with support and additional features. Find out more at the [product website](https://www.edgeless.systems/products/constellation/).
Once you have received your Enterprise License file, place it in your [Constellation workspace](../architecture/orchestration.md#workspaces) in a file named `constellation.license`.

@@ -0,0 +1,108 @@
# Performance
This section analyzes the performance of Constellation.
## Performance impact from runtime encryption
All nodes in a Constellation cluster run inside Confidential VMs (CVMs). Thus, Constellation's performance is directly affected by the performance of CVMs.
AMD and Azure jointly released a [performance benchmark](https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796) for CVMs based on 3rd Gen AMD EPYC processors (Milan) with SEV-SNP. With a range of mostly compute-intensive benchmarks like SPEC CPU 2017 and CoreMark, they found that CVMs only have a small (2%--8%) performance degradation compared to standard VMs. You can expect to see similar performance for compute-intensive workloads running on Constellation.
## Performance analysis with K-Bench
To assess the overall performance of Constellation, we benchmarked Constellation v2.0.0 using [K-Bench](https://github.com/vmware-tanzu/k-bench). K-Bench is a configurable framework to benchmark Kubernetes clusters in terms of storage I/O, network performance, and creating/scaling resources.
As a baseline, we compare Constellation with the non-confidential managed Kubernetes offerings on Microsoft Azure and Google Cloud Platform (GCP). These are AKS on Azure and GKE on GCP.
### Configurations
We used the following configurations for the benchmarks.
#### Constellation and GKE on GCP
- Nodes: 3
- Machines: `n2d-standard-4`
- CVM: `true`
- Zone: `europe-west3-b`
#### Constellation and AKS on Azure
- Nodes: 3
- Machines: `DC4as_v5`
- CVM: `true`
- Region: `West Europe`
- Zone: `2`
#### K-Bench
Using the default [K-Bench test configurations](https://github.com/vmware-tanzu/k-bench/tree/master/config), we ran the following tests on the clusters:
- `default`
- `dp_network_internode`
- `dp_network_intranode`
- `dp_fio`
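For reproducibility, a small driver like the following could run the same set of tests. This is a sketch under assumptions: it expects to be executed from a local checkout of K-Bench with its `run.sh` entry point, and the flags shown follow the K-Bench README at the time of writing and may differ between versions.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// The four K-Bench tests used in this benchmark.
	tests := []string{"default", "dp_network_internode", "dp_network_intranode", "dp_fio"}
	for _, test := range tests {
		// Assumption: run from a k-bench checkout; flag names may vary by version.
		cmd := exec.Command("./run.sh", "-r", "constellation-bench", "-t", test, "-o", "results/"+test)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
```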
### Results
#### Kubernetes API Latency
At its core, the Kubernetes API is the way to query and modify a cluster's state. Latency matters here. Hence, it's vital that the API latency doesn't spike even with the additional level of security that Constellation's network provides.
K-Bench's `default` test performs calls to the API to create, update, and delete cluster resources.
The three graphs below compare the API latencies (lower is better) in milliseconds for pods, services, and deployments.
![API Latency - Pods](../_media/benchmark_api_pods.png)
Pods: Except for the `Pod Update` call, Constellation is faster than AKS and GKE in terms of API calls.
![API Latency - Services](../_media/benchmark_api_svc.png)
Services: Constellation has lower latencies than AKS and GKE except for service creation on AKS.
![API Latency - Deployments](../_media/benchmark_api_dpl.png)
Deployments: Constellation has the lowest latency for all cases except for scaling deployments on GKE and creating deployments on AKS.
#### Network
There are two main indicators for network performance: intra-node and inter-node transmission speed.
K-Bench provides benchmark tests for both, configured as `dp_network_internode` and `dp_network_intranode`. The tests use [`iperf`](https://iperf.fr/) to measure the bandwidth available.
##### Inter-node
Inter-node communication is the network transmission between different Kubernetes nodes.
The first test (`dp_network_internode`) measures the throughput between nodes. Constellation's inter-node throughput ranges from around 816 Mbps on Azure to 872 Mbps on GCP. While that's faster than the average throughput of AKS at 577 Mbps, GKE provides faster networking at 9.55 Gbps.
The difference can largely be attributed to Constellation's [network encryption](../architecture/networking.md) that protects data in-transit.
##### Intra-node
Intra-node communication happens between pods running on the same node.
The connections directly pass through the node's OS layer and never hit the network.
The benchmark evaluates how [Constellation's node OS image](../architecture/images.md) and runtime encryption influence the throughput.
Constellation's bandwidth for both sending and receiving is at 31 Gbps on Azure and 22 Gbps on GCP. AKS achieves 26 Gbps and GKE achieves about 27 Gbps in the tests.
![Network benchmark graph](../_media/benchmark_net.png)
#### Storage I/O
Azure and GCP offer persistent storage for their Kubernetes services AKS and GKE via the Container Storage Interface (CSI). CSI storage in Kubernetes is available via `PersistentVolumes` (PV) and consumed via `PersistentVolumeClaims` (PVC).
Upon requesting persistent storage through a PVC, GKE and AKS will provision a PV as defined by a default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
Constellation provides persistent storage on Azure and GCP [that's encrypted on the CSI layer](../architecture/encrypted-storage.md).
Similarly, Constellation will provision a PV via a default storage class upon a PVC request.
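As a point of reference, the following Go sketch requests such a volume with client-go. The claim name, size, and namespace are illustrative; because it relies on the cluster's default storage class, the same request works unchanged on Constellation, AKS, or GKE. (Field types may differ slightly between client-go versions.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A PVC without an explicit storageClassName uses the default storage
	// class; on Constellation, the resulting PV is encrypted at the CSI layer.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-claim"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}

	created, err := client.CoreV1().PersistentVolumeClaims("default").Create(context.Background(), pvc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PVC:", created.Name)
}
```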
The K-Bench [`fio`](https://fio.readthedocs.io/en/latest/fio_doc.html) benchmark consists of several tests.
We selected four different tests that perform asynchronous access patterns because we believe they most accurately depict real-world I/O access for most applications.
The following graph shows I/O throughput in MiB/s (higher is better).
![I/O benchmark graph](../_media/benchmark_io.png)
Comparing Constellation on GCP with GKE, you see that Constellation offers similar read/write speeds in all scenarios.
Constellation on Azure and AKS, however, differ in some scenarios. In read-write mixes, Constellation on Azure outperforms AKS in terms of I/O. On full-write access, Constellation and AKS have the same speed.
## Conclusion
Despite providing substantial [security benefits](./security-benefits.md), Constellation overall only has a slight performance overhead over the managed Kubernetes offerings AKS and GKE. Constellation is on par in most benchmarks, but is slightly slower in certain scenarios due to network and storage encryption. When it comes to API latencies, Constellation even outperforms the less security-focused competition.

@@ -0,0 +1,11 @@
# Product features
Constellation is a Kubernetes engine that aims to provide the best possible data security in combination with enterprise-grade scalability and reliability features---and a smooth user experience.
From a security perspective, Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and corresponding security features, which shield your entire cluster from the underlying infrastructure.
From an operational perspective, Constellation provides the following key features:
* **Native support for different clouds**: Constellation works on Microsoft Azure, Google Cloud Platform (GCP), and Amazon Web Services (AWS). Support for OpenStack-based environments is coming with a future release. Constellation securely interfaces with the cloud infrastructure to provide [cluster autoscaling](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), [dynamic persistent volumes](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), and [service load balancing](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
* **High availability**: Constellation uses a [multi-master architecture](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) with a [stacked etcd topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology) to ensure high availability.
* **Integrated Day-2 operations**: Constellation lets you securely [upgrade](../workflows/upgrade.md) your cluster to a new release. It also lets you securely [recover](../workflows/recovery.md) a failed cluster. Both with a single command.

@@ -0,0 +1,22 @@
# Security benefits and threat model
Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and shields entire Kubernetes deployments from the infrastructure. More concretely, Constellation decreases the size of the trusted computing base (TCB) of a Kubernetes deployment. The TCB is the totality of elements in a computing environment that must be trusted not to be compromised. A smaller TCB results in a smaller attack surface. The following diagram shows how Constellation removes the *cloud & datacenter infrastructure* and the *physical hosts*, including the hypervisor, the host OS, and other components, from the TCB (red). Inside the confidential context (green), Kubernetes remains part of the TCB, but its integrity is attested and can be [verified](../workflows/verify-cluster.md).
![TCB comparison](../_media/tcb.svg)
Given this background, the following describes the concrete threat classes that Constellation addresses.
## Insider access
Employees and third-party contractors of cloud service providers (CSPs) have access to different layers of the cloud infrastructure.
This opens up a large attack surface where workloads and data can be read, copied, or manipulated. With Constellation, Kubernetes deployments are shielded from the infrastructure, and thus such access is prevented.
## Infrastructure-based attacks
Malicious cloud users ("hackers") may break out of their tenancy and access other tenants' data. Advanced attackers may even be able to establish a permanent foothold within the infrastructure and access data over a longer period. Analogously to the *insider access* scenario, Constellation also prevents access to a deployment's data in this scenario.
## Supply chain attacks
Supply chain security has received a lot of attention recently due to an [increasing number of recorded attacks](https://www.enisa.europa.eu/news/enisa-news/understanding-the-increase-in-supply-chain-security-attacks). For instance, a malicious actor could attempt to tamper with Constellation node images (including Kubernetes and other software) before they're loaded in the confidential VMs of a cluster. Constellation uses [remote attestation](../architecture/attestation.md) in conjunction with public [transparency logs](../workflows/verify-cli.md) to prevent this.
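To make the idea concrete, here's a generic sketch of signed-release verification. It isn't Constellation's actual tooling (that's covered in the linked verification workflow); it only illustrates the principle that a consumer checks an artifact's signature against the vendor's public key before trusting it.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Illustrative only: in practice the vendor's public key ships out of
	// band and the signature comes with the release artifact.
	pub, priv, _ := ed25519.GenerateKey(nil)

	artifact := []byte("node image contents")
	digest := sha256.Sum256(artifact)
	sig := ed25519.Sign(priv, digest[:])

	// The consumer verifies the signature over the digest before use.
	if ed25519.Verify(pub, digest[:], sig) {
		fmt.Println("signature valid: artifact may be used")
	} else {
		fmt.Println("signature invalid: reject artifact")
	}
}
```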
In the future, Constellation will extend this feature to customer workloads. This will enable cluster owners to create auditable policies that precisely define which containers can run in a given deployment.