mirror of https://github.com/edgelesssys/constellation.git
synced 2025-07-27 09:15:22 -04:00
Add docs to repo (#38) — parent 50d3f3ca7f, commit b95f3dbc91
180 changed files with 13401 additions and 67 deletions

docs/docs/overview/benchmarks.md

# Performance

Security and performance are generally considered to be in a tradeoff relationship: gaining one quality or aspect of a system typically means losing some of another.
Encryption is a common suspect for such a tradeoff, yet it's inevitable for upholding the confidentiality and privacy of data during cloud transformation.
Constellation provides encryption [of data at rest](../architecture/encrypted-storage.md), [in-cluster transit](../architecture/networking.md), and [in-use](confidential-kubernetes.md) of a Kubernetes cluster.
This article elaborates on the performance impact for applications deployed in Constellation versus standard Kubernetes clusters.

AMD and Azure have collaboratively released a [performance benchmark](https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796) for the runtime encryption of 3rd Gen AMD EPYC processors with the SEV-SNP capability enabled.
They found that Confidential VMs show minimal performance differences on common benchmarks compared with general-purpose VMs.
With the overhead being in the single digits, runtime memory encryption only noticeably affects compute-heavy applications.
Confidential VMs with AMD SEV-SNP are the foundation of Constellation, hence the same runtime overhead can be expected.

We performed additional benchmarks on Constellation clusters to assess more Kubernetes-specific metrics: intra-cluster network throughput, storage I/O, and Kubernetes API latencies.

## Test Setup

We benchmarked Constellation release v1.3.0 using [K-Bench](https://github.com/vmware-tanzu/k-bench). K-Bench is a configurable framework for benchmarking Kubernetes clusters in terms of storage I/O, network performance, and creating/scaling resources.

As a baseline, we compared Constellation with the managed Kubernetes offerings of the cloud providers that Constellation supports.

Throughout this article, you will find comparisons of Constellation on GCP with GKE and of Constellation on Azure with AKS.
We can't provide an accurate intercloud meta-comparison at this point because the Confidential VM machine types differ between clouds.

The benchmark ran with the following machines and configurations:

### Constellation on GCP / GKE

- Nodes: 3
- Machines: `n2d-standard-2`
- Kubernetes version: `1.23.6-gke.2200`
- Zone: `europe-west3-b`

### Constellation on Azure / AKS

- Nodes: 3
- Machines: `D2a_v4`
- Kubernetes version: `1.23.5`
- Region: `North Europe`
- Zone: `2`

### K-Bench

Using the default [K-Bench test configurations](https://github.com/vmware-tanzu/k-bench/tree/master/config), we ran the following tests on the clusters:

- `default`
- `dp_netperf_internode`
- `dp_network_internode`
- `dp_network_intranode`
- `dp_fio`
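
For readers who want to reproduce such a run, the commands below sketch how K-Bench is typically invoked against the current `kubectl` context. The run tag and output directory are illustrative assumptions, and the `run.sh` flags may differ between K-Bench versions.

```shell
# Fetch and install K-Bench (sketch; see the K-Bench README for details).
git clone https://github.com/vmware-tanzu/k-bench.git
cd k-bench
./install.sh

# Run the same tests listed above against the cluster in the current
# kubectl context; results are written to the output directory.
./run.sh -r "constellation-bench" \
         -t "default,dp_netperf_internode,dp_network_internode,dp_network_intranode,dp_fio" \
         -o "./results"
```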

## Results

### Kubernetes API Latency

At its core, the Kubernetes API is the way to query and modify a cluster's state. Latency matters here. Hence, it's vital that the API latency doesn't spike even with the additional level of security that Constellation's network provides.
K-Bench's `default` test performs calls to the API to create, update, and delete cluster resources.

The three graphs below compare the API latencies (lower is better) in milliseconds for pods, services, and deployments.



Pods: Except for the `Pod Update` call, Constellation is faster than AKS and GKE in terms of API calls.



Services: Constellation has lower latencies than AKS and GKE, except for service creation on AKS.



Deployments: Constellation has the lowest latency in all cases except for scaling deployments on GKE and creating deployments on AKS.

### Network

When it comes to network performance, there are two main indicators to differentiate: intra-node and inter-node transmission speed.
K-Bench provides benchmark tests for both, configured as `dp_netperf_internode`, `dp_network_internode`, and `dp_network_intranode`.

#### Inter-node

K-Bench has two benchmarks to evaluate the network performance between different nodes.

The first test (`dp_netperf_internode`) uses [`netperf`](https://hewlettpackard.github.io/netperf/) to measure the throughput. Constellation has a slightly lower network throughput than AKS and GKE.
This can largely be attributed to [Constellation's network encryption](../architecture/networking.md).

#### Intra-node

Intra-node communication happens between pods running on the same node.
The connections directly pass through the node's OS layer and never hit the network.
The benchmark evaluates how [Constellation's node OS image](../architecture/images.md) and runtime encryption influence the throughput.

The K-Bench tests `dp_network_internode` and `dp_network_intranode` use [`iperf`](https://iperf.fr/) to measure the available bandwidth.
Constellation's bandwidth for both sending and receiving is at 20 Gbps, while AKS achieves slightly higher numbers and GKE achieves about 30 Gbps in our tests.
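
K-Bench automates this measurement, but the setup can also be done by hand. The sketch below runs `iperf3` between two pods; the pod names and the `networkstatic/iperf3` image are illustrative assumptions.

```shell
# Start an iperf3 server pod, then measure bandwidth from a client pod.
kubectl run iperf-server --image=networkstatic/iperf3 -- -s

# Wait for the server pod and look up its IP.
kubectl wait --for=condition=Ready pod/iperf-server
SERVER_IP=$(kubectl get pod iperf-server -o jsonpath='{.status.podIP}')

# For the inter-node case, schedule the client on a different node (e.g.,
# via anti-affinity); for the intra-node case, on the same node.
kubectl run iperf-client --rm -it --image=networkstatic/iperf3 -- -c "$SERVER_IP"
```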



### Storage I/O

Azure and GCP offer persistent storage for their Kubernetes services AKS and GKE via the Container Storage Interface (CSI). CSI storage in Kubernetes is available via `PersistentVolumes` (`PV`) and consumed via `PersistentVolumeClaims` (`PVC`).
Upon requesting persistent storage through a PVC, GKE and AKS provision a PV as defined by a default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
Constellation provides persistent storage on Azure and GCP that's encrypted on the CSI layer. Read more about this in [how Constellation encrypts data at rest](../architecture/encrypted-storage.md).
Similarly, Constellation will provision a PV via a default storage class upon a PVC request.
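
For illustration, a minimal PVC along these lines would trigger such dynamic provisioning via the default storage class (the name and size are arbitrary for the sketch):

```yaml
# Omitting storageClassName makes the cluster fall back to its default
# storage class (encrypted on the CSI layer in Constellation's case).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```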

The K-Bench [`fio`](https://fio.readthedocs.io/en/latest/fio_doc.html) benchmark consists of several tests.
We selected four tests that perform asynchronous access patterns because we believe they most accurately depict real-world I/O access for most applications.
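
For reference, an fio job file along these lines describes such an asynchronous access pattern with a 70/30 read-write mix. This is an illustrative sketch, not K-Bench's exact configuration.

```ini
; Illustrative fio job: asynchronous random I/O with a 70/30 read-write mix.
[global]
ioengine=libaio   ; asynchronous I/O engine on Linux
iodepth=16
direct=1          ; bypass the page cache
bs=4k
size=1g
runtime=60
time_based

[randrw-70-30]
rw=randrw
rwmixread=70
```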

In the graph below, you will find the I/O throughput in `MiB/s`, where higher is better.



Comparing Constellation on GCP with GKE, we see that Constellation offers similar read/write speeds in all scenarios.

Constellation on Azure and AKS, however, differ in some scenarios. As you can see, only for the full write mix do Constellation and AKS have similar storage access speeds. In the 70/30 mix, AKS outperforms Constellation.

Note: For the sequential reads with a 0/100 read-write mix, no data could be measured on AKS, hence the missing data bar.

## Conclusion

Constellation can help transform the way organizations process data in the cloud by delivering high-performance Kubernetes while preserving confidentiality and privacy.
As demonstrated in our tests above, Constellation provides a Kubernetes cluster with minimal performance impact compared to the managed Kubernetes offerings AKS and GKE.
While enabling always-encrypted processing of data, the network and storage encryption come at a minimal price.
Constellation holds up in most benchmarks, though storage and network throughput can be slightly lower in certain scenarios.
Kubernetes API latencies aren't affected; Constellation even outperforms AKS and GKE in this aspect.

docs/docs/overview/confidential-kubernetes.md

# Confidential Kubernetes

We use the term *Confidential Kubernetes* to refer to the concept of using confidential-computing technology to shield entire Kubernetes clusters from the infrastructure. The three defining properties of this concept are:

1. **Workload shielding**: the confidentiality and integrity of all workload-related data and code are enforced.
2. **Control plane shielding**: the confidentiality and integrity of the cluster's control plane, state, and workload configuration are enforced.
3. **Attestation and verifiability**: the two properties above can be verified remotely based on hardware-rooted cryptographic certificates.

Each of the above properties is equally important. Only with all three in conjunction can an entire cluster be shielded without gaps. This is what Constellation is about.

Constellation's approach is to run all nodes of the Kubernetes cluster inside Confidential VMs (CVMs). This gives runtime encryption for the entire cluster. Constellation augments this with transparent encryption of the [network](../architecture/keys.md#network-encryption) and [storage](../architecture/encrypted-storage.md). Thus, workloads and control plane are truly end-to-end encrypted: at rest, in transit, and at runtime. Constellation manages the corresponding [cryptographic keys](../architecture/keys.md) inside CVMs.

Constellation verifies the integrity of each new CVM-based node using [remote attestation](../architecture/attestation.md). Only "good" nodes receive the cryptographic keys required to access the network and storage of a cluster. (A node is "good" if it's running a signed Constellation image inside a CVM and is in the expected state.) Towards the DevOps engineer, Constellation provides a single hardware-rooted certificate from which all of the above can be verified. As a result, Constellation wraps an entire cluster into one coherent *confidential context*. The concept is depicted in the following.



In contrast, managed Kubernetes with CVMs, as offered for example in [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/) and [GKE](https://cloud.google.com/kubernetes-engine), only provides runtime encryption for certain worker nodes. Here, each worker node is a separate (and typically unverified) confidential context. This provides limited security benefits, as it only prevents direct access to a worker node's memory. The large majority of potential attacks through the infrastructure remain possible. These include attacks through the control plane, access to external key management, and the corruption of worker node images. This leaves many problems unsolved. For instance, *Node A* has no means to verify whether *Node B* is "good" and whether it's OK to share data with it. Consequently, this approach leaves a large attack surface, as depicted in the following.



The following table highlights the key differences in terms of features:

| | Managed Kubernetes with CVMs | Confidential Kubernetes (Constellation✨) |
|--------------------------------|------------------------------|-------------------------------------------|
| Runtime encryption | Partial (data plane only) | **Yes** |
| Node image verification | No | **Yes** |
| Full cluster attestation | No | **Yes** |
| Transparent network encryption | No | **Yes** |
| Transparent storage encryption | No | **Yes** |
| Confidential key management | No | **Yes** |
| Cloud agnostic / multi-cloud | No | **Yes** |

docs/docs/overview/license.md

# License

## Source code

Constellation's source code is available on [GitHub](https://github.com/edgelesssys/constellation) under the [GNU Affero General Public License (AGPL)](https://www.gnu.org/licenses/agpl-3.0.en.html).

## Binaries

Edgeless Systems provides ready-to-use and [signed](../architecture/attestation.md#chain-of-trust) binaries of Constellation. This includes the CLI and the [node images](../architecture/images.md).

These binaries may be used free of charge within the bounds of Constellation's [**Community License**](#community-license). An [**Enterprise License**](#enterprise-license) can be purchased from Edgeless Systems.

The Constellation CLI displays relevant license information when you initialize your cluster. You are responsible for staying within the bounds of your respective license. Constellation doesn't enforce any limits so as not to endanger your cluster's availability.

### Community License

You are free to use the Constellation binaries provided by Edgeless Systems to create services for internal consumption. You must not use the Constellation binaries to provide hosted services of any type to third parties. Edgeless Systems gives no warranties and offers no support.

These terms may be different for future releases.

### Enterprise License

Enterprise Licenses don't have the above limitations and come with support and additional features. Find out more [here](https://www.edgeless.systems/products/constellation/).

Once you have received your Enterprise License file, place it in your [Constellation workspace](../architecture/orchestration.md#workspaces) in a file named `constellation.license`.

docs/docs/overview/product.md

# Product features

Constellation is a confidential orchestration platform, designed to be the most secure way to run Kubernetes.
It leverages confidential computing to isolate entire Kubernetes deployments and all workloads from the infrastructure.
From the inside, a Constellation cluster feels 100% like Kubernetes as you know it.
But for everyone else, from the outside, it's runtime-encrypted VMs talking over encrypted channels and writing encrypted data.

Constellation provides confidential computing enhancements to Kubernetes, including the following:

* Leveraging confidential VMs (CVMs), available in all major clouds, to isolate and encrypt the Kubernetes control plane and worker nodes.
* Node attestation, including [verified boot](../architecture/images.md#measured-boot), rooted in the hardware-measured attestation provided by CVM technologies.
* Operating a [container network interface (CNI) plugin](../architecture/networking.md) between CVMs for encrypted network communication in your cluster, enabling TLS offloading.
* [CVM-level persistent volume encryption](../architecture/encrypted-storage.md) to ensure the confidentiality and integrity of persistent data outside of the Kubernetes cluster.
* [Confidential key management](../architecture/keys.md).
* Verifiable, measured, and authenticated [updates](../architecture/orchestration.md#upgrades) of node OS images and Kubernetes components.

Constellation provides an enterprise-ready Kubernetes environment with key features such as:

* Multi-cloud deployments. You can deploy Constellation clusters to all major cloud platforms for a consistent confidential orchestration platform.
* Highly available (HA) Confidential Kubernetes clusters with [stacked etcd topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology).
* Integration with the Kubernetes cloud controller manager (CCM) to securely provide cloud services such as [cluster autoscaling](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), [dynamic persistent volumes](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), and [service load balancing](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).

docs/docs/overview/security-benefits.md

# Security benefits and threat model

Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and shields entire Kubernetes deployments from the infrastructure. More concretely, Constellation decreases the size of the trusted computing base (TCB) of a Kubernetes deployment. The TCB is the totality of elements in a computing environment that must be trusted not to be compromised. A smaller TCB results in a smaller attack surface. The following diagram shows how Constellation removes the *cloud & datacenter infrastructure* and the *physical hosts*, including the hypervisor, the host OS, and other components, from the TCB (red). Inside the confidential context (green), Kubernetes remains part of the TCB, but its integrity is attested and can be [verified](../workflows/verify.md).



Given this background, the following describes the concrete threat classes that Constellation addresses.

## Insider access

Employees and third-party contractors of cloud service providers (CSPs) have access to different layers of the cloud infrastructure.
This opens up a large attack surface where workloads and data can be read, copied, or manipulated. With Constellation, Kubernetes deployments are shielded from the infrastructure, and thus such accesses are prevented.

## Infrastructure-based attacks

Malicious cloud users ("hackers") may break out of their tenancy and access other tenants' data. Advanced attackers may even be able to establish a permanent foothold within the infrastructure and repeatedly access data over a longer period. Analogously to the *insider access* scenario, Constellation also prevents access to a deployment's data in this scenario.

## Supply chain attacks

Supply chain security has been receiving lots of attention recently due to an [increasing number of recorded attacks](https://www.enisa.europa.eu/news/enisa-news/understanding-the-increase-in-supply-chain-security-attacks). For instance, a malicious actor could attempt to tamper with Constellation node images (including Kubernetes and other software) before they're loaded in the confidential VMs of a cluster. Constellation uses remote attestation in conjunction with public transparency logs to prevent this. The approach is detailed [here](../architecture/attestation.md).

In the future, Constellation will extend this feature to customer workloads. This will enable cluster owners to create auditable policies that precisely define which containers can run in a given deployment.