restructuring cntd

parent 2dcd37e0b6
commit 7ad885cea0
@@ -1,6 +1,6 @@

# Architecture

This section of the documentation provides a comprehensive overview of Constellation's inner workings. It explains the chain of trust between the different components and how they ensure robust protection of your workloads. The main chapters include:

- [**Protocol overview**](./overview.md): The best **starting point** for exploring the architecture documentation. This chapter provides an overview of Constellation's architecture and explains the underlying security protocol used to achieve confidentiality.
- [**Key components**](./components/cli.md): This chapter outlines Constellation's key components, their main purposes, and how users interact with them.
@@ -3,7 +3,18 @@

The security of Constellation is based on a set of protocols.
The protocols are outlined in the following.
The following diagram sketches the basic trust relationships between the entities in a Constellation cluster.

```mermaid
flowchart LR
    A[User]-- "verifies" -->B[CLI]
    B[CLI]-- "verifies" -->C([Runtime measurements])
    D[Edgeless Systems]-- "signs" -->B[CLI]
    D[Edgeless Systems]-- "signs" -->C([Runtime measurements])
    B[CLI]-- "verifies (remote attestation)" -->E[First node]
    E[First node]-- "verifies (remote attestation)" -->F[Other nodes]
    C([Runtime measurements]) -.-> E[First node]
    C([Runtime measurements]) -.-> F[Other nodes]
```

## Software components
@@ -3,8 +3,8 @@

A local cluster lets you deploy and test Constellation without a cloud subscription.
You have two options:

- Use MiniConstellation to automatically deploy a two-node cluster.
- For more fine-grained control, create the cluster using the QEMU provider.

Both options use virtualization to create a local cluster with control-plane nodes and worker nodes. They **don't** require hardware with Confidential VM (CVM) support. For attestation, they currently use a software-based vTPM provided by KVM/QEMU.
@@ -13,17 +13,17 @@ You can use a VM, but it needs nested virtualization.

## Prerequisites

- Machine requirements:
  - An x86-64 CPU with at least 4 cores (6 cores are recommended)
  - At least 4 GB RAM (6 GB are recommended)
  - 20 GB of free disk space
  - Hardware virtualization enabled in the BIOS/UEFI (often referred to as Intel VT-x or AMD-V/SVM) / nested-virtualization support when using a VM
- Software requirements:
  - Linux OS with [KVM kernel module](https://www.linux-kvm.org/page/Main_Page)
    - Recommended: Ubuntu 22.04 LTS
  - [Docker](https://docs.docker.com/engine/install/)
  - [xsltproc](https://gitlab.gnome.org/GNOME/libxslt/-/wikis/home)
  - (Optional) [virsh](https://www.libvirt.org/manpages/virsh.html) to observe and access your nodes
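To quickly check whether your machine meets the virtualization prerequisites, you can inspect the CPU flags and the KVM device (a short sketch using standard Linux tools; package names and outputs vary by distribution):

```bash
# Shows whether VT-x/AMD-V is available to the OS
lscpu | grep -i virtualization
# The device node exists if the KVM kernel module is loaded
ls -l /dev/kvm
```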
### Software installation on Ubuntu

@@ -49,7 +49,9 @@ sudo iptables -P FORWARD ACCEPT

<TabItem value="mini" label="MiniConstellation">

<!-- vale off -->

With the `constellation mini` command, you can deploy and test Constellation locally. This mode is called MiniConstellation. Conceptually, MiniConstellation is similar to [MicroK8s](https://microk8s.io/), [K3s](https://k3s.io/), and [minikube](https://minikube.sigs.k8s.io/docs/).

<!-- vale on -->

:::caution

@@ -71,7 +73,7 @@ The following creates your MiniConstellation cluster (may take up to 10 minutes

```bash
constellation mini up
```

This will configure your current directory as the [workspace](../architecture/components/cli.md#workspaces) for this cluster.
All `constellation` commands concerning this cluster need to be issued from this directory.
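As a quick check, you can point `kubectl` at the new cluster and list its nodes (a sketch; it assumes the kubeconfig was written to the workspace as `constellation-admin.conf`, as shown for the QEMU flow below):

```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
kubectl get nodes
```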
</TabItem>

@@ -94,56 +96,56 @@ attaching persistent storage, or autoscaling aren't available.

1. To set up your local cluster, you need to create a configuration file for Constellation first.

   ```bash
   constellation config generate qemu
   ```

   This creates a [configuration file](../workflows/config.md) for QEMU called `constellation-conf.yaml`. After that, your current folder also becomes your [workspace](../architecture/components/cli.md#workspaces). All `constellation` commands for your cluster need to be executed from this directory.

2. Now you can create your cluster and its nodes. `constellation apply` uses the options set in `constellation-conf.yaml`.

   ```bash
   constellation apply -y
   ```

   The output should look like the following:

   ```shell-session
   $ constellation apply -y
   Checking for infrastructure changes
   The following Constellation cluster will be created:
   3 control-plane nodes of type 2-vCPUs will be created.
   1 worker node of type 2-vCPUs will be created.
   Creating
   Cloud infrastructure created successfully.
   Your Constellation master secret was successfully written to ./constellation-mastersecret.json
   Connecting
   Initializing cluster
   Installing Kubernetes components
   Your Constellation cluster was successfully initialized.

   Constellation cluster identifier  g6iMP5wRU1b7mpOz2WEISlIYSfdAhB0oNaOg6XEwKFY=
   Kubernetes configuration          constellation-admin.conf

   You can now connect to your cluster by executing:
   export KUBECONFIG="$PWD/constellation-admin.conf"
   ```
   The cluster's identifier will be different in your output.
   Keep `constellation-mastersecret.json` somewhere safe.
   This will allow you to [recover your cluster](../workflows/recovery.md) in case of a disaster.

   :::info

   Depending on your setup, `constellation apply` may take 10+ minutes to complete.

   :::

3. Configure kubectl

   ```bash
   export KUBECONFIG="$PWD/constellation-admin.conf"
   ```

</TabItem>
</Tabs>

@@ -158,7 +160,7 @@ NAME              STATUS   ROLES           AGE   VERSION
control-plane-0   Ready    control-plane   66s   v1.24.6
```

Additional nodes will request to join the cluster shortly. Before each additional node is allowed to join the cluster, its state is verified using remote attestation by the [JoinService](../architecture/components/microservices.md#joinservice).
If verification passes successfully, the new node receives keys and certificates to join the cluster.

You can follow this process by viewing the logs of the JoinService:
@@ -184,18 +186,18 @@ worker-0          Ready    <none>          32s   v1.24.6

1. Deploy the [emojivoto app](https://github.com/BuoyantIO/emojivoto)

   ```bash
   kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
   ```

2. Expose the frontend service locally

   ```bash
   kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
   kubectl -n emojivoto port-forward svc/web-svc 8080:80 &
   curl http://localhost:8080
   kill %1
   ```

## Terminate your cluster
@@ -14,19 +14,19 @@ For Constellation, the ideal environment provides the following:

The following table summarizes the state of features for different infrastructures.

| **Feature**                       | **AWS** | **Azure** | **GCP** | **STACKIT** | **OpenStack (Yoga)** |
| --------------------------------- | ------- | --------- | ------- | ----------- | -------------------- |
| **1. Custom images**              | Yes     | Yes       | Yes     | Yes         | Yes                  |
| **2. SEV-SNP or TDX**             | Yes     | Yes       | Yes     | No          | Depends on kernel/HV |
| **3. Raw guest attestation**      | Yes     | Yes       | Yes     | No          | Depends on kernel/HV |
| **4. Reviewable firmware**        | Yes     | No        | No      | No          | Depends on kernel/HV |
| **5. Confidential measured boot** | No      | Yes       | No      | No          | Depends on kernel/HV |

## Amazon Web Services (AWS)

Amazon EC2 [supports AMD SEV-SNP](https://aws.amazon.com/de/about-aws/whats-new/2023/04/amazon-ec2-amd-sev-snp/).
Regarding (3), AWS provides direct access to attestation statements.
However, regarding (5), attestation is partially based on the [NitroTPM](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nitrotpm.html) for [measured boot](../architecture/security/attestation.md#measured-boot), which is a vTPM managed by the Nitro hypervisor.
Hence, the hypervisor is currently part of Constellation's TCB.
Regarding (4), the [firmware is open source](https://github.com/aws/uefi) and can be reproducibly built.
@@ -44,7 +44,7 @@ Thus, the Azure closed-source firmware becomes part of Constellation's trusted c

The [CVMs Generally Available in GCP](https://cloud.google.com/confidential-computing/confidential-vm/docs/confidential-vm-overview#technologies) are based on AMD SEV-ES or SEV-SNP.
Regarding (3), with their SEV-SNP offering, Google provides direct access to attestation statements.
However, regarding (5), attestation is partially based on the [Shielded VM vTPM](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#vtpm) for [measured boot](../architecture/security/attestation.md#measured-boot), which is a vTPM managed by Google's hypervisor.
Hence, the hypervisor is currently part of Constellation's TCB.
Regarding (4), the CVMs still include closed-source firmware.

@@ -57,7 +57,7 @@ With it, Constellation would have a similar TCB and attestation flow as with the

## OpenStack

OpenStack is open-source cloud and infrastructure management software. It's used by many smaller CSPs and datacenters. In the latest _Yoga_ version, OpenStack has basic support for CVMs. However, much depends on the employed kernel and hypervisor. Features (2)--(4) are likely to be a _Yes_ with Linux kernel version 6.2. Thus, going forward, OpenStack on corresponding AMD or Intel hardware will be a viable underpinning for Constellation.

## Conclusion
@@ -1,6 +1,6 @@

# Confidential Kubernetes

We use the term _Confidential Kubernetes_ to refer to the concept of using confidential-computing technology to shield entire Kubernetes clusters from the infrastructure. The three defining properties of this concept are:

1. **Workload shielding**: the confidentiality and integrity of all workload-related data and code are enforced.
2. **Control plane shielding**: the confidentiality and integrity of the cluster's control plane, state, and workload configuration are enforced.

@@ -12,31 +12,31 @@ Each of the above properties is equally important. Only with all three in conjun

Constellation implements the Confidential Kubernetes concept with the following security features.

- **Runtime encryption**: Constellation runs all Kubernetes nodes inside Confidential VMs (CVMs). This gives runtime encryption for the entire cluster.
- **Network and storage encryption**: Constellation augments this with transparent encryption of the [network](../architecture/security/encrypted-networking.md), [persistent storage](../architecture/security/encrypted-storage.md), and other managed storage like [AWS S3](../architecture/security/encrypted-storage.md#encrypted-s3-object-storage). Thus, workloads and control plane are truly end-to-end encrypted: at rest, in transit, and at runtime.
- **Transparent key management**: Constellation manages the corresponding [cryptographic keys](../architecture/security/keys.md) inside CVMs.
- **Node attestation and verification**: Constellation verifies the integrity of each new CVM-based node using [remote attestation](../architecture/security/attestation.md). Only "good" nodes receive the cryptographic keys required to access the network and storage of a cluster.
- **Confidential computing-optimized images**: A node is "good" if it's running a signed Constellation [node image](../architecture/components/node-images.md) inside a CVM and is in the expected state. (Node images are hardware-measured during boot. The measurements are reflected in the attestation statements that are produced by nodes and verified by Constellation.)
- **"Whole cluster" attestation**: Towards the DevOps engineer, Constellation provides a single hardware-rooted certificate from which all of the above can be verified.

With the above, Constellation wraps an entire cluster into one coherent and verifiable _confidential context_. The concept is depicted in the following.

![](../_media/concept-constellation.svg)

## Contrast: Managed Kubernetes with CVMs

In contrast, managed Kubernetes with CVMs, as it's for example offered in [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/) and [GKE](https://cloud.google.com/kubernetes-engine), only provides runtime encryption for certain worker nodes. Here, each worker node is a separate (and typically unverified) confidential context. This only provides limited security benefits as it only prevents direct access to a worker node's memory. The large majority of potential attacks through the infrastructure remain unaffected. This includes attacks through the control plane, access to external key management, and the corruption of worker node images. This leaves many problems unsolved. For instance, _Node A_ has no means to verify if _Node B_ is "good" and if it's OK to share data with it. Consequently, this approach leaves a large attack surface, as is depicted in the following.

![](../_media/concept-managed.svg)

The following table highlights the key differences in terms of features.

|                                 | Managed Kubernetes with CVMs | Confidential Kubernetes (Constellation✨) |
| ------------------------------- | ---------------------------- | ----------------------------------------- |
| Runtime encryption              | Partial (data plane only)    | **Yes**                                    |
| Node image verification         | No                           | **Yes**                                    |
| Full cluster attestation        | No                           | **Yes**                                    |
| Transparent network encryption  | No                           | **Yes**                                    |
| Transparent storage encryption  | No                           | **Yes**                                    |
| Confidential key management     | No                           | **Yes**                                    |
| Cloud agnostic / multi-cloud    | No                           | **Yes**                                    |
@@ -6,7 +6,7 @@ Constellation's source code is available on [GitHub](https://github.com/edgeless

## Binaries

Edgeless Systems provides ready-to-use and [signed](../architecture/security/attestation.md#chain-of-trust) binaries of Constellation. This includes the CLI and the [node images](../architecture/components/node-images.md).

These binaries may be used free of charge within the bounds of Constellation's [**Community License**](#community-license). An [**Enterprise License**](#enterprise-license) can be purchased from Edgeless Systems.

@@ -26,7 +26,7 @@ You are free to use the Constellation binaries provided by Edgeless Systems to c

Enterprise Licenses don't have the above limitations and come with support and additional features. Find out more at the [product website](https://www.edgeless.systems/products/constellation/).

Once you have received your Enterprise License file, place it in your [Constellation workspace](../architecture/components/cli.md#workspaces) in a file named `constellation.license`.

## CSP Marketplaces
@@ -60,7 +60,7 @@ The benchmark measured the bandwidth of pod-to-pod and pod-to-service connection

GKE and Constellation on GCP had a maximum network bandwidth of [10 Gbps](https://cloud.google.com/compute/docs/general-purpose-machines#n2d_machines).
AKS with `Standard_D4as_v5` machines had a maximum network bandwidth of [12.5 Gbps](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series).
The Confidential VM equivalent `Standard_DC4as_v5` currently has a network bandwidth of [1.25 Gbps](https://learn.microsoft.com/en-us/azure/virtual-machines/dcasv5-dcadsv5-series#dcasv5-series-products).
Therefore, to make the test comparable, both AKS and Constellation on Azure were running with `Standard_DC4as_v5` machines and 1.25 Gbps bandwidth.

Constellation on Azure and AKS used an MTU of 1500.

@@ -68,7 +68,7 @@ Constellation on GCP used an MTU of 8896. GKE used an MTU of 1450.

The difference in network bandwidth can largely be attributed to two factors.

- Constellation's [network encryption](../../architecture/security/encrypted-networking.md) via Cilium and WireGuard, which protects data in-transit.
- [AMD SEV using SWIOTLB bounce buffers](https://lore.kernel.org/all/20200204193500.GA15564@ashkalra_ubuntu_server/T/) for all DMA including network I/O.

#### Pod-to-Pod

@@ -125,7 +125,7 @@ Similarly, when comparing Constellation on Azure with AKS using CVMs, Constellat

Azure and GCP offer persistent storage for their Kubernetes services AKS and GKE via the Container Storage Interface (CSI). CSI storage in Kubernetes is available via `PersistentVolumes` (PV) and consumed via `PersistentVolumeClaims` (PVC).
Upon requesting persistent storage through a PVC, GKE and AKS will provision a PV as defined by a default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
Constellation provides persistent storage on Azure and GCP [that's encrypted on the CSI layer](../../architecture/security/encrypted-storage.md).
Similarly, upon a PVC request, Constellation will provision a PV via a default storage class.

For Constellation on Azure and AKS, the benchmark ran with Azure Disk storage [Standard SSD](https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) of 400 GiB size.
@@ -1,6 +1,6 @@

# Security benefits and threat model

Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and shields entire Kubernetes deployments from the infrastructure. More concretely, Constellation decreases the size of the trusted computing base (TCB) of a Kubernetes deployment. The TCB is the totality of elements in a computing environment that must be trusted not to be compromised. A smaller TCB results in a smaller attack surface. The following diagram shows how Constellation removes the _cloud & datacenter infrastructure_ and the _physical hosts_, including the hypervisor, the host OS, and other components, from the TCB (red). Inside the confidential context (green), Kubernetes remains part of the TCB, but its integrity is attested and can be [verified](../workflows/verify-cluster.md).

![](../_media/tcb.svg)

@@ -13,10 +13,10 @@ This opens up a large attack surface where workloads and data can be read, copie

## Infrastructure-based attacks

Malicious cloud users ("hackers") may break out of their tenancy and access other tenants' data. Advanced attackers may even be able to establish a permanent foothold within the infrastructure and access data over a longer period. Analogously to the _insider access_ scenario, Constellation also prevents access to a deployment's data in this scenario.

## Supply chain attacks

Supply chain security is receiving lots of attention recently due to an [increasing number of recorded attacks](https://www.enisa.europa.eu/news/enisa-news/understanding-the-increase-in-supply-chain-security-attacks). For instance, a malicious actor could attempt to tamper with Constellation node images (including Kubernetes and other software) before they're loaded in the confidential VMs of a cluster. Constellation uses [remote attestation](../architecture/security/attestation.md) in conjunction with public [transparency logs](../workflows/verify-cli.md) to prevent this.

In the future, Constellation will extend this feature to customer workloads. This will enable cluster owners to create auditable policies that precisely define which containers can run in a given deployment.
@@ -15,10 +15,10 @@ The subdirectories are created on the first Constellation CLI action that uses T

Currently, these subdirectories are:

- `constellation-terraform` - Terraform state files for the resources of the Constellation cluster
- `constellation-iam-terraform` - Terraform state files for IAM configuration

As with all commands, commands that work with these files (e.g., `apply`, `terminate`, `iam`) have to be executed from the root of the cluster's [workspace directory](../architecture/components/cli.md#workspaces). You usually don't need and shouldn't manipulate or delete the subdirectories manually.

## Interacting with Terraform manually

@@ -27,11 +27,12 @@ Manual interaction with Terraform state created by Constellation (i.e., via the

## Terraform debugging

To debug Terraform issues, the Constellation CLI offers the `tf-log` flag. You can set it to any of [Terraform's log levels](https://developer.hashicorp.com/terraform/internals/debugging):

- `JSON` (JSON-formatted logs at `TRACE` level)
- `TRACE`
- `DEBUG`
- `INFO`
- `WARN`
- `ERROR`

The log output is written to the `terraform.log` file in the workspace directory. The output is appended to the file on each run.
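For example, to capture debug-level Terraform logs while applying changes (a sketch; adjust the command and level to your situation):

```bash
constellation apply --tf-log=DEBUG
```

Afterwards, inspect `terraform.log` in the workspace directory.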
@@ -19,7 +19,7 @@ The most significant ones are:

You can use the `--skip-phases` flag to skip specific phases of the process.
For example, if you created the infrastructure manually, you can skip the cloud resource creation phase.

See the [architecture](../architecture/components/cli.md) section for details on the inner workings of this process.

:::tip
If you don't have a cloud subscription, you can also set up a [local Constellation cluster using virtualization](../getting-started/first-steps-local.md) for testing.

@@ -34,7 +34,7 @@ Before you create the cluster, make sure to have a [valid configuration file](./

```bash
constellation apply
```

`apply` stores the state of your cluster's cloud resources in a [`constellation-terraform`](../architecture/components/cli.md#cluster-creation-process) directory in your workspace.

</TabItem>
<TabItem value="self-managed" label="Self-managed">

@@ -45,7 +45,7 @@ It's recommended to use Terraform for infrastructure management, but you can use

:::info

When using Terraform, you can use the [Constellation Terraform provider](./terraform-provider.md) to manage the entire Constellation cluster lifecycle.

:::

@@ -56,12 +56,12 @@ management tooling of your choice. You need to keep the essential functionality

:::info

On Azure, a manual update to the MAA provider's policy is necessary.
You can apply the update with the following command after creating the infrastructure, with `<URL>` being the URL of the MAA provider (i.e., `$(terraform output attestation_url | jq -r)`, when using the minimal Terraform configuration).

```bash
constellation maa-patch <URL>
```

:::
@@ -2,10 +2,10 @@

Recovery of a Constellation cluster means getting it back into a healthy state after too many concurrent node failures in the control plane.
Reasons for an unhealthy cluster can vary from a power outage or planned reboot to migration of nodes and regions.
Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [state disk](../architecture/components/node-images.md#state-disk).

Constellation provides a recovery mechanism for cases where the control plane has failed and is unable to replace nodes.
The `constellation recover` command securely connects to all nodes in need of recovery using [attested TLS](../architecture/security/attestation.md#attested-tls-atls) and provides them with the keys to decrypt their state disks and continue booting.

## Identify unhealthy clusters
@@ -19,12 +19,12 @@ In the following, you'll find detailed descriptions for identifying clusters stu

<Tabs groupId="csp">
<TabItem value="aws" label="AWS">

First, open the AWS console to view all Auto Scaling Groups (ASGs) in the region of your cluster. Select the ASG of the control plane `<cluster-name>-<UID>-control-plane` and check that enough members are in a _Running_ state.

Second, check the boot logs of these _Instances_. In the ASG's _Instance management_ view, select each desired instance. In the upper right corner, select **Action > Monitor and troubleshoot > Get system log**.

In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/components/node-images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"gcp"}
@@ -33,7 +33,7 @@ Similar output to the following means your node was restarted and needs to decry
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```

The node will then try to connect to the [_JoinService_](../architecture/components/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
@@ -51,15 +51,15 @@ This means that you have to recover the node manually.

<TabItem value="azure" label="Azure">

In the Azure portal, find the cluster's resource group.
Inside the resource group, open the control plane _Virtual machine scale set_ `constellation-scale-set-controlplanes-<suffix>`.
On the left, go to **Settings** > **Instances** and check that enough members are in a _Running_ state.

Second, check the boot logs of these _Instances_.
In the scale set's _Instances_ view, open the details page of the desired instance.
On the left, go to **Support + troubleshooting** > **Serial console**.

In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/components/node-images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T09:56:41Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"azure"}
@@ -68,7 +68,7 @@ Similar output to the following means your node was restarted and needs to decry
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
```

The node will then try to connect to the [_JoinService_](../architecture/components/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
@@ -85,17 +85,17 @@ This means that you have to recover the node manually.

</TabItem>
<TabItem value="gcp" label="GCP">

First, check that the control plane _Instance Group_ has enough members in a _Ready_ state.
In the GCP Console, go to **Instance Groups** and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.

Second, check the status of the _VM Instances_.
Go to **VM Instances** and open the details of the desired instance.
Check the serial console output of that instance by opening the **Logs** > **Serial port 1 (console)** page:

![](../_media/recovery-gcp-serial-console-link.png)

In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/components/node-images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"gcp"}
@@ -104,7 +104,7 @@ Similar output to the following means your node was restarted and needs to decry
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```

The node will then try to connect to the [_JoinService_](../architecture/components/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
@@ -121,12 +121,12 @@ This means that you have to recover the node manually.

</TabItem>
<TabItem value="stackit" label="STACKIT">

First, open the STACKIT portal to view all servers in your project. Select individual control plane nodes `<cluster-name>-<UID>-control-plane-<UID>-<index>` and check that enough members are in a _Running_ state.

Second, check the boot logs of these servers. Click on a server name and select **Overview**. Find the **Machine Setup** section and click on **Web console** > **Open console**.

In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/components/node-images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"gcp"}
@@ -135,7 +135,7 @@ Similar output to the following means your node was restarted and needs to decry
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```

The node will then try to connect to the [_JoinService_](../architecture/components/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
@@ -156,8 +156,8 @@ This means that you have to recover the node manually.

Recovering a cluster requires the following parameters:

- The `constellation-state.yaml` file in your working directory or the cluster's endpoint
- The master secret of the cluster

A cluster can be recovered like this:
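A minimal invocation from the cluster's workspace might look like the following (a sketch; it assumes the state file and master secret from cluster creation are present in the workspace):

```bash
constellation recover
```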
@@ -7,9 +7,10 @@ With s3proxy, you can use S3 for storage in a confidential way without having to

## Limitations

Currently, s3proxy has the following limitations:

- Only `PutObject` and `GetObject` requests are encrypted/decrypted by s3proxy.
  By default, s3proxy will block requests that may expose unencrypted data to S3 (e.g. UploadPart).
  The `allow-multipart` flag disables request blocking for evaluation purposes.
- Using the [Range](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html#API_GetObject_RequestSyntax) header on `GetObject` is currently not supported and will result in an error.

These limitations will be removed with future iterations of s3proxy.

@@ -18,6 +19,7 @@ If you want to use s3proxy but these limitations stop you from doing so, conside

## Deployment

You can add the s3proxy to your Constellation cluster as follows:

1. Add the Edgeless Systems chart repository:

   ```bash
   helm repo add edgeless https://helm.edgeless.systems/stable
   ```

@@ -31,16 +33,15 @@ You can add the s3proxy to your Constellation cluster as follows:

If you want to run a demo application, check out the [Filestash with s3proxy](../getting-started/examples/filestash-s3proxy.md) example.

## Technical details

### Encryption

s3proxy relies on Google's [Tink Cryptographic Library](https://developers.google.com/tink) to implement cryptographic operations securely.
The used cryptographic primitives are [NIST SP 800 38f](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-38F.pdf) for key wrapping and [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard)-[GCM](<https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Galois/counter_(GCM)>) with 256 bit keys for data encryption.

s3proxy uses [envelope encryption](https://cloud.google.com/kms/docs/envelope-encryption) to encrypt objects.
This means s3proxy uses a key encryption key (KEK) issued by the [KeyService](../architecture/components/microservices.md#keyservice) to encrypt data encryption keys (DEKs).
Each S3 object is encrypted with its own DEK.
The encrypted DEK is then saved as metadata of the encrypted object.
This enables key rotation of the KEK without re-encrypting the data in S3.
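As a rough illustration of this envelope-encryption flow (a sketch with `openssl`, not s3proxy's actual implementation, which uses Tink with AES-GCM and NIST key wrapping; all file names are hypothetical):

```bash
# kek.hex stands in for the KEK issued by the KeyService.
openssl rand -hex 32 > dek.hex                                                       # fresh DEK for this object
openssl enc -aes-256-cbc -pbkdf2 -in object.txt -out object.enc -pass file:dek.hex   # encrypt the object with the DEK
openssl enc -aes-256-cbc -pbkdf2 -in dek.hex -out dek.enc -pass file:kek.hex         # wrap the DEK with the KEK
# Store object.enc in S3 and attach dek.enc as object metadata.
# Rotating the KEK only requires re-wrapping dek.enc, not re-encrypting object.enc.
```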
@@ -4,7 +4,7 @@

---

Constellation builds produce a [software bill of materials (SBOM)](https://www.ntia.gov/SBOM) for each generated [artifact](../architecture/components/microservices.md).
You can use SBOMs to make informed decisions about dependencies and vulnerabilities in a given application. Enterprises rely on SBOMs to maintain an inventory of used applications, which allows them to take data-driven approaches to managing risks related to vulnerabilities.

SBOMs for Constellation are generated using [Syft](https://github.com/anchore/syft), signed using [Cosign](https://github.com/sigstore/cosign), and stored with the produced artifact.

@@ -13,9 +13,9 @@ Constellation supports the available CSI-based storage options for Kubernetes en

However, their encryption takes place in the storage backend and is managed by the CSP.
Thus, using the default CSI drivers for these storage types means trusting the CSP with your persistent data.

To address this, Constellation provides CSI drivers for AWS EBS, Azure Disk, GCE PD, and OpenStack Cinder, offering [encryption on the node level](../architecture/security/keys.md#storage-encryption). They enable transparent encryption for persistent volumes without needing to trust the cloud backend. Plaintext data never leaves the confidential VM context, offering you confidential storage.

For more details, see [encrypted persistent storage](../architecture/security/encrypted-storage.md).

## CSI drivers

@@ -65,17 +65,17 @@ If you don't need a CSI driver or wish to deploy your own, you can disable the a

AWS comes with two storage classes by default.

- `encrypted-rwo`
  - Uses [SSDs of `gp3` type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html)
  - ext-4 filesystem
  - Encryption of all data written to disk
- `integrity-encrypted-rwo`
  - Uses [SSDs of `gp3` type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html)
  - ext-4 filesystem
  - Encryption of all data written to disk
  - Integrity protection of data written to disk

For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/security/encrypted-storage.md#cryptographic-algorithms).
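If, for example, you want integrity protection by default, you can mark that class as the cluster's default storage class with standard Kubernetes tooling (a sketch; Kubernetes expects only one class to carry the default annotation, so remove it from the previous default first):

```bash
kubectl patch storageclass integrity-encrypted-rwo \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```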
:::info

@@ -94,17 +94,17 @@ Note that volume expansion isn't supported for integrity-protected disks.

Azure comes with two storage classes by default.

- `encrypted-rwo`
  - Uses [Standard SSDs](https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds)
  - ext-4 filesystem
  - Encryption of all data written to disk
- `integrity-encrypted-rwo`
  - Uses [Premium SSDs](https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssds)
  - ext-4 filesystem
  - Encryption of all data written to disk
  - Integrity protection of data written to disk

For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/security/encrypted-storage.md#cryptographic-algorithms).

:::info
@ -123,17 +123,17 @@ Note that volume expansion isn't supported for integrity-protected disks.
|
||||
|
||||
GCP comes with two storage classes by default.
|
||||
|
||||
* `encrypted-rwo`
|
||||
* Uses [standard persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs)
|
||||
* ext-4 filesystem
|
||||
* Encryption of all data written to disk
|
||||
* `integrity-encrypted-rwo`
|
||||
* Uses [performance (SSD) persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs)
|
||||
* ext-4 filesystem
|
||||
* Encryption of all data written to disk
|
||||
* Integrity protection of data written to disk
|
||||
- `encrypted-rwo`
|
||||
- Uses [standard persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs)
|
||||
- ext-4 filesystem
|
||||
- Encryption of all data written to disk
|
||||
- `integrity-encrypted-rwo`
|
||||
- Uses [performance (SSD) persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs)
|
||||
- ext-4 filesystem
|
||||
- Encryption of all data written to disk
|
||||
- Integrity protection of data written to disk
|
||||
|
||||
For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/encrypted-storage.md#cryptographic-algorithms).
|
||||
For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/security/encrypted-storage.md#cryptographic-algorithms).

:::info

@ -152,17 +152,17 @@ Note that volume expansion isn't supported for integrity-protected disks.

STACKIT comes with two storage classes by default.

* `encrypted-rwo`
  * Uses [disks of `storage_premium_perf1` type](https://docs.stackit.cloud/stackit/en/service-plans-blockstorage-75137974.html)
  * ext-4 filesystem
  * Encryption of all data written to disk
* `integrity-encrypted-rwo`
  * Uses [disks of `storage_premium_perf1` type](https://docs.stackit.cloud/stackit/en/service-plans-blockstorage-75137974.html)
  * ext-4 filesystem
  * Encryption of all data written to disk
  * Integrity protection of data written to disk
- `encrypted-rwo`
  - Uses [disks of `storage_premium_perf1` type](https://docs.stackit.cloud/stackit/en/service-plans-blockstorage-75137974.html)
  - ext-4 filesystem
  - Encryption of all data written to disk
- `integrity-encrypted-rwo`
  - Uses [disks of `storage_premium_perf1` type](https://docs.stackit.cloud/stackit/en/service-plans-blockstorage-75137974.html)
  - ext-4 filesystem
  - Encryption of all data written to disk
  - Integrity protection of data written to disk

For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/encrypted-storage.md#cryptographic-algorithms).
For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/security/encrypted-storage.md#cryptographic-algorithms).

:::info

@ -181,54 +181,54 @@ Note that volume expansion isn't supported for integrity-protected disks.

1. Create a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)

A [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) is a request for storage with certain properties.
It can refer to a storage class.
The following creates a persistent volume claim, requesting 20 GB of storage via the `encrypted-rwo` storage class:

```bash
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-example
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: encrypted-rwo
  resources:
    requests:
      storage: 20Gi
EOF
```

2. Create a Pod with persistent storage

You can assign a persistent volume claim to an application in need of persistent storage.
The mounted volume persists across restarts.
The following creates a pod that uses the previously created persistent volume claim; a quick check of the result is sketched after the manifest:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: default
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: pvc-example
      readOnly: false
EOF
```
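
After applying the two manifests, a quick check confirms that the claim is bound and that the pod can write to the encrypted volume. This is an illustrative sketch using the names from the example manifests (`pvc-example`, `web-server`):

```bash
# The claim should show STATUS "Bound" once a volume has been provisioned.
kubectl get pvc pvc-example --namespace default
# Write a file to the mounted volume and read it back.
kubectl exec web-server --namespace default -- sh -c 'echo test > /var/lib/www/html/hello && cat /var/lib/www/html/hello'
```
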
### Change the default storage class

@ -1,6 +1,6 @@

# Verify your cluster

Constellation's [attestation feature](../architecture/attestation.md) allows you, or a third party, to verify the integrity and confidentiality of your Constellation cluster.
Constellation's [attestation feature](../architecture/security/attestation.md) allows you, or a third party, to verify the integrity and confidentiality of your Constellation cluster.

## Fetch measurements

@ -21,48 +21,48 @@ The configuration file then contains a list of `measurements` similar to the fol

```yaml
# ...
measurements:
  0:
    expected: "0f35c214608d93c7a6e68ae7359b4a8be5a0e99eea9107ece427c4dea4e439cf"
    warnOnly: false
  4:
    expected: "02c7a67c01ec70ffaf23d73a12f749ab150a8ac6dc529bda2fe1096a98bf42ea"
    warnOnly: false
  5:
    expected: "e6949026b72e5045706cd1318889b3874480f7a3f7c5c590912391a2d15e6975"
    warnOnly: true
  8:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
  9:
    expected: "f0a6e8601b00e2fdc57195686cd4ef45eb43a556ac1209b8e25d993213d68384"
    warnOnly: false
  11:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
  12:
    expected: "da99eb6cf7c7fbb692067c87fd5ca0b7117dc293578e4fea41f95d3d3d6af5e2"
    warnOnly: false
  13:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
  14:
    expected: "d7c4cc7ff7933022f013e03bdee875b91720b5b86cf1753cad830f95e791926f"
    warnOnly: true
  15:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
# ...
```

Each entry specifies the expected measurement value of a Constellation node, and whether the measurement should be enforced (`warnOnly: false`), or only a warning should be logged (`warnOnly: true`).
By default, the subset of the [available measurements](../architecture/attestation.md#runtime-measurements) that can be locally reproduced and verified is enforced.
By default, the subset of the [available measurements](../architecture/security/attestation.md#runtime-measurements) that can be locally reproduced and verified is enforced.

During attestation, the validating side (CLI or [join service](../architecture/microservices.md#joinservice)) compares each measurement reported by the issuing side (first node or joining node) individually.
During attestation, the validating side (CLI or [join service](../architecture/components/microservices.md#joinservice)) compares each measurement reported by the issuing side (first node or joining node) individually.
For mismatching measurements that have `warnOnly` set to `true`, only a warning is emitted.
For mismatching measurements that have `warnOnly` set to `false`, an error is emitted and attestation fails.
If attestation fails for a new node, it isn't permitted to join the cluster.
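
The expected values are specific to the configured node image. If you switch image versions, re-fetching the signed reference measurements keeps the configuration in sync; a minimal sketch (see `constellation config fetch-measurements --help` for available options):

```bash
# Download the reference measurements for the image configured in
# constellation-conf.yaml, verify their signature, and update the config.
constellation config fetch-measurements
```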

## The *verify* command
## The _verify_ command

:::note
The steps below are purely optional. They're automatically executed by `constellation apply` when you initialize your cluster. The `constellation verify` command mostly has an illustrative purpose.

@ -76,9 +76,9 @@ constellation verify [--cluster-id ...]

From the attestation statement, the command verifies the following properties:

* The cluster is using the correct Confidential VM (CVM) type.
* Inside the CVMs, the correct node images are running. The node images are identified through the measurements obtained in the previous step.
* The unique ID of the cluster matches the one from your `constellation-state.yaml` file or the one passed in via `--cluster-id`.
- The cluster is using the correct Confidential VM (CVM) type.
- Inside the CVMs, the correct node images are running. The node images are identified through the measurements obtained in the previous step.
- The unique ID of the cluster matches the one from your `constellation-state.yaml` file or the one passed in via `--cluster-id`.

Once the above properties are verified, you know that you are talking to the right Constellation cluster and that it's in a good, trustworthy state.
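
As a usage sketch, assuming you run the command from the workspace that contains `constellation-conf.yaml` and `constellation-state.yaml`:

```bash
# Reads the expected measurements from constellation-conf.yaml and the
# cluster ID from constellation-state.yaml; pass --cluster-id to override.
constellation verify
```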

@ -86,9 +86,9 @@ Once the above properties are verified, you know that you are talking to the rig

The `verify` command also allows you to verify any Constellation deployment that you have network access to. For this you need the following:

* The IP address of a running Constellation cluster's [VerificationService](../architecture/microservices.md#verificationservice). The `VerificationService` is exposed via a `NodePort` service using the external IP address of your cluster. Run `kubectl get nodes -o wide` and look for `EXTERNAL-IP`.
* The cluster's *clusterID*. See [cluster identity](../architecture/keys.md#cluster-identity) for more details.
* A `constellation-conf.yaml` file with the expected measurements of the cluster in your working directory.
- The IP address of a running Constellation cluster's [VerificationService](../architecture/components/microservices.md#verificationservice). The `VerificationService` is exposed via a `NodePort` service using the external IP address of your cluster. Run `kubectl get nodes -o wide` and look for `EXTERNAL-IP`.
- The cluster's _clusterID_. See [cluster identity](../architecture/security/keys.md#cluster-identity) for more details.
- A `constellation-conf.yaml` file with the expected measurements of the cluster in your working directory.
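
To gather the first two pieces of information with `kubectl`, something like the following can help. The service lookup is an assumption about naming conventions, so adjust the filter to whatever your cluster shows:

```bash
# External IP of a node (EXTERNAL-IP column); the VerificationService is reachable on it.
kubectl get nodes -o wide
# Find the NodePort that exposes the VerificationService (service name may vary).
kubectl get svc --all-namespaces | grep -i verification
```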

For example:

@ -22,6 +22,7 @@
"@mdx-js/react": "^3.0.0",
"asciinema-player": "^3.8.0",
"clsx": "^2.0.0",
"npm": "^10.9.0",
"prism-react-renderer": "^2.3.0",
"react": "^18.0.0",
"react-dom": "^18.0.0",