Release docs for v2.1 (#222)
BIN
docs/versioned_docs/version-2.1/_media/benchmark_api_dpl.png
Normal file
After Width: | Height: | Size: 20 KiB |
BIN
docs/versioned_docs/version-2.1/_media/benchmark_api_pods.png
Normal file
After Width: | Height: | Size: 18 KiB |
BIN
docs/versioned_docs/version-2.1/_media/benchmark_api_svc.png
Normal file
After Width: | Height: | Size: 19 KiB |
BIN
docs/versioned_docs/version-2.1/_media/benchmark_io.png
Normal file
After Width: | Height: | Size: 22 KiB |
BIN
docs/versioned_docs/version-2.1/_media/benchmark_net.png
Normal file
After Width: | Height: | Size: 17 KiB |
657
docs/versioned_docs/version-2.1/_media/concept-constellation.svg
Normal file
After Width: | Height: | Size: 91 KiB |
847
docs/versioned_docs/version-2.1/_media/concept-managed.svg
Normal file
After Width: | Height: | Size: 104 KiB |
BIN
docs/versioned_docs/version-2.1/_media/example-emojivoto.jpg
Normal file
After Width: | Height: | Size: 138 KiB |
BIN
docs/versioned_docs/version-2.1/_media/product-overview-dark.png
Normal file
After Width: | Height: | Size: 154 KiB |
BIN
docs/versioned_docs/version-2.1/_media/product-overview.png
Normal file
After Width: | Height: | Size: 156 KiB |
1287
docs/versioned_docs/version-2.1/_media/tcb.svg
Normal file
After Width: | Height: | Size: 237 KiB |
237
docs/versioned_docs/version-2.1/architecture/attestation.md
Normal file
@ -0,0 +1,237 @@
# Attestation

This page explains Constellation's attestation process and highlights the cornerstones of its trust model.

## Terms

The following lists a few terms and concepts that help to understand the attestation concept of Constellation.

### Trusted Platform Module (TPM)

A TPM chip is a dedicated tamper-resistant crypto-processor.
It can securely store artifacts such as passwords, certificates, encryption keys, or *runtime measurements* (more on this below).
When a TPM is implemented in software, it's typically called a *virtual* TPM (vTPM).

### Runtime measurement

A runtime measurement is a cryptographic hash of the memory pages of a so-called *runtime component*. Runtime components of interest typically include a system's bootloader or OS kernel.

### Platform Configuration Register (PCR)

A Platform Configuration Register (PCR) is a memory location in the TPM that has some unique properties.
To store a new value in a PCR, the existing value is extended with the new value as follows:

```
PCR[N] = HASHalg( PCR[N] || ArgumentOfExtend )
```

The PCRs are typically used to store runtime measurements.
The new value of a PCR is always an extension of the existing value.
Thus, storing the measurements of multiple components into the same PCR irreversibly links them together.
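
To make the extend operation concrete, here's a short Go sketch of it using SHA-256 (an illustration, not Constellation's implementation):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// extend computes the new PCR value from the old value and a measurement,
// mirroring PCR[N] = SHA-256( PCR[N] || digest ).
func extend(pcr [32]byte, digest [32]byte) [32]byte {
	return sha256.Sum256(append(pcr[:], digest[:]...))
}

func main() {
	var pcr [32]byte // PCRs start out zeroed
	componentA := sha256.Sum256([]byte("bootloader image"))
	componentB := sha256.Sum256([]byte("kernel image"))

	// Extending two measurements into the same PCR links them:
	// the final value depends on both digests and their order.
	pcr = extend(pcr, componentA)
	pcr = extend(pcr, componentB)
	fmt.Printf("PCR value: %x\n", pcr)
}
```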

### Measured boot

Measured boot builds on the concept of chained runtime measurements.
Each component in the boot chain loads and measures the next component into the PCR before executing it.
By comparing the resulting PCR values against trusted reference values, the integrity of the entire boot chain and, thereby, the running system can be ensured.
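
Verification can then be sketched as replaying a log of trusted reference measurements and comparing the result to the quoted PCR value (again illustrative Go, assuming SHA-256 PCRs):

```go
package measuredboot

import (
	"bytes"
	"crypto/sha256"
)

// replay recomputes the expected PCR value from a log of measurements,
// applying the extend operation for each boot component in order.
func replay(measurements [][32]byte) [32]byte {
	var pcr [32]byte
	for _, m := range measurements {
		pcr = sha256.Sum256(append(pcr[:], m[:]...))
	}
	return pcr
}

// trustworthy reports whether a quoted PCR value matches the value
// recomputed from trusted reference measurements.
func trustworthy(quoted [32]byte, reference [][32]byte) bool {
	expected := replay(reference)
	return bytes.Equal(quoted[:], expected[:])
}
```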

### Remote attestation (RA)

Remote attestation is the process of verifying certain properties of an application or platform, such as integrity and confidentiality, from a remote location.
In the case of a measured boot, the goal is to obtain a signed attestation statement on the PCR values of the boot measurements.
The statement can then be verified and compared to a set of trusted reference values.
This way, the integrity of the platform can be ensured before sharing secrets with it.

### Confidential virtual machine (CVM)

Confidential computing (CC) is the protection of data in use with hardware-based trusted execution environments (TEEs).
With CVMs, TEEs encapsulate entire virtual machines and isolate them against the hypervisor, other VMs, and direct memory access.
After loading the initial VM image into encrypted memory, the hypervisor calls on a secure processor to measure these initial memory pages.
The secure processor locks these pages and generates an attestation report on the initial page measurements.
CVM memory pages are encrypted with a key that resides inside the secure processor, ensuring that only the guest VM can access them.
The attestation report is signed by the secure processor and can be verified using remote attestation via the certificate authority of the hardware vendor.
Such an attestation statement guarantees the confidentiality and integrity of a CVM.

### Attested TLS (aTLS)

In a CC environment, attested TLS (aTLS) can be used to establish secure connections between two parties using the remote attestation features of the CC components.

aTLS modifies the TLS handshake by embedding an attestation statement into the TLS certificate.
Instead of relying on a certificate authority, aTLS uses this attestation statement to establish trust in the certificate.

The protocol can be used by clients to verify a server certificate, by a server to verify a client certificate, or for mutual verification (mutual aTLS).
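
The following Go sketch illustrates the idea on the server side: a self-signed certificate carries an attestation statement over its own public key in a custom extension. `issueAttestation` is a hypothetical stand-in for the platform's attestation mechanism, and the OID is a placeholder:

```go
package atls

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/asn1"
	"math/big"
	"time"
)

// issueAttestation is a hypothetical stand-in for the platform's
// attestation mechanism, e.g. a TPM quote over the given user data.
func issueAttestation(userData []byte) []byte {
	return nil // platform-specific
}

// certificate builds a self-signed certificate whose custom extension
// carries an attestation statement over the certificate's public key.
// A verifier checks the attestation instead of a CA signature.
func certificate() (tls.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return tls.Certificate{}, err
	}
	pubDER, err := x509.MarshalPKIXPublicKey(&key.PublicKey)
	if err != nil {
		return tls.Certificate{}, err
	}
	// Binding: attest the hash of this key pair's public key.
	keyHash := sha256.Sum256(pubDER)

	template := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "aTLS server"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		ExtraExtensions: []pkix.Extension{{
			Id:    asn1.ObjectIdentifier{2, 99999, 1}, // placeholder OID
			Value: issueAttestation(keyHash[:]),
		}},
	}
	der, err := x509.CreateCertificate(rand.Reader, template, template, &key.PublicKey, key)
	if err != nil {
		return tls.Certificate{}, err
	}
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, nil
}
```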

## Overview

The challenge for Constellation is to lift a CVM's attestation statement to the Kubernetes software layer and make it end-to-end verifiable.
From there, Constellation needs to expand the attestation from a single CVM to the entire cluster.

The [*JoinService*](components.md#joinservice) and [*VerificationService*](components.md#verificationservice) are where it all comes together.
Internally, the *JoinService* uses remote attestation to securely join CVM nodes to the cluster.
Externally, the *VerificationService* provides an attestation statement for the cluster's CVMs and configuration.

The following explains the details of both steps.

## Node attestation

The idea is that Constellation nodes should have verifiable integrity from the CVM hardware measurement up to the Kubernetes software layer.
The solution is a verifiable boot chain and an integrity-protected runtime environment.

Constellation uses measured boot within CVMs, measuring each component in the boot process before executing it.
Outside of CC, measured boot is usually implemented via TPMs.
CVM technologies differ in how they implement runtime measurements, but the general concepts are similar to those of a TPM.
For simplicity, we use TPM terminology like *PCR* in the following.

When a Constellation node image boots inside a CVM, it uses measured boot for all stages and components of the boot chain.
This process goes up to the root filesystem.
The root filesystem is mounted read-only with integrity protection, guaranteeing forward integrity.
For details on the image and boot stages, see the [image architecture](../architecture/images.md) documentation.
Any changes to the image will inevitably change the measured boot's PCR values.
To create a node attestation statement, the Constellation image obtains a CVM attestation statement from the hardware.
This includes the runtime measurements and thereby binds the measured boot results to the CVM hardware measurement.

In addition to the image measurements, Constellation extends a PCR during the [initialization phase](../workflows/create.md#the-init-step) that irrevocably marks the node as initialized.
The measurement is created using the [*clusterID*](../architecture/keys.md#cluster-identity), tying all future attestation statements to this ID.
Thereby, an attestation statement is unique for every cluster, and a node can be identified unambiguously as being initialized.

To verify an attestation, the hardware's signature and the attestation statement are verified first to establish trust in the contained runtime measurements.
If successful, the measurements are verified against the trusted values of the particular Constellation release version.
Finally, the measurement of the *clusterID* can be compared by calculating it with the [master secret](keys.md#master-secret).

### Runtime measurements

Constellation uses runtime measurements to implement the measured boot approach.
As stated above, the underlying hardware technology and guest firmware differ in their implementations of runtime measurements.
The following gives a detailed description of the available measurements in the different cloud environments.

The runtime measurements contain two types of values.

First are the values produced by the cloud infrastructure and firmware of the CVM.
Depending on the cloud environment, they can contain non-reproducible values.
Those correspond to measurements of closed-source firmware components and other values controlled by the cloud provider.
While not directly verifiable, they can be compared against previously observed values.
As part of the [signed image measurements](#chain-of-trust), Constellation provides measurements that are known, previously observed values.
Thereby, Constellation enables users to identify changes and deviations and allows them to act accordingly.
See how to [fetch](../workflows/verify-cluster.md#fetch-measurements) the latest measurements and verify a cluster.

Second are the measurements produced by the Constellation bootloader and boot chain itself.
The Constellation Bootloader is the first part of the Constellation stack that takes over from the CVM firmware and measures the rest of the boot chain.
Refer to [images](images.md) for more details on the Constellation boot chain.
The Constellation [Bootstrapper](components.md#bootstrapper) is the first user-mode component that runs in a Constellation image.
It extends the PCRs with the [IDs](keys.md#cluster-identity) of the cluster, marking a node as initialized.

Constellation allows you to specify in the config which measurements should be enforced during the attestation process.
Enforcing non-reproducible measurements controlled by the cloud provider means that changes in these values require manual updates to the cluster's config.
By default, Constellation only enforces measurements that are stable values produced by the infrastructure or by Constellation directly.
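
This policy can be illustrated with a Go sketch, under the assumption that each reference measurement carries a flag marking it as enforced or warn-only (the type and function names are hypothetical, not Constellation's actual API):

```go
package attestation

import (
	"bytes"
	"fmt"
	"log"
)

// Measurement is a reference value for one PCR. WarnOnly marks values,
// such as CSP-controlled firmware measurements, that aren't enforced.
type Measurement struct {
	Expected []byte
	WarnOnly bool
}

// Validate compares measured PCR values against the reference map.
// Enforced mismatches fail attestation; warn-only mismatches are logged.
func Validate(measured map[uint32][]byte, reference map[uint32]Measurement) error {
	for idx, ref := range reference {
		if bytes.Equal(measured[idx], ref.Expected) {
			continue
		}
		if ref.WarnOnly {
			log.Printf("warning: PCR[%d] deviates from expected value", idx)
			continue
		}
		return fmt.Errorf("PCR[%d] mismatch: attestation failed", idx)
	}
	return nil
}
```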

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

Constellation leverages the [vTPM](https://docs.microsoft.com/en-us/azure/virtual-machines/trusted-launch#vtpm) feature of Azure CVMs for runtime measurements.
The vTPM on Azure CVMs adheres to the [TPM 2.0](https://trustedcomputinggroup.org/resource/tpm-library-specification/) specification of the [Trusted Computing Group](https://trustedcomputinggroup.org/resource/trusted-platform-module-tpm-summary/).
It provides a [measured boot](https://docs.microsoft.com/en-us/azure/security/fundamentals/measured-boot-host-attestation#measured-boot) verification that's based on the trusted launch feature of [Trusted Launch VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/trusted-launch).

The following table lists all PCR values of the vTPM and the measured components.
It also lists which components of the boot chain performed the measurements and whether the value is reproducible and verifiable.
The latter means that the value can be generated offline and compared to the one in the vTPM.

| PCR   | Components                       | Measured by                     | Reproducible and Verifiable |
|-------|----------------------------------|---------------------------------|-----------------------------|
| 0     | Firmware                         | Azure                           | No                          |
| 1     | Firmware                         | Azure                           | No                          |
| 2     | Firmware                         | Azure                           | No                          |
| 3     | Firmware                         | Azure                           | No                          |
| 4     | Constellation Bootloader, GRUB   | Azure, Constellation Bootloader | Yes                         |
| 5     | Reserved                         | Azure                           | No                          |
| 6     | VM Unique ID                     | Azure                           | No                          |
| 7     | Secure Boot State                | Azure, Constellation Bootloader | No                          |
| 8     | Kernel command line, GRUB config | Constellation Bootloader        | Yes                         |
| 9     | Kernel, initramfs                | Constellation Bootloader        | Yes                         |
| 10    | Reserved                         | -                               | No                          |
| 11    | Reserved                         | Constellation Bootstrapper      | Yes                         |
| 12    | ClusterID                        | Constellation Bootstrapper      | Yes                         |
| 13–23 | Unused                           | -                               | -                           |

</tabItem>
<tabItem value="gcp" label="GCP">

Constellation leverages the [vTPM](https://cloud.google.com/compute/confidential-vm/docs/about-cvm) feature of CVMs on GCP for runtime measurements.
Note that the vTPM on GCP doesn't run inside the hardware-protected CVM context, but is emulated at the hypervisor level.

The vTPM on GCP CVMs adheres to the [TPM 2.0](https://trustedcomputinggroup.org/resource/tpm-library-specification/) specification of the [Trusted Computing Group](https://trustedcomputinggroup.org/resource/trusted-platform-module-tpm-summary/).
It provides a [launch attestation report](https://cloud.google.com/compute/confidential-vm/docs/monitoring#about_launch_attestation_report_events) that's based on the Measured Boot feature of [Shielded VMs](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#measured-boot).

The following table lists all PCR values of the vTPM and the measured components.
It also lists which components of the boot chain performed the measurements and whether the value is reproducible and verifiable.
The latter means that the value can be generated offline and compared to the one in the vTPM.

| PCR   | Components                       | Measured by                   | Reproducible and Verifiable |
|-------|----------------------------------|-------------------------------|-----------------------------|
| 0     | CVM constant string              | GCP                           | No                          |
| 1     | Reserved                         | GCP                           | No                          |
| 2     | Reserved                         | GCP                           | No                          |
| 3     | Reserved                         | GCP                           | No                          |
| 4     | Constellation Bootloader, GRUB   | GCP, Constellation Bootloader | Yes                         |
| 5     | Disk GUID partition table        | GCP                           | No                          |
| 6     | Disk GUID partition table        | GCP                           | No                          |
| 7     | GCP Secure Boot Policy           | GCP, Constellation Bootloader | No                          |
| 8     | Kernel command line, GRUB config | Constellation Bootloader      | Yes                         |
| 9     | Kernel, initramfs                | Constellation Bootloader      | Yes                         |
| 10    | Reserved                         | -                             | No                          |
| 11    | Reserved                         | Constellation Bootstrapper    | Yes                         |
| 12    | ClusterID                        | Constellation Bootstrapper    | Yes                         |
| 13–23 | Unused                           | -                             | -                           |

</tabItem>
</tabs>

## Cluster attestation

Cluster-facing, Constellation's [*JoinService*](components.md#joinservice) verifies each node joining the cluster given the configured ground truth runtime measurements.
User-facing, the [*VerificationService*](components.md#verificationservice) provides an interface to verify a node using remote attestation.
By verifying the first node during the [initialization](components.md#bootstrapper) and configuring the ground truth measurements that are subsequently enforced by the *JoinService*, the whole cluster is verified in a transitive way.

### Cluster-facing attestation

The *JoinService* is provided with the runtime measurements of the whitelisted Constellation image version as the ground truth.
During the initialization and the cluster bootstrapping, each node connects to the *JoinService* using [aTLS](#attested-tls-atls).
During the handshake, the node transmits an attestation statement including its runtime measurements.
The *JoinService* verifies that statement and compares the measurements against the ground truth.
For details of the initialization process, see the [component descriptions](components.md).

After the initialization, every node updates its runtime measurements with the *clusterID* value, marking it irreversibly as initialized.
When an initialized node tries to join another cluster, its measurements inevitably differ from those of an uninitialized node, and it will be declined.

### User-facing attestation

The [*VerificationService*](components.md#verificationservice) provides an endpoint for obtaining its hardware-based remote attestation statement, which includes the runtime measurements.
A user can [verify](../workflows/verify-cluster.md) this statement and compare the measurements against the configured ground truth and, thus, verify the identity and integrity of all Constellation components and the cluster configuration. Subsequently, the user knows that the entire cluster is in the expected state and is trustworthy.

## Chain of trust

So far, this page described how an entire Constellation cluster can be verified using hardware attestation capabilities and runtime measurements.
The last missing link is how the ground truth in the form of runtime measurements can be securely distributed to the verifying party.

Building Constellation images entails creating the ground truth runtime measurements.
The builds of Constellation images are reproducible, and the measurements of an image can be recalculated and verified by everyone.
With every release, Edgeless Systems publishes signed runtime measurements.

The Constellation CLI release binary is also signed by Edgeless Systems.
The [installation guide](../architecture/orchestration.md#verify-your-cli-installation) explains how this signature can be verified.

The CLI contains the public key required to verify signed runtime measurements from Edgeless Systems.
When a new cluster is [created](../workflows/create.md) or updated, the CLI automatically verifies the measurements for the selected image.
Alternatively, Constellation has the option to use and verify custom-built images.
For this, the CLI can be provided with a custom public key for verification.
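
The verification idea can be sketched in a few lines of Go, assuming for simplicity that the measurements are published with a detached Ed25519 signature (the actual CLI uses cosign and its formats):

```go
package verify

import (
	"crypto/ed25519"
	"errors"
)

// VerifyMeasurements checks the signature over the serialized runtime
// measurements before they're used as reference values.
// An Ed25519 detached signature is assumed here for simplicity.
func VerifyMeasurements(pub ed25519.PublicKey, measurementsJSON, sig []byte) error {
	if !ed25519.Verify(pub, measurementsJSON, sig) {
		return errors.New("signature verification failed: don't trust these measurements")
	}
	return nil
}
```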

Thus, there's a chain of trust based on cryptographic signatures, which goes from CLI to runtime measurements to images. This is illustrated in the following diagram.

```mermaid
flowchart LR
  A[Edgeless]-- "signs (cosign)" -->B[CLI]
  C[User]-- "verifies (cosign)" -->B[CLI]
  B[CLI]-- "contains" -->D["Public Key"]
  A[Edgeless]-- "signs" -->E["Runtime measurements"]
  D["Public Key"]-- "verifies" -->E["Runtime measurements"]
  E["Runtime measurements"]-- "verify" -->F["Constellation cluster"]
```
81
docs/versioned_docs/version-2.1/architecture/components.md
Normal file
@ -0,0 +1,81 @@
# Components

Constellation takes care of bootstrapping and initializing a Confidential Kubernetes cluster.
During the lifetime of the cluster, it handles day-2 operations such as key management, remote attestation, and updates.
These features are provided by several components:

* The [Bootstrapper](components.md#bootstrapper) initializes a Constellation node and bootstraps the cluster
* The [JoinService](components.md#joinservice) joins new nodes to an existing cluster
* The [VerificationService](components.md#verificationservice) provides remote attestation functionality
* The [Key Management Service (KMS)](components.md#kms) manages Constellation-internal keys
* The [AccessManager](components.md#accessmanager) manages node SSH access

The relations between components are shown in the following diagram:

```mermaid
flowchart LR
    subgraph admin [Admin's machine]
        A[Constellation CLI]
    end
    subgraph img [CoreOS image]
        B[CoreOS]
        C[Bootstrapper]
    end
    subgraph Kubernetes
        D[AccessManager]
        E[JoinService]
        F[KMS]
        G[VerificationService]
    end
    A -- deploys --> B
    B -- starts --> C
    C -- deploys --> D
    C -- deploys --> E
    C -- deploys --> F
    C -- deploys --> G
```

## Bootstrapper

The *Bootstrapper* is the first component launched after booting a Constellation node image.
It sets up that machine as a Kubernetes node and integrates that node into the Kubernetes cluster.
To this end, the *Bootstrapper* first downloads and [verifies](https://blog.sigstore.dev/kubernetes-signals-massive-adoption-of-sigstore-for-protecting-open-source-ecosystem-73a6757da73) the [Kubernetes components](https://kubernetes.io/docs/concepts/overview/components/) at the configured versions.
The *Bootstrapper* tries to find an existing cluster and, if successful, communicates with the [JoinService](components.md#joinservice) to join the node.
Otherwise, it waits for an initialization request to create a new Kubernetes cluster.

## JoinService

The *JoinService* runs as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) on each control-plane node.
New nodes (at cluster start, or later through autoscaling) send a request to the service over [attested TLS (aTLS)](attestation.md#attested-tls-atls).
The *JoinService* verifies the new node's certificate and attestation statement.
If attestation is successful, the new node is supplied with an encryption key from the [*KMS*](components.md#kms) for its state disk, and a Kubernetes bootstrap token.

```mermaid
sequenceDiagram
    participant New node
    participant JoinService
    New node->>JoinService: aTLS handshake (server side verification)
    JoinService-->>New node: #
    New node->>+JoinService: IssueJoinTicket(DiskUUID, NodeName, IsControlPlane)
    JoinService->>+KMS: GetDataKey(DiskUUID)
    KMS-->>-JoinService: DiskEncryptionKey
    JoinService-->>-New node: DiskEncryptionKey, KubernetesJoinToken, ...
```
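
Expressed as data types, the exchange above might look as follows (a hypothetical sketch; the field names follow the diagram, not necessarily Constellation's actual API):

```go
package joinproto

// IssueJoinTicketRequest is what a booting node sends after the aTLS
// handshake. The node's attestation statement was already verified as
// part of the handshake itself.
type IssueJoinTicketRequest struct {
	DiskUUID       string // identifies the node's state disk
	NodeName       string
	IsControlPlane bool
}

// IssueJoinTicketResponse contains everything the node needs to unlock
// its state disk and join the Kubernetes cluster.
type IssueJoinTicketResponse struct {
	DiskEncryptionKey   []byte // data key fetched from the KMS for DiskUUID
	KubernetesJoinToken string // bootstrap token for joining the cluster
}
```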

## VerificationService

The *VerificationService* runs as a DaemonSet on each node.
It provides user-facing functionality for remote attestation during the cluster's lifetime via an endpoint for [verifying the cluster](attestation.md#cluster-attestation).
Read more about the hardware-based [attestation feature](attestation.md) of Constellation and how to [verify](../workflows/verify-cluster.md) a cluster on the client side.

## KMS

The *KMS* runs as a DaemonSet on each control-plane node.
It implements the key management for the [storage encryption keys](keys.md#storage-encryption) in Constellation. These keys are used for the [state disk](images.md#state-disk) of each node and the [transparently encrypted storage](encrypted-storage.md) for Kubernetes.
Depending on whether the [Constellation-managed](keys.md#constellation-managed-key-management) or [user-managed](keys.md#user-managed-key-management) mode is used, the *KMS* either holds the key encryption key (KEK) directly or calls an external service for key derivation.

## AccessManager

The *AccessManager* runs as a DaemonSet on each node.
It manages the user's SSH access to nodes as specified in the config.
57
docs/versioned_docs/version-2.1/architecture/encrypted-storage.md
Normal file
@ -0,0 +1,57 @@
# Encrypted persistent storage

Confidential VMs provide runtime memory encryption to protect data in use.
In the context of Kubernetes, this is sufficient for the confidentiality and integrity of stateless services.
Consider a front-end web server, for example, that keeps all connection information cached in main memory.
No sensitive data is ever written to an insecure medium.
However, many real-world applications need some form of state or data-lake service that's connected to a persistent storage device and requires encryption at rest.
As described in [Use persistent storage](../workflows/storage.md), cloud service providers (CSPs) use the container storage interface (CSI) to make their storage solutions available to Kubernetes workloads.
These CSI storage solutions often support some sort of encryption.
For example, Google Cloud [encrypts data at rest by default](https://cloud.google.com/security/encryption/default-encryption), without any action required by the customer.

## Cloud provider-managed encryption

CSP-managed storage solutions encrypt the data in the cloud backend before writing it physically to disk.
In the context of confidential computing and Constellation, the CSP and its managed services aren't trusted.
Hence, cloud provider-managed encryption protects your data from offline hardware access to physical storage devices.
It doesn't protect it from anyone with infrastructure-level access to the storage backend or a malicious insider in the cloud platform.
Even with "bring your own key" or similar concepts, the CSP performs the encryption process with access to the keys and plaintext data.

In the security model of Constellation, securing persistent storage and thereby data at rest requires that all cryptographic operations are performed inside a trusted execution environment.
Consequently, using CSP-managed encryption of persistent storage usually isn't an option.

## Constellation-managed encryption

Constellation provides CSI drivers for storage solutions in all major clouds with built-in encryption support.
Block storage provisioned by the CSP is [mapped](https://guix.gnu.org/manual/en/html_node/Mapped-Devices.html) using the [dm-crypt](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-crypt.html), and optionally the [dm-integrity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-integrity.html), kernel modules, before it's formatted and accessed by the Kubernetes workloads.
All cryptographic operations happen inside the trusted environment of the confidential Constellation node.

Note that for integrity-protected disks, [volume expansion](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/) isn't supported.

By default, the driver uses data encryption keys (DEKs) issued by the Constellation [*KMS*](components.md#kms).
The DEKs are in turn derived from Constellation's key encryption key (KEK), which is directly derived from the [master secret](keys.md#master-secret).
This is the recommended mode of operation, and it also requires the least amount of setup by the cluster administrator.

Alternatively, the driver can be configured to use a key management system to store and access KEKs and DEKs.

Refer to [keys and cryptography](keys.md) for more details on key management in Constellation.

Once deployed and configured, the CSI driver ensures transparent encryption and integrity of all persistent volumes provisioned via its storage class.
Data at rest is secured without any additional actions required by the developer.

## Cryptographic algorithms

This section gives an overview of the libraries, cryptographic algorithms, and their configurations used in Constellation's CSI drivers.

### dm-crypt

To interact with the dm-crypt kernel module, Constellation uses [libcryptsetup](https://gitlab.com/cryptsetup/cryptsetup/).
New devices are formatted as [LUKS2](https://gitlab.com/cryptsetup/LUKS2-docs/-/tree/master) partitions with a sector size of 4096 bytes.
The key derivation function used is [Argon2id](https://datatracker.ietf.org/doc/html/rfc9106) with the [recommended parameters for memory-constrained environments](https://datatracker.ietf.org/doc/html/rfc9106#section-7.4) of 3 iterations and 64 MiB of memory, utilizing 4 parallel threads.
For encryption, Constellation uses AES in XTS-Plain64 mode with a key size of 512 bits.
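
For illustration, an equivalent formatting step expressed as a `cryptsetup` invocation from Go (a sketch only; Constellation uses libcryptsetup bindings rather than the CLI, and `/dev/sdX` is a placeholder):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Format a device with the parameters described above:
	// LUKS2, AES-XTS-Plain64 with a 512-bit key, 4096-byte sectors,
	// and Argon2id with 3 iterations, 64 MiB of memory, and 4 threads.
	cmd := exec.Command("cryptsetup", "luksFormat",
		"--type", "luks2",
		"--cipher", "aes-xts-plain64",
		"--key-size", "512",
		"--sector-size", "4096",
		"--pbkdf", "argon2id",
		"--pbkdf-force-iterations", "3",
		"--pbkdf-memory", "65536", // in KiB
		"--pbkdf-parallel", "4",
		"/dev/sdX")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("cryptsetup failed: %v\n%s", err, out)
	}
}
```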

### dm-integrity

To interact with the dm-integrity kernel module, Constellation uses [libcryptsetup](https://gitlab.com/cryptsetup/cryptsetup/).
When enabled, the data integrity algorithm used is [HMAC](https://datatracker.ietf.org/doc/html/rfc2104) with SHA-256 as the hash function.
The tag size is 32 bytes.
45
docs/versioned_docs/version-2.1/architecture/images.md
Normal file
@ -0,0 +1,45 @@
# Constellation images

Constellation uses [Fedora CoreOS](https://docs.fedoraproject.org/en-US/fedora-coreos/) as the operating system running inside confidential VMs. This Linux distribution is optimized for containers and is designed to have an immutable filesystem.
The Constellation images build on that concept by leveraging measured boot and verification of the root filesystem.

## Measured boot

```mermaid
flowchart LR
    Firmware --> Bootloader
    Bootloader --> kernel
    Bootloader --> initramfs
    initramfs --> rootfs[root filesystem]
```

Measured boot uses a Trusted Platform Module (TPM) to measure every part of the boot process. This allows for verification of the integrity of a running system at any point in time. To ensure correct measurements of every stage, each stage is responsible for measuring the next stage before transitioning.

### Firmware

With confidential VMs, the firmware is the root of trust and is measured automatically at boot. After initialization, the firmware will load and measure the bootloader before executing it.

### Bootloader

The bootloader is the first modifiable part of the boot chain. It's tasked with loading the kernel and initramfs and setting the kernel command line. The Constellation bootloader measures these components before starting the kernel.

### initramfs

The initramfs is a small filesystem loaded to prepare the actual root filesystem. The Constellation initramfs maps the block device containing the root filesystem with [dm-verity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/verity.html). The initramfs then mounts the root filesystem from the mapped block device.

dm-verity provides integrity checking using a cryptographic hash tree. When a block is read, its integrity is checked by verifying the tree against a trusted root hash. The initramfs reads this root hash from the previously measured kernel command line. Thus, if any block of the root filesystem's device is modified on disk, trying to read the modified block will result in a kernel panic at runtime.
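
The principle can be illustrated with a toy, single-level hash tree in Go (a simplification; dm-verity uses a multi-level tree over fixed-size blocks):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyBlock checks one data block against a trusted root hash, using a
// flat list of leaf hashes as a stand-in for dm-verity's hash tree.
func verifyBlock(block []byte, index int, leaves [][32]byte, root [32]byte) error {
	// 1. The block must match its leaf hash.
	if leaf := sha256.Sum256(block); !bytes.Equal(leaf[:], leaves[index][:]) {
		return fmt.Errorf("block %d was modified on disk", index)
	}
	// 2. The leaves must hash up to the trusted root
	//    (read from the measured kernel command line).
	var concat []byte
	for _, l := range leaves {
		concat = append(concat, l[:]...)
	}
	if got := sha256.Sum256(concat); !bytes.Equal(got[:], root[:]) {
		return fmt.Errorf("hash tree doesn't match trusted root hash")
	}
	return nil
}

func main() {
	blocks := [][]byte{[]byte("block-0"), []byte("block-1")}
	leaves := make([][32]byte, len(blocks))
	var concat []byte
	for i, b := range blocks {
		leaves[i] = sha256.Sum256(b)
		concat = append(concat, leaves[i][:]...)
	}
	root := sha256.Sum256(concat)
	fmt.Println(verifyBlock(blocks[0], 0, leaves, root)) // <nil>
}
```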

After mounting the root filesystem, the initramfs will switch over and start the `init` process of the integrity-protected root filesystem.

## State disk

In addition to the read-only root filesystem, each Constellation node has a disk for storing state data.
This disk is mounted readable and writable by the initramfs and contains data that should persist across reboots.
Such data can contain sensitive information and, therefore, must be stored securely.
To that end, the state disk is protected by authenticated encryption.
See the section on [keys and encryption](keys.md#storage-encryption) for more information on the cryptographic primitives in use.

## Kubernetes components

During initialization, the [*Bootstrapper*](components.md#bootstrapper) downloads and [verifies](https://blog.sigstore.dev/kubernetes-signals-massive-adoption-of-sigstore-for-protecting-open-source-ecosystem-73a6757da73) the [Kubernetes components](https://kubernetes.io/docs/concepts/overview/components/) as configured by the user.
They're stored on the state partition and can be updated once new releases need to be installed.
127
docs/versioned_docs/version-2.1/architecture/keys.md
Normal file
@ -0,0 +1,127 @@
# Key management and cryptographic primitives

Constellation protects and isolates your cluster and workloads.
To that end, cryptography is the foundation that ensures the confidentiality and integrity of all components.
Evaluating the security and compliance of Constellation requires a precise understanding of the cryptographic primitives and keys used.
The following gives an overview of the architecture and explains the technical details.

## Confidential VMs

Confidential VM (CVM) technology comes with hardware and software components for memory encryption, isolation, and remote attestation.
For details on the implementations and cryptographic soundness, refer to the hardware vendors' documentation and advisories.

## Master secret

The master secret is the cryptographic material used for deriving the [*clusterID*](#cluster-identity) and the *key encryption key (KEK)* for [storage encryption](#storage-encryption).
It's generated during the bootstrapping of a Constellation cluster.
It can either be managed by [Constellation](#constellation-managed-key-management) or an [external key management system](#user-managed-key-management).
In case of [recovery](#recovery-and-migration), the master secret allows decrypting the state and recovering a Constellation cluster.

## Cluster identity

The identity of a Constellation cluster consists of two parts:

* *baseID:* The identity of a valid and measured, uninitialized Constellation node
* *clusterID:* The identity unique to a single initialized Constellation cluster

Using the CVM's attestation mechanism and [measured boot up to the read-only root filesystem](images.md) guarantees *baseID*.
The *clusterID* is derived from the master secret and a cryptographically random salt. It's unique for every Constellation cluster.
The remote attestation statement of a Constellation cluster combines *baseID* and *clusterID* for a verifiable, unspoofable, unique identity.
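
As an illustration, such a derivation could look like an HKDF over the master secret with a random salt (hypothetical parameters and labels, not Constellation's actual derivation):

```go
package keys

import (
	"crypto/rand"
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveClusterID derives a 32-byte cluster identity from the master
// secret and a fresh random salt. The salt makes the ID unique even if
// a master secret were ever reused.
func deriveClusterID(masterSecret []byte) (clusterID, salt []byte, err error) {
	salt = make([]byte, 32)
	if _, err = io.ReadFull(rand.Reader, salt); err != nil {
		return nil, nil, err
	}
	r := hkdf.New(sha256.New, masterSecret, salt, []byte("clusterID"))
	clusterID = make([]byte, 32)
	if _, err = io.ReadFull(r, clusterID); err != nil {
		return nil, nil, err
	}
	return clusterID, salt, nil
}
```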

## Network encryption

Constellation encrypts all cluster network communication using the [container network interface (CNI)](https://github.com/containernetworking/cni).
See [network encryption](networking.md) for more details.

The Cilium agent running on each cluster node will establish a secure WireGuard tunnel between itself and all other known nodes in the cluster.
Each node automatically creates its own [Curve25519](http://cr.yp.to/ecdh.html) encryption key pair and distributes its public key via Kubernetes.
Each node's public key is then used by other nodes to encrypt and decrypt traffic to and from Cilium-managed endpoints running on that node.
Connections are always encrypted peer-to-peer using [ChaCha20](http://cr.yp.to/chacha.html) with [Poly1305](http://cr.yp.to/mac.html).
WireGuard implements [forward secrecy with key rotation every 2 minutes](https://lists.zx2c4.com/pipermail/wireguard/2017-December/002141.html).
Cilium supports [key rotation](https://docs.cilium.io/en/stable/gettingstarted/encryption-ipsec/#key-rotation) for the long-term node keys via Kubernetes secrets.

## Storage encryption

Constellation supports transparent encryption of persistent storage.
The Linux kernel's device-mapper-based encryption features are used to encrypt the data on the block storage level.
Currently, the following primitives are used for block storage encryption:

* [dm-crypt](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-crypt.html)
* [dm-integrity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-integrity.html)

Adding primitives for integrity protection in the CVM attacker model is under active development and will be available in a future version of Constellation.
See [encrypted storage](encrypted-storage.md) for more details.

As a cluster administrator, when creating a cluster, you can use the Constellation [installation program](orchestration.md) to select one of the following methods for key management:

* Constellation-managed key management
* User-managed key management

### Constellation-managed key management

#### Key material and key derivation

During the creation of a Constellation cluster, the cluster's master secret is used to derive a KEK.
This means creating two clusters with the same master secret will yield the same KEK.
Any data encryption key (DEK) is derived from the KEK via HKDF.
Note that it's recommended to use a unique master secret for every Constellation cluster and not to reuse it (except in case of [recovering](../workflows/recovery.md) a cluster).

#### State and storage

The KEK is derived from the master secret during the initialization.
Subsequently, all other key material is derived from the KEK.
Given the same KEK, any DEK can be derived deterministically from a given identifier.
Hence, there is no need to store DEKs. They can be derived on demand.
After the KEK is derived, it's stored in memory only and never leaves the CVM context.
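
A sketch of this on-demand derivation using HKDF-Expand (illustrative; the identifier scheme and output length are assumptions):

```go
package keys

import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveDEK deterministically derives a DEK from the KEK and an
// identifier, e.g. the UUID of the disk the key is for. Because the
// derivation is deterministic, DEKs never need to be stored.
func deriveDEK(kek []byte, id string, length int) ([]byte, error) {
	r := hkdf.Expand(sha256.New, kek, []byte(id))
	dek := make([]byte, length)
	if _, err := io.ReadFull(r, dek); err != nil {
		return nil, err
	}
	return dek, nil
}
```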

#### Availability

Constellation-managed key management has the same availability as the underlying Kubernetes cluster.
Therefore, the KEK is stored in the [distributed Kubernetes etcd storage](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/) to allow for unexpected but non-fatal (control-plane) node failure.
The etcd storage is backed by the encrypted and integrity-protected [state disk](images.md#state-disk) of the nodes.

#### Recovery

Constellation clusters can be recovered in the event of a disaster, even when all node machines have been stopped and need to be rebooted.
For details on the process, see the [recovery workflow](../workflows/recovery.md).

### User-managed key management

User-managed key management is under active development and will be available soon.
In scenarios where Constellation-managed key management isn't an option, this mode allows you to keep full control of your keys.
For example, compliance requirements may force you to keep your KEKs in an on-prem key management system (KMS).

During the creation of a Constellation cluster, you specify a KEK present in a remote KMS.
This follows the common scheme of "bring your own key" (BYOK).
Constellation will support several KMSs for managing the storage and access of your KEK.
Initially, it will support the following KMSs:

* [AWS KMS](https://aws.amazon.com/kms/)
* [GCP KMS](https://cloud.google.com/security-key-management)
* [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/#product-overview)
* [KMIP-compatible KMS](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=kmip)

Storing the keys in the Cloud KMS of AWS, GCP, or Azure binds the key usage to the particular cloud identity and access management (IAM).
In the future, Constellation will support remote attestation-based access policies for Cloud KMS once available.
Note that using a Cloud KMS limits the isolation and protection to the guarantees of the particular offering.

KMIP support allows you to use your KMIP-compatible on-prem KMS and keep full control over your keys.
This follows the common scheme of "hold your own key" (HYOK).

The KEK is used to encrypt the individual *data encryption keys* (DEKs).
DEKs are generated to encrypt your data before storing it on persistent storage.
After being encrypted by the KEK, the DEKs are stored on dedicated cloud storage for persistence.
Currently, Constellation supports the following cloud storage options:

* [AWS S3](https://aws.amazon.com/s3/)
* [GCP Cloud Storage](https://cloud.google.com/storage)
* [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/#overview)

The DEKs are only present in plaintext form in the encrypted main memory of the CVMs.
Similarly, the cryptographic operations for encrypting data before writing it to persistent storage are performed in the context of the CVMs.

#### Recovery and migration

In the case of a disaster, the KEK can be used to decrypt the DEKs locally, and they can subsequently be used to decrypt and retrieve the data.
In case of migration, configuring the same KEK enables migration of data.
Thus, only the DEK storage needs to be transferred to the new cluster alongside the encrypted data for seamless migration.
23
docs/versioned_docs/version-2.1/architecture/networking.md
Normal file
@ -0,0 +1,23 @@
# Network encryption

Constellation encrypts all pod communication using the [container network interface (CNI)](https://github.com/containernetworking/cni).
To that end, Constellation deploys, configures, and operates the [Cilium](https://cilium.io/) CNI plugin.
Cilium provides [transparent encryption](https://docs.cilium.io/en/stable/gettingstarted/encryption) for all cluster traffic using either IPSec or [WireGuard](https://www.wireguard.com/).
Currently, Constellation only supports WireGuard as the encryption engine.
You can read more about the cryptographic soundness of WireGuard [in their white paper](https://www.wireguard.com/papers/wireguard.pdf).

Cilium is actively working on implementing a feature called [`host-to-host`](https://github.com/cilium/cilium/pull/19401) encryption mode for WireGuard.
With `host-to-host`, all traffic between nodes will be tunneled via WireGuard (host-to-host, host-to-pod, pod-to-host, pod-to-pod).
Until the `host-to-host` feature is released, Constellation enables `pod-to-pod` encryption.
This mode encrypts all traffic between Kubernetes pods using WireGuard tunnels.
Constellation uses an extended version of `pod-to-pod` called *strict* mode.

When using Cilium in the default setup but with encryption enabled, there is a [known issue](https://docs.cilium.io/en/v1.12/gettingstarted/encryption/#egress-traffic-to-not-yet-discovered-remote-endpoints-may-be-unencrypted) that can cause pod-to-pod traffic to be unencrypted.
Constellation uses strict mode to mitigate this issue.
Strict mode changes the default behavior: traffic destined for an unknown endpoint is dropped instead of being sent out in plaintext.
Strict mode can distinguish traffic that's sent to a pod from traffic that's destined for a cluster-external endpoint, since it knows the pods' CIDR range.

The last remaining traffic that's not encrypted is traffic originating from hosts.
This mainly includes health checks from the Kubernetes API server.
Also, traffic proxied over the API server via, for example, `kubectl port-forward` isn't encrypted.
88
docs/versioned_docs/version-2.1/architecture/orchestration.md
Normal file
@ -0,0 +1,88 @@
# Orchestrating Constellation clusters

You can use the CLI to create a cluster on the supported cloud platforms.
The CLI provisions the resources in your cloud environment and triggers the initialization of your cluster.
It uses a set of parameters and an optional configuration file to manage your cluster installation.
The CLI is also used for updating your cluster.

## Workspaces

Each Constellation cluster has an associated *workspace*.
The workspace is where persistent data such as the Constellation state, config, and ID files are stored.
Each workspace is associated with a single cluster and configuration.
Currently, the CLI stores state in the local filesystem, making the current directory the active workspace.
Multiple clusters require multiple workspaces and, hence, multiple directories.
Note that every operation on a cluster must be performed from the directory associated with its workspace.

## Cluster creation process

Releases of Constellation are [published on GitHub](https://github.com/edgelesssys/constellation/releases).

To allow for fine-grained configuration of your cluster and cloud environment, Constellation supports an extensive configuration file with strong defaults.
The CLI provides you with a good default configuration, which can be generated with `constellation config generate`. Some cloud account-specific information is always required and must be set by the user.

The following files are generated during the creation of a Constellation cluster and stored in the current workspace:

* a configuration file
* a state file
* an ID file
* a base64-encoded master secret
* a Kubernetes `kubeconfig` file

Constellation must store the state of its created infrastructure and configuration.
This state is used by Constellation to map real-world resources to your configuration and keep track of metadata.
By default, this state is stored in a local file named `constellation-state.json`.

After the successful creation of your cluster, the CLI will provide you with a Kubernetes `kubeconfig` file.
This file gives you access to your Kubernetes cluster and configures the [kubectl](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) tool.
In addition, the cluster's [identifier](orchestration.md#post-installation-configuration) is returned and stored in a file called `constellation-id.json`.

### Creation process details

1. The CLI `create` command creates the confidential VM (CVM) resources in your cloud environment and configures the network
2. Each CVM boots the Constellation node image and measures every component in the boot chain
3. The first component launched in each node is the [*Bootstrapper*](components.md#bootstrapper)
4. The *Bootstrapper* waits until it either receives an initialization request or discovers an initialized cluster
5. The CLI `init` command connects to the *Bootstrapper* of a selected node, sends the configuration, and initiates the initialization of the cluster
6. The *Bootstrapper* of **that** node [initializes the Kubernetes cluster](components.md#bootstrapper) and deploys the other Constellation [components](components.md), including the [*JoinService*](components.md#joinservice)
7. Subsequently, the *Bootstrappers* of the other nodes discover the initialized cluster and send join requests to the *JoinService*
8. As part of the join request, each node includes an attestation statement of its boot measurements as a form of authentication
9. The *JoinService* verifies the attestation statements and joins the nodes to the Kubernetes cluster
10. This process is repeated for every node joining the cluster later (for example, through autoscaling)

## Post-installation configuration

Post-installation, the CLI provides a configuration for [accessing the cluster using the Kubernetes API](https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/).
The `kubeconfig` file provides the credentials and configuration for connecting and authenticating to the API server.
Once configured, you can orchestrate the Kubernetes cluster via `kubectl`.

Keep the state files, such as `constellation-state.json`, in the workspace directory so the CLI is able to manage your cluster.
Without them, you won't be able to modify or terminate your cluster.

After the initialization, the CLI will present you with a couple of tokens:

* The [*master secret*](keys.md#master-secret) (stored in the `constellation-mastersecret.json` file by default)
* The [*clusterID*](keys.md#cluster-identity) of your cluster in base64 format

You can read more about these values and their meaning in the guide on [cluster identity](keys.md#cluster-identity).

The *master secret* must be kept secret and can be used to [recover your cluster](../workflows/recovery.md).
Instead of managing this secret manually, you can [use your key management solution of choice](keys.md#user-managed-key-management) with Constellation.

The *clusterID* uniquely identifies a cluster and can be used to [verify your cluster](../workflows/verify-cluster.md).

## Upgrades

Constellation images and components might need to be upgraded to new versions during the lifetime of a cluster.
Constellation implements a rolling update mechanism, ensuring no downtime of the control or data plane.
You can upgrade a Constellation cluster with a single operation by using the CLI.
For step-by-step instructions on how to do this, refer to [Upgrade Constellation](../workflows/upgrade.md).

### Attestation of upgrades

The new verification hashes (measurements) are released together with every image.
During an update procedure, the CLI provides the new measurements to the [JoinService](components.md#joinservice) securely.
New measurements for an updated image are automatically pulled and verified by the CLI following the [supply chain security concept](attestation.md#chain-of-trust) of Constellation.
The [attestation section](attestation.md#cluster-facing-attestation) describes in detail how these measurements are then used by the JoinService for the attestation of nodes.

The updated measurements are reproducible based on the updated node images and, hence, auditable by the customer.
23
docs/versioned_docs/version-2.1/architecture/overview.md
Normal file
@ -0,0 +1,23 @@
# Overview

Constellation is a cloud-based confidential orchestration platform.
It's built on Kubernetes and therefore shares the same technology stack and architecture principles.
To learn more about Constellation and Kubernetes, see [product overview](../overview/product.md).

## About installation and updates

As a cluster administrator, you can use the [Constellation CLI](orchestration.md) to install and deploy a cluster.

## About the components and attestation

Constellation manages the nodes and network in your cluster. All nodes are bootstrapped by the [*Bootstrapper*](components.md#bootstrapper). They're verified and authenticated by the [*JoinService*](components.md#joinservice) before being added to the cluster and the network. Finally, the entire cluster can be verified via the [*VerificationService*](components.md#verificationservice) using [remote attestation](attestation.md).

## About node images and verified boot

Constellation comes with operating system images for Kubernetes control-plane and worker nodes.
They're highly optimized for running containerized workloads and specifically prepared for running inside confidential VMs.
You can learn more about [the images](images.md) and how verified boot ensures their integrity during boot and beyond.

## About key management and cryptographic primitives

Encryption of data at rest, in transit, and in use is the fundamental building block for confidential computing and Constellation. Learn more about the [keys and cryptographic primitives](keys.md) used in Constellation and about [encrypted persistent storage](encrypted-storage.md).
17
docs/versioned_docs/version-2.1/architecture/versions.md
Normal file
@ -0,0 +1,17 @@
# Versions and support policy

This page details the guarantees Constellation makes regarding versions, compatibility, and life cycle.

All released components of Constellation use a three-digit version number of the form `v<MAJOR>.<MINOR>.<PATCH>`.

## Release cadence

All [components](components.md) of Constellation are released in lockstep on the first Tuesday of every month. This release primarily introduces new features, but may also include security or performance improvements. The `MINOR` version will be incremented as part of this release.

Additional `PATCH` releases may be created on demand to fix security issues or bugs before the next `MINOR` release window.

New releases are published on [GitHub](https://github.com/edgelesssys/constellation/releases).

### Kubernetes support policy

Constellation is aligned with the [version support policy of Kubernetes](https://kubernetes.io/releases/version-skew-policy/#supported-versions) and therefore supports the three most recent minor versions.
6
docs/versioned_docs/version-2.1/getting-started/examples.md
Normal file
@ -0,0 +1,6 @@
# Examples

After you [installed the CLI](install.md) and [created your first cluster](first-steps.md), you're ready to deploy applications. Why not start with one of the following examples?

* [Emojivoto](examples/emojivoto.md): a simple but fun web application
* [Online Boutique](examples/online-boutique.md): an e-commerce demo application by Google consisting of 11 separate microservices
* [Horizontal Pod Autoscaling](examples/horizontal-scaling.md): an example demonstrating Constellation's autoscaling capabilities
22
docs/versioned_docs/version-2.1/getting-started/examples/emojivoto.md
Normal file
@ -0,0 +1,22 @@
# Emojivoto
[Emojivoto](https://github.com/BuoyantIO/emojivoto) is a simple and fun application that's well suited to test the basic functionality of your cluster.

<!-- vale off -->

<img src={require("../../_media/example-emojivoto.jpg").default} alt="emojivoto - Web UI" width="552"/>

<!-- vale on -->

1. Deploy the application:
   ```bash
   kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
   ```
2. Wait until it becomes available:
   ```bash
   kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
   ```
3. Forward the web service to your machine:
   ```bash
   kubectl -n emojivoto port-forward svc/web-svc 8080:80
   ```
4. Visit [http://localhost:8080](http://localhost:8080)
98
docs/versioned_docs/version-2.1/getting-started/examples/horizontal-scaling.md
Normal file
@ -0,0 +1,98 @@
# Horizontal Pod Autoscaling
This example demonstrates Constellation's autoscaling capabilities. It's based on the Kubernetes [HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). During the following steps, Constellation will spawn new VMs on demand, verify them, add them to the cluster, and delete them again when the load has settled down.

## Requirements
The cluster needs to be initialized with Kubernetes 1.23 or later. In addition, [autoscaling must be enabled](../../workflows/scale.md) to allow Constellation to assign new nodes dynamically.

For this example specifically, the cluster should have as few worker nodes in the beginning as possible. Start with a small cluster with only *one* low-powered node for the control-plane node and *one* low-powered worker node.

:::info
We tested the example using instances of types `Standard_DC4as_v5` on Azure and `n2d-standard-4` on GCP.
:::

## Setup

1. Install the Kubernetes Metrics Server:
   ```bash
   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
   ```

2. Deploy the HPA example server that's supposed to be scaled under load.

   This manifest is similar to the one from the Kubernetes HPA walkthrough, but with increased CPU limits and requests to facilitate the triggering of node scaling events.
   ```bash
   cat <<EOF | kubectl apply -f -
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: php-apache
   spec:
     selector:
       matchLabels:
         run: php-apache
     replicas: 1
     template:
       metadata:
         labels:
           run: php-apache
       spec:
         containers:
         - name: php-apache
           image: registry.k8s.io/hpa-example
           ports:
           - containerPort: 80
           resources:
             limits:
               cpu: 900m
             requests:
               cpu: 600m
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: php-apache
     labels:
       run: php-apache
   spec:
     ports:
     - port: 80
     selector:
       run: php-apache
   EOF
   ```
|
||||
3. Create a HorizontalPodAutoscaler.
|
||||
|
||||
Set an average CPU usage across all Pods of 20% to see one additional worker node being created later. Note that the CPU usage isn't related to the host CPU usage, but rather to the requested CPU capacities (20% of 600 milli-cores CPU across all Pods). See the [original tutorial](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler) for more information on the HPA configuration.
|
||||
```bash
|
||||
kubectl autoscale deployment php-apache --cpu-percent=20 --min=1 --max=10
|
||||
```
|
||||
4. Create a Pod that generates load on the server:
|
||||
```bash
|
||||
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"
|
||||
```
|
||||
5. Wait a few minutes until new nodes are added to the cluster. You can [monitor](#monitoring) the state of the HorizontalPodAutoscaler, the list of nodes, and the behavior of the autoscaler.
|
||||
6. To stop the load generator, press CTRL+C and run:
|
||||
```bash
|
||||
kubectl delete pod load-generator
|
||||
```
|
||||
7. The cluster-autoscaler checks every few minutes if nodes are underutilized. It will taint such candidates for removal and wait additional 10 minutes before the nodes are eventually removed and deallocated. The whole process can take about 20 minutes.
|
||||
|
||||
## Monitoring
|
||||
:::tip
|
||||
|
||||
For better observability, run the listed commands in different tabs in your terminal.
|
||||
|
||||
:::
|
||||
|
||||
You can watch the status of the HorizontalPodAutoscaler with the current CPU usage, the target CPU limit, and the number of replicas created with:
|
||||
```bash
|
||||
kubectl get hpa php-apache --watch
|
||||
```
|
||||
From time to time compare the list of nodes to check the behavior of the autoscaler:
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
For deeper insights, see the logs of the autoscaler Pod, which contains more details about the scaling decision process:
|
||||
```bash
|
||||
kubectl logs -f deployment/constellation-cluster-autoscaler -n kube-system
|
||||
```
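
To see which nodes the autoscaler considers for removal, you can also inspect the node taints. This is an optional check; the exact taint key depends on the cluster-autoscaler version (`DeletionCandidateOfClusterAutoscaler` and `ToBeDeletedByClusterAutoscaler` are the keys used by current upstream versions):

```bash
# Show each node together with its taints; scale-down candidates carry an autoscaler taint
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```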

@ -0,0 +1,29 @@
# Online Boutique

[Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo) is an e-commerce demo application by Google consisting of 11 separate microservices. In this demo, Constellation automatically sets up a load balancer on your CSP, making it easy to expose services from your confidential cluster.

<!-- vale off -->

<img src={require("../../_media/example-online-boutique.jpg").default} alt="Online Boutique - Web UI" width="662"/>

<!-- vale on -->

1. Create a namespace:

   ```bash
   kubectl create ns boutique
   ```

2. Deploy the application:

   ```bash
   kubectl apply -n boutique -f https://github.com/GoogleCloudPlatform/microservices-demo/raw/main/release/kubernetes-manifests.yaml
   ```

3. Wait for all services to become available:

   ```bash
   kubectl wait --for=condition=available --timeout=300s -n boutique --all deployments
   ```

4. Get the frontend's external IP address:

   ```shell-session
   $ kubectl get service frontend-external -n boutique | awk '{print $4}'
   EXTERNAL-IP
   <your-ip>
   ```

   (`<your-ip>` is a placeholder for the IP assigned by your CSP.)
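
   If you prefer a machine-readable result, you can also query the IP directly via JSONPath. This is an alternative to the upstream example and assumes your CSP exposes the load balancer via an IP rather than a hostname:

   ```bash
   # Prints only the external IP of the frontend service
   kubectl get service frontend-external -n boutique -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
   ```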

5. Enter the IP from the result in your browser to browse the online shop.

285
docs/versioned_docs/version-2.1/getting-started/first-steps.md
Normal file
@ -0,0 +1,285 @@
# First steps

The following steps guide you through the process of creating a cluster and deploying a sample app. This example assumes that you have successfully [installed and set up Constellation](install.md), and have access to a cloud subscription.

:::tip
If you don't have a cloud subscription, check out [mini Constellation](mini-constellation.md), which lets you set up a local Constellation cluster using virtualization.
:::

## Create a cluster

1. Create the configuration file for your selected cloud provider.

   <tabs groupId="csp">
   <tabItem value="azure" label="Azure">

   ```bash
   constellation config generate azure
   ```

   </tabItem>
   <tabItem value="gcp" label="GCP">

   ```bash
   constellation config generate gcp
   ```

   </tabItem>
   </tabs>

   This creates the file `constellation-conf.yaml` in your current working directory.

2. Fill in your cloud-provider-specific information.

   <tabs groupId="csp">
   <tabItem value="azure" label="Azure (CLI)">

   You need several resources for the cluster. You can use the following `az` script to create them:

   ```bash
   RESOURCE_GROUP=constellation # enter name of new resource group for your cluster here
   LOCATION=westus # enter location of resources here
   SUBSCRIPTION_ID=$(az account show --query id --out tsv)
   SERVICE_PRINCIPAL_NAME=constell
   az group create --name "${RESOURCE_GROUP}" --location "${LOCATION}"
   az group create --name "${RESOURCE_GROUP}-identity" --location "${LOCATION}"
   az ad sp create-for-rbac -n "${SERVICE_PRINCIPAL_NAME}" --role Owner --scopes "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}" | tee azureServiceAccountKey.json
   az identity create -g "${RESOURCE_GROUP}-identity" -n "${SERVICE_PRINCIPAL_NAME}"
   identityID=$(az identity show -n "${SERVICE_PRINCIPAL_NAME}" -g "${RESOURCE_GROUP}-identity" --query principalId --out tsv)
   az role assignment create --assignee-principal-type ServicePrincipal --assignee-object-id "${identityID}" --role 'Virtual Machine Contributor' --scope "/subscriptions/${SUBSCRIPTION_ID}"
   az role assignment create --assignee-principal-type ServicePrincipal --assignee-object-id "${identityID}" --role 'Application Insights Component Contributor' --scope "/subscriptions/${SUBSCRIPTION_ID}"
   echo "subscription: ${SUBSCRIPTION_ID}
   tenant: $(az account show --query tenantId -o tsv)
   location: ${LOCATION}
   resourceGroup: ${RESOURCE_GROUP}
   userAssignedIdentity: $(az identity show -n "${SERVICE_PRINCIPAL_NAME}" -g "${RESOURCE_GROUP}-identity" --query id --out tsv)
   appClientID: $(jq -r '.appId' azureServiceAccountKey.json)
   clientSecretValue: $(jq -r '.password' azureServiceAccountKey.json)"
   ```

   Fill the values produced by the script into your configuration file.
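
   For orientation, the relevant part of the filled-in `constellation-conf.yaml` might then look similar to the following sketch. The keys mirror the script's output (in the actual file they may be nested under the Azure provider section); all values are placeholders:

   ```yaml
   subscription: 8b8bd01f-efd9-4113-9bd1-c82137c32da7   # subscription UUID
   tenant: 3400e5a2-8fe2-492a-886c-38cb66170f25         # tenant UUID
   location: westus
   resourceGroup: constellation
   userAssignedIdentity: /subscriptions/<subscription>/resourcegroups/constellation-identity/providers/Microsoft.ManagedIdentity/userAssignedIdentities/constell
   appClientID: <app-client-id>
   clientSecretValue: <client-secret>
   ```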

   By default, Constellation uses `Standard_DC4as_v5` CVMs (4 vCPUs, 16 GB RAM) to create your cluster. Optionally, you can switch to a different VM type by modifying **instanceType** in the configuration file. For CVMs, any VM type with a minimum of 4 vCPUs from the [DCasv5 & DCadsv5](https://docs.microsoft.com/en-us/azure/virtual-machines/dcasv5-dcadsv5-series) or [ECasv5 & ECadsv5](https://docs.microsoft.com/en-us/azure/virtual-machines/ecasv5-ecadsv5-series) families is supported.

   Run `constellation config instance-types` to get the list of all supported options.

   </tabItem>
   <tabItem value="azure-portal" label="Azure (Portal)">

   * **subscription**: The UUID of your Azure subscription, e.g., `8b8bd01f-efd9-4113-9bd1-c82137c32da7`.

     You can view your subscription UUID via `az account show` and read the `id` field. For more information refer to [Azure's documentation](https://docs.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription).

   * **tenant**: The UUID of your Azure tenant, e.g., `3400e5a2-8fe2-492a-886c-38cb66170f25`.

     You can view your tenant UUID via `az account show` and read the `tenant` field. For more information refer to [Azure's documentation](https://docs.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-ad-tenant).

   * **location**: The Azure datacenter location you want to deploy your cluster in, e.g., `westus`. CVMs are currently only supported in a few regions; check [Azure's products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=virtual-machines&regions=all). These are:

     * `westus`
     * `eastus`
     * `northeurope`
     * `westeurope`

   * **resourceGroup**: [Create a new resource group in Azure](https://portal.azure.com/#create/Microsoft.ResourceGroup) for your Constellation cluster. Set this configuration field to the name of the created resource group.

   * **userAssignedIdentity**: [Create a new managed identity in Azure](https://portal.azure.com/#create/Microsoft.ManagedIdentity). You should create the identity in a different resource group, as all resources within the cluster resource group are deleted on cluster termination.

     Add two role assignments to the identity: `Virtual Machine Contributor` and `Application Insights Component Contributor`. The `scope` of both should refer to the previously created cluster resource group.

     Set the configuration value to the full ID of the created identity, e.g., `/subscriptions/8b8bd01f-efd9-4113-9bd1-c82137c32da7/resourcegroups/constellation-identity/providers/Microsoft.ManagedIdentity/userAssignedIdentities/constellation-identity`. You can get it by opening the `JSON View` from the `Overview` section of the identity.

     The user-assigned identity is used by instances of the cluster to access other cloud resources. For more information about managed identities, refer to [Azure's documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).

   * **appClientID**: [Create a new app registration in Azure](https://portal.azure.com/#view/Microsoft_AAD_RegisteredApps/CreateApplicationBlade/quickStartType~/null/isMSAApp~/false).

     Set `Supported account types` to `Accounts in this organizational directory only` and leave the `Redirect URI` empty.

     Set the configuration value to the `Application (client) ID`, e.g., `86ec31dd-532b-4a8c-a055-dd23f25fb12f`.

     In the cluster resource group, go to `Access Control (IAM)` and set the created app registration as `Owner`.

   * **clientSecretValue**: In the previously created app registration, go to `Certificates & secrets` and create a new `Client secret`.

     Set the configuration value to the secret value.

   * **instanceType**: The VM type you want to use for your Constellation nodes.

     For CVMs, any type with a minimum of 4 vCPUs from the [DCasv5 & DCadsv5](https://docs.microsoft.com/en-us/azure/virtual-machines/dcasv5-dcadsv5-series) or [ECasv5 & ECadsv5](https://docs.microsoft.com/en-us/azure/virtual-machines/ecasv5-ecadsv5-series) families is supported. It defaults to `Standard_DC4as_v5` (4 vCPUs, 16 GB RAM).

     Run `constellation config instance-types` to get the list of all supported options.

   </tabItem>
   <tabItem value="gcp" label="GCP (CLI)">

   You need a service account for the cluster. You can use the following `gcloud` script to create it:

   ```bash
   SERVICE_ACCOUNT_ID=constell # enter name of service account here
   PROJECT_ID= # enter project id here
   SERVICE_ACCOUNT_EMAIL=${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com
   gcloud iam service-accounts create "${SERVICE_ACCOUNT_ID}" --description="Service account used inside Constellation" --display-name="Constellation service account" --project="${PROJECT_ID}"
   gcloud projects add-iam-policy-binding "${PROJECT_ID}" --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" --role='roles/compute.instanceAdmin.v1'
   gcloud projects add-iam-policy-binding "${PROJECT_ID}" --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" --role='roles/compute.networkAdmin'
   gcloud projects add-iam-policy-binding "${PROJECT_ID}" --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" --role='roles/compute.securityAdmin'
   gcloud projects add-iam-policy-binding "${PROJECT_ID}" --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" --role='roles/compute.storageAdmin'
   gcloud projects add-iam-policy-binding "${PROJECT_ID}" --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" --role='roles/iam.serviceAccountUser'
   gcloud iam service-accounts keys create gcpServiceAccountKey.json --iam-account="${SERVICE_ACCOUNT_EMAIL}"
   echo "project: ${PROJECT_ID}
   serviceAccountKeyPath: $(realpath gcpServiceAccountKey.json)"
   ```

   Fill the values produced by the script into your configuration file.
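
   As a sketch, the corresponding entries in `constellation-conf.yaml` might then look like this (values are placeholders; in the actual file they may be nested under the GCP provider section):

   ```yaml
   project: constellation-129857                      # your GCP project ID
   serviceAccountKeyPath: /path/to/gcpServiceAccountKey.json
   ```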

   By default, Constellation uses `n2d-standard-4` VMs (4 vCPUs, 16 GB RAM) to create your cluster. Optionally, you can switch to a different VM type by modifying **instanceType** in the configuration file. Supported are all machines from the N2D family. Refer to [N2D machine series](https://cloud.google.com/compute/docs/general-purpose-machines#n2d_machines) or run `constellation config instance-types` to get the list of all supported options.

   </tabItem>
   <tabItem value="gcp-console" label="GCP (Console)">

   * **project**: The ID of your GCP project, e.g., `constellation-129857`.

     You can find it on the [welcome screen of your GCP project](https://console.cloud.google.com/welcome). For more information refer to [Google's documentation](https://support.google.com/googleapi/answer/7014113).

   * **region**: The GCP region you want to deploy your cluster in, e.g., `us-west1`.

     You can find a [list of all regions in Google's documentation](https://cloud.google.com/compute/docs/regions-zones#available).

   * **zone**: The GCP zone you want to deploy your cluster in, e.g., `us-west1-a`.

     You can find a [list of all zones in Google's documentation](https://cloud.google.com/compute/docs/regions-zones#available).

   * **serviceAccountKeyPath**: To configure this, you need to create a GCP [service account](https://cloud.google.com/iam/docs/service-accounts) with the following permissions:

     - `Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)`
     - `Compute Network Admin (roles/compute.networkAdmin)`
     - `Compute Security Admin (roles/compute.securityAdmin)`
     - `Compute Storage Admin (roles/compute.storageAdmin)`
     - `Service Account User (roles/iam.serviceAccountUser)`

     Afterward, create and download a new JSON key for this service account. Place the downloaded file in your Constellation workspace, and set the config parameter to the filename, e.g., `constellation-129857-15343dba46cb.json`.

   * **instanceType**: The VM type you want to use for your Constellation nodes.

     Supported are all machines from the N2D family. It defaults to `n2d-standard-4` (4 vCPUs, 16 GB RAM), but you can use any other VMs from the same family. Refer to [N2D machine series](https://cloud.google.com/compute/docs/general-purpose-machines#n2d_machines) or run `constellation config instance-types` to get the list of all supported options.

   </tabItem>
   </tabs>

   :::info
   In case you don't have access to CVMs on Azure, you may use less secure [trusted launch VMs](../workflows/trusted-launch.md) instead. For this, set **confidentialVM** to `false` in the configuration file.
   :::

3. Download the trusted measurements for your configured image.

   ```bash
   constellation config fetch-measurements
   ```

   For details, see the [verification section](../workflows/verify-cluster.md).

4. Create the cluster with one control-plane node and two worker nodes. `constellation create` uses options set in `constellation-conf.yaml`.

   :::tip
   On Azure, you may need to wait 15+ minutes at this point for role assignments to propagate.
   :::

   ```bash
   constellation create --control-plane-nodes 1 --worker-nodes 2 -y
   ```

   This should give the following output:

   ```shell-session
   $ constellation create ...
   Your Constellation cluster was created successfully.
   ```

5. Initialize the cluster.

   ```bash
   constellation init
   ```

   This should give the following output:

   ```shell-session
   $ constellation init
   Your Constellation master secret was successfully written to ./constellation-mastersecret.json
   Initializing cluster ...
   Your Constellation cluster was successfully initialized.

   Constellation cluster identifier        g6iMP5wRU1b7mpOz2WEISlIYSfdAhB0oNaOg6XEwKFY=
   Kubernetes configuration                constellation-admin.conf

   You can now connect to your cluster by executing:
           export KUBECONFIG="$PWD/constellation-admin.conf"
   ```

   The cluster's identifier will be different in your output. Keep `constellation-mastersecret.json` somewhere safe. This will allow you to [recover your cluster](../workflows/recovery.md) in case of a disaster.

   :::info
   Depending on your CSP and region, `constellation init` may take 10+ minutes to complete.
   :::

6. Configure kubectl.

   ```bash
   export KUBECONFIG="$PWD/constellation-admin.conf"
   ```
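
   To confirm the connection works, you can list the cluster's nodes. This is a quick sanity check; names, ages, and versions will differ in your deployment:

   ```bash
   # All nodes should eventually report a Ready status
   kubectl get nodes
   ```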

## Deploy a sample application

1. Deploy the [emojivoto app](https://github.com/BuoyantIO/emojivoto)

   ```bash
   kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
   ```

2. Expose the frontend service locally

   ```bash
   kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
   kubectl -n emojivoto port-forward svc/web-svc 8080:80 &
   curl http://localhost:8080
   kill %1
   ```

## Terminate your cluster

```bash
constellation terminate
```

This should give the following output:

```shell-session
$ constellation terminate
Terminating ...
Your Constellation cluster was terminated successfully.
```

:::tip
On Azure, if you have used the `az` script, you can keep the prerequisite resources and reuse them for a new cluster.

Or you can delete them:

```bash
RESOURCE_GROUP=constellation # name of your cluster resource group
APPID=$(jq -r '.appId' azureServiceAccountKey.json)
az ad sp delete --id "${APPID}"
az group delete -g "${RESOURCE_GROUP}-identity" --yes --no-wait
az group delete -g "${RESOURCE_GROUP}" --yes --no-wait
```

:::

192
docs/versioned_docs/version-2.1/getting-started/install.md
Normal file
@ -0,0 +1,192 @@
# Installation and setup

Constellation runs entirely in your cloud environment and can be controlled via a dedicated command-line interface (CLI).

The following guides you through the steps of installing the CLI on your machine, verifying it, and connecting it to your cloud service provider (CSP).

## Prerequisites

Make sure the following requirements are met:

- Your machine is running Linux or macOS
- You have admin rights on your machine
- [kubectl](https://kubernetes.io/docs/tasks/tools/) is installed
- Your CSP is Microsoft Azure or Google Cloud Platform (GCP)

## Install the Constellation CLI

The CLI executable is available at [GitHub](https://github.com/edgelesssys/constellation/releases). Install it with the following commands:

<tabs>
<tabItem value="linux-amd64" label="Linux (amd64)">

1. Download the CLI:

   ```bash
   curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation-linux-amd64
   ```

2. [Verify the signature](../workflows/verify-cli.md) (optional)

3. Install the CLI to your PATH:

   ```bash
   sudo install constellation-linux-amd64 /usr/local/bin/constellation
   ```

</tabItem>
<tabItem value="linux-arm64" label="Linux (arm64)">

1. Download the CLI:

   ```bash
   curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation-linux-arm64
   ```

2. [Verify the signature](../workflows/verify-cli.md) (optional)

3. Install the CLI to your PATH:

   ```bash
   sudo install constellation-linux-arm64 /usr/local/bin/constellation
   ```

</tabItem>
<tabItem value="darwin-arm64" label="macOS (Apple Silicon)">

1. Download the CLI:

   ```bash
   curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation-darwin-arm64
   ```

2. [Verify the signature](../workflows/verify-cli.md) (optional)

3. Install the CLI to your PATH:

   ```bash
   sudo install constellation-darwin-arm64 /usr/local/bin/constellation
   ```

</tabItem>
<tabItem value="darwin-amd64" label="macOS (Intel)">

1. Download the CLI:

   ```bash
   curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation-darwin-amd64
   ```

2. [Verify the signature](../workflows/verify-cli.md) (optional)

3. Install the CLI to your PATH:

   ```bash
   sudo install constellation-darwin-amd64 /usr/local/bin/constellation
   ```

</tabItem>
</tabs>

:::tip
The CLI supports autocompletion for various shells. To set it up, run `constellation completion` and follow the given steps.
:::
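
For example, for Bash the setup might look like the following. The exact steps are printed by the command itself; treat this as a sketch assuming a Cobra-style `completion` subcommand:

```bash
# Load completions for the current Bash session
source <(constellation completion bash)
```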

## Set up cloud credentials

The CLI makes authenticated calls to the CSP API. Therefore, you need to set up Constellation with the credentials for your CSP.

:::tip
If you don't have a cloud subscription, you can try [mini Constellation](mini-constellation.md), which lets you set up a local Constellation cluster using virtualization.
:::

### Required permissions

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

You need the following permissions for your user account:

- `Contributor` (to create cloud resources)
- `User Access Administrator` (to create a service account)

If you don't have these permissions with scope *subscription*, ask your administrator to [create the service account and a resource group for your Constellation cluster](first-steps.md). Your user account needs the `Contributor` permission scoped to this resource group.

</tabItem>
<tabItem value="gcp" label="GCP">

Create a new project for Constellation or use an existing one. Enable the [Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com) on it.

You need the following permissions on this project:

- `compute.*` (or the subset defined by `roles/compute.instanceAdmin.v1`)
- `iam.serviceAccountUser`

Follow Google's guide on [understanding](https://cloud.google.com/iam/docs/understanding-roles) and [assigning roles](https://cloud.google.com/iam/docs/granting-changing-revoking-access).

</tabItem>
</tabs>

### Authentication

You need to authenticate with your CSP. The following lists the required steps for *testing* and *production* environments.

:::note
The steps for a *testing* environment are simpler. However, they may expose secrets to the CSP. If in doubt, follow the *production* steps.
:::

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

**Testing**

Simply open the [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview).

**Production**

Use the latest version of the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/) on a trusted machine:

```bash
az login
```

Other options are described in Azure's [authentication guide](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli).

</tabItem>
<tabItem value="gcp" label="GCP">

**Testing**

You can use the [Google Cloud Shell](https://cloud.google.com/shell). Make sure your [session is authorized](https://cloud.google.com/shell/docs/auth). For example, execute `gsutil` and accept the authorization prompt.

**Production**

Use one of the following options on a trusted machine:

- Use the [`gcloud` CLI](https://cloud.google.com/sdk/gcloud)

  ```bash
  gcloud auth application-default login
  ```

  This will ask you to log in to your Google account and create your credentials. The Constellation CLI will automatically load these credentials when needed.

- Set up a service account and pass the credentials manually

  Follow [Google's guide](https://cloud.google.com/docs/authentication/production#manually) for setting up your credentials.
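
  With this option, the key is typically made available to tools via the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable. A minimal sketch, assuming you stored the key as `gcpServiceAccountKey.json`:

  ```bash
  # Point tools that use Google's Application Default Credentials to the key file
  export GOOGLE_APPLICATION_CREDENTIALS="$PWD/gcpServiceAccountKey.json"
  ```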

</tabItem>
</tabs>

## Next steps

You are now ready to [deploy your first confidential Kubernetes cluster and application](first-steps.md).

@ -0,0 +1,155 @@
# Mini Constellation

With `constellation mini`, users can deploy and test Constellation locally without the need for a cloud subscription.

The command uses virtualization to create a local cluster with one control-plane and one worker node.

:::info
Since mini Constellation runs on your local system, cloud features such as load balancing, attaching persistent storage, and autoscaling are unavailable.
:::

## Prerequisites

* [Constellation CLI](./install.md#install-the-constellation-cli)
* An x86-64 CPU with at least 4 cores
  * 6 cores or more are recommended
* Hardware virtualization enabled in the BIOS/UEFI (often referred to as Intel VT-x or AMD-V/SVM)
* At least 4 GB RAM
  * 6 GB or more are recommended
* 20 GB of free disk space
* A Linux operating system
* [KVM kernel module](https://www.linux-kvm.org/page/Main_Page) enabled
* [Docker](https://docs.docker.com/engine/install/)
* [xsltproc](https://gitlab.gnome.org/GNOME/libxslt/-/wikis/home)
  * Install on Ubuntu:

    ```bash
    sudo apt install xsltproc
    ```

  * Install on Fedora:

    ```bash
    sudo dnf install xsltproc
    ```

* (Optional) [`virsh`](https://www.libvirt.org/manpages/virsh.html) to observe and access your nodes
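
To quickly check whether hardware virtualization and KVM are available on your system, you can run the following. This check isn't part of the official setup; it merely inspects kernel support:

```bash
# Non-empty output indicates the KVM modules are loaded
lsmod | grep kvm
# The KVM device node must exist for VMs to start
ls -l /dev/kvm
```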

## Create your cluster

Setting up your mini Constellation cluster is as easy as running the following command:

```bash
constellation mini up
```

This will configure your current directory as the working directory for Constellation. All `constellation` commands concerning this cluster need to be issued from this directory.

The command will create your cluster and initialize it. Depending on your system, this may take up to 10 minutes. The output should look like the following:

```shell
$ constellation mini up
Downloading image to ./constellation.qcow2
Done.

Creating cluster in QEMU ...
Cluster successfully created.
Connect to the VMs by executing:
        virsh -c qemu+tcp://localhost:16599/system

Your Constellation master secret was successfully written to ./constellation-mastersecret.json
Initializing cluster ...
Your Constellation cluster was successfully initialized.

Constellation cluster identifier        hmrRaTJEKHk6zlM6wcTCGxZ+7HAA16ec4T9CmKs12uQ=
Kubernetes configuration                constellation-admin.conf

You can now connect to your cluster by executing:
        export KUBECONFIG="$PWD/constellation-admin.conf"
```

You can now configure `kubectl` to connect to your local Constellation cluster:

```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
```

It may take a couple of minutes for all cluster resources to be available. You can check on the state of your cluster by running the following:

```bash
kubectl get nodes
```

If your cluster is running as expected, the output should look like the following:

```shell
$ kubectl get nodes
NAME              STATUS   ROLES                  AGE     VERSION
control-plane-0   Ready    control-plane,master   2m59s   v1.23.9
worker-0          Ready    <none>                 32s     v1.23.9
```

## Deploy a sample application

1. Deploy the [emojivoto app](https://github.com/BuoyantIO/emojivoto)

   ```bash
   kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
   ```

2. Expose the frontend service locally

   ```bash
   kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
   kubectl -n emojivoto port-forward svc/web-svc 8080:80 &
   curl http://localhost:8080
   kill %1
   ```

## Terminate your cluster

Once you are done, you can clean up the created resources using the following command:

```shell
constellation mini down
```

This will destroy your cluster and clean up your working directory. The VM image and cluster configuration file (`constellation-conf.yaml`) are left behind and may be reused to create new clusters.

## Troubleshooting

### VMs have no internet access

`iptables` rules may prevent your VMs from properly accessing the internet. Make sure your rules aren't dropping forwarded packets.

List your rules:

```shell
sudo iptables -S
```

The output may look similar to the following:

```shell
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
```

If your `FORWARD` chain is set to `DROP`, you need to update your rules:

```shell
sudo iptables -P FORWARD ACCEPT
```
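
Note that changing the default policy affects all forwarded traffic and doesn't persist across reboots. The `DOCKER-USER` chain in the output above hints that Docker manages the firewall (Docker sets the `FORWARD` policy to `DROP`); in that case, a more targeted alternative might be to allow forwarding via Docker's dedicated user chain:

```shell
# Accept forwarded traffic early via Docker's user-managed chain
sudo iptables -I DOCKER-USER -j ACCEPT
```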

34
docs/versioned_docs/version-2.1/intro.md
Normal file
@ -0,0 +1,34 @@
---
slug: /
id: intro
---
# Introduction

Welcome to the documentation of Constellation! Constellation is a Kubernetes engine that aims to provide the best possible data security.

![Constellation concept](/img/concept.svg)

Constellation shields your entire Kubernetes cluster from the underlying cloud infrastructure. Everything inside is always encrypted, including at runtime in memory. For this, Constellation leverages a technology called *confidential computing* and more specifically Confidential VMs.

:::tip
See the 📄[whitepaper](https://content.edgeless.systems/hubfs/Confidential%20Computing%20Whitepaper.pdf) for more information on confidential computing.
:::

## Goals

From a security perspective, Constellation is designed to keep all data always encrypted and to prevent any access from the underlying (cloud) infrastructure. This includes access from datacenter employees, privileged cloud admins, and attackers coming through the infrastructure. Such attackers could be malicious co-tenants escalating their privileges or hackers who managed to compromise a cloud server.

From a DevOps perspective, Constellation is designed to work just like what you would expect from a modern Kubernetes engine.

## Use cases

Constellation provides unique security [features](overview/confidential-kubernetes.md) and [benefits](overview/security-benefits.md). The core use cases are:

* Increasing the overall security of your clusters
* Increasing the trustworthiness of your SaaS offerings
* Moving sensitive workloads from on-prem to the cloud
* Meeting regulatory requirements

## Next steps

You can learn more about the concept of Confidential Kubernetes, features, security benefits, and performance of Constellation in the *Basics* section. To jump right into the action, head to *Getting started*.

43
docs/versioned_docs/version-2.1/overview/clouds.md
Normal file
@ -0,0 +1,43 @@
# Feature status of clouds

What works on which cloud? Currently, Confidential VMs (CVMs) are available in varying quality on the different clouds and software stacks.

For Constellation, the ideal environment provides the following:

1. Ability to run arbitrary software and images inside CVMs
2. CVMs based on AMD SEV-SNP (available in EPYC CPUs since the Milan generation) or, in the future, Intel TDX (available in Xeon CPUs from the Sapphire Rapids generation onward)
3. Ability for CVM guests to obtain raw attestation statements directly from the CPU, ideally via a TPM-like interface
4. Reviewable, open-source firmware inside CVMs

(1) is a functional must-have. (2)--(4) are required for remote attestation that fully keeps the infrastructure/cloud out. Constellation can work without them or with approximations, but won't protect against certain privileged attackers anymore.

The following table summarizes the state of features for different infrastructures as of September 2022.

| **Feature**                   | **Azure** | **GCP** | **AWS** | **OpenStack (Yoga)** |
|-------------------------------|-----------|---------|---------|----------------------|
| **1. Custom images**          | Yes       | Yes     | No      | Yes                  |
| **2. SEV-SNP or TDX**         | Yes       | No      | No      | Depends on kernel/HV |
| **3. Raw guest attestation**  | Yes       | No      | No      | Depends on kernel/HV |
| **4. Reviewable firmware**    | No*       | No      | No      | Depends on kernel/HV |

## Microsoft Azure

With its [CVM offering](https://docs.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview), Azure provides the best foundations for Constellation. Regarding (3), Azure provides direct access to remote-attestation statements. However, regarding (4), the standard CVMs still include closed-source firmware running in VM Privilege Level (VMPL) 0. This firmware is signed by Azure. The signature is reflected in the remote-attestation statements of CVMs. Thus, the Azure closed-source firmware becomes part of Constellation's trusted computing base (TCB).

Recently, Azure [announced](https://techcommunity.microsoft.com/t5/azure-confidential-computing/azure-confidential-vms-using-sev-snp-dcasv5-ecasv5-are-now/ba-p/3573747) the *limited preview* of CVMs with customizable firmware. With this CVM type, (4) switches from *No* to *Yes*. Constellation will support customizable firmware on Azure in the future.

## Google Cloud Platform (GCP)

The [CVMs available in GCP](https://cloud.google.com/compute/confidential-vm/docs/create-confidential-vm-instance) are based on AMD SEV but don't have SNP features enabled. This impacts attestation capabilities. Currently, GCP doesn't offer CVM-based attestation at all. Instead, GCP provides attestation statements based on its regular [vTPM](https://cloud.google.com/blog/products/identity-security/virtual-trusted-platform-module-for-shielded-vms-security-in-plaintext), which is managed by the hypervisor. On GCP, the hypervisor is thus currently part of Constellation's TCB.

## Amazon Web Services (AWS)

AWS currently doesn't offer CVMs. AWS proprietary Nitro Enclaves offer some related features, but [are explicitly not designed to keep AWS itself out](https://aws.amazon.com/blogs/security/confidential-computing-an-aws-perspective/). An experimental version of Constellation exists that runs on Nitro Enclaves.

## OpenStack

OpenStack is an open-source cloud and infrastructure management software. It's used by many smaller CSPs and datacenters. In the latest *Yoga* version, OpenStack has basic support for CVMs. However, much depends on the employed kernel and hypervisor. Features (2)--(4) are likely to be a *Yes* with Linux kernel version 6.2. Thus, going forward, OpenStack on corresponding AMD or Intel hardware will be a viable underpinning for Constellation.

## Conclusion

The different clouds and software like the Linux kernel and OpenStack are in the process of building out their support for state-of-the-art CVMs. Azure already has most features in place. For Constellation, the status quo means that the TCB has different shapes on different infrastructures. With broad SEV-SNP support coming to the Linux kernel, we soon expect a normalization of features across infrastructures.

@ -0,0 +1,42 @@
# Confidential Kubernetes

We use the term *Confidential Kubernetes* to refer to the concept of using confidential-computing technology to shield entire Kubernetes clusters from the infrastructure. The three defining properties of this concept are:

1. **Workload shielding**: the confidentiality and integrity of all workload-related data and code are enforced.
2. **Control plane shielding**: the confidentiality and integrity of the cluster's control plane, state, and workload configuration are enforced.
3. **Attestation and verifiability**: the two properties above can be verified remotely based on hardware-rooted cryptographic certificates.

Each of the above properties is equally important. Only with all three in conjunction can an entire cluster be shielded without gaps.

## Constellation security features

Constellation implements the Confidential Kubernetes concept with the following security features.

* **Runtime encryption**: Constellation runs all Kubernetes nodes inside Confidential VMs (CVMs). This gives runtime encryption for the entire cluster.
* **Network and storage encryption**: Constellation augments this with transparent encryption of the [network](../architecture/networking.md) and [persistent storage](../architecture/encrypted-storage.md). Thus, workloads and control plane are truly end-to-end encrypted: at rest, in transit, and at runtime.
* **Transparent key management**: Constellation manages the corresponding [cryptographic keys](../architecture/keys.md) inside CVMs.
* **Node attestation and verification**: Constellation verifies the integrity of each new CVM-based node using [remote attestation](../architecture/attestation.md). Only "good" nodes receive the cryptographic keys required to access the network and storage of a cluster.
* **Confidential computing-optimized images**: A node is "good" if it's running a signed Constellation [node image](../architecture/images.md) inside a CVM and is in the expected state. (Node images are hardware-measured during boot. The measurements are reflected in the attestation statements that are produced by nodes and verified by Constellation.)
* **"Whole cluster" attestation**: Towards the DevOps engineer, Constellation provides a single hardware-rooted certificate from which all of the above can be verified.

With the above, Constellation wraps an entire cluster into one coherent and verifiable *confidential context*. The concept is depicted in the following.

![Confidential Kubernetes](../_media/concept-constellation.svg)

## Contrast: Managed Kubernetes with CVMs

In contrast, managed Kubernetes with CVMs, as offered for example in [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/) and [GKE](https://cloud.google.com/kubernetes-engine), only provides runtime encryption for certain worker nodes. Here, each worker node is a separate (and typically unverified) confidential context. This only provides limited security benefits, as it only prevents direct access to a worker node's memory. The large majority of potential attacks through the infrastructure remain unaffected. This includes attacks through the control plane, access to external key management, and the corruption of worker node images. This leaves many problems unsolved. For instance, *Node A* has no means to verify if *Node B* is "good" and if it's OK to share data with it. Consequently, this approach leaves a large attack surface, as is depicted in the following.

![Concept: Managed Kubernetes plus CVMs](../_media/concept-managed.svg)

The following table highlights the key differences in terms of features.

|                                 | Managed Kubernetes with CVMs | Confidential Kubernetes (Constellation✨) |
|---------------------------------|------------------------------|--------------------------------------------|
| Runtime encryption              | Partial (data plane only)    | **Yes**                                     |
| Node image verification         | No                           | **Yes**                                     |
| Full cluster attestation        | No                           | **Yes**                                     |
| Transparent network encryption  | No                           | **Yes**                                     |
| Transparent storage encryption  | No                           | **Yes**                                     |
| Confidential key management     | No                           | **Yes**                                     |
| Cloud agnostic / multi-cloud    | No                           | **Yes**                                     |

23
docs/versioned_docs/version-2.1/overview/license.md
Normal file
@ -0,0 +1,23 @@
# License

## Source code

Constellation's source code is available on [GitHub](https://github.com/edgelesssys/constellation) under the [GNU Affero General Public License v3.0](https://github.com/edgelesssys/constellation/blob/main/LICENSE).

## Binaries

Edgeless Systems provides ready-to-use and [signed](../architecture/attestation.md#chain-of-trust) binaries of Constellation. This includes the CLI and the [node images](../architecture/images.md).

These binaries may be used free of charge within the bounds of Constellation's [**Community License**](#community-license). An [**Enterprise License**](#enterprise-license) can be purchased from Edgeless Systems.

The Constellation CLI displays relevant license information when you initialize your cluster. You are responsible for staying within the bounds of your respective license. Constellation doesn't enforce any limits so as not to endanger your cluster's availability.

### Community License

You are free to use the Constellation binaries provided by Edgeless Systems to create services for internal consumption, evaluation purposes, or non-commercial use. You must not use the Constellation binaries to provide commercial hosted services to third parties. Edgeless Systems gives no warranties and offers no support.

### Enterprise License

Enterprise Licenses don't have the above limitations and come with support and additional features. Find out more at the [product website](https://www.edgeless.systems/products/constellation/).

Once you have received your Enterprise License file, place it in your [Constellation workspace](../architecture/orchestration.md#workspaces) in a file named `constellation.license`.

108
docs/versioned_docs/version-2.1/overview/performance.md
Normal file
@ -0,0 +1,108 @@
# Performance

This section analyzes the performance of Constellation.

## Performance impact from runtime encryption

All nodes in a Constellation cluster run inside Confidential VMs (CVMs). Thus, Constellation's performance is directly affected by the performance of CVMs.

AMD and Azure jointly released a [performance benchmark](https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796) for CVMs based on 3rd Gen AMD EPYC processors (Milan) with SEV-SNP. With a range of mostly compute-intensive benchmarks like SPEC CPU 2017 and CoreMark, they found that CVMs only have a small (2%--8%) performance degradation compared to standard VMs. You can expect to see similar performance for compute-intensive workloads running on Constellation.

## Performance impact from other features

To assess the overall performance of Constellation, we benchmarked Constellation v2.0.0 using [K-Bench](https://github.com/vmware-tanzu/k-bench). K-Bench is a configurable framework to benchmark Kubernetes clusters in terms of storage I/O, network performance, and creating/scaling resources.

As a baseline, we compare Constellation with the non-confidential managed Kubernetes offerings on Microsoft Azure and Google Cloud Platform (GCP). These are AKS on Azure and GKE on GCP.

### Configurations

We used the following configurations for the benchmarks.

#### Constellation and GKE on GCP

- Nodes: 3
- Machines: `n2d-standard-4`
- CVM: `true`
- Zone: `europe-west3-b`

#### Constellation and AKS on Azure

- Nodes: 3
- Machines: `DC4as_v5`
- CVM: `true`
- Region: `West Europe`
- Zone: `2`

#### K-Bench

Using the default [K-Bench test configurations](https://github.com/vmware-tanzu/k-bench/tree/master/config), we ran the following tests on the clusters:

- `default`
- `dp_network_internode`
- `dp_network_intranode`
- `dp_fio`
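
For reference, such a run can be launched via K-Bench's driver script. The invocation below is a sketch based on K-Bench's documented interface; flags and setup steps may differ between versions:

```bash
# Fetch the framework and run the selected benchmarks against the current kubeconfig
git clone https://github.com/vmware-tanzu/k-bench.git && cd k-bench
./run.sh -t "default,dp_network_internode,dp_network_intranode,dp_fio" -o ./results
```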

### Results

#### Kubernetes API Latency

At its core, the Kubernetes API is the way to query and modify a cluster's state. Latency matters here. Hence, it's vital that the API latency doesn't spike even with Constellation's additional security layer in the network. K-Bench's `default` test performs calls to the API to create, update, and delete cluster resources.

The three graphs below compare the API latencies (lower is better) in milliseconds for pods, services, and deployments.

![API Latency - Pods](../_media/benchmark_api_pods.png)

Pods: Except for the `Pod Update` call, Constellation is faster than AKS and GKE in terms of API calls.

![API Latency - Services](../_media/benchmark_api_svc.png)

Services: Constellation has lower latencies than AKS and GKE except for service creation on AKS.

![API Latency - Deployments](../_media/benchmark_api_dpl.png)

Deployments: Constellation has the lowest latency for all cases except for scaling deployments on GKE and creating deployments on AKS.

#### Network

There are two main indicators for network performance: intra-node and inter-node transmission speed. K-Bench provides benchmark tests for both, configured as `dp_network_internode` and `dp_network_intranode`. The tests use [`iperf`](https://iperf.fr/) to measure the available bandwidth.

##### Inter-node

Inter-node communication is the network transmission between different Kubernetes nodes.

The first test (`dp_network_internode`) measures the throughput between nodes. Constellation's inter-node throughput ranges from around 816 Mbps on Azure to 872 Mbps on GCP. While that's faster than the average throughput of AKS at 577 Mbps, GKE provides faster networking at 9.55 Gbps. The difference can largely be attributed to Constellation's [network encryption](../architecture/networking.md) that protects data in transit.

##### Intra-node

Intra-node communication happens between pods running on the same node. The connections directly pass through the node's OS layer and never hit the network. The benchmark evaluates how Constellation's [node OS image](../architecture/images.md) and runtime encryption influence the throughput.

Constellation's bandwidth for both sending and receiving is at 31 Gbps on Azure and 22 Gbps on GCP. AKS achieves 26 Gbps and GKE achieves about 27 Gbps in the tests.

![](../_media/benchmark_net.png)

#### Storage I/O

Azure and GCP offer persistent storage for their Kubernetes services AKS and GKE via the Container Storage Interface (CSI). CSI storage in Kubernetes is available via `PersistentVolumes` (PV) and consumed via `PersistentVolumeClaims` (PVC). Upon requesting persistent storage through a PVC, GKE and AKS will provision a PV as defined by a default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). Constellation provides persistent storage on Azure and GCP [that's encrypted on the CSI layer](../architecture/encrypted-storage.md). Similarly, Constellation will provision a PV via a default storage class upon a PVC request.

The K-Bench [`fio`](https://fio.readthedocs.io/en/latest/fio_doc.html) benchmark consists of several tests. We selected four different tests that perform asynchronous access patterns because we believe they most accurately depict real-world I/O access for most applications.

The following graph shows I/O throughput in MiB/s (higher is better).

![I/O benchmark graph](../_media/benchmark_io.png)

Comparing Constellation on GCP with GKE, you see that Constellation offers similar read/write speeds in all scenarios.

Constellation on Azure and AKS, however, partially differ. In read-write mixes, Constellation on Azure outperforms AKS in terms of I/O. On full-write access, Constellation and AKS have the same speed.

## Conclusion

Despite providing substantial [security benefits](./security-benefits.md), Constellation overall only has a slight performance overhead over the managed Kubernetes offerings AKS and GKE. Constellation is on par in most benchmarks, but is slightly slower in certain scenarios due to network and storage encryption. When it comes to API latencies, Constellation even outperforms the less security-focused competition.

11
docs/versioned_docs/version-2.1/overview/product.md
Normal file
@ -0,0 +1,11 @@
# Product features

Constellation is a Kubernetes engine that aims to provide the best possible data security in combination with enterprise-grade scalability and reliability features---and a smooth user experience.

From a security perspective, Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and corresponding security features, which shield your entire cluster from the underlying infrastructure.

From an operational perspective, Constellation provides the following key features:

* **Native support for different clouds**: Constellation works on Microsoft Azure and Google Cloud Platform (GCP). Support for Amazon Web Services (AWS) and OpenStack-based environments is coming with a future release. Constellation securely interfaces with the cloud infrastructure to provide [cluster autoscaling](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), [dynamic persistent volumes](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), and [service load balancing](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
* **High availability**: Constellation uses a [multi-master architecture](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) with a [stacked etcd topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology) to ensure high availability.
* **Integrated Day-2 operations**: Constellation lets you securely [upgrade](../workflows/upgrade.md) your cluster to a new release. It also lets you securely [recover](../workflows/recovery.md) a failed cluster. Both with a single command.

@ -0,0 +1,22 @@
# Security benefits and threat model

Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and shields entire Kubernetes deployments from the infrastructure. More concretely, Constellation decreases the size of the trusted computing base (TCB) of a Kubernetes deployment. The TCB is the totality of elements in a computing environment that must be trusted not to be compromised. A smaller TCB results in a smaller attack surface. The following diagram shows how Constellation removes the *cloud & datacenter infrastructure* and the *physical hosts*, including the hypervisor, the host OS, and other components, from the TCB (red). Inside the confidential context (green), Kubernetes remains part of the TCB, but its integrity is attested and can be [verified](../workflows/verify-cluster.md).

![TCB comparison](../_media/tcb.svg)

Given this background, the following describes the concrete threat classes that Constellation addresses.

## Insider access

Employees and third-party contractors of cloud service providers (CSPs) have access to different layers of the cloud infrastructure. This opens up a large attack surface where workloads and data can be read, copied, or manipulated. With Constellation, Kubernetes deployments are shielded from the infrastructure, and thus such accesses are prevented.

## Infrastructure-based attacks

Malicious cloud users ("hackers") may break out of their tenancy and access other tenants' data. Advanced attackers may even be able to establish a permanent foothold within the infrastructure and access data over a longer period. Analogously to the *insider access* scenario, Constellation also prevents access to a deployment's data in this scenario.

## Supply chain attacks

Supply chain security has recently received a lot of attention due to an [increasing number of recorded attacks](https://www.enisa.europa.eu/news/enisa-news/understanding-the-increase-in-supply-chain-security-attacks). For instance, a malicious actor could attempt to tamper with Constellation node images (including Kubernetes and other software) before they're loaded in the confidential VMs of a cluster. Constellation uses [remote attestation](../architecture/attestation.md) in conjunction with public [transparency logs](../workflows/verify-cli.md) to prevent this.

In the future, Constellation will extend this feature to customer workloads. This will enable cluster owners to create auditable policies that precisely define which containers can run in a given deployment.
422
docs/versioned_docs/version-2.1/reference/cli.md
Normal file
@ -0,0 +1,422 @@
# CLI reference

<!-- This file is generated by constellation/hack/clidocgen via update-cli-reference.yml workflow. Don't edit manually. -->

Use the Constellation CLI to create and manage your clusters.

Usage:

```
constellation [command]
```

Commands:

* [config](#constellation-config): Work with the Constellation configuration file
  * [generate](#constellation-config-generate): Generate a default configuration file
  * [fetch-measurements](#constellation-config-fetch-measurements): Fetch measurements for configured cloud provider and image
  * [instance-types](#constellation-config-instance-types): Print the supported instance types for all cloud providers
* [create](#constellation-create): Create instances on a cloud platform for your Constellation cluster
* [init](#constellation-init): Initialize the Constellation cluster
* [mini](#constellation-mini): Manage mini Constellation clusters
  * [up](#constellation-mini-up): Create and initialize a new mini Constellation cluster
  * [down](#constellation-mini-down): Destroy a mini Constellation cluster
* [verify](#constellation-verify): Verify the confidential properties of a Constellation cluster
* [upgrade](#constellation-upgrade): Plan and perform an upgrade of a Constellation cluster
  * [plan](#constellation-upgrade-plan): Plan an upgrade of a Constellation cluster
  * [execute](#constellation-upgrade-execute): Execute an upgrade of a Constellation cluster
* [recover](#constellation-recover): Recover a completely stopped Constellation cluster
* [terminate](#constellation-terminate): Terminate a Constellation cluster
* [version](#constellation-version): Display version of this CLI

## constellation config

Work with the Constellation configuration file

### Synopsis

Work with the Constellation configuration file.

### Options

```
-h, --help   help for config
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation config generate

Generate a default configuration file

### Synopsis

Generate a default configuration file for your selected cloud provider.

```
constellation config generate {aws|azure|gcp|qemu} [flags]
```

### Options

```
-f, --file string   path to output file, or '-' for stdout (default "constellation-conf.yaml")
-h, --help          help for generate
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation config fetch-measurements

Fetch measurements for configured cloud provider and image

### Synopsis

Fetch measurements for configured cloud provider and image. A config needs to be generated first!

```
constellation config fetch-measurements [flags]
```

### Options

```
-h, --help                   help for fetch-measurements
-s, --signature-url string   alternative URL to fetch measurements' signature from
-u, --url string             alternative URL to fetch measurements from
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation config instance-types

Print the supported instance types for all cloud providers

### Synopsis

Print the supported instance types for all cloud providers.

```
constellation config instance-types [flags]
```

### Options

```
-h, --help   help for instance-types
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation create

Create instances on a cloud platform for your Constellation cluster

### Synopsis

Create instances on a cloud platform for your Constellation cluster.

```
constellation create [flags]
```

### Options

```
-c, --control-plane-nodes int   number of control-plane nodes (required)
-h, --help                      help for create
    --name string               create the cluster with the specified name (default "constell")
-w, --worker-nodes int          number of worker nodes (required)
-y, --yes                       create the cluster without further confirmation
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation init

Initialize the Constellation cluster

### Synopsis

Initialize the Constellation cluster. Start your confidential Kubernetes.

```
constellation init [flags]
```

### Options

```
    --conformance            enable conformance mode
    --endpoint string        endpoint of the bootstrapper, passed as HOST[:PORT]
-h, --help                   help for init
    --master-secret string   path to base64-encoded master secret
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation mini

Manage mini Constellation clusters

### Synopsis

Manage mini Constellation clusters.

### Options

```
-h, --help   help for mini
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation mini up

Create and initialize a new mini Constellation cluster

### Synopsis

Create and initialize a new mini Constellation cluster.
A mini cluster consists of a single control-plane and worker node, hosted using QEMU/KVM.

```
constellation mini up [flags]
```

### Options

```
-h, --help   help for up
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation mini down

Destroy a mini Constellation cluster

### Synopsis

Destroy a mini Constellation cluster.

```
constellation mini down [flags]
```

### Options

```
-h, --help   help for down
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation verify

Verify the confidential properties of a Constellation cluster

### Synopsis

Verify the confidential properties of a Constellation cluster.

If arguments aren't specified, values are read from `constellation-id.json`.

```
constellation verify [flags]
```

### Options

```
    --cluster-id string      expected cluster identifier
-h, --help                   help for verify
-e, --node-endpoint string   endpoint of the node to verify, passed as HOST[:PORT]
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation upgrade

Plan and perform an upgrade of a Constellation cluster

### Synopsis

Plan and perform an upgrade of a Constellation cluster.

### Options

```
-h, --help   help for upgrade
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation upgrade plan

Plan an upgrade of a Constellation cluster

### Synopsis

Plan an upgrade of a Constellation cluster by fetching compatible image versions and their measurements.

```
constellation upgrade plan [flags]
```

### Options

```
-f, --file string   path to output file, or '-' for stdout (omit for interactive mode)
-h, --help          help for plan
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation upgrade execute

Execute an upgrade of a Constellation cluster

### Synopsis

Execute an upgrade of a Constellation cluster by applying the chosen configuration.

```
constellation upgrade execute [flags]
```

### Options

```
-h, --help   help for execute
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation recover

Recover a completely stopped Constellation cluster

### Synopsis

Recover a Constellation cluster by sending a recovery key to an instance in the boot stage.
This is only required if instances restart without other instances available for bootstrapping.

```
constellation recover [flags]
```

### Options

```
-e, --endpoint string        endpoint of the instance, passed as HOST[:PORT]
-h, --help                   help for recover
    --master-secret string   path to master secret file (default "constellation-mastersecret.json")
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation terminate

Terminate a Constellation cluster

### Synopsis

Terminate a Constellation cluster. The cluster can't be started again, and all persistent storage will be lost.

```
constellation terminate [flags]
```

### Options

```
-h, --help   help for terminate
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```

## constellation version

Display version of this CLI

### Synopsis

Display version of this CLI.

```
constellation version [flags]
```

### Options

```
-h, --help   help for version
```

### Options inherited from parent commands

```
--config string   path to the configuration file (default "constellation-conf.yaml")
```
72
docs/versioned_docs/version-2.1/workflows/create.md
Normal file
@ -0,0 +1,72 @@
# Create your cluster

Creating your cluster requires two steps:

1. Creating the necessary resources in your cloud environment
2. Bootstrapping the Constellation cluster and setting up a connection

See the [architecture](../architecture/orchestration.md) section for details on the inner workings of this process.

## The *create* step

This step creates the necessary resources for your cluster in your cloud environment.

### Configuration

Generate a configuration file for your cloud service provider (CSP):

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

```bash
constellation config generate azure
```

</tabItem>
<tabItem value="gcp" label="GCP">

```bash
constellation config generate gcp
```

</tabItem>
</tabs>

This creates the file `constellation-conf.yaml` in the current directory. [Fill in your CSP-specific information](../getting-started/first-steps.md#create-a-cluster) before you continue.

Next, download the trusted measurements for your configured image:

```bash
constellation config fetch-measurements
```

For details, see the [verification section](../workflows/verify-cluster.md).

### Create

Choose the initial size of your cluster.
The following command creates a cluster with one control-plane and two worker nodes:

```bash
constellation create --control-plane-nodes 1 --worker-nodes 2
```

For details on the flags, consult the command help via `constellation create -h`.

*create* stores your cluster's configuration in a file named [`constellation-state.json`](../architecture/orchestration.md#installation-process) in your current directory.

## The *init* step

The following command initializes and bootstraps your cluster:

```bash
constellation init
```

Next, configure `kubectl` for your cluster:

```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
```
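
To confirm that the connection works, you can list the cluster's nodes; a quick sanity check:

```bash
kubectl get nodes
```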

🏁 That's it. You've successfully created a Constellation cluster.
1
docs/versioned_docs/version-2.1/workflows/lb.md
Normal file
@ -0,0 +1 @@
# Expose services
117
docs/versioned_docs/version-2.1/workflows/recovery.md
Normal file
@ -0,0 +1,117 @@
# Recover your cluster

Recovery of a Constellation cluster means getting it back into a healthy state after too many concurrent node failures in the control plane.
Reasons for an unhealthy cluster can vary from a power outage or planned reboot to the migration of nodes and regions.
Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [state disk](../architecture/images.md#state-disk).

Constellation provides a recovery mechanism for cases where the control plane has failed and is unable to replace nodes.
The `constellation recover` command securely connects to all nodes in need of recovery using [attested TLS](../architecture/attestation.md#attested-tls-atls) and provides them with the keys to decrypt their state disks and continue booting.

## Identify unhealthy clusters

The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, this is first observed when the Kubernetes API server becomes unresponsive.

You can check the health status of the nodes via the cloud service provider (CSP).
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each CSP.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

In the Azure portal, find the cluster's resource group.
Inside the resource group, open the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>`.
On the left, go to **Settings** > **Instances** and check that enough members are in a *Running* state.

Second, check the boot logs of these *Instances*.
In the scale set's *Instances* view, open the details page of the desired instance.
On the left, go to **Support + troubleshooting** > **Serial console**.

In the serial console output, search for `Waiting for decryption key`.
Output similar to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T09:56:41Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"azure"}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
```

The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["10.9.0.5:30090","10.9.0.6:30090"]}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"10.9.0.5:30090"}
{"level":"WARN","ts":"2022-09-08T09:57:03Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.9.0.5:30090: i/o timeout\"","endpoint":"10.9.0.5:30090"}
{"level":"INFO","ts":"2022-09-08T09:57:03Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"10.9.0.6:30090"}
{"level":"WARN","ts":"2022-09-08T09:57:23Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.9.0.6:30090: i/o timeout\"","endpoint":"10.9.0.6:30090"}
{"level":"ERROR","ts":"2022-09-08T09:57:23Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```

This means that you have to recover the node manually.

</tabItem>
<tabItem value="gcp" label="GCP">

First, check that the control plane *Instance Group* has enough members in a *Ready* state.
In the GCP Console, go to **Instance Groups** and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.

Second, check the status of the *VM Instances*.
Go to **VM Instances** and open the details of the desired instance.
Check the serial console output of that instance by opening the **Logs** > **Serial port 1 (console)** page:

![GCP portal serial console link](../_media/recovery-gcp-serial-console-link.png)

In the serial console output, search for `Waiting for decryption key`.
Output similar to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"gcp"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```

The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["192.168.178.4:30090","192.168.178.2:30090"]}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"192.168.178.4:30090"}
{"level":"WARN","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.4:30090: connect: connection refused\"","endpoint":"192.168.178.4:30090"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"192.168.178.2:30090"}
{"level":"WARN","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.2:30090: i/o timeout\"","endpoint":"192.168.178.2:30090"}
{"level":"ERROR","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```

This means that you have to recover the node manually.

</tabItem>
</tabs>

## Recover a cluster

Recovering a cluster requires the following parameters:

* The `constellation-id.json` file in your working directory or the cluster's load balancer IP address
* The master secret of the cluster

A cluster can be recovered like this:

```bash
$ constellation recover --master-secret constellation-mastersecret.json
Pushed recovery key.
Pushed recovery key.
Pushed recovery key.
Recovered 3 control-plane nodes.
```
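
If the `constellation-id.json` file isn't in your working directory, you can pass the cluster's load balancer endpoint explicitly via the documented `-e`/`--endpoint` flag. A sketch, where `<LOAD_BALANCER_IP>` is a placeholder for your cluster's load balancer IP address:

```bash
constellation recover -e <LOAD_BALANCER_IP> --master-secret constellation-mastersecret.json
```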

In the serial console output of the node you'll see output similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T10:26:59Z","logger":"recoveryServer","caller":"recoveryserver/server.go:93","msg":"Received recover call"}
{"level":"INFO","ts":"2022-09-08T10:26:59Z","logger":"recoveryServer","caller":"recoveryserver/server.go:125","msg":"Received state disk key and measurement secret, shutting down server"}
{"level":"INFO","ts":"2022-09-08T10:26:59Z","logger":"recoveryServer.gRPC","caller":"zap/server_interceptors.go:61","msg":"finished streaming call with code OK","grpc.start_time":"2022-09-08T10:26:59Z","system":"grpc","span.kind":"server","grpc.service":"recoverproto.API","grpc.method":"Recover","peer.address":"192.0.2.3:41752","grpc.code":"OK","grpc.time_ms":15.701}
{"level":"INFO","ts":"2022-09-08T10:27:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:87","msg":"RejoinClient stopped"}
```
94
docs/versioned_docs/version-2.1/workflows/scale.md
Normal file
@ -0,0 +1,94 @@
# Scale your cluster

Constellation provides all features of a Kubernetes cluster, including scaling and autoscaling.

## Worker node scaling

### Autoscaling

Constellation comes with autoscaling disabled by default. To enable autoscaling, find the scaling group of worker nodes:

```bash
worker_group=$(kubectl get scalinggroups -o json | jq -r '.items[].metadata.name | select(contains("worker"))')
echo "The name of your worker scaling group is '$worker_group'"
```

Then, patch the `autoscaling` field of the scaling group resource to `true`:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
```

The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run.
You can configure the minimum and maximum number of worker nodes in the scaling group by patching the `min` or `max` fields of the scaling group resource:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
```

The cluster autoscaler will now never provision more than 5 worker nodes.
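
Setting a lower bound works analogously; for example, to keep at least two worker nodes around at all times:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"min": 2}}' --type='merge'
```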

If you want to see the autoscaling in action, try to add a deployment with a lot of replicas, like the following Nginx deployment. The number of replicas needed to trigger the autoscaling depends on the size and count of your worker nodes. Wait for the rollout of the deployment to finish and compare the number of worker nodes before and after the deployment:

```bash
kubectl create deployment nginx --image=nginx --replicas 150
kubectl -n kube-system get nodes
kubectl rollout status deployment nginx
kubectl -n kube-system get nodes
```

### Manual scaling

Alternatively, you can manually scale your cluster up or down:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. Find your Constellation resource group.
2. Select the `scale-set-workers`.
3. Go to **settings** and **scaling**.
4. Set the new **instance count** and **save**.

</tabItem>
<tabItem value="gcp" label="GCP">

1. In Compute Engine, go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **worker** instance group.
3. Set the new **number of instances** and **save**.

</tabItem>
</tabs>
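
If you prefer the command line over the cloud consoles, the CSPs' own CLIs can resize the same groups. A sketch, where the resource group, scale set, instance group, and zone names are placeholders you need to substitute:

```bash
# Azure: resize the worker scale set
az vmss scale --resource-group <resource-group> --name <scale-set-workers> --new-capacity 4

# GCP: resize the worker managed instance group
gcloud compute instance-groups managed resize <worker-instance-group> --size 4 --zone <zone>
```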

## Control-plane node scaling

Control-plane nodes can **only be scaled manually, and only scaled up**!

To increase the number of control-plane nodes, follow these steps:

<tabs groupId="csp">

<tabItem value="azure" label="Azure">

1. Find your Constellation resource group.
2. Select the `scale-set-controlplanes`.
3. Go to **settings** and **scaling**.
4. Set the new (increased) **instance count** and **save**.

</tabItem>
<tabItem value="gcp" label="GCP">

1. In Compute Engine, go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **control-plane** instance group.
3. Set the new (increased) **number of instances** and **save**.

</tabItem>
</tabs>

If you scale down the number of control-plane nodes, the removed nodes won't be able to exit the `etcd` cluster correctly. This will endanger the quorum that's required to run a stable Kubernetes control plane.
59
docs/versioned_docs/version-2.1/workflows/ssh.md
Normal file
@ -0,0 +1,59 @@
# Manage SSH keys

Constellation allows you to create UNIX users that can connect to both control-plane and worker nodes over SSH. As the system partitions are read-only, users need to be re-created upon each restart of a node. This is automated by the *Access Manager*.

On cluster initialization, users defined in the `ssh-users` section of the Constellation configuration file are created and stored in the `ssh-users` ConfigMap in the `kube-system` namespace. For a running cluster, you can add or remove users by modifying the ConfigMap and restarting a node.

## Access Manager

The Access Manager supports all OpenSSH key types: RSA, ECDSA (using the `nistp256`, `nistp384`, or `nistp521` curves), and Ed25519.

:::note
All users are automatically created with `sudo` capabilities.
:::

The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` and afterward running a `pause` container to avoid automatic restarts. While technically killing the Pod and letting it restart works for the (re-)creation of users, it doesn't automatically remove users. Thus, a node restart is required after making changes to the ConfigMap.

When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`.

You can update the ConfigMap with:

```bash
kubectl edit configmap -n kube-system ssh-users
```

Alternatively, you can modify and re-apply it with the definition listed in the examples.

## Examples

You can add a user `myuser` in `constellation-config.yaml` like this:

```yaml
# Create SSH users on Constellation nodes upon the first initialization of the cluster.
sshUsers:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
```

This user is then created upon the first initialization of the cluster and translated into a ConfigMap, as shown below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-users
  namespace: kube-system
data:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
```

You can add users by adding `data` entries:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-users
  namespace: kube-system
data:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
  anotheruser: "ssh-ed25519 AAAA...CldH"
```
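
Instead of editing the ConfigMap interactively, you can also patch it in one step; a sketch using the example key from above:

```bash
kubectl patch configmap ssh-users -n kube-system --type merge \
  -p '{"data":{"anotheruser":"ssh-ed25519 AAAA...CldH"}}'
```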

Similarly, removing any entries causes users to be evicted upon the next restart of the node.
271
docs/versioned_docs/version-2.1/workflows/storage.md
Normal file
@ -0,0 +1,271 @@
# Use persistent storage

Persistent storage in Kubernetes requires cloud-specific configuration.
For abstraction of container storage, Kubernetes offers [volumes](https://kubernetes.io/docs/concepts/storage/volumes/), allowing users to mount storage solutions directly into containers.
The [Container Storage Interface (CSI)](https://kubernetes-csi.github.io/docs/) is the standard interface for exposing arbitrary block and file storage systems into containers in Kubernetes.
Cloud service providers (CSPs) offer their own CSI-based solutions for cloud storage.

### Confidential storage

Most cloud storage solutions support encryption, such as [GCE Persistent Disks (PD)](https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek).
Constellation supports the available CSI-based storage options for Kubernetes engines in Azure and GCP.
However, their encryption takes place in the storage backend and is managed by the CSP.
Thus, using the default CSI drivers for these storage types means trusting the CSP with your persistent data.

To address this, Constellation provides CSI drivers for Azure Disk and GCE PD, offering [encryption on the node level](../architecture/keys.md#storage-encryption). They enable transparent encryption for persistent volumes without needing to trust the cloud backend. Plaintext data never leaves the confidential VM context, offering you confidential storage.

For more details, see [encrypted persistent storage](../architecture/encrypted-storage.md).

## CSI drivers

Constellation supports the following drivers, which offer node-level encryption and optional integrity protection.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

**Constellation CSI driver for Azure Disk**:
Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for more information. Since Azure Disks are mounted as `ReadWriteOnce`, they're only available to a single pod.

</tabItem>
<tabItem value="gcp" label="GCP">

**Constellation CSI driver for GCP Persistent Disk**:
Mount [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
This includes support for [volume snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots), which let you create copies of your volume at a specific point in time.
You can use them to bring a volume back to a prior state or provision new volumes.
Follow the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.

</tabItem>
</tabs>

In case the options above aren't a suitable solution for you, Constellation is compatible with all other CSI-based storage options. For example, you can use [Azure Files](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction) or [GCP Filestore](https://cloud.google.com/filestore) with Constellation out of the box. However, Constellation doesn't yet provide transparent node-level encryption for these storage types.

## Installation

The following installation guide gives an overview of how to securely use CSI-based cloud storage for persistent volumes in Constellation.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. Install the CSI driver:

   ```bash
   helm install azuredisk-csi-driver https://raw.githubusercontent.com/edgelesssys/constellation-azuredisk-csi-driver/main/charts/edgeless/latest/azuredisk-csi-driver.tgz \
     --namespace kube-system \
     --set linux.distro=fedora \
     --set controller.replicas=1
   ```

2. Create a [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for your driver

   A storage class configures the driver responsible for provisioning storage for persistent volume claims.
   A storage class only needs to be created once and can then be used by multiple volumes.
   The following snippet creates a simple storage class using [Standard SSDs](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) as the backing storage device when the first Pod claiming the volume is created.

   ```bash
   cat <<EOF | kubectl apply -f -
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: encrypted-storage
     annotations:
       storageclass.kubernetes.io/is-default-class: "true"
   provisioner: azuredisk.csi.confidential.cloud
   parameters:
     skuName: StandardSSD_LRS
   reclaimPolicy: Delete
   volumeBindingMode: WaitForFirstConsumer
   EOF
   ```

</tabItem>
<tabItem value="gcp" label="GCP">

1. Install the CSI driver:

   ```bash
   kubectl apply -k github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/overlays/edgeless/latest
   ```

2. Create a [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for your driver

   A storage class configures the driver responsible for provisioning storage for persistent volume claims.
   A storage class only needs to be created once and can then be used by multiple volumes.
   The following snippet creates a simple storage class using [balanced persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs) as the backing storage device when the first Pod claiming the volume is created.

   ```bash
   cat <<EOF | kubectl apply -f -
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: encrypted-storage
     annotations:
       storageclass.kubernetes.io/is-default-class: "true"
   provisioner: gcp.csi.confidential.cloud
   parameters:
     type: pd-standard
   reclaimPolicy: Delete
   volumeBindingMode: WaitForFirstConsumer
   EOF
   ```

</tabItem>
</tabs>

:::info

By default, integrity protection is disabled for performance reasons. If you want to enable integrity protection, add `csi.storage.k8s.io/fstype: ext4-integrity` to `parameters`. Alternatively, you can use another filesystem by specifying its type with the suffix `-integrity`. Note that volume expansion isn't supported for integrity-protected disks.

:::
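
As an illustration, an integrity-protected variant of the storage class above might look like the following sketch. It only adds the parameter named in the note; the class name is hypothetical, and the GCP provisioner is used as an example:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: integrity-protected-storage  # hypothetical name for illustration
provisioner: gcp.csi.confidential.cloud
parameters:
  type: pd-standard
  csi.storage.k8s.io/fstype: ext4-integrity
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```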

3. Create a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)

   A [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) is a request for storage with certain properties.
   It can refer to a storage class.
   The following creates a persistent volume claim, requesting 20 GB of storage via the previously created storage class:

   ```bash
   cat <<EOF | kubectl apply -f -
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc-example
     namespace: default
   spec:
     accessModes:
     - ReadWriteOnce
     storageClassName: encrypted-storage
     resources:
       requests:
         storage: 20Gi
   EOF
   ```

4. Create a Pod with persistent storage

   You can assign a persistent volume claim to an application in need of persistent storage.
   The mounted volume will persist across restarts.
   The following creates a pod that uses the previously created persistent volume claim:

   ```bash
   cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: Pod
   metadata:
     name: web-server
     namespace: default
   spec:
     containers:
     - name: web-server
       image: nginx
       volumeMounts:
       - mountPath: /var/lib/www/html
         name: mypvc
     volumes:
     - name: mypvc
       persistentVolumeClaim:
         claimName: pvc-example
         readOnly: false
   EOF
   ```
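
To confirm that the claim was bound and the pod can use the volume, you can check their status:

```bash
kubectl get pvc pvc-example
kubectl get pod web-server
```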

### Set the default storage class

The storage classes in the examples above are annotated to become the default storage class. The default storage class is responsible for all persistent volume claims that don't explicitly request a `storageClassName`. In case you need to change the default, follow the steps below:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. List the storage classes in your cluster:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                      PROVISIONER                       AGE
   some-storage (default)    disk.csi.azure.com                1d
   encrypted-storage         azuredisk.csi.confidential.cloud  1d
   ```

   The default storage class is marked by `(default)`.

2. Mark the old default storage class as non-default

   If you previously used another storage class as the default, you will have to remove that annotation:

   ```bash
   kubectl patch storageclass <name-of-old-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
   ```

3. Mark the new class as the default

   ```bash
   kubectl patch storageclass <name-of-new-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
   ```

4. Verify that your chosen storage class is the default:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                          PROVISIONER                       AGE
   some-storage                  disk.csi.azure.com                1d
   encrypted-storage (default)   azuredisk.csi.confidential.cloud  1d
   ```

</tabItem>
<tabItem value="gcp" label="GCP">

1. List the storage classes in your cluster:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                      PROVISIONER                  AGE
   some-storage (default)    pd.csi.storage.gke.io        1d
   encrypted-storage         gcp.csi.confidential.cloud   1d
   ```

   The default storage class is marked by `(default)`.

2. Mark the old default storage class as non-default

   If you previously used another storage class as the default, you will have to remove that annotation:

   ```bash
   kubectl patch storageclass <name-of-old-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
   ```

3. Mark the new class as the default

   ```bash
   kubectl patch storageclass <name-of-new-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
   ```

4. Verify that your chosen storage class is the default:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                          PROVISIONER                  AGE
   some-storage                  pd.csi.storage.gke.io        1d
   encrypted-storage (default)   gcp.csi.confidential.cloud   1d
   ```

</tabItem>
</tabs>
25
docs/versioned_docs/version-2.1/workflows/terminate.md
Normal file
@ -0,0 +1,25 @@
# Terminate your cluster

You can terminate your cluster using the CLI. For this, you need the state file of your running cluster named `constellation-state.json` in the current directory.

:::danger

All ephemeral storage and state of your cluster will be lost. Make sure any data is safely stored in persistent storage. Constellation can recreate your cluster and the associated encryption keys, but it won't back up your application data automatically.

:::

Terminate the cluster by running:

```bash
constellation terminate
```

This deletes all resources created by Constellation in your cloud environment.
All local files created by the `create` and `init` commands are deleted as well, except for `constellation-mastersecret.json` and the configuration file.

:::caution

Termination can fail if additional resources have been created that depend on the ones managed by Constellation. In this case, you need to delete these additional resources manually. Afterward, run the `terminate` command again to continue the termination process of the cluster.

:::
39
docs/versioned_docs/version-2.1/workflows/troubleshooting.md
Normal file
@ -0,0 +1,39 @@
# Troubleshooting

This section helps you find problems when working with Constellation.

## Cloud logging

To provide information during the early stages of a node's boot process, Constellation logs messages to the cloud providers' log systems. Since these offerings **aren't** confidential, only generic information without any sensitive values is stored. This provides administrators with a high-level understanding of the current state of a node.

You can view this information in the following places:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. In your Azure subscription, find the Constellation resource group.
2. Inside the resource group, find the Application Insights resource called `constellation-insights-*`.
3. On the left-hand side, go to `Logs`, which is located in the section `Monitoring`.
4. Close the Queries page if it pops up.
5. In the query text field, type in `traces`, and click `Run`.

To **find the disk UUIDs**, use the following query: `traces | where message contains "Disk UUID"`

</tabItem>
<tabItem value="gcp" label="GCP">

1. Select the project that hosts Constellation.
2. Go to the `Compute Engine` service.
3. On the right-hand side of a VM entry, select `More Actions` (a stacked ellipsis).
4. Select `View logs`.

To **find the disk UUIDs**, use the following query: `resource.type="gce_instance" text_payload=~"Disk UUID:.*\n" logName=~".*/constellation-boot-log"`
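
If you prefer the command line, the same filter can be passed to the `gcloud` CLI; a sketch, reusing the filter from the query above verbatim:

```bash
gcloud logging read \
  'resource.type="gce_instance" text_payload=~"Disk UUID:.*\n" logName=~".*/constellation-boot-log"' \
  --limit 10
```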

:::info

Constellation uses the default bucket to store logs. Its [default retention period is 30 days](https://cloud.google.com/logging/quotas#logs_retention_periods).

:::

</tabItem>
</tabs>
53
docs/versioned_docs/version-2.1/workflows/trusted-launch.md
Normal file
@ -0,0 +1,53 @@
# Use Azure trusted launch VMs

Constellation also supports [trusted launch VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/trusted-launch) on Microsoft Azure. Trusted launch VMs don't offer the same level of security as Confidential VMs, but are available in more regions and in larger quantities. The main difference between trusted launch VMs and normal VMs is that the former offer vTPM-based remote attestation. When used with trusted launch VMs, Constellation relies on vTPM-based remote attestation to verify nodes.

:::caution

Trusted launch VMs don't provide runtime encryption and don't keep the cloud service provider (CSP) out of your trusted computing base.

:::

Constellation supports trusted launch VMs with instance types `Standard_D*_v4` and `Standard_E*_v4`. Run `constellation config instance-types` for a list of all supported instance types.

## VM images

Azure currently doesn't support [community galleries for trusted launch VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/share-gallery-community). Thus, you need to manually import the Constellation node image into your cloud subscription.

The latest image is available at [https://public-edgeless-constellation.s3.us-east-2.amazonaws.com/azure_image_exports/2.0.0](https://public-edgeless-constellation.s3.us-east-2.amazonaws.com/azure_image_exports/2.0.0). Adjust the version number at the end of the URL to download a different version.

After you've downloaded the image, create a resource group `constellation-images` in your Azure subscription and import the image.
You can use a script to do this:

```bash
wget https://raw.githubusercontent.com/edgelesssys/constellation/main/hack/importAzure.sh
chmod +x importAzure.sh
AZURE_IMAGE_VERSION=2.0.0 AZURE_RESOURCE_GROUP_NAME=constellation-images AZURE_IMAGE_FILE=./2.0.0 ./importAzure.sh
```

The script creates the following resources:

1. A new image gallery with the default name `constellation-import`
2. A new image definition with the default name `constellation`
3. The actual image with the provided version, in this case `2.0.0`

Once the import is completed, use the `ID` of the image version in your `constellation-conf.yaml` for the `image` field. Set `confidentialVM` to `false`.

Fetch the image measurements:

```bash
IMAGE_VERSION=2.0.0
URL=https://public-edgeless-constellation.s3.us-east-2.amazonaws.com//CommunityGalleries/ConstellationCVM-b3782fa0-0df7-4f2f-963e-fc7fc42663df/Images/constellation/Versions/$IMAGE_VERSION/measurements.yaml
constellation config fetch-measurements -u $URL -s $URL.sig
```

:::info

The [constellation create](create.md) command will issue a warning because manually imported images aren't recognized as production-grade images:

```shell-session
Configured image doesn't look like a released production image. Double check image before deploying to production.
```

Please ignore this warning.

:::
35
docs/versioned_docs/version-2.1/workflows/upgrade.md
Normal file
@ -0,0 +1,35 @@
# Upgrade your cluster

Constellation provides an easy way to upgrade to the next release.
This involves choosing a new VM image to use for all nodes in the cluster and updating the cluster's expected measurements.

## Plan the upgrade

If you don't already know the image you want to upgrade to, use the `upgrade plan` command to pull a list of available updates.

```bash
constellation upgrade plan
```

The command lets you interactively choose from a list of available updates and prepares your Constellation config file for the next step.

To use the command in scripts, use the `--file` flag to compile the available options into a YAML file.
You can then set the chosen upgrade option in your Constellation config file.
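
For example, to write the options to a file instead of choosing interactively (the file name is arbitrary):

```bash
constellation upgrade plan --file upgrade-options.yaml
```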

:::caution

`constellation upgrade plan` only works for official Edgeless release images.
If your cluster is using a custom image, the Constellation CLI will fail to find compatible images.
However, you may still use the `upgrade execute` command by manually selecting a compatible image and setting it in your config file.

:::

## Execute the upgrade

Once your config file has been prepared with the new image and measurements, use the `upgrade execute` command to initiate the upgrade.

```bash
constellation upgrade execute
```

After the command has finished, the cluster automatically replaces old nodes using a rolling update strategy to ensure no downtime of the control or data plane.
90
docs/versioned_docs/version-2.1/workflows/verify-cli.md
Normal file
@ -0,0 +1,90 @@
# Verify the CLI

Edgeless Systems uses [sigstore](https://www.sigstore.dev/) to ensure supply-chain security for the Constellation CLI and node images ("artifacts"). sigstore consists of three components: [Cosign](https://docs.sigstore.dev/cosign/overview), [Rekor](https://docs.sigstore.dev/rekor/overview), and Fulcio. Edgeless Systems uses Cosign to sign artifacts. All signatures are uploaded to the public Rekor transparency log, which resides at https://rekor.sigstore.dev/.

:::note
The public key for Edgeless Systems' long-term code-signing key is:

```
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEf8F1hpmwE+YCFXzjGtaQcrL6XZVT
JmEe5iSLvG1SyQSAew7WdMKF6o9t8e2TFuCkzlOhhlws2OHWbiFZnFWCFw==
-----END PUBLIC KEY-----
```

The public key is also available for download at https://edgeless.systems/es.pub and in the Twitter profile [@EdgelessSystems](https://twitter.com/EdgelessSystems).
:::

The Rekor transparency log is a public append-only ledger that verifies and records signatures and associated metadata. It enables everyone to observe the sequence of (software) signatures issued by Edgeless Systems and many other parties, and it allows for the public identification of dubious or malicious signatures.

You should always ensure that (1) your CLI executable was signed with the private key corresponding to the above public key and that (2) there is a corresponding entry in the Rekor transparency log. Both can be done as described in the following.

:::info
You don't need to verify the Constellation node images. This is done automatically by your CLI and the rest of Constellation.
:::

## Verify the signature

First, [install the Cosign CLI](https://docs.sigstore.dev/cosign/installation). Next, [download](https://github.com/edgelesssys/constellation/releases) and verify the signature that accompanies your CLI executable, for example:

```shell-session
$ cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64

Verified OK
```

The above performs an offline verification of the provided public key, signature, and executable. To also verify that a corresponding entry exists in the public Rekor transparency log, add the variable `COSIGN_EXPERIMENTAL=1`:

```shell-session
$ COSIGN_EXPERIMENTAL=1 cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64

tlog entry verified with uuid: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13 index: 3477047
Verified OK
```

🏁 You now know that your CLI executable was officially released and signed by Edgeless Systems.

## Optional: Manually inspect the transparency log

To further inspect the public Rekor transparency log, [install the Rekor CLI](https://docs.sigstore.dev/rekor/installation). A search for the CLI executable should give a single UUID. (Note that this UUID contains the UUID from the previous `cosign` command.)

```shell-session
$ rekor-cli search --artifact constellation-linux-amd64

Found matching entries (listed by UUID):
362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
```

With this UUID you can get the full entry from the transparency log:

```shell-session
$ rekor-cli get --uuid=362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13

LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d
Index: 3477047
IntegratedTime: 2022-09-12T22:28:16Z
UUID: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
Body: {
  "HashedRekordObj": {
    "data": {
      "hash": {
        "algorithm": "sha256",
        "value": "40e137b9b9b8204d672642fd1e181c6d5ccb50cfc5cc7fcbb06a8c2c78f44aff"
      }
    },
    "signature": {
      "content": "MEUCIQCSER3mGj+j5Pr2kOXTlCIHQC3gT30I7qkLr9Awt6eUUQIgcLUKRIlY50UN8JGwVeNgkBZyYD8HMxwC/LFRWoMn180=",
      "publicKey": {
        "content": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFZjhGMWhwbXdFK1lDRlh6akd0YVFjckw2WFpWVApKbUVlNWlTTHZHMVN5UVNBZXc3V2RNS0Y2bzl0OGUyVEZ1Q2t6bE9oaGx3czJPSFdiaUZabkZXQ0Z3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg=="
      }
    }
  }
}
```

The field `publicKey` should contain Edgeless Systems' public key in Base64 encoding.
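
You can decode the embedded key locally and compare it with the key shown above; for example:

```bash
# decodes the publicKey content from the transparency log entry back into PEM form
echo "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFZjhGMWhwbXdFK1lDRlh6akd0YVFjckw2WFpWVApKbUVlNWlTTHZHMVN5UVNBZXc3V2RNS0Y2bzl0OGUyVEZ1Q2t6bE9oaGx3czJPSFdiaUZabkZXQ0Z3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==" | base64 -d
```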

You can get an exhaustive list of artifact signatures issued by Edgeless Systems via the following command:

```bash
rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509
```

Edgeless Systems monitors this list to detect potential unauthorized use of its private key.
48
docs/versioned_docs/version-2.1/workflows/verify-cluster.md
Normal file
@ -0,0 +1,48 @@
|
||||
# Verify your cluster

Constellation's [attestation feature](../architecture/attestation.md) allows you, or a third party, to verify the integrity and confidentiality of your Constellation cluster.

## Fetch measurements

To verify the integrity of Constellation, you need trusted measurements to verify against. For each node image released by Edgeless Systems, there are signed measurements, which you can download using the CLI:

```bash
constellation config fetch-measurements
```

This command performs the following steps:

1. Download the signed measurements for the configured image. By default, this uses Edgeless Systems' public measurement registry.
2. Verify the signature of the measurements. This uses Edgeless Systems' [public key](https://edgeless.systems/es.pub).
3. Write the measurements into the configuration file.
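
Conceptually, step 2 is the same `cosign` check used for the CLI itself. A minimal sketch, assuming the measurements and their signature were saved as `measurements.yaml` and `measurements.yaml.sig` (illustrative file names, not the CLI's internal paths):

```bash
# Verify the downloaded measurements against Edgeless Systems' public key.
cosign verify-blob \
  --key https://edgeless.systems/es.pub \
  --signature measurements.yaml.sig \
  measurements.yaml
```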

## The *verify* command

:::note
The steps below are purely optional. They're automatically executed by `constellation init` when you initialize your cluster. The `constellation verify` command mostly serves an illustrative purpose.
:::

The `verify` command obtains and verifies an attestation statement from a running Constellation cluster.

```bash
constellation verify [--cluster-id ...]
```

From the attestation statement, the command verifies the following properties:

* The cluster is using the correct Confidential VM (CVM) type.
* Inside the CVMs, the correct node images are running. The node images are identified through the measurements obtained in the previous step.
* The unique ID of the cluster matches the one from your `constellation-id.json` file or the one passed in via `--cluster-id`.

Once the above properties are verified, you know that you're talking to the right Constellation cluster and that it's in a trustworthy state.

### Custom arguments

The `verify` command also allows you to verify any Constellation deployment that you have network access to. For this, you need the following (see the sketch after this list for one way to collect both values):

* The IP address of a running Constellation cluster's [VerificationService](../architecture/components.md#verification-service). The `VerificationService` is exposed via a `NodePort` service using the external IP address of your cluster. Run `kubectl get nodes -o wide` and look for `EXTERNAL-IP`.
* The cluster's *clusterID*. See [cluster identity](../architecture/keys.md#cluster-identity) for more details.
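
The following one-liners are a convenience sketch; the `clusterID` field name in `constellation-id.json` is assumed here for illustration:

```bash
# External IP of the first node (any node's EXTERNAL-IP reaches the NodePort service).
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}'

# Cluster ID from the ID file created at cluster creation
# (assumes the field is named "clusterID" in constellation-id.json).
jq -r '.clusterID' constellation-id.json
```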

For example:

```shell-session
$ constellation verify -e 192.0.2.1 --cluster-id Q29uc3RlbGxhdGlvbkRvY3VtZW50YXRpb25TZWNyZXQ=
```
229
docs/versioned_sidebars/version-2.1-sidebars.json
Normal file
@ -0,0 +1,229 @@
{
  "docs": [
    {
      "type": "doc",
      "label": "Introduction",
      "id": "intro"
    },
    {
      "type": "category",
      "label": "Basics",
      "link": {
        "type": "generated-index"
      },
      "items": [
        {
          "type": "doc",
          "label": "Confidential Kubernetes",
          "id": "overview/confidential-kubernetes"
        },
        {
          "type": "doc",
          "label": "Security benefits",
          "id": "overview/security-benefits"
        },
        {
          "type": "doc",
          "label": "Product features",
          "id": "overview/product"
        },
        {
          "type": "doc",
          "label": "Feature status of clouds",
          "id": "overview/clouds"
        },
        {
          "type": "doc",
          "label": "Performance",
          "id": "overview/performance"
        },
        {
          "type": "doc",
          "label": "License",
          "id": "overview/license"
        }
      ]
    },
    {
      "type": "category",
      "label": "Getting started",
      "link": {
        "type": "generated-index"
      },
      "items": [
        {
          "type": "doc",
          "label": "Installation",
          "id": "getting-started/install"
        },
        {
          "type": "doc",
          "label": "First steps",
          "id": "getting-started/first-steps"
        },
        {
          "type": "doc",
          "label": "Mini Constellation",
          "id": "getting-started/mini-constellation"
        },
        {
          "type": "category",
          "label": "Examples",
          "link": {
            "type": "doc",
            "id": "getting-started/examples"
          },
          "items": [
            {
              "type": "doc",
              "label": "Emojivoto",
              "id": "getting-started/examples/emojivoto"
            },
            {
              "type": "doc",
              "label": "Online Boutique",
              "id": "getting-started/examples/online-boutique"
            },
            {
              "type": "doc",
              "label": "Horizontal Pod Autoscaling",
              "id": "getting-started/examples/horizontal-scaling"
            }
          ]
        }
      ]
    },
    {
      "type": "category",
      "label": "Workflows",
      "link": {
        "type": "generated-index"
      },
      "items": [
        {
          "type": "doc",
          "label": "Verify the CLI",
          "id": "workflows/verify-cli"
        },
        {
          "type": "doc",
          "label": "Create your cluster",
          "id": "workflows/create"
        },
        {
          "type": "doc",
          "label": "Scale your cluster",
          "id": "workflows/scale"
        },
        {
          "type": "doc",
          "label": "Upgrade your cluster",
          "id": "workflows/upgrade"
        },
        {
          "type": "doc",
          "label": "Terminate your cluster",
          "id": "workflows/terminate"
        },
        {
          "type": "doc",
          "label": "Recover your cluster",
          "id": "workflows/recovery"
        },
        {
          "type": "doc",
          "label": "Verify your cluster",
          "id": "workflows/verify-cluster"
        },
        {
          "type": "doc",
          "label": "Manage SSH keys",
          "id": "workflows/ssh"
        },
        {
          "type": "doc",
          "label": "Use persistent storage",
          "id": "workflows/storage"
        },
        {
          "type": "doc",
          "label": "Use Azure trusted launch VMs",
          "id": "workflows/trusted-launch"
        },
        {
          "type": "doc",
          "label": "Troubleshooting",
          "id": "workflows/troubleshooting"
        }
      ]
    },
    {
      "type": "category",
      "label": "Architecture",
      "link": {
        "type": "generated-index"
      },
      "items": [
        {
          "type": "doc",
          "label": "Overview",
          "id": "architecture/overview"
        },
        {
          "type": "doc",
          "label": "Components",
          "id": "architecture/components"
        },
        {
          "type": "doc",
          "label": "Attestation",
          "id": "architecture/attestation"
        },
        {
          "type": "doc",
          "label": "Cluster orchestration",
          "id": "architecture/orchestration"
        },
        {
          "type": "doc",
          "label": "Versions and support",
          "id": "architecture/versions"
        },
        {
          "type": "doc",
          "label": "Images",
          "id": "architecture/images"
        },
        {
          "type": "doc",
          "label": "Keys and cryptographic primitives",
          "id": "architecture/keys"
        },
        {
          "type": "doc",
          "label": "Encrypted persistent storage",
          "id": "architecture/encrypted-storage"
        },
        {
          "type": "doc",
          "label": "Networking",
          "id": "architecture/networking"
        }
      ]
    },
    {
      "type": "category",
      "label": "Reference",
      "link": {
        "type": "generated-index"
      },
      "items": [
        {
          "type": "doc",
          "label": "CLI",
          "id": "reference/cli"
        }
      ]
    }
  ]
}
@ -1,3 +1,4 @@
[
  "2.1",
  "2.0"
]
6
package-lock.json
generated
Normal file
@ -0,0 +1,6 @@
{
  "name": "constellation",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {}
}