Add docs to repo (#38)

This commit is contained in:
Moritz Eckert 2022-09-02 11:52:42 +02:00 committed by GitHub
parent 50d3f3ca7f
commit b95f3dbc91
180 changed files with 13401 additions and 67 deletions

View File

@ -85,7 +85,12 @@ az ad sp create-for-rbac --name "github-actions-e2e-tests" --role contributor --
az role assignment create --role "User Access Administrator" --scope /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435 --assignee <SERVICE_PRINCIPAL_CLIENT_ID>
```
Next, [add API permissions to Managed Identity](https://github.com/edgelesssys/wiki/blob/master/other_tech/azure.md#adding-api-permission-to-managed-identity)
Next, add API permissions to Managed Identity:
* Not possible through portal; requires PowerShell
* <https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/grant-graph-api-permission-to-managed-identity-object/ba-p/2792127>
* `$GraphAppId` in this article is for Microsoft Graph. Azure AD Graph is `00000002-0000-0000-c000-000000000000`
* Note that changing permissions can take between a few seconds and several hours
Store output of `az ad sp ...` in [GitHub Action Secret](https://github.com/edgelesssys/constellation/settings/secrets/actions) or create a local secret file for act to consume.
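For example, the credentials can be stored with the `gh` CLI or kept in a local secrets file for `act`; the secret and file names below are placeholders:
```sh
# Assumed secret name; store the JSON output of `az ad sp ...` as a GitHub Actions secret
gh secret set AZURE_CREDENTIALS < azure_credentials.json
# Alternatively, keep the values in a local secrets file that act can consume
act --secret-file .secrets
```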

View File

@ -23,7 +23,7 @@ This checklist will prepare `v1.3.0` from `v1.2.0`. Adjust your version numbers
gh workflow run build-operator-manual.yml --ref release/v1.3.0 -F imageTag=v1.3.0
```
3. Review and update changelog with all changes since last release. [GitHub's diff view](https://github.com/edgelesssys/constellation/compare/v1.2.0...main) helps a lot!
4. Update versions [versions.go](../internal/versions/versions.go#L33-L39) to `v1.3.0` and **push your changes**.
4. Update versions [versions.go](../../internal/versions/versions.go#L33-L39) to `v1.3.0` and **push your changes**.
5. Create a [production coreOS image](/.github/workflows/build-coreos.yml)
```sh
gh workflow run build-coreos.yml --ref release/v1.3.0 -F debug=false -F coreOSConfigBranch=constellation

20
.github/workflows/check-links.yml vendored Normal file
View File

@ -0,0 +1,20 @@
name: Links
on:
  push:
    branches:
      - main
  pull_request:

jobs:
  linkChecker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Link Checker
        uses: lycheeverse/lychee-action@v1.3.0
        with:
          fail: true
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}

View File

@ -0,0 +1,19 @@
name: Create Pull Request for CLI reference update
on:
push:
branches:
- action/constellation/update-cli-reference
jobs:
pull-request:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: pull-request
uses: repo-sync/pull-request@v2
with:
destination_branch: "main"
pr_title: "Update CLI reference"
pr_body: |
:robot: *This is an automated PR.* :robot:
github_token: ${{ secrets.GITHUB_TOKEN }}

22
.github/workflows/docs-vale.yml vendored Normal file
View File

@ -0,0 +1,22 @@
name: Linting
on:
push:
branches:
- main
pull_request:
jobs:
prose:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@master
- name: Vale
uses: errata-ai/vale-action@v1
with:
files: docs/docs
env:
# Required, set by GitHub actions automatically:
# https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}

View File

@ -50,9 +50,9 @@ jobs:
API_TOKEN_GITHUB: ${{ secrets.CI_GITHUB_REPOSITORY }}
with:
source_file: 'cli.md'
destination_repo: 'edgelesssys/constellation-docs'
destination_repo: 'edgelesssys/constellation'
destination_branch_create: 'action/constellation/update-cli-reference'
destination_folder: 'docs/reference'
destination_folder: 'docs/docs/reference'
user_name: '${{ github.actor }}'
user_email: '${{ github.actor }}@users.noreply.github.com'
commit_message: 'CLI reference was updated by edgelesssys/constellation@${{ env.COMMIT_END}}'
@ -64,9 +64,9 @@ jobs:
API_TOKEN_GITHUB: ${{ secrets.CI_GITHUB_REPOSITORY }}
with:
source_file: 'cli.md'
destination_repo: 'edgelesssys/constellation-docs'
destination_repo: 'edgelesssys/constellation'
destination_branch: 'action/constellation/update-cli-reference'
destination_folder: 'docs/reference'
destination_folder: 'docs/docs/reference'
user_name: '${{ github.actor }}'
user_email: '${{ github.actor }}@users.noreply.github.com'
commit_message: 'CLI reference was updated by edgelesssys/constellation@${{ env.COMMIT_END}}'

11
.lycheeignore Normal file
View File

@ -0,0 +1,11 @@
file:///github/workspace/.+/\$%7Bhref%7D
file:///github/workspace/.+/.+/_media/
file:///github/workspace/LICENSE
http://localhost.*
https://github.com/edgelesssys/constellation.*
https://github.com/edgelesssys/edg-gcp-compute-persistent-disk-csi-driver
https://github.com/edgelesssys/edg-azuredisk-csi-driver
http://my.storage/measurements.yaml
http://my.storage/measurements.yaml.sig
http://php-apache
https://www.linkedin.com.*

18
.vale.ini Normal file
View File

@ -0,0 +1,18 @@
StylesPath = docs/styles
Vocab = constellation
# IgnoredScopes specifies inline-level HTML tags to ignore.
# These tags may occur in an active scope (unlike SkippedScopes, skipped entirely) but their content still will not raise any alerts.
# Default: ignore `code` and `tt`.
IgnoredScopes = code, tt, img
[*.md]
BasedOnStyles = Vale, Microsoft, Google
# decrease to suggestion
Microsoft.Foreign = suggestion # conflicts with Microsoft.Contractions
Microsoft.HeadingAcronyms = suggestion # doesn't consider well-known ones
# increase to warning
Microsoft.OxfordComma = warning
Microsoft.SentenceLength = warning

View File

@ -1,6 +1,6 @@
## First steps
Thank you for getting involved! Before you start, please familiarize yourself with the [documentation](https://docs.edgeless.systems/constellation/latest).
Thank you for getting involved! Before you start, please familiarize yourself with the [documentation](https://docs.edgeless.systems/constellation).
Please follow our [Code of Conduct](CODE_OF_CONDUCT.md) when interacting with this project.
@ -10,7 +10,7 @@ If you want to support our development:
* Share our projects on social media
* Join the [Confidential Computing Discord](https://discord.gg/rH8QTH56JN)
Constellation is licensed under the [TODO](LICENSE). When contributing, you also need to agree to our [Contributor License Agreement](https://cla-assistant.io/edgelesssys/constellation).
Constellation is licensed under the [AGPL](LICENSE). When contributing, you also need to agree to our [Contributor License Agreement](https://cla-assistant.io/edgelesssys/constellation).
## Development guidelines
@ -72,7 +72,6 @@ Development components:
* [proto](proto): Proto files generator
* [terraform](terraform): Infrastructure management using terraform (instead of `constellation create/destroy`)
* [libvirt](terraform/libvirt): Deploy local cluster using terraform, libvirt and QEMU
* [test](test): Integration test
Additional repositories:

View File

@ -2,7 +2,7 @@
<b>⭐ Star us on GitHub — it motivates us a lot!</b>
</p>
![](docs/constellation-header.png)
![](docs/static/img/constellation-header.png)
<h1 align="center">Welcome to Constellation!</h1>
@ -42,9 +42,9 @@ From the inside, it's a fully featured, [certified] Kubernetes engine. From the
Constellation is open source and enterprise-ready, tailored for unleashing the power of confidential computing for all your workloads at scale.
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/product-overview-dark.png">
<source media="(prefers-color-scheme: light)" srcset="docs/product-overview.png">
<img alt="Constellation product overview" src="docs/product-overview.png">
<source media="(prefers-color-scheme: dark)" srcset="docs/docs/_media/product-overview-dark.png">
<source media="(prefers-color-scheme: light)" srcset="docs/docs/_media/product-overview.png">
<img alt="Constellation product overview" src="docs/docs/_media/product-overview.png">
</picture>
For a brief introduction to the Confidential Kubernetes concept, read the [introduction][confidential-kubernetes].
@ -85,7 +85,7 @@ For more elaborate overviews of Constellation, see the [architecture] documentat
## 🚀 Getting started
![Constellation Shell](docs/constellation-shell-windowframe.svg)
![Constellation Shell](docs/static/img/constellation-shell-windowframe.svg)
Sounds great, how can I try this?

View File

@ -37,7 +37,7 @@ $ helm install cilium cilium/cilium --namespace=kube-system
```
After Cilium is installed, you can explore the features that Cilium has to
offer from the [Getting Started Guides page](https://docs.cilium.io/en/latest/gettingstarted/).
offer from the [Getting Started Guides page](https://docs.cilium.io/en/latest/gettingstarted/k8s-install-default/#next-steps).
## Source Code

4
docs/.gitignore vendored Normal file
View File

@ -0,0 +1,4 @@
node_modules
package-lock.json
.docusaurus
build/

31
docs/README.md Normal file
View File

@ -0,0 +1,31 @@
# Constellation Documentation
Published @ <https://docs.edgeless.systems/constellation> via `netlify`.
## Previewing
During edits, you can preview your changes using [Docusaurus](https://docusaurus.io/docs/installation):
```sh
# requires node >=16.14
npm install
npm run build
npm run serve
```
Browse to <http://localhost:3000/constellation>
## Release process
1. [Tagging a new version](https://docusaurus.io/docs/next/versioning#tagging-a-new-version)
```shell
npm run docusaurus docs:version X.X
```
When tagging a new version, the document versioning mechanism will:
* Copy the full `docs/` folder contents into a new `versioned_docs/version-[versionName]/` folder.
* Create a versioned sidebars file based on your current sidebar configuration (if it exists), saved as `versioned_sidebars/version-[versionName]-sidebars.json`.
* Append the new version number to `versions.json`.

3
docs/babel.config.js Normal file
View File

@ -0,0 +1,3 @@
module.exports = {
  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};

(Several binary image and SVG files are added or updated in this commit, including `docs/docs/_media/tcb.svg`; image previews aren't shown here.)

View File

@ -0,0 +1,237 @@
# Attestation
This page explains Constellation's attestation process and highlights the cornerstones of its trust model.
## Terms
The following lists a few terms and concepts that help you understand the attestation concept of Constellation.
### Trusted Platform Module (TPM)
A TPM chip is a dedicated tamper-resistant crypto-processor.
It can securely store artifacts such as passwords, certificates, encryption keys, or *runtime measurements* (more on this below).
When a TPM is implemented in software, it's typically called a *virtual* TPM (vTPM).
### Runtime measurement
A runtime measurement is a cryptographic hash of the memory pages of a so-called *runtime component*. Runtime components of interest typically include a system's bootloader or OS kernel.
### Platform Configuration Register (PCR)
A Platform Configuration Register (PCR) is a memory location in the TPM that has some unique properties.
To store a new value in a PCR, the existing value is extended with a new value as follows:
```
PCR[N] = HASHalg( PCR[N] || ArgumentOfExtend )
```
The PCRs are typically used to store runtime measurements.
The new value of a PCR is always an extension of the existing value.
Thus, storing the measurements of multiple components into the same PCR irreversibly links them together.
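As an illustration only (not Constellation's actual implementation), the extend operation can be reproduced with standard shell tools, assuming SHA-256 and hex-encoded values:
```sh
# Illustration: extend a PCR value with the SHA-256 digest of a component.
# PCR[N] = SHA256( PCR[N] || measurement ), with all values handled as raw bytes.
old_pcr="0000000000000000000000000000000000000000000000000000000000000000"
measurement=$(sha256sum component.bin | cut -d' ' -f1)
new_pcr=$(printf '%s%s' "$old_pcr" "$measurement" | xxd -r -p | sha256sum | cut -d' ' -f1)
echo "$new_pcr"
```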
### Measured boot
Measured boot builds on the concept of chained runtime measurements.
Each component in the boot-chain loads and measures the next component into the PCR before executing it.
By comparing the resulting PCR values against trusted reference values, the integrity of the entire boot chain, and thereby of the running system, can be ensured.
### Remote attestation (RA)
Remote attestation is the process of verifying certain properties of an application or platform, such as integrity and confidentiality, from a remote location.
In the case of a measured boot, the goal is to obtain a signed attestation statement on the PCR values of the boot measurements.
The statement can then be verified and compared to a set of trusted reference values.
This way, the integrity of the platform can be ensured before sharing secrets with it.
### Confidential virtual machine (CVM)
Confidential computing (CC) is the protection of data in-use with hardware-based trusted execution environments (TEEs).
With CVMs, TEEs encapsulate entire virtual machines and isolate them against the hypervisor, other VMs, and direct memory access.
After loading the initial VM image into encrypted memory, the hypervisor calls for a secure processor to measure these initial memory pages.
The secure processor locks these pages and generates an attestation report on the initial page measurements.
CVM memory pages are encrypted with a key that resides inside the secure processor, which makes sure only the guest VM can access them.
The attestation report is signed by the secure processor and can be verified using remote attestation via the certificate authority of the hardware vendor.
Such an attestation statement guarantees the confidentiality and integrity of a CVM.
### Attested TLS (aTLS)
In a CC environment, attested TLS (aTLS) can be used to establish secure connections between two parties utilizing the remote attestation features of the CC components.
aTLS modifies the TLS handshake by embedding an attestation statement into the TLS certificate.
Instead of relying on a certificate authority, aTLS uses this attestation statement to establish trust in the certificate.
The protocol can be used by clients to verify a server certificate, by a server to verify a client certificate, or for mutual verification (mutual aTLS).
## Overview
The challenge for Constellation is to lift a CVM's attestation statement to the Kubernetes software layer and make it end-to-end verifiable.
From there, Constellation needs to expand the attestation from a single CVM to the entire cluster.
The [*JoinService*](components.md#joinservice) and [*VerificationService*](components.md#verificationservice) are where it all comes together.
Internally, the *JoinService* uses remote attestation to securely join CVM nodes to the cluster.
Externally, the *VerificationService* provides an attestation statement for the cluster's CVMs and configuration.
The following explains the details of both steps.
## Node attestation
The idea is that Constellation nodes should have verifiable integrity from the CVM hardware measurement up to the Kubernetes software layer.
The solution is a verifiable boot chain and an integrity-protected runtime environment.
Constellation uses measured boot within CVMs, measuring each component in the boot process before executing it.
Outside of CC, it's usually implemented via TPMs.
CVM technologies differ in how they implement runtime measurements, but the general concepts are similar to those of a TPM.
For simplicity, we use TPM terminology like *PCR* in the following.
When a Constellation node image boots inside a CVM, it uses measured boot for all stages and components of the boot chain.
This process goes up to the root filesystem.
The root filesystem is mounted read-only with integrity protection, guaranteeing forward integrity.
For the details on the image and boot stages see the [image architecture](../architecture/images.md) documentation.
Any changes to the image will inevitably also change the measured boot's PCR values.
To create a node attestation statement, the Constellation image obtains a CVM attestation statement from the hardware.
This includes the runtime measurements and thereby binds the measured boot results to the CVM hardware measurement.
In addition to the image measurements, Constellation extends a PCR during the [initialization phase](../workflows/create.md#the-init-step) that irrevocably marks the node as initialized.
The measurement is created using the [*clusterID*](../architecture/keys.md#cluster-identity), tying all future attestation statements to this ID.
Thereby, an attestation statement is unique for every cluster and a node can be identified unambiguously as being initialized.
To verify an attestation, the hardware's signature over the statement is verified first to establish trust in the contained runtime measurements.
If successful, the measurements are verified against the trusted values of the particular Constellation release version.
Finally, the measurement of the *clusterID* can be compared by calculating it with the [master secret](keys.md#master-secret).
### Runtime measurements
Constellation utilizes runtime measurement to implement the measured boot approach.
As stated above, the underlying hardware technology and guest firmware differ in their implementations of runtime measurements.
The following gives a detailed description of the available measurements in the different cloud environments.
The runtime measurements contain two types of values.
First are the values produced by the cloud infrastructure and firmware of the CVM.
Depending on the cloud environment they can contain non-reproducible values.
Those correspond to measurements of closed-source firmware components and other values controlled by the cloud provider.
While not being directly verifiable, they can be compared against previously observed values.
As part of the [signed image measurements](#chain-of-trust), Constellation provides measurements that are known, previously observed values.
Thereby, Constellation enables users to identify changes and deviations and allows them to act accordingly.
See how to [fetch](../workflows/verify.md#fetch-measurements) the latest measurements and verify a cluster.
Second are the measurements produced by the Constellation bootloader and boot chain itself.
The Constellation Bootloader is the first part of the Constellation stack that takes over from the CVM firmware and measures the rest of the boot chain.
Refer to [images](images.md) for more details on the Constellation boot chain.
The Constellation [Bootstrapper](components.md#bootstrapper) is the first user mode component that runs in a Constellation image.
It extends PCR registers with the [IDs](keys.md#cluster-identity) of the cluster marking a node as initialized.
Constellation lets you specify in the [config](../reference/config.md) which measurements should be enforced during the attestation process.
Enforcing non-reproducible measurements controlled by the cloud provider means that changes in these values require manual updates to the cluster's config.
By default, Constellation only enforces measurements that are stable values produced by the infrastructure or by Constellation directly.
<tabs>
<tabItem value="azure" label="Azure" default>
Constellation leverages the [vTPM](https://docs.microsoft.com/en-us/azure/virtual-machines/trusted-launch#vtpm) feature of Azure CVMs for runtime measurements.
The vTPM on Azure CVMs adheres to the [TPM 2.0](https://trustedcomputinggroup.org/resource/tpm-library-specification/) specification of the [Trusted Computing Group](https://trustedcomputinggroup.org/resource/trusted-platform-module-tpm-summary/).
It provides a [measured boot](https://docs.microsoft.com/en-us/azure/security/fundamentals/measured-boot-host-attestation#measured-boot) verification that's based on the trusted launch feature of [Trusted Launch VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/trusted-launch).
The following table lists all PCR values of the vTPM and the measured components.
It also lists which components of the boot chain performed the measurements and whether the value is reproducible and verifiable.
The latter means the value can be generated offline and compared to the one in the vTPM.
| PCR | Components | Measured by | Reproducible and Verifiable |
|---------------|-------------------------------------|---------------------------------|-----------------------------|
| 0 | Firmware | Azure | No |
| 1 | Firmware | Azure | No |
| 2 | Firmware | Azure | No |
| 3 | Firmware | Azure | No |
| 4 | Constellation Bootloader, GRUB | Azure, Constellation Bootloader | Yes |
| 5 | Reserved | Azure | No |
| 6 | VM Unique ID | Azure | No |
| 7 | Secure Boot State | Azure, Constellation Bootloader | No |
| 8 | Kernel command line, GRUB config | Constellation Bootloader | Yes |
| 9 | Kernel, initramfs | Constellation Bootloader | Yes |
| 10 | Reserved | - | Yes |
| 11 | Reserved | Constellation Bootstrapper | Yes |
| 12 | ClusterID | Constellation Bootstrapper | Yes |
| 13&ndash;23 | Unused | - | - |
</tabItem>
<tabItem value="gcp" label="GCP" default>
Constellation leverages the [vTPM](https://cloud.google.com/compute/confidential-vm/docs/about-cvm) feature of CVMs on GCP for runtime measurements.
Note that the vTPM in GCP doesn't run inside the hardware-protected CVM context, but is emulated on the hypervisor level.
The vTPM on GCP CVMs adheres to the [TPM 2.0](https://trustedcomputinggroup.org/resource/tpm-library-specification/) specification of the [Trusted Computing Group](https://trustedcomputinggroup.org/resource/trusted-platform-module-tpm-summary/).
It provides a [launch attestation report](https://cloud.google.com/compute/confidential-vm/docs/monitoring#about_launch_attestation_report_events) that's based on the Measured Boot feature of [Shielded VMs](https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#measured-boot).
The following table lists all PCR values of the vTPM and the measured components.
It also lists which components of the boot chain performed the measurements and whether the value is reproducible and verifiable.
The latter means the value can be generated offline and compared to the one in the vTPM.
| PCR | Components | Measured by | Reproducible and Verifiable |
|---------------|----------------------------------|-------------------------------|-----------------------------|
| 0 | CVM constant string | GCP | No |
| 1 | Reserved | GCP | No |
| 2 | Reserved | GCP | No |
| 3 | Reserved | GCP | No |
| 4 | Constellation Bootloader, GRUB | GCP, Constellation Bootloader | Yes |
| 5 | Disk GUID partition table | GCP | No |
| 6 | Disk GUID partition table | GCP | No |
| 7 | GCP Secure Boot Policy | GCP, Constellation Bootloader | No |
| 8 | Kernel command line, GRUB config | Constellation Bootloader | Yes |
| 9 | Kernel, initramfs | Constellation Bootloader | Yes |
| 10 | Reserved | Constellation Bootstrapper | Yes |
| 11 | Reserved | Constellation Bootstrapper | Yes |
| 12 | ClusterID | Constellation Bootstrapper | Yes |
| 13&ndash;23 | Unused |- | - |
</tabItem>
</tabs>
## Cluster attestation
Cluster-facing, Constellation's [*JoinService*](components.md#joinservice) verifies each node joining the cluster given the [configured](../reference/config.md) ground truth runtime measurements.
User-facing, the [*VerificationService*](components.md#verificationservice) provides an interface to verify a node using remote attestation.
By verifying the first node during the [initialization](components.md#bootstrapper) and configuring the ground truth measurements that are subsequently enforced by the *JoinService*, the whole cluster is verified in a transitive way.
### Cluster-facing attestation
The *JoinService* is provided with the runtime measurements of the whitelisted Constellation image version as the ground truth.
During the initialization and the cluster bootstrapping, each node connects to the *JoinService* using [aTLS](#attested-tls-atls).
During the handshake, the node transmits an attestation statement including its runtime measurements.
The *JoinService* verifies that statement and compares the measurements against the ground truth.
For details of the initialization process check the [component descriptions](components.md).
After the initialization, every node updates its runtime measurements with the *clusterID* value, marking it irreversibly as initialized.
When an initialized node tries to join another cluster, its measurements inevitably mismatch the measurements of an uninitialized node and it will be declined.
### User-facing attestation
The [*VerificationService*](components.md#verificationservice) provides an endpoint for obtaining its hardware-based remote attestation statement, which includes the runtime measurements.
A user can [verify](../workflows/verify.md) this statement and compare the measurements against the configured ground truth and, thus, verify the identity and integrity of all Constellation components and the cluster configuration. Subsequently, the user knows that the entire cluster is in the expected state and is trustworthy.
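As a sketch, the user-facing verification flow with the CLI looks roughly like this; the exact commands and flags are described in the linked workflow page and may differ between versions:
```sh
# Sketch of verifying a cluster from the CLI (see ../workflows/verify.md for the authoritative steps)
constellation config fetch-measurements   # fetch and verify the signed reference measurements
constellation verify                      # request and check the cluster's attestation statement
```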
## Chain of trust
So far, this page described how an entire Constellation cluster can be verified using hardware attestation capabilities and runtime measurements.
The last missing link is how the ground truth in the form of runtime measurements can be securely distributed to the verifying party.
Building a Constellation image entails creating the ground-truth runtime measurements for that image.
The builds of Constellation images are reproducible and the measurements of an image can be recalculated and verified by everyone.
With every release, Edgeless Systems publishes signed runtime measurements.
When installing the Constellation CLI, the release binary is also signed by Edgeless Systems.
The [installation guide](../architecture/orchestration.md#verify-your-cli-installation) explains how this signature can be verified.
The CLI contains the public key required to verify signed runtime measurements from Edgeless Systems.
When a new cluster is [created](../workflows/create.md) or updated, the CLI automatically verifies the measurements for the selected image.
Alternatively, Constellation has the option to use and verify custom build images.
For this, the CLI can be provided with a custom public key for verification.
Thus, we have a chain of trust based on cryptographic signatures, which goes from CLI to runtime measurements to images. This is illustrated in the following diagram.
```mermaid
flowchart LR
A[Edgeless]-- "signs (cosign)" -->B[CLI]
C[User]-- "verifies (cosign)" -->B[CLI]
B[CLI]-- "contains" -->D["Public Key"]
A[Edgeless]-- "signs" -->E["Runtime measurements"]
D["Public Key"]-- "verifies" -->E["Runtime measurements"]
E["Runtime measurements"]-- "verify" -->F["Constellation cluster"]
```
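As a sketch of what these signature checks look like in practice (file names are placeholders; see the installation guide for the exact artifacts):
```sh
# Hypothetical file names; cosign verifies the CLI binary and the signed measurements
cosign verify-blob --key cosign.pub --signature constellation.sig constellation
cosign verify-blob --key cosign.pub --signature measurements.yaml.sig measurements.yaml
```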

View File

@ -0,0 +1,81 @@
# Components
Constellation takes care of bootstrapping and initializing a Confidential Kubernetes cluster.
During the lifetime of the cluster, it handles day 2 operations such as key management, remote attestation, and updates.
These features are provided by several components:
* The [Bootstrapper](components.md#bootstrapper) initializes a Constellation node and bootstraps the cluster
* The [JoinService](components.md#joinservice) joins new nodes to an existing cluster
* The [VerificationService](components.md#verificationservice) provides remote attestation functionality
* The [Key Management Service (KMS)](components.md#kms) manages Constellation-internal keys
* The [AccessManager](components.md#accessmanager) manages node SSH access
The relations between components are shown in the following diagram:
```mermaid
flowchart LR
subgraph admin [Admin's machine]
A[Constellation CLI]
end
subgraph img [CoreOS image]
B[CoreOS]
C[Bootstrapper]
end
subgraph Kubernetes
D[AccessManager]
E[JoinService]
F[KMS]
G[VerificationService]
end
A -- deploys --> B
B -- starts --> C
C -- deploys --> D
C -- deploys --> E
C -- deploys --> F
C -- deploys --> G
```
## Bootstrapper
The *Bootstrapper* is the first component launched after booting a Constellation node image.
It sets up that machine as a Kubernetes node and integrates that node into the Kubernetes cluster.
To this end, the *Bootstrapper* first downloads and [verifies](https://blog.sigstore.dev/kubernetes-signals-massive-adoption-of-sigstore-for-protecting-open-source-ecosystem-73a6757da73) the [Kubernetes components](https://kubernetes.io/docs/concepts/overview/components/) at the [configured](../reference/config.md) versions.
The *Bootstrapper* tries to find an existing cluster and if successful, communicates with the [JoinService](components.md#joinservice) to join the node.
Otherwise, it waits for an initialization request to create a new Kubernetes cluster.
## JoinService
The *JoinService* runs as [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) on each control-plane node.
New nodes (at cluster start, or later through autoscaling) send a request to the service over [attested TLS (aTLS)](attestation.md#attested-tls-atls).
The *JoinService* verifies the new node's certificate and attestation statement.
If attestation is successful, the new node is supplied with an encryption key from the [*KMS*](components.md#kms) for its state disk, and a Kubernetes bootstrap token.
```mermaid
sequenceDiagram
participant New node
participant JoinService
New node->>JoinService: aTLS handshake (server side verification)
JoinService-->>New node: #
New node->>+JoinService: IssueJoinTicket(DiskUUID, NodeName, IsControlPlane)
JoinService->>+KMS: GetDataKey(DiskUUID)
KMS-->>-JoinService: DiskEncryptionKey
JoinService-->>-New node: DiskEncryptionKey, KubernetesJoinToken, ...
```
## VerificationService
The *VerificationService* runs as DaemonSet on each node.
It provides user-facing functionality for remote attestation during the cluster's lifetime via an endpoint for [verifying the cluster](attestation.md#cluster-attestation).
Read more about the hardware-based [attestation feature](attestation.md) of Constellation and how to [verify](../workflows/verify.md) a cluster on the client side.
## KMS
The *KMS* runs as DaemonSet on each control-plane node.
It implements the key management for the [storage encryption keys](keys.md#storage-encryption) in Constellation. These keys are used for the [state disk](images.md#state-disk) of each node and the [transparently encrypted storage](encrypted-storage.md) for Kubernetes.
Depending on whether the [constellation-managed](keys.md#constellation-managed-key-management) or [user-managed](keys.md#user-managed-key-management) mode is used, the *KMS* either holds the key encryption key (KEK) directly or calls an external service for key derivation.
## AccessManager
The *AccessManager* runs as DaemonSet on each node.
It manages the user's SSH access to nodes as specified in the [configuration](../reference/config.md).

View File

@ -0,0 +1,57 @@
# Encrypted persistent storage
Confidential VMs provide runtime memory encryption to protect data in use.
In the context of Kubernetes, this is sufficient for the confidentiality and integrity of stateless services.
Consider a front-end web server, for example, that keeps all connection information cached in main memory.
No sensitive data is ever written to an insecure medium.
However, many real-world applications need some form of state or data-lake service that's connected to a persistent storage device and requires encryption at rest.
As described in [Use persistent storage](../workflows/storage.md), cloud service providers (CSPs) use the container storage interface (CSI) to make their storage solutions available to Kubernetes workloads.
These CSI storage solutions often support some sort of encryption.
For example, Google Cloud [encrypts data at rest by default](https://cloud.google.com/security/encryption/default-encryption), without any action required by the customer.
## Cloud provider-managed encryption
CSP-managed storage solutions encrypt the data in the cloud backend before writing it physically to disk.
In the context of confidential computing and Constellation, the CSP and its managed services aren't trusted.
Hence, cloud provider-managed encryption protects your data from offline hardware access to physical storage devices.
It doesn't protect it from anyone with infrastructure-level access to the storage backend or a malicious insider in the cloud platform.
Even with "bring your own key" or similar concepts, the CSP performs the encryption process with access to the keys and plaintext data.
In the security model of Constellation, securing persistent storage and thereby data at rest requires that all cryptographic operations are performed inside a trusted execution environment.
Consequently, using CSP-managed encryption of persistent storage usually isn't an option.
## Constellation-managed encryption
Constellation provides CSI drivers for storage solutions in all major clouds with built-in encryption support.
Block storage provisioned by the CSP is [mapped](https://guix.gnu.org/manual/en/html_node/Mapped-Devices.html) using the [dm-crypt](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-crypt.html), and optionally the [dm-integrity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-integrity.html), kernel modules, before it's formatted and accessed by the Kubernetes workloads.
All cryptographic operations happen inside the trusted environment of the confidential Constellation node.
Please note that for integrity-protected disks, [volume expansion](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/) isn't supported.
By default the driver uses data encryption keys (DEKs) issued by the Constellation [*KMS*](components.md#kms).
The DEKs are in turn derived from the Constellation's key encryption key (KEK), which is directly derived from the [master secret](keys.md#master-secret).
This is the recommended mode of operation, and also requires the least amount of setup by the cluster administrator.
Alternatively, the driver can be configured to use a key management system to store and access KEKs and DEKs.
Please refer to [keys and cryptography](keys.md) for more details on key management in Constellation.
Once deployed and configured, the CSI driver ensures transparent encryption and integrity of all persistent volumes provisioned via its storage class.
Data at rest is secured without any additional actions required by the developer.
## Cryptographic algorithms
This section gives an overview of the libraries, cryptographic algorithms, and their configurations, used in Constellation's CSI drivers.
### dm-crypt
To interact with the dm-crypt kernel module, Constellation uses [libcryptsetup](https://gitlab.com/cryptsetup/cryptsetup/).
New devices are formatted as [LUKS2](https://gitlab.com/cryptsetup/LUKS2-docs/-/tree/master) partitions with a sector size of 4096 bytes.
The used key derivation function is [Argon2id](https://datatracker.ietf.org/doc/html/rfc9106) with the [recommended parameters for memory-constrained environments](https://datatracker.ietf.org/doc/html/rfc9106#section-7.4) of 3 iterations and 64 MiB of memory, utilizing 4 parallel threads.
For encryption Constellation uses AES in XTS-Plain64. The key size is 512 bit.
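For illustration, an equivalent `cryptsetup` invocation with these parameters might look as follows; Constellation calls libcryptsetup directly, so this is only a sketch:
```sh
# Sketch: LUKS2 format with the parameters described above (not Constellation's actual code path)
cryptsetup luksFormat --type luks2 \
  --cipher aes-xts-plain64 --key-size 512 \
  --sector-size 4096 \
  --pbkdf argon2id --pbkdf-memory 65536 --pbkdf-parallel 4 --pbkdf-force-iterations 3 \
  /dev/mapper/example-device
```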
### dm-integrity
To interact with the dm-integrity kernel module, Constellation uses [libcryptsetup](https://gitlab.com/cryptsetup/cryptsetup/).
When enabled, the used data integrity algorithm is [HMAC](https://datatracker.ietf.org/doc/html/rfc2104) with SHA256 as the hash function.
The tag size is 32 bytes.

View File

@ -0,0 +1,45 @@
# Constellation images
Constellation uses [Fedora CoreOS](https://docs.fedoraproject.org/en-US/fedora-coreos/) as the operating system running inside confidential VMs. This Linux distribution is optimized for containers and is designed to have an immutable filesystem.
The Constellation images extend on that concept by leveraging measured boot and verification of the root filesystem.
## Measured boot
```mermaid
flowchart LR
Firmware --> Bootloader
Bootloader --> kernel
Bootloader --> initramfs
initramfs --> rootfs[root filesystem]
```
Measured boot uses a Trusted Platform Module (TPM) to measure every part of the boot process. This allows for verification of the integrity of a running system at any point in time. To ensure correct measurements of every stage, each stage is responsible to measure the next stage before transitioning.
### Firmware
With confidential VMs, the firmware is the root of trust and is measured automatically at boot. After initialization, the firmware will load and measure the bootloader before executing it.
### Bootloader
The bootloader is the first modifiable part of the boot chain. It's tasked with loading the kernel and initramfs and with setting the kernel command line. The Constellation bootloader measures these components before starting the kernel.
### initramfs
The initramfs is a small filesystem loaded to prepare the actual root filesystem. The Constellation initramfs maps the block device containing the root filesystem with [dm-verity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/verity.html). The initramfs then mounts the root filesystem from the mapped block device.
dm-verity provides integrity checking using a cryptographic hash tree. When a block is read, its integrity is checked by verifying the tree against a trusted root hash. The initramfs reads this root hash from the previously measured kernel command line. Thus, if any block of the root filesystem's device is modified on disk, trying to read the modified block will result in a kernel panic at runtime.
After mounting the root filesystem, the initramfs will switch over and start the `init` process of the integrity-protected root filesystem.
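Conceptually, the initramfs performs something along the lines of the following sketch; the device names, parameter source, and mapping are placeholders, not the actual implementation:
```sh
# Conceptual sketch of the dm-verity mapping done in the initramfs
roothash="<root hash taken from the measured kernel command line>"
veritysetup open /dev/disk/by-partlabel/root rootfs /dev/disk/by-partlabel/verity_hash "$roothash"
mount -o ro /dev/mapper/rootfs /sysroot
```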
## State disk
In addition to the read-only root filesystem, each Constellation node has a disk for storing state data.
This disk is mounted readable and writable by the initramfs and contains data that should persist across reboots.
Such data can contain sensitive information and, therefore, must be stored securely.
To that end, the state disk is protected by authenticated encryption.
See the section on [keys and encryption](keys.md#storage-encryption) for more information on the cryptographic primitives in use.
## Kubernetes components
During initialization, the [*Bootstrapper*](components.md#bootstrapper) downloads and [verifies](https://blog.sigstore.dev/kubernetes-signals-massive-adoption-of-sigstore-for-protecting-open-source-ecosystem-73a6757da73) the [Kubernetes components](https://kubernetes.io/docs/concepts/overview/components/) as [configured](../reference/config.md) by the user.
They're stored on the state partition and can be updated once new releases need to be installed.

View File

@ -0,0 +1,127 @@
# Key management and cryptographic primitives
Constellation protects and isolates your cluster and workloads.
To that end, cryptography is the foundation that ensures the confidentiality and integrity of all components.
Evaluating the security and compliance of Constellation requires a precise understanding of the cryptographic primitives and keys used.
The following gives an overview of the architecture and explains the technical details.
## Confidential VMs
Confidential VM (CVM) technology comes with hardware and software components for memory encryption, isolation, and remote attestation.
For details on the implementations and cryptographic soundness please refer to the hardware vendors' documentation and advisories.
## Master secret
The master secret is the cryptographic material used for deriving the [*clusterID*](#cluster-identity) and the *key encryption key (KEK)* for [storage encryption](#storage-encryption).
It's generated during the bootstrapping of a Constellation cluster.
It can either be managed by [Constellation](#constellation-managed-key-management) or an [external key management system](#user-managed-key-management).
In case of [recovery](#recovery-and-migration), the master secret can be used to decrypt the state and recover a Constellation cluster.
## Cluster identity
The identity of a Constellation cluster consists of two parts:
* *baseID:* The identity of a valid and measured, uninitialized Constellation node
* *clusterID:* The identity unique to a single initialized Constellation cluster
Using the CVM's attestation mechanism and [measured boot up to the read-only root filesystem](images.md) guarantees *baseID*.
The *clusterID* is derived from the master secret and a cryptographically random salt. It's unique for every Constellation cluster.
The remote attestation statement of a Constellation cluster combines *baseID* and *clusterID* for a verifiable, unspoofable, unique identity.
## Network encryption
Constellation encrypts all cluster network communication using the [container network interface (CNI)](https://github.com/containernetworking/cni).
See [network encryption](networking.md) for more details.
The Cilium agent running on each cluster node will establish a secure WireGuard tunnel between it and all other known nodes in the cluster.
Each node automatically creates its own [Curve25519](http://cr.yp.to/ecdh.html) encryption key-pair and distributes its public key via Kubernetes.
Each node's public key is then used by the other nodes to encrypt and decrypt traffic from and to Cilium-managed endpoints running on that node.
Connections are always encrypted peer-to-peer using [ChaCha20](http://cr.yp.to/chacha.html) with [Poly1305](http://cr.yp.to/mac.html).
WireGuard implements [forward secrecy with key rotation every 2 minutes](https://lists.zx2c4.com/pipermail/wireguard/2017-December/002141.html).
Cilium supports [key rotation](https://docs.cilium.io/en/stable/gettingstarted/encryption-ipsec/#key-rotation) for the long-term node keys via Kubernetes secrets.
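To inspect the encryption state of a running cluster, you can query a Cilium agent; this is a hedged example and the exact output depends on the Cilium version and deployment:
```sh
# Query a Cilium agent pod for its encryption status (assumes the default kube-system DaemonSet)
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i encryption
```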
## Storage encryption
Constellation supports transparent encryption of persistent storage.
The Linux kernel's device mapper-based encryption features are used to encrypt the data on the block storage level.
Currently, the following primitives are used for block storage encryption:
* [dm-crypt](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-crypt.html)
* [dm-integrity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-integrity.html)
Primitives for integrity protection in the CVM attacker model are under active development and will be available in a future version of Constellation.
See [encrypted storage](encrypted-storage.md) for more details.
As a cluster administrator, when creating a cluster, you can use the Constellation [installation program](orchestration.md) to select one of the following methods for key management:
* Constellation-managed key management
* User-managed key management
### Constellation-managed key management
#### Key material and key derivation
During the creation of a Constellation, the cluster's master secret is used to derive a KEK.
This means creating two clusters with the same master secret will yield the same KEK.
Any data encryption key (DEK) is derived from the KEK via HKDF.
Note that the master secret is recommended to be unique for every Constellation cluster and shouldn't be reused (except in case of [recovering](../workflows/recovery.md) a cluster).
#### State and storage
The KEK is derived from the master secret during the initialization.
Subsequently, all other key material is derived from the KEK.
Given the same KEK, any DEK can be derived deterministically from a given identifier.
Hence, there is no need to store DEKs. They can be derived on demand.
After the KEK was derived, it's stored in memory only and never leaves the CVM context.
#### Availability
Constellation-managed key management has the same availability as the underlying Kubernetes cluster.
Therefore, the KEK is stored in the [distributed Kubernetes etcd storage](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/) to allow for unexpected but non-fatal (control-plane) node failure.
The etcd storage is backed by the encrypted and integrity protected [state disk](images.md#state-disk) of the nodes.
#### Recovery
Constellation clusters can be recovered in the event of a disaster, even when all node machines have been stopped and need to be rebooted.
For details on the process see the [recovery workflow](../workflows/recovery.md).
### User-managed key management
User-managed key management is under active development and will be available soon.
In scenarios where constellation-managed key management isn't an option, this mode allows you to keep full control of your keys.
For example, compliance requirements may force you to keep your KEKs in an on-prem key management system (KMS).
During the creation of a Constellation, you specify a KEK present in a remote KMS.
This follows the common scheme of "bring your own key" (BYOK).
Constellation will support several KMSs for managing the storage and access of your KEK.
Initially, it will support the following KMSs:
* [AWS KMS](https://aws.amazon.com/kms/)
* [GCP KMS](https://cloud.google.com/security-key-management)
* [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/#product-overview)
* [KMIP-compatible KMS](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=kmip)
Storing the keys in Cloud KMS of AWS, GCP, or Azure binds the key usage to the particular cloud identity access management (IAM).
In the future, Constellation will support remote attestation-based access policies for Cloud KMS once available.
Note that using a Cloud KMS limits the isolation and protection to the guarantees of the particular offering.
KMIP support allows you to use your KMIP-compatible on-prem KMS and keep full control over your keys.
This follows the common scheme of "hold your own key" (HYOK).
The KEK is used to encrypt per-data "data encryption keys" (DEKs).
DEKs are generated to encrypt your data before storing it on persistent storage.
After being encrypted by the KEK, the DEKs are stored on dedicated cloud storage for persistence.
Currently, Constellation supports the following cloud storage options:
* [AWS S3](https://aws.amazon.com/s3/)
* [GCP Cloud Storage](https://cloud.google.com/storage)
* [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/#overview)
The DEKs are only present in plaintext form in the encrypted main memory of the CVMs.
Similarly, the cryptographic operations for encrypting data before writing it to persistent storage are performed in the context of the CVMs.
#### Recovery and migration
In the case of a disaster, the KEK can be used to decrypt the DEKs locally and subsequently use them to decrypt and retrieve the data.
In case of migration, configuring the same KEK will provide seamless migration of data.
Thus, only the DEK storage needs to be transferred to the new cluster alongside the encrypted data for seamless migration.

View File

@ -0,0 +1,23 @@
# Network encryption
Constellation encrypts all pod communication using the [container network interface (CNI)](https://github.com/containernetworking/cni).
To that end, Constellation deploys, configures, and operates the [Cilium](https://cilium.io/) CNI plugin.
Cilium provides [transparent encryption](https://docs.cilium.io/en/stable/gettingstarted/encryption) for all cluster traffic using either IPSec or [WireGuard](https://www.wireguard.com/).
Currently, Constellation only supports WireGuard as the encryption engine.
You can read more about the cryptographic soundness of WireGuard [in their white paper](https://www.wireguard.com/papers/wireguard.pdf).
Cilium is actively working on implementing a feature called [`host-to-host`](https://github.com/cilium/cilium/pull/19401) encryption mode for WireGuard.
With `host-to-host`, all traffic between nodes will be tunneled via WireGuard (host-to-host, host-to-pod, pod-to-host, pod-to-pod).
Until the `host-to-host` feature is released, Constellation enables `pod-to-pod` encryption.
This mode encrypts all traffic between Kubernetes pods using WireGuard tunnels.
Constellation uses an extended version of `pod-to-pod` called *strict* mode.
When using Cilium in the default setup but with encryption enabled, there is a [known issue](https://docs.cilium.io/en/v1.12/gettingstarted/encryption/#egress-traffic-to-not-yet-discovered-remote-endpoints-may-be-unencrypted)
that can cause pod-to-pod traffic to be unencrypted.
Constellation uses strict mode to mitigate this issue.
We change the default behavior so that traffic destined for an unknown endpoint isn't sent out in plaintext but is dropped instead.
The strict mode can distinguish between traffic that's sent to a pod and traffic that's destined for a cluster-external endpoint, since it knows the pod's CIDR range.
The last remaining traffic that's not encrypted is traffic originating from hosts.
This mainly includes health checks from the Kubernetes API server.
Also, traffic proxied over the API server, for example via `kubectl port-forward`, isn't encrypted.

View File

@ -0,0 +1,89 @@
# Orchestrating Constellation clusters
You can use the CLI to create a cluster on the supported cloud platforms.
The CLI provisions the resources in your cloud environment and initiates the initialization of your cluster.
It uses a set of parameters and an optional configuration file to manage your cluster installation.
The CLI is also used for updating your cluster.
## Workspaces
Each Constellation cluster has an associated *workspace*.
The workspace is where the persistent data such as the Constellation state, config, and ID files are stored.
Each workspace is associated with a single cluster and configuration.
Currently, the CLI stores state in the local filesystem making the current directory the active workspace.
Multiple clusters require multiple workspaces, hence, multiple directories.
Note that every operation on a cluster always has to be performed from the directory associated with its workspace.
## Cluster creation process
Releases of Constellation are [published on GitHub](https://github.com/edgelesssys/constellation/releases).
To allow for fine-grained configuration of your cluster and cloud environment, Constellation supports an extensive configuration file with strong defaults.
The CLI provides you with a good default configuration, which can be generated with `constellation config generate`. Some cloud account-specific information is always required and must be set by the user.
Details and examples can be found in the [reference guide](../reference/config.md).
The following files are generated during the creation of a Constellation cluster and stored in the current workspace:
* a configuration file
* a state file
* an ID file
* a base64 encoded master secret
* a Kubernetes `kubeconfig` file.
Constellation must store the state of its created infrastructure and configuration.
This state is used by Constellation to map real-world resources to your configuration and keep track of metadata.
This state is stored by default in a local file named `constellation-state.json`.
After the successful creation of your cluster, the CLI will provide you with a Kubernetes `kubeconfig` file.
This file provides you with access to your Kubernetes cluster and configures the [kubectl](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) tool.
In addition, the cluster's [identifier](orchestration.md#post-installation-configuration) is returned and stored in a file called `constellation-id.json`.
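As an example, the basic flow with the CLI looks roughly like this; flags and defaults may vary by version and cloud provider, so consult the workflow documentation for the exact commands:
```sh
# Rough sketch of creating a cluster on Azure
constellation config generate azure                    # writes a default configuration file
constellation create --control-plane-nodes 1 --worker-nodes 2
constellation init                                     # initializes the cluster and writes the ID and kubeconfig files
```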
### Creation process details
1. The CLI `create` command creates the confidential VMs (CVMs) resources in your cloud environment and configures the network
2. Each CVM boots the Constellation node image and measures every component in the boot chain
3. The first component launched in each node is the [*Bootstrapper*](components.md#bootstrapper)
4. The *Bootstrapper* waits until it either receives an initialization request or discovers an initialized cluster
5. The CLI `init` command connects to the *Bootstrapper* of a selected node, sends the configuration, and initiates the initialization of the cluster
6. The *Bootstrapper* of **that** node [initializes the Kubernetes cluster](components.md#bootstrapper) and deploys the other Constellation [components](components.md) including the [*JoinService*](components.md#joinservice)
7. Subsequently, the *Bootstrappers* of the other nodes discover the initialized cluster and send join requests to the *JoinService*
8. As part of the join request each node includes an attestation statement of its boot measurements as a form of authentication
9. The *JoinService* verifies the attestation statements and joins the nodes to the Kubernetes cluster
10. This process is repeated for every node joining the cluster later (e.g., through autoscaling)
## Post-installation configuration
Post-installation the CLI provides a configuration for [accessing the cluster using the Kubernetes API](https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/).
The `kubeconfig` file provides the credentials and configuration for connecting and authenticating to the API server.
Once configured, orchestrate the Kubernetes cluster via `kubectl`.
Keep the state files in the workspace directory such as the `constellation-state.json` for the CLI to be able to manage your cluster.
Without it, you won't be able to modify or terminate your cluster.
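A minimal first interaction with the new cluster could look like this, assuming the default file name of the generated kubeconfig:
```sh
# Use the kubeconfig written by the CLI to talk to the cluster (default file name assumed)
export KUBECONFIG="$PWD/constellation-admin.conf"
kubectl get nodes
```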
After the initialization, the CLI will present you with a couple of tokens:
* The [*master secret*](keys.md#master-secret) (stored in the `constellation-mastersecret.json` file by default)
* The [*clusterID*](keys.md#cluster-identity) of your cluster in base64 format
You can read more about these values and their meaning in the guide on [cluster identity](keys.md#cluster-identity).
The *master secret* must be kept secret and can be used to [recover your cluster](../workflows/recovery.md).
Instead of managing this secret manually, you can [use your key management solution of choice](keys.md#user-managed-key-management) with Constellation.
The *clusterID* uniquely identifies a cluster and can be used to [verify your cluster](../workflows/verify.md).
## Upgrades
Constellation images and components might need to be upgraded to new versions during the lifetime of a cluster.
Constellation implements a rolling update mechanism ensuring no downtime of the control or data plane.
You can upgrade a Constellation cluster with a single operation by using the CLI.
For step-by-step instructions on how to do this, refer to [Upgrade Constellation](../workflows/upgrade.md).
### Attestation of upgrades
The new verification hashes (measurements) are released together with every image.
During an update procedure, the CLI provides the new measurements to the [JoinService](components.md#joinservice) securely.
New measurements for an updated image are automatically pulled and verified by the CLI following the [supply chain security concept](attestation.md#chain-of-trust) of Constellation.
The [attestation section](attestation.md#cluster-facing-attestation) describes in detail how these measurements are then used by the JoinService for the attestation of nodes.
The updated measurements are reproducible based on the updated node images, hence, auditable by the customer.

View File

@ -0,0 +1,23 @@
# Overview
Constellation is a cloud-based confidential orchestration platform.
Constellation is founded on Kubernetes and therefore shares the same technology stack and architecture principles.
To learn more about Constellation and Kubernetes, see [product overview](../overview/product.md).
## About installation and updates
As a cluster administrator, you can use the [Constellation CLI](orchestration.md) to install and deploy a cluster.
## About the components and attestation
Constellation manages the nodes and network in your cluster. All nodes are bootstrapped by the [*Bootstrapper*](components.md#bootstrapper). They're verified and authenticated by the [*JoinService*](components.md#joinservice) before being added to the cluster and the network. Finally, the entire cluster can be verified via the [*VerificationService*](components.md#verificationservice) using [remote attestation](attestation.md).
## About node images and verified boot
Constellation comes with operating system images for Kubernetes control-plane and worker nodes.
They're highly optimized for running containerized workloads and specifically prepared for running inside confidential VMs.
You can learn more about [the images](images.md) and how verified boot ensures their integrity during boot and beyond.
## About key management and cryptographic primitives
Encryption of data at-rest, in-transit, and in-use is the fundamental building block for confidential computing and Constellation. Learn more about the [keys and cryptographic primitives](keys.md) used in Constellation and about [encrypted persistent storage](encrypted-storage.md).

View File

@ -0,0 +1,17 @@
# Versions and support policy
This page details which guarantees Constellation makes regarding versions, compatibility, and life cycle.
All released components of Constellation use a semantic version number of the form `v<MAJOR>.<MINOR>.<PATCH>`.
## Release cadence
All [components](components.md) of Constellation are released in lock step on the first Tuesday of every month. This release primarily introduces new features, but may also include security or performance improvements. The `MINOR` version will be incremented as part of this release.
Additional `PATCH` releases may be created on demand, to fix security issues or bugs before the next `MINOR` release window.
New releases are published on [GitHub](https://github.com/edgelesssys/constellation/releases).
### Kubernetes support policy
Constellation is aligned to the [version support policy of Kubernetes](https://kubernetes.io/releases/version-skew-policy/#supported-versions), and therefore supports the most recent three minor versions.

View File

@ -0,0 +1,10 @@
# Examples
We've collected a few Kubernetes example applications that you can deploy in your cluster after [installing the CLI](install.md) and [creating a cluster](first-steps.md). They highlight different features and give you a first hands-on experience of working with Kubernetes in a Constellation cluster. If you have experience with Kubernetes deployments, you shouldn't notice any differences. Have fun exploring your Confidential Kubernetes!
Before trying out the example applications, make sure you [installed the Constellation CLI](install.md) and [created a cluster](first-steps.md). The examples are designed to work on a freshly created cluster and don't require any further prerequisites.
Check out the following examples:
* [Emojivoto](examples/emojivoto.md): A simple but fun demo application to test the general functionality of your confidential cluster.
* [Online Boutique](examples/online-boutique.md): An e-commerce demo application by Google consisting of 11 separate microservices.
* [Horizontal Pod Autoscaling](examples/horizontal-scaling.md): An example demonstrating Constellation's autoscaling capabilities.

View File

@ -0,0 +1,22 @@
# Emojivoto
For a simple but fun demo application to test the general functionality of your confidential cluster, take a look at [Emojivoto](https://github.com/BuoyantIO/emojivoto).
<!-- vale off -->
<img src={require("../../_media/example-emojivoto.jpg").default} alt="emojivoto - Web UI" width="552"/>
<!-- vale on -->
1. Deploy the application:
```bash
kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
```
2. Wait until it becomes available:
```bash
kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
```
3. Forward the web service to your machine:
```bash
kubectl -n emojivoto port-forward svc/web-svc 8080:80
```
4. Visit [http://localhost:8080](http://localhost:8080)

View File

@ -0,0 +1,93 @@
# Horizontal Pod Autoscaling
This example demonstrates Constellation's autoscaling capabilities using a slightly adapted version of the Kubernetes [HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). During the following steps, you'll see Constellation spawn new VMs on demand, add them to the cluster, and remove them again once the load has subsided.
## Requirements
The cluster needs to be initialized with Kubernetes 1.23 or higher. In addition, autoscaling must be enabled to trigger Constellation to assign new nodes dynamically.
For this example specifically, the cluster should start with as few worker nodes as possible. A small cluster with only *one* control-plane node and *one* worker node using one of the low-end supported VM types is recommended for an easier demonstration and to save costs. The example has been tested on Azure using a `Standard_DC4as_v5` instance and on GCP using an `n2d-standard-4` instance.
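If autoscaling isn't enabled yet, it can be turned on when initializing the cluster, as shown in the following sketch:

```bash
# Enable the Kubernetes cluster-autoscaler during initialization
constellation init --autoscale
```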
## Setup
1. Install the Kubernetes Metrics Server:
```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```
2. Deploy the HPA example server that's supposed to be scaled under load.
This is almost the same as the example which can be found in the official Kubernetes HPA walkthrough, with the only difference being increased CPU limits and requests to facilitate the triggering of node scaling events.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-apache
spec:
selector:
matchLabels:
run: php-apache
replicas: 1
template:
metadata:
labels:
run: php-apache
spec:
containers:
- name: php-apache
image: registry.k8s.io/hpa-example
ports:
- containerPort: 80
resources:
limits:
cpu: 900m
requests:
cpu: 600m
---
apiVersion: v1
kind: Service
metadata:
name: php-apache
labels:
run: php-apache
spec:
ports:
- port: 80
selector:
run: php-apache
EOF
```
3. Create a HorizontalPodAutoscaler.
With the above server CPU limits and requests, it's recommended to target an average CPU utilization of 20% across all Pods to see an additional worker node being created later. Note that the CPU utilization used here isn't the host's CPU utilization, but is relative to the requested CPU capacity (20% of the 600 milli-cores requested across all Pods). Take a look at the [original tutorial](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler) for more information on the HPA configuration.
```bash
kubectl autoscale deployment php-apache --cpu-percent=20 --min=1 --max=10
```
4. Create a Pod which generates load onto the server:
```bash
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"
```
5. Wait for a few minutes until new nodes are added to the cluster. See below for how to monitor the state of the HorizontalPodAutoscaler, the list of nodes and the behavior of the autoscaler.
6. To kill the load generator, press CTRL+C and run:
```bash
kubectl delete pod load-generator
```
7. The cluster-autoscaler checks every few minutes whether nodes are underutilized and can be removed from the cluster. It taints such candidates for removal and waits an additional 10 minutes before the nodes are eventually removed and deallocated. The whole process can take ~20 minutes in total.
## Monitoring
:::tip
For better observability, run the listed commands in different tabs in your terminal.
:::
You can watch the status of the HorizontalPodAutoscaler, including the current CPU utilization, the target CPU limit, and the number of replicas created, with:
```bash
kubectl get hpa php-apache --watch
```
From time to time compare the list of nodes to check the behavior of the autoscaler:
```bash
kubectl get nodes
```
For deeper insights, take a look at the logs of the autoscaler Pod, which contain more details about the scaling decision process:
```bash
kubectl logs -f deployment/constellation-cluster-autoscaler -n kube-system
```

View File

@ -0,0 +1,28 @@
# Online Boutique
[Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo) is an e-commerce demo application by Google consisting of 11 separate microservices. Constellation is automatically able to set up a load balancer on your CSP, making it easy to expose services from your confidential cluster without any additional setup required.
<!-- vale off -->
<img src={require("../../_media/example-online-boutique.jpg").default} alt="Online Boutique - Web UI" width="662"/>
<!-- vale on -->
1. Create a namespace:
```bash
kubectl create ns boutique
```
2. Deploy the application:
```bash
kubectl apply -n boutique -f https://github.com/GoogleCloudPlatform/microservices-demo/raw/main/release/kubernetes-manifests.yaml
```
3. Wait for all services to become fully available:
```bash
kubectl wait --for=condition=available --timeout=300s -n boutique --all deployments
```
4. Get the external facing IP address of the frontend (with `<your-ip>` being a placeholder for the assigned IP from your CSP):
```terminal-session
kubectl get service frontend-external -n boutique | awk '{print $4}'
EXTERNAL-IP
<your-ip>
```
5. Enter the IP from the result in your browser to browse the online shop.

View File

@ -0,0 +1,127 @@
# First steps
The following steps will guide you through the process of creating a cluster and deploying a sample app. This example assumes that you have successfully [installed and set up Constellation](install.md).
## Create a cluster
1. Create the configuration file for your selected cloud provider.
<tabs>
<tabItem value="azure" label="Azure" default>
On Azure you also need a *user-assigned managed identity* with the [correct permissions](install.md#authorization).
Then execute:
```bash
constellation config generate azure
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
```bash
constellation config generate gcp
```
</tabItem>
</tabs>
This creates the file `constellation-conf.yaml` in your current working directory. Edit this file to set your cloud subscription IDs and optionally customize further options of your Constellation cluster. All configuration options are documented in this file.
For more details, see the [reference section](../reference/config.md#required-customizations).
2. Download the measurements for your configured image.
```bash
constellation config fetch-measurements
```
This command is necessary to download the latest trusted measurements for your configured image.
For more details, see the [verification section](../workflows/verify.md).
3. Create the cluster with one control-plane node and two worker nodes. `constellation create` uses options set in `constellation-conf.yaml` automatically.
<tabs>
<tabItem value="azure" label="Azure" default>
```bash
constellation create azure --control-plane-nodes 1 --worker-nodes 2 --instance-type Standard_D4a_v4 -y
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
```bash
constellation create gcp --control-plane-nodes 1 --worker-nodes 2 --instance-type n2d-standard-2 -y
```
</tabItem>
</tabs>
This should give the following output:
```shell-session
$ constellation create ...
Your Constellation cluster was created successfully.
```
4. Initialize the cluster
```bash
constellation init
```
This should give the following output:
```shell-session
$ constellation init
Creating service account ...
Your Constellation cluster was successfully initialized.
Constellation cluster's identifier g6iMP5wRU1b7mpOz2WEISlIYSfdAhB0oNaOg6XEwKFY=
Kubernetes configuration constellation-admin.conf
You can now connect to your cluster by executing:
export KUBECONFIG="$PWD/constellation-admin.conf"
```
The cluster's identifier will be different in your output.
Keep `constellation-mastersecret.json` somewhere safe.
This will allow you to [recover your cluster](../workflows/recovery.md) in case of a disaster.
5. Configure kubectl
```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
```
## Deploy a sample application
1. Deploy the [emojivoto app](https://github.com/BuoyantIO/emojivoto)
```bash
kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
```
2. Expose the frontend service locally
```bash
kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
kubectl -n emojivoto port-forward svc/web-svc 8080:80 &
curl http://localhost:8080
kill %1
```
## Terminate your cluster
```bash
constellation terminate
```
This should give the following output:
```shell-session
$ constellation terminate
Terminating ...
Your Constellation cluster was terminated successfully.
```

View File

@ -0,0 +1,176 @@
# Installation and Setup
Constellation runs entirely in your cloud environment and can be easily controlled via a dedicated Command Line Interface (CLI).
The installation process will guide you through the steps of installing the CLI on your machine, verifying it, and connecting it to your Cloud Service Provider (CSP).
## Prerequisites
Before we start, make sure the following requirements are fulfilled:
- Your machine is running Ubuntu or macOS
- You have admin rights on your machine
- [kubectl](https://kubernetes.io/docs/tasks/tools/) is installed
- Your cloud provider is Microsoft Azure or Google Cloud
## Install the Constellation CLI
The Constellation CLI can be downloaded from our [release page](https://github.com/edgelesssys/constellation/releases). To do so, navigate to a release and download the file `constellation`. Move the downloaded file to a directory in your `PATH` (default: `/usr/local/bin`) and make it executable by entering `chmod u+x constellation` in your terminal.
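As a sketch, assuming you want the latest release and the asset is named `constellation` (adjust the URL if you downloaded a specific version):

```bash
# Download the latest CLI release asset and install it into your PATH
curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation
sudo install -m 0755 constellation /usr/local/bin/constellation
```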
Running `constellation` should then give you:
```shell-session
$ constellation
Manage your Constellation cluster.
Usage:
constellation [command]
...
```
### Optional: Enable shell autocompletion
The Constellation CLI supports autocompletion for various shells. To set it up, run `constellation completion` and follow the steps.
## Verify the CLI
For extra security, make sure to verify your CLI. To do so, install [cosign](https://github.com/sigstore/cosign). Then, head to our [release page](https://github.com/edgelesssys/constellation/releases) again and, from the same release as before, download the following files:
- `cosign.pub` (Edgeless System's cosign public key)
- `constellation.sig` (the CLI's signature)
You can then verify your CLI before launching a cluster using the paths to the public key, signature, and CLI executable:
```bash
cosign verify-blob --key cosign.pub --signature constellation.sig constellation
```
For more detailed information, read our [installation, update and verification documentation](../architecture/orchestration.md).
## Set up cloud credentials
The CLI makes authenticated calls to the CSP API. Therefore, you need to set up Constellation with the credentials for your CSP.
### Authentication
In the following, we provide you with different ways to authenticate with your CSP.
:::danger
Don't use the testing methods for setting up a production-grade Constellation cluster. In those methods, secrets are stored on disk during installation, which would expose them to the CSP.
:::
<tabs>
<tabItem value="azure" label="Azure" default>
**Testing**
To try out Constellation, using a cloud environment such as [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview) is the quickest way to get started.
**Production**
For production clusters, use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/) on a trusted machine:
```bash
az login
```
Other options are described in Azure's [authentication guide](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli).
</tabItem>
<tabItem value="gcp" label="GCP" default>
Enable the following cloud APIs first:
- [Compute Engine API](https://console.cloud.google.com/marketplace/product/google/compute.googleapis.com)
- [Cloud Resource Manager API](https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com)
- [Identity and Access Management (IAM) API](https://console.developers.google.com/apis/api/iam.googleapis.com)
**Testing**
- If you are running from within a Google VM, and the VM is allowed to access the necessary APIs, no further configuration is needed.
- If you are using the [Google Cloud Shell](https://cloud.google.com/shell), make sure your [session is authorized](https://cloud.google.com/shell/docs/auth). For example, execute `gsutil` and accept the authorization prompt.
**Production**
For production clusters, use one of the following options on a trusted machine:
- Use the [`gcloud` CLI](https://cloud.google.com/sdk/gcloud)
```bash
gcloud auth application-default login
```
This will ask you to log in to your Google account, and then create your credentials.
The Constellation CLI will automatically load these credentials when needed.
- Set up a service account and pass the credentials manually
Follow [Google's guide](https://cloud.google.com/docs/authentication/production#manually) for setting up your credentials.
</tabItem>
</tabs>
### Authorization
<tabs>
<tabItem value="azure" label="Azure" default>
Your user account needs the following permissions to set up a Constellation cluster:
- `Contributor`
- `User Access Administrator`
Additionally, you need to [create a user-assigned managed identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) with the following roles:
- `Virtual Machine Contributor`
- `Application Insights Component Contributor`
The user-assigned identity is used by the instances of the cluster to access other cloud resources.
You also need an empty resource group per cluster. Note that the user-assigned identity has to be created in a
different resource group, because all resources within the cluster resource group will be deleted on cluster termination.
Last, you need to [create an Active Directory app registration](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application) (you don't need to add a redirect URI).
As supported account types choose 'Accounts in this organizational directory only'. Then [create a client secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret), which will be used by Kubernetes.
On the cluster resource group, [add the app registration](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current#step-2-open-the-add-role-assignment-page)
with role `Owner`.
The user-assigned identity, cluster resource group, app registration client ID, and client secret value need to be set in the `constellation-conf.yaml` configuration file.
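If you prefer the Azure CLI over the portal, the following is a rough sketch of roughly equivalent steps. All names are placeholders, and the role assignments for the managed identity (Virtual Machine Contributor, Application Insights Component Contributor) still need to be added with the appropriate scope:

```bash
# All names and the location are placeholders; adjust them to your environment.
az group create --name constellation-cluster-rg --location westeurope
az group create --name constellation-identity-rg --location westeurope
# User-assigned managed identity used by the cluster's instances
az identity create --name constellation-identity --resource-group constellation-identity-rg
# App registration and client secret used by Kubernetes
az ad app create --display-name constellation-app
az ad sp create --id <APP_CLIENT_ID>
az ad app credential reset --id <APP_CLIENT_ID>
# Grant the app registration Owner rights on the cluster resource group
az role assignment create --assignee <APP_CLIENT_ID> --role Owner --resource-group constellation-cluster-rg
```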
</tabItem>
<tabItem value="gcp" label="GCP" default>
Your user account needs the following permissions to set up a Constellation cluster:
- `compute.*` (or the subset defined by `roles/compute.instanceAdmin.v1`)
Follow Google's guide on [understanding](https://cloud.google.com/iam/docs/understanding-roles) and [assigning roles](https://cloud.google.com/iam/docs/granting-changing-revoking-access).
Additionally, you need a service account with the following permissions:
- `Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)`
- `Compute Network Admin (roles/compute.networkAdmin)`
- `Compute Security Admin (roles/compute.securityAdmin)`
- `Compute Storage Admin (roles/compute.storageAdmin)`
- `Service Account User (roles/iam.serviceAccountUser)`
The key for this service account is passed to the CLI and used by Kubernetes to authenticate with the cloud.
You can configure the path to the key in the `constellation-conf.yaml` configuration file.
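As a rough sketch using the `gcloud` CLI (the project ID and service account name are placeholders; adjust them to your environment):

```bash
# Create the service account and grant it the required roles
gcloud iam service-accounts create constellation-sa --project=<PROJECT_ID>
for role in roles/compute.instanceAdmin.v1 roles/compute.networkAdmin roles/compute.securityAdmin roles/compute.storageAdmin roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member="serviceAccount:constellation-sa@<PROJECT_ID>.iam.gserviceaccount.com" --role="$role"
done
# Create a key file and reference its path in constellation-conf.yaml
gcloud iam service-accounts keys create gcp-sa-key.json \
  --iam-account="constellation-sa@<PROJECT_ID>.iam.gserviceaccount.com"
```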
GCP instances come with a [default service account](https://cloud.google.com/iam/docs/service-accounts#default) attached
that's used by the instances to access the cloud resources of the cluster. You don't need to configure it.
</tabItem>
</tabs>
### Troubleshooting
If you receive an error during any of the outlined steps, please verify that you have executed all previous steps in this documentation. Also, feel free to ask our active community on [Discord](https://discord.com/invite/rH8QTH56JN) for help.
### Next Steps
Once you have followed all previous steps, you can proceed [to deploy your first confidential Kubernetes cluster and application](first-steps.md).

51
docs/docs/intro.md Normal file
View File

@ -0,0 +1,51 @@
---
slug: /
id: intro
---
# Welcome to Constellation! 🎉
Constellation is the first Confidential Kubernetes platform!
Constellation leverages confidential computing to isolate entire Kubernetes clusters and all workloads from the rest of the cloud infrastructure.
From the inside, it's a fully-featured, [certified](https://www.cncf.io/certification/software-conformance/), Kubernetes engine.
From the outside, it's an end-to-end isolated, always-encrypted stronghold. A Confidential Cloud in the public cloud.
Constellation is open source and enterprise-ready, tailored for unleashing the power of confidential computing for all your workloads at scale.
For a brief introduction to the Confidential Kubernetes concept, read the [introduction](overview/confidential-kubernetes.md).
For a more elaborate overview of Constellation, see the [architecture](architecture/overview.md) section.
![Constellation](_media/product-overview.png)
## Features
Constellation's main features are:
* The only cloud-agnostic Confidential Kubernetes platform
* Verifiable integrity and confidentiality protection of the entire Kubernetes cluster
* Highly available, enterprise-ready Kubernetes engine
* Memory runtime encryption of all Kubernetes nodes
* Network encryption for node-to-node traffic, including the pod network
* [Persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) encryption for block storage
* Key management for transparent network and storage encryption
* CC-optimized, fully measured, and integrity-protected node OS
* Kubernetes node attestation
* Dynamic cluster autoscaling with autonomous node attestation
* Supply chain protection with [sigstore](https://www.sigstore.dev/)
## Getting started
Sounds great, how can I try this?
Constellation can be [deployed](getting-started/install.md) in minutes to your favorite infrastructure provider :rocket:
## Where does it fit
Constellation is the Kubernetes platform for secure, confidential cloud computing.
When moving workloads to the cloud most enterprises are facing the following challenges:
* How to **prevent unauthorized access** from hackers, cloud administrators, or governments?
* How to **ensure compliance** with privacy laws (e.g. GDPR) and industry-specific regulation (e.g. HIPAA)?
* How to **implement cloud security at the root** -- without simply adding "yet another tool"?
Constellation is designed to fundamentally change the playing field when it comes to cloud migration.
By leveraging confidential computing hardware capabilities it solves these challenges at the root.

View File

@ -0,0 +1,122 @@
# Performance
Security and performance are generally considered a tradeoff: gaining one quality or aspect typically means giving up some of another.
Encryption is a common suspect for such a tradeoff, but it's inevitable for upholding the confidentiality and privacy of data during cloud transformation.
Constellation provides encryption [of data at rest](../architecture/encrypted-storage.md), [in-cluster transit](../architecture/networking.md), and [in-use](confidential-kubernetes.md) of a Kubernetes cluster.
This article elaborates on the performance impact for applications deployed in Constellation versus standard Kubernetes clusters.
AMD and Azure have collaboratively released a [performance benchmark](https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796) for the runtime encryption of the 3rd Gen AMD EPYC processors with its SEV-SNP capability enabled.
They found that Confidential VMs have minimal performance differences on common benchmarks as compared with general-purpose VMs.
With the overhead being in the single digits, runtime memory encryption noticeably affects only compute-heavy applications.
Confidential VMs based on AMD SEV-SNP are the foundation of Constellation; hence, the same performance results can be expected in terms of runtime overhead.
We performed additional benchmarking tests on Constellation clusters to assess more Kubernetes-specific intra-cluster network throughput, storage I/O, and Kubernetes API latencies.
## Test Setup
We benchmarked Constellation release v1.3.0 using [K-Bench](https://github.com/vmware-tanzu/k-bench). K-Bench is a configurable framework to benchmark Kubernetes clusters in terms of storage I/O, network performance, and creating/scaling resources.
As a baseline, we compared Constellation with the Constellation-supported cloud providers' managed Kubernetes offerings.
Throughout this article, you will find the comparison of Constellation on GCP with GKE and on Azure with AKS.
We can't provide an accurate intercloud meta-comparison at this point due to different Confidential VM machine types.
The benchmark ran with the following machines and configurations:
### Constellation on GCP / GKE
- Nodes: 3
- Machines: `n2d-standard-2`
- Kubernetes version: `1.23.6-gke.2200`
- Zone: `europe-west3-b`
### Constellation on Azure / AKS
- Nodes: 3
- Machines: `D2a_v4`
- Kubernetes version: `1.23.5`
- Region: `North Europe`
- Zone: `2`
### K-Bench
Using the default [K-Bench test configurations](https://github.com/vmware-tanzu/k-bench/tree/master/config), we ran the following tests on the clusters:
- `default`
- `dp_netperf_internode`
- `dp_network_internode`
- `dp_network_intranode`
- `dp_fio`
## Results
### Kubernetes API Latency
At its core, the Kubernetes API is the way to query and modify a cluster's state. Latency matters here. Hence, it's vital that the API latency doesn't spike even with the additional level of security from Constellation's network encryption.
K-Bench's `default` test performs calls to the API to create, update and delete cluster resources.
The three graphs below compare the API latencies (lower is better) in milliseconds for pods, services, and deployments.
![API Latency - Pods](../_media/benchmark_api_pods.png)
Pods: Except for the `Pod Update` call, Constellation is faster than AKS and GKE in terms of API calls.
![API Latency - Services](../_media/benchmark_api_svc.png)
Services: Constellation has lower latencies than AKS and GKE except for service creation on AKS.
![API Latency - Deployments](../_media/benchmark_api_dpl.png)
Deployments: Constellation has the lowest latency for all cases except for scaling deployments on GKE and creating deployments on AKS.
### Network
When it comes to network performance, there are two main indicators we need to differentiate: intra-node and inter-node transmission speed.
K-Bench provides benchmark tests for both, configured as `dp_netperf_internode`, `dp_network_internode`, `dp_network_intranode`.
#### Inter-node
K-Bench has two benchmarks to evaluate the network performance between different nodes.
The first test (`dp_netperf_internode`) uses [`netperf`](https://hewlettpackard.github.io/netperf/) to measure the throughput. Constellation has a slightly lower network throughput than AKS and GKE.
This can largely be attributed to [Constellation's network encryption](../architecture/networking.md).
#### Intra-node
Intra-node communication happens between pods running on the same node.
The connections directly pass through the node's OS layer and never hit the network.
The benchmark evaluates how Constellation's [node OS image](../architecture/images.md) and runtime encryption influence the throughput.
The K-Bench tests `dp_network_internode` and `dp_network_intranode`. The tests use [`iperf`](https://iperf.fr/) to measure the bandwidth available.
Constellation's bandwidth for both sending and receiving is at 20 Gbps while AKS achieves slightly higher numbers and GKE achieves about 30 Gbps in our tests.
![Network benchmark graph](../_media/benchmark_net.png)
### Storage I/O
Azure and GCP offer persistent storage for their Kubernetes services AKS and GKE via the Container Storage Interface (CSI). CSI storage in Kubernetes is available via `PersistentVolumes` (`PV`) and consumed via `PersistentVolumeClaims` (`PVC`).
Upon requesting persistent storage through a PVC, GKE and AKS will provision a PV as defined by a default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
Constellation provides persistent storage on Azure and GCP that's encrypted on the CSI layer. Read more about this in [how Constellation encrypts data at rest](../architecture/encrypted-storage.md).
Similarly, Constellation will provision a PV via a default storage class upon a PVC request.
The K-Bench [`fio`](https://fio.readthedocs.io/en/latest/fio_doc.html) benchmark consists of several tests.
We selected four different tests that perform asynchronous access patterns because we believe they most accurately depict real-world I/O access for most applications.
In the graph below, you will find the I/O throughput in `MiB/s` - where higher is better.
![I/O benchmark graph](../_media/benchmark_io.png)
Comparing Constellation on GCP with GKE, we see that Constellation offers similar read/write speeds in all scenarios.
Constellation on Azure and AKS, however, differ in some scenarios. As you can see, only for the full write mix do Constellation and AKS have similar storage access speeds. In the 70/30 mix, AKS outperforms Constellation.
Note: For the sequential reads with a 0/100 read-write mix, no data could be measured on AKS, hence the missing data bar.
## Conclusion
Constellation can help transform the way organizations process data in the cloud by delivering high-performance Kubernetes while preserving confidentiality and privacy.
As demonstrated in our tests above, Constellation provides a Kubernetes cluster with minimal performance impact compared to the managed Kubernetes offerings AKS and GKE.
While enabling always encrypted processing of data, the network and storage encryption comes at a minimal price.
Constellation holds up in most benchmarks, but can show slightly lower storage and network throughput in certain scenarios.
Kubernetes API latencies aren't affected, and Constellation even outperforms AKS and GKE in this aspect.

View File

@ -0,0 +1,29 @@
# Confidential Kubernetes
We use the term *Confidential Kubernetes* to refer to the concept of using confidential-computing technology to shield entire Kubernetes clusters from the infrastructure. The three defining properties of this concept are:
1. **Workload shielding**: the confidentiality and integrity of all workload-related data and code are enforced.
2. **Control plane shielding**: the confidentiality and integrity of the cluster's control plane, state, and workload configuration are enforced.
3. **Attestation and verifiability**: the two properties above can be verified remotely based on hardware-rooted cryptographic certificates.
Each of the above properties is equally important. Only with all three in conjunction can an entire cluster be shielded without gaps. This is what Constellation is about.
Constellation's approach is to run all nodes of the Kubernetes cluster inside Confidential VMs (CVMs). This gives runtime encryption for the entire cluster. Constellation augments this with transparent encryption of the [network](../architecture/keys.md#network-encryption) and [storage](../architecture/encrypted-storage.md). Thus, workloads and control plane are truly end-to-end encrypted: at rest, in transit, and at runtime. Constellation manages the corresponding [cryptographic keys](../architecture/keys.md) inside CVMs. Constellation verifies the integrity of each new CVM-based node using [remote attestation](../architecture/attestation.md). Only "good" nodes receive the cryptographic keys required to access the network and storage of a cluster. (A node is "good" if it's running a signed Constellation image inside a CVM and is in the expected state.) Towards the DevOps engineer, Constellation provides a single hardware-rooted certificate from which all of the above can be verified. As a result, Constellation wraps an entire cluster into one coherent *confidential context*. The concept is depicted in the following.
![Confidential Kubernetes](../_media/concept-constellation.svg)
In contrast, managed Kubernetes with CVMs, as it's for example offered in [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/) and [GKE](https://cloud.google.com/kubernetes-engine), only provides runtime encryption for certain worker nodes. Here, each worker node is a separate (and typically unverified) confidential context. This only provides limited security benefits as it only prevents direct access to a worker node's memory. The large majority of potential attacks through the infrastructure remain unaffected. This includes attacks through the control plane, access to external key management, and the corruption of worker node images. This leaves many problems unsolved. For instance, *Node A* has no means to verify if *Node B* is "good" and if it's OK to share data with it. Consequently, this approach leaves a large attack surface, as is depicted in the following.
![Concept: Managed Kubernetes plus CVMs](../_media/concept-managed.svg)
The following table highlights the key differences in terms of features:
| | Managed Kubernetes with CVMs | Confidential Kubernetes (Constellation✨) |
|-------------------------------------|------------------------------|--------------------------------------------|
| Runtime encryption | Partial (data plane only)| **Yes** |
| Node image verification | No | **Yes** |
| Full cluster attestation | No | **Yes** |
| Transparent network encryption | No | **Yes** |
| Transparent storage encryption | No | **Yes** |
| Confidential key management | No | **Yes** |
| Cloud agnostic / multi-cloud | No | **Yes** |

View File

@ -0,0 +1,25 @@
# License
## Source code
Constellation's source code is available on [GitHub](https://github.com/edgelesssys/constellation) under the [GNU Affero General Public License (AGPL)](https://www.gnu.org/licenses/agpl-3.0.en.html).
## Binaries
Edgeless Systems provides ready-to-use and [signed](../architecture/attestation.md#chain-of-trust) binaries of Constellation. This includes the CLI and the [node images](../architecture/images.md).
These binaries may be used free of charge within the bounds of Constellation's [**Community License**](#community-license). An [**Enterprise License**](#enterprise-license) can be purchased from Edgeless Systems.
The Constellation CLI displays relevant license information when you initialize your cluster. You are responsible for staying within the bounds of your respective license. Constellation doesn't enforce any limits so as not to endanger your cluster's availability.
### Community License
You are free to use the Constellation binaries provided by Edgeless Systems to create services for internal consumption. You must not use the Constellation binaries to provide hosted services of any type to third parties. Edgeless Systems gives no warranties and offers no support.
These terms may be different for future releases.
### Enterprise License
Enterprise Licenses don't have the above limitations and come with support and additional features. Find out more [here](https://www.edgeless.systems/products/constellation/).
Once you have received your Enterprise License file, place it in your [Constellation workspace](../architecture/orchestration.md#workspaces) in a file named `constellation.license`.

View File

@ -0,0 +1,21 @@
# Product features
Constellation is a confidential orchestration platform, designed to be the most secure way to run Kubernetes.
It leverages confidential computing to isolate entire Kubernetes deployments and all workloads from the infrastructure.
From the inside, a Constellation cluster feels 100% like Kubernetes as you know it.
But for everyone else, from the outside, it's runtime-encrypted VMs talking over encrypted channels and writing encrypted data.
Constellation provides confidential computing enhancements to Kubernetes, including the following:
* Leveraging confidential VMs (CVMs) available in all major clouds to isolate and encrypt the Kubernetes control-plane and worker nodes.
* Node attestation including a [verified boot](../architecture/images.md#measured-boot) that roots in hardware-measured attestation provided by CVM technologies.
* Operating a [container network interface (CNI) plugin](../architecture/networking.md) between CVMs for encrypted network communications in your cluster, enabling TLS offloading.
* [CVM-level persistent volume encryption](../architecture/encrypted-storage.md) ensures the confidentiality and integrity of persistent data outside of the Kubernetes cluster.
* [Confidential key management](../architecture/keys.md).
* Verifiable, measured, and authenticated [updates](../architecture/orchestration.md#upgrades) of node OS images and Kubernetes components.
Constellation provides an enterprise-ready Kubernetes environment with key features such as:
* Multi-cloud deployments. You can deploy Constellation clusters to all major cloud platforms for a consistent confidential orchestration platform.
* Highly available (HA) Confidential Kubernetes cluster with [stacked etcd topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology).
* Integrating with the Kubernetes cloud controller manager (CCM) to securely provide cloud services such as [cluster autoscaling](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), [dynamic persistent volumes](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), and [service load balancing](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).

View File

@ -0,0 +1,22 @@
# Security benefits and threat model
Constellation implements the [Confidential Kubernetes](confidential-kubernetes.md) concept and shields entire Kubernetes deployments from the infrastructure. More concretely, Constellation decreases the size of the trusted computing base (TCB) of a Kubernetes deployment. The TCB is the totality of elements in a computing environment that must be trusted not to be compromised. A smaller TCB results in a smaller attack surface. The following diagram shows how Constellation removes the *cloud & datacenter infrastructure* and the *physical hosts*, including the hypervisor, the host OS, and other components, from the TCB (red). Inside the confidential context (green), Kubernetes remains part of the TCB, but its integrity is attested and can be [verified](../workflows/verify.md).
![TCB comparison](../_media/tcb.svg)
Given this background, the following describes the concrete threat classes that Constellation addresses.
## Insider access
Employees and third-party contractors of cloud service providers (CSPs) have access to different layers of the cloud infrastructure.
This opens up a large attack surface where workloads and data can be read, copied, or manipulated. With Constellation, Kubernetes deployments are shielded from the infrastructure and thus such accesses are prevented.
## Infrastructure-based attacks
Malicious cloud users ("hackers") may break out of their tenancy and access other tenants' data. Advanced attackers may even be able to establish a permanent foothold within the infrastructure and repeatedly access data over a longer period. Analogously to the *insider access* scenario, Constellation also prevents access to a deployment's data in this scenario.
## Supply chain attacks
Supply chain security is receiving lots of attention recently due to an [increasing number of recorded attacks](https://www.enisa.europa.eu/news/enisa-news/understanding-the-increase-in-supply-chain-security-attacks). For instance, a malicious actor could attempt to tamper with Constellation node images (including Kubernetes and other software) before they're loaded into the confidential VMs of a cluster. Constellation uses remote attestation in conjunction with public transparency logs to prevent this. The approach is detailed [here](../architecture/attestation.md).
In the future, Constellation will extend this feature to customer workloads. This will enable cluster owners to create auditable policies that precisely define which containers can run in a given deployment.

327
docs/docs/reference/cli.md Normal file
View File

@ -0,0 +1,327 @@
<!-- This file is generated by constellation/hack/clidocgen via update-cli-reference.yml workflow. Don't edit manually. -->
# CLI reference
Use the Constellation CLI to create and manage your clusters.
Usage:
```
constellation [command]
```
Commands:
* [config](#constellation-config): Work with the Constellation configuration file
* [generate](#constellation-generate): Generate a default configuration file
* [fetch-measurements](#constellation-fetch-measurements): Fetch measurements for configured cloud provider and image
* [create](#constellation-create): Create instances on a cloud platform for your Constellation cluster
* [init](#constellation-init): Initialize the Constellation cluster
* [verify](#constellation-verify): Verify the confidential properties of a Constellation cluster
* [recover](#constellation-recover): Recover a completely stopped Constellation cluster
* [terminate](#constellation-terminate): Terminate a Constellation cluster
* [upgrade](#constellation-upgrade): Plan and perform an upgrade of a Constellation cluster
* [execute](#constellation-execute): Execute an upgrade of a Constellation cluster
* [plan](#constellation-plan): Plan an upgrade of a Constellation cluster
* [version](#constellation-version): Display version of this CLI
## constellation config
Work with the Constellation configuration file
### Synopsis
Generate a configuration file for Constellation.
### Options
```
-h, --help help for config
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation config generate
Generate a default configuration file
### Synopsis
Generate a default configuration file for your selected cloud provider.
```
constellation config generate {aws|azure|gcp} [flags]
```
### Options
```
-f, --file string path to output file, or '-' for stdout (default "constellation-conf.yaml")
-h, --help help for generate
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation config fetch-measurements
Fetch measurements for configured cloud provider and image
### Synopsis
Fetch measurements for configured cloud provider and image. A config needs to be generated first!
```
constellation config fetch-measurements [flags]
```
### Options
```
-h, --help help for fetch-measurements
-s, --signature-url string alternative URL to fetch measurements' signature from
-u, --url string alternative URL to fetch measurements from
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation create
Create instances on a cloud platform for your Constellation cluster
### Synopsis
Create instances on a cloud platform for your Constellation cluster.
```
constellation create {aws|azure|gcp} [flags]
```
### Options
```
-c, --control-plane-nodes int number of control-plane nodes (required)
-h, --help help for create
-t, --instance-type string instance type of cluster nodes
--name string create the cluster with the specified name (default "constell")
-w, --worker-nodes int number of worker nodes (required)
-y, --yes create the cluster without further confirmation
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation init
Initialize the Constellation cluster
### Synopsis
Initialize the Constellation cluster. Start your confidential Kubernetes.
```
constellation init [flags]
```
### Options
```
--autoscale enable Kubernetes cluster-autoscaler
--endpoint string endpoint of the bootstrapper, passed as HOST[:PORT]
-h, --help help for init
--master-secret string path to base64-encoded master secret
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation verify
Verify the confidential properties of a Constellation cluster
### Synopsis
Verify the confidential properties of a Constellation cluster.
If arguments aren't specified, values are read from `constellation-id.json`.
```
constellation verify {aws|azure|gcp} [flags]
```
### Options
```
--cluster-id string verify using Constellation's cluster identifier
-h, --help help for verify
-e, --node-endpoint string endpoint of the node to verify, passed as HOST[:PORT]
--owner-id string verify using the owner identity derived from the master secret
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation recover
Recover a completely stopped Constellation cluster
### Synopsis
Recover a Constellation cluster by sending a recovery key to an instance in the boot stage.
This is only required if instances restart without other instances available for bootstrapping.
```
constellation recover [flags]
```
### Options
```
--disk-uuid string disk UUID of the encrypted state disk (required)
-e, --endpoint string endpoint of the instance, passed as HOST[:PORT] (required)
-h, --help help for recover
--master-secret string path to master secret file (default "constellation-mastersecret.json")
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation terminate
Terminate a Constellation cluster
### Synopsis
Terminate a Constellation cluster. The cluster can't be started again, and all persistent storage will be lost.
```
constellation terminate [flags]
```
### Options
```
-h, --help help for terminate
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation upgrade
Plan and perform an upgrade of a Constellation cluster
### Synopsis
Plan and perform an upgrade of a Constellation cluster.
### Options
```
-h, --help help for upgrade
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation upgrade execute
Execute an upgrade of a Constellation cluster
### Synopsis
Execute an upgrade of a Constellation cluster by applying the chosen configuration.
```
constellation upgrade execute [flags]
```
### Options
```
-h, --help help for execute
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation upgrade plan
Plan an upgrade of a Constellation cluster
### Synopsis
Plan an upgrade of a Constellation cluster by fetching compatible image versions and their measurements.
```
constellation upgrade plan [flags]
```
### Options
```
-f, --file string path to output file, or '-' for stdout, leave empty for interactive mode
-h, --help help for plan
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```
## constellation version
Display version of this CLI
### Synopsis
Display version of this CLI.
```
constellation version [flags]
```
### Options
```
-h, --help help for version
```
### Options inherited from parent commands
```
--config string path to the configuration file (default "constellation-conf.yaml")
```

View File

@ -0,0 +1,83 @@
# Configuration file
Constellation CLI reads all configuration options from `constellation-conf.yaml`.
> The Constellation CLI can generate a default configuration file. This is the preferred way to create the file, so that the configuration matches the CLI version in use.
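For example, to generate the file for Azure and then download the matching measurements for the configured image:

```bash
constellation config generate azure
constellation config fetch-measurements
```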
A sample configuration for a Constellation cluster on Azure looks like this:
```yaml
version: v1 # Schema version of this configuration file.
autoscalingNodeGroupMin: 1 # Minimum number of worker nodes in autoscaling group.
autoscalingNodeGroupMax: 10 # Maximum number of worker nodes in autoscaling group.
stateDiskSizeGB: 30 # Size (in GB) of a node's disk to store the non-volatile state.
# Ingress firewall rules for node network.
ingressFirewall:
- name: bootstrapper # Name of rule.
description: bootstrapper default port # Description for rule.
protocol: tcp # Protocol, such as 'udp' or 'tcp'.
iprange: 0.0.0.0/0 # CIDR range for which this rule is applied.
fromport: 9000 # Start port of a range.
toport: 0 # End port of a range, or 0 if a single port is given by fromport.
- name: ssh # Name of rule.
description: SSH # Description for rule.
protocol: tcp # Protocol, such as 'udp' or 'tcp'.
iprange: 0.0.0.0/0 # CIDR range for which this rule is applied.
fromport: 22 # Start port of a range.
toport: 0 # End port of a range, or 0 if a single port is given by fromport.
- name: nodeport # Name of rule.
description: NodePort # Description for rule.
protocol: tcp # Protocol, such as 'udp' or 'tcp'.
iprange: 0.0.0.0/0 # CIDR range for which this rule is applied.
fromport: 30000 # Start port of a range.
toport: 32767 # End port of a range, or 0 if a single port is given by fromport.
- name: kubernetes # Name of rule.
description: Kubernetes # Description for rule.
protocol: tcp # Protocol, such as 'udp' or 'tcp'.
iprange: 0.0.0.0/0 # CIDR range for which this rule is applied.
fromport: 6443 # Start port of a range.
toport: 0 # End port of a range, or 0 if a single port is given by fromport.
# Supported cloud providers and their specific configurations.
provider:
# Configuration for Azure as provider.
azure:
subscription: "" # Subscription ID of the used Azure account. See: https://docs.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription
tenant: "" # Tenant ID of the used Azure account. See: https://docs.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-ad-tenant
location: "" # Azure datacenter region to be used. See: https://docs.microsoft.com/en-us/azure/availability-zones/az-overview#azure-regions-with-availability-zones
image: /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435/resourceGroups/CONSTELLATION-IMAGES/providers/Microsoft.Compute/galleries/Constellation/images/constellation-coreos/versions/0.0.1659453699 # Machine image used to create Constellation nodes.
stateDiskType: StandardSSD_LRS # Type of a node's state disk. The type influences boot time and I/O performance. See: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#disk-type-comparison
# Expected confidential VM measurements.
measurements:
11: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
12: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
userAssignedIdentity: "" # Authorize spawned VMs to access Azure API. See: https://docs.edgeless.systems/constellation/getting-started/install#authorization
kubernetesVersion: "1.24" # Kubernetes version installed in the cluster.
# # Egress firewall rules for node network.
# egressFirewall:
# - name: rule#1 # Name of rule.
# description: the first rule # Description for rule.
# protocol: tcp # Protocol, such as 'udp' or 'tcp'.
# iprange: 0.0.0.0/0 # CIDR range for which this rule is applied.
# fromport: 443 # Start port of a range.
# toport: 443 # End port of a range, or 0 if a single port is given by fromport.
# # Create SSH users on Constellation nodes.
# sshUsers:
# - username: Alice # Username of new SSH user.
# publicKey: ssh-rsa AAAAB3NzaC...5QXHKW1rufgtJeSeJ8= alice@domain.com # Public key of new SSH user.
```
## Required customizations
Most options of a generated configuration can be kept at their default values. However, you must edit some cloud provider options.
### Azure
Set the `subscription` and `tenant` IDs of your subscription.
Set the `userAssignedIdentity` that you [created for Constellation](../getting-started/install.md#azure).
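If you're unsure of these values, the Azure CLI can print them for the currently selected account, as sketched below (assuming you're logged in with `az login`):

```bash
# Show the subscription ID and tenant ID of the account you're logged in with
az account show --query "{subscription: id, tenant: tenantId}" --output table
```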
### GCP
Set the `project` that you want to use for your Constellation cluster.
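If you're unsure which project `gcloud` is currently configured for, you can print it, for example:

```bash
# Show the project that gcloud is currently configured to use
gcloud config get-value project
```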

View File

@ -0,0 +1,105 @@
# Create your cluster
Creating your cluster requires two steps:
1. Creating the necessary resources in your cloud environment
2. Bootstrapping the Constellation cluster and setting up a connection
See the [architecture](../architecture/orchestration.md) section for details on the inner workings of this process.
## The *create* step
This step creates the necessary resources for your cluster in your cloud environment.
### Prerequisites
Before creating your cluster you need to decide on
* the size of your cluster (the number of control-plane and worker nodes)
* the machine type of your nodes (depending on the availability in your cloud environment)
* whether to enable autoscaling for your cluster (automatically adding and removing nodes depending on resource demands)
You can find the currently supported machine types for your cloud environment in the [installation guide](../architecture/orchestration.md).
### Configuration
Constellation can generate a configuration file for your cloud provider:
<tabs>
<tabItem value="azure" label="Azure" default>
```bash
constellation config generate azure
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
```bash
constellation config generate gcp
```
</tabItem>
</tabs>
This creates the file `constellation-conf.yaml` in the current directory. You must edit it before you can execute the next steps. See the [reference section](../reference/config.md) for details.
Next, download the latest trusted measurements for your configured image.
```bash
constellation config fetch-measurements
```
For more details, see the [verification section](../workflows/verify.md).
### Create
The following command creates a cluster with one control-plane and two worker nodes:
<tabs>
<tabItem value="azure" label="Azure" default>
```bash
constellation create azure --control-plane-nodes 1 --worker-nodes 2 --instance-type Standard_D4a_v4 -y
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
```bash
constellation create gcp --control-plane-nodes 1 --worker-nodes 2 --instance-type n2d-standard-2 -y
```
</tabItem>
</tabs>
For details on the flags and a list of supported instance types, consult the command help via `constellation create -h`.
*create* will store your cluster's configuration to a file named [`constellation-state.json`](../architecture/orchestration.md#installation-process) in your current directory.
## The *init* step
This step bootstraps your cluster and configures your Kubernetes client.
### Init
The following command initializes and bootstraps your cluster:
```bash
constellation init
```
To enable autoscaling in your cluster, add the `--autoscale` flag:
```bash
constellation init --autoscale
```
Next, configure kubectl for your Constellation cluster:
```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
kubectl get nodes -o wide
```
That's it. You've successfully created a Constellation cluster.

View File

@ -0,0 +1 @@
# Expose services

View File

@ -0,0 +1,147 @@
# Recovery
Recovery of a Constellation cluster means getting it back into a healthy state after it became unhealthy due to issues in the underlying infrastructure.
Reasons for an unhealthy cluster can range from a power outage or planned reboot to the migration of nodes and regions.
Constellation keeps all stateful data protected and encrypted in a [stateful disk](../architecture/images.md#stateful-disk) attached to each node.
The stateful disk will be persisted across reboots.
The data restored from that disk contains the entire Kubernetes state including the application deployments.
This means that after a successful recovery procedure, the applications can continue operating without being redeployed from scratch.
Recovery events are rare because Constellation is built for high availability and contains mechanisms to automatically replace and join nodes to the cluster.
Once a node reboots, the [*Bootstrapper*](../architecture/components.md#bootstrapper) will try to authenticate to the cluster's [*JoinService*](../architecture/components.md#joinservice) using remote attestation.
If successful, the *JoinService* returns the encryption key for the stateful disk as part of the initialization response.
This process ensures that Constellation nodes can securely recover and rejoin a cluster autonomously.
In case of a disaster, where the control plane itself becomes unhealthy, Constellation provides a mechanism to recover that cluster and bring it back into a healthy state.
The `constellation recover` command connects to a node, establishes a secure connection using [attested TLS](../architecture/attestation.md#attested-tls-atls), and provides that node with the key to decrypt its stateful disk and continue booting.
This process has to be repeated until enough nodes are running again to establish a [member quorum for etcd](https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance), at which point the Kubernetes state can be recovered.
## Identify unhealthy clusters
The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, that's first observed when the Kubernetes API server becomes unresponsive.
The causes can vary but are often related to issues in the underlying infrastructure.
Recovery in Constellation becomes necessary if not enough control-plane nodes are in a healthy state to keep the control plane operational.
The health status of the Constellation nodes can be checked and monitored via the cloud service provider.
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each cloud environment.
Once you've identified that your cluster is in an unhealthy state, you can use the [recovery](recovery.md#recover-your-cluster) command of the Constellation CLI to restore it.
<tabs>
<tabItem value="azure" label="Azure" default>
In the Azure cloud portal, find the cluster's resource group `<cluster-name>-<suffix>`.
Inside the resource group, check that the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>` has enough members in a *Running* state.
Open the scale set's details page, go to `Settings -> Instances` on the left, and check the *Status* field.
Second, check the boot logs of these *Instances*.
In the scale set's *Instances* view, open the details page of the desired instance.
Check the serial console output of that instance.
On the left open the *"Support + troubleshooting" -> "Serial console"* page:
In the serial console output search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):
```shell
{"level":"INFO","ts":"2022-08-01T08:02:20Z","caller":"cmd/main.go:46","msg":"Starting disk-mapper","version":"0.0.0","cloudProvider":"azure"}
{"level":"INFO","ts":"2022-08-01T08:02:20Z","logger":"setupManager","caller":"setup/setup.go:57","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-08-01T08:02:20Z","logger":"keyService","caller":"keyservice/keyservice.go:92","msg":"Waiting for decryption key. Listening on: [::]:9000"}
```
The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
If that fails because the control plane is unhealthy, you'll see log messages similar to the following:
```shell
{"level":"INFO","ts":"2022-08-01T08:02:21Z","logger":"keyService","caller":"keyservice/keyservice.go:118","msg":"Received list with JoinService endpoints: [10.9.0.5:30090 10.9.0.6:30090 10.9.0.7:30090 10.9.0.8:30090 10.9.0.9:30090 10.9.0.10:30090 10.9.0.11:30090 10.9.0.12:30090 10.9.0.13:30090 10.9.0.14:30090 10.9.0.15:30090 10.9.0.16:30090 10.9.0.17:30090 10.9.0.18:30090 10.9.0.19:30090 10.9.0.20:30090 10.9.0.21:30090 10.9.0.22:30090 10.9.0.23:30090]"}
{"level":"INFO","ts":"2022-08-01T08:02:21Z","logger":"keyService","caller":"keyservice/keyservice.go:145","msg":"Requesting rejoin ticket","endpoint":"10.9.0.5:30090"}
{"level":"ERROR","ts":"2022-08-01T08:02:21Z","logger":"keyService","caller":"keyservice/keyservice.go:148","msg":"Failed to request rejoin ticket","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.9.0.5:30090: connect: connection refused\"","endpoint":"10.9.0.5:30090"}
```
That means you have to recover that node manually.
Before you continue with the [recovery process](#recover-your-cluster), you need to know the node's IP address and the UUID of its state disk.
For the IP address, return to the instances *Overview* page and find the *Private IP address*.
For the UUID, open the [Cloud logging](troubleshooting.md#azure) explorer.
Type `traces | where message contains "Disk UUID"` and click `Run`.
Find the entry corresponding to that instance `{"instance-name":"<cluster-name>-control-plane-<suffix>"}` and take the UUID from the message field `Disk UUID: <UUID>`.
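If you prefer the Azure CLI over the portal, a sketch like the following lists the private IP addresses of the control-plane instances. The resource group and scale set names are placeholders, and the exact output fields may differ between CLI versions:
```bash
# List the private IP addresses of all control-plane instances (names are placeholders).
az vmss nic list \
  --resource-group <cluster-name>-<suffix> \
  --vmss-name constellation-scale-set-controlplanes-<suffix> \
  --query "[].ipConfigurations[].privateIpAddress" -o tsv
```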
</tabItem>
<tabItem value="gcp" label="GCP" default>
First, check that the control plane *Instance Group* has enough members in a *Ready* state.
Go to *Instance Groups* and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.
Second, check the status of the *VM Instances*.
Go to *VM Instances* and open the details of the desired instance.
Check the serial console output of that instance by opening the *logs -> "Serial port 1 (console)"* page:
![GCP portal serial console link](../_media/recovery-gcp-serial-console-link.png)
In the serial console output search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):
```shell
{"level":"INFO","ts":"2022-07-29T09:45:55Z","caller":"cmd/main.go:46","msg":"Starting disk-mapper","version":"0.0.0","cloudProvider":"gcp"}
{"level":"INFO","ts":"2022-07-29T09:45:55Z","logger":"setupManager","caller":"setup/setup.go:57","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-07-29T09:45:55Z","logger":"keyService","caller":"keyservice/keyservice.go:92","msg":"Waiting for decryption key. Listening on: [::]:9000"}
```
The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
If that fails because the control plane is unhealthy, you'll see log messages similar to the following:
```shell
{"level":"INFO","ts":"2022-07-29T09:46:15Z","logger":"keyService","caller":"keyservice/keyservice.go:118","msg":"Received list with JoinService endpoints: [192.168.178.2:30090]"}
{"level":"INFO","ts":"2022-07-29T09:46:15Z","logger":"keyService","caller":"keyservice/keyservice.go:145","msg":"Requesting rejoin ticket","endpoint":"192.168.178.2:30090"}
{"level":"ERROR","ts":"2022-07-29T09:46:15Z","logger":"keyService","caller":"keyservice/keyservice.go:148","msg":"Failed to request rejoin ticket","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.2:30090: connect: connection refused\"","endpoint":"192.168.178.2:30090"}
```
That means you have to recover that node manually.
Before you continue with the [recovery process](#recover-your-cluster), you need to know the node's IP address and the UUID of its state disk.
For the IP address, go to the *"VM Instance" -> "network interfaces"* page and take the address from *"Primary internal IP address."*
For the UUID, open the [Cloud logging](troubleshooting.md#cloud-logging) explorer; you'll find it right above the serial console link (see the picture above).
Search for `Disk UUID: <UUID>`.
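If you prefer the gcloud CLI over the console, a sketch like the following looks up the instance's internal IP address and searches the logs for the disk UUID. The instance name, zone, and log filter are placeholders and assumptions; adjust them to your cluster:
```bash
# Internal IP address of the instance (name and zone are placeholders).
gcloud compute instances describe <cluster-name>-control-plane-<suffix> \
  --zone <zone> --format='get(networkInterfaces[0].networkIP)'

# Search the logs for the disk UUID (the filter is an assumption; compare the query in the troubleshooting section).
gcloud logging read 'resource.type="gce_instance" AND textPayload:"Disk UUID"' --limit 5
```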
</tabItem>
</tabs>
## Recover your cluster
Depending on the size of your cluster and the number of unhealthy control-plane nodes, the following process needs to be repeated until a [member quorum for etcd](https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance) is established.
For example, assume you have 5 control-plane nodes in your cluster and 4 of them have been rebooted due to a maintenance downtime in the cloud environment.
You have to run through the following process for 2 of these nodes and recover them manually to restore the quorum.
From there, your cluster will auto-heal the remaining 2 control-plane nodes and the rest of your cluster.
Recovering a node requires the following parameters:
* The node's IP address
* The node's state disk UUID
* Access to the master secret of the cluster
See the [Identify unhealthy clusters](#identify-unhealthy-clusters) description of how to obtain the node's IP address and state disk UUID.
Note that the recovery command needs to connect to the recovering nodes.
Nodes only have private IP addresses in the cluster's VPC; hence, the command needs to be issued from within the cluster's VPC network.
The easiest approach is to set up a jump host connected to the VPC network and perform the recovery from there.
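As an illustration, one possible way to get the CLI and the master secret onto such a jump host looks like this. The user and host names are placeholders:
```bash
# Copy the CLI binary and the master secret to a jump host inside the cluster's VPC (placeholders).
scp constellation constellation-mastersecret.json <user>@<jump-host>:

# Log in to the jump host; the recovery command below is then run from there.
ssh <user>@<jump-host>
```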
Given these prerequisites, a node can be recovered like this:
```bash
$ constellation recover -e 34.107.89.208 --disk-uuid b27f817c-6799-4c0d-81d8-57abc8386b70 --master-secret constellation-mastersecret.json
Pushed recovery key.
```
In the node's serial console output, you'll see output similar to the following:
```shell
[ 3225.621753] EXT4-fs (dm-1): INFO: recovery required on readonly filesystem
[ 3225.628807] EXT4-fs (dm-1): write access will be enabled during recovery
[ 3226.295816] EXT4-fs (dm-1): recovery complete
[ 3226.301618] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
[ 3226.338157] systemd[1]: run-state.mount: Deactivated successfully.
[ OK [[ 3226.347833] systemd[1]: Finished Prepare encrypted state disk.
0m] Finished Prepare encrypted state disk.
Startin[ 3226.363705] systemd[1]: Starting OSTree Prepare OS/...
g OSTre[ 3226.370625] ostree-prepare-root[939]: preparing sysroot at /sysroot
e Prepare OS/...
```
After enough control-plane nodes have been recovered and the Kubernetes cluster becomes healthy again, the rest of the cluster starts auto-healing using the mechanism described above.

View File

@ -0,0 +1,56 @@
# Scale your cluster
Constellation provides all features of a Kubernetes cluster including scaling and autoscaling.
## Worker node scaling
[During cluster initialization](create.md#init) you can choose to deploy the [cluster autoscaler](https://github.com/kubernetes/autoscaler). It automatically provisions additional worker nodes so that all pods have a place to run.
Alternatively, you can choose to manually scale your cluster:
<tabs>
<tabItem value="azure" label="Azure" default>
1. Find your Constellation resource group.
2. Select the `scale-set-workers`.
3. Go to **settings** and **scaling**.
4. Set the new **instance count** and **save**.
</tabItem>
<tabItem value="gcp" label="GCP" default>
1. In Compute Engine go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **worker** instance group.
3. Set the new **number of instances** and **save**.
</tabItem>
</tabs>
This works for scaling your worker nodes up and down.
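If you prefer the command line over the web console, a sketch like the following achieves the same result. The resource names are placeholders; use the names from your Constellation resource group or project:
```bash
# Azure: scale the worker scale set to 4 instances (names are placeholders).
az vmss scale --resource-group <cluster-name>-<suffix> --name <scale-set-workers> --new-capacity 4

# GCP: resize the worker instance group to 4 instances (names are placeholders).
gcloud compute instance-groups managed resize <cluster-name>-worker-<suffix> --size 4 --zone <zone>
```
The same commands apply to the control-plane scale set or instance group when scaling up control-plane nodes as described below.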
## Control-plane node scaling
Control-plane nodes can **only be scaled manually and only scaled up**!
To increase the number of control-plane nodes, follow these steps:
<tabs>
<tabItem value="azure" label="Azure" default>
1. Find your Constellation resource group.
2. Select the `scale-set-controlplanes`.
3. Go to **settings** and **scaling**.
4. Set the new (increased) **instance count** and **save**.
</tabItem>
<tabItem value="gcp" label="GCP" default>
1. In Compute Engine go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **control-plane** instance group.
3. Set the new (increased) **number of instances** and **save**.
</tabItem>
</tabs>
If you scale down the number of control-plane nodes, the removed nodes won't be able to exit the `etcd` cluster correctly. This will endanger the quorum that's required to run a stable Kubernetes control plane.

View File

@ -0,0 +1,59 @@
# Managing SSH Keys
Constellation gives you the capability to create UNIX users that can connect to the cluster nodes over SSH, allowing you to access both control-plane and worker nodes. While the data partition is persistent, the system partition is read-only, meaning that users need to be re-created upon each restart of a node. This is where the Access Manager comes into effect, ensuring the automatic (re-)creation of all users whenever a node is restarted.
All users defined in the `ssh-users` section of the Constellation configuration (see the [reference section](../reference/config.md) for details) are automatically created during the cluster's initialization.
For persistence, they're transferred into a ConfigMap called `ssh-users`, residing in the `kube-system` namespace. If no users are initially defined, the ConfigMap is still created, with no entries. After the initial definition in the Constellation configuration, users can be added and removed by modifying the entries of the ConfigMap and restarting a node.
## Access Manager
The Access Manager doesn't restrict users to certain key formats; all formats supported by the OpenSSH server are accepted. These are RSA, ECDSA (using the `nistp256`, `nistp384`, and `nistp521` curves), and Ed25519. The Access Manager also performs no validation of the keys and passes them directly to the authorized key lists as defined.
Note that all users are automatically created with `sudo` capabilities, so make sure no one unintended has permissions to modify the `ssh-users` ConfigMap.
The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` followed by a `pause` container to avoid automatic restarts. While killing the Pod and letting it restart technically works for (re-)creating users, it doesn't automatically remove users. Therefore, a complete node restart is required to correctly apply user modifications on the system and needs to be performed manually after making changes to the ConfigMap.
When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`, with the owner of each directory and its content being modified to `root`.
You can update the ConfigMap by running:
```bash
kubectl edit configmap -n kube-system ssh-users
```
Alternatively, you can modify the definition listed in the examples and re-apply it.
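For example, you could keep the ConfigMap definition from the examples below in a local file and re-apply it after editing. The file name is a placeholder:
```bash
kubectl apply -f ssh-users-configmap.yaml
```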
## Examples
An example that creates a user called `myuser` as part of the `constellation-config.yaml` looks like this:
```yaml
# Create SSH users on Constellation nodes upon the first initialization of the cluster.
sshUsers:
myuser: "ssh-rsa AAAA...mgNJd9jc="
```
This user is then created upon the first initialization of the cluster, and translated into a ConfigMap as shown below:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ssh-users
namespace: kube-system
data:
myuser: "ssh-rsa AAAA...mgNJd9jc="
```
You can add more users by adding `data` entries:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ssh-users
namespace: kube-system
data:
myuser: "ssh-rsa AAAA...mgNJd9jc="
anotheruser: "ssh-ed25519 AAAA...CldH"
```
Similarly, removing any entries causes users to be evicted upon the next restart of the node.
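As a sketch, you can also remove a single entry with a JSON patch instead of editing the ConfigMap interactively; a node restart is still required afterward:
```bash
# Remove the user 'anotheruser' from the ConfigMap (kubectl edit works just as well).
kubectl patch configmap ssh-users -n kube-system --type=json \
  -p '[{"op": "remove", "path": "/data/anotheruser"}]'
```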

View File

@ -0,0 +1,282 @@
# Use persistent storage
Persistent storage in Kubernetes requires configuration based on your cloud provider of choice.
For abstraction of container storage, Kubernetes offers [volumes](https://kubernetes.io/docs/concepts/storage/volumes/),
allowing users to mount storage solutions directly into containers.
The [Container Storage Interface (CSI)](https://kubernetes-csi.github.io/docs/) is the standard interface for exposing arbitrary block and file storage systems into containers in Kubernetes.
Cloud providers offer their own CSI-based solutions for cloud storage.
## Confidential storage
Most cloud storage solutions support encryption, such as [GCE Persistent Disks (PD)](https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek).
Constellation supports the available CSI-based storage options for Kubernetes engines in Azure and GCP.
However, their encryption takes place in the storage backend and is managed by the cloud provider.
This mode of storage encryption doesn't provide confidential storage.
Using the default CSI drivers for these storage types means trusting the CSP with your persistent data.
Constellation provides CSI drivers for Azure Disk and GCE PD, offering [encryption on the node level](../architecture/keys.md#storage-encryption). They enable transparent encryption for persistent volumes without needing to trust the cloud backend. Plaintext data never leaves the confidential VM context, offering you confidential storage.
For more details see [encrypted persistent storage](../architecture/encrypted-storage.md).
## CSI Drivers
Constellation can use the following drivers, which offer node-level encryption and optional integrity protection.
<tabs>
<tabItem value="azure" label="Azure" default>
1. [Azure Disk Storage](https://github.com/edgelesssys/constellation-azuredisk-csi-driver)
Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the example below on how to install the modified Azure Disk CSI driver or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for installation and more information about the Constellation-managed version of the driver. Since Azure Disks are mounted as ReadWriteOnce, they're only available to a single pod.
</tabItem>
<tabItem value="gcp" label="GCP" default>
1. [Persistent Disk](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver):
Mount GCP [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
This includes support for [volume snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots), which let you create copies of your volume at a specific point in time.
You can use them to bring a volume back to a prior state or provision new volumes.
Follow the examples listed below to set up the modified GCP PD CSI driver, or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.
</tabItem>
</tabs>
If the options above aren't a suitable solution for you, note that Constellation is compatible with all other CSI-based storage options. For example, you can use [Azure Files](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction) or [GCP Filestore](https://cloud.google.com/filestore) with Constellation out of the box. However, Constellation doesn't yet provide transparent encryption on the node level for these storage types.
## Installation
The following installation guide gives a brief overview of using CSI-based confidential cloud storage for persistent volumes in Constellation.
<tabs>
<tabItem value="azure" label="Azure" default>
1. Install the CSI driver:
```bash
helm install azuredisk-csi-driver charts/edgeless/latest/azuredisk-csi-driver.tgz \
--namespace kube-system \
--set linux.distro=fedora \
--set controller.replicas=1
```
2. Create a [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for your driver
A storage class configures the driver responsible for provisioning storage for persistent volume claims.
A storage class only needs to be created once and can then be used by multiple volumes.
The following snippet creates a simple storage class using a [Standard SSD](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) as the backing storage device when the first Pod claiming the volume is created.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: encrypted-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: azuredisk.csi.confidential.cloud
parameters:
skuName: StandardSSD_LRS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```
:::info
By default, integrity protection is disabled for performance reasons. If you want to enable integrity protection, add `csi.storage.k8s.io/fstype: ext4-integrity` to `parameters`. Alternatively, you can use another filesystem by specifying another file system type with the suffix `-integrity`. Note that volume expansion isn't supported for integrity-protected disks.
:::
</tabItem>
<tabItem value="gcp" label="GCP" default>
1. Install the CSI driver:
```bash
git clone https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver.git
cd constellation-gcp-compute-persistent-disk-csi-driver
kubectl apply -k ./deploy/kubernetes/overlays/edgeless/latest
```
2. Create a [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for your driver
A storage class configures the driver responsible for provisioning storage for persistent volume claims.
A storage class only needs to be created once and can then be used by multiple volumes.
The following snippet creates a simple storage class for the GCE PD driver, utilizing [balanced persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs) as the storage backend device when the first Pod claiming the volume is created.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: encrypted-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: gcp.csi.confidential.cloud
parameters:
type: pd-standard
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```
:::info
By default, integrity protection is disabled for performance reasons. If you want to enable integrity protection, add `csi.storage.k8s.io/fstype: ext4-integrity` to `parameters`. Alternatively, you can use another filesystem by specifying another file system type with the suffix `-integrity`. Note that volume expansion isn't supported for integrity-protected disks.
:::
</tabItem>
</tabs>
3. Create a [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
A [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) is a request for storage with certain properties.
It can refer to a storage class.
The following creates a persistent volume claim, requesting 20 GB of storage via the previously created storage class:
```bash
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-example
namespace: default
spec:
accessModes:
- ReadWriteOnce
storageClassName: encrypted-storage
resources:
requests:
storage: 20Gi
EOF
```
4. Create a Pod with persistent storage
You can assign a persistent volume claim to an application in need of persistent storage.
The mounted volume will persist across restarts.
The following creates a pod that uses the previously created persistent volume claim:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: web-server
namespace: default
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- mountPath: /var/lib/www/html
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: pvc-example
readOnly: false
EOF
```
### Set the default storage class
The examples above are defined to be automatically set as the default storage class. The default storage class is responsible for all persistent volume claims that don't explicitly request a `storageClassName`. If you need to change the default, follow the steps below:
<tabs>
<tabItem value="azure" label="Azure" default>
1. List the storage classes in your cluster:
```bash
kubectl get storageclass
```
The output is similar to this:
```shell-session
NAME PROVISIONER AGE
some-storage (default) disk.csi.azure.com 1d
encrypted-storage azuredisk.csi.confidential.cloud 1d
```
The default storage class is marked by `(default)`.
2. Mark the old default storage class as non-default
If you previously used another storage class as the default, you'll have to remove that annotation:
```bash
kubectl patch storageclass <name-of-old-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
3. Mark the new class as the default
```bash
kubectl patch storageclass <name-of-new-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
4. Verify that your chosen storage class is default:
```bash
kubectl get storageclass
```
The output is similar to this:
```shell-session
NAME PROVISIONER AGE
some-storage disk.csi.azure.com 1d
encrypted-storage (default) azuredisk.csi.confidential.cloud 1d
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
1. List the storage classes in your cluster:
```bash
kubectl get storageclass
```
The output is similar to this:
```shell-session
NAME PROVISIONER AGE
some-storage (default) pd.csi.storage.gke.io 1d
encrypted-storage gcp.csi.confidential.cloud 1d
```
The default storage class is marked by `(default)`.
2. Mark the old default storage class as non-default
If you previously used another storage class as the default, you'll have to remove that annotation:
```bash
kubectl patch storageclass <name-of-old-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
3. Mark the new class as the default
```bash
kubectl patch storageclass <name-of-new-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
4. Verify that your chosen storage class is default:
```bash
kubectl get storageclass
```
The output is similar to this:
```shell-session
NAME PROVISIONER AGE
some-storage pd.csi.storage.gke.io 1d
encrypted-storage (default) gcp.csi.confidential.cloud 1d
```
</tabItem>
</tabs>

View File

@ -0,0 +1,26 @@
# Terminate your cluster
You can terminate your cluster using the CLI.
For this, you need the state file of your running cluster, named `constellation-state.json`, in the current directory.
:::danger
All ephemeral storage and state of your cluster will be lost. Make sure any data is safely stored in persistent storage. Constellation can recreate your cluster and the associated encryption keys, but won't back up your application data automatically.
:::
Terminate the cluster by running:
```bash
constellation terminate
```
This deletes all resources created by Constellation in your cloud environment.
All local files created by the `create` and `init` commands are deleted as well, except the *master secret* `constellation-mastersecret.json` and the configuration file.
:::caution
Termination can fail if additional resources have been created that depend on the ones managed by Constellation. In this case, you need to delete these additional
resources manually. Just run the `terminate` command again afterward to continue the termination process of the cluster.
:::

View File

@ -0,0 +1,39 @@
# Troubleshooting
This section aids you in finding problems when working with Constellation.
## Cloud logging
To provide information during the early stages of a node's boot process, Constellation logs messages to the cloud providers' log systems. Since these offerings **aren't** confidential, only generic information without any sensitive values is stored. This provides administrators with a high-level understanding of the current state of a node.
You can view this information in the following places:
<tabs>
<tabItem value="azure" label="Azure" default>
1. In your Azure subscription find the Constellation resource group.
2. Inside the resource group find the Application Insights resource called `constellation-insights-*`.
3. On the left-hand side go to `Logs`, which is located in the section `Monitoring`.
4. Close the Queries page if it pops up.
5. In the query text field type in `traces`, and click `Run`.
To **find the disk UUIDs** use the following query: `traces | where message contains "Disk UUID"`
</tabItem>
<tabItem value="gcp" label="GCP" default>
1. Select the project that hosts Constellation.
2. Go to the `Compute Engine` service.
3. On the right-hand side of a VM entry, select `More Actions` (a stacked ellipsis).
4. Select `View logs`.
To **find the disk UUIDs** use the following query: `resource.type="gce_instance" text_payload=~"Disk UUID:.*\n" logName=~".*/constellation-boot-log"`
:::info
Constellation uses the default bucket to store logs. Its [default retention period is 30 days](https://cloud.google.com/logging/quotas#logs_retention_periods).
:::
</tabItem>
</tabs>

View File

@ -0,0 +1,35 @@
# Upgrade your cluster
Constellation provides an easy way to upgrade from one release to the next.
This involves choosing a new VM image to use for all nodes in the cluster and updating the cluster's expected measurements.
## Plan the upgrade
If you don't already know the image you want to upgrade to, use the `upgrade plan` command to pull in a list of available updates.
```bash
constellation upgrade plan
```
The command will let you interactively choose from a list of available updates and prepare your Constellation config file for the next step.
If you plan to use the command in scripts, use the `--file` flag to compile the available options into a YAML file.
You can then manually set the chosen upgrade option in your Constellation config file.
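For example, a non-interactive run that writes the available options to a file could look like this (the file name is a placeholder):
```bash
constellation upgrade plan --file upgrade-options.yaml
```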
:::caution
The `constellation upgrade plan` command only works for official Edgeless release images.
If your cluster is using a custom image or a debug image, the Constellation CLI will fail to find compatible images.
However, you may still use the `upgrade execute` command by manually selecting a compatible image and setting it in your config file.
:::
## Execute the upgrade
Once your config file has been prepared with the new image and measurements, use the `upgrade execute` command to initiate the upgrade.
```bash
constellation upgrade execute
```
After the command has finished, the cluster will automatically replace old nodes using a rolling update strategy to ensure no downtime of the control or data plane.

View File

@ -0,0 +1,78 @@
# Verify your cluster
Constellation's [attestation feature](../architecture/attestation.md) allows you, or a third party, to verify the confidentiality and integrity of your Constellation.
## Fetch measurements
To verify the integrity of Constellation, you need trusted measurements to verify against. For each released image, there are signed measurements, which you can download using the CLI:
```bash
constellation config fetch-measurements
```
This command performs the following steps:
1. Look up the signed measurements for the configured image.
2. Download the measurements.
3. Verify the signature.
4. Write the measurements into the configuration file.
### Custom arguments
To comply with regulations and policies, you may need to generate the measurements yourself. You can either write these measurements to the configuration file manually or download them from a custom location using this command:
```bash
constellation config fetch-measurements -u http://my.storage/measurements.yaml -s http://my.storage/measurements.yaml.sig -p "$(cat cosign.pub)"
```
For more details consult the [CLI reference](../reference/cli.md).
## The *verify* command
Once measurements are configured, this command verifies an attestation statement issued by a Constellation, thereby verifying the integrity and confidentiality of the whole cluster.
The following command performs attestation on the Constellation in your current workspace:
<tabs>
<tabItem value="azure" label="Azure" default>
```bash
constellation verify azure
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
```bash
constellation verify gcp
```
</tabItem>
</tabs>
The command makes sure the value passed to `--cluster-id` matches the *clusterID* presented in the attestation statement.
This allows you to verify that you're connecting to a specific Constellation instance.
Additionally, the confidential computing capabilities, as well as the VM image, are verified to match the expected configurations.
### Custom arguments
You can provide additional arguments for `verify` to verify any Constellation you have network access to. This requires you to provide:
* The IP address of a running Constellation's [VerificationService](../architecture/components.md#verification-service). The *VerificationService* is exposed via a NodePort service using the external IP address of your cluster. Run `kubectl get nodes -o wide` and look for `EXTERNAL-IP`.
* The Constellation's *clusterID*. See [cluster identity](../architecture/keys.md#cluster-identity) for more details.
<tabs>
<tabItem value="azure" label="Azure" default>
```bash
constellation verify azure -e 192.0.2.1 --cluster-id Q29uc3RlbGxhdGlvbkRvY3VtZW50YXRpb25TZWNyZXQ=
```
</tabItem>
<tabItem value="gcp" label="GCP" default>
```bash
constellation verify gcp -e 192.0.2.1 --cluster-id Q29uc3RlbGxhdGlvbkRvY3VtZW50YXRpb25TZWNyZXQ=
```
</tabItem>
</tabs>

189
docs/docusaurus.config.js Normal file
View File

@ -0,0 +1,189 @@
// @ts-check
// Note: type annotations allow type checking and IDEs autocompletion
const lightCodeTheme = require('prism-react-renderer/themes/github');
const darkCodeTheme = require('prism-react-renderer/themes/dracula');
/** @type {import('@docusaurus/types').Config} */
async function createConfig() {
const mdxMermaid = await import('mdx-mermaid')
return {
title: 'Constellation',
tagline: 'Constellation: The world\'s most secure Kubernetes',
url: 'https://constellation-docs.netlify.app',
baseUrl: '/constellation/',
onBrokenLinks: 'throw',
onBrokenMarkdownLinks: 'warn',
favicon: 'img/favicon.ico',
// GitHub pages deployment config.
// If you aren't using GitHub pages, you don't need these.
organizationName: 'Edgeless Systems', // Usually your GitHub org/user name.
projectName: 'Constellation', // Usually your repo name.
// Even if you don't use internalization, you can use this field to set useful
// metadata like html lang. For example, if your site is Chinese, you may want
// to replace "en" with "zh-Hans".
i18n: {
defaultLocale: 'en',
locales: ['en'],
},
presets: [
[
'classic',
/** @type {import('@docusaurus/preset-classic').Options} */
({
docs: {
remarkPlugins: [[mdxMermaid.default, { mermaid: {
theme: 'base',
themeVariables: {
// general
'fontFamily': '"Open Sans", sans-serif',
'primaryColor': '#90FF99', // edgeless green
'primaryTextColor': '#000000',
'secondaryColor': '#A5A5A5', // edgeless grey
'secondaryTextColor': '#000000',
'tertiaryColor': '#E7E6E6', // edgeless light grey
'tertiaryTextColor': '#000000',
// flowchart
'clusterBorder': '#A5A5A5',
'clusterBkg': '#ffffff',
'edgeLabelBackground': '#ffffff',
// sequence diagram
'activationBorderColor': '#000000',
'actorBorder': '#A5A5A5',
'actorFontFamily': '"Open Sans", sans-serif', // not released by mermaid yet
'noteBkgColor': '#8B04DD', // edgeless purple
'noteTextColor': '#ffffff',
},
startOnLoad: true
}}]],
sidebarPath: require.resolve('./sidebars.js'),
// sidebarPath: 'sidebars.js',
// Please change this to your repo.
// Remove this to remove the "edit this page" links.
editUrl: ({ locale, docPath }) => {
return `https://github.com/edgelesssys/constellation-docs/edit/ref/docusarus/docs/${docPath}`;
},
routeBasePath: "/"
},
blog: false,
theme: {
customCss: require.resolve('./src/css/custom.css'),
},
}),
],
],
themeConfig:
/** @type {import('@docusaurus/preset-classic').ThemeConfig} */
({
navbar: {
hideOnScroll: false,
logo: {
alt: 'Constellation Logo',
src: 'img/logos/constellation_oneline.svg',
},
items: [
// left
// Running docs only mode no need for a link here
// {
// type: 'doc',
// docId: 'intro',
// position: 'left',
// label: 'Docs',
// },
// right
{
type: 'docsVersionDropdown',
position: 'right',
},
{
href: 'https://github.com/edgelesssys/constellation',
position: 'right',
className: 'header-github-link',
},
],
},
colorMode: {
defaultMode: 'light',
disableSwitch: true,
respectPrefersColorScheme: false,
},
announcementBar: {
content:
'⭐️ If you like Constellation, give it a star on <a target="_blank" rel="noopener noreferrer" href="https://github.com/edgelesssys/constellation">GitHub</a>! ⭐️',
},
footer: {
style: 'dark',
links: [
{
title: 'Learn',
items: [
{
label: 'Confidential Kubernetes',
to: '/overview/confidential-kubernetes',
},
{
label: 'Install',
to: '/getting-started/install',
},
{
label: 'First steps',
to: '/getting-started/first-steps',
},
],
},
{
title: 'Community',
items: [
{
label: 'GitHub',
href: 'https://github.com/edgelesssys/constellation',
},
{
label: 'Discord',
href: 'https://discord.gg/rH8QTH56JN',
},
{
label: 'Newsletter',
href: 'https://www.edgeless.systems/#newsletter-signup'
},
],
},
{
title: 'Social',
items: [
{
label: 'Blog',
to: 'https://blog.edgeless.systems/',
},
{
label: 'Twitter',
href: 'https://twitter.com/EdgelessSystems',
},
{
label: 'LinkedIn',
href: 'https://www.linkedin.com/company/edgeless-systems/',
},
{
label: 'Youtube',
href: 'https://www.youtube.com/channel/UCOOInN0sCv6icUesisYIDeA',
},
],
},
],
copyright: `Copyright © ${new Date().getFullYear()} Edgeless Systems. Built with Docusaurus.`,
},
prism: {
theme: lightCodeTheme,
darkTheme: darkCodeTheme,
},
}),
}
};
module.exports = createConfig;

View File

@ -1,45 +0,0 @@
# Local image testing with QEMU
> Note: This document describes a manual method of deploying VMs with QEMU. Look at [terraform/libvirt](/terraform/libvirt) for an automated alternative.
To build our images, we use the [CoreOS-Assembler (COSA)](https://github.com/edgelesssys/constellation-coreos-assembler).
COSA comes with support for testing images locally. After building your image with `make coreos`, you can run it with `make run`.
Our fork adds extra utility by providing scripts to run an image in QEMU with a vTPM attached, or to boot multiple VMs to simulate your own local Constellation cluster.
Begin by starting a COSA Docker container:
```shell
docker run -it --rm \
--entrypoint bash \
--device /dev/kvm \
--device /dev/net/tun \
--privileged \
-v </path/to/constellation-image.qcow2>:/constellation-image.qcow2 \
ghcr.io/edgelesssys/constellation-coreos-assembler
```
## Run a single image
Using the `run-image` script, you can launch a single VM with an attached vTPM.
The script expects an image and a name to run. Optionally, you may also provide the path to an existing state disk; if none is provided, a new disk will be created.
Additionally, you may configure QEMU CPU (qemu `-smp` flag, default=2) and memory (qemu `-m` flag, default=2G) settings, as well as the size of the created state disk in GB (default 2), using environment variables.
To customize CPU settings use `CONSTELL_CPU=[[cpus=]n][,maxcpus=maxcpus][,sockets=sockets][,dies=dies][,cores=cores][,threads=threads]` \
To customize memory settings use `CONSTELL_MEM=[size=]megs[,slots=n,maxmem=size]` \
To customize state disk size use `CONSTELL_STATE_SIZE=n`
Use the following command to boot a VM with 2 CPUs, 2 GB of RAM, and a 4 GB state disk, using the image in `/constellation/coreos.qcow2`.
Logs and state files will be written to `/tmp/test-vm-01`.
```shell
sudo CONSTELL_CPU=2 CONSTELL_MEM=2G CONSTELL_STATE_SIZE=4 run-image /constellation/coreos.qcow2 test-vm-01
```
The command creates a network bridge and adds the VM to it, so the host can communicate with the guest VM and the VM can access the internet.
Press <kbd>Ctrl</kbd>+<kbd>A</kbd> <kbd>X</kbd> to stop the VM; this removes the VM from the bridge but keeps the bridge alive.
Run the following to remove the bridge:
```shell
sudo delete_network_bridge br-constell-0
```

45
docs/package.json Normal file
View File

@ -0,0 +1,45 @@
{
"name": "constellation-docs",
"version": "1.15.0",
"private": true,
"scripts": {
"docusaurus": "docusaurus",
"start": "docusaurus start",
"build": "docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",
"serve": "docusaurus serve",
"write-translations": "docusaurus write-translations",
"write-heading-ids": "docusaurus write-heading-ids"
},
"dependencies": {
"@docusaurus/core": "2.0.1",
"@docusaurus/preset-classic": "2.0.1",
"@mdx-js/react": "^1.6.22",
"clsx": "^1.2.1",
"mdx-mermaid": "^1.3.2",
"mermaid": "^9.1.6",
"prism-react-renderer": "^1.3.5",
"react": "^17.0.2",
"react-dom": "^17.0.2"
},
"devDependencies": {
"@docusaurus/module-type-aliases": "2.0.1"
},
"browserslist": {
"production": [
">0.5%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"engines": {
"node": ">=16.14"
}
}

234
docs/sidebars.js Normal file
View File

@ -0,0 +1,234 @@
/**
* Creating a sidebar enables you to:
- create an ordered group of docs
- render a sidebar for each doc of that group
- provide next/previous navigation
The sidebars can be generated from the filesystem, or explicitly defined here.
Create as many sidebars as you want.
*/
// @ts-check
/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
const sidebars = {
// By default, Docusaurus generates a sidebar from the docs folder structure
// tutorialSidebar: [{type: 'autogenerated', dirName: '.'}],
// But you can create a sidebar manually
docs: [
{
type: 'doc',
label: 'Welcome to Constellation',
id: 'intro'
},
{
type: 'category',
label: 'Overview',
link: {
type: 'generated-index',
},
items: [
{
type: 'doc',
label: 'Confidential Kubernetes',
id: 'overview/confidential-kubernetes',
},
{
type: 'doc',
label: 'Security benefits',
id: 'overview/security-benefits',
},
{
type: 'doc',
label: 'Product features',
id: 'overview/product',
},
{
type: 'doc',
label: 'Performance',
id: 'overview/benchmarks',
},
{
type: 'doc',
label: 'License',
id: 'overview/license',
},
]
},
{
type: 'category',
label: 'Getting started',
link: {
type: 'generated-index',
},
items: [
{
type: 'doc',
label: 'Install the CLI',
id: 'getting-started/install',
},
{
type: 'doc',
label: 'First steps',
id: 'getting-started/first-steps',
},
{
type: 'category',
label: 'Examples',
link: {
type: 'doc',
id: 'getting-started/examples',
},
items: [
{
type: 'doc',
label: 'Emojivoto',
id: 'getting-started/examples/emojivoto'
},
{
type: 'doc',
label: 'Online Boutique',
id: 'getting-started/examples/online-boutique'
},
{
type: 'doc',
label: 'Horizontal Pod Autoscaling',
id: 'getting-started/examples/horizontal-scaling'
},
]
},
],
},
{
type: 'category',
label: 'Workflows',
link: {
type: 'generated-index',
},
items: [
{
type: 'doc',
label: 'Create your cluster',
id: 'workflows/create',
},
{
type: 'doc',
label: 'Verify your cluster',
id: 'workflows/verify',
},
{
type: 'doc',
label: 'Scale your cluster',
id: 'workflows/scale',
},
{
type: 'doc',
label: 'Upgrade your cluster',
id: 'workflows/upgrade',
},
{
type: 'doc',
label: 'Use persistent storage',
id: 'workflows/storage',
},
{
type: 'doc',
label: 'Managing SSH keys',
id: 'workflows/ssh',
},
{
type: 'doc',
label: 'Troubleshooting',
id: 'workflows/troubleshooting',
},
{
type: 'doc',
label: 'Terminate your cluster',
id: 'workflows/terminate',
},
{
type: 'doc',
label: 'Recover your cluster',
id: 'workflows/recovery',
},
],
},
{
type: 'category',
label: 'Architecture',
link: {
type: 'generated-index',
},
items: [
{
type: 'doc',
label: 'Overview',
id: 'architecture/overview',
},
{
type: 'doc',
label: 'Components',
id: 'architecture/components',
},
{
type: 'doc',
label: 'Attestation',
id: 'architecture/attestation',
},
{
type: 'doc',
label: 'Cluster orchestration',
id: 'architecture/orchestration',
},
{
type: 'doc',
label: 'Versions and support',
id: 'architecture/versions',
},
{
type: 'doc',
label: 'Images',
id: 'architecture/images',
},
{
type: 'doc',
label: 'Keys and cryptographic primitives',
id: 'architecture/keys',
},
{
type: 'doc',
label: 'Encrypted persistent storage',
id: 'architecture/encrypted-storage',
},
{
type: 'doc',
label: 'Networking',
id: 'architecture/networking',
},
],
},
{
type: 'category',
label: 'Reference',
link: {
type: 'generated-index',
},
items: [
{
type: 'doc',
label: 'CLI',
id: 'reference/cli',
},
{
type: 'doc',
label: 'Configuration file',
id: 'reference/config',
},
],
},
],
};
module.exports = sidebars;

53
docs/src/css/custom.css Normal file
View File

@ -0,0 +1,53 @@
/**
* Any CSS included here will be global. The classic template
* bundles Infima by default. Infima is a CSS framework designed to
* work well for content-centric websites.
*/
/* You can override the default Infima variables here. */
:root {
--ifm-color-primary: #8B04DD;
--ifm-color-primary-dark: #58089E;
--ifm-color-primary-darker: #330663;
--ifm-color-primary-darkest: #1C033C;
--ifm-color-primary-light: #8B04DD;
--ifm-color-primary-lighter: #B873F4;
--ifm-color-primary-lightest: #E3D2FF;
--ifm-code-font-size: 95%;
/* --ifm-footer-background-color: black;
--ifm-footer-link-color: white;
--ifm-footer-title-color: white;*/
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
}
/* For readability concerns, you should choose a lighter palette in dark mode. */
[data-theme='dark'] {
--ifm-color-primary: #90FF99;
--ifm-color-primary-dark: #5BC95B;
--ifm-color-primary-darker: #238723;
--ifm-color-primary-darkest: #124C12;
--ifm-color-primary-light: #90FF99;
--ifm-color-primary-lighter: #D2FFD3;
--ifm-color-primary-lightest: #D2FFD3;
/* --ifm-footer-background-color: black;
--ifm-footer-link-color: white;
--ifm-footer-title-color: white;*/
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
}
/* GitHub */
.header-github-link:hover {
opacity: 0.6;
}
.header-github-link:before {
content: '';
width: 24px;
height: 24px;
display: flex;
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E") no-repeat;
}
html[data-theme='dark'] .header-github-link:before {
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath fill='white' d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E") no-repeat;
}

View File

@ -0,0 +1,14 @@
import React from 'react';
// Import the original mapper
import MDXComponents from '@theme-original/MDXComponents';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
export default {
// Re-use the default mapping
...MDXComponents,
// Map the "highlight" tag to our <Highlight /> component!
// `Highlight` will receive all props that were passed to `highlight` in MDX
tabs: Tabs,
tabItem: TabItem,
};

0
docs/static/.nojekyll vendored Normal file
View File

View File


View File


BIN
docs/static/img/favicon.ico vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 15 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 65 KiB

View File

@ -0,0 +1,4 @@
<svg width="449" height="449" viewBox="0 0 449 449" fill="none" xmlns="http://www.w3.org/2000/svg">
<circle cx="224.5" cy="224.5" r="224.5" fill="#90FF99"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M449 224.5C449 348.488 348.488 449 224.5 449C100.512 449 0 348.488 0 224.5C0 100.512 100.512 0 224.5 0C267.717 0 308.081 12.2114 342.332 33.3718C333.476 42.2375 328 54.4792 328 68C328 95.062 349.938 117 377 117C390.975 117 403.584 111.15 412.51 101.764C435.585 137.039 449 179.203 449 224.5ZM426 68C426 81.087 420.87 92.9757 412.51 101.764C394.44 74.1401 370.446 50.7411 342.332 33.3718C351.201 24.4928 363.459 19 377 19C404.062 19 426 40.938 426 68ZM258.039 319.019C295.741 305.669 319.963 271.28 322.187 233.715L381.607 212.673C388.575 282.686 347.358 351.042 278.067 375.578C196.591 404.429 107.155 361.768 78.3037 280.293C49.4529 198.818 92.1134 109.381 173.589 80.53C243.217 55.8743 318.66 83.4456 357.135 142.837L297.877 163.82C272.561 135.444 231.68 123.61 193.616 137.089C143.378 154.879 117.073 210.027 134.862 260.265C152.652 310.504 207.8 336.809 258.039 319.019Z" fill="black"/>
</svg>

After

Width:  |  Height:  |  Size: 1.1 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 16 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 14 KiB

View File

@ -0,0 +1,12 @@
extends: existence
message: "Don't use plurals in parentheses such as in '%s'."
link: 'https://developers.google.com/style/plurals-parentheses'
level: error
nonword: true
action:
name: edit
params:
- remove
- '(s)'
tokens:
- '\b\w+\(s\)'

View File

@ -0,0 +1,7 @@
extends: existence
message: "Don't use periods with acronyms or initialisms such as '%s'."
link: 'https://developers.google.com/style/abbreviations'
level: error
nonword: true
tokens:
- '\b(?:[A-Z]\.){3,}'

View File

@ -0,0 +1,11 @@
extends: existence
message: "Don't use internet slang abbreviations such as '%s'."
link: 'https://developers.google.com/style/abbreviations'
ignorecase: true
level: error
tokens:
- 'tl;dr'
- ymmv
- rtfm
- imo
- fwiw

View File

@ -0,0 +1,8 @@
extends: existence
message: "In general, use American spelling instead of '%s'."
link: 'https://developers.google.com/style/spelling'
ignorecase: true
level: warning
tokens:
- '(?:\w+)nised?'
- '(?:\w+)logue'

View File

@ -0,0 +1,8 @@
extends: existence
message: "Put a nonbreaking space between the number and the unit in '%s'."
link: 'https://developers.google.com/style/units-of-measure'
nonword: true
level: error
tokens:
- \d+(?:B|kB|MB|GB|TB)
- \d+(?:ns|ms|s|min|h|d)

View File

@ -0,0 +1,9 @@
extends: existence
message: Use 'AM' or 'PM' (preceded by a space).
link: https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/date-time-terms
level: error
nonword: true
tokens:
- '\d{1,2}[AP]M'
- '\d{1,2} ?[ap]m'
- '\d{1,2} ?[aApP]\.[mM]\.'

View File

@ -0,0 +1,25 @@
extends: existence
message: "Don't use language (such as '%s') that defines people by their disability."
link: https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/accessibility-terms
level: suggestion
ignorecase: true
tokens:
- a victim of
- able-bodied
- affected by
- an epileptic
- crippled
- disabled
- dumb
- handicapped
- handicaps
- healthy
- lame
- maimed
- missing a limb
- mute
- normal
- sight-impaired
- stricken with
- suffers from
- vision-impaired

View File

@ -0,0 +1,64 @@
extends: conditional
message: "'%s' has no definition."
link: https://docs.microsoft.com/en-us/style-guide/acronyms
level: suggestion
ignorecase: false
# Ensures that the existence of 'first' implies the existence of 'second'.
first: '\b([A-Z]{3,5})\b'
second: '(?:\b[A-Z][a-z]+ )+\(([A-Z]{3,5})\)'
# ... with the exception of these:
exceptions:
- API
- ASP
- CLI
- CPU
- CSS
- CSV
- DEBUG
- DOM
- DPI
- FAQ
- GCC
- GDB
- GET
- GPU
- GTK
- GUI
- HTML
- HTTP
- HTTPS
- IDE
- JAR
- JSON
- JSX
- LESS
- LLDB
- NET
- NOTE
- NVDA
- OSS
- PATH
- PDF
- PHP
- POST
- RAM
- REPL
- RSA
- SCM
- SCSS
- SDK
- SQL
- SSH
- SSL
- SVG
- TBD
- TCP
- TODO
- URI
- URL
- USB
- UTF
- XML
- XSS
- YAML
- ZIP

View File

@ -0,0 +1,270 @@
extends: existence
message: "Consider removing '%s'."
link: https://docs.microsoft.com/en-us/style-guide/word-choice/use-simple-words-concise-sentences
ignorecase: true
level: warning
action:
name: remove
tokens:
- abnormally
- absentmindedly
- accidentally
- adventurously
- anxiously
- arrogantly
- awkwardly
- bashfully
- beautifully
- bitterly
- bleakly
- blindly
- blissfully
- boastfully
- boldly
- bravely
- briefly
- brightly
- briskly
- broadly
- busily
- calmly
- carefully
- carelessly
- cautiously
- cheerfully
- cleverly
- closely
- coaxingly
- colorfully
- continually
- coolly
- courageously
- crossly
- cruelly
- curiously
- daintily
- dearly
- deceivingly
- deeply
- defiantly
- deliberately
- delightfully
- diligently
- dimly
- doubtfully
- dreamily
- easily
- elegantly
- energetically
- enormously
- enthusiastically
- excitedly
- extremely
- fairly
- faithfully
- famously
- ferociously
- fervently
- fiercely
- fondly
- foolishly
- fortunately
- frankly
- frantically
- freely
- frenetically
- frightfully
- furiously
- generally
- generously
- gently
- gladly
- gleefully
- gracefully
- gratefully
- greatly
- greedily
- happily
- hastily
- healthily
- heavily
- helplessly
- honestly
- hopelessly
- hungrily
- innocently
- inquisitively
- intensely
- intently
- interestingly
- inwardly
- irritably
- jaggedly
- jealously
- jovially
- joyfully
- joyously
- jubilantly
- judgmentally
- justly
- keenly
- kiddingly
- kindheartedly
- knavishly
- knowingly
- knowledgeably
- lazily
- lightly
- limply
- lively
- loftily
- longingly
- loosely
- loudly
- lovingly
- loyally
- madly
- majestically
- meaningfully
- mechanically
- merrily
- miserably
- mockingly
- mortally
- mysteriously
- naturally
- nearly
- neatly
- nervously
- nicely
- noisily
- obediently
- obnoxiously
- oddly
- offensively
- optimistically
- overconfidently
- painfully
- partially
- patiently
- perfectly
- playfully
- politely
- poorly
- positively
- potentially
- powerfully
- promptly
- properly
- punctually
- quaintly
- queasily
- queerly
- questionably
- quickly
- quietly
- quirkily
- quizzically
- randomly
- rapidly
- rarely
- readily
- really
- reassuringly
- recklessly
- regularly
- reluctantly
- repeatedly
- reproachfully
- restfully
- righteously
- rightfully
- rigidly
- roughly
- rudely
- safely
- scarcely
- scarily
- searchingly
- sedately
- seemingly
- selfishly
- separately
- seriously
- shakily
- sharply
- sheepishly
- shrilly
- shyly
- silently
- sleepily
- slowly
- smoothly
- softly
- solemnly
- solidly
- speedily
- stealthily
- sternly
- strictly
- suddenly
- supposedly
- surprisingly
- suspiciously
- sweetly
- swiftly
- sympathetically
- tenderly
- tensely
- terribly
- thankfully
- thoroughly
- thoughtfully
- tightly
- tremendously
- triumphantly
- truthfully
- ultimately
- unabashedly
- unaccountably
- unbearably
- unethically
- unexpectedly
- unfortunately
- unimpressively
- unnaturally
- unnecessarily
- urgently
- usefully
- uselessly
- utterly
- vacantly
- vaguely
- vainly
- valiantly
- vastly
- verbally
- very
- viciously
- victoriously
- violently
- vivaciously
- voluntarily
- warmly
- weakly
- wearily
- wetly
- wholly
- wildly
- willfully
- wisely
- woefully
- wonderfully
- worriedly
- yawningly
- yearningly
- yieldingly
- youthfully
- zealously
- zestfully
- zestily

View File

@ -0,0 +1,11 @@
extends: existence
message: "In general, don't hyphenate '%s'."
link: https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/a/auto
ignorecase: true
level: error
action:
name: convert
params:
- simple
tokens:
- 'auto-\w+'

View File

@ -0,0 +1,14 @@
extends: existence
message: "Don't use '%s'. See the A-Z word list for details."
# See the A-Z word list
link: https://docs.microsoft.com/en-us/style-guide
ignorecase: true
level: error
tokens:
- abortion
- and so on
- app(?:lication)?s? (?:developer|program)
- app(?:lication)? file
- backbone
- backend
- contiguous selection

View File

@ -0,0 +1,120 @@
extends: substitution
message: "Consider using '%s' instead of '%s'."
link: https://docs.microsoft.com/en-us/style-guide/word-choice/use-simple-words-concise-sentences
ignorecase: true
level: suggestion
action:
name: replace
swap:
"approximate(?:ly)?": about
abundance: plenty
accelerate: speed up
accentuate: stress
accompany: go with
accomplish: carry out|do
accorded: given
accordingly: so
accrue: add
accurate: right|exact
acquiesce: agree
acquire: get|buy
additional: more|extra
address: discuss
addressees: you
adjacent to: next to
adjustment: change
admissible: allowed
advantageous: helpful
advise: tell
aggregate: total
aircraft: plane
alleviate: ease
allocate: assign|divide
alternatively: or
alternatives: choices|options
ameliorate: improve
amend: change
anticipate: expect
apparent: clear|plain
ascertain: discover|find out
assistance: help
attain: meet
attempt: try
authorize: allow
belated: late
bestow: give
cease: stop|end
collaborate: work together
commence: begin
compensate: pay
component: part
comprise: form|include
concept: idea
concerning: about
confer: give|award
consequently: so
consolidate: merge
constitutes: forms
contains: has
convene: meet
demonstrate: show|prove
depart: leave
designate: choose
desire: want|wish
determine: decide|find
detrimental: bad|harmful
disclose: share|tell
discontinue: stop
disseminate: send|give
eliminate: end
elucidate: explain
employ: use
enclosed: inside|included
encounter: meet
endeavor: try
enumerate: count
equitable: fair
equivalent: equal
exclusively: only
expedite: hurry
facilitate: ease
females: women
finalize: complete|finish
frequently: often
identical: same
incorrect: wrong
indication: sign
initiate: start|begin
itemized: listed
jeopardize: risk
liaise: work with|partner with
maintain: keep|support
methodology: method
modify: change
monitor: check|watch
multiple: many
necessitate: cause
notify: tell
numerous: many
objective: aim|goal
obligate: bind|compel
optimum: best|most
permit: let
portion: part
possess: own
previous: earlier
previously: before
prioritize: rank
procure: buy
provide: give|offer
purchase: buy
relocate: move
solicit: request
state-of-the-art: latest
subsequent: later|next
substantial: large
sufficient: enough
terminate: end
transmit: send
utilization: use
utilize: use
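
This file is a Vale `substitution` rule: each key under `swap` is a case-insensitive pattern and its value is the plainer wording Vale suggests, spliced into the `%s` placeholders of the message at `suggestion` level. Adding a pair keeps the same shape; a minimal sketch with a hypothetical entry that isn't part of the committed file:

```yaml
# Hypothetical extra entry, shown with its parent key for context
# (illustration only, not part of the committed rule).
swap:
  leverage: use
```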

View File

@ -0,0 +1,50 @@
extends: substitution
message: "Use '%s' instead of '%s'."
link: https://docs.microsoft.com/en-us/style-guide/word-choice/use-contractions
level: error
ignorecase: true
action:
name: replace
swap:
are not: aren't
cannot: can't
could not: couldn't
did not: didn't
do not: don't
does not: doesn't
has not: hasn't
have not: haven't
how is: how's
is not: isn't
'it is(?!\.)': it's
'it''s(?=\.)': it is
should not: shouldn't
'that is(?!\.)': that's
'that''s(?=\.)': that is
'they are(?!\.)': they're
'they''re(?=\.)': they are
was not: wasn't
'we are(?!\.)': we're
'we''re(?=\.)': we are
'we have(?!\.)': we've
'we''ve(?=\.)': we have
were not: weren't
'what is(?!\.)': what's
'what''s(?=\.)': what is
'when is(?!\.)': when's
'when''s(?=\.)': when is
'where is(?!\.)': where's
'where''s(?=\.)': where is
will not: won't
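
Note the paired entries in this contractions rule: a pattern such as 'it is(?!\.)' only contracts the phrase when it isn't sentence-final, while the mirror pattern 'it''s(?=\.)' expands a contraction that would otherwise end a sentence; the doubled single quote is plain YAML escaping for an apostrophe inside a single-quoted key. A new verb would follow the same pairing; a minimal sketch with a hypothetical entry that isn't in the committed file:

```yaml
# Hypothetical paired entry, shown under its parent key for context
# (illustration only, not part of the committed rule).
swap:
  'who is(?!\.)': who's
  'who''s(?=\.)': who is
```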

View File

@ -0,0 +1,13 @@
extends: existence
message: "Remove the spaces around '%s'."
link: https://docs.microsoft.com/en-us/style-guide/punctuation/dashes-hyphens/emes
ignorecase: true
nonword: true
level: error
action:
name: edit
params:
- remove
- ' '
tokens:
- '[—–]\s|\s[—–]'

View File

@ -0,0 +1,8 @@
extends: existence
message: Use 'July 31, 2016' format, not '%s'.
link: https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/date-time-terms
ignorecase: true
level: error
nonword: true
tokens:
- '\d{1,2} (?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?) \d{4}'

View File

@ -0,0 +1,40 @@
extends: existence
message: "Don't use ordinal numbers for dates."
link: https://docs.microsoft.com/en-us/style-guide/numbers#numbers-in-dates
level: error
nonword: true
ignorecase: true
raw:
- \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b\s*
tokens:
- first
- second
- third
- fourth
- fifth
- sixth
- seventh
- eighth
- ninth
- tenth
- eleventh
- twelfth
- thirteenth
- fourteenth
- fifteenth
- sixteenth
- seventeenth
- eighteenth
- nineteenth
- twentieth
- twenty-first
- twenty-second
- twenty-third
- twenty-fourth
- twenty-fifth
- twenty-sixth
- twenty-seventh
- twenty-eighth
- twenty-ninth
- thirtieth
- thirty-first

View File

@ -0,0 +1,8 @@
extends: existence
message: "Always spell out the name of the month."
link: https://docs.microsoft.com/en-us/style-guide/numbers#numbers-in-dates
ignorecase: true
level: error
nonword: true
tokens:
- '\b\d{1,2}/\d{1,2}/(?:\d{4}|\d{2})\b'
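
Taken together, the three date rules above enforce the 'July 31, 2016' convention from different angles: the first flags day-before-month dates such as '31 July 2016', the second flags ordinals such as 'June first', and the rule directly above flags all-numeric dates such as '7/31/2016', which read differently across locales. Covering dotted numeric dates would only need one more token; a minimal sketch, where the second pattern is an assumption and not part of the committed file:

```yaml
# Hypothetical extension: the second token is an assumption for illustration
# (it would also flag dotted dates such as 31.07.2016); it is not part of the
# committed rule.
tokens:
  - '\b\d{1,2}/\d{1,2}/(?:\d{4}|\d{2})\b'
  - '\b\d{1,2}\.\d{1,2}\.(?:\d{4}|\d{2})\b'
```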

View File

@ -0,0 +1,9 @@
extends: existence
message: "In general, don't use an ellipsis."
link: https://docs.microsoft.com/en-us/style-guide/punctuation/ellipses
nonword: true
level: warning
action:
name: remove
tokens:
- '\.\.\.'

View File

@ -0,0 +1,16 @@
extends: existence
message: "Use first person (such as '%s') sparingly."
link: https://docs.microsoft.com/en-us/style-guide/grammar/person
ignorecase: true
level: warning
nonword: true
tokens:
- (?:^|\s)I\s
- (?:^|\s)I,\s
- \bI'd\b
- \bI'll\b
- \bI'm\b
- \bI've\b
- \bme\b
- \bmy\b
- \bmine\b

View File

@ -0,0 +1,12 @@
extends: substitution
message: "Use '%s' instead of '%s'."
link: https://docs.microsoft.com/en-us/style-guide/word-choice/use-us-spelling-avoid-non-english-words
ignorecase: true
level: error
nonword: true
action:
name: replace
swap:
'\b(?:eg|e\.g\.)[\s,]': for example
'\b(?:ie|i\.e\.)[\s,]': that is

View File

@ -0,0 +1,8 @@
extends: existence
message: "Don't use '%s'."
link: https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/grammar/nouns-pronouns.md#pronouns-and-gender
level: error
ignorecase: true
tokens:
- he/she
- s/he

View File

@ -0,0 +1,44 @@
extends: substitution
message: "Consider using '%s' instead of '%s'."
ignorecase: true
level: error
swap:
(?:alumna|alumnus): graduate
(?:alumnae|alumni): graduates
air(?:m[ae]n|wom[ae]n): pilot(s)
anchor(?:m[ae]n|wom[ae]n): anchor(s)
authoress: author
camera(?:m[ae]n|wom[ae]n): camera operator(s)
chair(?:m[ae]n|wom[ae]n): chair(s)
congress(?:m[ae]n|wom[ae]n): member(s) of congress
door(?:m[ae]n|wom[ae]n): concierge(s)
draft(?:m[ae]n|wom[ae]n): drafter(s)
fire(?:m[ae]n|wom[ae]n): firefighter(s)
fisher(?:m[ae]n|wom[ae]n): fisher(s)
fresh(?:m[ae]n|wom[ae]n): first-year student(s)
garbage(?:m[ae]n|wom[ae]n): waste collector(s)
lady lawyer: lawyer
ladylike: courteous
landlord: building manager
mail(?:m[ae]n|wom[ae]n): mail carriers
man and wife: husband and wife
man enough: strong enough
mankind: humankind
manmade: manufactured
manpower: personnel
men and girls: men and women
middle(?:m[ae]n|wom[ae]n): intermediary
news(?:m[ae]n|wom[ae]n): journalist(s)
ombuds(?:man|woman): ombuds
oneupmanship: upstaging
poetess: poet
police(?:m[ae]n|wom[ae]n): police officer(s)
repair(?:m[ae]n|wom[ae]n): technician(s)
sales(?:m[ae]n|wom[ae]n): salesperson or sales people
service(?:m[ae]n|wom[ae]n): soldier(s)
steward(?:ess)?: flight attendant
tribes(?:m[ae]n|wom[ae]n): tribe member(s)
waitress: waiter
woman doctor: doctor
woman scientist[s]?: scientist(s)
work(?:m[ae]n|wom[ae]n): worker(s)

View File

@ -0,0 +1,11 @@
extends: existence
message: "For a general audience, use 'address' rather than 'URL'."
link: https://docs.microsoft.com/en-us/style-guide/urls-web-addresses
level: warning
action:
name: replace
params:
- URL
- address
tokens:
- URL

View File

@ -0,0 +1,7 @@
extends: existence
message: "Avoid using acronyms in a title or heading."
link: https://docs.microsoft.com/en-us/style-guide/acronyms#be-careful-with-acronyms-in-titles-and-headings
level: warning
scope: heading
tokens:
- '[A-Z]{2,4}'
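
Here `scope: heading` restricts the check to headings, so a run of two to four capital letters in a title draws a warning to spell the term out instead. If only top-level titles should be checked, Vale's scoping can be narrowed further; a minimal sketch, assuming an h1-only restriction is wanted (it isn't part of the committed rule):

```yaml
# Hypothetical variant limited to top-level headings
# (illustration only, not the committed rule).
extends: existence
message: "Avoid using acronyms in a title or heading."
level: warning
scope: heading.h1
tokens:
  - '[A-Z]{2,4}'
```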

Some files were not shown because too many files have changed in this diff.