mirror of https://github.com/edgelesssys/constellation.git, synced 2025-05-17 21:50:24 -04:00
docs: add release v2.6.0
This commit is contained in:
parent 8c87bba755
commit 02694c0648

58 changed files with 7275 additions and 0 deletions
13 docs/versioned_docs/version-2.6/workflows/cert-manager.md Normal file
@@ -0,0 +1,13 @@
# Install cert-manager

:::caution
If you want to use cert-manager with Constellation, pay attention to the following to avoid potential pitfalls.
:::

Constellation ships with cert-manager preinstalled.
The default installation is part of the `kube-system` namespace, as are all other Constellation-managed microservices.
You are free to install more instances of cert-manager into other namespaces.
However, be aware that any new installation needs to use the same version as the one installed with Constellation, or rely on the same CRD versions.
Also remember to set the `installCRDs` value to `false` when installing new cert-manager instances.
Two cert-manager installations that depend on different versions of the installed CRDs will cause problems:
CRDs are cluster-wide resources, and each cert-manager release depends on specific versions of those CRDs.
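As a sketch, an additional instance could be installed with Helm along these lines, assuming the standard `jetstack` chart repository and a placeholder version `vX.Y.Z` that matches the preinstalled one:

```bash
# Add the upstream chart repository and install without CRDs
helm repo add jetstack https://charts.jetstack.io
helm install my-cert-manager jetstack/cert-manager \
  --namespace my-namespace --create-namespace \
  --version vX.Y.Z \
  --set installCRDs=false
```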
258 docs/versioned_docs/version-2.6/workflows/config.md Normal file
@@ -0,0 +1,258 @@
# Configure your cluster

Before you can create your cluster, you need to configure the identity and access management (IAM) for your cloud service provider (CSP) and choose machine types for the nodes.

## Creating the configuration file

You can generate a configuration file for your CSP by using the following CLI command:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

```bash
constellation config generate azure
```

</tabItem>
<tabItem value="gcp" label="GCP">

```bash
constellation config generate gcp
```

</tabItem>
<tabItem value="aws" label="AWS">

```bash
constellation config generate aws
```

</tabItem>
</tabs>

This creates the file `constellation-conf.yaml` in the current directory.

:::tip
You can also automatically generate a configuration file by adding the `--generate-config` flag to the `constellation iam create` command when [creating an IAM configuration](#creating-an-iam-configuration).
:::
## Choosing a VM type

Constellation supports the following VM types:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

By default, Constellation uses `Standard_DC4as_v5` CVMs (4 vCPUs, 16 GB RAM) to create your cluster. Optionally, you can switch to a different VM type by modifying **instanceType** in the configuration file. For CVMs, any VM type with a minimum of 4 vCPUs from the [DCasv5 & DCadsv5](https://docs.microsoft.com/en-us/azure/virtual-machines/dcasv5-dcadsv5-series) or [ECasv5 & ECadsv5](https://docs.microsoft.com/en-us/azure/virtual-machines/ecasv5-ecadsv5-series) families is supported.

You can also run `constellation config instance-types` to get the list of all supported options.

</tabItem>
<tabItem value="gcp" label="GCP">

By default, Constellation uses `n2d-standard-4` VMs (4 vCPUs, 16 GB RAM) to create your cluster. Optionally, you can switch to a different VM type by modifying **instanceType** in the configuration file. All machines with a minimum of 4 vCPUs from the [C2D](https://cloud.google.com/compute/docs/compute-optimized-machines#c2d_machine_types) or [N2D](https://cloud.google.com/compute/docs/general-purpose-machines#n2d_machines) family are supported. You can run `constellation config instance-types` to get the list of all supported options.

</tabItem>
<tabItem value="aws" label="AWS">

By default, Constellation uses `m6a.xlarge` VMs (4 vCPUs, 16 GB RAM) to create your cluster. Optionally, you can switch to a different VM type by modifying **instanceType** in the configuration file. All nitroTPM-enabled machines with a minimum of 4 vCPUs (`xlarge` or larger) are supported. Refer to the [list of nitroTPM-enabled instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enable-nitrotpm-prerequisites.html) or run `constellation config instance-types` to get the list of all supported options.

</tabItem>
</tabs>

Enter the desired VM type in the **instanceType** field of the `constellation-conf.yaml` file, for example as shown below.
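On Azure, you could set the type with `yq`, assuming the field is nested under the `provider.azure` section of the file:

```bash
# Assumption: instanceType lives under the provider section for your CSP
yq eval -i '.provider.azure.instanceType = "Standard_DC4as_v5"' constellation-conf.yaml
```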

## Choosing a Kubernetes version

To learn which Kubernetes versions can be installed with your current CLI, run:
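
```bash
constellation config kubernetes-versions
```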

See also Constellation's [Kubernetes support policy](../architecture/versions.md#kubernetes-support-policy).

## Creating an IAM configuration

You can create an IAM configuration for your cluster automatically using the `constellation iam create` command.
If you haven't generated a configuration file yet, you can do so by adding the `--generate-config` flag to the command. This creates a configuration file and populates it with the created IAM values.
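The following sketch combines the Azure flags shown further below with config generation in one step:

```bash
# Create IAM resources and write constellation-conf.yaml in one go
constellation iam create azure --region=westus --resourceGroup=constellTest --servicePrincipal=spTest --generate-config
```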

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

You must be authenticated with the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) in the shell session.

```bash
constellation iam create azure --region=westus --resourceGroup=constellTest --servicePrincipal=spTest
```

This command creates an IAM configuration in the Azure region `westus`, creating a new resource group `constellTest` and a new service principal `spTest`.

Note that CVMs are currently only supported in a few regions, check [Azure's products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=virtual-machines&regions=all). These are:

* `westus`
* `eastus`
* `northeurope`
* `westeurope`

Paste the output into the corresponding fields of the `constellation-conf.yaml` file.

:::tip
Since `clientSecretValue` is a sensitive value, you can leave it empty in the configuration file and pass it via an environment variable instead. To this end, create the environment variable `CONSTELL_AZURE_CLIENT_SECRET_VALUE` and set it to the secret value, for example:
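```bash
# Pass the client secret via the environment instead of the config file
export CONSTELL_AZURE_CLIENT_SECRET_VALUE='<your-client-secret>'
```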
:::

</tabItem>
<tabItem value="gcp" label="GCP">

You must be authenticated with the [GCP CLI](https://cloud.google.com/sdk/gcloud) in the shell session.

```bash
constellation iam create gcp --projectID=yourproject-12345 --zone=europe-west2-a --serviceAccountID=constell-test
```

This command creates an IAM configuration in the GCP project `yourproject-12345` in the zone `europe-west2-a`, creating a new service account `constell-test`.

Note that only regions offering CVMs of the `C2D` or `N2D` series are supported. You can find a [list of all regions in Google's documentation](https://cloud.google.com/compute/docs/regions-zones#available), which you can filter by machine type `N2D`.

Paste the output into the corresponding fields of the `constellation-conf.yaml` file.

</tabItem>
<tabItem value="aws" label="AWS">

You must be authenticated with the [AWS CLI](https://aws.amazon.com/en/cli/) in the shell session.

```bash
constellation iam create aws --zone=eu-central-1a --prefix=constellTest
```

This command creates an IAM configuration for the AWS zone `eu-central-1a`, using the prefix `constellTest` for all named resources it creates.

Constellation OS images are currently replicated to the following regions:

* `eu-central-1`
* `us-east-2`
* `ap-south-1`

If you require the OS image to be available in another region, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md&title=Support+new+AWS+image+region:+xx-xxxx-x).

You can find a list of all [regions in AWS's documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions).

Paste the output into the corresponding fields of the `constellation-conf.yaml` file.

</tabItem>
</tabs>

<details>
<summary>Alternatively, you can manually create the IAM configuration on your CSP.</summary>

The following describes the configuration fields and how you obtain the required information or create the required resources.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

* **subscription**: The UUID of your Azure subscription, e.g., `8b8bd01f-efd9-4113-9bd1-c82137c32da7`.

  You can view your subscription UUID via `az account show` and read the `id` field. For more information, refer to [Azure's documentation](https://docs.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription).
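  For example:

  ```bash
  # Prints your account details, including the subscription and tenant UUIDs
  az account show
  ```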

* **tenant**: The UUID of your Azure tenant, e.g., `3400e5a2-8fe2-492a-886c-38cb66170f25`.

  You can view your tenant UUID via `az account show` and read the `tenant` field. For more information, refer to [Azure's documentation](https://docs.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-ad-tenant).

* **location**: The Azure datacenter location you want to deploy your cluster in, e.g., `westus`. CVMs are currently only supported in a few regions, check [Azure's products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=virtual-machines&regions=all). These are:

  * `westus`
  * `eastus`
  * `northeurope`
  * `westeurope`

* **resourceGroup**: [Create a new resource group in Azure](https://portal.azure.com/#create/Microsoft.ResourceGroup) for your Constellation cluster. Set this configuration field to the name of the created resource group.

* **userAssignedIdentity**: [Create a new managed identity in Azure](https://portal.azure.com/#create/Microsoft.ManagedIdentity). You should create the identity in a different resource group, since all resources within the cluster resource group will be deleted on cluster termination.

  Add two role assignments to the identity: `Virtual Machine Contributor` and `Application Insights Component Contributor`. The `scope` of both should refer to the previously created cluster resource group.

  Set the configuration value to the full ID of the created identity, e.g., `/subscriptions/8b8bd01f-efd9-4113-9bd1-c82137c32da7/resourcegroups/constellation-identity/providers/Microsoft.ManagedIdentity/userAssignedIdentities/constellation-identity`. You can get it by opening the `JSON View` from the `Overview` section of the identity.

  The user-assigned identity is used by instances of the cluster to access other cloud resources.
  For more information about managed identities, refer to [Azure's documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).

* **appClientID**: [Create a new app registration in Azure](https://portal.azure.com/#view/Microsoft_AAD_RegisteredApps/CreateApplicationBlade/quickStartType~/null/isMSAApp~/false).

  Set `Supported account types` to `Accounts in this organizational directory only` and leave the `Redirect URI` empty.

  Set the configuration value to the `Application (client) ID`, e.g., `86ec31dd-532b-4a8c-a055-dd23f25fb12f`.

  In the cluster resource group, go to `Access Control (IAM)` and set the created app registration as `Owner`.

* **clientSecretValue**: In the previously created app registration, go to `Certificates & secrets` and create a new `Client secret`.

  Set the configuration value to the secret value.

  :::tip
  Since this is a sensitive value, you can alternatively leave `clientSecretValue` empty in the configuration file and pass it via the environment variable `CONSTELL_AZURE_CLIENT_SECRET_VALUE`, as shown above.
  :::

</tabItem>

<tabItem value="gcp" label="GCP">

* **project**: The ID of your GCP project, e.g., `constellation-129857`.

  You can find it on the [welcome screen of your GCP project](https://console.cloud.google.com/welcome). For more information, refer to [Google's documentation](https://support.google.com/googleapi/answer/7014113).

* **region**: The GCP region you want to deploy your cluster in, e.g., `us-west1`.

  You can find a [list of all regions in Google's documentation](https://cloud.google.com/compute/docs/regions-zones#available).

* **zone**: The GCP zone you want to deploy your cluster in, e.g., `us-west1-a`.

  You can find a [list of all zones in Google's documentation](https://cloud.google.com/compute/docs/regions-zones#available).

* **serviceAccountKeyPath**: To configure this, you need to create a GCP [service account](https://cloud.google.com/iam/docs/service-accounts) with the following permissions:

  - `Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)`
  - `Compute Network Admin (roles/compute.networkAdmin)`
  - `Compute Security Admin (roles/compute.securityAdmin)`
  - `Compute Storage Admin (roles/compute.storageAdmin)`
  - `Service Account User (roles/iam.serviceAccountUser)`

  Afterward, create and download a new JSON key for this service account. Place the downloaded file in your Constellation workspace, and set the config parameter to the filename, e.g., `constellation-129857-15343dba46cb.json`.
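As a sketch, the account and key could be created with the `gcloud` CLI along these lines (the account name `constell-sa` is a placeholder; repeat the role binding for each role listed above):

```bash
# Create the service account (name is a placeholder)
gcloud iam service-accounts create constell-sa --project=yourproject-12345

# Grant one of the required roles; repeat for the other roles listed above
gcloud projects add-iam-policy-binding yourproject-12345 \
  --member="serviceAccount:constell-sa@yourproject-12345.iam.gserviceaccount.com" \
  --role="roles/compute.instanceAdmin.v1"

# Create and download a JSON key into your Constellation workspace
gcloud iam service-accounts keys create constellation-sa-key.json \
  --iam-account="constell-sa@yourproject-12345.iam.gserviceaccount.com"
```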

</tabItem>

<tabItem value="aws" label="AWS">

* **region**: The name of your chosen AWS data center region, e.g., `us-east-2`.

  Constellation OS images are currently replicated to the following regions:

  * `eu-central-1`
  * `us-east-2`
  * `ap-south-1`

  If you require the OS image to be available in another region, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md&title=Support+new+AWS+image+region:+xx-xxxx-x).

  You can find a list of all [regions in AWS's documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions).

* **zone**: The name of your chosen AWS data center availability zone, e.g., `us-east-2a`.

  Learn more about [availability zones in AWS's documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones).

* **iamProfileControlPlane**: The name of an IAM instance profile attached to all control-plane nodes.

  You can create the resource with [Terraform](https://www.terraform.io/). For that, use the [provided Terraform script](https://github.com/edgelesssys/constellation/tree/release/v2.2/hack/terraform/aws/iam) to generate the necessary profile. The profile name is provided as the Terraform output value `control_plane_instance_profile`.

  Alternatively, you can create the AWS profile with a tool of your choice. Use the JSON policy in [main.tf](https://github.com/edgelesssys/constellation/tree/release/v2.2/hack/terraform/aws/iam/main.tf) in the resource `aws_iam_policy.control_plane_policy`.

* **iamProfileWorkerNodes**: The name of an IAM instance profile attached to all worker nodes.

  You can create the resource with [Terraform](https://www.terraform.io/). For that, use the [provided Terraform script](https://github.com/edgelesssys/constellation/tree/release/v2.2/hack/terraform/aws/iam) to generate the necessary profile. The profile name is provided as the Terraform output value `worker_nodes_instance_profile`.

  Alternatively, you can create the AWS profile with a tool of your choice. Use the JSON policy in [main.tf](https://github.com/edgelesssys/constellation/tree/release/v2.2/hack/terraform/aws/iam/main.tf) in the resource `aws_iam_policy.worker_node_policy`.
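As a sketch, both profiles could be generated with the provided script like this (repository and path taken from the links above):

```bash
# Clone the release branch and apply the provided IAM Terraform script
git clone --branch release/v2.2 https://github.com/edgelesssys/constellation.git
cd constellation/hack/terraform/aws/iam
terraform init
terraform apply
# Read the generated profile names from the Terraform outputs
terraform output control_plane_instance_profile
terraform output worker_nodes_instance_profile
```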

</tabItem>

</tabs>
</details>

Now that you've configured your CSP, you can [create your cluster](./create.md).

## Deleting an IAM configuration

You can keep a created IAM configuration and reuse it for new clusters. Alternatively, you can delete it if you don't want to use it anymore.

Delete the IAM configuration by executing the following command in the same directory where you executed `constellation iam create` (the directory that contains [`constellation-iam-terraform`](../reference/terraform.md) as a subdirectory):

```bash
constellation iam destroy
```
92 docs/versioned_docs/version-2.6/workflows/create.md Normal file
@@ -0,0 +1,92 @@
# Create your cluster

Creating your cluster requires two steps:

1. Creating the necessary resources in your cloud environment
2. Bootstrapping the Constellation cluster and setting up a connection

See the [architecture](../architecture/orchestration.md) section for details on the inner workings of this process.

:::tip
If you don't have a cloud subscription, check out [MiniConstellation](../getting-started/first-steps-local.md), which lets you set up a local Constellation cluster using virtualization.
:::

## The *create* step

This step creates the necessary resources for your cluster in your cloud environment.
Before you create the cluster, make sure to have a [valid configuration file](./config.md).

### Create

<tabs groupId="provider">
<tabItem value="cli" label="CLI">

Choose the initial size of your cluster.
The following command creates a cluster with one control-plane and two worker nodes:

```bash
constellation create --control-plane-nodes 1 --worker-nodes 2
```

For details on the flags, consult the command help via `constellation create -h`.

*create* stores your cluster's state in a [`constellation-terraform`](../architecture/orchestration.md#cluster-creation-process) directory in your workspace.

</tabItem>
<tabItem value="terraform" label="Terraform">

Using Terraform directly allows for easier GitOps integration and helps with meeting regulatory requirements.
Since the Constellation CLI also uses Terraform under the hood, you can reuse the same Terraform files.

:::info
Familiarize yourself with the [Terraform usage policy](../reference/terraform.md) before manually interacting with Terraform to create a cluster.
Also refrain from changing the Terraform resource definitions, as Constellation is tightly coupled to them.
:::

Download the Terraform files for the selected CSP from the [GitHub repository](https://github.com/edgelesssys/constellation/tree/main/cli/internal/terraform/terraform).

Create a `terraform.tfvars` file.
There, define all needed variables found in `variables.tf` using the values from the `constellation-conf.yaml`, for example as sketched below.
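A hypothetical excerpt follows; the actual variable names and values depend on the `variables.tf` of your CSP:

```bash
# Hypothetical variable names for illustration only; consult variables.tf
cat > terraform.tfvars <<'EOF'
name                = "constell"
control_plane_count = 1
worker_count        = 2
EOF
```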

To find the image reference for your CSP and region, execute:

```bash
CONSTELL_VER=vX.Y.Z
curl -s https://cdn.confidential.cloud/constellation/v1/ref/-/stream/stable/$CONSTELL_VER/image/info.json | jq
```

Initialize and apply Terraform to create the configured infrastructure:

```bash
terraform init
terraform apply
```

The Constellation [init step](#the-init-step) requires the already created `constellation-conf.yaml` and a `constellation-id.json`.
Create the `constellation-id.json` using the output from the Terraform state and the `constellation-conf.yaml`:

```bash
CONSTELL_IP=$(terraform output ip)
CONSTELL_INIT_SECRET=$(terraform output initSecret | jq -r | tr -d '\n' | base64)
CONSTELL_CSP=$(cat constellation-conf.yaml | yq ".provider | keys | .[0]")
jq --null-input --arg cloudprovider "$CONSTELL_CSP" --arg ip "$CONSTELL_IP" --arg initsecret "$CONSTELL_INIT_SECRET" '{"cloudprovider":$cloudprovider,"ip":$ip,"initsecret":$initsecret}' > constellation-id.json
```

</tabItem>
</tabs>

## The *init* step

The following command initializes and bootstraps your cluster:

```bash
constellation init
```

Next, configure `kubectl` for your cluster:

```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
```
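As a quick check, listing the nodes of the new cluster should now work:

```bash
kubectl get nodes
```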

🏁 That's it. You've successfully created a Constellation cluster.
1 docs/versioned_docs/version-2.6/workflows/lb.md Normal file
@@ -0,0 +1 @@
# Expose services
148 docs/versioned_docs/version-2.6/workflows/recovery.md Normal file
@@ -0,0 +1,148 @@
# Recover your cluster

Recovery of a Constellation cluster means getting it back into a healthy state after too many concurrent node failures in the control plane.
Reasons for an unhealthy cluster range from a power outage or planned reboot to the migration of nodes or regions.
Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [state disk](../architecture/images.md#state-disk).

Constellation provides a recovery mechanism for cases where the control plane has failed and is unable to replace nodes.
The `constellation recover` command securely connects to all nodes in need of recovery using [attested TLS](../architecture/attestation.md#attested-tls-atls) and provides them with the keys to decrypt their state disks and continue booting.

## Identify unhealthy clusters

The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, the first sign is that the Kubernetes API server becomes unresponsive.

You can check the health status of the nodes via the cloud service provider (CSP).
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each CSP.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

First, in the Azure portal, find the cluster's resource group.
Inside the resource group, open the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>`.
On the left, go to **Settings** > **Instances** and check that enough members are in a *Running* state.

Second, check the boot logs of these *Instances*.
In the scale set's *Instances* view, open the details page of the desired instance.
On the left, go to **Support + troubleshooting** > **Serial console**.

In the serial console output, search for `Waiting for decryption key`.
Output similar to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T09:56:41Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"azure"}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
```

The node will then try to connect to the [*JoinService*](../architecture/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["10.9.0.5:30090","10.9.0.6:30090"]}
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"10.9.0.5:30090"}
{"level":"WARN","ts":"2022-09-08T09:57:03Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.9.0.5:30090: i/o timeout\"","endpoint":"10.9.0.5:30090"}
{"level":"INFO","ts":"2022-09-08T09:57:03Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"10.9.0.6:30090"}
{"level":"WARN","ts":"2022-09-08T09:57:23Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.9.0.6:30090: i/o timeout\"","endpoint":"10.9.0.6:30090"}
{"level":"ERROR","ts":"2022-09-08T09:57:23Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```

This means that you have to recover the node manually.

</tabItem>
<tabItem value="gcp" label="GCP">

First, check that the control plane *Instance Group* has enough members in a *Ready* state.
In the GCP Console, go to **Instance Groups** and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.

Second, check the status of the *VM Instances*.
Go to **VM Instances** and open the details of the desired instance.
Check the serial console output of that instance by opening the **Logs** > **Serial port 1 (console)** page:

![](../_media/recovery-gcp-serial-console-link.png)

In the serial console output, search for `Waiting for decryption key`.
Output similar to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"gcp"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```

The node will then try to connect to the [*JoinService*](../architecture/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["192.168.178.4:30090","192.168.178.2:30090"]}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"192.168.178.4:30090"}
{"level":"WARN","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.4:30090: connect: connection refused\"","endpoint":"192.168.178.4:30090"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"192.168.178.2:30090"}
{"level":"WARN","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.2:30090: i/o timeout\"","endpoint":"192.168.178.2:30090"}
{"level":"ERROR","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```

This means that you have to recover the node manually.

</tabItem>
<tabItem value="aws" label="AWS">

First, open the AWS console to view all Auto Scaling Groups (ASGs) in the region of your cluster. Select the ASG of the control plane `<cluster-name>-<UID>-control-plane` and check that enough members are in a *Running* state.

Second, check the boot logs of these *Instances*. In the ASG's *Instance management* view, select each desired instance. In the upper right corner, select **Action > Monitor and troubleshoot > Get system log**.

In the serial console output, search for `Waiting for decryption key`.
Output similar to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","caller":"cmd/main.go:55","msg":"Starting disk-mapper","version":"2.0.0","cloudProvider":"aws"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```

The node will then try to connect to the [*JoinService*](../architecture/microservices.md#joinservice) and obtain the decryption key.
If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["192.168.178.4:30090","192.168.178.2:30090"]}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"192.168.178.4:30090"}
{"level":"WARN","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.4:30090: connect: connection refused\"","endpoint":"192.168.178.4:30090"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:96","msg":"Requesting rejoin ticket","endpoint":"192.168.178.2:30090"}
{"level":"WARN","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:101","msg":"Failed to rejoin on endpoint","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.2:30090: i/o timeout\"","endpoint":"192.168.178.2:30090"}
{"level":"ERROR","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```

This means that you have to recover the node manually.

</tabItem>
</tabs>

## Recover a cluster

Recovering a cluster requires the following parameters:

* The `constellation-id.json` file in your working directory or the cluster's load balancer IP address
* The master secret of the cluster

A cluster can be recovered like this:

```shell-session
$ constellation recover --master-secret constellation-mastersecret.json
Pushed recovery key.
Pushed recovery key.
Pushed recovery key.
Recovered 3 control-plane nodes.
```

In the node's serial console output, you'll see output similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T10:26:59Z","logger":"recoveryServer","caller":"recoveryserver/server.go:93","msg":"Received recover call"}
{"level":"INFO","ts":"2022-09-08T10:26:59Z","logger":"recoveryServer","caller":"recoveryserver/server.go:125","msg":"Received state disk key and measurement secret, shutting down server"}
{"level":"INFO","ts":"2022-09-08T10:26:59Z","logger":"recoveryServer.gRPC","caller":"zap/server_interceptors.go:61","msg":"finished streaming call with code OK","grpc.start_time":"2022-09-08T10:26:59Z","system":"grpc","span.kind":"server","grpc.service":"recoverproto.API","grpc.method":"Recover","peer.address":"192.0.2.3:41752","grpc.code":"OK","grpc.time_ms":15.701}
{"level":"INFO","ts":"2022-09-08T10:27:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:87","msg":"RejoinClient stopped"}
```
87 docs/versioned_docs/version-2.6/workflows/sbom.md Normal file
@@ -0,0 +1,87 @@
# Consume software bill of materials (SBOMs)

Constellation builds produce a [software bill of materials (SBOM)](https://www.ntia.gov/SBOM) for each generated [artifact](../architecture/microservices.md).
You can use SBOMs to make informed decisions about dependencies and vulnerabilities in a given application. Enterprises rely on SBOMs to maintain an inventory of used applications, which allows them to take data-driven approaches to managing risks related to vulnerabilities.

SBOMs for Constellation are generated using [Syft](https://github.com/anchore/syft), signed using [Cosign](https://github.com/sigstore/cosign), and stored with the produced artifact.

:::note
The public key for Edgeless Systems' long-term code-signing key is:

```
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEf8F1hpmwE+YCFXzjGtaQcrL6XZVT
JmEe5iSLvG1SyQSAew7WdMKF6o9t8e2TFuCkzlOhhlws2OHWbiFZnFWCFw==
-----END PUBLIC KEY-----
```

The public key is also available for download at https://edgeless.systems/es.pub and on the Twitter profile [@EdgelessSystems](https://twitter.com/EdgelessSystems).

To run the following examples, make sure the key is available in a file named `cosign.pub`, for example by downloading it:
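```bash
# Download the public key and save it as cosign.pub
curl -fsSL https://edgeless.systems/es.pub -o cosign.pub
```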
:::

## Verify and download SBOMs

The following sections detail how to work with each type of artifact to verify and extract the SBOM.

### Constellation CLI

The SBOM for the Constellation CLI is made available on the [GitHub release page](https://github.com/edgelesssys/constellation/releases). The SBOM (`constellation.spdx.sbom`) and corresponding signature (`constellation.spdx.sbom.sig`) are valid for each Constellation CLI of a given version, regardless of architecture and operating system.

```bash
curl -LO https://github.com/edgelesssys/constellation/releases/download/v2.2.0/constellation.spdx.sbom
curl -LO https://github.com/edgelesssys/constellation/releases/download/v2.2.0/constellation.spdx.sbom.sig
cosign verify-blob --key cosign.pub --signature constellation.spdx.sbom.sig constellation.spdx.sbom
```

### Container images

SBOMs for container images are [attached to the image using Cosign](https://docs.sigstore.dev/cosign/other_types#sboms-software-bill-of-materials) and uploaded to the same registry.

As a consumer, use Cosign to download and verify the SBOM:

```bash
# Verify and download the attestation statement
cosign verify-attestation ghcr.io/edgelesssys/constellation/verification-service@v2.2.0 --type 'https://cyclonedx.org/bom' --key cosign.pub --output-file verification-service.att.json
# Extract the SBOM from the attestation statement
jq -r .payload verification-service.att.json | base64 -d > verification-service.cyclonedx.sbom
```

A successful verification should result in output similar to the following:

```shell-session
$ cosign verify-attestation ghcr.io/edgelesssys/constellation/verification-service@v2.2.0 --type 'https://cyclonedx.org/bom' --key cosign.pub --output-file verification-service.sbom

Verification for ghcr.io/edgelesssys/constellation/verification-service@v2.2.0 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
$ jq -r .payload verification-service.sbom | base64 -d > verification-service.cyclonedx.sbom
```

:::note
This example considers only the `verification-service`. The same approach works for all containers in the [Constellation container registry](https://github.com/orgs/edgelesssys/packages?repo_name=constellation).
:::

<!--
TODO: Once mkosi is implemented
## Operating System
-->

## Vulnerability scanning

You can use many different tools to consume SBOMs. This section suggests tools that are popular and known to produce reliable results, but any tool that consumes [SPDX](https://spdx.dev/) or [CycloneDX](https://cyclonedx.org/) files should work.

Syft is able to [convert between the two formats](https://github.com/anchore/syft#format-conversion-experimental) in case you require a specific type.

### Grype

[Grype](https://github.com/anchore/grype) is a CLI tool that lends itself well to integration into CI/CD systems or local developer machines. It's also able to consume the signed attestation statement directly and perform the verification in one go:

```bash
grype att:verification-service.sbom --key cosign.pub --add-cpes-if-none -q
```

### Dependency Track

[Dependency Track](https://dependencytrack.org/) is one of the oldest and most mature solutions for managing software inventory and vulnerabilities. Once an SBOM is imported, it continuously scans it for new vulnerabilities. It supports the CycloneDX format and provides direct guidance on how to comply with [U.S. Executive Order 14028](https://docs.dependencytrack.org/usage/executive-order-14028/).
112 docs/versioned_docs/version-2.6/workflows/scale.md Normal file
@@ -0,0 +1,112 @@
# Scale your cluster

Constellation provides all features of a Kubernetes cluster, including scaling and autoscaling.

## Worker node scaling

### Autoscaling

Constellation comes with autoscaling disabled by default. To enable autoscaling, find the scaling group of worker nodes:

```bash
worker_group=$(kubectl get scalinggroups -o json | jq -r '.items[].metadata.name | select(contains("worker"))')
echo "The name of your worker scaling group is '$worker_group'"
```

Then, patch the `autoscaling` field of the scaling group resource to `true`:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
```

The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run.
You can configure the minimum and maximum number of worker nodes in the scaling group by patching the `min` or `max` fields of the scaling group resource:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
```

The cluster autoscaler will now never provision more than 5 worker nodes.

If you want to see the autoscaling in action, try adding a deployment with a lot of replicas, like the following Nginx deployment. The number of replicas needed to trigger the autoscaling depends on the size and count of your worker nodes. Wait for the rollout of the deployment to finish and compare the number of worker nodes before and after the deployment:

```bash
kubectl create deployment nginx --image=nginx --replicas 150
kubectl -n kube-system get nodes
kubectl rollout status deployment nginx
kubectl -n kube-system get nodes
```
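Afterward, you can delete the test deployment again; the autoscaler then scales the worker nodes back down:

```bash
kubectl delete deployment nginx
```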

### Manual scaling

Alternatively, you can manually scale your cluster up or down:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. Find your Constellation resource group.
2. Select the `scale-set-workers` scale set.
3. Go to **Settings** > **Scaling**.
4. Set the new **instance count** and **save**.

</tabItem>
<tabItem value="gcp" label="GCP">

1. In Compute Engine, go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **worker** instance group.
3. Set the new **number of instances** and **save**.

</tabItem>
<tabItem value="aws" label="AWS">

:::caution
Scaling isn't yet implemented for AWS. If you require this feature, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md)!
:::

</tabItem>
</tabs>

## Control-plane node scaling

Control-plane nodes can **only be scaled manually, and only scaled up**!

To increase the number of control-plane nodes, follow these steps:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. Find your Constellation resource group.
2. Select the `scale-set-controlplanes` scale set.
3. Go to **Settings** > **Scaling**.
4. Set the new (increased) **instance count** and **save**.

</tabItem>
<tabItem value="gcp" label="GCP">

1. In Compute Engine, go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **control-plane** instance group.
3. Set the new (increased) **number of instances** and **save**.

</tabItem>
<tabItem value="aws" label="AWS">

:::caution
Scaling isn't yet implemented for AWS. If you require this feature, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md)!
:::

</tabItem>
</tabs>

If you scale down the number of control-plane nodes, the removed nodes won't be able to exit the `etcd` cluster correctly. This endangers the quorum that's required to run a stable Kubernetes control plane.
295 docs/versioned_docs/version-2.6/workflows/storage.md Normal file
@@ -0,0 +1,295 @@
# Use persistent storage

Persistent storage in Kubernetes requires cloud-specific configuration.
For abstraction of container storage, Kubernetes offers [volumes](https://kubernetes.io/docs/concepts/storage/volumes/), allowing users to mount storage solutions directly into containers.
The [Container Storage Interface (CSI)](https://kubernetes-csi.github.io/docs/) is the standard interface for exposing arbitrary block and file storage systems into containers in Kubernetes.
Cloud service providers (CSPs) offer their own CSI-based solutions for cloud storage.

## Confidential storage

Most cloud storage solutions support encryption, such as [GCE Persistent Disks (PD)](https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek).
Constellation supports the available CSI-based storage options for Kubernetes engines in Azure and GCP.
However, their encryption takes place in the storage backend and is managed by the CSP.
Thus, using the default CSI drivers for these storage types means trusting the CSP with your persistent data.

To address this, Constellation provides CSI drivers for Azure Disk and GCE PD, offering [encryption on the node level](../architecture/keys.md#storage-encryption). They enable transparent encryption for persistent volumes without needing to trust the cloud backend. Plaintext data never leaves the confidential VM context, offering you confidential storage.

For more details see [encrypted persistent storage](../architecture/encrypted-storage.md).

## CSI drivers

Constellation supports the following drivers, which offer node-level encryption and optional integrity protection.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

**Constellation CSI driver for Azure Disk**:
Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for more information. Since Azure Disks are mounted as ReadWriteOnce, they're only available to a single pod.

</tabItem>
<tabItem value="gcp" label="GCP">

**Constellation CSI driver for GCP Persistent Disk**:
Mount [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
This includes support for [volume snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots), which let you create copies of your volume at a specific point in time.
You can use them to bring a volume back to a prior state or provision new volumes.
Follow the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.

</tabItem>
<tabItem value="aws" label="AWS">

:::caution
Confidential storage isn't yet implemented for AWS. If you require this feature, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md)!

You may use other (non-confidential) CSI drivers that are compatible with Kubernetes on AWS.
:::

</tabItem>
</tabs>

In case the options above aren't a suitable solution for you, Constellation is compatible with all other CSI-based storage options. For example, you can use [Azure Files](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction) or [GCP Filestore](https://cloud.google.com/filestore) with Constellation out of the box. Constellation just doesn't yet provide transparent node-level encryption for these storage types.

## Installation

The Constellation CLI automatically installs Constellation's CSI driver for the selected CSP in your cluster.
If you don't need a CSI driver or wish to deploy your own, you can disable the automatic installation by setting `deployCSIDriver` to `false` in your Constellation config file.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

Azure comes with two storage classes by default:

* `encrypted-rwo`
  * Uses [Standard SSDs](https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds)
  * ext4 filesystem
  * Encryption of all data written to disk
* `integrity-encrypted-rwo`
  * Uses [Premium SSDs](https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssds)
  * ext4 filesystem
  * Encryption of all data written to disk
  * Integrity protection of data written to disk

For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/encrypted-storage.md#cryptographic-algorithms).

:::info
The default storage class is set to `encrypted-rwo` for performance reasons.
If you want integrity-protected storage, set the `storageClassName` parameter of your persistent volume claim to `integrity-encrypted-rwo`.

Alternatively, you can create your own storage class with integrity protection enabled by adding `csi.storage.k8s.io/fstype: ext4-integrity` to the class `parameters`.
Or use another filesystem by specifying its type with the suffix `-integrity`, e.g., `csi.storage.k8s.io/fstype: xfs-integrity`. See the sketch after this info box.

Note that volume expansion isn't supported for integrity-protected disks.
:::
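As a minimal sketch, such a class could look like the following, reusing the Azure provisioner name from the storage class listing further below:

```bash
# Custom storage class with integrity protection and an xfs filesystem
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: integrity-xfs
provisioner: azuredisk.csi.confidential.cloud
parameters:
  csi.storage.k8s.io/fstype: xfs-integrity
EOF
```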

</tabItem>
<tabItem value="gcp" label="GCP">

GCP comes with two storage classes by default:

* `encrypted-rwo`
  * Uses [standard persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs)
  * ext4 filesystem
  * Encryption of all data written to disk
* `integrity-encrypted-rwo`
  * Uses [performance (SSD) persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs)
  * ext4 filesystem
  * Encryption of all data written to disk
  * Integrity protection of data written to disk

For more information on encryption algorithms and key sizes, refer to [cryptographic algorithms](../architecture/encrypted-storage.md#cryptographic-algorithms).

:::info
The default storage class is set to `encrypted-rwo` for performance reasons.
If you want integrity-protected storage, set the `storageClassName` parameter of your persistent volume claim to `integrity-encrypted-rwo`.

Alternatively, you can create your own storage class with integrity protection enabled by adding `csi.storage.k8s.io/fstype: ext4-integrity` to the class `parameters`.
Or use another filesystem by specifying its type with the suffix `-integrity`, e.g., `csi.storage.k8s.io/fstype: xfs-integrity`.

Note that volume expansion isn't supported for integrity-protected disks.
:::

</tabItem>
<tabItem value="aws" label="AWS">

:::caution
Confidential storage isn't yet implemented for AWS. If you require this feature, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md)!

You may use other (non-confidential) CSI drivers that are compatible with Kubernetes on AWS.
:::

</tabItem>
</tabs>

1. Create a [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)

   A [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) is a request for storage with certain properties.
   It can refer to a storage class.
   The following creates a persistent volume claim, requesting 20 GiB of storage via the `encrypted-rwo` storage class:

   ```bash
   cat <<EOF | kubectl apply -f -
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc-example
     namespace: default
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: encrypted-rwo
     resources:
       requests:
         storage: 20Gi
   EOF
   ```

2. Create a pod with persistent storage

   You can assign a persistent volume claim to an application in need of persistent storage.
   The mounted volume persists across restarts.
   The following creates a pod that uses the previously created persistent volume claim:

   ```bash
   cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: Pod
   metadata:
     name: web-server
     namespace: default
   spec:
     containers:
     - name: web-server
       image: nginx
       volumeMounts:
       - mountPath: /var/lib/www/html
         name: mypvc
     volumes:
     - name: mypvc
       persistentVolumeClaim:
         claimName: pvc-example
         readOnly: false
   EOF
   ```

### Change the default storage class

The default storage class is responsible for all persistent volume claims that don't explicitly request a `storageClassName`.
Constellation creates a storage class with encryption enabled and sets this as the default class.
In case you wish to change it, follow the steps below:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. List the storage classes in your cluster:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                      PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   encrypted-rwo (default)   azuredisk.csi.confidential.cloud   Delete          Immediate           true                   1d
   integrity-encrypted-rwo   azuredisk.csi.confidential.cloud   Delete          Immediate           false                  1d
   ```

   The default storage class is marked by `(default)`.

2. Mark the old default storage class as non-default

   If you previously used another storage class as the default, you have to remove that annotation:

   ```bash
   kubectl patch storageclass encrypted-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
   ```

3. Mark the new class as the default

   ```bash
   kubectl patch storageclass integrity-encrypted-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
   ```

4. Verify that your chosen storage class is the default:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                                PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   encrypted-rwo                       azuredisk.csi.confidential.cloud   Delete          Immediate           true                   1d
   integrity-encrypted-rwo (default)   azuredisk.csi.confidential.cloud   Delete          Immediate           false                  1d
   ```

</tabItem>
|
||||
<tabItem value="gcp" label="GCP">

1. List the storage classes in your cluster:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                                PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   encrypted-rwo (default)             gcp.csi.confidential.cloud   Delete          Immediate           true                   1d
   integrity-encrypted-rwo             gcp.csi.confidential.cloud   Delete          Immediate           false                  1d
   ```

   The default storage class is marked by `(default)`.

2. Mark the old default storage class as non-default

   If you previously used another storage class as the default, you have to remove the default annotation from it:

   ```bash
   kubectl patch storageclass encrypted-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
   ```

3. Mark the new class as the default

   ```bash
   kubectl patch storageclass integrity-encrypted-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
   ```

4. Verify that your chosen storage class is the default:

   ```bash
   kubectl get storageclass
   ```

   The output is similar to this:

   ```shell-session
   NAME                                PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   encrypted-rwo                       gcp.csi.confidential.cloud   Delete          Immediate           true                   1d
   integrity-encrypted-rwo (default)   gcp.csi.confidential.cloud   Delete          Immediate           false                  1d
   ```

</tabItem>
<tabItem value="aws" label="AWS">

:::caution

Confidential storage isn't yet implemented for AWS. If you require this feature, [let us know](https://github.com/edgelesssys/constellation/issues/new?assignees=&labels=&template=feature_request.md)!

You may use other (non-confidential) CSI drivers that are compatible with Kubernetes on AWS.

:::

</tabItem>
</tabs>
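Changing the default only affects claims that omit `storageClassName`. If a particular workload should use the integrity-protected class regardless of the default, request it explicitly. A sketch using the class names from above (on AWS, substitute a storage class provided by your own CSI driver):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-integrity-example
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: integrity-encrypted-rwo
  resources:
    requests:
      storage: 20Gi
EOF
```
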
52
docs/versioned_docs/version-2.6/workflows/terminate.md
Normal file
@@ -0,0 +1,52 @@
# Terminate your cluster

You can terminate your cluster using the CLI. For this, you need the Terraform state directory named [`constellation-terraform`](../reference/terraform.md) in the current directory.

:::danger

All ephemeral storage and state of your cluster will be lost. Make sure any data is safely stored in persistent storage. Constellation can recreate your cluster and the associated encryption keys, but won't back up your application data automatically.

:::

<tabs groupId="provider">
<tabItem value="cli" label="CLI">
Terminate the cluster by running:

```bash
constellation terminate
```

Or without confirmation (e.g., for automation purposes):

```bash
constellation terminate --yes
```

This deletes all resources created by Constellation in your cloud environment.
All local files created by the `create` and `init` commands are deleted as well, except for `constellation-mastersecret.json` and the configuration file.

:::caution

Termination can fail if additional resources have been created that depend on the ones managed by Constellation. In this case, you need to delete these additional resources manually. Just run the `terminate` command again afterward to continue the termination process of the cluster.

:::

</tabItem>
<tabItem value="terraform" label="Terraform">
Terminate the cluster by running:

```bash
terraform destroy
```
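
If you run this in automation, Terraform's standard flag for skipping the interactive confirmation applies here as well (shown as a sketch; make sure this is really what you want):

```bash
terraform destroy -auto-approve
```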

Delete all files that are no longer needed:

```bash
rm constellation-id.json constellation-admin.conf
```

Only the `constellation-mastersecret.json` and the configuration file remain.

</tabItem>
</tabs>
103
docs/versioned_docs/version-2.6/workflows/troubleshooting.md
Normal file
@@ -0,0 +1,103 @@
# Troubleshooting

This section helps you diagnose problems when working with Constellation.

## Azure: Resource Providers can't be registered

On Azure, you may receive the following error when running `create` or `terminate` with limited IAM permissions:

```shell-session
Error: Error ensuring Resource Providers are registered.

Terraform automatically attempts to register the Resource Providers it supports to
ensure it's able to provision resources.

If you don't have permission to register Resource Providers you may wish to use the
"skip_provider_registration" flag in the Provider block to disable this functionality.

[...]
```

To continue, please ensure that the [required resource providers](../getting-started/install.md#required-permissions) have been registered in your subscription by your administrator.

Afterward, set `ARM_SKIP_PROVIDER_REGISTRATION=true` as an environment variable and run `create` or `terminate` again.
For example:

```bash
ARM_SKIP_PROVIDER_REGISTRATION=true constellation create --control-plane-nodes 1 --worker-nodes 2 -y
```

Or alternatively, for `terminate`:

```bash
ARM_SKIP_PROVIDER_REGISTRATION=true constellation terminate
```

## Cloud logging

To provide information during the early stages of a node's boot process, Constellation logs messages to the cloud providers' log systems. Since these offerings **aren't** confidential, only generic information without any sensitive values is stored. This provides administrators with a high-level understanding of the current state of a node.

You can view this information in the following places:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

1. In your Azure subscription, find the Constellation resource group.
2. Inside the resource group, find the Application Insights resource called `constellation-insights-*`.
3. On the left-hand side, go to `Logs`, located in the `Monitoring` section.
4. Close the Queries page if it pops up.
5. In the query text field, type in `traces` and click `Run`.

To **find the disk UUIDs** use the following query: `traces | where message contains "Disk UUID"`

</tabItem>
<tabItem value="gcp" label="GCP">

1. Select the project that hosts Constellation.
2. Go to the `Compute Engine` service.
3. On the right-hand side of a VM entry, select `More Actions` (a stacked ellipsis).
4. Select `View logs`.

To **find the disk UUIDs** use the following query: `resource.type="gce_instance" text_payload=~"Disk UUID:.*\n" logName=~".*/constellation-boot-log"`

:::info

Constellation uses the default bucket to store logs. Its [default retention period is 30 days](https://cloud.google.com/logging/quotas#logs_retention_periods).

:::
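
If you prefer the command line, the same query should also work with the `gcloud` CLI. A sketch, assuming the [`gcloud` CLI](https://cloud.google.com/sdk) is installed and authenticated for your project (field names can differ slightly between the console and the CLI, so adjust the filter if needed):

```bash
# Read the most recent disk UUID entries from the Constellation boot log
gcloud logging read \
  'resource.type="gce_instance" text_payload=~"Disk UUID:.*\n" logName=~".*/constellation-boot-log"' \
  --limit 10
```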
</tabItem>
<tabItem value="aws" label="AWS">

1. Open [AWS CloudWatch](https://console.aws.amazon.com/cloudwatch/home).
2. Select [Log Groups](https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups).
3. Select the log group that matches the name of your cluster.
4. Select the log stream for control or worker type nodes.

</tabItem>
</tabs>

## Connect to nodes

Debugging via a shell on a node is [directly supported by Kubernetes](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#node-shell-session).

1. Figure out which node to connect to:

   ```sh
   kubectl get nodes
   # or to see more information, such as IPs:
   kubectl get nodes -o wide
   ```

2. Connect to the node:

   ```sh
   kubectl debug node/constell-worker-xksa0-000000 -it --image=busybox
   ```

   You will be presented with a prompt.
   The node's file system is mounted at `/host`; see the sketch after this list for switching into it.

3. Once finished, clean up the debug pod:

   ```sh
   kubectl delete pod node-debugger-constell-worker-xksa0-000000-bjthj
   ```
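
To work directly in the node's file system from the debug pod, you can change root into the mount. A minimal sketch, assuming the debug image provides `chroot` and the node image provides a shell:

```sh
# Inside the debug pod: make the node's root filesystem your working root
chroot /host
```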
53
docs/versioned_docs/version-2.6/workflows/trusted-launch.md
Normal file
@@ -0,0 +1,53 @@
# Use Azure trusted launch VMs

Constellation also supports [trusted launch VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/trusted-launch) on Microsoft Azure. Trusted launch VMs don't offer the same level of security as Confidential VMs, but are available in more regions and in larger quantities. The main difference between trusted launch VMs and normal VMs is that the former offer vTPM-based remote attestation. When used with trusted launch VMs, Constellation relies on vTPM-based remote attestation to verify nodes.

:::caution

Trusted launch VMs don't provide runtime encryption and don't keep the cloud service provider (CSP) out of your trusted computing base.

:::

Constellation supports trusted launch VMs with instance types `Standard_D*_v4` and `Standard_E*_v4`. Run `constellation config instance-types` for a list of all supported instance types.

## VM images

Azure currently doesn't support [community galleries for trusted launch VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/share-gallery-community). Thus, you need to manually import the Constellation node image into your cloud subscription.

The latest image is available at <https://cdn.confidential.cloud/constellation/images/azure/trusted-launch/v2.2.0/constellation.img>. Simply adjust the version number to download a newer version.

After you've downloaded the image, create a resource group `constellation-images` in your Azure subscription and import the image.
You can use a script to do this:

```bash
wget https://raw.githubusercontent.com/edgelesssys/constellation/main/hack/importAzure.sh
chmod +x importAzure.sh
AZURE_IMAGE_VERSION=2.2.0 AZURE_RESOURCE_GROUP_NAME=constellation-images AZURE_IMAGE_FILE=./constellation.img ./importAzure.sh
```

The script creates the following resources:

1. A new image gallery with the default name `constellation-import`
2. A new image definition with the default name `constellation`
3. The actual image with the provided version, in this case `2.2.0`

Once the import has completed, use the `ID` of the image version in your `constellation-conf.yaml` for the `image` field. Set `confidentialVM` to `false`.
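
For orientation, the relevant part of the configuration could look like the following sketch. The exact image ID is printed by `importAzure.sh` and depends on your subscription; the field layout is an assumption based on the fields named above, so match it against your generated `constellation-conf.yaml`:

```yaml
# Hypothetical excerpt of constellation-conf.yaml after a manual import
image: <ID of the imported image version>  # taken from the importAzure.sh output
provider:
  azure:
    confidentialVM: false  # required for trusted launch VMs
```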

Fetch the image measurements:

```bash
IMAGE_VERSION=2.2.0
URL=https://public-edgeless-constellation.s3.us-east-2.amazonaws.com//communitygalleries/constellationcvm-b3782fa0-0df7-4f2f-963e-fc7fc42663df/images/constellation/versions/$IMAGE_VERSION/measurements.yaml
constellation config fetch-measurements -u$URL -s$URL.sig
```

:::info

The [constellation create](create.md) command will issue a warning because manually imported images aren't recognized as production grade images:

```shell-session
Configured image doesn't look like a released production image. Double check image before deploying to production.
```

Please ignore this warning.

:::
53
docs/versioned_docs/version-2.6/workflows/upgrade.md
Normal file
@@ -0,0 +1,53 @@
# Upgrade your cluster

Constellation provides an easy way to upgrade all components of your cluster, without disrupting its availability.
Specifically, you can upgrade the Kubernetes version, the nodes' image, and the Constellation microservices.
You configure the desired versions in your local Constellation configuration and trigger upgrades with the `upgrade apply` command.
To learn about available versions, use the `upgrade check` command.
Which versions are available depends on the CLI version you are using.

## Update the CLI

Each CLI comes with a set of supported microservice and Kubernetes versions.
Most importantly, a given CLI version can only upgrade a cluster of the previous minor version, but not older ones.
This means that you have to upgrade your CLI and cluster one minor version at a time.

For example, if you are currently on CLI version v2.6 and the latest version is v2.8, you should

* upgrade the CLI to v2.7,
* upgrade the cluster to v2.7,
* and only then continue upgrading the CLI (and the cluster) to v2.8.

Also note that if your current Kubernetes version isn't supported by the next CLI version, use your current CLI to upgrade to a newer Kubernetes version first.

To learn which Kubernetes versions are supported by a particular CLI, run [constellation config kubernetes-versions](../reference/cli.md#constellation-config-kubernetes-versions).

## Migrate the configuration

The Constellation configuration is stored in the file `constellation-conf.yaml` in your workspace.
Refer to the [migration reference](../reference/config-migration.md) to check if you need to update fields in your configuration file.

## Check for upgrades

To learn which versions the current CLI can upgrade to and what's installed in your cluster, run:

```bash
constellation upgrade check
```

You can either enter the reported target versions into your config manually or run the above command with the `--write-config` flag.
When using this flag, the `kubernetesVersion`, `image`, and `microserviceVersion` fields are overwritten with the smallest available upgrade.
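
For example, to check and persist the reported versions in one step:

```bash
# Writes the smallest available upgrade for each component into constellation-conf.yaml
constellation upgrade check --write-config
```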

## Apply the upgrade

Once you've updated your config with the desired versions, you can trigger the upgrade with this command:

```bash
constellation upgrade apply
```

Microservice upgrades finish within a few minutes, depending on the cluster size.
If you are interested, you can monitor pods restarting in the `kube-system` namespace with your tool of choice.
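
A simple way to watch the rollout, assuming you have `kubectl` access to the cluster:

```bash
# Watch kube-system pods restart as the new microservice versions roll out
kubectl get pods -n kube-system -w
```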

Image and Kubernetes upgrades take longer.
For each node in your cluster, a new node has to be created and joined.
The process usually takes up to ten minutes per node.
114
docs/versioned_docs/version-2.6/workflows/verify-cli.md
Normal file
@@ -0,0 +1,114 @@
# Verify the CLI

Edgeless Systems uses [sigstore](https://www.sigstore.dev/) and [SLSA](https://slsa.dev) to ensure supply-chain security for the Constellation CLI and node images ("artifacts"). sigstore consists of three components: [Cosign](https://docs.sigstore.dev/cosign/overview), [Rekor](https://docs.sigstore.dev/rekor/overview), and Fulcio. Edgeless Systems uses Cosign to sign artifacts. All signatures are uploaded to the public Rekor transparency log, which resides at https://rekor.sigstore.dev/.

:::note
The public key for Edgeless Systems' long-term code-signing key is:

```
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEf8F1hpmwE+YCFXzjGtaQcrL6XZVT
JmEe5iSLvG1SyQSAew7WdMKF6o9t8e2TFuCkzlOhhlws2OHWbiFZnFWCFw==
-----END PUBLIC KEY-----
```

The public key is also available for download at https://edgeless.systems/es.pub and in the Twitter profile [@EdgelessSystems](https://twitter.com/EdgelessSystems).
:::

The Rekor transparency log is a public append-only ledger that verifies and records signatures and associated metadata. The Rekor transparency log enables everyone to observe the sequence of (software) signatures issued by Edgeless Systems and many other parties. The transparency log allows for the public identification of dubious or malicious signatures.

You should always ensure that (1) your CLI executable was signed with the private key corresponding to the above public key and that (2) there is a corresponding entry in the Rekor transparency log. Both can be done as described in the following.

:::info
You don't need to verify the Constellation node images. This is done automatically by your CLI and the rest of Constellation.
:::

## Verify the signature

First, [install the Cosign CLI](https://docs.sigstore.dev/cosign/installation). Next, [download](https://github.com/edgelesssys/constellation/releases) and verify the signature that accompanies your CLI executable, for example:

```shell-session
$ cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64

Verified OK
```

The above performs an offline verification of the provided public key, signature, and executable. To also verify that a corresponding entry exists in the public Rekor transparency log, add the variable `COSIGN_EXPERIMENTAL=1`:

```shell-session
$ COSIGN_EXPERIMENTAL=1 cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64

tlog entry verified with uuid: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13 index: 3477047
Verified OK
```

🏁 You now know that your CLI executable was officially released and signed by Edgeless Systems.
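
If you'd rather verify against a locally pinned copy of the key than fetch it over HTTPS each time, you can download it once and reference the file (a small sketch; the URL is the one given above):

```bash
# Download Edgeless Systems' public key once and pin it locally
curl -sLo es.pub https://edgeless.systems/es.pub
cosign verify-blob --key es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64
```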

### Optional: Manually inspect the transparency log

To further inspect the public Rekor transparency log, [install the Rekor CLI](https://docs.sigstore.dev/rekor/installation). A search for the CLI executable should give a single UUID. (Note that this UUID contains the UUID from the previous `cosign` command.)

```shell-session
$ rekor-cli search --artifact constellation-linux-amd64

Found matching entries (listed by UUID):
362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
```

With this UUID you can get the full entry from the transparency log:

```shell-session
$ rekor-cli get --uuid=362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13

LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d
Index: 3477047
IntegratedTime: 2022-09-12T22:28:16Z
UUID: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
Body: {
  "HashedRekordObj": {
    "data": {
      "hash": {
        "algorithm": "sha256",
        "value": "40e137b9b9b8204d672642fd1e181c6d5ccb50cfc5cc7fcbb06a8c2c78f44aff"
      }
    },
    "signature": {
      "content": "MEUCIQCSER3mGj+j5Pr2kOXTlCIHQC3gT30I7qkLr9Awt6eUUQIgcLUKRIlY50UN8JGwVeNgkBZyYD8HMxwC/LFRWoMn180=",
      "publicKey": {
        "content": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFZjhGMWhwbXdFK1lDRlh6akd0YVFjckw2WFpWVApKbUVlNWlTTHZHMVN5UVNBZXc3V2RNS0Y2bzl0OGUyVEZ1Q2t6bE9oaGx3czJPSFdiaUZabkZXQ0Z3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg=="
      }
    }
  }
}
```

The field `publicKey` should contain Edgeless Systems' public key in Base64 encoding.

You can get an exhaustive list of artifact signatures issued by Edgeless Systems via the following command:

```bash
rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509
```

Edgeless Systems monitors this list to detect potential unauthorized use of its private key.

## Verify the provenance

Provenance attests that a software artifact was produced by a specific repository and build system invocation. For more information on provenance, visit [slsa.dev](https://slsa.dev/provenance/v0.2) and learn about the [adoption of SLSA for Constellation](../reference/slsa.md).

Just as checking its signature proves that the CLI hasn't been manipulated, checking the provenance proves that the artifact was produced by the expected build process and hasn't been tampered with.

To verify the provenance, first install the [slsa-verifier](https://github.com/slsa-framework/slsa-verifier). Then make sure you have the provenance file (`constellation.intoto.jsonl`) and the Constellation CLI downloaded. Both are available on the [GitHub release page](https://github.com/edgelesssys/constellation/releases).

:::info
The same provenance file is valid for all Constellation CLI executables of a given version, independent of the target platform.
:::

Use the verifier to perform the check:

```shell-session
$ slsa-verifier verify-artifact constellation-linux-amd64 \
  --provenance-path constellation.intoto.jsonl \
  --source-uri github.com/edgelesssys/constellation

Verified signature against tlog entry index 7771317 at URL: https://rekor.sigstore.dev/api/v1/log/entries/24296fb24b8ad77af2c04c8b4ae0d5bc5...
Verified build using builder https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.2.2 at commit 18e9924b416323c37b9cdfd6cc728de8a947424a
PASSED: Verified SLSA provenance
```
96
docs/versioned_docs/version-2.6/workflows/verify-cluster.md
Normal file
@@ -0,0 +1,96 @@
# Verify your cluster

Constellation's [attestation feature](../architecture/attestation.md) allows you, or a third party, to verify the integrity and confidentiality of your Constellation cluster.

## Fetch measurements

To verify the integrity of Constellation you need trusted measurements to verify against. For each node image released by Edgeless Systems, there are signed measurements, which you can download using the CLI:

```bash
constellation config fetch-measurements
```

This command performs the following steps:

1. Download the signed measurements for the configured image. By default, this will use Edgeless Systems' public measurement registry.
2. Verify the signature of the measurements. This will use Edgeless Systems' [public key](https://edgeless.systems/es.pub).
3. Write the measurements into the configuration file.

The configuration file then contains a list of `measurements` similar to the following:

```yaml
# ...
measurements:
  0:
    expected: "0f35c214608d93c7a6e68ae7359b4a8be5a0e99eea9107ece427c4dea4e439cf"
    warnOnly: false
  4:
    expected: "02c7a67c01ec70ffaf23d73a12f749ab150a8ac6dc529bda2fe1096a98bf42ea"
    warnOnly: false
  5:
    expected: "e6949026b72e5045706cd1318889b3874480f7a3f7c5c590912391a2d15e6975"
    warnOnly: true
  8:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
  9:
    expected: "f0a6e8601b00e2fdc57195686cd4ef45eb43a556ac1209b8e25d993213d68384"
    warnOnly: false
  11:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
  12:
    expected: "da99eb6cf7c7fbb692067c87fd5ca0b7117dc293578e4fea41f95d3d3d6af5e2"
    warnOnly: false
  13:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
  14:
    expected: "d7c4cc7ff7933022f013e03bdee875b91720b5b86cf1753cad830f95e791926f"
    warnOnly: true
  15:
    expected: "0000000000000000000000000000000000000000000000000000000000000000"
    warnOnly: false
# ...
```

Each entry specifies the expected value of a node measurement, and whether the measurement should be enforced (`warnOnly: false`) or only a warning should be logged (`warnOnly: true`).
By default, the subset of the [available measurements](../architecture/attestation.md#runtime-measurements) that can be locally reproduced and verified is enforced.

During attestation, the validating side (CLI or [join service](../architecture/microservices.md#joinservice)) compares each measurement reported by the issuing side (first node or joining node) individually.
For mismatching measurements that have `warnOnly` set to `true`, only a warning is emitted.
For mismatching measurements that have `warnOnly` set to `false`, an error is emitted and attestation fails.
If attestation fails for a new node, it isn't permitted to join the cluster.

## The *verify* command

:::note
The steps below are purely optional. They're automatically executed by `constellation init` when you initialize your cluster. The `constellation verify` command mostly has an illustrative purpose.
:::

The `verify` command obtains and verifies an attestation statement from a running Constellation cluster.

```bash
constellation verify [--cluster-id ...]
```

From the attestation statement, the command verifies the following properties:

* The cluster is using the correct Confidential VM (CVM) type.
* Inside the CVMs, the correct node images are running. The node images are identified through the measurements obtained in the previous step.
* The unique ID of the cluster matches the one from your `constellation-id.json` file or passed in via `--cluster-id`.

Once the above properties are verified, you know that you are talking to the right Constellation cluster and that it's in a good and trustworthy state.

### Custom arguments

The `verify` command also allows you to verify any Constellation deployment that you have network access to. For this you need the following:

* The IP address of a running Constellation cluster's [VerificationService](../architecture/microservices.md#verificationservice). The `VerificationService` is exposed via a `NodePort` service using the external IP address of your cluster. Run `kubectl get nodes -o wide` and look for `EXTERNAL-IP`.
* The cluster's *clusterID*. See [cluster identity](../architecture/keys.md#cluster-identity) for more details.

For example:

```bash
constellation verify -e 192.0.2.1 --cluster-id Q29uc3RlbGxhdGlvbkRvY3VtZW50YXRpb25TZWNyZXQ=
```
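
If you have the cluster's ID file at hand, you can also extract the *clusterID* from it instead of copying it manually. A sketch, assuming `jq` is installed and that `constellation-id.json` contains a `clusterID` field (check your file for the exact key):

```bash
# Hypothetical helper: read the cluster ID from the ID file and pass it to verify
constellation verify -e 192.0.2.1 --cluster-id "$(jq -r '.clusterID' constellation-id.json)"
```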