docs: publish to 2.0

Thomas Tendyck 2022-09-28 16:31:47 +02:00 committed by Thomas Tendyck
parent c7e8fe0bd6
commit 27e8604a9b
9 changed files with 69 additions and 84 deletions


@@ -180,7 +180,7 @@ The following steps guide you through the process of creating a cluster and depl
:::tip
-On Azure, you may need to wait 15+ min. at this point for role assignments to propagate.
+On Azure, you may need to wait 15+ minutes at this point for role assignments to propagate.
:::


@@ -11,19 +11,9 @@ See the [architecture](../architecture/orchestration.md) section for details on
This step creates the necessary resources for your cluster in your cloud environment.
-### Prerequisites
-Before creating your cluster you need to decide on
-* the initial size of your cluster (the number of control-plane and worker nodes)
-* the machine type of your nodes (depending on the availability in your cloud environment)
-* whether to enable autoscaling for your cluster (automatically adding and removing nodes depending on resource demands)
-You can find the currently supported machine types for your cloud environment in the [installation guide](../architecture/orchestration.md).
### Configuration
-Constellation can generate a configuration file for your cloud provider:
+Generate a configuration file for your cloud service provider (CSP):
<tabs groupId="csp">
<tabItem value="azure" label="Azure">
@@ -42,27 +32,28 @@ constellation config generate gcp
</tabItem>
</tabs>
-This creates the file `constellation-conf.yaml` in the current directory. You must edit it before you can execute the next steps.
+This creates the file `constellation-conf.yaml` in the current directory. [Fill in your CSP-specific information](../getting-started/first-steps.md#create-a-cluster) before you continue.
-Next, download the latest trusted measurements for your configured image.
+Next, download the trusted measurements for your configured image.
```bash
constellation config fetch-measurements
```
-For more details, see the [verification section](../workflows/verify-cluster.md).
+For details, see the [verification section](../workflows/verify-cluster.md).
### Create
+Choose the initial size of your cluster.
The following command creates a cluster with one control-plane and two worker nodes:
```bash
-constellation create --control-plane-nodes 1 --worker-nodes 2 -y
+constellation create --control-plane-nodes 1 --worker-nodes 2
```
-For details on the flags and a list of supported instance types, consult the command help via `constellation create -h`.
+For details on the flags, consult the command help via `constellation create -h`.
-*create* will store your cluster's configuration to a file named [`constellation-state.json`](../architecture/orchestration.md#installation-process) in your current directory.
+*create* stores your cluster's configuration to a file named [`constellation-state.json`](../architecture/orchestration.md#installation-process) in your current directory.
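As an illustration, a more fault-tolerant setup uses the same flags with larger counts. This is a sketch only; the node counts are examples, and availability and quotas in your cloud environment determine what you can actually request:

```bash
# Sketch: three control-plane nodes tolerate the loss of one etcd member,
# five worker nodes leave headroom for rolling updates (example sizes only).
constellation create --control-plane-nodes 3 --worker-nodes 5
```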
## The *init* step
@@ -78,11 +69,10 @@ To enable autoscaling in your cluster, add the `--autoscale` flag:
constellation init --autoscale
```
-Next, configure `kubectl` for your Constellation cluster:
+Next, configure `kubectl` for your cluster:
```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
-kubectl get nodes -o wide
```
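As an optional sanity check (assuming `kubectl` is installed locally), you can list the cluster's nodes:

```bash
# List control-plane and worker nodes; all should eventually report Ready.
kubectl get nodes -o wide
```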
🏁 That's it. You've successfully created a Constellation cluster.


@@ -1,8 +1,8 @@
# Recover your cluster
-Recovery of a Constellation cluster means getting a cluster back into a healthy state after too many concurrent node failures in the control plane.
+Recovery of a Constellation cluster means getting it back into a healthy state after too many concurrent node failures in the control plane.
Reasons for an unhealthy cluster can vary from a power outage, or planned reboot, to migration of nodes and regions.
-Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [stateful disk](../architecture/images.md#stateful-disk).
+Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [state disk](../architecture/images.md#state-disk).
Constellation provides a recovery mechanism for cases where the control plane has failed and is unable to replace nodes.
The `constellation recover` command connects to a node, establishes a secure connection using [attested TLS](../architecture/attestation.md#attested-tls-atls), and provides that node with the key to decrypt its stateful disk and continue booting.
@@ -13,23 +13,22 @@ This process has to be repeated until enough nodes are back running for establis
The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, this can be first observed when the Kubernetes API server becomes unresponsive.
-The health status of the Constellation nodes can be checked and monitored via the cloud service provider (CSP).
+You can check the health status of the nodes via the cloud service provider (CSP).
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
-In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each cloud environment.
+In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each CSP.
<tabs groupId="csp">
<tabItem value="azure" label="Azure">
-In the Azure cloud portal find the cluster's resource group `<cluster-name>-<suffix>`
+In the Azure portal, find the cluster's resource group.
-Inside the resource group check that the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>` has enough members in a *Running* state.
+Inside the resource group, open the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>`.
-Open the scale set details page, on the left go to `Settings -> Instances` and check the *Status* field.
+On the left, go to **Settings** > **Instances** and check that enough members are in a *Running* state.
Second, check the boot logs of these *Instances*.
In the scale set's *Instances* view, open the details page of the desired instance.
-Check the serial console output of that instance.
-On the left open the *"Support + troubleshooting" -> "Serial console"* page:
+On the left, go to **Support + troubleshooting** > **Serial console**.
-In the serial console output search for `Waiting for decryption key`.
+In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):
```json
@@ -40,7 +39,7 @@ Similar output to the following means your node was restarted and needs to decry
```
The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
-If that fails, because the control plane is unhealthy, you will see log messages similar to the following:
+If this fails due to an unhealthy control plane, you will see log messages similar to the following:
```json
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["10.9.0.5:30090","10.9.0.6:30090"]}
@@ -51,21 +50,21 @@ If that fails, because the control plane is unhealthy, you will see log messages
{"level":"ERROR","ts":"2022-09-08T09:57:23Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```
-This means that you have to recover the node manually. For this, you need its IP address, which can be obtained from the *Overview* page under *Private IP address*.
+This means that you have to recover the node manually. For this, you need its IP address, which you can obtain from the *Overview* page under *Private IP address*.
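If you prefer the Azure CLI over the portal, a hedged sketch along these lines retrieves the same information. The resource group and scale set suffix are placeholders, and the query field names may differ slightly by CLI version:

```bash
# List the control-plane scale set instances and their provisioning state.
az vmss list-instances \
  --resource-group <resource-group> \
  --name constellation-scale-set-controlplanes-<suffix> \
  --output table

# List the private IP addresses of the scale set's network interfaces.
az vmss nic list \
  --resource-group <resource-group> \
  --vmss-name constellation-scale-set-controlplanes-<suffix> \
  --query "[].ipConfigurations[].privateIpAddress" \
  --output tsv
```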
</tabItem>
<tabItem value="gcp" label="GCP">
First, check that the control plane *Instance Group* has enough members in a *Ready* state.
-Go to *Instance Groups* and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.
+In the GCP Console, go to **Instance Groups** and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.
Second, check the status of the *VM Instances*.
-Go to *VM Instances* and open the details of the desired instance.
+Go to **VM Instances** and open the details of the desired instance.
-Check the serial console output of that instance by opening the *logs -> "Serial port 1 (console)"* page:
+Check the serial console output of that instance by opening the **Logs** > **Serial port 1 (console)** page:
![GCP portal serial console link](../_media/recovery-gcp-serial-console-link.png)
-In the serial console output search for `Waiting for decryption key`.
+In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):
```json
@@ -73,11 +72,10 @@ Similar output to the following means your node was restarted and needs to decry
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
```
The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
-If that fails, because the control plane is unhealthy, you will see log messages similar to the following:
+If this fails due to an unhealthy control plane, you will see log messages similar to the following:
```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["192.168.178.4:30090","192.168.178.2:30090"]}
@@ -88,12 +86,12 @@ If that fails, because the control plane is unhealthy, you will see log messages
{"level":"ERROR","ts":"2022-09-08T10:22:13Z","logger":"rejoinClient","caller":"rejoinclient/client.go:110","msg":"Failed to rejoin on all endpoints"}
```
-This means that you have to recover the node manually. For this, you need its IP address, which can be obtained from the *"VM Instance" -> "network interfaces"* page under *"Primary internal IP address."*
+This means that you have to recover the node manually. For this, you need its IP address, which you can obtain from the **VM Instance** > **network interfaces** table under *Primary internal IP address*.
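Alternatively, a sketch using the gcloud CLI retrieves the same details from the command line (the instance name and zone are placeholders):

```bash
# Print the serial console output of a control-plane instance.
gcloud compute instances get-serial-port-output <instance-name> --zone <zone>

# Print the instance's primary internal IP address.
gcloud compute instances describe <instance-name> --zone <zone> \
  --format="get(networkInterfaces[0].networkIP)"
```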
</tabItem>
</tabs>
-## Recover your cluster
+## Recover a cluster
The following process needs to be repeated until a [member quorum for etcd](https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance) is established.
For example, assume you have 5 control-plane nodes in your cluster and 4 of them have been rebooted due to a maintenance downtime in the cloud environment.
@@ -102,10 +100,9 @@ From there, your cluster will auto heal the remaining 2 control-plane nodes and
Recovering a node requires the following parameters:
-* The node's IP address
-* Access to the master secret of the cluster
+* The node's IP address (see [Identify unhealthy clusters](#identify-unhealthy-clusters) on how to obtain it)
+* The master secret of the cluster
-See the [Identify unhealthy clusters](#identify-unhealthy-clusters) description of how to obtain the node's IP address.
Note that the recovery command needs to connect to the recovering nodes.
Nodes only have private IP addresses in the VPC of the cluster, hence, the command needs to be issued from within the VPC network of the cluster.
The easiest approach is to set up a jump host connected to the VPC network and perform the recovery from there.
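From the jump host, the recovery then boils down to a single CLI call. The flag names below are illustrative only (hypothetical); consult `constellation recover --help` for the exact interface of your CLI version:

```bash
# Hypothetical invocation: point the CLI at the unhealthy node and pass the
# master secret obtained at cluster creation (flag names are illustrative).
constellation recover --endpoint <node-ip> --master-secret <master-secret-file>
```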


@@ -4,7 +4,7 @@ Constellation provides all features of a Kubernetes cluster including scaling an
## Worker node scaling
-[During cluster initialization](create.md#init) you can choose to deploy the [cluster autoscaler](https://github.com/kubernetes/autoscaler). It automatically provisions additional worker nodes so that all pods have a place to run. Alternatively, you can choose to manually scale your cluster up or down:
+[During cluster initialization](create.md#the-init-step) you can choose to deploy the [cluster autoscaler](https://github.com/kubernetes/autoscaler). It automatically provisions additional worker nodes so that all pods have a place to run. Alternatively, you can choose to manually scale your cluster up or down:
<tabs groupId="csp">
<tabItem value="azure" label="Azure">
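For manual scaling on Azure, a sketch along these lines adjusts the worker scale set capacity. The resource group, scale set name, and target size are placeholders:

```bash
# Resize the worker Virtual machine scale set to the desired node count.
az vmss scale \
  --resource-group <resource-group> \
  --name <worker-scale-set-name> \
  --new-capacity 4
```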


@@ -1,8 +1,8 @@
-# Manage SSH Keys
+# Manage SSH keys
-Constellation gives you the capability to create UNIX users which can connect to the cluster nodes over SSH, allowing you to access both control-plane and worker nodes. While the nodes' data partitions are persistent, the system partitions are read-only. Consequently, users need to be re-created upon each restart of a node. This is where the Access Manager comes into effect, ensuring the automatic (re-)creation of all users whenever a node is restarted.
+Constellation allows you to create UNIX users that can connect to both control-plane and worker nodes over SSH. As the system partitions are read-only, users need to be re-created upon each restart of a node. This is automated by the *Access Manager*.
-During the initial creation of the cluster, all users defined in the `ssh-users` section of the Constellation configuration file are automatically created during the initialization process. For persistence, the users are stored in a ConfigMap called `ssh-users`, residing in the `kube-system` namespace. For a running cluster, users can be added and removed by modifying the entries of the ConfigMap and performing a restart of a node.
+On cluster initialization, users defined in the `ssh-users` section of the Constellation configuration file are created and stored in the `ssh-users` ConfigMap in the `kube-system` namespace. For a running cluster, you can add or remove users by modifying the ConfigMap and restarting a node.
## Access Manager
The Access Manager supports all OpenSSH key types. These are RSA, ECDSA (using the `nistp256`, `nistp384`, `nistp521` curves) and Ed25519.
@@ -11,9 +11,9 @@ The Access Manager supports all OpenSSH key types. These are RSA, ECDSA (using t
All users are automatically created with `sudo` capabilities.
:::
-The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` and afterward running a `pause` container to avoid automatic restarts. While technically killing the Pod and letting it restart works for the (re-)creation of users, it doesn't automatically remove users. Thus, a complete node restart is required after making changes to the ConfigMap.
+The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` and afterward running a `pause` container to avoid automatic restarts. While technically killing the Pod and letting it restart works for the (re-)creation of users, it doesn't automatically remove users. Thus, a node restart is required after making changes to the ConfigMap.
-When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`, with the owner of each directory and its content being modified to `root`.
+When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`.
You can update the ConfigMap by:
```bash
@@ -23,7 +23,7 @@ kubectl edit configmap -n kube-system ssh-users
Or alternatively, by modifying and re-applying it with the definition listed in the examples.
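For example, a sketch that adds a user in one step with `kubectl patch` (the user name and key are placeholders; a node restart is still required afterwards):

```bash
# Merge a new user entry into the ssh-users ConfigMap.
kubectl patch configmap ssh-users -n kube-system --type merge \
  -p '{"data":{"anotheruser":"ssh-ed25519 AAAA... anotheruser@example.com"}}'
```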
## Examples
-An example to create an user called `myuser` as part of the `constellation-config.yaml` looks like this:
+You can add a user `myuser` in `constellation-config.yaml` like this:
```yaml
# Create SSH users on Constellation nodes upon the first initialization of the cluster.
@@ -43,7 +43,7 @@ data:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
```
-Entries can be added simply by adding `data` entries:
+You can add users by adding `data` entries:
```yaml
apiVersion: v1


@@ -17,26 +17,24 @@ To address this, Constellation provides CSI drivers for Azure Disk and GCE PD, o
For more details see [encrypted persistent storage](../architecture/encrypted-storage.md).
-## CSI Drivers
+## CSI drivers
Constellation supports the following drivers, which offer node-level encryption and optional integrity protection.
<tabs groupId="csp">
<tabItem value="azure" label="Azure">
-1. [Azure Disk Storage](https://github.com/edgelesssys/constellation-azuredisk-csi-driver)
+**Constellation CSI driver for Azure Disk**:
-Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the example below on how to install the modified Azure Disk CSI driver or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for installation and more information about the Constellation-managed version of the driver. Since Azure Disks are mounted as ReadWriteOnce, they're only available to a single pod.
+Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for more information. Since Azure Disks are mounted as ReadWriteOnce, they're only available to a single pod.
</tabItem>
<tabItem value="gcp" label="GCP">
-1. [Persistent Disk](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver):
+**Constellation CSI driver for GCP Persistent Disk**:
-Mount GCP [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
+Mount [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
This includes support for [volume snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots), which let you create copies of your volume at a specific point in time.
You can use them to bring a volume back to a prior state or provision new volumes.
-Follow the examples listed below to setup the modified GCP PD CSI driver, or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.
+Follow the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.
</tabItem>
</tabs>
@@ -63,7 +61,7 @@ The following installation guide gives an overview of how to securely use CSI-ba
A storage class configures the driver responsible for provisioning storage for persistent volume claims.
A storage class only needs to be created once and can then be used by multiple volumes.
-The following snippet creates a simple storage class using a [Standard SSD](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) as the backing storage device when the first Pod claiming the volume is created.
+The following snippet creates a simple storage class using [Standard SSDs](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) as the backing storage device when the first Pod claiming the volume is created.
```bash
cat <<EOF | kubectl apply -f -
@@ -94,7 +92,7 @@ The following installation guide gives an overview of how to securely use CSI-ba
A storage class configures the driver responsible for provisioning storage for persistent volume claims.
A storage class only needs to be created once and can then be used by multiple volumes.
-The following snippet creates a simple storage class for the GCE PD driver, utilizing [balanced persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs) as the storage backend device when the first Pod claiming the volume is created.
+The following snippet creates a simple storage class using [balanced persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs) as the backing storage device when the first Pod claiming the volume is created.
```bash
cat <<EOF | kubectl apply -f -


@@ -1,25 +1,25 @@
# Upgrade your cluster
-Constellation provides an easy way to upgrade from one release to the next.
+Constellation provides an easy way to upgrade to the next release.
This involves choosing a new VM image to use for all nodes in the cluster and updating the cluster's expected measurements.
## Plan the upgrade
-If you don't already know the image you want to upgrade to, use the `upgrade plan` command to pull in a list of available updates.
+If you don't already know the image you want to upgrade to, use the `upgrade plan` command to pull a list of available updates.
```bash
constellation upgrade plan
```
-The command will let you interactively choose from a list of available updates and prepare your Constellation config file for the next step.
+The command lets you interactively choose from a list of available updates and prepares your Constellation config file for the next step.
-If you plan to use the command in scripts, use the `--file` flag to compile the available options into a YAML file.
+To use the command in scripts, use the `--file` flag to compile the available options into a YAML file.
-You can then manually set the chosen upgrade option in your Constellation config file.
+You can then set the chosen upgrade option in your Constellation config file.
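For scripted use, a sketch could look like this (the output file name is an arbitrary choice):

```bash
# Write the available upgrade options to a YAML file instead of prompting.
constellation upgrade plan --file upgrade-options.yaml
```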
:::caution
-The `constellation upgrade plan` will only work for official Edgeless release images.
+`constellation upgrade plan` only works for official Edgeless release images.
-If your cluster is using a custom image or a debug image, the Constellation CLI will fail to find compatible images.
+If your cluster is using a custom image, the Constellation CLI will fail to find compatible images.
However, you may still use the `upgrade execute` command by manually selecting a compatible image and setting it in your config file.
:::


@@ -1,6 +1,6 @@
# Verify the CLI
-Edgeless Systems uses [sigstore](https://www.sigstore.dev/) to ensure supply-chain security for the Constellation CLI and node images ("artifacts"). sigstore consists of three components: [Cosign](https://docs.sigstore.dev/cosign/overview), [Rekor](https://docs.sigstore.dev/rekor/overview), and Fulcio. Edgeless Systems uses Cosign to sign artifacts. All signatures are automatically uploaded to the public Rekor transparency log, which resides at https://rekor.sigstore.dev/.
+Edgeless Systems uses [sigstore](https://www.sigstore.dev/) to ensure supply-chain security for the Constellation CLI and node images ("artifacts"). sigstore consists of three components: [Cosign](https://docs.sigstore.dev/cosign/overview), [Rekor](https://docs.sigstore.dev/rekor/overview), and Fulcio. Edgeless Systems uses Cosign to sign artifacts. All signatures are uploaded to the public Rekor transparency log, which resides at https://rekor.sigstore.dev/.
:::note
The public key for Edgeless Systems' long-term code-signing key is:
@@ -15,7 +15,7 @@ The public key is also available for download at https://edgeless.systems/es.pub
The Rekor transparency log is a public append-only ledger that verifies and records signatures and associated metadata. The Rekor transparency log enables everyone to observe the sequence of (software) signatures issued by Edgeless Systems and many other parties. The transparency log allows for the public identification of dubious or malicious signatures.
-You should always ensure that (1) your CLI executable was signed with the private key corresponding to the above public key and that (2) there is a corresponding entry in the Rekor transparency log. Both can be done as is described in the following.
+You should always ensure that (1) your CLI executable was signed with the private key corresponding to the above public key and that (2) there is a corresponding entry in the Rekor transparency log. Both can be done as described in the following.
:::info
You don't need to verify the Constellation node images. This is done automatically by your CLI and the rest of Constellation.
@@ -36,7 +36,7 @@ The above performs an offline verification of the provided public key, signature
```shell-session
$ COSIGN_EXPERIMENTAL=1 cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64
-tlog entry verified with uuid: 0629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147 index: 3334047
+tlog entry verified with uuid: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13 index: 3477047
Verified OK
```
@@ -50,28 +50,28 @@ To further inspect the public Rekor transparency log, [install the Rekor CLI](ht
$ rekor-cli search --artifact constellation-linux-amd64
Found matching entries (listed by UUID):
-362f8ecba72f43260629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147
+362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
```
With this UUID you can get the full entry from the transparency log:
```shell-session
-$ rekor-cli get --uuid=362f8ecba72f43260629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147
+$ rekor-cli get --uuid=362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d
-Index: 3334047
+Index: 3477047
-IntegratedTime: 2022-08-31T08:36:25Z
+IntegratedTime: 2022-09-12T22:28:16Z
-UUID: 0629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147
+UUID: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
Body: {
  "HashedRekordObj": {
    "data": {
      "hash": {
        "algorithm": "sha256",
-        "value": "7cdc7a7101b215058264279b8d8f624e4e48b6b42cd54857a5e02daf1a1b014c"
+        "value": "40e137b9b9b8204d672642fd1e181c6d5ccb50cfc5cc7fcbb06a8c2c78f44aff"
      }
    },
    "signature": {
-      "content": "MEYCIQDdL8fuhtFk6ON4b6kW6bvLMXqvw37nm8/UiLcYKjogsAIhAODZCdS1HgHvFJ5KFxT1JZzRN2wPdn3HZsiP0+3q6zsL",
+      "content": "MEUCIQCSER3mGj+j5Pr2kOXTlCIHQC3gT30I7qkLr9Awt6eUUQIgcLUKRIlY50UN8JGwVeNgkBZyYD8HMxwC/LFRWoMn180=",
      "publicKey": {
        "content": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFZjhGMWhwbXdFK1lDRlh6akd0YVFjckw2WFpWVApKbUVlNWlTTHZHMVN5UVNBZXc3V2RNS0Y2bzl0OGUyVEZ1Q2t6bE9oaGx3czJPSFdiaUZabkZXQ0Z3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg=="
      }
@@ -84,7 +84,7 @@ The field `publicKey` should contain Edgeless Systems' public key in Base64 enco
You can get an exhaustive list of artifact signatures issued by Edgeless Systems via the following command:
```bash
-$ rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509
+rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509
```
Edgeless Systems monitors this list to detect potential unauthorized use of its private key.
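If you want to keep an eye on that list yourself, a minimal sketch (assuming `rekor-cli` is installed) flags when the number of returned entries changes between runs:

```bash
# Count the entries for Edgeless Systems' public key and compare against a
# previously recorded count (stored in rekor-count.txt in the working dir).
current=$(rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509 | wc -l)
previous=$(cat rekor-count.txt 2>/dev/null || echo 0)
if [ "$current" -ne "$previous" ]; then
  echo "Rekor entry count changed: $previous -> $current"
fi
echo "$current" > rekor-count.txt
```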


@@ -12,7 +12,7 @@ constellation config fetch-measurements
This command performs the following steps:
1. Download the signed measurements for the configured image. By default, this will use Edgeless Systems' public measurement registry.
-2. Verify the signed images. This will use Edgeless Systems' [public key](https://edgeless.systems/es.pub).
+2. Verify the signature of the measurements. This will use Edgeless Systems' [public key](https://edgeless.systems/es.pub).
3. Write measurements into configuration file.
## The *verify* command
@@ -30,7 +30,7 @@ constellation verify [--cluster-id ...]
From the attestation statement, the command verifies the following properties:
* The cluster is using the correct Confidential VM (CVM) type.
* Inside the CVMs, the correct node images are running. The node images are identified through the measurements obtained in the previous step.
-* The unique ID of the cluster matches the one passed in via `--cluster-id`.
+* The unique ID of the cluster matches the one from your `constellation-id.json` file or passed in via `--cluster-id`.
Once the above properties are verified, you know that you are talking to the right Constellation cluster and it's in a good and trustworthy shape.
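For example, a verification run with an explicit ID could look like this (the cluster ID value is a placeholder taken from your `constellation-id.json`):

```bash
# Verify the cluster's attestation against the expected measurements and ID.
constellation verify --cluster-id <base64-encoded-cluster-id>
```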