mirror of https://github.com/edgelesssys/constellation.git, synced 2024-10-01 01:36:09 -04:00

docs: fixes and rewording of workflows

This commit is contained in: parent dfa4dd9b85, commit c7e8fe0bd6
@@ -11,18 +11,9 @@ See the [architecture](../architecture/orchestration.md) section for details on

This step creates the necessary resources for your cluster in your cloud environment.

### Prerequisites

Before creating your cluster you need to decide on

* the initial size of your cluster (the number of control-plane and worker nodes)
* the machine type of your nodes (depending on the availability in your cloud environment)

You can find the currently supported machine types for your cloud environment in the [installation guide](../architecture/orchestration.md).

### Configuration

-Constellation can generate a configuration file for your cloud provider:
+Generate a configuration file for your cloud service provider (CSP):

<tabs groupId="csp">
<tabItem value="azure" label="Azure">
@@ -41,27 +32,28 @@ constellation config generate gcp

</tabItem>
</tabs>

-This creates the file `constellation-conf.yaml` in the current directory. You must edit it before you can execute the next steps.
+This creates the file `constellation-conf.yaml` in the current directory. [Fill in your CSP-specific information](../getting-started/first-steps.md#create-a-cluster) before you continue.

-Next, download the latest trusted measurements for your configured image.
+Next, download the trusted measurements for your configured image.

```bash
constellation config fetch-measurements
```

-For more details, see the [verification section](../workflows/verify-cluster.md).
+For details, see the [verification section](../workflows/verify-cluster.md).

### Create

+Choose the initial size of your cluster.
The following command creates a cluster with one control-plane and two worker nodes:

```bash
-constellation create --control-plane-nodes 1 --worker-nodes 2 -y
+constellation create --control-plane-nodes 1 --worker-nodes 2
```

-For details on the flags and a list of supported instance types, consult the command help via `constellation create -h`.
+For details on the flags, consult the command help via `constellation create -h`.

-*create* will store your cluster's configuration to a file named [`constellation-state.json`](../architecture/orchestration.md#installation-process) in your current directory.
+*create* stores your cluster's configuration to a file named [`constellation-state.json`](../architecture/orchestration.md#installation-process) in your current directory.
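As a sketch of how these steps chain together: before running `constellation create`, you can check that the configuration file from the earlier steps actually exists. This guard is illustrative only; the file name is the one documented above.

```shell
# Pre-flight guard for the create step: the configuration file written by
# `constellation config generate` must exist (and be filled in) first.
if [ -f constellation-conf.yaml ]; then
  echo "configuration present"
else
  echo "run 'constellation config generate <csp>' first" >&2
fi
```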
## The *init* step

@@ -71,11 +63,10 @@ The following command initializes and bootstraps your cluster:

constellation init
```

-Next, configure `kubectl` for your Constellation cluster:
+Next, configure `kubectl` for your cluster:

```bash
export KUBECONFIG="$PWD/constellation-admin.conf"
kubectl get nodes -o wide
```

🏁 That's it. You've successfully created a Constellation cluster.
@@ -1,35 +1,33 @@

# Recover your cluster

-Recovery of a Constellation cluster means getting a cluster back into a healthy state after too many concurrent node failures in the control plane.
+Recovery of a Constellation cluster means getting it back into a healthy state after too many concurrent node failures in the control plane.
Reasons for an unhealthy cluster can vary from a power outage, or planned reboot, to migration of nodes and regions.
-Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [stateful disk](../architecture/images.md#stateful-disk).
+Recovery events are rare, because Constellation is built for high availability and automatically and securely replaces failed nodes. When a node is replaced, Constellation's control plane first verifies the new node before it sends the node the cryptographic keys required to decrypt its [state disk](../architecture/images.md#state-disk).

Constellation provides a recovery mechanism for cases where the control plane has failed and is unable to replace nodes.
-The `constellation recover` command connects to a node, establishes a secure connection using [attested TLS](../architecture/attestation.md#attested-tls-atls), and provides that node with the key to decrypt its stateful disk and continue booting.
-This process has to be repeated until enough nodes are back running for establishing a [member quorum for etcd](https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance) and the Kubernetes state can be recovered.
+The `constellation recover` command securely connects to all nodes in need of recovery using [attested TLS](../architecture/attestation.md#attested-tls-atls) and provides them with the keys to decrypt their state disks and continue booting.
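The etcd failure tolerance referenced above follows a simple rule: a quorum of floor(n/2)+1 members must stay up, so an n-member control plane tolerates n minus that quorum in concurrent failures. A purely illustrative calculation:

```shell
# etcd quorum arithmetic: an n-member cluster needs floor(n/2)+1 members
# up, so it tolerates n - (floor(n/2)+1) concurrent failures.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
# members=1 quorum=1 tolerated_failures=0
# members=3 quorum=2 tolerated_failures=1
# members=5 quorum=3 tolerated_failures=2
```

This is why an odd number of control-plane nodes is the usual choice: going from 3 to 4 members raises the quorum without raising the failure tolerance.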
## Identify unhealthy clusters

The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, this can be first observed when the Kubernetes API server becomes unresponsive.

-The health status of the Constellation nodes can be checked and monitored via the cloud service provider (CSP).
+You can check the health status of the nodes via the cloud service provider (CSP).
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
-In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each cloud environment.
+In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each CSP.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

-In the Azure cloud portal find the cluster's resource group `<cluster-name>-<suffix>`
-Inside the resource group check that the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>` has enough members in a *Running* state.
-Open the scale set details page, on the left go to `Settings -> Instances` and check the *Status* field.
+In the Azure portal, find the cluster's resource group.
+Inside the resource group, open the control plane *Virtual machine scale set* `constellation-scale-set-controlplanes-<suffix>`.
+On the left, go to **Settings** > **Instances** and check that enough members are in a *Running* state.

Second, check the boot logs of these *Instances*.
In the scale set's *Instances* view, open the details page of the desired instance.
-Check the serial console output of that instance.
-On the left open the *"Support + troubleshooting" -> "Serial console"* page:
+On the left, go to **Support + troubleshooting** > **Serial console**.

-In the serial console output search for `Waiting for decryption key`.
+In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):
```json
@@ -40,7 +38,7 @@ Similar output to the following means your node was restarted and needs to decry
```

The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
-If that fails, because the control plane is unhealthy, you will see log messages similar to the following:
+If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T09:56:43Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["10.9.0.5:30090","10.9.0.6:30090"]}
@@ -57,15 +55,15 @@ This means that you have to recover the node manually.
<tabItem value="gcp" label="GCP">

First, check that the control plane *Instance Group* has enough members in a *Ready* state.
-Go to *Instance Groups* and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.
+In the GCP Console, go to **Instance Groups** and check the group for the cluster's control plane `<cluster-name>-control-plane-<suffix>`.

Second, check the status of the *VM Instances*.
-Go to *VM Instances* and open the details of the desired instance.
-Check the serial console output of that instance by opening the *logs -> "Serial port 1 (console)"* page:
+Go to **VM Instances** and open the details of the desired instance.
+Check the serial console output of that instance by opening the **Logs** > **Serial port 1 (console)** page:

![GCP portal serial console link](../_media/recovery-gcp-serial-console-link.png)

-In the serial console output search for `Waiting for decryption key`.
+In the serial console output, search for `Waiting for decryption key`.
Similar output to the following means your node was restarted and needs to decrypt the [state disk](../architecture/images.md#state-disk):

```json
@@ -73,11 +71,10 @@ Similar output to the following means your node was restarted and needs to decry
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"setupManager","caller":"setup/setup.go:72","msg":"Preparing existing state disk"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:65","msg":"Starting RejoinClient"}
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"recoveryServer","caller":"recoveryserver/server.go:59","msg":"Starting RecoveryServer"}
-
```

The node will then try to connect to the [*JoinService*](../architecture/components.md#joinservice) and obtain the decryption key.
-If that fails, because the control plane is unhealthy, you will see log messages similar to the following:
+If this fails due to an unhealthy control plane, you will see log messages similar to the following:

```json
{"level":"INFO","ts":"2022-09-08T10:21:53Z","logger":"rejoinClient","caller":"rejoinclient/client.go:77","msg":"Received list with JoinService endpoints","endpoints":["192.168.178.4:30090","192.168.178.2:30090"]}
@@ -93,12 +90,12 @@ This means that you have to recover the node manually.

</tabItem>
</tabs>

-## Recover your cluster
+## Recover a cluster

Recovering a cluster requires the following parameters:

* The `constellation-id.json` file in your working directory or the cluster's load balancer IP address
-* Access to the master secret of the cluster
+* The master secret of the cluster

A cluster can be recovered like this:
@@ -46,7 +46,7 @@ kubectl -n kube-system get nodes

### Manual scaling

-Alternatively, you can choose to manually scale your cluster up or down:
+Alternatively, you can manually scale your cluster up or down:

<tabs groupId="csp">
<tabItem value="azure" label="Azure">
@@ -1,8 +1,8 @@

-# Manage SSH Keys
+# Manage SSH keys

-Constellation gives you the capability to create UNIX users which can connect to the cluster nodes over SSH, allowing you to access both control-plane and worker nodes. While the nodes' data partitions are persistent, the system partitions are read-only. Consequently, users need to be re-created upon each restart of a node. This is where the Access Manager comes into effect, ensuring the automatic (re-)creation of all users whenever a node is restarted.
+Constellation allows you to create UNIX users that can connect to both control-plane and worker nodes over SSH. As the system partitions are read-only, users need to be re-created upon each restart of a node. This is automated by the *Access Manager*.

-During the initial creation of the cluster, all users defined in the `ssh-users` section of the Constellation configuration file are automatically created during the initialization process. For persistence, the users are stored in a ConfigMap called `ssh-users`, residing in the `kube-system` namespace. For a running cluster, users can be added and removed by modifying the entries of the ConfigMap and performing a restart of a node.
+On cluster initialization, users defined in the `ssh-users` section of the Constellation configuration file are created and stored in the `ssh-users` ConfigMap in the `kube-system` namespace. For a running cluster, you can add or remove users by modifying the ConfigMap and restarting a node.

## Access Manager
The Access Manager supports all OpenSSH key types. These are RSA, ECDSA (using the `nistp256`, `nistp384`, `nistp521` curves) and Ed25519.
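As a quick illustration of the supported key types listed above, a small shell check of a public-key line's type prefix (the user name and key material are hypothetical; the type prefixes follow standard OpenSSH naming):

```shell
# Check an authorized_keys-style line against the key types the Access
# Manager supports (RSA, ECDSA on the NIST curves, Ed25519).
key="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... myuser"
case "${key%% *}" in
  ssh-rsa|ssh-ed25519|ecdsa-sha2-nistp256|ecdsa-sha2-nistp384|ecdsa-sha2-nistp521)
    echo "supported key type: ${key%% *}" ;;
  *)
    echo "unsupported key type: ${key%% *}" >&2 ;;
esac
# supported key type: ssh-ed25519
```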
@@ -11,9 +11,9 @@ The Access Manager supports all OpenSSH key types. These are RSA, ECDSA (using t

All users are automatically created with `sudo` capabilities.
:::

-The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` and afterward running a `pause` container to avoid automatic restarts. While technically killing the Pod and letting it restart works for the (re-)creation of users, it doesn't automatically remove users. Thus, a complete node restart is required after making changes to the ConfigMap.
+The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` and afterward running a `pause` container to avoid automatic restarts. While technically killing the Pod and letting it restart works for the (re-)creation of users, it doesn't automatically remove users. Thus, a node restart is required after making changes to the ConfigMap.

-When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`, with the owner of each directory and its content being modified to `root`.
+When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`.

You can update the ConfigMap by:
```bash
@@ -23,7 +23,7 @@ kubectl edit configmap -n kube-system ssh-users

Or alternatively, by modifying and re-applying it with the definition listed in the examples.

## Examples
-An example to create an user called `myuser` as part of the `constellation-config.yaml` looks like this:
+You can add a user `myuser` in `constellation-config.yaml` like this:

```yaml
# Create SSH users on Constellation nodes upon the first initialization of the cluster.
@@ -43,7 +43,7 @@ data:

myuser: "ssh-rsa AAAA...mgNJd9jc="
```

-Entries can be added simply by adding `data` entries:
+You can add users by adding `data` entries:

```yaml
apiVersion: v1
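The "modify and re-apply" route mentioned above can be sketched like this. The ConfigMap name, namespace, and `data` format match the examples in this page; writing the manifest to a file first simply keeps the change reviewable, and the final `kubectl apply` assumes a reachable cluster:

```shell
# Materialize the ssh-users ConfigMap in a file before re-applying it.
cat <<EOF > ssh-users.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-users
  namespace: kube-system
data:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
EOF
echo "wrote ssh-users.yaml"
# Re-apply with: kubectl apply -f ssh-users.yaml
# (remember: a node restart is required for the change to take effect)
```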
@@ -17,26 +17,24 @@ To address this, Constellation provides CSI drivers for Azure Disk and GCE PD, o

For more details see [encrypted persistent storage](../architecture/encrypted-storage.md).

-## CSI Drivers
+## CSI drivers

Constellation supports the following drivers, which offer node-level encryption and optional integrity protection.

<tabs groupId="csp">
<tabItem value="azure" label="Azure">

-1. [Azure Disk Storage](https://github.com/edgelesssys/constellation-azuredisk-csi-driver)
-Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the example below on how to install the modified Azure Disk CSI driver or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for installation and more information about the Constellation-managed version of the driver. Since Azure Disks are mounted as ReadWriteOnce, they're only available to a single pod.
+**Constellation CSI driver for Azure Disk**:
+Mount Azure [Disk Storage](https://azure.microsoft.com/en-us/services/storage/disks/#overview) into your Constellation cluster. See the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-azuredisk-csi-driver) for more information. Since Azure Disks are mounted as ReadWriteOnce, they're only available to a single pod.

</tabItem>
<tabItem value="gcp" label="GCP">

-1. [Persistent Disk](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver):
-Mount GCP [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
-This includes support for [volume snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots), which let you create copies of your volume at a specific point in time.
-You can use them to bring a volume back to a prior state or provision new volumes.
-Follow the examples listed below to setup the modified GCP PD CSI driver, or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.
+**Constellation CSI driver for GCP Persistent Disk**:
+Mount [Persistent Disk](https://cloud.google.com/persistent-disk) block storage into your Constellation cluster.
+This includes support for [volume snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots), which let you create copies of your volume at a specific point in time.
+You can use them to bring a volume back to a prior state or provision new volumes.
+Follow the instructions on how to [install the Constellation CSI driver](#installation) or check out the [repository](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver) for information about the configuration.

</tabItem>
</tabs>
@@ -63,7 +61,7 @@ The following installation guide gives an overview of how to securely use CSI-ba

A storage class configures the driver responsible for provisioning storage for persistent volume claims.
A storage class only needs to be created once and can then be used by multiple volumes.
-The following snippet creates a simple storage class using a [Standard SSD](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) as the backing storage device when the first Pod claiming the volume is created.
+The following snippet creates a simple storage class using [Standard SSDs](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#standard-ssds) as the backing storage device when the first Pod claiming the volume is created.

```bash
cat <<EOF | kubectl apply -f -
@@ -94,7 +92,7 @@ The following installation guide gives an overview of how to securely use CSI-ba

A storage class configures the driver responsible for provisioning storage for persistent volume claims.
A storage class only needs to be created once and can then be used by multiple volumes.
-The following snippet creates a simple storage class for the GCE PD driver, utilizing [balanced persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs) as the storage backend device when the first Pod claiming the volume is created.
+The following snippet creates a simple storage class using [balanced persistent disks](https://cloud.google.com/compute/docs/disks#pdspecs) as the backing storage device when the first Pod claiming the volume is created.

```bash
cat <<EOF | kubectl apply -f -
@@ -1,25 +1,25 @@

# Upgrade your cluster

-Constellation provides an easy way to upgrade from one release to the next.
+Constellation provides an easy way to upgrade to the next release.
This involves choosing a new VM image to use for all nodes in the cluster and updating the cluster's expected measurements.

## Plan the upgrade

-If you don't already know the image you want to upgrade to, use the `upgrade plan` command to pull in a list of available updates.
+If you don't already know the image you want to upgrade to, use the `upgrade plan` command to pull a list of available updates.

```bash
constellation upgrade plan
```

-The command will let you interactively choose from a list of available updates and prepare your Constellation config file for the next step.
+The command lets you interactively choose from a list of available updates and prepares your Constellation config file for the next step.

-If you plan to use the command in scripts, use the `--file` flag to compile the available options into a YAML file.
-You can then manually set the chosen upgrade option in your Constellation config file.
+To use the command in scripts, use the `--file` flag to compile the available options into a YAML file.
+You can then set the chosen upgrade option in your Constellation config file.

:::caution

-The `constellation upgrade plan` will only work for official Edgeless release images.
-If your cluster is using a custom image or a debug image, the Constellation CLI will fail to find compatible images.
+`constellation upgrade plan` only works for official Edgeless release images.
+If your cluster is using a custom image, the Constellation CLI will fail to find compatible images.
However, you may still use the `upgrade execute` command by manually selecting a compatible image and setting it in your config file.

:::
@@ -1,6 +1,6 @@

# Verify the CLI

-Edgeless Systems uses [sigstore](https://www.sigstore.dev/) to ensure supply-chain security for the Constellation CLI and node images ("artifacts"). sigstore consists of three components: [Cosign](https://docs.sigstore.dev/cosign/overview), [Rekor](https://docs.sigstore.dev/rekor/overview), and Fulcio. Edgeless Systems uses Cosign to sign artifacts. All signatures are automatically uploaded to the public Rekor transparency log, which resides at https://rekor.sigstore.dev/.
+Edgeless Systems uses [sigstore](https://www.sigstore.dev/) to ensure supply-chain security for the Constellation CLI and node images ("artifacts"). sigstore consists of three components: [Cosign](https://docs.sigstore.dev/cosign/overview), [Rekor](https://docs.sigstore.dev/rekor/overview), and Fulcio. Edgeless Systems uses Cosign to sign artifacts. All signatures are uploaded to the public Rekor transparency log, which resides at https://rekor.sigstore.dev/.

:::note
The public key for Edgeless Systems' long-term code-signing key is:
@@ -15,7 +15,7 @@ The public key is also available for download at https://edgeless.systems/es.pub

The Rekor transparency log is a public append-only ledger that verifies and records signatures and associated metadata. The Rekor transparency log enables everyone to observe the sequence of (software) signatures issued by Edgeless Systems and many other parties. The transparency log allows for the public identification of dubious or malicious signatures.

-You should always ensure that (1) your CLI executable was signed with the private key corresponding to the above public key and that (2) there is a corresponding entry in the Rekor transparency log. Both can be done as is described in the following.
+You should always ensure that (1) your CLI executable was signed with the private key corresponding to the above public key and that (2) there is a corresponding entry in the Rekor transparency log. Both can be done as described in the following.

:::info
You don't need to verify the Constellation node images. This is done automatically by your CLI and the rest of Constellation.
@@ -36,7 +36,7 @@ The above performs an offline verification of the provided public key, signature

```shell-session
$ COSIGN_EXPERIMENTAL=1 cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-linux-amd64.sig constellation-linux-amd64

-tlog entry verified with uuid: 0629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147 index: 3334047
+tlog entry verified with uuid: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13 index: 3477047
Verified OK
```
@@ -50,28 +50,28 @@ To further inspect the public Rekor transparency log, [install the Rekor CLI](ht

$ rekor-cli search --artifact constellation-linux-amd64

Found matching entries (listed by UUID):
-362f8ecba72f43260629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147
+362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
```

With this UUID you can get the full entry from the transparency log:

```shell-session
-$ rekor-cli get --uuid=362f8ecba72f43260629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147
+$ rekor-cli get --uuid=362f8ecba72f4326afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13

LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d
-Index: 3334047
-IntegratedTime: 2022-08-31T08:36:25Z
-UUID: 0629f03c379219f4ae1b99819fd4c266a39490a338ec24321198ba6ccc16f147
+Index: 3477047
+IntegratedTime: 2022-09-12T22:28:16Z
+UUID: afaba7f6635b3e058888692841848e5514357315be9528474b23f5dcccb82b13
Body: {
  "HashedRekordObj": {
    "data": {
      "hash": {
        "algorithm": "sha256",
-        "value": "7cdc7a7101b215058264279b8d8f624e4e48b6b42cd54857a5e02daf1a1b014c"
+        "value": "40e137b9b9b8204d672642fd1e181c6d5ccb50cfc5cc7fcbb06a8c2c78f44aff"
      }
    },
    "signature": {
-      "content": "MEYCIQDdL8fuhtFk6ON4b6kW6bvLMXqvw37nm8/UiLcYKjogsAIhAODZCdS1HgHvFJ5KFxT1JZzRN2wPdn3HZsiP0+3q6zsL",
+      "content": "MEUCIQCSER3mGj+j5Pr2kOXTlCIHQC3gT30I7qkLr9Awt6eUUQIgcLUKRIlY50UN8JGwVeNgkBZyYD8HMxwC/LFRWoMn180=",
      "publicKey": {
        "content": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFZjhGMWhwbXdFK1lDRlh6akd0YVFjckw2WFpWVApKbUVlNWlTTHZHMVN5UVNBZXc3V2RNS0Y2bzl0OGUyVEZ1Q2t6bE9oaGx3czJPSFdiaUZabkZXQ0Z3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg=="
      }
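You can also cross-check the `data.hash.value` field of the entry against a locally computed digest of the binary. A sketch, reusing the hash from the example entry above (it only matches the exact artifact that entry signs):

```shell
# Compare the local SHA-256 of the CLI binary against the Rekor entry's
# data.hash.value (value copied from the example entry above).
expected="40e137b9b9b8204d672642fd1e181c6d5ccb50cfc5cc7fcbb06a8c2c78f44aff"
actual="$(sha256sum constellation-linux-amd64 | cut -d' ' -f1)"
if [ "$actual" = "$expected" ]; then
  echo "digest matches Rekor entry"
else
  echo "digest mismatch" >&2
fi
```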
@@ -84,7 +84,7 @@ The field `publicKey` should contain Edgeless Systems' public key in Base64 enco

You can get an exhaustive list of artifact signatures issued by Edgeless Systems via the following command:

```bash
-$ rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509
+rekor-cli search --public-key https://edgeless.systems/es.pub --pki-format x509
```

Edgeless Systems monitors this list to detect potential unauthorized use of its private key.
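As a quick plausibility check, the Base64 `publicKey.content` field from the Rekor entry above decodes back to a PEM public key, which should equal the es.pub key published by Edgeless Systems:

```shell
# Decode the publicKey.content field of the Rekor entry; the result should
# be the PEM-encoded public key (compare it against es.pub).
echo "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFZjhGMWhwbXdFK1lDRlh6akd0YVFjckw2WFpWVApKbUVlNWlTTHZHMVN5UVNBZXc3V2RNS0Y2bzl0OGUyVEZ1Q2t6bE9oaGx3czJPSFdiaUZabkZXQ0Z3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==" | base64 -d
# prints a PEM block starting with -----BEGIN PUBLIC KEY-----
```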
@@ -12,7 +12,7 @@ constellation config fetch-measurements

This command performs the following steps:
1. Download the signed measurements for the configured image. By default, this will use Edgeless Systems' public measurement registry.
-2. Verify the signed images. This will use Edgeless Systems' [public key](https://edgeless.systems/es.pub).
+2. Verify the signature of the measurements. This will use Edgeless Systems' [public key](https://edgeless.systems/es.pub).
3. Write measurements into configuration file.

## The *verify* command

@@ -30,7 +30,7 @@ constellation verify [--cluster-id ...]

From the attestation statement, the command verifies the following properties:
* The cluster is using the correct Confidential VM (CVM) type.
* Inside the CVMs, the correct node images are running. The node images are identified through the measurements obtained in the previous step.
-* The unique ID of the cluster matches the one passed in via `--cluster-id`.
+* The unique ID of the cluster matches the one from your `constellation-id.json` file or passed in via `--cluster-id`.

Once the above properties are verified, you know that you are talking to the right Constellation cluster and it's in a good and trustworthy shape.