dev-docs: move into top-level dir (#924)

Moritz Eckert 2023-01-10 14:18:41 +01:00 committed by GitHub
parent c19e894d43
commit b2f8f72f1e
11 changed files with 5 additions and 5 deletions


@@ -0,0 +1,114 @@
# Build
The following are instructions for building all components in the constellation repository, except for images. A manual on how to build images locally can be found in the [image README](/image/README.md).
Prerequisites:
* 20 GB disk space
* [Latest version of Go](https://go.dev/doc/install).
* [Docker](https://docs.docker.com/engine/install/). On Ubuntu 22.04 it can be installed with `sudo apt update && sudo apt install docker.io`. As the build spawns Docker containers, your user account either needs to be in the `docker` group (add it with `sudo usermod -a -G docker $USER`) or you have to run builds with `sudo`. When using `sudo`, remember that your root user might (depending on your distro and local config) not have the Go binary in its PATH. The current PATH can be forwarded to the root environment with `sudo env PATH=$PATH <cmd>` (see the sketch after the build commands below).
* Packages on Ubuntu:
```sh
sudo apt install build-essential cmake libssl-dev pkg-config libcryptsetup12 libcryptsetup-dev
```
* Packages on Fedora:
```sh
sudo dnf install @development-tools pkg-config cmake openssl-devel cryptsetup-libs cryptsetup-devel
```
```sh
mkdir build
cd build
cmake ..
make -j`nproc`
```
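If you have to build as root (see the Docker prerequisite above), a minimal sketch of forwarding your PATH to the root environment, run from within the `build` directory, could look like this:
```sh
# Sketch: run the build as root while keeping your user's PATH so the go binary is found
sudo env PATH=$PATH cmake ..
sudo env PATH=$PATH make -j"$(nproc)"
```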
# Test
You can run all integration and unit tests like this:
```sh
ctest -j `nproc`
```
You can limit test execution to specific targets, e.g. `ctest -R unit-*` to only run unit tests.
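For example, assuming the test targets follow the `unit-*` and `integration-*` naming implied above, you could restrict a run like this:
```sh
# Run only unit tests (the target naming is an assumption based on the -R example above)
ctest -j "$(nproc)" -R 'unit-.*'
# Run only integration tests
ctest -j "$(nproc)" -R 'integration-.*'
```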
Some of the tests rely on libvirt and won't work if you don't have a virtualization capable CPU. You can find instructions on setting up libvirt in our [QEMU README](qemu.md).
# Deploy
> :warning: Debug images are not safe to use in production environments. :warning:
The debug images will open an additional **unsecured** port (4000) which accepts any binary to be run on the target machine. **Make sure that those machines are not exposed to the internet.**
## Cloud
To familiarize yourself with debugd and learn how to deploy a cluster using it, read [this](/debugd/README.md) manual.
If you want to deploy a cluster for production, please refer to our user documentation [here](https://docs.edgeless.systems/constellation/getting-started/first-steps#create-a-cluster).
## Locally
For quicker iteration cycles during development, you might want to set up a local cluster.
You can do this by utilizing our QEMU setup.
Instructions on how to set it up can be found in the [QEMU README](qemu.md).
# Verification
In order to verify your cluster we describe a [verification workflow](https://docs.edgeless.systems/constellation/workflows/verify-cluster) in our official docs.
Apart from that you can also reproduce some of the measurements described in the [docs](https://docs.edgeless.systems/constellation/architecture/attestation#runtime-measurements) locally.
To do so, we built a tool that creates a VM, collects the PCR values, and reports them to you.
To run the tool, execute the following command in `/hack/image-measurement`:
```sh
go run . -path <image_path> -type <image_type>
```
`<image_path>` needs to point to a valid image file.
The image can be either in raw or QEMU's `qcow2` format.
This format is specified in the `<image_type>` argument.
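For example, assuming you downloaded a release image to `constellation.qcow2` (hypothetical file name), the invocation would be:
```sh
go run . -path ./constellation.qcow2 -type qcow2
```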
You can compare the values of PCR 4, 8 and 9 to the ones you are seeing in your `constellation-conf.yaml`.
The PCR values depend on the image you specify in the `path` argument.
Therefore, if you want to verify a cluster deployed with a release image you will have to download the images first.
After collecting the measurements you can put them into your `constellation-conf.yaml` under the `measurements` key in order to enforce them.
# Image export
To download an image you will have to export it first.
Below you find general instructions on how to do this for GCP and Azure.
## GCP
In order to download an image you will have to export it to a bucket you have access to:
* "Owner" permissions on the project
* "Storage Admin" permissions on the bucket
* Export with:
```bash
gcloud compute images export --image=<image_path> --destination-uri=<bucket_uri> --export-format=qcow2 --project=<image_project>
```
* Click on "Download" on the created object
## Azure
To download an image from Azure you will have to create a disk from the image and generate a download link for that disk:
```bash
#!/usr/bin/env bash
VERSION=0.0.1
TARGET_DISK=export-${VERSION}
az disk create -g <resource_group> -l <target_region> -n $TARGET_DISK --hyper-v-generation V2 --os-type Linux --sku standard_lrs --security-type TrustedLaunch --gallery-image-reference <image_path>
az disk grant-access -n $TARGET_DISK -g constellation-images --access-level Read --duration-in-seconds 3600 | jq -r .accessSas
```
The duration value specifies how long the download link remains usable.
Depending on your internet connection, you might have to increase it.
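With the SAS URL printed by `az disk grant-access`, a download sketch (URL and file name are placeholders) looks like this:
```bash
# Download the exported disk via the SAS URL; azcopy and curl both work
azcopy copy "<sas_url>" ./constellation.vhd
# or, without azcopy:
curl -fL -o constellation.vhd "<sas_url>"
```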


@@ -0,0 +1,106 @@
# Actions & Workflows
## Manual Trigger (workflow_dispatch)
From the web UI, it is currently not possible to run a `workflow_dispatch`-based workflow on a specific branch as long as the workflow is not yet available on the `main` branch. If you would like to test your pipeline changes on a branch, use the [GitHub CLI](https://github.com/cli/cli):
```bash
gh workflow run e2e-test-manual.yml \
--ref feat/e2e_pipeline \ # On your specific branch!
-F cloudProvider=gcp \ # With your ...
-F controlNodesCount=1 -F workerNodesCount=2 \ # ... settings
-F machineType=n2d-standard-4
```
### E2E Test Suites
Here are some examples of test suites you might want to run. Values for `sonobuoyTestSuiteCmd`:
* `--mode quick`
* Runs a set of tests that are known to be quick to execute (< 1 min)
* `--e2e-focus "Services should be able to create a functioning NodePort service"`
* Runs a specific test
* `--mode certified-conformance`
* For K8s conformance certification test suite
Check [Sonobuoy docs](https://sonobuoy.io/docs/latest/e2eplugin/) for more examples.
When using `--mode` be aware that `--e2e-focus` and `--e2e-skip` will be overwritten. [Check in the source code](https://github.com/vmware-tanzu/sonobuoy/blob/e709787426316423a4821927b1749d5bcc90cb8c/cmd/sonobuoy/app/modes.go#L130) what the different modes do.
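For example, assuming the manual e2e workflow exposes `sonobuoyTestSuiteCmd` as a dispatch input (as the event payload below suggests), a full invocation could look like this:
```bash
gh workflow run e2e-test-manual.yml \
  --ref feat/e2e_pipeline \
  -F cloudProvider=gcp \
  -F controlNodesCount=1 -F workerNodesCount=2 \
  -F machineType=n2d-standard-4 \
  -F sonobuoyTestSuiteCmd="--mode quick"
```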
## Local Development
Using [***act***](https://github.com/nektos/act) you can run GitHub actions locally.
**These instructions are for internal use.**
In case you want to use the E2E actions externally, you need to adjust other configuration parameters.
Check the assignments made in the [E2E action](/.github/actions/e2e_test/action.yml) and adjust any hard-coded values.
### Specific Jobs
```bash
act -j e2e-test-gcp
```
### Simulate a `workflow_dispatch` event
Create a new JSON file to describe the event ([relevant issue](https://github.com/nektos/act/issues/332); there is [no further information about the structure of this file](https://github.com/nektos/act/blob/master/pkg/model/github_context.go#L11)):
```json
{
"action": "workflow_dispatch",
"inputs": {
"workerNodesCount": "2",
"controlNodesCount": "1",
"cloudProvider": "gcp",
"machineType": "n2d-standard-4",
"sonobuoyTestSuiteCmd": "--mode quick"
}
}
```
Then run *act* with the event as input:
```bash
act -j e2e-test-manual --eventpath event.json
```
### Authorizing GCP
To create Kubernetes clusters in GCP, a local copy of the service account secret is required.
1. [Create a new service account key](https://console.cloud.google.com/iam-admin/serviceaccounts/details/112741463528383500960/keys?authuser=0&project=constellation-331613&supportedpurview=project)
2. Create a compact (one-line) JSON representation of the file using `jq -c`
3. Store it in a GitHub Actions secret called `GCP_SERVICE_ACCOUNT` or create a local secret file for *act* to consume:
```bash
$ cat secrets.env
GCP_SERVICE_ACCOUNT={"type":"service_account", ... }
$ act --secret-file secrets.env
```
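Putting steps 2 and 3 together, one way to build the secrets file from a downloaded key (the key file name is a placeholder) could be:
```bash
echo "GCP_SERVICE_ACCOUNT=$(jq -c . service-account-key.json)" > secrets.env
act -j e2e-test-manual --eventpath event.json --secret-file secrets.env
```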
In addition, you need to create a Service Account which Constellation itself is supposed to use. Refer to [First steps](https://docs.edgeless.systems/constellation/getting-started/first-steps#create-a-cluster) in the documentation on how to create it. What you need here specifically is the `gcpServiceAccountKey`, which needs to be stored in a secret called `GCP_CLUSTER_SERVICE_ACCOUNT`.
### Authorizing Azure
Create a new service principal:
```bash
az ad sp create-for-rbac --name "github-actions-e2e-tests" --role contributor --scopes /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435 --sdk-auth
az role assignment create --role "User Access Administrator" --scope /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435 --assignee <SERVICE_PRINCIPAL_CLIENT_ID>
```
Next, add API permissions to the Managed Identity:
* Not possible through portal; requires PowerShell
* <https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/grant-graph-api-permission-to-managed-identity-object/ba-p/2792127>
* `$GraphAppId` in this article is for Microsoft Graph. Azure AD Graph is `00000002-0000-0000-c000-000000000000`
* Note that changing permissions can take anywhere from a few seconds to several hours to take effect
Afterward, you need to define a few secrets, either as GitHub Actions secrets or in a secrets file for *act*, as described above.
The following secrets need to be defined:
* `AZURE_E2E_CREDENTIALS`: The output of `az ad sp ...`
* `AZURE_E2E_CLIENT_SECRET`: The client secret value for the registered app on Azure (which is defined as `appClientID`).
For information on how to achieve this, refer to the [First steps](https://docs.edgeless.systems/constellation/getting-started/first-steps) in the documentation for Constellation.
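As a rough sketch (the values are placeholders), a corresponding *act* secrets file could look like this:
```bash
$ cat secrets.env
AZURE_E2E_CREDENTIALS={"clientId":"...","clientSecret":"...","subscriptionId":"...","tenantId":"..."}
AZURE_E2E_CLIENT_SECRET=<app_client_secret>
$ act -j e2e-test-manual --secret-file secrets.env
```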

dev-docs/workflows/qemu.md

@@ -0,0 +1,133 @@
# Local image testing with QEMU / libvirt
To create local testing clusters using QEMU, some prerequisites have to be met:
- [qcow2 constellation image](/image/README.md)
- [qemu-metadata-api container image](/hack/qemu-metadata-api/README.md)
Deploying the VMs requires `libvirt` to be installed and configured correctly.
You may either use [your local libvirt setup](#local-libvirt-setup) if it meets the requirements, or use a [containerized libvirt in docker](#containerized-libvirt).
## Containerized libvirt
Constellation will automatically deploy a containerized libvirt instance if no connection URI is defined in the Constellation config file.
Follow the steps in our [libvirt readme](../../cli/internal/libvirt/README.md) if you wish to build your own image.
## Local libvirt setup
<details>
<summary>Ubuntu</summary>
### Install required packages
[General reference](https://ubuntu.com/server/docs/virtualization-libvirt)
```shell-session
sudo apt install qemu-kvm libvirt-daemon-system xsltproc
sudo systemctl enable libvirtd
sudo usermod -a -G libvirt $USER
# reboot
```
### Setup emulated TPM
Using a virtual TPM (vTPM) with QEMU only works if swtpm is version 0.7 or newer!
Ubuntu 22.04 currently ships swtpm 0.6.3, so you need to install swtpm [from launchpad](https://launchpad.net/~stefanberger/+archive/ubuntu/swtpm-jammy/).
1. Uninstall current version of swtpm (if installed)
```shell-session
sudo apt remove swtpm swtpm-tools
```
2. Add the PPA (this command shows the PPA for Ubuntu 22.04 Jammy, but others are available)
```shell-session
sudo add-apt-repository ppa:stefanberger/swtpm-jammy
sudo apt update
```
3. Install swtpm
```shell-session
sudo apt install swtpm swtpm-tools
```
4. Patch configuration under `/etc/swtpm_setup.conf`
```shell-session
# Program invoked for creating certificates
create_certs_tool = /usr/bin/swtpm_localca
```
5. Patch ownership of `/var/lib/swtpm-localca`
```shell-session
sudo chown -R swtpm:root /var/lib/swtpm-localca
```
</details>
<details>
<summary>Fedora</summary>
```shell-session
sudo dnf install -y dnf-plugins-core
sudo dnf -y install qemu-kvm libvirt-daemon-config-network libvirt-daemon-kvm xsltproc swtpm
sudo usermod -a -G libvirt $USER
# reboot
```
</details>
### Update libvirt settings
Open `/etc/libvirt/qemu.conf` and change the following settings:
```shell-session
security_driver = "none"
```
Then restart libvirt
```shell-session
sudo systemctl restart libvirtd
```
## Troubleshooting
### VMs are not properly cleaned up after a failed `constellation create` command
Terraform may fail to remove your VMs, in which case you need to do so manually.
- List all domains: `virsh list --all`
- Destroy domains with nvram: `virsh undefine --nvram <name>`
### VMs have no internet access
`iptables` rules may prevent your VMs from properly accessing the internet.
Make sure your rules are not dropping forwarded packets.
List your rules:
```shell
sudo iptables -S
```
The output may look similar to the following:
```shell
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
```
If your `FORWARD` chain is set to `DROP`, you will need to update your rules:
```shell
sudo iptables -P FORWARD ACCEPT
```
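If you prefer not to change the default policy globally, a narrower sketch (assuming libvirt's default bridge name `virbr0`) only allows forwarding for the VM bridge:
```shell
# Allow forwarded traffic to and from the libvirt default bridge only (bridge name is an assumption)
sudo iptables -I FORWARD -i virbr0 -j ACCEPT
sudo iptables -I FORWARD -o virbr0 -j ACCEPT
```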


@@ -0,0 +1,147 @@
# Release Checklist
This checklist will prepare `v1.3.0` from `v1.2.0`. Adjust your version numbers accordingly.
1. Merge ready PRs
2. Search the code for TODOs and FIXMEs that should be resolved before releasing.
3. Create docs release (new major or minor release)
```sh
cd docs
npm install
npm run docusaurus docs:version 1.3
# push upstream via PR
```
4. Create a new branch `release/v1.3` (new minor version) or use the existing one (new patch version)
5. On this branch, prepare the following things:
1. (new patch version) `cherry-pick` (only) the required commits from `main`
2. Use [Build micro-service manual](https://github.com/edgelesssys/constellation/actions/workflows/build-micro-service-manual.yml) and run the pipeline once for each micro-service with the following parameters:
* branch: `release/v1.3`
* Container image tag: `v1.3.0`
* Version of the image to build: `1.3.0`
```sh
ver=1.3.0
```
```sh
minor=$(echo $ver | cut -d '.' -f 1,2)
echo $minor # should be 1.3
```
```sh
gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=join-service -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=kmsserver -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=verification-service -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=qemu-metadata-api -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
```
3. Use [Build operator manual](https://github.com/edgelesssys/constellation/actions/workflows/build-operator-manual.yml) and run the pipeline once with the following parameters:
* branch: `release/v1.3`
* Container image tag: `v1.3.0`
```sh
# Alternative from CLI
gh workflow run build-operator-manual.yml --ref release/v$minor -F imageTag=v$ver --repo edgelesssys/constellation
```
4. Review and update the changelog with all changes since the last release. [GitHub's diff view](https://github.com/edgelesssys/constellation/compare/v2.0.0...main) helps a lot!
1. Rename the "Unreleased" heading to "[v1.3.0] - YYYY-MM-DD" and link the version to the upcoming release tag.
2. Create a new block for unreleased changes
5. Update project version in [CMakeLists.txt](/CMakeLists.txt) to `1.3.0` (without v).
6. Update the `version` key in [constellation-services/Chart.yaml](/cli/internal/helm/charts/edgeless/constellation-services/Chart.yaml) and [operators/Chart.yaml](/cli/internal/helm/charts/edgeless/operators/Chart.yaml). Also update the `version` key for all subcharts, e.g. [Chart.yaml](/cli/internal/helm/charts/edgeless/constellation-services/charts/kms/Chart.yaml). Lastly, update the `dependencies.*.version` key for all dependencies in the main charts [constellation-services/Chart.yaml](/cli/internal/helm/charts/edgeless/constellation-services/Chart.yaml) and [operators/Chart.yaml](/cli/internal/helm/charts/edgeless/operators/Chart.yaml).
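A hedged sketch for bumping the top-level `version` key in all charts at once (paths taken from the list above; the `dependencies.*.version` keys still need a manual pass):
```sh
# Assumes the charts use the plain semver from $ver (no v prefix)
find cli/internal/helm/charts/edgeless -name Chart.yaml \
  -exec sed -i "s/^version: .*/version: ${ver}/" {} +
```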
7. Update [default image versions in enterprise config](/internal/config/images_enterprise.go)
8. When the microservice builds are finished, update the versions in [versions.go](../../internal/versions/versions.go#L33-L39) to `v1.3.0`, **add the container hashes**, and **push your changes**.
```sh
# crane: https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md
crane digest ghcr.io/edgelesssys/constellation/node-operator:v$ver
crane digest ghcr.io/edgelesssys/constellation/join-service:v$ver
crane digest ghcr.io/edgelesssys/constellation/access-manager:v$ver
crane digest ghcr.io/edgelesssys/constellation/kmsserver:v$ver
crane digest ghcr.io/edgelesssys/constellation/verification-service:v$ver
crane digest ghcr.io/edgelesssys/constellation/qemu-metadata-api:v$ver
```
9. Create a [production OS image](/.github/workflows/build-os-image.yml)
```sh
gh workflow run build-os-image.yml --ref release/v$minor -F imageVersion=v$ver -F isRelease=true -F stream=stable
```
10. [Generate measurements](/.github/workflows/generate-measurements.yml) for the images.
```sh
gh workflow run generate-measurements.yml --ref release/v$minor -F osImage=v$ver -F isDebugImage=false -F signMeasurements=true
```
11. Update expected measurements in [`measurements_enterprise.go`](/internal/attestation/measurements/measurements_enterprise.go) using the measurements generated in the previous step and **push your changes**.
12. Run manual E2E tests using [Linux](/.github/workflows/e2e-test-manual.yml) and [macOS](/.github/workflows/e2e-test-manual-macos.yml) to confirm functionality and stability.
```sh
gh workflow run e2e-test-manual.yml --ref release/v$minor -F cloudProvider=aws -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
gh workflow run e2e-test-manual-macos.yml --ref release/v$minor -F cloudProvider=aws -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
gh workflow run e2e-test-manual.yml --ref release/v$minor -F cloudProvider=azure -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
gh workflow run e2e-test-manual-macos.yml --ref release/v$minor -F cloudProvider=azure -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
gh workflow run e2e-test-manual.yml --ref release/v$minor -F cloudProvider=gcp -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
gh workflow run e2e-test-manual-macos.yml --ref release/v$minor -F cloudProvider=gcp -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
gh workflow run e2e-mini.yml --ref release/v$minor
```
13. Create a new tag on this release branch.
```sh
git tag v$ver
git push origin refs/tags/v$ver
```
14. Run [Release CLI](https://github.com/edgelesssys/constellation/actions/workflows/release-cli.yml) action on the tag.
```sh
gh workflow run release-cli.yml --ref v$ver
```
* The previous step will create a draft release. Check the build output for a link to the draft release. Review & approve it.
15. Check if the Constellation OS image is available via the versions API.
```sh
curl -s "https://cdn.confidential.cloud/constellation/v1/ref/-/stream/stable/versions/minor/v${minor}/image.json"
# list of versions should contain the new version
```
16. Export, download, and make the image available in S3 for trusted launch users. To achieve this:
```sh
TARGET_DISK=export-${ver}
az disk create -g constellation-images -l westus -n ${TARGET_DISK} --hyper-v-generation V2 --os-type Linux --sku standard_lrs --security-type TrustedLaunch --gallery-image-reference /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435/resourceGroups/CONSTELLATION-IMAGES/providers/Microsoft.Compute/galleries/Constellation/images/constellation/versions/${ver}
```
* Find the created resource in Azure
* Go to `Settings` -> `Export` and `Generate URLs`
* Download both the disk image (first link) and VM state (second link)
* Rename disk (`abcd`) to `constellation.img`.
* Rename state (UUID) to `constellation.vmgs`.
* Go to [AWS S3 bucket for trusted launch](https://s3.console.aws.amazon.com/s3/buckets/cdn-constellation-backend?prefix=constellation/images/azure/trusted-launch/&region=eu-central-1), create a new folder with the given version number.
* Upload both the image and the state file into the newly created folder (a CLI sketch follows below).
* Delete the disk in Azure!
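A hypothetical CLI alternative to the console upload (bucket and prefix taken from the S3 link above):
```sh
aws s3 cp constellation.img "s3://cdn-constellation-backend/constellation/images/azure/trusted-launch/${ver}/constellation.img"
aws s3 cp constellation.vmgs "s3://cdn-constellation-backend/constellation/images/azure/trusted-launch/${ver}/constellation.vmgs"
```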
17. To bring the updated version numbers and other changes (if any) back to `main`, create a new branch `feat/release` from `release/v1.3`, rebase it onto `main`, and create a PR to `main`.
18. Milestone management
1. Create a new milestone for the next release
2. Add the next release manager and an approximate release date to the milestone description
3. Close the milestone for the release
4. Move open issues and PRs from closed milestone to next milestone
19. If the release is a minor version release, tag the latest commit on `main` as the start of the next pre-release phase.
```sh
nextMinorVer=$(echo "${ver}" | awk -F. -v OFS=. '{$2 += 1 ; print}')
git checkout main
git pull
git tag v${nextMinorVer}-pre
git push origin refs/tags/v${nextMinorVer}-pre
```
20. Test `constellation mini up` (a sketch follows below).
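A minimal smoke test, assuming the freshly released CLI is installed locally:
```sh
constellation mini up
# verify the cluster becomes ready, then clean up
constellation mini down
```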


@@ -0,0 +1,26 @@
# Terraform development
## iamlive
[iamlive](https://github.com/iann0036/iamlive) dynamically determines the minimal
permissions required for a set of AWS API calls.
It uses a local proxy to intercept the API calls and incrementally generates the
corresponding AWS policy.
In one session start `iamlive`:
```sh
iamlive -mode proxy -bind-addr 0.0.0.0:10080 -force-wildcard-resource -output-file iamlive.policy.json
```
In another session execute terraform:
```sh
PREFIX="record-iam"
terraform init
HTTP_PROXY=http://127.0.0.1:10080 HTTPS_PROXY=http://127.0.0.1:10080 AWS_CA_BUNDLE="${HOME}/.iamlive/ca.pem" terraform apply -auto-approve -var name_prefix=${PREFIX}
HTTP_PROXY=http://127.0.0.1:10080 HTTPS_PROXY=http://127.0.0.1:10080 AWS_CA_BUNDLE="${HOME}/.iamlive/ca.pem" terraform destroy -auto-approve -var name_prefix=${PREFIX}
```
`iamlive` will present the generated policy; after you stop the `iamlive` process with \<CTRL-C\>, it will also write the policy to the specified file.
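To inspect the result afterwards, for example:
```sh
# Pretty-print the generated policy file
jq . iamlive.policy.json
```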


@@ -0,0 +1,52 @@
# Upgrading Kubernetes
Constellation is a Kubernetes distribution. As such, dependencies on Kubernetes versions exist in multiple places:
- The desired Kubernetes version deployed by `kubeadm init`
- Kubernetes resources (deployments made while initializing Kubernetes, including the `cloud-controller-manager`, `cluster-autoscaler` and more)
- Kubernetes go dependencies for the bootstrapper code
## Understand what has changed
Before adding support for a new Kubernetes version, it is a very good idea to [read the release notes](https://kubernetes.io/releases/notes/) and to identify breaking changes.
## Upgrading Kubernetes resources
Everything related to Kubernetes versions is tracked in [the versions file](/internal/versions/versions.go). Add a new `ValidK8sVersion` and fill out the `VersionConfigs` entry for that version.
During cluster initialization, multiple Kubernetes resources are deployed. Some of these should be upgraded with Kubernetes.
You can check available version tags for container images using [the container registry tags API](https://docs.docker.com/registry/spec/api/#listing-image-tags):
```sh
curl -qL https://registry.k8s.io/v2/autoscaling/cluster-autoscaler/tags/list | jq .tags
curl -qL https://registry.k8s.io/v2/cloud-controller-manager/tags/list | jq .tags
curl -qL https://registry.k8s.io/v2/provider-aws/cloud-controller-manager/tags/list | jq .tags
curl -qL https://mcr.microsoft.com/v2/oss/kubernetes/azure-cloud-controller-manager/tags/list | jq .tags
curl -qL https://mcr.microsoft.com/v2/oss/kubernetes/azure-cloud-node-manager/tags/list | jq .tags
# [...]
```
Normally, Renovate will handle upgrading the Kubernetes dependencies.
## Test the new Kubernetes version
- Set up a Constellation cluster using the new image with the new bootstrapper binary and check if Kubernetes is deployed successfully.
```sh
# should print the new k8s version for every node
kubectl get nodes -o wide
# read the logs for pods deployed in the kube-system namespace and ensure they are healthy
kubectl -n kube-system get pods
kubectl -n kube-system logs [...]
kubectl -n kube-system describe pods
```
- Read the logs of the main Kubernetes components by getting a shell on the nodes and scan for errors / deprecation warnings:
```sh
journalctl -u kubelet
journalctl -u containerd
```
- Conduct e2e tests
- [Run the sonobuoy test suite against your branch](https://sonobuoy.io/)
- [Run CI e2e tests](/.github/docs/README.md)