mirror of
https://github.com/edgelesssys/constellation.git
synced 2025-07-22 06:50:43 -04:00
dev-docs: move into top-level dir (#924)
(cherry picked from commit b2f8f72f1e)
This commit is contained in: parent 13e6917f3a, commit d2bcf2194a
11 changed files with 5 additions and 5 deletions

136
dev-docs/conventions.md
Normal file

@@ -0,0 +1,136 @@
# Process conventions

## Pull request process

Submissions should remain focused in scope and avoid containing unrelated commits.
For pull requests, we employ the following workflow:

1. Fork the repository to your own GitHub account
2. Create a branch locally with a descriptive name
3. Commit changes to the branch
4. Write your code according to our development guidelines
5. Push changes to your fork
6. Clean up your commit history
7. Open a PR in our repository and summarize the changes in the description

## Reporting issues and bugs, asking questions

This project uses the GitHub issue tracker. Please check the existing issues before submitting to avoid duplicates.

To report a security issue, contact security@edgeless.systems.

Your bug report should cover the following points:

* A quick summary and/or background of the issue
* Steps to reproduce (be specific, e.g., provide sample code)
* What you expected would happen
* What actually happens
* Further notes:
  * Thoughts on possible causes
  * Tested workarounds or fixes

## Major changes and feature requests

You should discuss larger changes and feature requests with the maintainers. Please open an issue describing your plans.

[Run CI e2e tests](/.github/docs/README.md)

# Go code conventions

## General

Adhere to the style and best practices described in [Effective Go](https://golang.org/doc/effective_go.html). Read [Common Review Comments](https://github.com/golang/go/wiki/CodeReviewComments) for further information.

## Linting

This project uses [golangci-lint](https://golangci-lint.run/) for linting.
You can [install golangci-lint](https://golangci-lint.run/usage/install/#linux-and-windows) locally,
but there is also a CI action to ensure compliance.

To run all configured linters locally, execute

```sh
golangci-lint run ./...
```

It is also recommended to use golangci-lint (and [gofumpt](https://github.com/mvdan/gofumpt) as formatter) in your IDE, by adding the recommended VS Code settings or by [configuring it yourself](https://golangci-lint.run/usage/integrations/#editor-integration).

## Nested Go modules

As this project contains nested Go modules, we use a Go work file to ease integration with IDEs. You can find an introduction in the [Go workspace tutorial](https://go.dev/doc/tutorial/workspaces).
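As a sketch, such a workspace can be set up like this; the `use` paths below are illustrative, not the repository's actual module list:

```sh
# initialize a go.work file in the repository root,
# then register the nested modules you work on
go work init
go work use . ./hack ./operators
```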

## Recommended VS Code Settings

The following can be added to your personal `settings.json`, but it is recommended to add it to
the `<REPOSITORY>/.vscode/settings.json` file in the repo, so the settings only affect this repository.

```jsonc
// Use gofumpt as formatter.
"gopls": {
  "formatting.gofumpt": true,
},
// Use golangci-lint as linter. Make sure you've installed it.
"go.lintTool": "golangci-lint",
"go.lintFlags": ["--fast"],
// You can easily show Go test coverage by running a package test.
"go.coverageOptions": "showUncoveredCodeOnly",
// Execute unit tests with race detection.
// You can add preferences like "-v" or "-count=1".
"go.testFlags": ["-race"],
// Enable language features for files with build tags.
// Attention! This leads to integration tests being executed when
// running a package test within a package containing integration
// tests.
"go.buildTags": "integration",
```

Additionally, we use the [Redhat YAML formatter](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml) to have uniform formatting in our `.yaml` files.

## PR conventions

Our changelog is generated from PR titles. Which PR is listed in which category is determined by labels, see the [release.yml](/.github/release.yml).

The PR title should be structured in one of the following ways:

```
<module>: <title>
```

Where `<module>` is

- the top level directory of the microservice or component, e.g., `joinservice`, `disk-mapper`, `upgrade-agent`, but also `docs` and `rfc`
- in internal, the second level directory
- `deps` for dependency upgrades
- `ci` for things that are CI related

and `<title>` is all lower case (except proper names, including acronyms).
Ticket numbers shouldn't be part of the title.
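A hypothetical title following this convention:

```
joinservice: add logging to certificate validation
```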
In case the scope of your PR is too wide, use the alternative format

```
<Title>
```

where `<Title>` starts with a capital letter.
## Naming convention

### Network

IP addresses:

* ip: numeric IP address
* host: either IP address or hostname
* endpoint: host+port

### Keys

* key: symmetric key
* pubKey: public key
* privKey: private key

# Shell script code conventions

We use [shellcheck](https://github.com/koalaman/shellcheck) to ensure code quality.
You might want to install an [IDE extension](https://marketplace.visualstudio.com/items?itemName=timonwong.shellcheck).
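A minimal local run, assuming `shellcheck` is installed and all scripts carry the `.sh` extension:

```sh
# lint every tracked shell script in the repository
shellcheck $(git ls-files '*.sh')
```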

55
dev-docs/howto/longhorn.md
Normal file

@@ -0,0 +1,55 @@
# Longhorn on Constellation

To build Longhorn-compatible images, apply the following changes. They stem from [Longhorn's installation guide](https://longhorn.io/docs/1.3.2/deploy/install/#installation-requirements).

```diff
diff --git a/image/mkosi.conf.d/azure.conf b/image/mkosi.conf.d/azure.conf
index bc4b707b..6de2254a 100644
--- a/image/mkosi.conf.d/azure.conf
+++ b/image/mkosi.conf.d/azure.conf
@@ -1,3 +1,5 @@
 [Content]
 Packages=
         WALinuxAgent-udev
+        nfs-utils
+        iscsi-initiator-utils
diff --git a/image/mkosi.skeleton/etc/fstab b/image/mkosi.skeleton/etc/fstab
index e22f0b24..2e212267 100644
--- a/image/mkosi.skeleton/etc/fstab
+++ b/image/mkosi.skeleton/etc/fstab
@@ -1,5 +1,6 @@
 /dev/mapper/state /run/state ext4 defaults,x-systemd.makefs,x-mount.mkdir 0 0
 /run/state/var /var none defaults,bind,x-mount.mkdir 0 0
+/run/state/iscsi /etc/iscsi none defaults,bind,x-mount.mkdir 0 0
 /run/state/kubernetes /etc/kubernetes none defaults,bind,x-mount.mkdir 0 0
 /run/state/etccni /etc/cni/ none defaults,bind,x-mount.mkdir 0 0
 /run/state/opt /opt none defaults,bind,x-mount.mkdir 0 0
diff --git a/image/mkosi.skeleton/usr/lib/systemd/system-preset/30-constellation.preset b/image/mkosi.skeleton/usr/lib/systemd/system-preset/30-constellation.preset
index 24072c48..7b7498d6 100644
--- a/image/mkosi.skeleton/usr/lib/systemd/system-preset/30-constellation.preset
+++ b/image/mkosi.skeleton/usr/lib/systemd/system-preset/30-constellation.preset
@@ -4,3 +4,5 @@ enable containerd.service
 enable kubelet.service
 enable systemd-networkd.service
 enable tpm-pcrs.service
+enable iscsibefore.service
+enable iscsid.service
diff --git a/image/mkosi.skeleton/usr/lib/systemd/system/iscsibefore.service b/image/mkosi.skeleton/usr/lib/systemd/system/iscsibefore.service
new file mode 100644
index 00000000..355a2f83
--- /dev/null
+++ b/image/mkosi.skeleton/usr/lib/systemd/system/iscsibefore.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=before iscsid
+Before=iscsid.service
+ConditionPathExists=!/etc/iscsi/initiatorname.iscsi
+
+[Service]
+Type=oneshot
+ExecStart=/bin/bash -c "echo \"InitiatorName=$(/sbin/iscsi-iname)\" > /etc/iscsi/initiatorname.iscsi"
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
```
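To confirm the changes on a booted node, one could check the iSCSI setup; this sketch is not part of the Longhorn guide:

```sh
# both units should be enabled, and the initiator name generated on first boot
systemctl status iscsibefore.service iscsid.service
cat /etc/iscsi/initiatorname.iscsi
```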

255
dev-docs/howto/nfs.md
Normal file

@@ -0,0 +1,255 @@
# Deploying NFS in Constellation using Rook

This document describes how to deploy NFS in Constellation using Rook.

## Create a Cluster

The cluster needs at least 3 worker nodes; the default machines are powerful enough.

```bash
constellation create --name nfs -c 1 -w 3
```

## Deploy CSI Driver

> **_NOTE:_** For additional integrity protection, use our [Constellation CSI drivers](https://docs.edgeless.systems/constellation/workflows/storage) with integrity protection enabled. With this option, there is no need to enable encryption on Ceph's side in the step [Deploy Rook](#deploy-rook).

We need block storage from somewhere. We will use the official Azure CSI driver for that. We need to create the Azure config secret again with the expected fields. Replace "XXX" with the corresponding value from the secret `azureconfig`.

```bash
kubectl create secret generic -n kube-system --from-literal=cloud-config='{"cloud":"AzurePublicCloud","useInstanceMetadata":true,"vmType":"vmss","tenantId":"XXX","subscriptionId":"XXX","resourceGroup":"XXX","location":"XXX", "aadClientId":"XXX","aadClientSecret":"XXX"}' azure-config

helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
helm repo update azuredisk-csi-driver
helm install azuredisk-csi-driver azuredisk-csi-driver/azuredisk-csi-driver --namespace kube-system --set linux.distro=fedora --set controller.cloudConfigSecretName=azure-config --set node.cloudConfigSecretName=azure-config
```

## Deploy the StorageClass

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
  cachingmode: ReadOnly
  kind: Managed
volumeBindingMode: WaitForFirstConsumer
```
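Save the manifest and apply it; the filename is illustrative:

```bash
kubectl apply -f storageclass.yaml
```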

## Deploy Rook

```bash
git clone https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl apply -f common.yaml -f crds.yaml -f operator.yaml
kubectl rollout status -n rook-ceph deployment/rook-ceph-operator
```

Apply the following changes to `cluster-on-pvc.yaml`:

```diff
euler@work:~/projects/rook/deploy/examples$ git diff cluster-on-pvc.yaml
diff --git a/deploy/examples/cluster-on-pvc.yaml b/deploy/examples/cluster-on-pvc.yaml
index ee4976be2..b5cf294cb 100644
--- a/deploy/examples/cluster-on-pvc.yaml
+++ b/deploy/examples/cluster-on-pvc.yaml
@@ -16,7 +16,7 @@ spec:
   mon:
     # Set the number of mons to be started. Generally recommended to be 3.
     # For highest availability, an odd number of mons should be specified.
-    count: 3
+    count: 1
     # The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
     # Mons should only be allowed on the same node for test environments where data loss is acceptable.
     allowMultiplePerNode: false
@@ -28,7 +28,7 @@ spec:
       # size appropriate for monitor data will be used.
       volumeClaimTemplate:
         spec:
-          storageClassName: gp2
+          storageClassName: managed-premium
           resources:
             requests:
               storage: 10Gi
@@ -59,13 +59,13 @@ spec:
         # Certain storage class in the Cloud are slow
         # Rook can configure the OSD running on PVC to accommodate that by tuning some of the Ceph internal
         # Currently, "gp2" has been identified as such
-        tuneDeviceClass: true
+        tuneDeviceClass: false
         # Certain storage class in the Cloud are fast
         # Rook can configure the OSD running on PVC to accommodate that by tuning some of the Ceph internal
         # Currently, "managed-premium" has been identified as such
-        tuneFastDeviceClass: false
+        tuneFastDeviceClass: true
         # whether to encrypt the deviceSet or not
-        encrypted: false
+        encrypted: true
         # Since the OSDs could end up on any node, an effort needs to be made to spread the OSDs
         # across nodes as much as possible. Unfortunately the pod anti-affinity breaks down
         # as soon as you have more than one OSD per node. The topology spread constraints will
@@ -100,7 +100,7 @@ spec:
           topologySpreadConstraints:
             - maxSkew: 1
              # IMPORTANT: If you don't have zone labels, change this to another key such as kubernetes.io/hostname
-              topologyKey: topology.kubernetes.io/zone
+              topologyKey: kubernetes.io/hostname
               whenUnsatisfiable: DoNotSchedule
               labelSelector:
                 matchExpressions:
@@ -127,7 +127,7 @@ spec:
             requests:
               storage: 10Gi
           # IMPORTANT: Change the storage class depending on your environment
-          storageClassName: gp2
+          storageClassName: managed-premium
           volumeMode: Block
           accessModes:
             - ReadWriteOnce
```

Now apply the yaml:

```bash
kubectl apply -f cluster-on-pvc.yaml
```

Verify the health of the Ceph cluster:

```bash
$ kubectl apply -f toolbox.yaml
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
  cluster:
    id:     7c220b31-29f7-4f17-a291-3ef39a9553b3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 72s)
    osd: 3 osds: 3 up (since 61s), 3 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   62 MiB used, 30 GiB / 30 GiB avail
    pgs:     1 active+clean
```

Deploy the filesystem:

```bash
$ kubectl apply -f filesystem.yaml
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
  cluster:
    id:     7c220b31-29f7-4f17-a291-3ef39a9553b3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 3m)
    mgr: a(active, since 2m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 2m), 3 in (since 2m)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 34 pgs
    objects: 24 objects, 451 KiB
    usage:   63 MiB used, 30 GiB / 30 GiB avail
    pgs:     34 active+clean

  io:
    client:   853 B/s rd, 1 op/s rd, 0 op/s wr

  progress:
```

Deploy the StorageClass:

```bash
kubectl apply -f csi/cephfs/storageclass.yaml
```

Rescale the monitor count to 3:

```bash
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"mon":{"count":3}}}'
```

## Use the NFS

The following deployment will create a PVC based on NFS and mount it into 3 pods.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-cephfs
---
# from https://github.com/Azure/kubernetes-volume-drivers/tree/master/nfs
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-nfs
  labels:
    app: nginx
spec:
  serviceName: statefulset-nfs
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: statefulset-nfs
          image: nginx
          command:
            - "/bin/sh"
            - "-c"
            - "sleep 9999999"
          volumeMounts:
            - name: persistent-storage
              mountPath: /mnt/nfs
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: nfs
            readOnly: false
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
```
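Once the pods are running, a quick way to confirm that the volume is actually shared (pod names follow the StatefulSet's ordinal naming):

```bash
# write a file from one replica and read it back from another
kubectl exec statefulset-nfs-0 -- sh -c 'echo hello > /mnt/nfs/test'
kubectl exec statefulset-nfs-1 -- cat /mnt/nfs/test
```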

## Verify Ceph OSD encryption

To verify that Ceph created an encrypted device, [log into a node](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container) via `kubectl debug`.

```bash
$ ls /dev/mapper/
control  root  set1-data-1flnzz-block-dmcrypt  state  state_dif

$ cryptsetup status /dev/mapper/set1-data-1flnzz-block-dmcrypt
/dev/mapper/set1-data-1flnzz-block-dmcrypt is active and is in use.
  type:         LUKS2
  cipher:       aes-xts-plain64
  keysize:      512 bits
  key location: dm-crypt
  device:       /dev/sdc
  sector size:  512
  offset:       32768 sectors
  size:         20938752 sectors
  mode:         read/write
  flags:        discards
```

22
dev-docs/layout.md
Normal file

@@ -0,0 +1,22 @@
# Repository Layout

Core components:

* [cli](/cli): The CLI is used to manage a Constellation cluster
* [bootstrapper](/bootstrapper): The bootstrapper is a node agent whose most important task is to bootstrap a node
* [image](/image): Build files for the Constellation disk image
* [kms](/kms): Constellation's key management client and server
* [csi](/csi): Package used by CSI plugins to create and mount encrypted block devices
* [disk-mapper](/disk-mapper): Contains the disk-mapper that maps the encrypted node data disk during boot

Development components:

* [3rdparty](/3rdparty): Contains the third party dependencies used by Constellation
* [debugd](/debugd): Debug daemon and client
* [hack](/hack): Development tools
* [proto](/proto): Proto files generator

Additional repositories:

* [constellation-azuredisk-csi-driver](https://github.com/edgelesssys/constellation-azuredisk-csi-driver): Azure CSI driver with encryption on node
* [constellation-gcp-compute-persistent-disk-csi-driver](https://github.com/edgelesssys/constellation-gcp-compute-persistent-disk-csi-driver): GCP CSI driver with encryption on node

114
dev-docs/workflows/build-test-run.md
Normal file

@@ -0,0 +1,114 @@
# Build

The following are instructions for building all components in the Constellation repository, except for images. A manual on how to build images locally can be found in the [image README](/image/README.md).

Prerequisites:

* 20 GB disk space
* [Latest version of Go](https://go.dev/doc/install).
* [Docker](https://docs.docker.com/engine/install/). Can be installed with these commands on Ubuntu 22.04: `sudo apt update && sudo apt install docker.io`. As the build spawns Docker containers, your user account either needs to be in the `docker` group (add it with `sudo usermod -a -G docker $USER`) or you have to run builds with `sudo`. When using `sudo`, remember that your root user might (depending on your distro and local config) not have the Go binary in its PATH. The current PATH can be forwarded to the root env with `sudo env PATH=$PATH <cmd>`.

* Packages on Ubuntu:

  ```sh
  sudo apt install build-essential cmake libssl-dev pkg-config libcryptsetup12 libcryptsetup-dev
  ```

* Packages on Fedora:

  ```sh
  sudo dnf install @development-tools pkg-config cmake openssl-devel cryptsetup-libs cryptsetup-devel
  ```

```sh
mkdir build
cd build
cmake ..
make -j`nproc`
```

# Test

You can run all integration and unit tests like this:

```sh
ctest -j `nproc`
```

You can limit the execution of tests to specific targets with e.g. `ctest -R unit-*` to only execute unit tests.

Some of the tests rely on libvirt and won't work if you don't have a virtualization-capable CPU. You can find instructions on setting up libvirt in our [QEMU README](qemu.md).

# Deploy

> :warning: Debug images are not safe to use in production environments. :warning:
The debug images will open an additional **unsecured** port (4000) which accepts any binary to be run on the target machine. **Make sure that those machines are not exposed to the internet.**

## Cloud

To familiarize yourself with debugd and learn how to deploy a cluster using it, read [this](/debugd/README.md) manual.
If you want to deploy a cluster for production, please refer to our user documentation [here](https://docs.edgeless.systems/constellation/getting-started/first-steps#create-a-cluster).

## Locally

In case you want to have quicker iteration cycles during development, you might want to set up a local cluster.
You can do this by utilizing our QEMU setup.
Instructions on how to set it up can be found in the [QEMU README](qemu.md).

# Verification

In order to verify your cluster, we describe a [verification workflow](https://docs.edgeless.systems/constellation/workflows/verify-cluster) in our official docs.
Apart from that, you can also reproduce some of the measurements described in the [docs](https://docs.edgeless.systems/constellation/architecture/attestation#runtime-measurements) locally.
To do so, we built a tool that creates a VM, collects the PCR values, and reports them to you.
To run the tool, execute the following command in `/hack/image-measurement`:

```sh
go run . -path <image_path> -type <image_type>
```

`<image_path>` needs to point to a valid image file.
The image can be either in raw or QEMU's `qcow2` format.
This format is specified in the `<image_type>` argument.

You can compare the values of PCRs 4, 8 and 9 to the ones you are seeing in your `constellation-conf.yaml`.
The PCR values depend on the image you specify in the `path` argument.
Therefore, if you want to verify a cluster deployed with a release image, you will have to download the images first.

After collecting the measurements, you can put them into your `constellation-conf.yaml` under the `measurements` key in order to enforce them.

# Image export

To download an image, you will have to export it first.
Below you find general instructions on how to do this for GCP and Azure.

## GCP

In order to download an image, you will have to export it to a bucket you have access to:

* "Owner" permissions on the project
* "Storage Admin" permissions on the bucket
* Export with:

  ```bash
  gcloud compute images export --image=<image_path> --destination-uri=<bucket_uri> --export-format=qcow2 --project=<image_project>
  ```

* Click on "Download" on the created object

## Azure

To download an image from Azure, you will have to create a disk from the image and generate a download link for that disk:

```bash
#!/usr/bin/env bash

VERSION=0.0.1
TARGET_DISK=export-${VERSION}

az disk create -g <resource_group> -l <target_region> -n $TARGET_DISK --hyper-v-generation V2 --os-type Linux --sku standard_lrs --security-type TrustedLaunch --gallery-image-reference <image_path>

az disk grant-access -n $TARGET_DISK -g constellation-images --access-level Read --duration-in-seconds 3600 | jq -r .accessSas
```

Depending on your internet connection, you might have to modify the duration value.
The duration value specifies for how long the link is usable.
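The printed SAS URL can then be used to download the disk, for example (illustrative):

```bash
curl -fL -o constellation.img "<accessSas>"
```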

106
dev-docs/workflows/github-actions.md
Normal file

@@ -0,0 +1,106 @@
# Actions & Workflows

## Manual Trigger (workflow_dispatch)

From the web UI, it is currently not possible to run a `workflow_dispatch` based workflow on a specific branch while that workflow is not yet available in the `main` branch. If you would like to test your pipeline changes on a branch, use the [GitHub CLI](https://github.com/cli/cli):

```bash
gh workflow run e2e-test-manual.yml \
    --ref feat/e2e_pipeline \                       # On your specific branch!
    -F cloudProvider=gcp \                          # With your ...
    -F controlNodesCount=1 -F workerNodesCount=2 \  # ... settings
    -F machineType=n2d-standard-4
```

### E2E Test Suites

Here are some examples for test suites you might want to run. Values for `sonobuoyTestSuiteCmd`:

* `--mode quick`
  * Runs a set of tests that are known to be quick to execute! (<1 min)
* `--e2e-focus "Services should be able to create a functioning NodePort service"`
  * Runs a specific test
* `--mode certified-conformance`
  * For the K8s conformance certification test suite

Check the [Sonobuoy docs](https://sonobuoy.io/docs/latest/e2eplugin/) for more examples.

When using `--mode`, be aware that `--e2e-focus` and `--e2e-skip` will be overwritten. [Check in the source code](https://github.com/vmware-tanzu/sonobuoy/blob/e709787426316423a4821927b1749d5bcc90cb8c/cmd/sonobuoy/app/modes.go#L130) what the different modes do.

## Local Development

Using [***act***](https://github.com/nektos/act), you can run GitHub actions locally.

**These instructions are for internal use.**
In case you want to use the E2E actions externally, you need to adjust other configuration parameters.
Check the assignments made in the [E2E action](/.github/actions/e2e_test/action.yml) and adjust any hard-coded values.

### Specific Jobs

```bash
act -j e2e-test-gcp
```

### Simulate a `workflow_dispatch` event

Create a new JSON file to describe the event ([relevant issue](https://github.com/nektos/act/issues/332); there is [no further information about the structure of this file](https://github.com/nektos/act/blob/master/pkg/model/github_context.go#L11)):

```json
{
  "action": "workflow_dispatch",
  "inputs": {
    "workerNodesCount": "2",
    "controlNodesCount": "1",
    "cloudProvider": "gcp",
    "machineType": "n2d-standard-4",
    "sonobuoyTestSuiteCmd": "--mode quick"
  }
}
```

Then run *act* with the event as input:

```bash
act -j e2e-test-manual --eventpath event.json
```

### Authorizing GCP

For creating Kubernetes clusters in GCP, a local copy of the service account secret is required.

1. [Create a new service account key](https://console.cloud.google.com/iam-admin/serviceaccounts/details/112741463528383500960/keys?authuser=0&project=constellation-331613&supportedpurview=project)
2. Create a compact (one line) JSON representation of the file using `jq -c`
3. Store it in a GitHub Actions secret called `GCP_SERVICE_ACCOUNT` or create a local secret file for *act* to consume:

   ```bash
   $ cat secrets.env
   GCP_SERVICE_ACCOUNT={"type":"service_account", ... }

   $ act --secret-file secrets.env
   ```

In addition, you need to create a service account which Constellation itself is supposed to use. Refer to [First steps](https://docs.edgeless.systems/constellation/getting-started/first-steps#create-a-cluster) in the documentation on how to create it. What you need here specifically is the `gcpServiceAccountKey`, which needs to be stored in a secret called `GCP_CLUSTER_SERVICE_ACCOUNT`.

### Authorizing Azure

Create a new service principal:

```bash
az ad sp create-for-rbac --name "github-actions-e2e-tests" --role contributor --scopes /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435 --sdk-auth
az role assignment create --role "User Access Administrator" --scope /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435 --assignee <SERVICE_PRINCIPAL_CLIENT_ID>
```

Next, add API permissions to the Managed Identity:

* Not possible through the portal; requires PowerShell
* <https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/grant-graph-api-permission-to-managed-identity-object/ba-p/2792127>
* `$GraphAppId` in this article is for Microsoft Graph. Azure AD Graph is `00000002-0000-0000-c000-000000000000`
* Note that changing permissions can take between a few seconds and several hours

Afterward, you need to define a few secrets, either as GitHub Actions secrets or in a secrets file for *act* as described before.

The following secrets need to be defined:

* `AZURE_E2E_CREDENTIALS`: The output of `az ad sp ...`
* `AZURE_E2E_CLIENT_SECRET`: The client secret value for the registered app on Azure (which is defined as `appClientID`).

For information on how to achieve this, refer to the [First steps](https://docs.edgeless.systems/constellation/getting-started/first-steps) in the documentation for Constellation.

133
dev-docs/workflows/qemu.md
Normal file

@@ -0,0 +1,133 @@
# Local image testing with QEMU / libvirt

To create local testing clusters using QEMU, some prerequisites have to be met:

- [qcow2 constellation image](/image/README.md)
- [qemu-metadata-api container image](/hack/qemu-metadata-api/README.md)

Deploying the VMs requires `libvirt` to be installed and configured correctly.
You may either use [your local libvirt setup](#local-libvirt-setup) if it meets the requirements, or use a [containerized libvirt in Docker](#containerized-libvirt).

## Containerized libvirt

Constellation will automatically deploy a containerized libvirt instance if no connection URI is defined in the Constellation config file.
Follow the steps in our [libvirt README](../../cli/internal/libvirt/README.md) if you wish to build your own image.

## Local libvirt setup

<details>
<summary>Ubuntu</summary>

### Install required packages

[General reference](https://ubuntu.com/server/docs/virtualization-libvirt)

```shell-session
sudo apt install qemu-kvm libvirt-daemon-system xsltproc
sudo systemctl enable libvirtd
sudo usermod -a -G libvirt $USER
# reboot
```

### Setup emulated TPM

Using a virtual TPM (vTPM) with QEMU only works if swtpm is version 0.7 or newer!
Ubuntu 22.04 currently ships swtpm 0.6.3, so you need to install swtpm [from launchpad](https://launchpad.net/~stefanberger/+archive/ubuntu/swtpm-jammy/).
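To check which version is currently installed on Ubuntu (a sketch using apt's own tooling):

```shell-session
apt policy swtpm
```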
1. Uninstall the current version of swtpm (if installed)

   ```shell-session
   sudo apt remove swtpm swtpm-tools
   ```

2. Add the PPA (this command shows the PPA for Ubuntu 22.04 jammy, but others are available)

   ```shell-session
   sudo add-apt-repository ppa:stefanberger/swtpm-jammy
   sudo apt update
   ```

3. Install swtpm

   ```shell-session
   sudo apt install swtpm swtpm-tools
   ```

4. Patch the configuration under `/etc/swtpm_setup.conf`

   ```shell-session
   # Program invoked for creating certificates
   create_certs_tool = /usr/bin/swtpm_localca
   ```

5. Patch the ownership of `/var/lib/swtpm-localca`

   ```shell-session
   sudo chown -R swtpm:root /var/lib/swtpm-localca
   ```

</details>

<details>
<summary>Fedora</summary>

```shell-session
sudo dnf install -y dnf-plugins-core
sudo dnf -y install qemu-kvm libvirt-daemon-config-network libvirt-daemon-kvm xsltproc swtpm
sudo usermod -a -G libvirt $USER
# reboot
```

</details>

### Update libvirt settings

Open `/etc/libvirt/qemu.conf` and change the following setting:

```shell-session
security_driver = "none"
```

Then restart libvirtd:

```shell-session
sudo systemctl restart libvirtd
```

## Troubleshooting

### VMs are not properly cleaned up after a failed `constellation create` command

Terraform may fail to remove your VMs, in which case you need to do so manually.

- List all domains: `virsh list --all`
- Destroy domains with NVRAM: `virsh undefine --nvram <name>`

### VMs have no internet access

`iptables` rules may prevent your VMs from properly accessing the internet.
Make sure your rules are not dropping forwarded packets.

List your rules:

```shell
sudo iptables -S
```

The output may look similar to the following:

```shell
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
```

If your `FORWARD` chain is set to `DROP`, you will need to update your rules:

```shell
sudo iptables -P FORWARD ACCEPT
```

147
dev-docs/workflows/release.md
Normal file

@@ -0,0 +1,147 @@
# Release Checklist

This checklist will prepare `v1.3.0` from `v1.2.0`. Adjust your version numbers accordingly.

1. Merge ready PRs
2. Search the code for TODOs and FIXMEs that should be resolved before releasing.
3. Create docs release (new major or minor release)

   ```sh
   cd docs
   npm install
   npm run docusaurus docs:version 1.3
   # push upstream via PR
   ```

4. Create a new branch `release/v1.3` (new minor version) or use the existing one (new patch version)
5. On this branch, prepare the following things:
   1. (new patch version) `cherry-pick` (only) the required commits from `main`
   2. Use [Build micro-service manual](https://github.com/edgelesssys/constellation/actions/workflows/build-micro-service-manual.yml) and run the pipeline once for each micro-service with the following parameters:
      * branch: `release/v1.3`
      * Container image tag: `v1.3.0`
      * Version of the image to build: `1.3.0`

      ```sh
      ver=1.3.0
      ```

      ```sh
      minor=$(echo $ver | cut -d '.' -f 1,2)
      echo $minor # should be 1.3
      ```

      ```sh
      gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=join-service -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
      gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=kmsserver -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
      gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=verification-service -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
      gh workflow run build-micro-service-manual.yml --ref release/v$minor -F microService=qemu-metadata-api -F imageTag=v$ver -F version=$ver --repo edgelesssys/constellation
      ```

   3. Use [Build operator manual](https://github.com/edgelesssys/constellation/actions/workflows/build-operator-manual.yml) and run the pipeline once with the following parameters:
      * branch: `release/v1.3`
      * Container image tag: `v1.3.0`

      ```sh
      # Alternative from CLI
      gh workflow run build-operator-manual.yml --ref release/v$minor -F imageTag=v$ver --repo edgelesssys/constellation
      ```

   4. Review and update the changelog with all changes since the last release. [GitHub's diff view](https://github.com/edgelesssys/constellation/compare/v2.0.0...main) helps a lot!
      1. Rename the "Unreleased" heading to "[v1.3.0] - YYYY-MM-DD" and link the version to the upcoming release tag.
      2. Create a new block for unreleased changes
   5. Update the project version in [CMakeLists.txt](/CMakeLists.txt) to `1.3.0` (without v).
   6. Update the `version` key in [constellation-services/Chart.yaml](/cli/internal/helm/charts/edgeless/constellation-services/Chart.yaml) and [operators/Chart.yaml](/cli/internal/helm/charts/edgeless/operators/Chart.yaml). Also update the `version` key for all subcharts, e.g. [Chart.yaml](/cli/internal/helm/charts/edgeless/constellation-services/charts/kms/Chart.yaml). Lastly, update the `dependencies.*.version` key for all dependencies in the main charts [constellation-services/Chart.yaml](/cli/internal/helm/charts/edgeless/constellation-services/Chart.yaml) and [operators/Chart.yaml](/cli/internal/helm/charts/edgeless/operators/Chart.yaml).
   7. Update the [default image versions in the enterprise config](/internal/config/images_enterprise.go)
   8. When the microservice builds are finished, update the versions in [versions.go](../../internal/versions/versions.go#L33-L39) to `v1.3.0`, **add the container hashes** and **push your changes**.

      ```sh
      # crane: https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md
      crane digest ghcr.io/edgelesssys/constellation/node-operator:v$ver
      crane digest ghcr.io/edgelesssys/constellation/join-service:v$ver
      crane digest ghcr.io/edgelesssys/constellation/access-manager:v$ver
      crane digest ghcr.io/edgelesssys/constellation/kmsserver:v$ver
      crane digest ghcr.io/edgelesssys/constellation/verification-service:v$ver
      crane digest ghcr.io/edgelesssys/constellation/qemu-metadata-api:v$ver
      ```
   9. Create a [production OS image](/.github/workflows/build-os-image.yml)

      ```sh
      gh workflow run build-os-image.yml --ref release/v$minor -F imageVersion=v$ver -F isRelease=true -F stream=stable
      ```

   10. [Generate measurements](/.github/workflows/generate-measurements.yml) for the images.

       ```sh
       gh workflow run generate-measurements.yml --ref release/v$minor -F osImage=v$ver -F isDebugImage=false -F signMeasurements=true
       ```

   11. Update the expected measurements in [`measurements_enterprise.go`](/internal/attestation/measurements/measurements_enterprise.go) using the measurements generated in the previous step, and **push your changes**.

   12. Run manual E2E tests using [Linux](/.github/workflows/e2e-test-manual.yml) and [macOS](/.github/workflows/e2e-test-manual-macos.yml) to confirm functionality and stability.

       ```sh
       gh workflow run e2e-test-manual.yml --ref release/v$minor -F cloudProvider=aws -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
       gh workflow run e2e-test-manual-macos.yml --ref release/v$minor -F cloudProvider=aws -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
       gh workflow run e2e-test-manual.yml --ref release/v$minor -F cloudProvider=azure -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
       gh workflow run e2e-test-manual-macos.yml --ref release/v$minor -F cloudProvider=azure -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
       gh workflow run e2e-test-manual.yml --ref release/v$minor -F cloudProvider=gcp -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
       gh workflow run e2e-test-manual-macos.yml --ref release/v$minor -F cloudProvider=gcp -F test="sonobuoy full" -F osImage=v$ver -F isDebugImage=false -F keepMeasurements=true
       gh workflow run e2e-mini.yml --ref release/v$minor
       ```

   13. Create a new tag on this release branch.

       ```sh
       git tag v$ver
       git push origin refs/tags/v$ver
       ```

   14. Run the [Release CLI](https://github.com/edgelesssys/constellation/actions/workflows/release-cli.yml) action on the tag.

       ```sh
       gh workflow run release-cli.yml --ref v$ver
       ```

       * The previous step will create a draft release. Check the build output for a link to the draft release. Review & approve.

6. Check if the Constellation OS image is available via the versions API.

   ```sh
   curl -s "https://cdn.confidential.cloud/constellation/v1/ref/-/stream/stable/versions/minor/v${minor}/image.json"
   # the list of versions should contain the new version
   ```

7. Export, download, and make the image available in S3 for trusted launch users. To achieve this:

   ```sh
   TARGET_DISK=export-${ver}
   az disk create -g constellation-images -l westus -n ${TARGET_DISK} --hyper-v-generation V2 --os-type Linux --sku standard_lrs --security-type TrustedLaunch --gallery-image-reference /subscriptions/0d202bbb-4fa7-4af8-8125-58c269a05435/resourceGroups/CONSTELLATION-IMAGES/providers/Microsoft.Compute/galleries/Constellation/images/constellation/versions/${ver}
   ```

   * Find the created resource in Azure
   * Go to `Settings` -> `Export` and `Generate URLs`
   * Download both the disk image (first link) and the VM state (second link)
   * Rename the disk (`abcd`) to `constellation.img`.
   * Rename the state (UUID) to `constellation.vmgs`.
   * Go to the [AWS S3 bucket for trusted launch](https://s3.console.aws.amazon.com/s3/buckets/cdn-constellation-backend?prefix=constellation/images/azure/trusted-launch/&region=eu-central-1), and create a new folder with the given version number.
   * Upload both the image and the state into the newly created folder.
   * Delete the disk in Azure!

8. To bring updated version numbers and other changes (if any) to main, create a new branch `feat/release` from `release/v1.3`, rebase it onto main, and create a PR to main
9. Milestones management
   1. Create a new milestone for the next release
   2. Add the next release manager and an approximate release date to the milestone description
   3. Close the milestone for the release
   4. Move open issues and PRs from the closed milestone to the next milestone
10. If the release is a minor version release, tag the latest commit on main as the start of the next pre-release phase.

    ```sh
    nextMinorVer=$(echo "${ver}" | awk -F. -v OFS=. '{$2 += 1 ; print}')
    git checkout main
    git pull
    git tag v${nextMinorVer}-pre
    git push origin refs/tags/v${nextMinorVer}-pre
    ```

11. Test Constellation mini up
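As a sketch, this smoke test could look like the following, using the released CLI:

```sh
# spin up a local MiniConstellation cluster, then tear it down
constellation mini up
constellation mini down
```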

26
dev-docs/workflows/terraform.md
Normal file

@@ -0,0 +1,26 @@
# Terraform development

## iamlive

[iamlive](https://github.com/iann0036/iamlive) dynamically determines the minimal
permissions needed to call a set of AWS API calls.

It uses a local proxy to intercept API calls and incrementally generate the AWS
policy.

In one session, start `iamlive`:

```sh
iamlive -mode proxy -bind-addr 0.0.0.0:10080 -force-wildcard-resource -output-file iamlive.policy.json
```

In another session, execute Terraform:

```sh
PREFIX="record-iam"
terraform init
HTTP_PROXY=http://127.0.0.1:10080 HTTPS_PROXY=http://127.0.0.1:10080 AWS_CA_BUNDLE="${HOME}/.iamlive/ca.pem" terraform apply -auto-approve -var name_prefix=${PREFIX}
HTTP_PROXY=http://127.0.0.1:10080 HTTPS_PROXY=http://127.0.0.1:10080 AWS_CA_BUNDLE="${HOME}/.iamlive/ca.pem" terraform destroy -auto-approve -var name_prefix=${PREFIX}
```

`iamlive` will present the generated policy; after you \<CTRL-C\> the `iamlive` process, it will also write the policy to the specified file.
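The written policy file can then be inspected, assuming `jq` is available:

```sh
jq . iamlive.policy.json
```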

52
dev-docs/workflows/upgrade-kubernetes.md
Normal file

@@ -0,0 +1,52 @@
# Upgrading Kubernetes

Constellation is a Kubernetes distribution. As such, dependencies on Kubernetes versions exist in multiple places:

- The desired Kubernetes version deployed by `kubeadm init`
- Kubernetes resources (deployments made while initializing Kubernetes, including the `cloud-controller-manager`, `cluster-autoscaler` and more)
- Kubernetes Go dependencies for the bootstrapper code

## Understand what has changed

Before adding support for a new Kubernetes version, it is a very good idea to [read the release notes](https://kubernetes.io/releases/notes/) and to identify breaking changes.

## Upgrading Kubernetes resources

Everything related to Kubernetes versions is tracked in [the versions file](/internal/versions/versions.go). Add a new `ValidK8sVersion` and fill out the `VersionConfigs` entry for that version.
During cluster initialization, multiple Kubernetes resources are deployed. Some of these should be upgraded with Kubernetes.
You can check available version tags for container images using [the container registry tags API](https://docs.docker.com/registry/spec/api/#listing-image-tags):

```sh
curl -qL https://registry.k8s.io/v2/autoscaling/cluster-autoscaler/tags/list | jq .tags
curl -qL https://registry.k8s.io/v2/cloud-controller-manager/tags/list | jq .tags
curl -qL https://registry.k8s.io/v2/provider-aws/cloud-controller-manager/tags/list | jq .tags
curl -qL https://mcr.microsoft.com/v2/oss/kubernetes/azure-cloud-controller-manager/tags/list | jq .tags
curl -qL https://mcr.microsoft.com/v2/oss/kubernetes/azure-cloud-node-manager/tags/list | jq .tags
# [...]
```

Normally, Renovate will handle the upgrading of Kubernetes dependencies.

## Test the new Kubernetes version

- Set up a Constellation cluster using the new image with the new bootstrapper binary and check if Kubernetes is deployed successfully.

  ```sh
  # should print the new k8s version for every node
  kubectl get nodes -o wide
  # read the logs for pods deployed in the kube-system namespace and ensure they are healthy
  kubectl -n kube-system get pods
  kubectl -n kube-system logs [...]
  kubectl -n kube-system describe pods
  ```

- Read the logs of the main Kubernetes components by getting a shell on the nodes and scan for errors / deprecation warnings:

  ```sh
  journalctl -u kubelet
  journalctl -u containerd
  ```

- Conduct e2e tests
  - [Run the sonobuoy test suite against your branch](https://sonobuoy.io/)
  - [Run CI e2e tests](/.github/docs/README.md)