docs: document node groups and migration from old config fields (#2175)

This commit is contained in:
Malte Poll 2023-08-09 09:46:22 +02:00 committed by GitHub
parent eb2f3c3021
commit e1c6c533ed
2 changed files with 47 additions and 0 deletions


@ -17,6 +17,9 @@ Use [`constellation config migrate`](./cli.md#constellation-config-migrate) to a
## Migrating from CLI versions before 2.10
- AWS cluster upgrades require additional IAM permissions for the newly introduced `aws-load-balancer-controller`. Please upgrade your IAM roles using `iam upgrade apply`. This will show the necessary changes and apply them, if desired.
- The global `nodeGroups` field was added.
- The fields `instanceType`, `stateDiskSizeGB`, and `stateDiskType` for each cloud provider are now part of the configuration of individual node groups.
- The `constellation create` command no longer uses the flags `--control-plane-count` and `--worker-count`. Instead, the initial node count is configured per node group in the `nodeGroups` field.
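The move described above can be sketched as a before/after comparison of `constellation-conf.yml`. The exact layout of the old provider section is shown here only for illustration; use `constellation config migrate` rather than editing by hand.

```yaml
# Before (CLI < 2.10): instance settings lived per cloud provider
# (illustrative layout, AWS shown).
provider:
  aws:
    instanceType: c6a.xlarge
    stateDiskSizeGB: 30
    stateDiskType: gp3

# After (CLI >= 2.10): the same fields are configured per node group
# under the global nodeGroups field, alongside the initial node count.
nodeGroups:
  control_plane_default:
    role: control-plane
    instanceType: c6a.xlarge
    stateDiskSizeGB: 30
    stateDiskType: gp3
    initialCount: 3
```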
## Migrating from CLI versions before 2.9


@ -80,6 +80,50 @@ SNP-based attestation will be enabled as soon as a fix is verified.
Fill the desired VM type into the **instanceType** fields in the `constellation-conf.yml` file.
## Creating additional node groups
By default, Constellation creates the node groups `control_plane_default` and `worker_default` for control-plane nodes and workers, respectively.
If you require additional control-plane or worker groups with different instance types, zone placements, or disk sizes, you can define further node groups in the `constellation-conf.yml` file.
Each node group can be scaled individually.
Consider the following example for AWS:
```yaml
nodeGroups:
  control_plane_default:
    role: control-plane
    instanceType: c6a.xlarge
    stateDiskSizeGB: 30
    stateDiskType: gp3
    zone: eu-west-1c
    initialCount: 3
  worker_default:
    role: worker
    instanceType: c6a.xlarge
    stateDiskSizeGB: 30
    stateDiskType: gp3
    zone: eu-west-1c
    initialCount: 2
  high_cpu:
    role: worker
    instanceType: c6a.24xlarge
    stateDiskSizeGB: 128
    stateDiskType: gp3
    zone: eu-west-1c
    initialCount: 1
```
This configuration creates an additional node group `high_cpu` with a larger instance type and disk.
You can use the `zone` field to specify which availability zone the nodes of a group are placed in.
On Azure, this field is empty by default and nodes are automatically spread across availability zones.
Consult the documentation of your cloud provider for more information:
- [AWS](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/)
- [Azure](https://azure.microsoft.com/en-us/explore/global-infrastructure/availability-zones)
- [GCP](https://cloud.google.com/compute/docs/regions-zones)
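For Azure, leaving `zone` empty lets the platform spread nodes across availability zones, as described above. A minimal sketch (the instance and disk types shown are illustrative values, not prescribed defaults):

```yaml
nodeGroups:
  worker_default:
    role: worker
    instanceType: Standard_DC4as_v5   # illustrative Azure CVM size
    stateDiskSizeGB: 30
    stateDiskType: Premium_LRS        # illustrative disk type
    zone: ""                          # empty: nodes are spread across zones automatically
    initialCount: 2
```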
## Choosing a Kubernetes version
To learn which Kubernetes versions can be installed with your current CLI, you can run `constellation config kubernetes-versions`.