Mirror of https://github.com/edgelesssys/constellation.git

Post v2.20.0 release updates to main (#3525)

* docs: release 2.20
* chore: update version.txt to v2.21.0-pre
* chore: update CI for v2.20.0

Co-authored-by: edgelessci <edgelessci@users.noreply.github.com>

Parent: b03e671a62
Commit: ab2782a2a2
75 changed files with 7858 additions and 3 deletions
docs/versioned_docs/version-2.20/workflows/scale.md (new file, 122 lines)
# Scale your cluster

Constellation provides all features of a Kubernetes cluster including scaling and autoscaling.

## Worker node scaling

### Autoscaling

Constellation comes with autoscaling disabled by default. To enable autoscaling, find the scaling group of worker nodes:
```bash
kubectl get scalinggroups -o json | yq '.items | .[] | select(.spec.role == "Worker") | [{"name": .metadata.name, "nodeGroupName": .spec.nodeGroupName}]'
```

This will output a list of scaling groups with the corresponding cloud provider name (`name`) and the cloud-provider-agnostic name of the node group (`nodeGroupName`).

Then, for the scaling group with the desired `name`, patch its `autoscaling` field to `true`:
```bash
# Replace <name> with the name of the scaling group you want to enable autoscaling for
worker_group=<name>
kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P
```
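
The second command prints the updated spec so you can confirm that `autoscaling` is now `true`. Should you want to turn autoscaling off again later, the same patch mechanism works in reverse; a minimal sketch, assuming the same `$worker_group`:

```bash
# Disable autoscaling for the scaling group again (illustrative sketch)
kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": false}}' --type='merge'
```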

The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run. You can configure the minimum and maximum number of worker nodes in the scaling group by patching the `min` or `max` fields of the scaling group resource:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P
```
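
The same mechanism applies to the lower bound. As a sketch, assuming the same `$worker_group` and an illustrative minimum of two nodes:

```bash
# Keep at least 2 worker nodes in the scaling group (illustrative value)
kubectl patch scalinggroups $worker_group --patch '{"spec":{"min": 2}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P
```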

The cluster autoscaler will now never provision more than 5 worker nodes.

If you want to see the autoscaling in action, try adding a deployment with a lot of replicas, like the following Nginx deployment. The number of replicas needed to trigger the autoscaling depends on the size and count of your worker nodes. Wait for the rollout of the deployment to finish and compare the number of worker nodes before and after the deployment:
```bash
kubectl create deployment nginx --image=nginx --replicas 150
kubectl -n kube-system get nodes
kubectl rollout status deployment nginx
kubectl -n kube-system get nodes
```
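
When you're done experimenting, you can remove the demo deployment again; the cluster autoscaler should then scale the now-idle worker nodes back down over time. A minimal sketch:

```bash
# Remove the demo deployment created above
kubectl delete deployment nginx
```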

### Manual scaling

Alternatively, you can manually scale your cluster up or down:
<Tabs groupId="csp">
<TabItem value="aws" label="AWS">

1. Go to Auto Scaling Groups and select the worker ASG to scale up.
2. Click **Edit**.
3. Set the new (increased) **Desired capacity** and **Update**.
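
If you prefer the AWS CLI over these console steps, the same change can be sketched like this; the Auto Scaling group name and the target capacity are placeholders to substitute with your own values:

```bash
# Hypothetical example: set the desired capacity of the worker Auto Scaling group
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name <worker-asg-name> \
  --desired-capacity 4
```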

</TabItem>
<TabItem value="azure" label="Azure">

1. Find your Constellation resource group.
2. Select the `scale-set-workers` scale set.
3. Go to **settings** and **scaling**.
4. Set the new **instance count** and **save**.
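
The Azure CLI offers an equivalent operation. As a sketch, with the resource group and capacity as placeholders:

```bash
# Hypothetical example: resize the worker scale set via the Azure CLI
az vmss scale \
  --resource-group <constellation-resource-group> \
  --name scale-set-workers \
  --new-capacity 4
```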

</TabItem>
<TabItem value="gcp" label="GCP">

1. In Compute Engine go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **worker** instance group.
3. Set the new **number of instances** and **save**.
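
With the gcloud CLI, the same resize could look roughly like this; the instance group name, zone, and size are placeholders:

```bash
# Hypothetical example: resize the worker managed instance group
gcloud compute instance-groups managed resize <worker-instance-group> \
  --size 4 \
  --zone <zone>
```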

</TabItem>
<TabItem value="stackit" label="STACKIT">

Dynamic cluster scaling isn't yet supported for STACKIT.
Support will be introduced in one of the upcoming releases.

</TabItem>
</Tabs>

## Control-plane node scaling

Control-plane nodes can **only be scaled manually and only scaled up**!

To increase the number of control-plane nodes, follow these steps:
<Tabs groupId="csp">
<TabItem value="aws" label="AWS">

1. Go to Auto Scaling Groups and select the control-plane ASG to scale up.
2. Click **Edit**.
3. Set the new (increased) **Desired capacity** and **Update**.

</TabItem>
<TabItem value="azure" label="Azure">

1. Find your Constellation resource group.
2. Select the `scale-set-controlplanes` scale set.
3. Go to **settings** and **scaling**.
4. Set the new (increased) **instance count** and **save**.

</TabItem>
<TabItem value="gcp" label="GCP">

1. In Compute Engine go to [Instance Groups](https://console.cloud.google.com/compute/instanceGroups/).
2. **Edit** the **control-plane** instance group.
3. Set the new (increased) **number of instances** and **save**.

</TabItem>
<TabItem value="stackit" label="STACKIT">

Dynamic cluster scaling isn't yet supported for STACKIT.
Support will be introduced in one of the upcoming releases.

</TabItem>
</Tabs>
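
After scaling up, you can verify that the new control-plane nodes have joined the cluster. A minimal sketch, assuming the nodes carry the standard kubeadm `node-role.kubernetes.io/control-plane` label:

```bash
# List control-plane nodes and check that the count matches the new instance count
kubectl get nodes -l node-role.kubernetes.io/control-plane
```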

If you scale down the number of control-plane nodes, the removed nodes won't be able to exit the `etcd` cluster correctly. This will endanger the quorum that's required to run a stable Kubernetes control plane.