mirror of https://github.com/edgelesssys/constellation.git
synced 2024-10-01 01:36:09 -04:00

Document cluster based autoscaling

This commit is contained in:
parent a040921e94
commit bcab213139
@ -2,7 +2,7 @@

This example demonstrates Constellation's autoscaling capabilities. It's based on the Kubernetes [HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). During the following steps, Constellation will spawn new VMs on demand, verify them, add them to the cluster, and delete them again when the load has settled down.

## Requirements

The cluster needs to be initialized with Kubernetes 1.23 or later. In addition, [autoscaling must be enabled](../../workflows/create.md#the-init-step) to allow Constellation to assign new nodes dynamically.
The cluster needs to be initialized with Kubernetes 1.23 or later. In addition, [autoscaling must be enabled](../../workflows/scale.md) to allow Constellation to assign new nodes dynamically.

For this example specifically, the cluster should start with as few worker nodes as possible: *one* low-powered control-plane node and *one* low-powered worker node.
@ -17,7 +17,6 @@ Before creating your cluster you need to decide on

* the initial size of your cluster (the number of control-plane and worker nodes)
* the machine type of your nodes (depending on the availability in your cloud environment)
* whether to enable autoscaling for your cluster (automatically adding and removing nodes depending on resource demands)

You can find the currently supported machine types for your cloud environment in the [installation guide](../architecture/orchestration.md).
@ -72,12 +71,6 @@ The following command initializes and bootstraps your cluster:

```bash
constellation init
```

To enable autoscaling in your cluster, add the `--autoscale` flag:

```bash
constellation init --autoscale
```

Next, configure `kubectl` for your Constellation cluster:

```bash
@ -4,7 +4,49 @@ Constellation provides all features of a Kubernetes cluster including scaling an

## Worker node scaling

[During cluster initialization](create.md#init) you can choose to deploy the [cluster autoscaler](https://github.com/kubernetes/autoscaler). It automatically provisions additional worker nodes so that all pods have a place to run. Alternatively, you can choose to manually scale your cluster up or down:

### Autoscaling

Constellation comes with autoscaling disabled by default. To enable autoscaling, find the scaling group of worker nodes:

```bash
worker_group=$(kubectl get scalinggroups -o json | jq -r '.items[].metadata.name | select(contains("worker"))')
echo "The name of your worker scaling group is '$worker_group'"
```
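As a sketch of what the `jq` filter above computes, the selection keeps only the item whose name contains `worker`. The sample JSON below is hypothetical and much smaller than a real `ScalingGroup` list:

```python
import json

# Hypothetical, minimal stand-in for `kubectl get scalinggroups -o json` output;
# real objects carry many more fields.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "constellation-control-plane-abc123"}},
  {"metadata": {"name": "constellation-worker-def456"}}
]}
""")

# Equivalent of: jq -r '.items[].metadata.name | select(contains("worker"))'
worker_groups = [item["metadata"]["name"]
                 for item in sample["items"]
                 if "worker" in item["metadata"]["name"]]

print(worker_groups[0])  # constellation-worker-def456
```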

Then, patch the `autoscaling` field of the scaling group resource to `true`:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
```
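The `--type='merge'` flag applies a JSON merge patch (RFC 7386): fields present in the patch overwrite the corresponding fields of the resource, and everything else is left untouched. A minimal sketch of that semantics, with hypothetical spec values:

```python
# Minimal sketch of RFC 7386 JSON merge patch semantics, as used by
# `kubectl patch --type='merge'`. The spec values below are illustrative only.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # a null value deletes the field
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

spec = {"spec": {"autoscaling": False, "min": 1, "max": 10}}
patched = merge_patch(spec, {"spec": {"autoscaling": True}})
print(patched)  # only 'autoscaling' changes; 'min' and 'max' survive
```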

The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run. You can configure the minimum and maximum number of worker nodes in the scaling group by patching the `min` or `max` fields of the scaling group resource:

```bash
kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
```

The cluster autoscaler will now never provision more than 5 worker nodes.
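Conceptually, the autoscaler keeps the worker count between the group's `min` and `max`; a minimal sketch of that clamping (the real cluster autoscaler's scaling decision is far more involved):

```python
def clamp_node_count(desired: int, minimum: int, maximum: int) -> int:
    """Keep a desired node count within the scaling group's bounds."""
    return max(minimum, min(desired, maximum))

print(clamp_node_count(8, 1, 5))  # 5: never more than max
print(clamp_node_count(0, 1, 5))  # 1: never fewer than min
```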

If you want to see autoscaling in action, try adding a deployment with a large number of replicas, like the following Nginx deployment. The number of replicas needed to trigger autoscaling depends on the size and count of your worker nodes. Wait for the rollout of the deployment to finish and compare the number of worker nodes before and after the deployment:

```bash
kubectl create deployment nginx --image=nginx --replicas 150
kubectl -n kube-system get nodes
kubectl rollout status deployment nginx
kubectl -n kube-system get nodes
```
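A rough back-of-the-envelope estimate of how many extra nodes such a deployment forces: each worker fits only a bounded number of pods, so divide replicas by per-node capacity. The pods-per-node figure below is an assumption for illustration — the real limit depends on machine type, kubelet settings, and per-pod resource requests:

```python
import math

# Assumed values for illustration only; not Constellation defaults.
pods_per_node = 30          # hypothetical per-node pod capacity
current_worker_nodes = 1
replicas = 150

nodes_needed = math.ceil(replicas / pods_per_node)
extra_nodes = max(0, nodes_needed - current_worker_nodes)
print(nodes_needed, extra_nodes)  # 5 4
```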

### Manual scaling

Alternatively, you can choose to manually scale your cluster up or down:

<tabs groupId="csp">
<tabItem value="azure" label="Azure" default>
@ -39,3 +39,5 @@ proxied

whitepaper
WireGuard
Xeon
Nginx
rollout