From bcab213139ec56941453858459a1137da7c76d5b Mon Sep 17 00:00:00 2001
From: katexochen <49727155+katexochen@users.noreply.github.com>
Date: Tue, 20 Sep 2022 09:56:25 +0200
Subject: [PATCH] Document cluster based autoscaling

---
 .../examples/horizontal-scaling.md          |  2 +-
 docs/docs/workflows/create.md               |  7 ---
 docs/docs/workflows/scale.md                | 44 ++++++++++++++++++-
 docs/styles/Vocab/constellation/accept.txt  |  2 +
 4 files changed, 46 insertions(+), 9 deletions(-)

diff --git a/docs/docs/getting-started/examples/horizontal-scaling.md b/docs/docs/getting-started/examples/horizontal-scaling.md
index f14eeb199..dfaf9e742 100644
--- a/docs/docs/getting-started/examples/horizontal-scaling.md
+++ b/docs/docs/getting-started/examples/horizontal-scaling.md
@@ -2,7 +2,7 @@
 This example demonstrates Constellation's autoscaling capabilities. It's based on the Kubernetes [HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). During the following steps, Constellation will spawn new VMs on demand, verify them, add them to the cluster, and delete them again when the load has settled down.
 
 ## Requirements
-The cluster needs to be initialized with Kubernetes 1.23 or later. In addition, [autoscaling must be enabled](../../workflows/create.md#the-init-step) to enable Constellation to assign new nodes dynamically.
+The cluster needs to be initialized with Kubernetes 1.23 or later. In addition, [autoscaling must be enabled](../../workflows/scale.md) to enable Constellation to assign new nodes dynamically.
 
 Just for this example specifically, the cluster should have as few worker nodes in the beginning as possible. Start with a small cluster with only *one* low-powered node for the control-plane node and *one* low-powered worker node.
 
diff --git a/docs/docs/workflows/create.md b/docs/docs/workflows/create.md
index 8e825cb8b..6019008c8 100644
--- a/docs/docs/workflows/create.md
+++ b/docs/docs/workflows/create.md
@@ -17,7 +17,6 @@
 Before creating your cluster you need to decide on
 * the initial size of your cluster (the number of control-plane and worker nodes)
 * the machine type of your nodes (depending on the availability in your cloud environment)
-* whether to enable autoscaling for your cluster (automatically adding and removing nodes depending on resource demands)
 
 You can find the currently supported machine types for your cloud environment in the [installation guide](../architecture/orchestration.md).
 
@@ -72,12 +71,6 @@ The following command initializes and bootstraps your cluster:
 ```bash
 constellation init
 ```
-To enable autoscaling in your cluster, add the `--autoscale` flag:
-
-```bash
-constellation init --autoscale
-```
-
 Next, configure `kubectl` for your Constellation cluster:
 
 ```bash
diff --git a/docs/docs/workflows/scale.md b/docs/docs/workflows/scale.md
index 3d74f082e..2ed4407cf 100644
--- a/docs/docs/workflows/scale.md
+++ b/docs/docs/workflows/scale.md
@@ -4,7 +4,49 @@ Constellation provides all features of a Kubernetes cluster including scaling an
 
 ## Worker node scaling
 
-[During cluster initialization](create.md#init) you can choose to deploy the [cluster autoscaler](https://github.com/kubernetes/autoscaler). It automatically provisions additional worker nodes so that all pods have a place to run. Alternatively, you can choose to manually scale your cluster up or down:
+### Autoscaling
+
+Constellation comes with autoscaling disabled by default. To enable autoscaling, find the scaling group of
+worker nodes:
+
+```bash
+worker_group=$(kubectl get scalinggroups -o json | jq -r '.items[].metadata.name | select(contains("worker"))')
+echo "The name of your worker scaling group is '$worker_group'"
+```
+
+Then, patch the `autoscaling` field of the scaling group resource to `true`:
+
+```bash
+kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
+kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
+```
+
+The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run.
+You can configure the minimum and maximum number of worker nodes in the scaling group by patching the `min` or
+`max` fields of the scaling group resource:
+
+```bash
+kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
+kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
+```
+
+The cluster autoscaler will now never provision more than 5 worker nodes.
+
+If you want to see the autoscaling in action, try adding a deployment with a large number of replicas, like the
+following Nginx deployment. The number of replicas needed to trigger autoscaling depends on the size and count
+of your worker nodes. Wait for the rollout of the deployment to finish and compare the number of worker nodes
+before and after the deployment:
+
+```bash
+kubectl create deployment nginx --image=nginx --replicas 150
+kubectl get nodes
+kubectl rollout status deployment nginx
+kubectl get nodes
+```
+
+### Manual scaling
+
+Alternatively, you can choose to manually scale your cluster up or down:
diff --git a/docs/styles/Vocab/constellation/accept.txt b/docs/styles/Vocab/constellation/accept.txt
index 6b816de7b..f543ba219 100644
--- a/docs/styles/Vocab/constellation/accept.txt
+++ b/docs/styles/Vocab/constellation/accept.txt
@@ -39,3 +39,5 @@ proxied
 whitepaper
 WireGuard
 Xeon
+Nginx
+rollout
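
As a quick way to exercise the documented workflow end to end, the individual steps from `scale.md` can be combined into a single snippet. This is only a sketch, not part of the patch itself: it assumes the same `ScalingGroup` fields shown above (`spec.autoscaling`, `spec.min`, `spec.max`), that they can be set together in one merge patch, and the Nginx demo deployment from the example.

```bash
# Sketch: find the worker scaling group, enable autoscaling with bounds, then undo the demo.
# Assumes the ScalingGroup fields documented above (spec.autoscaling, spec.min, spec.max).
worker_group=$(kubectl get scalinggroups -o json | jq -r '.items[].metadata.name | select(contains("worker"))')

# Enable autoscaling and set the node bounds in a single merge patch (assumed to be accepted together).
kubectl patch scalinggroups "$worker_group" --patch '{"spec":{"autoscaling": true, "min": 1, "max": 5}}' --type='merge'
kubectl get scalinggroup "$worker_group" -o jsonpath='{.spec}' | jq

# Trigger a scale-up, wait for the rollout, then compare node counts.
kubectl create deployment nginx --image=nginx --replicas 150
kubectl rollout status deployment nginx
kubectl get nodes

# Clean up the demo and disable autoscaling again.
kubectl delete deployment nginx
kubectl patch scalinggroups "$worker_group" --patch '{"spec":{"autoscaling": false}}' --type='merge'
```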