Malte Poll
operators: infrastructure autodiscovery (#1958)
* helm: configure GCP cloud controller manager to search in all zones of a region

See also: d716fdd452/providers/gce/gce.go (L376-L380)

* operators: add nodeGroupName to ScalingGroup CRD

NodeGroupName is the human-friendly name of the node group that will be exposed to customers via the Constellation config in the future.

* operators: support simple executor / scheduler to reconcile on non-k8s resources

* operators: add new return type for ListScalingGroups to support arbitrary node groups

* operators: ListScalingGroups should return additionally created node groups on AWS

* operators: ListScalingGroups should return additionally created node groups on Azure

* operators: ListScalingGroups should return additionally created node groups on GCP

* operators: ListScalingGroups should return additionally created node groups on unsupported CSPs

* operators: implement external scaling group reconciler

This controller scans the cloud provider infrastructure and changes k8s resources accordingly.
It creates ScaleSet resources when new node groups are created and deletes them if the node groups are removed.

* operators: no longer create scale sets when the operator starts

In the future, scale sets will be created dynamically.

* operators: watch for node join/leave events using a controller

* operators: deploy new controllers

* docs: update auto scaling documentation with support for node groups
2023-07-05 07:27:34 +02:00


Scale your cluster

Constellation provides all features of a Kubernetes cluster including scaling and autoscaling.

Worker node scaling

Autoscaling

Constellation comes with autoscaling disabled by default. To enable autoscaling, find the scaling group of worker nodes:

kubectl get scalinggroups -o json | yq '.items | .[] | select(.spec.role == "Worker") | [{"name": .metadata.name, "nodeGroupName": .spec.nodeGroupName}]'

This will output a list of scaling groups with the corresponding cloud provider name (name) and the cloud provider agnostic name of the node group (nodeGroupName).
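The exact output depends on your cloud provider and cluster; a purely illustrative example of its shape (names here are hypothetical):

```yaml
- name: projects/example-project/zones/europe-west3-b/instanceGroupManagers/my-cluster-worker
  nodeGroupName: worker_default
```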

Then, patch the autoscaling field of the scaling group with the desired name, setting it to true:

# Replace <name> with the name of the scaling group you want to enable autoscaling for
worker_group=<name>
kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P

The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run. You can configure the minimum and maximum number of worker nodes in the scaling group by patching the min or max fields of the scaling group resource:

kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P

The cluster autoscaler will now never provision more than 5 worker nodes.
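Analogously, you can raise the lower bound so the autoscaler always keeps a baseline of workers; a minimal sketch (the value 2 is just an example):

```shell
# Require at least 2 worker nodes at all times (example value)
kubectl patch scalinggroups $worker_group --patch '{"spec":{"min": 2}}' --type='merge'
kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P
```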

If you want to see the autoscaling in action, try adding a deployment with a lot of replicas, like the following Nginx deployment. The number of replicas needed to trigger the autoscaling depends on the size and count of your worker nodes. Wait for the rollout of the deployment to finish and compare the number of worker nodes before and after the deployment:

kubectl create deployment nginx --image=nginx --replicas 150
kubectl get nodes
kubectl rollout status deployment nginx
kubectl get nodes
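Once you've compared the node counts, you can remove the test deployment again; the autoscaler will scale the worker group back down after its scale-down delay:

```shell
# Clean up the demo workload
kubectl delete deployment nginx
```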

Manual scaling

Alternatively, you can manually scale your cluster up or down:

Azure:

  1. Find your Constellation resource group.
  2. Select the scale-set-workers.
  3. Go to settings and scaling.
  4. Set the new instance count and save.

GCP:

  1. In Compute Engine, go to Instance Groups.
  2. Edit the worker instance group.
  3. Set the new number of instances and save.

AWS:

  1. Go to Auto Scaling Groups and select the worker ASG to scale up.
  2. Click Edit.
  3. Set the new (increased) Desired capacity and Update.
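If you prefer the command line over the web console, the same resize can be sketched with the provider CLIs. All names below are placeholders; substitute your actual resource group, instance group, and Auto Scaling group names:

```shell
# Azure: set a new instance count on the worker scale set
az vmss scale --resource-group <resource-group> --name <scale-set-workers> --new-capacity 4

# GCP: resize the worker managed instance group
gcloud compute instance-groups managed resize <worker-instance-group> --size 4 --zone <zone>

# AWS: set the desired capacity of the worker Auto Scaling group
aws autoscaling set-desired-capacity --auto-scaling-group-name <worker-asg> --desired-capacity 4
```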

Control-plane node scaling

Control-plane nodes can only be scaled manually and only scaled up!

To increase the number of control-plane nodes, follow these steps:

Azure:

  1. Find your Constellation resource group.
  2. Select the scale-set-controlplanes.
  3. Go to settings and scaling.
  4. Set the new (increased) instance count and save.

GCP:

  1. In Compute Engine, go to Instance Groups.
  2. Edit the control-plane instance group.
  3. Set the new (increased) number of instances and save.

AWS:

  1. Go to Auto Scaling Groups and select the control-plane ASG to scale up.
  2. Click Edit.
  3. Set the new (increased) Desired capacity and Update.

If you scale down the number of control-plane nodes, the removed nodes won't be able to exit the etcd cluster correctly. This endangers the quorum that's required to run a stable Kubernetes control plane.