operators: infrastructure autodiscovery (#1958)
* helm: configure GCP cloud controller manager to search in all zones of a region
See also: d716fdd452/providers/gce/gce.go (L376-L380)
* operators: add nodeGroupName to ScalingGroup CRD
NodeGroupName is the human-friendly name of the node group that will be exposed to customers via the Constellation config in the future.
* operators: support simple executor / scheduler to reconcile on non-k8s resources
* operators: add new return type for ListScalingGroups to support arbitrary node groups
* operators: ListScalingGroups should return additionally created node groups on AWS
* operators: ListScalingGroups should return additionally created node groups on Azure
* operators: ListScalingGroups should return additionally created node groups on GCP
* operators: ListScalingGroups should return additionally created node groups on unsupported CSPs
* operators: implement external scaling group reconciler
This controller scans the cloud provider infrastructure and updates the corresponding Kubernetes resources.
It creates ScaleSet resources when new node groups are created and deletes them when node groups are removed (a simplified sketch of this reconcile loop follows after this list).
* operators: no longer create scale sets when the operator starts
In the future, scale sets will be created dynamically.
* operators: watch for node join/leave events using a controller
* operators: deploy new controllers
* docs: update auto scaling documentation with support for node groups
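
The bullets above only summarize the new machinery. As a rough illustration, the sketch below shows the overall shape of such an infrastructure-autodiscovery loop: a cloud-provider-agnostic node group description (roughly what a ListScalingGroups-style call could return), a reconcile step that mirrors cloud node groups into Kubernetes ScalingGroup resources, and a simple interval-based scheduler, since changes in cloud infrastructure do not emit Kubernetes events to react to. All type, interface, and function names here are hypothetical simplifications, not the operator's actual API.

```go
// Package autodiscovery sketches the reconcile-on-cloud-infrastructure idea
// described in the commit message. Hypothetical and simplified; not the
// operator's real code.
package autodiscovery

import (
	"context"
	"fmt"
	"time"
)

// NodeGroup is a cloud-provider-agnostic view of one scaling group,
// roughly what a ListScalingGroups-style call could return.
type NodeGroup struct {
	Name          string // cloud provider resource name
	NodeGroupName string // human-friendly name, as exposed in the Constellation config
	Role          string // "ControlPlane" or "Worker"
}

// CloudLister lists node groups directly from the cloud provider.
type CloudLister interface {
	ListScalingGroups(ctx context.Context) ([]NodeGroup, error)
}

// ScalingGroupStore creates and deletes ScalingGroup resources in Kubernetes.
type ScalingGroupStore interface {
	Existing(ctx context.Context) (map[string]NodeGroup, error)
	Create(ctx context.Context, group NodeGroup) error
	Delete(ctx context.Context, name string) error
}

// reconcile makes the Kubernetes view match the cloud provider view:
// newly discovered node groups get a resource, removed ones are cleaned up.
func reconcile(ctx context.Context, cloud CloudLister, store ScalingGroupStore) error {
	want, err := cloud.ListScalingGroups(ctx)
	if err != nil {
		return fmt.Errorf("listing cloud node groups: %w", err)
	}
	have, err := store.Existing(ctx)
	if err != nil {
		return fmt.Errorf("listing ScalingGroup resources: %w", err)
	}
	seen := make(map[string]bool, len(want))
	for _, group := range want {
		seen[group.Name] = true
		if _, ok := have[group.Name]; !ok {
			if err := store.Create(ctx, group); err != nil {
				return err
			}
		}
	}
	for name := range have {
		if !seen[name] {
			if err := store.Delete(ctx, name); err != nil {
				return err
			}
		}
	}
	return nil
}

// RunScheduler reconciles on a fixed interval, standing in for the commit's
// "simple executor / scheduler": cloud resources are not Kubernetes objects,
// so there are no watch events to trigger on.
func RunScheduler(ctx context.Context, interval time.Duration, cloud CloudLister, store ScalingGroupStore) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := reconcile(ctx, cloud, store); err != nil {
			fmt.Println("reconcile failed:", err)
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```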
Parent: 10a540c290
Commit: 388ff011a3
36 changed files with 1836 additions and 232 deletions
@@ -10,15 +10,18 @@ Constellation comes with autoscaling disabled by default. To enable autoscaling,
 worker nodes:
 
 ```bash
-worker_group=$(kubectl get scalinggroups -o json | jq -r '.items[].metadata.name | select(contains("worker"))')
-echo "The name of your worker scaling group is '$worker_group'"
+kubectl get scalinggroups -o json | yq '.items | .[] | select(.spec.role == "Worker") | [{"name": .metadata.name, "nodeGroupName": .spec.nodeGroupName}]'
 ```
 
-Then, patch the `autoscaling` field of the scaling group resource to `true`:
+This will output a list of scaling groups with the corresponding cloud provider name (`name`) and the cloud-provider-agnostic name of the node group (`nodeGroupName`).
+
+Then, patch the `autoscaling` field of the scaling group resource with the desired `name` to `true`:
 
 ```bash
+# Replace <name> with the name of the scaling group you want to enable autoscaling for
+worker_group=<name>
 kubectl patch scalinggroups $worker_group --patch '{"spec":{"autoscaling": true}}' --type='merge'
-kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
+kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P
 ```
 
 The cluster autoscaler now automatically provisions additional worker nodes so that all pods have a place to run.
@@ -27,7 +30,7 @@ You can configure the minimum and maximum number of worker nodes in the scaling
 
 ```bash
 kubectl patch scalinggroups $worker_group --patch '{"spec":{"max": 5}}' --type='merge'
-kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | jq
+kubectl get scalinggroup $worker_group -o jsonpath='{.spec}' | yq -P
 ```
 
 The cluster autoscaler will now never provision more than 5 worker nodes.