Also update Azure Terraform:
ignore SNP policy changes on the resource
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: edgelessci <edgelessci@users.noreply.github.com>
Co-authored-by: Otto Bittner <cobittner@posteo.net>
* add new iam upgrade apply
* remove iam tf plan from upgrade apply check
* add iam migration warning to upgrade apply
* update release process
* document migration
* Apply suggestions from code review
Co-authored-by: Otto Bittner <cobittner@posteo.net>
* add iam upgrade
* remove upgrade dir check in test
* only ask for confirmation when --yes is not set
* make iam upgrade provider-specific
* test without separate logins
* remove csi and only add it conditionally
* Revert "test without separate logins"
This reverts commit 05a12e59c9.
* fix missing credentials
* support iam migration for all csps
* add iam upgrade label
---------
Co-authored-by: Otto Bittner <cobittner@posteo.net>
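The iam upgrade entries above introduce a new CLI surface. A minimal cobra sketch of how `constellation iam upgrade apply` might be wired, assuming the CLI's usual cobra layout (helper names are illustrative; the confirmation behavior follows the --yes entry above):

```go
package cmd

import "github.com/spf13/cobra"

// Sketch only: "iam upgrade apply" is taken from the changelog, the helper
// names are illustrative.
func newIAMUpgradeCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "upgrade",
		Short: "Manage IAM upgrades",
	}
	cmd.AddCommand(newIAMUpgradeApplyCmd())
	return cmd
}

func newIAMUpgradeApplyCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "apply",
		Short: "Apply an upgrade to the IAM resources",
		RunE:  runIAMUpgradeApply,
	}
}

func runIAMUpgradeApply(cmd *cobra.Command, _ []string) error {
	// Plan the Terraform migration, show the diff, ask for confirmation
	// (skipped with --yes), then apply.
	return nil
}
```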
* add current helm chart
* disable service controller for aws ccm
* add new iam roles
* doc AWS internet LB + add to LB test
* pass clusterName to helm for AWS LB
* fix update-aws-lb chart to also include .helmignore
* move chart outside services
* working state
* add subnet tags for AWS subnet discovery
* fix .helmignore load rule with file in subdirectory
* upgrade iam profile
* revert new loader impl since cilium is not correctly loaded
* install chart if not already present during `upgrade apply`
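A hedged sketch of the presence check this could use, assuming the helm.sh/helm/v3 action API (function name and wiring are illustrative):

```go
package helm

import (
	"errors"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/storage/driver"
)

// releaseExists reports whether a release is already installed, so that
// `upgrade apply` can fall back to an install instead of an upgrade.
func releaseExists(cfg *action.Configuration, name string) (bool, error) {
	hist := action.NewHistory(cfg)
	hist.Max = 1
	if _, err := hist.Run(name); err != nil {
		if errors.Is(err, driver.ErrReleaseNotFound) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}
```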
* cleanup PR + fix build + add todos
* shared helm pkg for cli install and bootstrapper
* add link to eks docs
* refactor iamMigrationCmd
* delete unused helm.symwallk
* move iammigrate to upgrade pkg
* fixup! delete unused helm.symwallk
* add to upgradecheck
* remove nodeSelector from go code (Otto)
* update iam docs and sort permissions + remove duplicate roles
* fix bug in `upgrade check`
* better upgrade check output when svc version upgrade not possible
* pr feedback
* remove force flag in upgrade_test
* use upgrader.GetUpgradeID instead of extra type
* remove todos + fix check
* update doc lb (leo)
* remove bootstrapper helm package
* Update cli/internal/cmd/upgradecheck.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* final nits
* add docs for e2e upgrade test setup
* Apply suggestions from code review
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* Update cli/internal/helm/loader.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* Update cli/internal/cmd/tfmigrationclient.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* fix daniel review
* link to the iam permissions instead of manually updating them (agreed with leo)
* disable iam upgrade in upgrade apply
---------
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
Co-authored-by: Malte Poll
terraform: collect apiserver cert SANs and support custom endpoint
constants: add new constants for cluster configuration and custom endpoint
cloud: support apiserver cert SANs and prepare for endpoint migration on AWS
config: add customEndpoint field
bootstrapper: use per-CSP apiserver cert SANs
cli: route customEndpoint to terraform and add migration for apiserver cert SANs
bootstrapper: change interface of GetLoadBalancerEndpoint to return host and port separately
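A sketch of the changed method shape (the surrounding interface name is illustrative):

```go
package metadata

import "context"

// providerMetadata is illustrative; the real interface lives in the bootstrapper.
type providerMetadata interface {
	// GetLoadBalancerEndpoint returns the endpoint of the cluster's load
	// balancer, now split into host and port instead of a single
	// "host:port" string, so callers no longer need net.SplitHostPort.
	GetLoadBalancerEndpoint(ctx context.Context) (host string, port string, err error)
}
```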
* deps: downgrade terraform-json to v0.15.0
terraform-exec requires a matching version of terraform-json.
Since the latest released version of terraform-exec still uses terraform-json v0.15.0,
we need to stay on that version.
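In go.mod terms, the pin might look like this (the terraform-exec version shown is illustrative):

```
require (
	github.com/hashicorp/terraform-exec v0.18.1 // version illustrative
	github.com/hashicorp/terraform-json v0.15.0 // pinned: latest terraform-exec still expects v0.15.0
)
```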
* cli: add "--skip-helm-wait" flag for "constellation init" to "constellation mini up"
* cli: add "--skip-helm-wait" flag
This flag can be used to disable the atomic and wait flags during helm install.
This is useful when debugging a failing constellation init, since the user gains access to
the cluster even when one of the deployments is never in a ready state.
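A sketch of how the flag maps onto helm's install action, assuming the helm.sh/helm/v3 action API (the parameter name mirrors the CLI flag but is otherwise illustrative):

```go
package helm

import "helm.sh/helm/v3/pkg/action"

func newInstallAction(cfg *action.Configuration, skipHelmWait bool) *action.Install {
	installer := action.NewInstall(cfg)
	// Without --skip-helm-wait, helm waits for all resources to become ready
	// (Wait) and rolls the release back if they don't (Atomic). Skipping both
	// leaves a half-ready cluster in place for debugging.
	installer.Wait = !skipHelmWait
	installer.Atomic = !skipHelmWait
	return installer
}
```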
* helm: configure GCP cloud controller manager to search in all zones of a region
See also: d716fdd452/providers/gce/gce.go (L376-L380)
* operators: add nodeGroupName to ScalingGroup CRD
NodeGroupName is the human-friendly name of the node group that will be exposed to customers via the Constellation config in the future.
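A sketch of the CRD extension, assuming kubebuilder-style API types (only NodeGroupName is the addition described above; the neighboring field is illustrative):

```go
package v1alpha1

type ScalingGroupSpec struct {
	// GroupID is the CSP-specific ID of the scaling group.
	GroupID string `json:"groupId,omitempty"`
	// NodeGroupName is the human-friendly name of the node group that will be
	// exposed to customers via the Constellation config.
	NodeGroupName string `json:"nodeGroupName,omitempty"`
}
```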
* operators: support simple executor / scheduler to reconcile on non-k8s resources
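A sketch of that executor contract: controller-runtime reconcilers are driven by k8s watch events, so controllers that reconcile pure cloud resources need a time-based trigger instead (all names are illustrative, mirroring controller-runtime's shape):

```go
package executor

import (
	"context"
	"time"
)

// Controller is run by the executor on a schedule rather than on k8s events.
type Controller interface {
	Reconcile(ctx context.Context) (Result, error)
}

// Result lets a controller request an early re-run, mirroring
// controller-runtime's ctrl.Result.
type Result struct {
	RequeueAfter time.Duration
}
```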
* operators: add new return type for ListScalingGroups to support arbitrary node groups
* operators: ListScalingGroups should return additionally created node groups on AWS
* operators: ListScalingGroups should return additionally created node groups on Azure
* operators: ListScalingGroups should return additionally created node groups on GCP
* operators: ListScalingGroups should return additionally created node groups on unsupported CSPs
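A sketch of the shared return type the per-CSP implementations above might produce; previously only the fixed control-plane and worker groups were reported (field and interface names are illustrative):

```go
package cspapi

import "context"

type ScalingGroup struct {
	Name                 string // CSP-specific name of the scaling group
	NodeGroupName        string // human-friendly node group name (see the CRD change above)
	GroupID              string // unique ID of the group at the CSP
	AutoscalingGroupName string // name used by the cluster-autoscaler
	Role                 string // control-plane or worker
}

type scalingGroupDiscoverer interface {
	// ListScalingGroups returns all scaling groups belonging to the cluster
	// with the given UID, including node groups created after the initial
	// deployment.
	ListScalingGroups(ctx context.Context, uid string) ([]ScalingGroup, error)
}
```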
* operators: implement external scaling group reconciler
This controller scans the cloud provider infrastructure and changes k8s resources accordingly.
It creates ScaleSet resources when new node groups are created and deletes them if the node groups are removed.
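A self-contained sketch of that diff-and-apply loop (all names are illustrative; the real controller also fills in the CRD details):

```go
package controllers

import "context"

type scalingGroup struct{ GroupID string }

type discoverer interface {
	ListScalingGroups(ctx context.Context, uid string) ([]scalingGroup, error)
}

type k8sClient interface {
	listScalingGroupIDs(ctx context.Context) (map[string]bool, error)
	createScalingGroup(ctx context.Context, g scalingGroup) error
	deleteScalingGroup(ctx context.Context, groupID string) error
}

type externalScalingGroupReconciler struct {
	uid   string
	cloud discoverer
	k8s   k8sClient
}

// Reconcile diffs the CSP's node groups against the cluster's resources:
// new groups get a resource created, removed groups get theirs deleted.
func (r *externalScalingGroupReconciler) Reconcile(ctx context.Context) error {
	groups, err := r.cloud.ListScalingGroups(ctx, r.uid)
	if err != nil {
		return err
	}
	existing, err := r.k8s.listScalingGroupIDs(ctx)
	if err != nil {
		return err
	}
	seen := make(map[string]bool)
	for _, g := range groups {
		seen[g.GroupID] = true
		if !existing[g.GroupID] {
			if err := r.k8s.createScalingGroup(ctx, g); err != nil {
				return err
			}
		}
	}
	for id := range existing {
		if !seen[id] {
			if err := r.k8s.deleteScalingGroup(ctx, id); err != nil {
				return err
			}
		}
	}
	return nil
}
```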
* operators: no longer create scale sets when the operator starts
Going forward, scale sets are created dynamically.
* operators: watch for node join/leave events using a controller
* operators: deploy new controllers
* docs: update auto scaling documentation with support for node groups
* init
* make zone flag mandatory again
* add info about zone update + refactor
* add comment in docs about zone update
* Update cli/internal/cmd/iamcreate_test.go
Co-authored-by: Thomas Tendyck <51411342+thomasten@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Thomas Tendyck <51411342+thomasten@users.noreply.github.com>
* thomas feedback
* add format check to config validation
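A hedged sketch of such a format check, assuming AWS-style zone names like "us-east-2a" (the actual pattern in the CLI may differ):

```go
package config

import (
	"fmt"
	"regexp"
)

// zoneRegex matches zone names of the form <region>-<digit><letter>,
// e.g. "us-east-2a" or "eu-central-1b". Pattern is an assumption.
var zoneRegex = regexp.MustCompile(`^[a-z]{2}(-[a-z]+)+-[0-9][a-z]$`)

func validateZone(zone string) error {
	if !zoneRegex.MatchString(zone) {
		return fmt.Errorf("invalid zone format: %q (expected e.g. %q)", zone, "us-east-2a")
	}
	return nil
}
```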
* remove TODO
* Update docs/docs/workflows/config.md
Co-authored-by: Thomas Tendyck <51411342+thomasten@users.noreply.github.com>
* thomas nit
---------
Co-authored-by: Thomas Tendyck <51411342+thomasten@users.noreply.github.com>
* Use CLI to fetch measurements in e2e test
* Abort helm service upgrade early if user confirmation is missing
* Add container push to CLI build action
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
This change is required to ensure we do not get TLS handshake errors when connecting to the Kubernetes API.
Currently, the certificates used by the kube-apiserver pods contain a SAN field with the (single) public IP of the load balancer.
If we allowed multiple load balancer frontend IPs, we could encounter cases where the certificate is only valid for one public IP
while we try to connect to a different one.
To prevent this, we consciously disable support for the multi-zone load balancer frontend on AWS for now.
This will be re-enabled in the future.
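For illustration, hostname verification in Go's crypto/x509 exhibits exactly this failure mode when the dialed IP is missing from the certificate's SANs (the IPs mentioned are illustrative TEST-NET addresses):

```go
package sancheck

import (
	"crypto/x509"
	"fmt"
)

// checkFrontendIP reports whether connecting to the given frontend IP would
// pass certificate verification against the apiserver certificate's SANs.
func checkFrontendIP(cert *x509.Certificate, ip string) error {
	if err := cert.VerifyHostname(ip); err != nil {
		// e.g. x509: certificate is valid for 198.51.100.1, not 203.0.113.7
		return fmt.Errorf("connecting via %s would fail the TLS handshake: %w", ip, err)
	}
	return nil
}
```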