* add current helm chart
* disable service controller for aws ccm
* add new iam roles
* document AWS internet LB + add to LB test
* pass clusterName to helm for AWS LB
* fix update-aws-lb chart to also include .helmignore
* move chart outside services
* working state
* add subnet tags for AWS subnet discovery
* fix .helmignore load rule with file in subdirectory
* upgrade iam profile
* revert new loader impl since Cilium is not correctly loaded
* install chart if not already present during `upgrade apply`
* cleanup PR + fix build + add todos
* shared helm pkg for cli install and bootstrapper
* add link to eks docs
* refactor iamMigrationCmd
* delete unused helm.symwallk
* move iammigrate to upgrade pkg
* fixup! delete unused helm.symwallk
* add to upgradecheck
* remove nodeSelector from go code (Otto)
* update iam docs and sort permissions + remove duplicate roles
* fix bug in `upgrade check`
* better upgrade check output when service version upgrade is not possible
* address PR feedback
* remove force flag in upgrade_test
* use upgrader.GetUpgradeID instead of extra type
* remove todos + fix check
* update LB doc (Leo)
* remove bootstrapper helm package
* Update cli/internal/cmd/upgradecheck.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* final nits
* add docs for e2e upgrade test setup
* Apply suggestions from code review
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* Update cli/internal/helm/loader.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* Update cli/internal/cmd/tfmigrationclient.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* address Daniel's review comments
* link to the iam permissions instead of manually updating them (agreed with Leo)
* disable iam upgrade in upgrade apply
---------
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
Co-authored-by: Malte Poll
* cli: add "--skip-helm-wait" flag
This flag disables the atomic and wait flags during helm install.
This is useful when debugging a failing constellation init, since the user keeps access to
the cluster even if one of the deployments never reaches a ready state.
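A minimal sketch of how such a flag can be wired through to the helm v3 Go SDK; the helper and its parameters are illustrative, not the CLI's actual installer code:

```go
package helm

import (
	"time"

	"helm.sh/helm/v3/pkg/action"
)

// newInstallAction configures a helm install. When skipWait is set, the
// atomic and wait behavior is disabled, so a debugging user keeps cluster
// access even if a deployment never becomes ready.
func newInstallAction(cfg *action.Configuration, releaseName string, skipWait bool) *action.Install {
	install := action.NewInstall(cfg)
	install.ReleaseName = releaseName
	install.Timeout = 10 * time.Minute
	install.Wait = !skipWait   // wait for resources to become ready
	install.Atomic = !skipWait // roll back the release on failure
	return install
}
```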
* helm: configure GCP cloud controller manager to search in all zones of a region
See also: d716fdd452/providers/gce/gce.go (L376-L380)
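For reference, a hedged sketch of the relevant CCM configuration: `multizone` is the upstream gce.conf option that leaves the provider's managed-zones list empty so all zones of the region are searched (see the linked gce.go lines). The other keys are placeholders; the exact values Constellation renders may differ.

```go
// Illustrative gce.conf fragment passed to the GCP cloud controller manager.
const gceConf = `[global]
project-id = my-project
multizone  = true  ; search all zones of the region, not just the node's zone
`
```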
* operators: add nodeGroupName to ScalingGroup CRD
NodeGroupName is the human-friendly name of the node group that will be exposed to customers via the Constellation config in the future.
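A reduced sketch of the CRD spec with the new field; the surrounding fields are illustrative, not the complete spec:

```go
// ScalingGroupSpec defines the desired state of a ScalingGroup.
type ScalingGroupSpec struct {
	// GroupID is the CSP-specific ID of the scaling group.
	GroupID string `json:"groupId,omitempty"`
	// NodeGroupName is the human-friendly name of the node group, as it
	// will be exposed to customers via the Constellation config.
	NodeGroupName string `json:"nodeGroupName,omitempty"`
	// Min and Max bound the autoscaler for this group.
	Min int32 `json:"min,omitempty"`
	Max int32 `json:"max,omitempty"`
}
```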
* operators: support a simple executor/scheduler to reconcile non-k8s resources
* operators: add new return type for ListScalingGroups to support arbitrary node groups
* operators: ListScalingGroups should return additionally created node groups on AWS
* operators: ListScalingGroups should return additionally created node groups on Azure
* operators: ListScalingGroups should return additionally created node groups on GCP
* operators: ListScalingGroups should return additionally created node groups on unsupported CSPs
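A hypothetical shape of the new return type: a flat list instead of a fixed control-plane/worker pair, so additionally created node groups can be represented on every CSP. Field and type names are sketched, not the actual API:

```go
// Role marks what a node group is used for.
type Role string

const (
	ControlPlaneRole Role = "control-plane"
	WorkerRole       Role = "worker"
)

// ScalingGroup describes one node group found in the cloud infrastructure.
type ScalingGroup struct {
	Name          string // resource name in the CSP
	NodeGroupName string // human-friendly node group name
	GroupID       string // CSP-specific scaling group ID
	Role          Role   // control-plane or worker
}

// ListScalingGroups returns all scaling groups belonging to the cluster,
// including node groups created after the initial setup (sketched signature):
//
//	ListScalingGroups(ctx context.Context, uid string) ([]ScalingGroup, error)
```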
* operators: implement external scaling group reconciler
This controller scans the cloud provider infrastructure and changes k8s resources accordingly.
It creates ScaleSet resources when new node groups are created and deletes them if the node groups are removed.
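A condensed sketch of the reconcile loop, assuming a controller-runtime reconciler; helper methods and field names are hypothetical, and error handling and status updates are trimmed:

```go
func (r *ExternalScalingGroupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Scan the cloud provider infrastructure for existing node groups.
	groups, err := r.scalingGroupGetter.ListScalingGroups(ctx, r.clusterUID)
	if err != nil {
		return ctrl.Result{}, err
	}
	// Create a resource for every node group that is new ...
	for _, group := range groups {
		if err := r.ensureScalingGroup(ctx, group); err != nil {
			return ctrl.Result{}, err
		}
	}
	// ... and delete resources whose node group no longer exists.
	if err := r.deleteOrphanedScalingGroups(ctx, groups); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{RequeueAfter: 5 * time.Minute}, nil
}
```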
* operators: no longer create scale sets when the operator starts
From now on, scale sets are created dynamically.
* operators: watch for node join/leave events using a controller
* operators: deploy new controllers
* docs: update auto scaling documentation with support for node groups
* Use CLI to fetch measurements in e2e test
* Abort helm service upgrade early if user confirmation is missing
* Add container push to CLI build action
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* Fix usage of errors.As in upgrade command implementation
* Use struct pointers when working with custom errors
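The pattern behind both fixes: when a custom error defines `Error()` on a pointer receiver, only the pointer type implements `error`, so `errors.As` needs a pointer to that struct pointer as its target. The error type here is a stand-in, not the actual one from the upgrade command:

```go
package main

import (
	"errors"
	"fmt"
)

// upgradeError stands in for the custom error type in the upgrade command.
// Error is defined on the pointer receiver.
type upgradeError struct{ msg string }

func (e *upgradeError) Error() string { return e.msg }

func handle(err error) {
	// Correct usage: pass a pointer to a *upgradeError target; a value
	// target would never match, since only *upgradeError implements error.
	var target *upgradeError
	if errors.As(err, &target) {
		fmt.Println("upgrade failed:", target.msg)
	}
}
```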
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* Delete helm chart on failure before retrying installation
* Add chart name to debug output
* Remove now unused wait flag from helm Release struct
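A sketch of the retry behavior using the helm v3 SDK: if an install attempt fails, the partially installed release is removed before the next attempt. The wrapper function and retry count are illustrative:

```go
package helm

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
)

// installWithRetry retries a chart installation, deleting the failed
// release between attempts so the next install starts clean.
func installWithRetry(cfg *action.Configuration, install *action.Install, chrt *chart.Chart, vals map[string]any, retries int) error {
	var err error
	for i := 0; i < retries; i++ {
		if _, err = install.Run(chrt, vals); err == nil {
			return nil
		}
		log.Printf("installing chart %s failed: %v; uninstalling before retry", chrt.Name(), err)
		uninstall := action.NewUninstall(cfg)
		if _, uninstallErr := uninstall.Run(install.ReleaseName); uninstallErr != nil {
			log.Printf("uninstalling release %s failed: %v", install.ReleaseName, uninstallErr)
		}
	}
	return err
}
```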
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* use force to skip the compatibility and upgrade-in-progress checks
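A sketch of the force semantics: both guards are bypassed when the flag is set. Function and field names are illustrative:

```go
if !flags.force {
	if err := u.checkVersionCompatibility(ctx); err != nil {
		return fmt.Errorf("checking version compatibility: %w (use --force to skip)", err)
	}
	if err := u.checkNoUpgradeInProgress(ctx); err != nil {
		return fmt.Errorf("%w (use --force to skip)", err)
	}
}
```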
* update doc
* fix tests
* add force check for helm and k8s
* add no-op check
* fix errors.As usage
* invalidate appClientID field for Azure and provide info
* remove TestNewWithDefaultOptions case
* fix test
* remove appClientID field
* remove client secret + rename err
* remove from docs
* address Otto's feedback
* update docs
* delete env test in config since no env vars are set anymore
* Update dev-docs/workflows/github-actions.md
Co-authored-by: Otto Bittner <cobittner@posteo.net>
* print WARNING to stderr
* fix check
---------
Co-authored-by: Otto Bittner <cobittner@posteo.net>
During upgrade, all custom resources are backed up to files on the
local file system. Since old versions are also backed up, the
version needs to be reflected in the file name.
* Add attestation options to config
* Add join-config migration path for clusters with old measurement format
* Always create MAA provider for Azure SNP clusters
* Remove confidential VM option from provider in favor of attestation options
* cli: add config migrate command to handle config migration (#1678)
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
Resource names are only unique per kind and namespace. Without this
patch, two resources with the same name in different namespaces
would collide, and the upgrade might fail in that case.
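An illustrative helper showing how both concerns combine: kind and namespace disambiguate resources with equal names, and the version suffix (from the backup commit above) keeps old versions apart. The function is a sketch, not the actual implementation:

```go
// backupFileName builds a collision-free file name for a backed-up
// custom resource.
func backupFileName(backupDir, kind, namespace, name, version string) string {
	return filepath.Join(backupDir, fmt.Sprintf("%s_%s_%s_%s.yaml", kind, namespace, name, version))
}
```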
The new command allows checking the status of an upgrade
and which versions are installed.
It also removes the unused restclient
and makes GetConstellationVersion a function.
* Stop using measurements from the initial control-plane node as the cluster's initial measurements
* Use the measurements from the user's config instead, aligning behavior with the upgrade command
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
The test is implemented as a Go test.
It can be executed as a Bazel target.
The general workflow is to set up a cluster,
point the test at the workspace containing
the kubeconfig and the Constellation config,
and specify a target image, Kubernetes, and
service version. The test succeeds if it
detects all target versions in the cluster
within the configured timeout.
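A condensed sketch of the described test loop; the flag names and the two helpers are illustrative, not the actual implementation:

```go
package upgrade

import (
	"flag"
	"testing"
	"time"
)

var (
	workspace      = flag.String("workspace", "", "directory containing kubeconfig and constellation config")
	targetImage    = flag.String("target-image", "", "image version to wait for")
	targetK8s      = flag.String("target-kubernetes", "", "Kubernetes version to wait for")
	targetServices = flag.String("target-microservices", "", "service version to wait for")
	timeout        = flag.Duration("timeout", 2*time.Hour, "time to wait for all target versions")
)

func TestUpgradeSucceeds(t *testing.T) {
	client := clientFromWorkspace(t, *workspace) // hypothetical helper
	deadline := time.Now().Add(*timeout)
	for time.Now().Before(deadline) {
		// hypothetical helper that compares the image, Kubernetes, and
		// service versions found in the cluster against the targets
		if atTargetVersions(t, client, *targetImage, *targetK8s, *targetServices) {
			return
		}
		time.Sleep(time.Minute)
	}
	t.Fatalf("cluster did not reach all target versions within %s", *timeout)
}
```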
The CI automates the above steps.
A separate workflow is introduced since the
test takes multiple input fields; adding all
of these to the manual e2e test seemed confusing.
Co-authored-by: Fabian Kammel <fk@edgeless.systems>