* Replace UpdateAttestationConfig with ApplyJoinConfig (see the sketch after this list)
* Don't set up the join-config via Helm; it is now managed directly by our CLI during init and upgrade
* Remove measurementSalt and attestationConfig parsing from Helm; they were only needed for the JoinConfig
* Add migration step to remove join-config from Helm management
* Update attestation config troubleshooting tip
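A minimal sketch of what the CLI-side ApplyJoinConfig can look like, assuming the join config lives in a ConfigMap in kube-system; the ConfigMap name, data key, and function shape are assumptions:

```go
package cli

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// applyJoinConfig creates the join-config ConfigMap on init, or overwrites it
// in place during upgrade. Names and keys are assumptions for illustration.
func applyJoinConfig(ctx context.Context, client kubernetes.Interface, attestationCfg []byte) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "join-config", Namespace: "kube-system"},
		Data:       map[string]string{"attestationConfig": string(attestationCfg)},
	}

	_, err := client.CoreV1().ConfigMaps("kube-system").Create(ctx, cm, metav1.CreateOptions{})
	if k8serrors.IsAlreadyExists(err) {
		// Upgrade path: the ConfigMap already exists, so overwrite its data.
		_, err = client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	}
	if err != nil {
		return fmt.Errorf("applying join-config: %w", err)
	}
	return nil
}
```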
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* add current helm chart
* disable service controller for aws ccm
* add new iam roles
* doc AWS internet LB + add to LB test
* pass clusterName to helm for AWS LB
* fix update-aws-lb chart to also include .helmignore
* move chart outside services
* working state
* add subnet tags for AWS subnet discovery
* fix .helmignore load rule with file in subdirectory
* upgrade iam profile
* revert new loader impl since Cilium is not correctly loaded
* install chart if not already present during `upgrade apply` (see the sketch after this list)
* cleanup PR + fix build + add todos
* shared helm pkg for cli install and bootstrapper
* add link to eks docs
* refactor iamMigrationCmd
* delete unused helm.symwallk
* move iammigrate to upgrade pkg
* fixup! delete unused helm.symwallk
* add to upgradecheck
* remove nodeSelector from Go code (Otto)
* update IAM docs and sort permissions + remove duplicate roles
* fix bug in `upgrade check`
* better upgrade check output when svc version upgrade not possible
* pr feedback
* remove force flag in upgrade_test
* use upgrader.GetUpgradeID instead of extra type
* remove todos + fix check
* update LB doc (Leo)
* remove bootstrapper helm package
* Update cli/internal/cmd/upgradecheck.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* final nits
* add docs for e2e upgrade test setup
* Apply suggestions from code review
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* Update cli/internal/helm/loader.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* Update cli/internal/cmd/tfmigrationclient.go
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
* address Daniel's review feedback
* link to the IAM permissions instead of manually updating them (agreed with Leo)
* disable iam upgrade in upgrade apply
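For the "install chart if not already present during `upgrade apply`" step above, a minimal sketch using the Helm Go SDK; the wiring around it (release name, namespace, values) is an assumption:

```go
package helm

import (
	"errors"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
	"helm.sh/helm/v3/pkg/storage/driver"
)

// installOrUpgrade installs the chart when no release exists yet (e.g. on a
// cluster created before the chart was introduced) and upgrades it otherwise.
func installOrUpgrade(cfg *action.Configuration, ch *chart.Chart, release, namespace string, vals map[string]any) error {
	// History returns driver.ErrReleaseNotFound if the release was never installed.
	if _, err := action.NewHistory(cfg).Run(release); errors.Is(err, driver.ErrReleaseNotFound) {
		install := action.NewInstall(cfg)
		install.ReleaseName = release
		install.Namespace = namespace
		_, err := install.Run(ch, vals)
		return err
	} else if err != nil {
		return err
	}
	upgrade := action.NewUpgrade(cfg)
	upgrade.Namespace = namespace
	_, err := upgrade.Run(release, ch, vals)
	return err
}
```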
---------
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
Co-authored-by: Malte Poll
* helm: configure GCP cloud controller manager to search in all zones of a region
See also: d716fdd452/providers/gce/gce.go (L376-L380)
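A minimal sketch, assuming the CCM is configured through a gce.conf in which the multizone flag controls whether the provider looks beyond the local zone (cf. the linked gce.go lines); the exact set of fields Constellation passes is an assumption:

```go
package helm

// gceConf renders a minimal gce.conf for the GCP cloud controller manager.
// With multizone enabled, the provider searches all zones of the region
// instead of only the zone it runs in.
func gceConf(projectID string) string {
	return "[global]\n" +
		"project-id = " + projectID + "\n" +
		"multizone = true\n"
}
```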
* operators: add nodeGroupName to ScalingGroup CRD
NodeGroupName is the human-friendly name of the node group that will be exposed to customers via the Constellation config in the future.
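Roughly, the CRD change amounts to a new spec field; everything besides NodeGroupName is an assumption for illustration:

```go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ScalingGroupSpec defines the desired state of a ScalingGroup.
// Only the fields relevant to this change are shown.
type ScalingGroupSpec struct {
	// GroupID is the CSP-specific identifier of the scaling group.
	GroupID string `json:"groupId,omitempty"`
	// NodeGroupName is the human-friendly name of the node group, as it will
	// be exposed to customers via the Constellation config.
	NodeGroupName string `json:"nodeGroupName,omitempty"`
}

// ScalingGroup is the Schema for the scalinggroups API.
type ScalingGroup struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ScalingGroupSpec `json:"spec,omitempty"`
}
```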
* operators: support simple executor / scheduler to reconcile on non-k8s resources
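Cloud resources emit no watch events, so a plain poll loop has to stand in for the usual controller triggers. A minimal sketch; the interval and retry policy are assumptions:

```go
package executor

import (
	"context"
	"log"
	"time"
)

// Controller reconciles state that lives outside the Kubernetes API
// (for example cloud scaling groups).
type Controller interface {
	Reconcile(ctx context.Context) error
}

// Executor triggers a Controller at a fixed interval instead of on events.
type Executor struct {
	Interval   time.Duration
	Controller Controller
}

// Start runs the reconcile loop until the context is canceled.
func (e *Executor) Start(ctx context.Context) {
	ticker := time.NewTicker(e.Interval)
	defer ticker.Stop()
	for {
		if err := e.Controller.Reconcile(ctx); err != nil {
			log.Printf("reconcile failed, retrying on next tick: %v", err)
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```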
* operators: add new return type for ListScalingGroups to support arbitrary node groups
* operators: ListScalingGroups should return additionally created node groups on AWS
* operators: ListScalingGroups should return additionally created node groups on Azure
* operators: ListScalingGroups should return additionally created node groups on GCP
* operators: ListScalingGroups should return additionally created node groups on unsupported CSPs
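The new return type shared by the per-CSP implementations could look like the following; the field names and interface name are assumptions based on the commit messages:

```go
package cloud

import "context"

// NodeGroup describes one scaling group of the cluster, including node
// groups created by the customer in addition to the default ones.
type NodeGroup struct {
	Name    string // human-friendly node group name
	GroupID string // CSP-specific scaling group identifier
	Role    string // e.g. "control-plane" or "worker"
}

// ScalingGroupDiscoverer is implemented once per CSP (AWS, Azure, GCP,
// and a stub for unsupported providers).
type ScalingGroupDiscoverer interface {
	ListScalingGroups(ctx context.Context, uid string) ([]NodeGroup, error)
}
```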
* operators: implement external scaling group reconciler
This controller scans the cloud provider infrastructure and changes k8s resources accordingly.
It creates ScaleSet resources when new node groups are created and deletes them if the node groups are removed.
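Reusing the NodeGroup and ScalingGroupDiscoverer types from the sketch above, the reconciler's core logic is a set difference in both directions; the store interface is an assumption:

```go
package cloud

import "context"

// scalingGroupStore abstracts the k8s-side resources the reconciler manages.
type scalingGroupStore interface {
	List(ctx context.Context) (map[string]bool, error) // existing groups by ID
	Create(ctx context.Context, group NodeGroup) error
	Delete(ctx context.Context, groupID string) error
}

// reconcileExternal mirrors cloud node groups into k8s resources: groups that
// exist in the cloud but not in k8s are created, and resources whose backing
// group disappeared are deleted.
func reconcileExternal(ctx context.Context, disc ScalingGroupDiscoverer, store scalingGroupStore, uid string) error {
	groups, err := disc.ListScalingGroups(ctx, uid)
	if err != nil {
		return err
	}
	existing, err := store.List(ctx)
	if err != nil {
		return err
	}
	wanted := map[string]bool{}
	for _, g := range groups {
		wanted[g.GroupID] = true
		if !existing[g.GroupID] {
			if err := store.Create(ctx, g); err != nil {
				return err
			}
		}
	}
	for id := range existing {
		if !wanted[id] {
			if err := store.Delete(ctx, id); err != nil {
				return err
			}
		}
	}
	return nil
}
```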
* operators: no longer create scale sets when the operator starts
In the future, scale sets will be created dynamically.
* operators: watch for node join/leave events using a controller
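A minimal controller-runtime sketch for reacting to node join/leave events; what the handler does with them is an assumption:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// NodeJoinLeaveReconciler is triggered whenever a Node appears or disappears.
type NodeJoinLeaveReconciler struct {
	client.Client
}

func (r *NodeJoinLeaveReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var node corev1.Node
	if err := r.Get(ctx, req.NamespacedName, &node); err != nil {
		// Not found => the node left the cluster.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Node exists => it joined or was updated; react accordingly.
	return ctrl.Result{}, nil
}

func (r *NodeJoinLeaveReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Node{}).
		Complete(r)
}
```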
* operators: deploy new controllers
* docs: update auto scaling documentation with support for node groups
* Add attestation options to config
* Add join-config migration path for clusters with the old measurement format (see the sketch after this list)
* Always create MAA provider for Azure SNP clusters
* Remove confidential VM option from provider in favor of attestation options
* cli: add config migrate command to handle config migration (#1678)
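The measurement migration could look roughly like this; the old hex-string format and the new struct's field names are assumptions:

```go
package migration

import "encoding/hex"

// Measurement is the new per-PCR format, adding a warnOnly flag to the
// previously plain digest. Field names are illustrative.
type Measurement struct {
	Expected []byte
	WarnOnly bool
}

// migrateMeasurements converts the old PCR-index-to-digest map into the
// new format.
func migrateMeasurements(old map[uint32]string) (map[uint32]Measurement, error) {
	migrated := make(map[uint32]Measurement, len(old))
	for idx, digest := range old {
		expected, err := hex.DecodeString(digest)
		if err != nil {
			return nil, err
		}
		// Previously all measurements were enforced, so warnOnly stays false.
		migrated[idx] = Measurement{Expected: expected, WarnOnly: false}
	}
	return migrated, nil
}
```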
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
The test is implemented as a Go test and can be executed as a Bazel target.
The general workflow is to set up a cluster, point the test to the workspace
containing the kubeconfig and the Constellation config, and specify a target
image, Kubernetes version, and service version. The test succeeds if it
detects all target versions in the cluster within the configured timeout.
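A condensed sketch of the described polling loop; the helper names and how versions are collected from the cluster are assumptions:

```go
package e2e

import (
	"context"
	"testing"
	"time"
)

// clusterVersions would be gathered from the cluster via the kubeconfig,
// e.g. from node image annotations, kubelet versions, and service pods.
type clusterVersions struct {
	Image, Kubernetes, Services string
}

// testUpgradeSucceeds polls the cluster until all target versions are
// reached or the timeout expires.
func testUpgradeSucceeds(t *testing.T, get func(context.Context) (clusterVersions, error), want clusterVersions, timeout time.Duration) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		got, err := get(ctx)
		if err == nil && got == want {
			return // all target versions reached
		}
		select {
		case <-ctx.Done():
			t.Fatalf("timed out waiting for versions: got %+v, want %+v", got, want)
		case <-ticker.C:
		}
	}
}
```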
The CI automates the above steps. A separate workflow is introduced because
the test takes multiple inputs; adding all of them to the manual e2e test
seemed confusing.
Co-authored-by: Fabian Kammel <fk@edgeless.systems>
* Convert enforceIDKeyDigest setting to an enum (sketched below)
* Use MAA fallback in Azure SNP attestation
* Only create MAA provider if MAA fallback is enabled
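A sketch of the resulting enum; the value names are assumptions, as the commits only state that the boolean became an enum with an MAA fallback option:

```go
package config

// Enforcement replaces the old boolean enforceIDKeyDigest setting.
type Enforcement uint32

const (
	// StrictChecking fails attestation if the ID key digest does not match.
	StrictChecking Enforcement = iota
	// MAAFallback accepts the report if Microsoft Azure Attestation vouches for it.
	MAAFallback
	// WarnOnly logs a mismatch but accepts the report.
	WarnOnly
)
```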
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
Co-authored-by: Thomas Tendyck <tt@edgeless.systems>
* Add attestation type to config (optional for now)
* Get attestation variant from config in CLI
* Set attestation variant for Constellation services in helm deployments (see the sketch after this list)
* Remove AzureCVM variable from helm deployments
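A minimal sketch of passing the variant into the services chart values in place of the former AzureCVM flag; the values key is an assumption:

```go
package helm

// extendServiceVals passes the configured attestation variant to the
// Constellation services chart.
func extendServiceVals(vals map[string]any, attestationVariant string) {
	global, ok := vals["global"].(map[string]any)
	if !ok {
		global = map[string]any{}
		vals["global"] = global
	}
	global["attestationVariant"] = attestationVariant
}
```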
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* As charts receive information such as the container image from the CLI, it makes sense to also version the charts based on the CLI version (see the sketch after this list).
* The pseudoversion is recalculated when running CMake.
* When merging changes from the release branch to main, a new commit is introduced that sets PROJECT_VERSION back to 0.0.0, so that builds include a pseudoversion.
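Stamping the charts at load time could look like this; the function and where the version string comes from are assumptions (e.g. set at build time from PROJECT_VERSION or the CMake-computed pseudoversion):

```go
package helm

import "helm.sh/helm/v3/pkg/chart"

// patchChartVersion stamps the chart and its subcharts with the CLI's
// (pseudo)version so that chart and CLI versions always match.
func patchChartVersion(ch *chart.Chart, versionInfo string) {
	ch.Metadata.Version = versionInfo
	for _, dep := range ch.Dependencies() {
		dep.Metadata.Version = versionInfo
	}
}
```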
In light of extending our eKMS support, it will be helpful to use the word
"KMS" more precisely. KMS should refer to the actual component that manages
keys. The keyservice, also called KMS in the Constellation code, does not
manage keys itself; it talks to a KMS backend, which in turn does the actual
key management.