Applies the updated NodeVersion object with one request
instead of two. This makes sure that the first request does
not accidentally put the cluster into an "upgrade in progress"
status, which would otherwise force users to run apply twice.
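As an illustration only (not the actual CLI code; the API group, version, and spec field names are assumptions), applying both version fields of the NodeVersion object in a single request could look like this:

```go
// Sketch: update image and Kubernetes version of the NodeVersion custom
// resource with one request, so the cluster never observes a half-applied
// spec that would report an "upgrade in progress" status.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func applyNodeVersion(ctx context.Context, client dynamic.Interface, newImage, newK8sVersion string) error {
	gvr := schema.GroupVersionResource{
		Group:    "update.edgeless.systems", // assumed API group
		Version:  "v1alpha1",                // assumed version
		Resource: "nodeversions",
	}
	// Assuming the resource is cluster-scoped and named "constellation-version".
	nodeVersion, err := client.Resource(gvr).Get(ctx, "constellation-version", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Set both fields before issuing the single Update call.
	if err := unstructured.SetNestedField(nodeVersion.Object, newImage, "spec", "image"); err != nil {
		return err
	}
	if err := unstructured.SetNestedField(nodeVersion.Object, newK8sVersion, "spec", "kubernetesClusterVersion"); err != nil {
		return err
	}
	_, err = client.Resource(gvr).Update(ctx, nodeVersion, metav1.UpdateOptions{})
	return err
}
```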
* `upgrade apply` will try to make the locally configured version and the
actual version in the cluster match by applying the necessary
upgrades.
* Skip image or Kubernetes upgrades if one is already
in progress.
* Skip downgrades and versions equal to the currently running one
(a minimal sketch of this skip logic follows this list).
* Move NodeVersionResourceName constant from operators
to internal as it's needed in the CLI.
* Introduce handleImageUpgrade & handleServiceUpgrade
* Rename cloudUpgrader.Upgrade to UpgradeImage
* Remove helm flag
* Remove hint about development status
* Generate kubeconfig with unique name
* Move create name flag to config
* Add name validation to config
* Move name flag in e2e tests to config generation
* Remove name flag from create
* Update asciinema flow
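A minimal sketch of the skip rules above (function and error names are assumptions, not the actual CLI code):

```go
// Sketch: an upgrade is only applied if none is already in progress and the
// configured version is strictly newer than the running one.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

var errSkipped = fmt.Errorf("upgrade skipped")

// checkUpgrade expects semver strings with a leading "v", e.g. "v2.6.0".
func checkUpgrade(running, configured string, upgradeInProgress bool) error {
	if upgradeInProgress {
		return fmt.Errorf("%w: an upgrade is already in progress", errSkipped)
	}
	switch semver.Compare(configured, running) {
	case 0:
		return fmt.Errorf("%w: cluster already runs version %s", errSkipped, running)
	case -1:
		return fmt.Errorf("%w: downgrade from %s to %s is not supported", errSkipped, running, configured)
	}
	return nil // configured is newer than running: apply the upgrade
}
```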
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
Upgrade check is used to find updates for the current cluster.
Optionally, the found upgrades can be persisted to the config
for consumption by the `upgrade execute` command.
The old `upgrade execute` in this commit does not yet work with
the new `upgrade plan`.
The current versions are read from the cluster.
Supported versions are read from the CLI and the versionsapi.
Adds a new config field MicroserviceVersion that will be used
by `upgrade execute` to update the service versions.
The field is optional until 2.7.
A deprecation warning for the upgrade key is printed during
config validation.
Kubernetes versions now specify the patch version to make it
explicit to users whether an upgrade changes the Kubernetes version.
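A minimal sketch of the described config changes (field and key names are assumptions, not the actual config schema):

```go
// Sketch: microserviceVersion is a new, for now optional field,
// kubernetesVersion carries an explicit patch level, and the deprecated
// upgrade key only triggers a warning during validation.
package config

import "log"

type Config struct {
	// KubernetesVersion now includes the patch level, e.g. "1.24.10".
	KubernetesVersion string `yaml:"kubernetesVersion"`
	// MicroserviceVersion is consumed by `upgrade execute`; optional until 2.7.
	MicroserviceVersion string `yaml:"microserviceVersion,omitempty"`
	// Upgrade is deprecated and kept for backwards compatibility only.
	Upgrade *UpgradeConfig `yaml:"upgrade,omitempty"`
}

type UpgradeConfig struct {
	Image string `yaml:"image,omitempty"`
}

func (c *Config) Validate() {
	if c.Upgrade != nil {
		log.Println("WARNING: the 'upgrade' config key is deprecated and will be removed in a future release")
	}
}
```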
* Update constellation-os to constellation-version references
* Update nodeimage to nodeversion in the CRD type name
Signed-off-by: Fabian Kammel <fk@edgeless.systems>
Co-authored-by: Malte Poll <mp@edgeless.systems>
These backups could be used in case an upgrade
misbehaves after Helm declared it successful.
The manual backups are required because `helm rollback`
won't touch custom resources, and changes to CRDs
delete resources of the old version.
Run with: constellation upgrade execute --helm.
This will only upgrade the Helm charts; no config is needed.
Upgrades are implemented via Helm's upgrade action, i.e., they
automatically roll back if something goes wrong. Releases can
still be managed via Helm, even after an upgrade with Constellation
has been performed.
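A minimal sketch of an atomic chart upgrade via Helm's upgrade action (release name, namespace, and chart path are placeholders, not the actual Constellation charts):

```go
// Sketch: Atomic=true makes Helm roll the release back automatically if the
// upgrade fails.
package main

import (
	"time"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
)

func upgradeRelease(cfg *action.Configuration) error {
	chart, err := loader.Load("./charts/example") // placeholder chart path
	if err != nil {
		return err
	}

	up := action.NewUpgrade(cfg)
	up.Namespace = "kube-system"
	up.Atomic = true // roll back automatically on failure
	up.Timeout = 5 * time.Minute

	_, err = up.Run("example-release", chart, map[string]interface{}{})
	return err
}
```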
Currently not user-facing, as CRD/CR backups are still in progress.
These backups should be created automatically and saved to the
user's disk, as updates may delete CRs. This happens implicitly
through CRD upgrades, which are part of microservice upgrades.
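A minimal sketch of such a CR backup (not the actual backup code; the on-disk layout is an assumption):

```go
// Sketch: list all objects of a custom resource and write them to disk as
// YAML before the upgrade runs.
package main

import (
	"context"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"sigs.k8s.io/yaml"
)

func backupCRs(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource, dir string) error {
	list, err := client.Resource(gvr).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, item := range list.Items {
		raw, err := yaml.Marshal(item.Object)
		if err != nil {
			return err
		}
		if err := os.WriteFile(filepath.Join(dir, item.GetName()+".yaml"), raw, 0o600); err != nil {
			return err
		}
	}
	return nil
}
```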
* Merge enforced and expected measurements
* Update measurement generation to new format
* Write expected measurements hex encoded by default
* Allow hex- or base64-encoded expected measurements
(see the decoding sketch after this list)
* Allow hex- or base64-encoded clusterID
* Allow security upgrades to the warnOnly flag
* Upload signed measurements in JSON format
* Fetch measurements either from JSON or YAML
* Use yaml.v3 instead of yaml.v2
* Error on invalid enforced selection
* Add placeholder measurements to config
* Update e2e test to new measurement format
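A minimal sketch of accepting either encoding (the real measurement types differ):

```go
// Sketch: try hex first, fall back to base64.
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// decodeMeasurement accepts a measurement value that is either hex or base64 encoded.
func decodeMeasurement(s string) ([]byte, error) {
	if b, err := hex.DecodeString(s); err == nil {
		return b, nil
	}
	if b, err := base64.StdEncoding.DecodeString(s); err == nil {
		return b, nil
	}
	return nil, fmt.Errorf("measurement %q is neither hex nor base64 encoded", s)
}
```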
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* Refactor measurements to use consistent types and less byte pushing
* Refactor: only rely on a single multierr dependency
* Extend config creation with env var support (a minimal sketch follows this list)
* Document changes
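A minimal sketch of env var support during config creation (the variable name is hypothetical):

```go
// Sketch: a value from the environment overrides the generated default.
package main

import "os"

func defaultWithEnv(envKey, fallback string) string {
	if v, ok := os.LookupEnv(envKey); ok && v != "" {
		return v
	}
	return fallback
}

// Usage (hypothetical variable name):
//   image := defaultWithEnv("CONSTELL_IMAGE", "v2.6.0")
```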
Signed-off-by: Fabian Kammel <fk@edgeless.systems>
* Include EXC0014 and fix issues.
* Include EXC0012 and fix issues.
Signed-off-by: Fabian Kammel <fk@edgeless.systems>
Co-authored-by: Otto Bittner <cobittner@posteo.net>
* Add .DS_Store to .gitignore
* Add AWS to config / supported instance types
* Move AWS terraform skeleton to cli/internal/terraform
* Move currently unused IAM to hack/terraform/aws
* Print supported AWS instance types when AWS dev flag is set
* Block everything aTLS related (e.g. init, verify) until AWS attestation is available
* Create/Terminate AWS dev cluster when dev flag is set
* Restrict Nitro instances to those that specifically support NitroTPM
* Pin zone for subnets
This is not great for HA, but for now we need to avoid the two subnets
ending up in different zones, which would cause the load balancer
to be unable to connect to the targets.
This should be replaced later with a better implementation that
dynamically uses multiple subnets within the same region,
based on the number of nodes or similar.
* Add AWS/GCP to the Terraform TestLoader unit test (see the sketch after this list)
* Add uid tag and create log group
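A minimal sketch of such a loader test (the embedded template path is an assumption, not the actual test code):

```go
// Sketch: check that Terraform templates are embedded for each provider.
package terraform

import (
	"embed"
	"testing"
)

//go:embed terraform
var templates embed.FS // assumed location of the embedded Terraform files

func TestLoader(t *testing.T) {
	for _, provider := range []string{"aws", "gcp"} {
		t.Run(provider, func(t *testing.T) {
			entries, err := templates.ReadDir("terraform/" + provider)
			if err != nil || len(entries) == 0 {
				t.Fatalf("no Terraform templates embedded for %s: %v", provider, err)
			}
		})
	}
}
```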
Co-authored-by: Daniel Weiße <dw@edgeless.systems>
Co-authored-by: Malte Poll <mp@edgeless.systems>