* `upgrade apply` will try to make the locally configured version
and the version actually running in the cluster match by applying
the necessary upgrades.
* Skip image or Kubernetes upgrades if one is already
in progress.
* Skip downgrades and versions equal to the one currently running
(see the sketch after this list).
* Move NodeVersionResourceName constant from operators
to internal as it's needed in the CLI.
* Introduce handleImageUpgrade & handleServiceUpgrade
* Rename cloudUpgrader.Upgrade to UpgradeImage
* Remove helm flag
* Remove hint about development status
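
A minimal sketch of the skip logic, assuming semantic version strings
("v2.6.0"); names like checkImageUpgrade are illustrative, not the
actual CLI code:

```go
package upgrade

import (
	"errors"
	"fmt"

	"golang.org/x/mod/semver"
)

var errSkipUpgrade = errors.New("skipping upgrade")

// checkImageUpgrade decides whether an image upgrade is necessary.
func checkImageUpgrade(configured, running string, inProgress bool) error {
	if inProgress {
		return fmt.Errorf("%w: an image upgrade is already in progress", errSkipUpgrade)
	}
	switch semver.Compare(configured, running) {
	case 0:
		return fmt.Errorf("%w: cluster already runs image %s", errSkipUpgrade, running)
	case -1:
		return fmt.Errorf("%w: downgrade from %s to %s is not supported", errSkipUpgrade, running, configured)
	}
	return nil // configured > running: proceed with the upgrade
}
```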
* Generate kubeconfig with unique name
* Move create name flag to config
* Add name validation to config
* Move name flag in e2e tests to config generation
* Remove name flag from create
* Update asciinema flow
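
A minimal sketch of the name validation added to the config; the
length limit and character set here are assumptions, the real rules
live in the config's validation logic:

```go
package config

import (
	"fmt"
	"regexp"
)

var nameRegex = regexp.MustCompile(`^[a-z0-9]+$`) // assumed charset

func validateName(name string) error {
	const maxLength = 10 // assumed limit
	if len(name) > maxLength {
		return fmt.Errorf("name %q is longer than %d characters", name, maxLength)
	}
	if !nameRegex.MatchString(name) {
		return fmt.Errorf("name %q may only contain lowercase letters and digits", name)
	}
	return nil
}
```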
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
* AB#2787 add debug logging to iam create command (see the sketch
after this list)
* AB#2787 add test logger
* AB#2787 reword log
* Separate debug output with an empty line
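
A rough sketch of how the debug logging could be wired up, assuming
cobra and zap; flag and helper names are hypothetical:

```go
package cmd

import (
	"github.com/spf13/cobra"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// addDebugFlag registers a hypothetical --debug flag on the command.
func addDebugFlag(cmd *cobra.Command) {
	cmd.Flags().Bool("debug", false, "enable debug logging")
}

// newLogger returns a zap logger at debug or info level, depending
// on the flag. The actual helper in the repo may differ.
func newLogger(debug bool) (*zap.Logger, error) {
	level := zapcore.InfoLevel
	if debug {
		level = zapcore.DebugLevel
	}
	cfg := zap.NewDevelopmentConfig()
	cfg.Level = zap.NewAtomicLevelAt(level)
	return cfg.Build()
}
```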
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
---------
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
`upgrade check` is used to find available updates for the current cluster.
Optionally, the found upgrades can be persisted to the config
for consumption by the `upgrade execute` cmd.
The old `upgrade execute` in this commit does not work with
the new `upgrade plan`.
The current versions are read from the cluster.
Supported versions are read from the CLI and the versionsapi.
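
The comparison boils down to something like the following sketch;
type and field names are illustrative, not the actual implementation:

```go
package upgrade

import "golang.org/x/mod/semver"

type upgradeCandidates struct {
	current   string   // read from the cluster, e.g. "v2.5.0"
	supported []string // from the CLI's builtin list and the versionsapi
}

// newerVersions filters the supported versions down to actual upgrades.
func (u upgradeCandidates) newerVersions() []string {
	var upgrades []string
	for _, v := range u.supported {
		if semver.Compare(v, u.current) > 0 {
			upgrades = append(upgrades, v)
		}
	}
	return upgrades
}
```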
Adds a new config field MicroserviceVersion that will be used
by `upgrade execute` to update the service versions.
The field is optional until 2.7.
A deprecation warning for the upgrade key is printed during
config validation.
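
For illustration, the new field might sit in the config struct
roughly like this; the tag and surrounding context are assumptions,
not the exact definition:

```go
package config

// Config is a sketch only: the real struct carries many more fields
// and validation tags.
type Config struct {
	// MicroserviceVersion is the version of Constellation's
	// microservices. Consumed by `upgrade execute`; optional until 2.7.
	MicroserviceVersion string `yaml:"microserviceVersion,omitempty"`
}
```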
Kubernetes versions now specify the patch version to make it
explicit for users if an upgrade changes the k8s version.
Version validation checks that the configured versions
are not more than one minor version below the CLI's version.
The validation can be disabled using `--force`.
This is necessary for now during development, as the CLI
does not have a prerelease version, while our images do.
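
A sketch of the drift check, assuming "vMAJOR.MINOR.PATCH" strings
and equal major versions; helper names are illustrative:

```go
package config

import (
	"fmt"
	"strconv"
	"strings"

	"golang.org/x/mod/semver"
)

// minorOf extracts the minor version number from "vX.Y.Z".
func minorOf(v string) (int, error) {
	mm := semver.MajorMinor(v) // e.g. "v2.6"
	if mm == "" {
		return 0, fmt.Errorf("invalid version %q", v)
	}
	return strconv.Atoi(strings.Split(mm, ".")[1])
}

// checkDrift rejects configured versions more than one minor version
// below the CLI's version. Pass force=true to skip the check (--force).
func checkDrift(cliVersion, configured string, force bool) error {
	if force {
		return nil
	}
	cliMinor, err := minorOf(cliVersion)
	if err != nil {
		return err
	}
	cfgMinor, err := minorOf(configured)
	if err != nil {
		return err
	}
	if cliMinor-cfgMinor > 1 {
		return fmt.Errorf("version %s is more than one minor version below CLI version %s", configured, cliVersion)
	}
	return nil
}
```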
So far, the masterSecret was sent to the initial bootstrapper
on init/recovery. With this commit, this information is encoded
in the kmsURI that is sent during init.
For recovery, the communication with the recoveryserver is
changed as well. Before, a streaming gRPC call was used to
exchange the UUID for the measurementSecret and state disk key.
Now a standard gRPC call is made that includes the same kmsURI &
storageURI that are sent during init.
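
A hypothetical sketch of how a master secret could be encoded into
such a URI; the concrete scheme and parameter names in Constellation
may differ:

```go
package kms

import (
	"encoding/base64"
	"net/url"
)

// clusterKMSURI encodes key material into a KMS URI that can be sent
// during init. Scheme, host, and query parameter names are assumptions.
func clusterKMSURI(key, salt []byte) string {
	q := url.Values{}
	q.Set("key", base64.URLEncoding.EncodeToString(key))
	q.Set("salt", base64.URLEncoding.EncodeToString(salt))
	u := url.URL{Scheme: "kms", Host: "cluster-kms", RawQuery: q.Encode()}
	return u.String()
}
```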
In light of extending our eKMS support, it will be helpful
to have a tighter use of the word "KMS".
KMS should refer to the actual component that manages keys.
The keyservice, also called KMS in the Constellation code,
does not manage keys itself. It talks to a KMS backend,
which in turn does the actual key management.
* Update module github.com/talos-systems/talos/pkg/machinery to v1.3.1
* Rename talos-systems/talos to siderolabs/talos
Co-authored-by: Paul Meyer <49727155+katexochen@users.noreply.github.com>
Run with: `constellation upgrade execute --helm`.
This will only upgrade the Helm charts; no config is needed.
Upgrades are implemented via Helm's upgrade action, i.e. they
automatically roll back if something goes wrong. Releases can
still be managed via Helm, even after an upgrade with
Constellation has been performed.
Currently not user-facing, as CRD/CR backups are still in progress.
These backups should be automatically created and saved to the
user's disk as updates may delete CRs. This happens implicitly
through CRD upgrades, which are part of microservice upgrades.
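
A minimal sketch of the rollback mechanism via Helm's Go API
(helm.sh/helm/v3); this illustrates the Atomic behavior, not the
CLI's actual upgrade code:

```go
package helm

import (
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
)

// upgradeRelease runs a Helm upgrade that rolls back on failure.
func upgradeRelease(cfg *action.Configuration, name string, chrt *chart.Chart, vals map[string]interface{}) error {
	up := action.NewUpgrade(cfg)
	up.Atomic = true // automatically roll back if the upgrade fails
	_, err := up.Run(name, chrt, vals)
	return err
}
```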
Switch Azure default region to West US
Update find-image script to work with new API spec
Add version for every os image build
generate measurements: Use new API paths
CLI: config fetch measurements: Use image short versions to fetch measurements
CLI: allow shortnames to specify the image in config (see the sketch below)
Image build pipeline: Change paths to contain "ref" and "stream"
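
A hypothetical parser for image shortnames of the shape
"ref/<ref>/stream/<stream>/<version>", where a bare "<version>"
defaults to the stable ref/stream; the real grammar is defined by
the image API spec:

```go
package image

import (
	"fmt"
	"strings"
)

// parseShortname splits an image shortname into its components.
func parseShortname(s string) (ref, stream, version string, err error) {
	parts := strings.Split(s, "/")
	switch {
	case len(parts) == 1:
		return "-", "stable", parts[0], nil // assumed defaults for release images
	case len(parts) == 5 && parts[0] == "ref" && parts[2] == "stream":
		return parts[1], parts[3], parts[4], nil
	}
	return "", "", "", fmt.Errorf("invalid image shortname %q", s)
}
```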
* Merge enforced and expected measurements
* Update measurement generation to new format
* Write expected measurements hex encoded by default
* Allow hex or base64 encoded expected measurements (see the sketch
after this list)
* Allow hex or base64 encoded clusterID
* Allow upgrades of the warnOnly flag that increase security
* Upload signed measurements in JSON format
* Fetch measurements either from JSON or YAML
* Use yaml.v3 instead of yaml.v2
* Error on invalid enforced selection
* Add placeholder measurements to config
* Update e2e test to new measurement format
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
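
A sketch of how hex- and base64-encoded measurement values can both
be accepted, assuming 32-byte digests; the function name is
illustrative:

```go
package measurements

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// decodeMeasurement accepts a hex- or base64-encoded 32-byte digest.
func decodeMeasurement(s string) ([]byte, error) {
	if b, err := hex.DecodeString(s); err == nil && len(b) == 32 {
		return b, nil
	}
	if b, err := base64.StdEncoding.DecodeString(s); err == nil && len(b) == 32 {
		return b, nil
	}
	return nil, fmt.Errorf("measurement %q is neither 32-byte hex nor base64", s)
}
```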
* refactor measurements to use consistent types and less manual byte handling
* refactor: only rely on a single multierr dependency
* extend config creation with envar support (see the sketch after this list)
* document changes
Signed-off-by: Fabian Kammel <fk@edgeless.systems>
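
A sketch of the envar support with a hypothetical variable name;
which variables are actually honored is defined by the config
creation code:

```go
package config

import "os"

// fromEnvOrDefault returns the value of the environment variable if
// set and non-empty, otherwise the given default.
func fromEnvOrDefault(envVar, def string) string {
	if v, ok := os.LookupEnv(envVar); ok && v != "" {
		return v
	}
	return def
}

// Usage sketch; the variable name is hypothetical:
//   name := fromEnvOrDefault("CONSTELL_NAME", "constell")
```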
* Include EXC0014 and fix issues.
* Include EXC0012 and fix issues (see the example after this list).
Signed-off-by: Fabian Kammel <fk@edgeless.systems>
Co-authored-by: Otto Bittner <cobittner@posteo.net>
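
For reference, a minimal example of the doc comments these checks
require, assuming EXC0012/EXC0014 are the default exclusions for
revive's exported-comment rules:

```go
package example

// With the exclusions removed, golangci-lint flags exported
// identifiers whose doc comments are missing or don't start with
// the identifier's name.

// Validator checks a config for errors.
type Validator struct{}

// Validate returns an error if the config is invalid.
func (v *Validator) Validate() error { return nil }
```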
* Add .DS_Store to .gitignore
* Add AWS to config / supported instance types
* Move AWS terraform skeleton to cli/internal/terraform
* Move currently unused IAM to hack/terraform/aws
* Print supported AWS instance types when AWS dev flag is set
* Block everything aTLS-related (e.g. init, verify) until AWS attestation is available
* Create/Terminate AWS dev cluster when dev flag is set
* Restrict Nitro instances to those that specifically support NitroTPM (see the sketch after this list)
* Pin zone for subnets
This is not great for HA, but for now we need to avoid the two subnets
ending up in different zones, which would cause the load balancer to be
unable to connect to the targets.
Should be replaced later with a better implementation that dynamically
uses multiple subnets within the same region, based on the number of
nodes or similar.
* Add AWS/GCP to Terraform TestLoader unit test
* Add uid tag and create log group
Co-authored-by: Daniel Weiße <dw@edgeless.systems>
Co-authored-by: Malte Poll <mp@edgeless.systems>
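
A sketch of the NitroTPM restriction mentioned above; the allow-list
here is an assumption, the actual list of supported instance types
ships with the CLI:

```go
package aws

import "strings"

// isSupportedAWSInstance reports whether an instance type belongs to
// a family with NitroTPM support. The families listed are assumed
// for illustration only.
func isSupportedAWSInstance(instanceType string) bool {
	supportedFamilies := []string{"m6a", "c6a", "r6a"} // assumed allow-list
	family := strings.SplitN(instanceType, ".", 2)[0]
	for _, f := range supportedFamilies {
		if f == family {
			return true
		}
	}
	return false
}
```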