* variant: move into internal/attestation
* attestation: move aws attestation into subfolder nitrotpm
* config: add aws-sev-snp variant
* cli: add tf option to enable AWS SNP
For now the implementations in aws/nitrotpm and aws/snp
are identical: both contain the aws/nitrotpm implementation.
A separate commit will add the actual attestation logic.
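As a rough illustration of the variant split, here is a minimal sketch of how the two AWS variants could be modeled as distinct types behind a shared interface. The interface and type names are assumptions for illustration, not the package's actual API:

```go
package variant

// Variant identifies an attestation implementation.
// Names and methods here are illustrative assumptions.
type Variant interface {
	String() string
}

// AWSNitroTPM selects the NitroTPM-based attestation implementation.
type AWSNitroTPM struct{}

func (AWSNitroTPM) String() string { return "aws-nitro-tpm" }

// AWSSEVSNP selects the SEV-SNP-based attestation implementation.
// For now it is backed by the same logic as AWSNitroTPM.
type AWSSEVSNP struct{}

func (AWSSEVSNP) String() string { return "aws-sev-snp" }
```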
* rename to attestationconfigapi + put client and fetcher inside pkg
* rename api/version to versionsapi and put fetcher + client inside pkg
* rename AttestationConfigAPIFetcher to Fetcher
* api: rename AttestationVersionRepo to Client
* api: move client into separate subpkg for
clearer import paths.
* api: rename configapi -> attestationconfig
* api: rename versionsapi -> versions
* api: rename sut to client
* api: split versionsapi client and make it public
* api: split versionapi fetcher and make it public
* config: move attestationversion type to config
* api: fix attestationconfig client test
Co-authored-by: Adrian Stobbe <stobbe.adrian@gmail.com>
* add SignContent() + integrate into configAPI
* use static client for upload versions tool; fix staticupload calleeReference bug
* use version to get proper cosign pub key.
* mock fetcher in CLI tests
* only provide config.New constructor with fetcher
Co-authored-by: Otto Bittner <cobittner@posteo.net>
Co-authored-by: Daniel Weiße <66256922+daniel-weisse@users.noreply.github.com>
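For context on SignContent(), a hedged sketch of what signing API content before upload could look like, using plain ECDSA from the standard library as a stand-in. The function name and key handling are assumptions; the real tooling signs with a cosign key selected by version, as noted above:

```go
package signing

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
)

// SignContent signs arbitrary content with the given ECDSA private key.
// Illustrative stand-in only; the actual code path uses cosign keys.
func SignContent(key *ecdsa.PrivateKey, content []byte) ([]byte, error) {
	digest := sha256.Sum256(content)
	return ecdsa.SignASN1(rand.Reader, key, digest[:])
}
```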
* Add attestation options to config (see the migration sketch below)
* Add join-config migration path for clusters with old measurement format
* Always create MAA provider for Azure SNP clusters
* Remove confidential VM option from provider in favor of attestation options
* cli: add config migrate command to handle config migration (#1678)
---------
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
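To illustrate the join-config migration path, a hedged sketch of converting the old measurement format (a plain hex digest per PCR) into a structure that also carries a warn-only flag. Field and type names are assumptions for illustration, not the actual config schema:

```go
package measurements

import "encoding/hex"

// Measurement is the new format: an expected digest plus an
// enforcement flag (illustrative names).
type Measurement struct {
	Expected []byte `json:"expected"`
	WarnOnly bool   `json:"warnOnly"`
}

// MigrateLegacy converts the old map of hex-encoded digests into the
// new format, enforcing every migrated measurement by default.
func MigrateLegacy(old map[uint32]string) (map[uint32]Measurement, error) {
	migrated := make(map[uint32]Measurement, len(old))
	for pcr, digest := range old {
		raw, err := hex.DecodeString(digest)
		if err != nil {
			return nil, err
		}
		migrated[pcr] = Measurement{Expected: raw, WarnOnly: false}
	}
	return migrated, nil
}
```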
Previously, the content of the status and upgrade files within the
cloudcmd pkg did not fit cloudcmd's package description.
This patch introduces a separate pkg to fix that.
Applies the updated NodeVersion object with one request
instead of two. This makes sure that the first request does
not accidentally put the cluster into an "upgrade in progress"
status, which would force users to run apply twice.
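As a sketch of the one-request apply, the snippet below writes both the image and the Kubernetes version of a NodeVersion object in a single Update call via the dynamic client. The group/version/resource, field paths, and object name are assumptions for illustration:

```go
package upgrade

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// applyNodeVersion updates image and Kubernetes version in a single
// request so the cluster never observes a half-applied spec.
// GVR, field paths, and the object name are illustrative assumptions.
func applyNodeVersion(ctx context.Context, dyn dynamic.Interface, image, k8sVersion string) error {
	gvr := schema.GroupVersionResource{
		Group:    "update.edgeless.systems",
		Version:  "v1alpha1",
		Resource: "nodeversions",
	}
	obj, err := dyn.Resource(gvr).Get(ctx, "constellation-version", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if err := unstructured.SetNestedField(obj.Object, image, "spec", "image"); err != nil {
		return err
	}
	if err := unstructured.SetNestedField(obj.Object, k8sVersion, "spec", "kubernetesClusterVersion"); err != nil {
		return err
	}
	_, err = dyn.Resource(gvr).Update(ctx, obj, metav1.UpdateOptions{})
	return err
}
```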
* `upgrade apply` will try to make the locally configured and the
actual versions running in the cluster match by applying the
necessary upgrades.
* Skip image or Kubernetes upgrades if one is already
in progress.
* Skip downgrades and versions equal to the one already running
(see the version-comparison sketch after this list).
* Move NodeVersionResourceName constant from operators
to internal as it's needed in the CLI.
* introduce handleImageUpgrade & handleServiceUpgrade
* rename cloudUpgrader.Upgrade to UpgradeImage
* remove helm flag
* remove hint about development status
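The version-comparison sketch referenced above: a minimal, hedged example of the skip logic for downgrades, already-running versions, and in-progress upgrades, using golang.org/x/mod/semver. The function name and parameters are assumptions, not the actual CLI code:

```go
package upgrade

import "golang.org/x/mod/semver"

// shouldUpgrade reports whether an upgrade from the running version to
// the configured version should be attempted. Downgrades and versions
// equal to the running one are skipped, as are upgrades while another
// one is still in progress.
func shouldUpgrade(running, configured string, upgradeInProgress bool) bool {
	if upgradeInProgress {
		return false
	}
	// semver.Compare expects versions with a leading "v", e.g. "v1.2.3".
	return semver.Compare(configured, running) > 0
}
```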