Key Management Service backend implementation
This library provides an interface for the key management services used by Constellation. It's intended for securely managing data encryption keys (DEKs) and other symmetric secrets.
kms
A Key Management Service (KMS) is where we store our key encryption key (KEK).
We differentiate between two cases:

- cluster KMS (cKMS): The Constellation cluster itself holds the master secret (KEK) and manages key derivation. The KEK is generated by an admin on `constellation init`. Once sent to the cluster, the KEK never leaves the confidential computing context. As keys are only derived on demand, no DEK is ever persisted by the cKMS.
- external KMS (eKMS): An external KMS solution is used to hold and manage the KEK. DEKs are encrypted and persisted to cloud storage solutions. An admin is required to set up and configure the KMS before use.
KMS Credentials
This section covers how credentials are used by the KMS plugins.
AWS KMS
The client requires the region the KMS is located in, an access key ID, and an access key secret. Read the access key documentation for more details.
The IAM role requires the following permissions on the key:
- `kms:DescribeKey`
- `kms:Encrypt`
- `kms:Decrypt`
Azure Key Vault / Azure managed HSM
Authorization for Azure Key Vault happens through the use of managed identities. The managed identity used by the client needs the following permissions on the KEK:
- `keys/get`
- `keys/wrapKey`
- `keys/unwrapKey`
The client is set up using the tenant ID, client ID, and client secret. Additionally, the vault type is chosen to configure whether the Key Vault is a managed HSM.
Google KMS
Credentials for Google's Cloud KMS are provided to your application in the same way as described for Google Cloud Storage below.
Note that the service account used for authentication requires the following permissions:
- `cloudkms.cryptoKeyVersions.get`
- `cloudkms.cryptoKeyVersions.useToDecrypt`
- `cloudkms.cryptoKeyVersions.useToEncrypt`
storage
Storage is where the CSI Plugin stores the encrypted DEKs.
Supported are:
- In-memory (used for testing only)
- AWS S3
- GCP GCS
- Azure Blob
Storage Credentials
Each Plugin requires credentials to authenticate itself to a CSP.
AWS S3 Bucket
To use the AWS S3 Bucket plugin, you need to have an existing AWS account.
For authentication, you have to pass a config file to the plugin. The AWS config package lets you automatically fetch the data from the local AWS directory.
Passing credentials automatically
You need to store your credentials in your local AWS directory at `$HOME/.aws/`. The AWS config package uses the values from this directory to build a config, which is used to authenticate the client. The local AWS directory must contain two files:
- `credentials`:

  ```
  [default]
  aws_access_key_id = MyAccessKeyId
  aws_secret_access_key = MySecretAccessKey
  ```

- `config`:

  ```
  [default]
  region = MyRegion
  output = json
  ```
If you have the AWS CLI installed, you can initialize the files with the following command:

```bash
aws configure
```
To create the client:
```go
cfg, err := config.LoadDefaultConfig(context.TODO())
store, err := storage.NewAWSS3Storage(context.TODO(), "bucketName", cfg, func(*s3.Options) {})
```
Azure Blob Storage
To use the Azure Blob storage plugin, you need to first create a storage account or give your application access to an existing storage account.
The plugin uses a connection string created for the storage account to authenticate itself to the Azure API. The connection string can be found in your storage account in the Azure Portal under the "Access Keys" section or with the following Azure CLI command:
```bash
az storage account show-connection-string -g MyResourceGroup -n MyStorageAccount
```
The client uses the specified Blob container if it already exists, and creates it otherwise.
To create the client:
```go
connectionString := "DefaultEndpointsProtocol=https;AccountName=<myAccountName>;AccountKey=<myAccountKey>;EndpointSuffix=core.windows.net"
store, err := storage.NewAzureStorage(context.TODO(), connectionString, "myContainer", nil)
```
Google Cloud Storage
To use the Google Cloud Storage plugin, the Cloud Storage API needs to be enabled in your Google Cloud Account. You can use an existing bucket, create a new bucket yourself, or let the plugin create the bucket on initialization.
When using the Google Cloud APIs, your application will typically authenticate as a service account. You have two options for passing service account credentials to the Storage plugin: (1) Fetching them automatically from the environment or (2) passing them manually in your Go code.
Note that the service account requires the following permissions:
- `storage.buckets.create`
- `storage.buckets.get`
- `storage.objects.create`
- `storage.objects.get`
- `storage.objects.update`
Finding credentials automatically
If your application is running inside a Google Cloud environment, and you have attached a service account to that environment, the Storage Plugin can retrieve credentials for the service account automatically.
If your application is running in an environment with no service account attached, you can manually attach a service account key to that environment.
After you have created a service account and stored its access key to a file, you need to set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the location of that file.
The Storage Plugin will then be able to automatically load the credentials from there:
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-file.json"
```
To create the client:
```go
store, err := storage.NewGoogleCloudStorage(context.TODO(), "myProject", "myBucket", nil)
```
Passing credentials manually
You may also explicitly use your service account file in code. First, create a service account and key, the same way as described under finding credentials automatically. You can then specify the location of the file in your application code.
To create the client:
```go
credentialFile := "/path/to/service-account-file.json"
opts := option.WithCredentialsFile(credentialFile)
store, err := storage.NewGoogleCloudStorage(context.TODO(), "myProject", "myBucket", nil, opts)
```