Create mkosi image build pipeline

Malte Poll 2022-10-19 13:10:15 +02:00 committed by Malte Poll
parent e5aaf0a42f
commit 34367ea3cc
107 changed files with 2733 additions and 105 deletions

View file

@ -1,113 +1,193 @@
# Constellation images
We use the [Fedora CoreOS Assembler](https://coreos.github.io/coreos-assembler/) to build the base image for Constellation nodes.
## Setup
1. Install prerequisites:
- [Docker](https://docs.docker.com/engine/install/) or [Podman](https://podman.io/getting-started/installation)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux)
- [azcopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10)
- [Google Cloud CLI](https://cloud.google.com/sdk/docs/install)
- [gsutil](https://cloud.google.com/storage/docs/gsutil_install#linux)
- Ubuntu:
- Install mkosi (from git):
```shell-session
sudo apt install -y bash coreutils cryptsetup-bin grep libguestfs-tools make parted pv qemu-system qemu-utils sed tar util-linux wget
```
2. Log in to GCP and Azure
```shell-session
gcloud auth login
az login
```
3. [Log in to the ghcr.io package registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry)
4. Ensure read and write access to `/dev/kvm` (and repeat after every reboot)
```shell-session
sudo chmod 666 /dev/kvm
```

```sh
cd /tmp/
git clone https://github.com/systemd/mkosi
cd mkosi
tools/generate-zipapp.sh
cp builddir/mkosi /usr/local/bin/
```
## Configuration
- Install tools:
Create a configuration file in `image/config.mk` to override any of the variables found at the top of the [Makefile](Makefile).
Important settings are:
<details>
<summary>Ubuntu / Debian</summary>
- `BOOTSTRAPPER_BINARY`: path to a bootstrapper binary. Can be substituted with a path to a `debugd` binary if a debug image should be built. The binary has to be built before!
- `CONTAINER_ENGINE`: container engine used to run COSA. either `podman` or `docker`.
- `COSA_INIT_REPO`: Git repository containing CoreOS config. Cloned in `cosa-init` target.
- `COSA_INIT_BRANCH`: Git branch checked out from `COSA_INIT_REPO`. Can be used to test out changes on another branch before merging.
- `NETRC` path to a netrc file containing a GitHub PAT. Used to authenticate to GitHub from within the COSA container.
- `GCP_IMAGE_NAME`: Image name for the GCP image. Set to include a timestamp when using the build pipeline. Can be set to a custom value if you want to upload a custom image for testing on GCP.
- `AZURE_IMAGE_NAME`: Image name for the Azure image. Can be set to a custom value if you want to upload a custom image for testing on Azure.
```sh
sudo apt-get update
sudo apt-get install --assume-yes --no-install-recommends \
dnf \
systemd-container \
qemu-system-x86 \
qemu-utils \
ovmf \
e2fsprogs \
squashfs-tools \
efitools \
sbsigntool \
coreutils \
curl \
jq \
util-linux \
virt-manager
```
Example `config.mk` to create a debug image with docker and name it `my-custom-image`:
</details>
```Makefile
BOOTSTRAPPER_BINARY = ../build/debugd
CONTAINER_ENGINE = docker
GCP_IMAGE_NAME = my-custom-image
AZURE_IMAGE_NAME = my-custom-image
```

<details>
<summary>Fedora</summary>
```sh
sudo dnf install -y \
edk2-ovmf \
systemd-container \
qemu \
e2fsprogs \
squashfs-tools \
efitools \
sbsigntools \
coreutils \
curl \
jq \
util-linux \
virt-manager
```
</details>
- Prepare secure boot PKI (see `secure-boot/genkeys.sh`)
## Build
When building your first image, prepare the secure boot PKI (see `secure-boot/genkeys.sh`) for self-signed, locally built images.
After that, you can build the image with:
```sh
# OPTIONAL: to create a debug image, export the following line
# export BOOTSTRAPPER_BINARY=$(realpath ${PWD}/../../build/debugd)
# OPTIONAL: symlink custom path to secure boot PKI to ./pki
# ln -s /path/to/pki/folder ./pki
sudo make -j $(nproc)
```
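For a first local build, generating the self-signed development PKI could look roughly like this (a sketch: `PKI` and `PKI_SET` are the variables read by `secure-boot/genkeys.sh`; the exact paths are assumptions):

```sh
# Sketch: create a self-signed dev PKI before the first image build (assumed paths)
export PKI=${PWD}/pki
export PKI_SET=dev
./secure-boot/genkeys.sh
```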
## Build an image
Raw images will be placed in `mkosi.output.<CSP>/fedora~36/image.raw`.
Ensure you have the modified cosa container image installed:
## Prepare Secure Boot
```shell-session
docker image ls | grep localhost/coreos-assembler
```
The generated images are signed partly by Microsoft (the [shim loader](https://github.com/rhboot/shim)) and partly by Edgeless Systems (systemd-boot and the unified kernel images, which consist of the Linux kernel, initramfs, and kernel command line).
For QEMU and Azure, you can pre-generate the NVRAM variables for secure boot. This is not necessary for GCP, as you can specify secure boot parameters via the GCP API on image creation.
<details>
<summary>libvirt / QEMU / KVM</summary>
```sh
secure-boot/generate_nvram_vars.sh mkosi.output.qemu/fedora~36/image.raw
```
or
</details>
```shell-session
podman image ls | grep localhost/coreos-assembler
```
<details>
<summary><a id="azure-secure-boot">Azure</a></summary>
These steps only have to be performed once for a fresh set of secure boot certificates.
VMGS blobs for testing and release images already exist.
First, create a disk without embedded MOK EFI variables.
```sh
# set these variables
export AZURE_SECURITY_TYPE=ConfidentialVM # or TrustedLaunch
export AZURE_RESOURCE_GROUP_NAME= # e.g. "constellation-images"
export AZURE_REGION=northeurope
export AZURE_DISK_NAME=constellation-$(date +%s)
export AZURE_SNAPSHOT_NAME=${AZURE_DISK_NAME}
export AZURE_RAW_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.raw
export AZURE_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.vhd
export AZURE_VMGS_FILENAME=${AZURE_SECURITY_TYPE}.vmgs
export BLOBS_DIR=${PWD}/blobs
upload/pack.sh azure "${AZURE_RAW_IMAGE_PATH}" "${AZURE_IMAGE_PATH}"
upload/upload_azure.sh --disk-name "${AZURE_DISK_NAME}-setup-secure-boot" ""
secure-boot/azure/launch.sh -n "${AZURE_DISK_NAME}-setup-secure-boot" -d --secure-boot true --disk-name "${AZURE_DISK_NAME}-setup-secure-boot"
```
If not present, install with
Ignore the running launch script and connect to the serial console once available.
The console shows the message "Verification failed: (0x1A) Security Violation". You can import the MOK certificate via the UEFI shell:
```shell-session
make cosa-image
```
Press OK, then ENTER, then "Enroll key from disk".
Select the following key: `/EFI/loader/keys/auto/db.cer`.
Press Continue, then choose "Yes" to the question "Enroll the key(s)?".
Choose reboot.
Extract the VMGS from the running VM (this includes the MOK EFI variables) and delete the VM:
```sh
secure-boot/azure/extract_vmgs.sh --name "${AZURE_DISK_NAME}-setup-secure-boot"
secure-boot/azure/delete.sh --name "${AZURE_DISK_NAME}-setup-secure-boot"
```
> It is always advisable to create an image from a clean `build` dir.
</details>
Clean up the `build` dir and remove old images (⚠ this will undo any local changes to the CoreOS configuration!):
## Upload to CSP
```shell-session
sudo make clean
```
<details>
<summary>GCP</summary>
- Install `gcloud` and `gsutil` (see [here](https://cloud.google.com/sdk/docs/install))
- Log in to GCP (see [here](https://cloud.google.com/sdk/docs/authorizing))
- Choose secure boot PKI public keys (one of `pki_dev`, `pki_test`, `pki_prod`)
- `pki_dev` can be used for local image builds
- `pki_test` is used by the CI for non-release images
- `pki_prod` is used for release images
```sh
# set these variables
export GCP_IMAGE_FAMILY= # e.g. "constellation"
export GCP_IMAGE_NAME= # e.g. "constellation-v1.0.0"
export PKI=${PWD}/pki
export GCP_PROJECT=constellation-images
export GCP_REGION=europe-west3
export GCP_BUCKET=constellation-images
export GCP_RAW_IMAGE_PATH=${PWD}/mkosi.output.gcp/fedora~36/image.raw
export GCP_IMAGE_FILENAME=$(date +%s).tar.gz
export GCP_IMAGE_PATH=${PWD}/mkosi.output.gcp/fedora~36/image.tar.gz
upload/pack.sh gcp ${GCP_RAW_IMAGE_PATH} ${GCP_IMAGE_PATH}
upload/upload_gcp.sh
```
- Build QEMU image (for local testing only)
</details>
```shell-session
make coreos
```
<details>
<summary>Azure</summary>
- Build Azure image (without upload)
- Install `az` and `azcopy` (see [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli))
- Log in to Azure (see [here](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli))
- [Prepare virtual machine guest state (VMGS) with customized NVRAM or use existing VMGS blob](#azure-secure-boot)
```shell-session
make image-azure
```
```sh
# set these variables
export AZURE_GALLERY_NAME= # e.g. "Constellation"
export AZURE_IMAGE_DEFINITION= # e.g. "constellation"
export AZURE_IMAGE_VERSION= # e.g. "1.0.0"
export AZURE_VMGS_PATH= # e.g. "path/to/ConfidentialVM.vmgs"
export AZURE_SECURITY_TYPE=ConfidentialVM # or TrustedLaunch
export AZURE_RESOURCE_GROUP_NAME=constellation-images
export AZURE_REGION=northeurope
export AZURE_REPLICATION_REGIONS="northeurope eastus westeurope westus"
export AZURE_IMAGE_OFFER=constellation
export AZURE_SKU=constellation
export AZURE_PUBLISHER=edgelesssys
export AZURE_DISK_NAME=constellation-$(date +%s)
export AZURE_RAW_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.raw
export AZURE_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.vhd
upload/pack.sh azure "${AZURE_RAW_IMAGE_PATH}" "${AZURE_IMAGE_PATH}"
upload/upload_azure.sh -g --disk-name "${AZURE_DISK_NAME}" "${AZURE_VMGS_PATH}"
```

- Build Azure image (with upload)
```shell-session
make image-azure upload-azure
```
- Build GCP image (without upload)
```shell-session
make image-gcp
```
- Build GCP image (with upload)
```shell-session
make image-gcp upload-gcp
```
Resulting images for the CSPs can be found under [images](images/). QEMU images are stored at `build/builds/latest/` with a name ending in `.qcow2`.
</details>

0
image/mkosi/.gitattributes vendored Normal file
View file

5
image/mkosi/.gitignore vendored Normal file
View file

@ -0,0 +1,5 @@
mkosi.cache
mkosi.extra
pki
image.*
mkosi.output.*

54
image/mkosi/Makefile Normal file
View file

@ -0,0 +1,54 @@
SHELL = /bin/bash
SRC_PATH = $(CURDIR)
BASE_PATH ?= $(SRC_PATH)
BOOTSTRAPPER_BINARY ?= $(BASE_PATH)/../../build/bootstrapper
DISK_MAPPER_BINARY ?= $(BASE_PATH)/../../build/disk-mapper
PKI ?= $(BASE_PATH)/pki
MKOSI_EXTRA ?= $(BASE_PATH)/mkosi.extra
-include $(CURDIR)/config.mk
csps := qemu gcp azure
certs := $(PKI)/PK.cer $(PKI)/KEK.cer $(PKI)/db.cer
.PHONY: all clean inject-bins $(csps)
all: $(csps)
$(csps): %: mkosi.output.%/fedora~36/image.raw
mkosi.output.%/fedora~36/image.raw: mkosi.files/mkosi.%.conf inject-bins inject-certs
mkosi --config mkosi.files/mkosi.$*.conf build
secure-boot/signed-shim.sh $@
@if [ -n "$(SUDO_UID)" ] && [ -n "$(SUDO_GID)" ]; then \
chown -R $(SUDO_UID):$(SUDO_GID) mkosi.output.$*; \
fi
@echo "Image is ready: $@"
inject-bins:
mkdir -p $(MKOSI_EXTRA)/usr/bin
mkdir -p $(MKOSI_EXTRA)/usr/sbin
cp $(BOOTSTRAPPER_BINARY) $(MKOSI_EXTRA)/usr/bin/bootstrapper
cp $(DISK_MAPPER_BINARY) $(MKOSI_EXTRA)/usr/sbin/disk-mapper
inject-certs: $(certs)
# for auto enrollment using systemd-boot (not working yet)
mkdir -p "$(MKOSI_EXTRA)/boot/loader/keys/auto"
cp $(PKI)/{PK,KEK,db}.cer "$(MKOSI_EXTRA)/boot/loader/keys/auto"
cp $(PKI)/{MicWinProPCA2011_2011-10-19,MicCorUEFCA2011_2011-06-27,MicCorKEKCA2011_2011-06-24}.crt "$(MKOSI_EXTRA)/boot/loader/keys/auto"
cp $(PKI)/{PK,KEK,db}.esl "$(MKOSI_EXTRA)/boot/loader/keys/auto"
cp $(PKI)/{PK,KEK,db}.auth "$(MKOSI_EXTRA)/boot/loader/keys/auto"
# for manual enrollment using sbkeysync
mkdir -p $(MKOSI_EXTRA)/etc/secureboot/keys/{db,dbx,KEK,PK}
cp $(PKI)/db.auth "$(MKOSI_EXTRA)/etc/secureboot/keys/db/"
cp $(PKI)/KEK.auth "$(MKOSI_EXTRA)/etc/secureboot/keys/KEK/"
cp $(PKI)/PK.auth "$(MKOSI_EXTRA)/etc/secureboot/keys/PK/"
clean-cache:
rm -rf mkosi.cache/*
clean-%:
mkosi --config mkosi.files/mkosi.$*.conf clean
clean:
rm -rf mkosi.output.*
rm -rf $(MKOSI_EXTRA)
mkdir -p $(MKOSI_EXTRA)

187
image/mkosi/README.md Normal file
View file

@ -0,0 +1,187 @@
## Setup
- Install mkosi (from git):
```sh
cd /tmp/
git clone https://github.com/systemd/mkosi
cd mkosi
tools/generate-zipapp.sh
cp builddir/mkosi /usr/local/bin/
```
- Install tools:
<details>
<summary>Ubuntu / Debian</summary>
```sh
sudo apt-get update
sudo apt-get install --assume-yes --no-install-recommends \
dnf \
systemd-container \
qemu-system-x86 \
qemu-utils \
ovmf \
e2fsprogs \
squashfs-tools \
efitools \
sbsigntool \
coreutils \
curl \
jq \
util-linux \
virt-manager
```
</details>
<details>
<summary>Fedora</summary>
```sh
sudo dnf install -y \
edk2-ovmf \
systemd-container \
qemu \
e2fsprogs \
squashfs-tools \
efitools \
sbsigntools \
coreutils \
curl \
jq \
util-linux \
virt-manager
```
</details>
- Prepare secure boot PKI (see `secure-boot/genkeys.sh`)
## Build
```sh
# OPTIONAL: to create a debug image, export the following line
# export BOOTSTRAPPER_BINARY=$(realpath ${PWD}/../../build/debugd)
# OPTIONAL: specify path to secure boot PKI
# export PKI=/path/to/pki/folder
sudo make -j $(nproc)
```
Raw images will be placed in `mkosi.output.<CSP>/fedora~36/image.raw`.
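You can also build a single target; the Makefile defines one per CSP (`qemu`, `gcp`, `azure`), for example:

```sh
sudo make gcp
```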
## Prepare Secure Boot
The generated images are signed partly by Microsoft (the [shim loader](https://github.com/rhboot/shim)) and partly by Edgeless Systems (systemd-boot and the unified kernel images, which consist of the Linux kernel, initramfs, and kernel command line).
For QEMU and Azure, you can pre-generate the NVRAM variables for secure boot. This is not necessary for GCP, as you can specify secure boot parameters via the GCP API on image creation.
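For GCP, the certificates are instead supplied when the image is created; a hedged sketch of what `upload/upload_gcp.sh` roughly does (the flags and paths below are assumptions, not the script's exact invocation):

```sh
# Sketch (assumed flags/paths): create a GCP image with custom secure boot keys
gcloud compute images create "${GCP_IMAGE_NAME}" \
  --project="${GCP_PROJECT}" \
  --source-uri="gs://${GCP_BUCKET}/${GCP_IMAGE_FILENAME}" \
  --guest-os-features=UEFI_COMPATIBLE \
  --platform-key-file="${PKI}/PK.cer" \
  --key-exchange-key-file="${PKI}/KEK.cer" \
  --signature-database-file="${PKI}/db.cer"
```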
<details>
<summary>libvirt / QEMU / KVM</summary>
```sh
secure-boot/generate_nvram_vars.sh mkosi.output.qemu/fedora~36/image.raw
```
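Afterwards, a quick local boot test might look like this (a sketch; OVMF paths and QEMU options are assumptions, and `image.nvram.template` is produced by the script above):

```sh
# Sketch (assumed paths): boot the QEMU image with secure boot firmware and the
# NVRAM variables generated above
cp image.nvram.template image.nvram
qemu-system-x86_64 \
  -enable-kvm -m 2048 -nographic \
  -machine q35,smm=on \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd \
  -drive if=pflash,format=raw,file=image.nvram \
  -drive if=virtio,format=raw,file=mkosi.output.qemu/fedora~36/image.raw
```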
</details>
<details>
<summary><a id="azure-secure-boot">Azure</a></summary>
These steps only have to be performed once for a fresh set of secure boot certificates.
VMGS blobs for testing and release images already exist.
First, create a disk without embedded MOK EFI variables.
```sh
# set these variables
export AZURE_SECURITY_TYPE=ConfidentialVM # or TrustedLaunch
export AZURE_RESOURCE_GROUP_NAME= # e.g. "constellation-images"
export AZURE_REGION=northeurope
export AZURE_DISK_NAME=constellation-$(date +%s)
export AZURE_SNAPSHOT_NAME=${AZURE_DISK_NAME}
export AZURE_RAW_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.raw
export AZURE_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.vhd
export AZURE_VMGS_FILENAME=${AZURE_SECURITY_TYPE}.vmgs
export BLOBS_DIR=${PWD}/blobs
upload/pack.sh azure "${AZURE_RAW_IMAGE_PATH}" "${AZURE_IMAGE_PATH}"
upload/upload_azure.sh --disk-name "${AZURE_DISK_NAME}-setup-secure-boot" ""
secure-boot/azure/launch.sh -n "${AZURE_DISK_NAME}-setup-secure-boot" -d --secure-boot true --disk-name "${AZURE_DISK_NAME}-setup-secure-boot"
```
Ignore the running launch script and connect to the serial console once available.
The console shows the message "Verification failed: (0x1A) Security Violation". You can import the MOK certificate via the UEFI shell:
Press OK, then ENTER, then "Enroll key from disk".
Select the following key: `/EFI/loader/keys/auto/db.cer`.
Press Continue, then choose "Yes" to the question "Enroll the key(s)?".
Choose reboot.
Extract the VMGS from the running VM (this includes the MOK EFI variables) and delete the VM:
```sh
secure-boot/azure/extract_vmgs.sh --name "${AZURE_DISK_NAME}-setup-secure-boot"
secure-boot/azure/delete.sh --name "${AZURE_DISK_NAME}-setup-secure-boot"
```
</details>
## Upload to CSP
<details>
<summary>GCP</summary>
- Install `gcloud` and `gsutil` (see [here](https://cloud.google.com/sdk/docs/install))
- Log in to GCP (see [here](https://cloud.google.com/sdk/docs/authorizing))
- Prepare secure boot PKI (see `secure-boot/genkeys.sh`)
```sh
# set these variables
export GCP_IMAGE_FAMILY= # e.g. "constellation"
export GCP_IMAGE_NAME= # e.g. "constellation-v1.0.0"
export PKI=${PWD}/pki
export GCP_PROJECT=constellation-images
export GCP_REGION=europe-west3
export GCP_BUCKET=constellation-images
export GCP_RAW_IMAGE_PATH=${PWD}/mkosi.output.gcp/fedora~36/image.raw
export GCP_IMAGE_FILENAME=$(date +%s).tar.gz
export GCP_IMAGE_PATH=${PWD}/mkosi.output.gcp/fedora~36/image.tar.gz
upload/pack.sh gcp ${GCP_RAW_IMAGE_PATH} ${GCP_IMAGE_PATH}
upload/upload_gcp.sh
```
</details>
<details>
<summary>Azure</summary>
- Install `az` and `azcopy` (see [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli))
- Log in to Azure (see [here](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli))
- Prepare secure boot PKI (see `secure-boot/genkeys.sh`)
- [Prepare virtual machine guest state (VMGS) with customized NVRAM or use existing VMGS blob](#azure-secure-boot)
```sh
# set these variables
export AZURE_GALLERY_NAME= # e.g. "Constellation"
export AZURE_IMAGE_DEFINITION= # e.g. "constellation"
export AZURE_IMAGE_VERSION= # e.g. "1.0.0"
export AZURE_VMGS_PATH= # e.g. "path/to/ConfidentialVM.vmgs"
export AZURE_SECURITY_TYPE=ConfidentialVM # or TrustedLaunch
export AZURE_RESOURCE_GROUP_NAME=constellation-images
export AZURE_REGION=northeurope
export AZURE_REPLICATION_REGIONS="northeurope eastus westeurope westus"
export AZURE_IMAGE_OFFER=constellation
export AZURE_SKU=constellation
export AZURE_PUBLISHER=edgelesssys
export AZURE_DISK_NAME=constellation-$(date +%s)
export AZURE_RAW_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.raw
export AZURE_IMAGE_PATH=${PWD}/mkosi.output.azure/fedora~36/image.vhd
upload/pack.sh azure "${AZURE_RAW_IMAGE_PATH}" "${AZURE_IMAGE_PATH}"
upload/upload_azure.sh -g --disk-name "${AZURE_DISK_NAME}" "${AZURE_VMGS_PATH}"
```
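To smoke-test the uploaded gallery image, you can launch a temporary VM with the helper script (flag names taken from `secure-boot/azure/launch.sh`; the VM name here is an assumption):

```sh
# Sketch: boot a throwaway VM from the gallery image definition exported above
secure-boot/azure/launch.sh -n constellation-image-test -g --secure-boot true
```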
</details>

View file

View file

@ -0,0 +1,3 @@
[Content]
Packages=
WALinuxAgent-udev

View file

@ -0,0 +1,9 @@
[Content]
Packages=
containerd,
containernetworking-plugins,
iptables-nft,
ethtool,
socat,
iproute-tc,
conntrack-tools

View file

@ -0,0 +1,3 @@
[Content]
Packages=
nvme-cli

View file

@ -0,0 +1,22 @@
[Distribution]
Distribution=fedora
Release=36
[Output]
Format=gpt_squashfs
ManifestFormat=json,changelog
Bootable=yes
KernelCommandLine=mitigations=auto,nosmt preempt=full
WithUnifiedKernelImages=yes
Verity=yes
CompressFs=zstd
SplitArtifacts=yes
# Enable Secure Boot with own PKI
SecureBoot=yes
SecureBootKey=pki/db.key
SecureBootCertificate=pki/db.crt
# TODO: Wait for systemd 252 to bring systemd-measure
# Measure=yes
[Host]
QemuHeadless=yes

View file

@ -0,0 +1,8 @@
[Content]
Packages=
iproute,
dbus-broker,
systemd-networkd,
systemd-resolved,
dracut-network,
dhclient, # prevent NetworkManager from being pulled in by dracut-network

View file

@ -0,0 +1,8 @@
[Content]
# Secure Boot / EFI related packages for manual enrollment / verification of Secure Boot
Packages=
e2fsprogs,
sbsigntools,
efitools,
mokutil,
tpm2-tools

View file

@ -0,0 +1,8 @@
[Content]
Packages=
passwd,
nano,
nano-default-editor,
vim,
curl,
wget

View file

@ -0,0 +1,3 @@
[Output]
KernelCommandLine=constel.csp=azure
OutputDirectory=mkosi.output.azure

View file

@ -0,0 +1,3 @@
[Output]
KernelCommandLine=constel.csp=gcp
OutputDirectory=mkosi.output.gcp

View file

@ -0,0 +1,3 @@
[Output]
KernelCommandLine=constel.csp=qemu
OutputDirectory=mkosi.output.qemu

12
image/mkosi/mkosi.finalize Executable file
View file

@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -euxo pipefail
# recreate kubelet systemd unit after reboot.
# tmpfile config has to be written late as it interferes with the systemd-nspawn build environment
cat >"${BUILDROOT}/usr/lib/tmpfiles.d/kubelet-service.conf" <<EOF
C /run/systemd/system/kubelet.service - - - - /run/state/systemd/system/kubelet.service
C /run/systemd/system/kubelet.service.d/10-kubeadm.conf - - - - /run/state/systemd/system/kubelet.service.d/10-kubeadm.conf
EOF
# cleanup dracut generation files (disk-mapper) to save space
rm -rf "${BUILDROOT}/usr/lib/dracut/modules.d/39constellation-mount/"

22
image/mkosi/mkosi.postinst Executable file
View file

@ -0,0 +1,22 @@
#!/bin/sh
set -euxo pipefail
# This will work in sd-boot 251 to auto-enroll secure boot keys.
# https://www.freedesktop.org/software/systemd/man/systemd-boot.html
# > CHANGES WITH 252 in spe:
# > [...]
# > * sd-boot can automatically enroll SecureBoot keys from files found on
# > the ESP. This enrollment can be either automatic ('force' mode) or
# > controlled by the user ('manual' mode).
# > [...]
#
# echo "secure-boot-enroll force" >> /boot/loader/loader.conf
# create mountpoints in /etc
mkdir -p /etc/{cni,kubernetes}
# move issue files away from /etc
# to allow /run/issue and /run/issue.d to take precedence
mv /etc/issue.d /usr/lib/issue.d || true
rm -f /etc/issue
rm -f /etc/issue.net

View file

@ -0,0 +1,4 @@
# enable networking in initrd (initramfs) with dracut and systemd-networkd
install_items+=" /usr/lib/systemd/network/20-wired.network "
install_items+=" /usr/lib/systemd/network/21-azure.network "
add_dracutmodules+=" systemd-networkd "

View file

@ -0,0 +1,3 @@
# add hyperv drivers to initramfs
# (important for early networking)
force_drivers+=" hv_netvsc hv_sock hv_storvsc hv_vmbus "

View file

@ -0,0 +1,2 @@
# Include NVMe driver in initrd to boot on NVMe devices.
force_drivers+=" nvme "

View file

@ -0,0 +1,5 @@
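# /dev/mapper/state is the encrypted state disk opened by the disk-mapper in the initrd;
# writable paths are bind-mounted from it onto the read-only root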
/dev/mapper/state /run/state ext4 defaults,x-systemd.makefs,x-mount.mkdir 0 0
/run/state/var /var none defaults,bind,x-mount.mkdir 0 0
/run/state/kubernetes /etc/kubernetes none defaults,bind,x-mount.mkdir 0 0
/run/state/etccni /etc/cni/ none defaults,bind,x-mount.mkdir 0 0
/run/state/opt /opt none defaults,bind,x-mount.mkdir 0 0

View file

@ -0,0 +1,4 @@
#!/usr/bin/env bash
export PATH=/run/state/bin:${PATH}
export KUBECONFIG=/etc/kubernetes/admin.conf
alias k=kubectl

View file

@ -0,0 +1,216 @@
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "k8s.gcr.io/pause:3.5"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0

View file

@ -0,0 +1 @@
../../../systemd/system/configure-constel-csp.service

View file

@ -0,0 +1,15 @@
[Unit]
Description=Force symlink creation for GCP nvme disks
Before=prepare-state-disk.service
After=network-online.target
Wants=network-online.target
ConditionKernelCommandLine=constel.csp=gcp
[Service]
Type=oneshot
ExecStart=/bin/bash /usr/sbin/google-nvme-disk
RemainAfterExit=yes
StandardOutput=tty
StandardInput=tty
StandardError=tty
TimeoutSec=infinity

View file

@ -0,0 +1,23 @@
#!/bin/bash
set -euo pipefail
shopt -s extglob nullglob
GCP_STATE_DISK_SYMLINK="/dev/disk/by-id/google-state-disk"
# hack: google nvme udev rules are never executed. Create symlinks for the nvme devices manually.
while [ ! -L "${GCP_STATE_DISK_SYMLINK}" ]
do
for nvmedisk in /dev/nvme0n+([0-9])
do
/usr/lib/udev/google_nvme_id -s -d "${nvmedisk}" || true
done
if [ -L "${GCP_STATE_DISK_SYMLINK}" ]; then
break
fi
echo "Waiting for state disk to appear.."
sleep 2
done
echo "Google state disk found"
echo "${GCP_STATE_DISK_SYMLINK} -> $(readlink -f "${GCP_STATE_DISK_SYMLINK}")"
sleep 2

View file

@ -0,0 +1,56 @@
#!/bin/bash
depends() {
echo systemd
}
install_and_enable_unit() {
unit="$1"; shift
target="$1"; shift
inst_simple "$moddir/$unit" "$systemdsystemunitdir/$unit"
mkdir -p "${initdir}${systemdsystemconfdir}/${target}.wants"
ln_r "${systemdsystemunitdir}/${unit}" \
"${systemdsystemconfdir}/${target}.wants/${unit}"
}
install() {
inst_multiple \
bash
inst_script "/usr/sbin/disk-mapper" \
"/usr/sbin/disk-mapper"
inst_script "$moddir/prepare-state-disk.sh" \
"/usr/sbin/prepare-state-disk"
install_and_enable_unit "prepare-state-disk.service" \
"basic.target"
inst_script "$moddir/google-nvme-disk.sh" \
"/usr/sbin/google-nvme-disk"
install_and_enable_unit "google-nvme-disk.service" \
"basic.target"
install_and_enable_unit "configure-constel-csp.service" \
"basic.target"
# azure scsi disks
inst_multiple \
cut \
readlink
# gcp nvme disks
inst_multiple \
date \
xxd \
grep \
sed \
ln \
command \
readlink
inst_script "/usr/sbin/nvme" \
"/usr/sbin/nvme"
inst_script "/usr/lib/udev/google_nvme_id" \
"/usr/lib/udev/google_nvme_id"
inst_simple "/usr/lib/udev/rules.d/64-gce-disk-removal.rules" \
"/usr/lib/udev/rules.d/64-gce-disk-removal.rules"
inst_simple "/usr/lib/udev/rules.d/65-gce-disk-naming.rules" \
"/usr/lib/udev/rules.d/65-gce-disk-naming.rules"
}

View file

@ -0,0 +1,17 @@
[Unit]
Description=Prepare encrypted state disk
Before=initrd-fs.target
After=network-online.target configure-constel-csp.service
Wants=network-online.target
Requires=initrd-root-fs.target
FailureAction=reboot-immediate
[Service]
Type=oneshot
EnvironmentFile=/run/constellation.env
ExecStart=/bin/bash /usr/sbin/prepare-state-disk
RemainAfterExit=yes
StandardOutput=tty
StandardInput=tty
StandardError=tty
TimeoutSec=infinity

View file

@ -0,0 +1,11 @@
#!/bin/bash
set -euo pipefail
# Prepare the encrypted volume by either initializing it with a random key or by acquiring the key from another bootstrapper.
# Store encryption key (random or recovered key) in /run/cryptsetup-keys.d/state.key
disk-mapper -csp "${CONSTEL_CSP}" || rc=$?
if [[ "${rc:-0}" -ne 0 ]]; then
echo "Failed to prepare state disk"
sleep 2 # give the serial console time to print the error message
exit "${rc}" # exit with the same error code as disk-mapper
fi

View file

@ -0,0 +1,2 @@
PATH=/run/state/bin:$PATH
KUBECONFIG=/etc/kubernetes/admin.conf

View file

@ -0,0 +1,2 @@
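# modules required by containerd's overlayfs snapshotter and for bridged pod traffic (br_netfilter)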
overlay
br_netfilter

View file

@ -0,0 +1,3 @@
# See https://github.com/cilium/cilium/issues/10645
net.ipv4.conf.lxc*.rp_filter = 0
net.ipv4.conf.cilium_*.rp_filter = 0

View file

@ -0,0 +1,9 @@
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 524288
# kubernetes hardening (protectKernelDefaults=true)
vm.overcommit_memory = 1
kernel.panic = 10
kernel.panic_on_oops = 1

View file

@ -0,0 +1,5 @@
[Match]
Name=en*
[Network]
DHCP=yes

View file

@ -0,0 +1,6 @@
# Used as a fallback rule for Azure NICs as they are not named with "en*"
[Match]
Driver=hv_netvsc
[Network]
DHCP=yes

View file

@ -0,0 +1,5 @@
enable constellation-bootstrapper.service
enable configure-constel-csp.service
enable containerd.service
enable tpm-pcrs.service
enable systemd-networkd.service

View file

@ -0,0 +1,10 @@
[Unit]
Description=Configures constellation cloud service provider environment variable
[Service]
Type=oneshot
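# Parse "constel.csp=<csp>" from the kernel command line and persist it as CONSTEL_CSP in /run/constellation.env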
ExecStart=/bin/bash -c "CSP=$(< /proc/cmdline tr ' ' '\n' | grep constel.csp | sed 's/constel.csp=//'); echo CONSTEL_CSP=$CSP >> /run/constellation.env"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,15 @@
[Unit]
Description=Constellation Bootstrapper
Wants=network-online.target
After=network-online.target configure-constel-csp.service
[Service]
Type=simple
RemainAfterExit=yes
Restart=on-failure
EnvironmentFile=/run/constellation.env
Environment=PATH=/run/state/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
ExecStart=/usr/bin/bootstrapper
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,3 @@
[Service]
ExecStart=
ExecStart=/usr/bin/containerd --config /usr/etc/containerd/config.toml

View file

@ -0,0 +1,11 @@
[Unit]
Description=Print PCR state on startup
Before=constellation-bootstrapper.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/libexec/constellation-pcrs
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,2 @@
#Type Name ID GECOS Home directory Shell
u etcd 998:997 "etcd user" /var/lib/etcd

View file

@ -0,0 +1,8 @@
#Type Path Mode User Group Age Argument
d /var/lib/etcd 0700 998 997 - -
d /var/log/kubernetes/audit/ 0700 0 0 - -
d /run/state/bin 0755 0 0 - -
d /run/issue.d 0755 0 0 - -
C /run/issue - - - - /usr/lib/issue
# merge all CNI binaries in writable folder until containerd can use multiple CNI bins: https://github.com/containerd/containerd/issues/6600
C /opt/cni/bin - - - - /usr/libexec/cni/

View file

@ -0,0 +1,245 @@
#!/bin/bash
# Copyright 2020 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Used to generate symlinks for PD-NVMe devices using the disk names reported by
# the metadata server
# Locations of the script's dependencies
readonly nvme_cli_bin=/usr/sbin/nvme
# Bash regex to parse device paths and controller identification
readonly NAMESPACE_NUMBER_REGEX="/dev/nvme[[:digit:]]+n([[:digit:]]+).*"
readonly PARTITION_NUMBER_REGEX="/dev/nvme[[:digit:]]+n[[:digit:]]+p([[:digit:]]+)"
readonly PD_NVME_REGEX="sn[[:space:]]+:[[:space:]]+nvme_card-pd"
# Globals used to generate the symlinks for a PD-NVMe disk. These are populated
# by the identify_pd_disk function and exported for consumption by udev rules.
ID_SERIAL=''
ID_SERIAL_SHORT=''
#######################################
# Helper function to log an error message to stderr.
# Globals:
# None
# Arguments:
# String to print as the log message
# Outputs:
# Writes error to STDERR
#######################################
function err() {
echo "[$(date +'%Y-%m-%dT%H:%M:%S%z')]: $*" >&2
}
#######################################
# Retrieves the device name for an NVMe namespace using nvme-cli.
# Globals:
# Uses nvme_cli_bin
# Arguments:
# The path to the nvme namespace (/dev/nvme0n?)
# Outputs:
# The device name parsed from the JSON in the vendor ext of the ns-id command.
# Returns:
# 0 if the device name for the namespace could be retrieved, 1 otherwise
#######################################
function get_namespace_device_name() {
local nvme_json
nvme_json="$("$nvme_cli_bin" id-ns -b "$1" | xxd -p -seek 384 | xxd -p -r)"
if [[ $? -ne 0 ]]; then
return 1
fi
if [[ -z "$nvme_json" ]]; then
err "NVMe Vendor Extension disk information not present"
return 1
fi
local device_name
device_name="$(echo "$nvme_json" | grep device_name | sed -e 's/.*"device_name":[ \t]*"\([a-zA-Z0-9_-]\+\)".*/\1/')"
# Error if our device name is empty
if [[ -z "$device_name" ]]; then
err "Empty name"
return 1
fi
echo "$device_name"
return 0
}
#######################################
# Retrieves the nsid for an NVMe namespace
# Globals:
# None
# Arguments:
# The path to the nvme namespace (/dev/nvme0n*)
# Outputs:
# The namespace number/id
# Returns:
# 0 if the namespace id could be retrieved, 1 otherwise
#######################################
function get_namespace_number() {
local dev_path="$1"
local namespace_number
if [[ "$dev_path" =~ $NAMESPACE_NUMBER_REGEX ]]; then
namespace_number="${BASH_REMATCH[1]}"
else
return 1
fi
echo "$namespace_number"
return 0
}
#######################################
# Retrieves the partition number for a device path if it exists
# Globals:
# None
# Arguments:
# The path to the device partition (/dev/nvme0n*p*)
# Outputs:
# The value after 'p' in the device path, or an empty string if the path has
# no partition.
#######################################
function get_partition_number() {
local dev_path="$1"
local partition_number
if [[ "$dev_path" =~ $PARTITION_NUMBER_REGEX ]]; then
partition_number="${BASH_REMATCH[1]}"
echo "$partition_number"
else
echo ''
fi
return 0
}
#######################################
# Generates a symlink for a PD-NVMe device using the metadata's disk name.
# Primarily used for testing but can be used if the script is directly invoked.
# Globals:
# Uses ID_SERIAL_SHORT (can be populated by identify_pd_disk)
# Arguments:
# The device path for the disk
#######################################
function gen_symlink() {
local dev_path="$1"
local partition_number="$(get_partition_number "$dev_path")"
if [[ -n "$partition_number" ]]; then
ln -s "$dev_path" /dev/disk/by-id/google-"$ID_SERIAL_SHORT"-part"$partition_number" > /dev/null 2>&1
else
ln -s "$dev_path" /dev/disk/by-id/google-"$ID_SERIAL_SHORT" > /dev/null 2>&1
fi
return 0
}
#######################################
# Populates the ID_* global variables with a disk's device name and namespace
# Globals:
# Populates ID_SERIAL_SHORT, and ID_SERIAL
# Arguments:
# The device path for the disk
# Returns:
# 0 on success and 1 if an error occurs
#######################################
function identify_pd_disk() {
local dev_path="$1"
local dev_name
dev_name="$(get_namespace_device_name "$dev_path")"
if [[ $? -ne 0 ]]; then
return 1
fi
ID_SERIAL_SHORT="$dev_name"
ID_SERIAL="Google_PersistentDisk_${ID_SERIAL_SHORT}"
return 0
}
function print_help_message() {
echo "Usage: google_nvme_id [-s] [-h] -d device_path"
echo " -d <device_path> (Required): Specifies the path to generate a name"
echo " for. This needs to be a path to an nvme device or namespace"
echo " -s: Create symbolic link for the disk under /dev/disk/by-id."
echo " Otherwise, the disk name will be printed to STDOUT"
echo " -h: Print this help message"
}
function main() {
local opt_gen_symlink='false'
local device_path=''
while getopts :d:sh flag; do
case "$flag" in
d) device_path="$OPTARG";;
s) opt_gen_symlink='true';;
h) print_help_message
return 0
;;
:) echo "Invalid option: ${OPTARG} requires an argument" 1>&2
return 1
;;
*) return 1
esac
done
if [[ -z "$device_path" ]]; then
echo "Device path (-d) argument required. Use -h for full usage." 1>&2
exit 1
fi
# Ensure the nvme-cli command is installed
command -v "$nvme_cli_bin" > /dev/null 2>&1
if [[ $? -ne 0 ]]; then
err "The nvme utility (/usr/sbin/nvme) was not found. You may need to run \
with sudo or install nvme-cli."
return 1
fi
# Ensure the passed device is actually an NVMe device
"$nvme_cli_bin" id-ctrl "$device_path" &>/dev/null
if [[ $? -ne 0 ]]; then
err "Passed device was not an NVMe device. (You may need to run this \
script as root/with sudo)."
return 1
fi
# Detect the type of attached nvme device
local controller_id
controller_id=$("$nvme_cli_bin" id-ctrl "$device_path")
if [[ ! "$controller_id" =~ nvme_card-pd ]] ; then
err "Device is not a PD-NVMe device"
return 1
fi
# Fill the global variables for the id command for the given disk type
# Error messages will be printed closer to error, no need to reprint here
identify_pd_disk "$device_path"
if [[ $? -ne 0 ]]; then
return $?
fi
# Gen symlinks or print out the globals set by the identify command
if [[ "$opt_gen_symlink" == 'true' ]]; then
gen_symlink "$device_path"
else
# These will be consumed by udev
echo "ID_SERIAL_SHORT=${ID_SERIAL_SHORT}"
echo "ID_SERIAL=${ID_SERIAL}"
fi
return $?
}
main "$@"

View file

@ -0,0 +1,17 @@
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# When a disk is removed, unmount any remaining attached volumes.
ACTION=="remove", SUBSYSTEM=="block", KERNEL=="sd*|vd*|nvme*", RUN+="/bin/sh -c '/bin/umount -fl /dev/$name && /usr/bin/logger -p daemon.warn -s WARNING: hot-removed /dev/$name that was still mounted, data may have been corrupted'"

View file

@ -0,0 +1,37 @@
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Name the attached disks as the specified by deviceName.
ACTION!="add|change", GOTO="gce_disk_naming_end"
SUBSYSTEM!="block", GOTO="gce_disk_naming_end"
# SCSI naming
KERNEL=="sd*|vd*", IMPORT{program}="scsi_id --export --whitelisted -d $tempnode"
# NVME Local SSD naming
KERNEL=="nvme*n*", ATTRS{model}=="nvme_card", PROGRAM="/bin/sh -c 'nsid=$$(echo %k|sed -re s/nvme[0-9]+n\([0-9]+\).\*/\\1/); echo $$((nsid-1))'", ENV{ID_SERIAL_SHORT}="local-nvme-ssd-%c"
KERNEL=="nvme*", ATTRS{model}=="nvme_card", ENV{ID_SERIAL}="Google_EphemeralDisk_$env{ID_SERIAL_SHORT}"
# NVME Persistent Disk IO Timeout
KERNEL=="nvme*n*", ENV{DEVTYPE}=="disk", ATTRS{model}=="nvme_card-pd", ATTR{queue/io_timeout}="4294967295"
# NVME Persistent Disk Naming
KERNEL=="nvme*n*", ATTRS{model}=="nvme_card-pd", IMPORT{program}="google_nvme_id -d $tempnode"
# Symlinks
KERNEL=="sd*|vd*|nvme*", ENV{DEVTYPE}=="disk", SYMLINK+="disk/by-id/google-$env{ID_SERIAL_SHORT}"
KERNEL=="sd*|vd*|nvme*", ENV{DEVTYPE}=="partition", SYMLINK+="disk/by-id/google-$env{ID_SERIAL_SHORT}-part%n"
LABEL="gce_disk_naming_end"

View file

@ -0,0 +1,3 @@
# prevent systemd udev rules from marking unformatted device mapper device as unready (SYSTEMD_READY=0)
# this is the offending rule from systemd: SUBSYSTEM=="block", ENV{DM_UUID}=="CRYPT-*", ENV{ID_PART_TABLE_TYPE}=="", ENV{ID_FS_USAGE}=="", ENV{SYSTEMD_READY}="0"
SUBSYSTEM=="block", ENV{DM_NAME}=="state", ENV{DM_UUID}=="CRYPT-*", ENV{ID_PART_TABLE_TYPE}=="", ENV{ID_FS_USAGE}="constellation-state"

View file

@ -0,0 +1,10 @@
#!/usr/bin/bash
# This script reads the PCR state of the system
# and prints the message to the serial console
main() {
pcr_state="$(tpm2_pcrread sha256)"
echo -e "PCR state:\n${pcr_state}\n" > /run/issue.d/35_constellation_pcrs.issue
}
main

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID3zCCAsegAwIBAgIUHii75K8+vo3LkCUKJjGBiTVJy/8wDQYJKoZIhvcNAQEL
BQAwgYgxCzAJBgNVBAYTAkRFMRwwGgYDVQQIDBNOb3JkcmhlaW4gV2VzdGZhbGVu
MQ8wDQYDVQQHDAZCb2NodW0xHjAcBgNVBAoMFUVkZ2VsZXNzIFN5c3RlbXMgR21i
SDEqMCgGA1UEAwwhQ29uc3RlbGxhdGlvbiBUZXN0aW5nIEtFSyBDQSAyMDIyMB4X
DTIyMDkyMzE0MTYwN1oXDTIyMTAyMzE0MTYwN1owgYgxCzAJBgNVBAYTAkRFMRww
GgYDVQQIDBNOb3JkcmhlaW4gV2VzdGZhbGVuMQ8wDQYDVQQHDAZCb2NodW0xHjAc
BgNVBAoMFUVkZ2VsZXNzIFN5c3RlbXMgR21iSDEqMCgGA1UEAwwhQ29uc3RlbGxh
dGlvbiBUZXN0aW5nIEtFSyBDQSAyMDIyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA9QnVns2kVMrbV6308MDN6xVoTLcQ91ashmpj5Zj/gDiidtguggWS
hJ8bItzH2c02AWrMuf+T5wxaPWxZPjgYI9CnoBg53rgmGWpeBeV2ZWK8wRp0iUQe
GmhaBP+aA4h6UYN7N1120kV0O4BmJg76JwecHK2VXom7H4ILD4EKvXgpiyRdKJm7
CZaJ422G1LIjzLI1Gm2yhTsihyma9iguyV4EFLukp8umGneiE0erRngKw9a6YTWP
eFfTyXtwaWVcGhA5guhkp5U/KQCY1Mr6e4+Zp3ISL8QFBneYbDpVI3nWDNfWPMAH
eLxny2rpc+zvOJ+JDCc3frkWCLV4BZD8ZQIDAQABoz8wPTAdBgNVHQ4EFgQU8iGE
QDAs7qF7LowPBQu0Tft2fbowDwYDVR0TAQH/BAUwAwEB/zALBgNVHQ8EBAMCAYYw
DQYJKoZIhvcNAQELBQADggEBAKF9RGT46YAFhn2CrMuMd3eOalpoLM+SllAtmq5c
7RqUGQBOhuAdf1aiucFXH8xzi7DOV6aVhQG67kv5isISqUdUL80+RpajWYcU5YaW
jEX+w/o2Jv0kzBDBTVcX6uKuid1oiQDXGVL/UaU30Smdk/9ni1RImuPBPupNEljU
AjaduiqqcJArLBmXzEizCnaGhEvdezPIiZDbzARDbkvl1WQthcghh3i6iiBiN7Vp
gGMzEecGJ4oxlxa+fycIiFbrX8h3DgVOCXeaxWtbfmIuswwFeZ3rVXHXizfOMEdr
2qWu/xQwwUfN3Z07kXigfB3akJfqDYO5/mNGNNi6fAJV2do=
-----END CERTIFICATE-----

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID4TCCAsmgAwIBAgIUUsz90g+sTYH+2QEMZf7gKWDiDEwwDQYJKoZIhvcNAQEL
BQAwgYkxCzAJBgNVBAYTAkRFMRwwGgYDVQQIDBNOb3JkcmhlaW4gV2VzdGZhbGVu
MQ8wDQYDVQQHDAZCb2NodW0xHjAcBgNVBAoMFUVkZ2VsZXNzIFN5c3RlbXMgR21i
SDErMCkGA1UEAwwiQ29uc3RlbGxhdGlvbiBUZXN0aW5nIFVFRkkgQ0EgMjAyMjAe
Fw0yMjA5MjMxNDE2MDZaFw0yMjEwMjMxNDE2MDZaMIGJMQswCQYDVQQGEwJERTEc
MBoGA1UECAwTTm9yZHJoZWluIFdlc3RmYWxlbjEPMA0GA1UEBwwGQm9jaHVtMR4w
HAYDVQQKDBVFZGdlbGVzcyBTeXN0ZW1zIEdtYkgxKzApBgNVBAMMIkNvbnN0ZWxs
YXRpb24gVGVzdGluZyBVRUZJIENBIDIwMjIwggEiMA0GCSqGSIb3DQEBAQUAA4IB
DwAwggEKAoIBAQC4fWmh6n3qjKmJSh0zvNc9frZnEcyfK3pDSCtdhNQh/kN59Tjd
+kg79yxPfG5v6MYiQhxva2EVPiAuMmB/zrhq75UmNU6o0WCVk58g7IrXDd4xpO+s
32+umlntT86wHWfrSAj3JduBG8ci1J1uARz0mmHl2gOzoB0dn4WOAoLLMH2bAzjS
ICX0giXiY6Q66JQya4OPMvdwSAW78H6JgTtNIsYKn2clnA3VJZWXIzYl6Bd92zjC
6RFrhzu7WYw9nmIA0HMtjpSeU9JDPUmedU7MPqLAK6kpiR+RrowzkfaFmIUqdxpj
4IlGXkqWqu/I3WzKucdt7X0Run914M5iM4xzAgMBAAGjPzA9MB0GA1UdDgQWBBQ9
/pmrY9gxhPV658NbhCfmcGWkDjAPBgNVHRMBAf8EBTADAQH/MAsGA1UdDwQEAwIB
hjANBgkqhkiG9w0BAQsFAAOCAQEAU+4eQJXS02qof7S+vkLOGznuC4KD0yXs9+jg
Ih6ANg6YBlxNZWDWAYZeJIIrQfINnzC36dQcb4StiUOKJu4eT5YxH4Afv39L3eoZ
eKsVE2Dddt4tE+i8oEBA7XPKZZH8le2V0csnZ2cbsphxftwy72qkFukmkmcBn4Zq
fXeAsdbDHoKlnTPeTNAXcgPUyLGhxqoX5vaIsNDFPGQ3BhsHfmNgKkZ4J+BSGGnc
R4Gre/mN1eNz3LYn7RZeExOrcnwnQAvFVhtz4ZFIndxP3kSiNh2Lo/7p/ECEnnEn
jOUCkARuA3lUZiWMzGMDo/kgTi6C6kulEkCWcp417OeXmThrjA==
-----END CERTIFICATE-----

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID2TCCAsGgAwIBAgIUQZYX6ujSb/QFL2p8BY62ydJ/HEYwDQYJKoZIhvcNAQEL
BQAwgYUxCzAJBgNVBAYTAkRFMRwwGgYDVQQIDBNOb3JkcmhlaW4gV2VzdGZhbGVu
MQ8wDQYDVQQHDAZCb2NodW0xHjAcBgNVBAoMFUVkZ2VsZXNzIFN5c3RlbXMgR21i
SDEnMCUGA1UEAwweQ29uc3RlbGxhdGlvbiBUZXN0aW5nIFBDQSAyMDIyMB4XDTIy
MDkyMzE0MTYwN1oXDTIyMTAyMzE0MTYwN1owgYUxCzAJBgNVBAYTAkRFMRwwGgYD
VQQIDBNOb3JkcmhlaW4gV2VzdGZhbGVuMQ8wDQYDVQQHDAZCb2NodW0xHjAcBgNV
BAoMFUVkZ2VsZXNzIFN5c3RlbXMgR21iSDEnMCUGA1UEAwweQ29uc3RlbGxhdGlv
biBUZXN0aW5nIFBDQSAyMDIyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAx4lZNSnqGy1Dk8hxQHgzzIpCAr2+zPtNpPKdXq8PLIZ1fBERYkVW66JHH5Dl
9pzMl8KaJkkRbv5orxzpL0g9qtM+j/JQl1Ap7S9MAXgysATwyaFWfddLW+ZOi/XS
oOhq1JXp0FFFUBQwP56JtqhUEcgaICi41L9S/XMxQaoZfRQu+9ICVlDMLurqVdI4
Or5mz7eWUqj34Xl5bpjMOZgbyI1pYN4UxvODKFIYhVTmKZOWA0np4JtI4iCA5Dkd
ckF9uwOyQUyGUs7Atar2tZr2hWK66NxhcdSfj9ltuBHwskJnoJYesWPz0QHz26xO
d+a4WwIKhKfchvv5qVq9KFkRpQIDAQABoz8wPTAdBgNVHQ4EFgQUvCP4FCxcypyU
STwBjhtgqnojgaEwDwYDVR0TAQH/BAUwAwEB/zALBgNVHQ8EBAMCAYYwDQYJKoZI
hvcNAQELBQADggEBAAhpPWCLVl8FvWZOeZA2iOKmLen/c6W0d9WiB9qvj+mI8QBq
kLhkqkxEcXTpaOjCxJLqLU3LsA828yUdfL0zmxusrJlz+gux+KRlRn1yTsCyWqmx
9rYUXO6IFwZnSUV923uVZ1nkHydwqV9hqIgrKYrppzDXOsm+ugz+NP2lxFNp6q8u
MyMsClaTdfPT+EUucp7g1lPVdtWV4RbWK/v3/rp2l5jzs7F0Roa3zc/+YquwB/AA
ZasNpDInz6RlEzhA/GkXEO5Rssem4a1NBwGrvs9mCIm5sKLaxLUnlM7SLCAeEBdS
qxLwUVTOtnQzWEM04I978V3zfKNPBtl2mvhuXYw=
-----END CERTIFICATE-----

Binary file not shown.

View file

@ -0,0 +1,74 @@
#!/usr/bin/env bash
set -euo pipefail
if [ -n "${CONFIG_FILE-}" ] && [ -f "${CONFIG_FILE}" ]; then
. "${CONFIG_FILE}"
fi
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
-n|--name)
AZURE_VM_NAME="$2"
shift # past argument
shift # past value
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
AZ_VM_INFO=$(az vm show --name "${AZURE_VM_NAME}" --resource-group "${AZURE_RESOURCE_GROUP_NAME}" -o json)
NIC=$(echo "${AZ_VM_INFO}" | jq -r '.networkProfile.networkInterfaces[0].id')
NIC_INFO=$(az network nic show --ids "${NIC}" -o json)
PUBIP=$(echo "${NIC_INFO}" | jq -r '.ipConfigurations[0].publicIpAddress.id')
NSG=$(echo "${NIC_INFO}" | jq -r '.networkSecurityGroup.id')
SUBNET=$(echo "${NIC_INFO}" | jq -r '.ipConfigurations[0].subnet.id')
VNET=$(echo $SUBNET | sed 's#/subnets/.*##')
DISK=$(echo "${AZ_VM_INFO}" | jq -r '.storageProfile.osDisk.managedDisk.id')
delete_vm () {
az vm delete -y --name "${AZURE_VM_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP_NAME}" || true
}
delete_vnet () {
az network vnet delete --ids "${VNET}" || true
}
delete_subnet () {
az network vnet subnet delete --ids "${SUBNET}" || true
}
delete_nsg () {
az network nsg delete --ids "${NSG}" || true
}
delete_pubip () {
az network public-ip delete --ids "${PUBIP}" || true
}
delete_disk () {
az disk delete -y --ids "${DISK}" || true
}
delete_nic () {
az network nic delete --ids "${NIC}" || true
}
delete_vm
delete_disk
delete_nic
delete_nsg
delete_subnet
delete_vnet
delete_pubip

View file

@ -0,0 +1,65 @@
#!/usr/bin/env bash
set -euo pipefail
if [ -n "${CONFIG_FILE-}" ] && [ -f "${CONFIG_FILE}" ]; then
. "${CONFIG_FILE}"
fi
AZURE_SUBSCRIPTION=$(az account show --query id -o tsv)
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
-n|--name)
AZURE_VM_NAME="$2"
shift # past argument
shift # past value
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
VM_DISK=$(az vm show -g "${AZURE_RESOURCE_GROUP_NAME}" --name "${AZURE_VM_NAME}" --query "storageProfile.osDisk.managedDisk.id" -o tsv)
LOCATION=$(az disk show --ids "${VM_DISK}" --query "location" -o tsv)
az snapshot create \
-g "${AZURE_RESOURCE_GROUP_NAME}" \
--source "${VM_DISK}" \
--name "${AZURE_SNAPSHOT_NAME}" \
-l "${LOCATION}"
# Azure CLI does not implement getSecureVMGuestStateSAS for snapshots yet
# az snapshot grant-access \
# --duration-in-seconds 3600 \
# --access-level Read \
# --name "${AZURE_SNAPSHOT_NAME}" \
# -g "${AZURE_RESOURCE_GROUP_NAME}"
BEGIN=$(az rest \
--method post \
--url "https://management.azure.com/subscriptions/${AZURE_SUBSCRIPTION}/resourceGroups/${AZURE_RESOURCE_GROUP_NAME}/providers/Microsoft.Compute/snapshots/${AZURE_SNAPSHOT_NAME}/beginGetAccess" \
--uri-parameters api-version="2021-12-01" \
--body '{"access": "Read", "durationInSeconds": 3600, "getSecureVMGuestStateSAS": true}' \
--verbose 2>&1)
ASYNC_OPERATION_URI=$(echo "${BEGIN}" | grep Azure-AsyncOperation | cut -d ' ' -f 7 | tr -d "'")
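# beginGetAccess is asynchronous: wait briefly, then poll the Azure-AsyncOperation endpoint for the SAS URL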
sleep 10
ACCESS=$(az rest --method get --url "${ASYNC_OPERATION_URI}")
VMGS_URL=$(echo "${ACCESS}" | jq -r '.properties.output.securityDataAccessSAS')
curl -L -o "${AZURE_VMGS_FILENAME}" "${VMGS_URL}"
az snapshot revoke-access \
--name "${AZURE_SNAPSHOT_NAME}" \
-g "${AZURE_RESOURCE_GROUP_NAME}"
az snapshot delete \
--name "${AZURE_SNAPSHOT_NAME}" \
-g "${AZURE_RESOURCE_GROUP_NAME}"
echo "VMGS saved to ${AZURE_VMGS_FILENAME}"

View file

@ -0,0 +1,101 @@
#!/usr/bin/env bash
set -euo pipefail
if [ -n "${CONFIG_FILE-}" ] && [ -f "${CONFIG_FILE}" ]; then
. "${CONFIG_FILE}"
fi
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
-n|--name)
AZURE_VM_NAME="$2"
shift # past argument
shift # past value
;;
-g|--gallery)
CREATE_FROM_GALLERY=YES
shift # past argument
;;
-d|--disk)
CREATE_FROM_GALLERY=NO
shift # past argument
;;
--secure-boot)
AZURE_SECURE_BOOT="$2"
shift # past argument
shift # past value
;;
--disk-name)
AZURE_DISK_NAME="$2"
shift # past argument
shift # past value
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
if [[ "${AZURE_SECURITY_TYPE}" == "ConfidentialVM" ]]; then
VMSIZE="Standard_DC2as_v5"
elif [[ "${AZURE_SECURITY_TYPE}" == "TrustedLaunch" ]]; then
VMSIZE="standard_D2as_v5"
else
echo "Unknown security type: ${AZURE_SECURITY_TYPE}"
exit 1
fi
create_vm_from_disk () {
AZURE_DISK_REFERENCE=$(az disk show --resource-group ${AZURE_RESOURCE_GROUP_NAME} --name ${AZURE_DISK_NAME} --query id -o tsv)
az vm create --name "${AZURE_VM_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP_NAME}" \
-l ${AZURE_REGION} \
--size "${VMSIZE}" \
--public-ip-sku Standard \
--os-type Linux \
--attach-os-disk "${AZURE_DISK_REFERENCE}" \
--security-type "${AZURE_SECURITY_TYPE}" \
--os-disk-security-encryption-type VMGuestStateOnly \
--enable-vtpm true \
--enable-secure-boot "${AZURE_SECURE_BOOT}" \
--boot-diagnostics-storage "" \
--no-wait
}
create_vm_from_sig () {
AZURE_IMAGE_REFERENCE=$(az sig image-version show \
--gallery-image-definition "${AZURE_IMAGE_DEFINITION}" \
--gallery-image-version "${AZURE_IMAGE_VERSION}" \
--gallery-name "${AZURE_GALLERY_NAME}" \
-g "${AZURE_RESOURCE_GROUP_NAME}" \
--query id -o tsv)
az vm create --name "${AZURE_VM_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP_NAME}" \
-l ${AZURE_REGION} \
--size "${VMSIZE}" \
--public-ip-sku Standard \
--image "${AZURE_IMAGE_REFERENCE}" \
--security-type "${AZURE_SECURITY_TYPE}" \
--os-disk-security-encryption-type VMGuestStateOnly \
--enable-vtpm true \
--enable-secure-boot "${AZURE_SECURE_BOOT}" \
--boot-diagnostics-storage "" \
--no-wait
}
if [ "$CREATE_FROM_GALLERY" = "YES" ]; then
create_vm_from_sig
else
create_vm_from_disk
fi
sleep 30
az vm boot-diagnostics enable --name "${AZURE_VM_NAME}" --resource-group "${AZURE_RESOURCE_GROUP_NAME}"

View file

@ -0,0 +1,89 @@
#!/usr/bin/env bash
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
BASE_DIR=$(realpath "${SCRIPT_DIR}/..")
# Set to qemu+tcp://localhost:16599/system for dockerized libvirt setup
if [[ -z "${LIBVIRT_SOCK}" ]]; then
LIBVIRT_SOCK=qemu:///system
fi
libvirt_nvram_gen () {
local image_path="${1}"
if test -f "${BASE_DIR}/image.nvram.template"; then
echo "NVRAM template already generated: $(realpath "--relative-to=$(pwd)" ${BASE_DIR}/image.nvram.template)"
return
fi
if ! test -f "${image_path}"; then
echo "Image \"${image_path}\" does not exist yet. To generate nvram, create disk image first."
return
fi
OVMF_CODE=/usr/share/OVMF/OVMF_CODE_4M.ms.fd
OVMF_VARS=/usr/share/OVMF/OVMF_VARS_4M.ms.fd
if ! test -f "${OVMF_CODE}"; then
OVMF_CODE=/usr/share/OVMF/OVMF_CODE.secboot.fd
fi
if ! test -f "${OVMF_VARS}"; then
OVMF_VARS=/usr/share/OVMF/OVMF_VARS.secboot.fd
fi
echo "Using OVMF_CODE: ${OVMF_CODE}"
echo "Using OVMF_VARS: ${OVMF_VARS}"
# generate nvram file using libvirt
virt-install --name constell-nvram-gen \
--connect ${LIBVIRT_SOCK} \
--nonetworks \
--description 'Constellation' \
--ram 1024 \
--vcpus 1 \
--osinfo detect=on,require=off \
--disk "${image_path},format=raw" \
--boot "machine=q35,menu=on,loader=${OVMF_CODE},loader.readonly=yes,loader.type=pflash,nvram.template=${OVMF_VARS},nvram=${BASE_DIR}/image.nvram,loader_secure=yes" \
--features smm.state=on \
--noautoconsole
echo -e 'connect using'
echo -e ' \u001b[1mvirsh console constell-nvram-gen\u001b[0m'
echo -e ''
echo -e 'Load db cert with MokManager or enroll full PKI with firmware setup'
echo -e ''
echo -e ' \u001b[1mMokManager\u001b[0m'
echo -e ' For mokmanager, try to boot as usual. You will see this message:'
echo -e ' > "Verification failed: (0x1A) Security Violation"'
echo -e ' Press OK, then ENTER, then "Enroll key from disk"'
echo -e ' Select the following key:'
echo -e ' > \u001b[1m/EFI/loader/keys/auto/db.cer\u001b[0m'
echo -e ' Press Continue, then choose "Yes" to the question "Enroll the key(s)?"'
echo -e ' Choose reboot and continue this script.'
echo -e ''
echo -e ' \u001b[1mFirmware setup\u001b[0m'
echo -e ' For firmware setup, press F2.'
echo -e ' Go to "Device Manager">"Secure Boot Configuration">"Secure Boot Mode"'
echo -e ' Choose "Custom Mode"'
echo -e ' Go to "Custom Securee Boot Options"'
echo -e ' Go to "PK Options">"Enroll PK", Press "Y" if queried, "Enroll PK using File"'
echo -e ' Select the following cert: \u001b[1m/EFI/loader/keys/auto/PK.cer\u001b[0m'
echo -e ' Choose "Commit Changes and Exit"'
echo -e ' Go to "KEK Options">"Enroll KEK", Press "Y" if queried, "Enroll KEK using File"'
echo -e ' Select the following cert: \u001b[1m/EFI/loader/keys/auto/KEK.cer\u001b[0m'
echo -e ' Choose "Commit Changes and Exit"'
echo -e ' Go to "DB Options">"Enroll Signature">"Enroll Signature using File"'
echo -e ' Select the following cert: \u001b[1m/EFI/loader/keys/auto/db.cer\u001b[0m'
echo -e ' Choose "Commit Changes and Exit"'
echo -e ' Repeat the last step for the following certs:'
echo -e ' > \u001b[1m/EFI/loader/keys/auto/MicWinProPCA2011_2011-10-19.crt\u001b[0m'
echo -e ' > \u001b[1m/EFI/loader/keys/auto/MicCorUEFCA2011_2011-06-27.crt\u001b[0m'
echo -e ' Reboot and continue this script.'
echo -e ''
echo -e 'Press ENTER to continue after you followed one of the guides from above.'
read
sudo cp "${BASE_DIR}/image.nvram" "${BASE_DIR}/image.nvram.template"
virsh --connect "${LIBVIRT_SOCK}" destroy --domain constell-nvram-gen
virsh --connect "${LIBVIRT_SOCK}" undefine --nvram constell-nvram-gen
rm -f "${BASE_DIR}/image.nvram"
echo "NVRAM template generated: $(realpath "--relative-to=$(pwd)" ${BASE_DIR}/image.nvram.template)"
}
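# Entry point: takes the path to the raw disk image as its only argument.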
libvirt_nvram_gen "$1"

View file

@ -0,0 +1,65 @@
#!/usr/bin/env bash
# This script generates a PKI for secure boot.
# It is based on the example from https://github.com/systemd/systemd/blob/main/man/loader.conf.xml
# This is meant to be used for development purposes only.
# Release images are signed using a different set of keys.
# To generate a fresh set of development keys, point PKI at an empty folder and set PKI_SET to "dev".
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
TEMPLATES=${SCRIPT_DIR}/templates
BASE_DIR=$(realpath "${SCRIPT_DIR}/..")
if [ -z "${PKI}" ]; then
PKI=${BASE_DIR}/pki
fi
if [ -z "${PKI_SET}" ]; then
PKI_SET=dev
fi
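# gen_pki creates a self-signed PK, KEK, and db for development builds and appends the
# Microsoft CA certificates (see comments below) so that shim, option ROMs, and Windows
# remain bootable with this database.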
gen_pki () {
# Only use for non-production images.
# Use real PKI for production images instead.
count=$(ls -1 ${PKI}/*.{key,crt,cer,esl,auth} 2>/dev/null | wc -l)
if [ "${count}" -ne 0 ]; then
echo PKI files $(ls -1 $(realpath "--relative-to=$(pwd)" ${PKI})/*.{key,crt,cer,esl,auth}) already exist
return
fi
mkdir -p "${PKI}"
pushd "${PKI}"
uuid=$(systemd-id128 new --uuid)
for key in PK KEK db; do
openssl req -new -x509 -config "${TEMPLATES}/${PKI_SET}_${key}.conf" -keyout "${key}.key" -out "${key}.crt" -nodes
openssl x509 -outform DER -in "${key}.crt" -out "${key}.cer"
cert-to-efi-sig-list -g "${uuid}" "${key}.crt" "${key}.esl"
done
for key in MicWinProPCA2011_2011-10-19.crt MicCorUEFCA2011_2011-06-27.crt MicCorKEKCA2011_2011-06-24.crt; do
curl -sL "https://www.microsoft.com/pkiops/certs/${key}" --output "${key}"
sbsiglist --owner 77fa9abd-0359-4d32-bd60-28f4e78f784b --type x509 --output "${key%crt}esl" "${key}"
done
# Optionally add Microsoft Windows Production CA 2011 (needed to boot into Windows).
cat MicWinProPCA2011_2011-10-19.esl >> db.esl
# Optionally add Microsoft Corporation UEFI CA 2011 (for firmware drivers / option ROMs
# and third-party boot loaders (including shim). This is highly recommended on real
# hardware as not including this may soft-brick your device (see next paragraph).
cat MicCorUEFCA2011_2011-06-27.esl >> db.esl
# Optionally add Microsoft Corporation KEK CA 2011. Recommended if either of the
# Microsoft keys is used as the official UEFI revocation database is signed with this
# key. The revocation database can be updated with [fwupdmgr(1)](https://www.freedesktop.org/software/systemd/man/fwupdmgr.html#).
cat MicCorKEKCA2011_2011-06-24.esl >> KEK.esl
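# Sign the signature lists: the PK signs itself and the KEK, and the KEK signs the db.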
sign-efi-sig-list -c PK.crt -k PK.key PK PK.esl PK.auth
sign-efi-sig-list -c PK.crt -k PK.key KEK KEK.esl KEK.auth
sign-efi-sig-list -c KEK.crt -k KEK.key db db.esl db.auth
popd
}
# gen_pki generates a PKI for testing purposes only.
# if keys/certs are already present in the pki folder, they are not regenerated.
gen_pki

View file

@ -0,0 +1,45 @@
#!/usr/bin/env bash
set -euo pipefail
# This script is used to add a signed shim to the image.raw file EFI partition after running `mkosi build`.
if (( $# != 1 ))
then
echo "Usage: $0 <image.raw>"
exit 1
fi
# SOURCE is the URL used to download the signed shim RPM
SOURCE=https://kojipkgs.fedoraproject.org/packages/shim/15.6/2/x86_64/shim-x64-15.6-2.x86_64.rpm
# EXPECTED_SHA512 is the SHA512 checksum of the signed shim RPM
EXPECTED_SHA512=971978bddee95a6a134ef05c4d88cf5df41926e631de863b74ef772307f3e106c82c8f6889c18280d47187986abd774d8671c5be4b85b1b0bb3d1858b65d02cf
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
BASE_DIR=$(realpath "${SCRIPT_DIR}/..")
TMPDIR=$(mktemp -d)
pushd "${TMPDIR}"
curl -sL -o shim.rpm "${SOURCE}"
echo "Checking SHA512 checksum of signed shim..."
sha512sum -c <<< "${EXPECTED_SHA512} shim.rpm"
rpm2cpio shim.rpm | cpio -idmv
echo "Extracted signed shim to ${TMPDIR}"
popd
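# Mount the EFI system partition (first partition of the image), install the signed shim
# as the default boot entry, and let it chainload systemd-boot (shim looks for grubx64.efi).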
MOUNTPOINT=$(mktemp -d)
sectoroffset=$(sfdisk -J "${1}" | jq -r '.partitiontable.partitions[0].start')
byteoffset=$((sectoroffset * 512))
mount -o offset="${byteoffset}" "${1}" "${MOUNTPOINT}"
mkdir -p "${MOUNTPOINT}/EFI/BOOT/"
cp "${TMPDIR}/boot/efi/EFI/BOOT/BOOTX64.EFI" "${MOUNTPOINT}/EFI/BOOT/"
cp "${TMPDIR}/boot/efi/EFI/fedora/mmx64.efi" "${MOUNTPOINT}/EFI/BOOT/"
cp "${MOUNTPOINT}/EFI/systemd/systemd-bootx64.efi" "${MOUNTPOINT}/EFI/BOOT/grubx64.efi"
# Remove unused kernel and initramfs from EFI to save space
# We boot from unified kernel image anyway
rm -f "${MOUNTPOINT}"/*/*/{linux,initrd}
umount "${MOUNTPOINT}"
rm -rf "${MOUNTPOINT}"
rm -rf "${TMPDIR}"

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Development KEK CA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Development UEFI CA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Development PCA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation KEK CA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation UEFI CA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Production PCA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Testing KEK CA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Testing UEFI CA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

View file

@ -0,0 +1,20 @@
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
x509_extensions = v3_req
req_extensions = v3_req
prompt = no
dirstring_type = nobmp
[ req_distinguished_name ]
C = DE
ST = Nordrhein Westfalen
L = Bochum
O = Edgeless Systems GmbH
CN = Constellation Testing PCA 2022
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true
keyUsage = digitalSignature,keyCertSign,cRLSign

57
image/mkosi/upload/pack.sh Executable file
View file

@ -0,0 +1,57 @@
#!/usr/bin/env bash
set -euo pipefail
# Show progress on pipes if `pv` is installed
# Otherwise use plain cat
if ! command -v pv &> /dev/null
then
PV="cat"
else
PV="pv"
fi
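# pack converts a raw disk image into the upload format expected by the target cloud:
# a gzipped tarball containing disk.raw for GCP, or a fixed-size VHD (size rounded up
# to a 1 MiB boundary) for Azure.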
pack () {
local cloudprovider=$1
local unpacked_image=$2
local packed_image=$3
local unpacked_image_dir
unpacked_image_dir=$(mktemp -d)
local unpacked_image_filename
unpacked_image_filename=disk.raw
local tmp_tar_file
tmp_tar_file=$(mktemp -t verity.XXXXXX.tar)
cp "${unpacked_image}" "${unpacked_image_dir}/${unpacked_image_filename}"
case $cloudprovider in
gcp)
echo "📥 Packing GCP image..."
tar --owner=0 --group=0 -C "${unpacked_image_dir}" -Sch --format=oldgnu -f "${tmp_tar_file}" "${unpacked_image_filename}"
"${PV}" "${tmp_tar_file}" | pigz -9c > "${packed_image}"
rm "${tmp_tar_file}"
echo " Repacked image stored in ${packed_image}"
;;
azure)
echo "📥 Packing Azure image..."
truncate -s %1MiB "${unpacked_image_dir}/${unpacked_image_filename}"
qemu-img convert -p -f raw -O vpc -o force_size,subformat=fixed "${unpacked_image_dir}/${unpacked_image_filename}" "$packed_image"
echo " Repacked image stored in ${packed_image}"
;;
*)
echo "unknown cloud provider"
exit 1
;;
esac
rm -r "${unpacked_image_dir}"
}
if [ $# -ne 3 ]; then
echo "Usage: $0 <cloudprovider> <unpacked_image> <packed_image>"
exit 1
fi
pack "${1}" "${2}" "${3}"

View file

@ -0,0 +1,190 @@
#!/usr/bin/env bash
set -euo pipefail
if [ -z "${CONFIG_FILE-}" ] && [ -f "${CONFIG_FILE-}" ]; then
. "${CONFIG_FILE}"
fi
CREATE_SIG_VERSION=NO
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
-g|--gallery)
CREATE_SIG_VERSION=YES
shift # past argument
;;
--disk-name)
AZURE_DISK_NAME="$2"
shift # past argument
shift # past value
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
if [[ "${AZURE_SECURITY_TYPE}" == "ConfidentialVM" ]]; then
AZURE_DISK_SECURITY_TYPE=ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey
AZURE_SIG_VERSION_ENCRYPTION_TYPE=EncryptedVMGuestStateOnlyWithPmk
elif [[ "${AZURE_SECURITY_TYPE}" == "ConfidentialVMSupported" ]]; then
AZURE_DISK_SECURITY_TYPE=""
elif [[ "${AZURE_SECURITY_TYPE}" == "TrustedLaunch" ]]; then
AZURE_DISK_SECURITY_TYPE=TrustedLaunch
else
echo "Unknown security type: ${AZURE_SECURITY_TYPE}"
exit 1
fi
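# Build one CVM encryption entry per replication region; the entries are matched to
# --target-regions by position when creating the gallery image version.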
AZURE_CVM_ENCRYPTION_ARGS=""
if [[ -n "${AZURE_SIG_VERSION_ENCRYPTION_TYPE-}" ]]; then
AZURE_CVM_ENCRYPTION_ARGS=" --target-region-cvm-encryption "
for region in ${AZURE_REPLICATION_REGIONS}; do
AZURE_CVM_ENCRYPTION_ARGS=" ${AZURE_CVM_ENCRYPTION_ARGS} ${AZURE_SIG_VERSION_ENCRYPTION_TYPE}, "
done
fi
echo "Replicating image in ${AZURE_REPLICATION_REGIONS}"
AZURE_VMGS_PATH=${1-}
if [[ -z "${AZURE_VMGS_PATH}" ]] && [[ "${AZURE_SECURITY_TYPE}" == "ConfidentialVM" ]]; then
echo "No VMGS path provided - using default ConfidentialVM VMGS"
AZURE_VMGS_PATH="${BLOBS_DIR}/cvm-vmgs.vhd"
elif [[ -z "${AZURE_VMGS_PATH}" ]] && [[ "${AZURE_SECURITY_TYPE}" == "TrustedLaunch" ]]; then
echo "No VMGS path provided - using default TrsutedLaunch VMGS"
AZURE_VMGS_PATH="${BLOBS_DIR}/trusted-launch-vmgs.vhd"
fi
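# az disk create needs the exact upload size in bytes; take it from the image file.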
SIZE=$(wc -c "${AZURE_IMAGE_PATH}" | cut -d " " -f1)
create_disk_with_vmgs () {
az disk create \
-n "${AZURE_DISK_NAME}" \
-g "${AZURE_RESOURCE_GROUP_NAME}" \
-l "${AZURE_REGION}" \
--hyper-v-generation V2 \
--os-type Linux \
--upload-size-bytes "${SIZE}" \
--sku standard_lrs \
--upload-type UploadWithSecurityData \
--security-type "${AZURE_DISK_SECURITY_TYPE}"
az disk wait --created -n "${AZURE_DISK_NAME}" -g "${AZURE_RESOURCE_GROUP_NAME}"
az disk list --output table --query "[?name == '${AZURE_DISK_NAME}' && resourceGroup == '${AZURE_RESOURCE_GROUP_NAME^^}']"
DISK_SAS=$(az disk grant-access -n ${AZURE_DISK_NAME} -g ${AZURE_RESOURCE_GROUP_NAME} \
--access-level Write --duration-in-seconds 86400 \
${AZURE_VMGS_PATH+"--secure-vm-guest-state-sas"})
azcopy copy "${AZURE_IMAGE_PATH}" \
"$(echo $DISK_SAS | jq -r .accessSas)" \
--blob-type PageBlob
if [[ -z "${AZURE_VMGS_PATH}" ]]; then
echo "No VMGS path provided - skipping VMGS upload"
else
azcopy copy "${AZURE_VMGS_PATH}" \
"$(echo $DISK_SAS | jq -r .securityDataAccessSas)" \
--blob-type PageBlob
fi
az disk revoke-access -n "${AZURE_DISK_NAME}" -g "${AZURE_RESOURCE_GROUP_NAME}"
}
create_disk_without_vmgs () {
az disk create \
-n "${AZURE_DISK_NAME}" \
-g "${AZURE_RESOURCE_GROUP_NAME}" \
-l "${AZURE_REGION}" \
--hyper-v-generation V2 \
--os-type Linux \
--upload-size-bytes "${SIZE}" \
--sku standard_lrs \
--upload-type Upload
az disk wait --created -n "${AZURE_DISK_NAME}" -g "${AZURE_RESOURCE_GROUP_NAME}"
az disk list --output table --query "[?name == '${AZURE_DISK_NAME}' && resourceGroup == '${AZURE_RESOURCE_GROUP_NAME^^}']"
DISK_SAS=$(az disk grant-access -n ${AZURE_DISK_NAME} -g ${AZURE_RESOURCE_GROUP_NAME} \
--access-level Write --duration-in-seconds 86400)
azcopy copy "${AZURE_IMAGE_PATH}" \
"$(echo $DISK_SAS | jq -r .accessSas)" \
--blob-type PageBlob
az disk revoke-access -n "${AZURE_DISK_NAME}" -g "${AZURE_RESOURCE_GROUP_NAME}"
}
create_disk () {
if [[ -z "${AZURE_VMGS_PATH}" ]]; then
create_disk_without_vmgs
else
create_disk_with_vmgs
fi
}
delete_disk () {
az disk delete -y -n "${AZURE_DISK_NAME}" -g "${AZURE_RESOURCE_GROUP_NAME}"
}
create_image () {
if [[ -n "${AZURE_VMGS_PATH}" ]]; then
return
fi
az image create \
--resource-group ${AZURE_RESOURCE_GROUP_NAME} \
-l ${AZURE_REGION} \
-n ${AZURE_DISK_NAME} \
--hyper-v-generation V2 \
--os-type Linux \
--source "$(az disk list --query "[?name == '${AZURE_DISK_NAME}' && resourceGroup == '${AZURE_RESOURCE_GROUP_NAME^^}'] | [0].id" --output tsv)"
}
delete_image () {
if [[ -n "${AZURE_VMGS_PATH}" ]]; then
return
fi
az image delete -n "${AZURE_DISK_NAME}" -g "${AZURE_RESOURCE_GROUP_NAME}"
}
create_sig_version () {
if [[ -n "${AZURE_VMGS_PATH}" ]]; then
local DISK="$(az disk list --query "[?name == '${AZURE_DISK_NAME}' && resourceGroup == '${AZURE_RESOURCE_GROUP_NAME^^}'] | [0].id" --output tsv)"
local SOURCE="--os-snapshot ${DISK}"
else
local IMAGE="$(az image list --query "[?name == '${AZURE_DISK_NAME}' && resourceGroup == '${AZURE_RESOURCE_GROUP_NAME^^}'] | [0].id" --output tsv)"
local SOURCE="--managed-image ${IMAGE}"
fi
az sig create -l "${AZURE_REGION}" --gallery-name "${AZURE_GALLERY_NAME}" --resource-group "${AZURE_RESOURCE_GROUP_NAME}" || true
az sig image-definition create \
--resource-group "${AZURE_RESOURCE_GROUP_NAME}" \
-l "${AZURE_REGION}" \
--gallery-name "${AZURE_GALLERY_NAME}" \
--gallery-image-definition "${AZURE_IMAGE_DEFINITION}" \
--publisher "${AZURE_PUBLISHER}" \
--offer "${AZURE_IMAGE_OFFER}" \
--sku "${AZURE_SKU}" \
--os-type Linux \
--os-state generalized \
--hyper-v-generation V2 \
--features SecurityType="${AZURE_SECURITY_TYPE}" || true
az sig image-version create \
--resource-group "${AZURE_RESOURCE_GROUP_NAME}" \
-l "${AZURE_REGION}" \
--gallery-name "${AZURE_GALLERY_NAME}" \
--gallery-image-definition "${AZURE_IMAGE_DEFINITION}" \
--gallery-image-version "${AZURE_IMAGE_VERSION}" \
--target-regions ${AZURE_REPLICATION_REGIONS} \
${AZURE_CVM_ENCRYPTION_ARGS} \
--replica-count 1 \
--replication-mode Full \
${SOURCE}
}
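# The disk is always uploaded; with --gallery it is additionally published as a gallery
# image version, and the intermediate image and disk are cleaned up afterwards.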
create_disk
if [ "$CREATE_SIG_VERSION" = "YES" ]; then
create_image
create_sig_version
delete_image
delete_disk
fi

View file

@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -euo pipefail
if [ -z "${CONFIG_FILE-}" ] && [ -f "${CONFIG_FILE-}" ]; then
. "${CONFIG_FILE}"
fi
PK_FILE=${PKI}/PK.cer
KEK_FILES=${PKI}/KEK.cer,${PKI}/MicCorKEKCA2011_2011-06-24.crt
DB_FILES=${PKI}/db.cer,${PKI}/MicWinProPCA2011_2011-10-19.crt,${PKI}/MicCorUEFCA2011_2011-06-27.crt
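# Stage the image in a GCS bucket, import it as a GCP image with the Secure Boot keys
# (PK, KEK, db) baked in, grant all authenticated users permission to use it, then
# delete the staging object.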
gsutil mb -l "${GCP_REGION}" "gs://${GCP_BUCKET}" || true
gsutil pap set enforced "gs://${GCP_BUCKET}" || true
gsutil cp "${GCP_IMAGE_PATH}" "gs://${GCP_BUCKET}/${GCP_IMAGE_FILENAME}"
gcloud compute images create "${GCP_IMAGE_NAME}" \
"--family=${GCP_IMAGE_FAMILY}" \
"--source-uri=gs://${GCP_BUCKET}/${GCP_IMAGE_FILENAME}" \
"--guest-os-features=GVNIC,SEV_CAPABLE,VIRTIO_SCSI_MULTIQUEUE,UEFI_COMPATIBLE" \
"--platform-key-file=${PK_FILE}" \
"--key-exchange-key-file=${KEK_FILES}" \
"--signature-database-file=${DB_FILES}" \
"--project=${GCP_PROJECT}"
gcloud compute images add-iam-policy-binding "${GCP_IMAGE_NAME}" \
"--project=${GCP_PROJECT}" \
--member='allAuthenticatedUsers' \
--role='roles/compute.imageUser'
gsutil rm "gs://${GCP_BUCKET}/${GCP_IMAGE_FILENAME}"