From cf82794b1d827093e074721b50b4d38df9bddf76 Mon Sep 17 00:00:00 2001
From: Thomas Tendyck
Date: Fri, 11 Nov 2022 16:23:06 +0100
Subject: [PATCH] docs: publish access manager removal

---
 docs/docs/workflows/troubleshooting.md        |  2 +-
 .../version-2.2/architecture/components.md    |  7 ---
 .../version-2.2/workflows/ssh.md              | 59 -------------------
 .../version-2.2/workflows/troubleshooting.md  | 28 +++++++++
 .../version-2.2-sidebars.json                 |  5 --
 5 files changed, 29 insertions(+), 72 deletions(-)
 delete mode 100644 docs/versioned_docs/version-2.2/workflows/ssh.md

diff --git a/docs/docs/workflows/troubleshooting.md b/docs/docs/workflows/troubleshooting.md
index 0555e9076..8ac5b5dc9 100644
--- a/docs/docs/workflows/troubleshooting.md
+++ b/docs/docs/workflows/troubleshooting.md
@@ -68,7 +68,7 @@ Debugging via a shell on a node is [directly supported by Kubernetes](https://ku
    The node's file system is mounted at `/host`.
 
-3. Once finished, cleanup the debug pod:
+3. Once finished, clean up the debug pod:
 
    ```sh
    kubectl delete pod node-debugger-constell-worker-xksa0-000000-bjthj
    ```

diff --git a/docs/versioned_docs/version-2.2/architecture/components.md b/docs/versioned_docs/version-2.2/architecture/components.md
index 2ba453a3f..c2e09293b 100644
--- a/docs/versioned_docs/version-2.2/architecture/components.md
+++ b/docs/versioned_docs/version-2.2/architecture/components.md
@@ -8,7 +8,6 @@ These features are provided by several components:
 * The [JoinService](components.md#joinservice) joins new nodes to an existing cluster
 * The [VerificationService](components.md#verificationservice) provides remote attestation functionality
 * The [Key Management Service (KMS)](components.md#kms) manages Constellation-internal keys
-* The [AccessManager](components.md#accessmanager) manages node SSH access
 
 The relations between components are shown in the following diagram:
 
@@ -22,7 +21,6 @@ flowchart LR
     C[Bootstrapper]
   end
   subgraph Kubernetes
-    D[AccessManager]
     E[JoinService]
     F[KMS]
     G[VerificationService]
@@ -74,8 +72,3 @@ Read more about the hardware-based [attestation feature](attestation.md) of Cons
 The *KMS* runs as a DaemonSet on each control-plane node.
 It implements the key management for the [storage encryption keys](keys.md#storage-encryption) in Constellation. These keys are used for the [state disk](images.md#state-disk) of each node and the [transparently encrypted storage](encrypted-storage.md) for Kubernetes.
 Depending on whether the [constellation-managed](keys.md#constellation-managed-key-management) or [user-managed](keys.md#user-managed-key-management) mode is used, the *KMS* holds the key encryption key (KEK) directly or calls an external service for key derivation, respectively.
-
-## AccessManager
-
-The *AccessManager* runs as DaemonSet on each node.
-It manages the user's SSH access to nodes as specified in the config.

diff --git a/docs/versioned_docs/version-2.2/workflows/ssh.md b/docs/versioned_docs/version-2.2/workflows/ssh.md
deleted file mode 100644
index 0871973f7..000000000
--- a/docs/versioned_docs/version-2.2/workflows/ssh.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# Manage SSH keys
-
-Constellation allows you to create UNIX users that can connect to both control-plane and worker nodes over SSH. As the system partitions are read-only, users need to be re-created upon each restart of a node. This is automated by the *Access Manager*.
-
-On cluster initialization, users defined in the `ssh-users` section of the Constellation configuration file are created and stored in the `ssh-users` ConfigMap in the `kube-system` namespace. For a running cluster, you can add or remove users by modifying the ConfigMap and restarting a node.
-
-## Access Manager
-The Access Manager supports all OpenSSH key types. These are RSA, ECDSA (using the `nistp256`, `nistp384`, `nistp521` curves) and Ed25519.
-
-:::note
-All users are automatically created with `sudo` capabilities.
-:::
-
-The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` and afterward running a `pause` container to avoid automatic restarts. While technically killing the Pod and letting it restart works for the (re-)creation of users, it doesn't automatically remove users. Thus, a node restart is required after making changes to the ConfigMap.
-
-When a user is deleted from the ConfigMap, it won't be re-created after the next restart of a node. The home directories of the affected users will be moved to `/var/evicted`.
-
-You can update the ConfigMap by:
-```bash
-kubectl edit configmap -n kube-system ssh-users
-```
-
-Or alternatively, by modifying and re-applying it with the definition listed in the examples.
-
-## Examples
-You can add a user `myuser` in `constellation-config.yaml` like this:
-
-```yaml
-# Create SSH users on Constellation nodes upon the first initialization of the cluster.
-sshUsers:
-  myuser: "ssh-rsa AAAA...mgNJd9jc="
-```
-
-This user is then created upon the first initialization of the cluster, and translated into a ConfigMap as shown below:
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: ssh-users
-  namespace: kube-system
-data:
-  myuser: "ssh-rsa AAAA...mgNJd9jc="
-```
-
-You can add users by adding `data` entries:
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: ssh-users
-  namespace: kube-system
-data:
-  myuser: "ssh-rsa AAAA...mgNJd9jc="
-  anotheruser: "ssh-ed25519 AAAA...CldH"
-```
-
-Similarly, removing any entries causes users to be evicted upon the next restart of the node.

diff --git a/docs/versioned_docs/version-2.2/workflows/troubleshooting.md b/docs/versioned_docs/version-2.2/workflows/troubleshooting.md
index 0cf87db0c..8ac5b5dc9 100644
--- a/docs/versioned_docs/version-2.2/workflows/troubleshooting.md
+++ b/docs/versioned_docs/version-2.2/workflows/troubleshooting.md
@@ -45,3 +45,31 @@ Constellation uses the default bucket to store logs. Its [default retention peri
+
+## Connect to nodes via SSH
+
+Debugging via a shell on a node is [directly supported by Kubernetes](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#node-shell-session).
+
+1. Figure out which node to connect to:
+
+   ```sh
+   kubectl get nodes
+   # or to see more information, such as IPs:
+   kubectl get nodes -o wide
+   ```
+
+2. Connect to the node:
+
+   ```sh
+   kubectl debug node/constell-worker-xksa0-000000 -it --image=busybox
+   ```
+
+   You will be presented with a prompt.
+
+   The node's file system is mounted at `/host`.
+
+3. Once finished, clean up the debug pod:
+
+   ```sh
+   kubectl delete pod node-debugger-constell-worker-xksa0-000000-bjthj
+   ```
diff --git a/docs/versioned_sidebars/version-2.2-sidebars.json b/docs/versioned_sidebars/version-2.2-sidebars.json
index dc0e232d0..ff4ca06bb 100644
--- a/docs/versioned_sidebars/version-2.2-sidebars.json
+++ b/docs/versioned_sidebars/version-2.2-sidebars.json
@@ -135,11 +135,6 @@
         "label": "Verify your cluster",
         "id": "workflows/verify-cluster"
       },
-      {
-        "type": "doc",
-        "label": "Manage SSH keys",
-        "id": "workflows/ssh"
-      },
       {
         "type": "doc",
         "label": "Use persistent storage",