docs: publish access manager removal

Thomas Tendyck 2022-11-11 16:23:06 +01:00 committed by Thomas Tendyck
parent 5009de823f
commit cf82794b1d
5 changed files with 29 additions and 72 deletions

@ -68,7 +68,7 @@ Debugging via a shell on a node is [directly supported by Kubernetes](https://ku
The node's file system is mounted at `/host`.
3. Once finished, clean up the debug pod:
```sh
kubectl delete pod node-debugger-constell-worker-xksa0-000000-bjthj
```

@ -8,7 +8,6 @@ These features are provided by several components:
* The [JoinService](components.md#joinservice) joins new nodes to an existing cluster
* The [VerificationService](components.md#verificationservice) provides remote attestation functionality
* The [Key Management Service (KMS)](components.md#kms) manages Constellation-internal keys
* The [AccessManager](components.md#accessmanager) manages node SSH access
The relations between components are shown in the following diagram:
@ -22,7 +21,6 @@ flowchart LR
C[Bootstrapper]
end
subgraph Kubernetes
D[AccessManager]
E[JoinService]
F[KMS]
G[VerificationService]
@ -74,8 +72,3 @@ Read more about the hardware-based [attestation feature](attestation.md) of Cons
The *KMS* runs as a DaemonSet on each control-plane node.
It implements the key management for the [storage encryption keys](keys.md#storage-encryption) in Constellation. These keys are used for the [state disk](images.md#state-disk) of each node and the [transparently encrypted storage](encrypted-storage.md) for Kubernetes.
Depending on whether the [constellation-managed](keys.md#constellation-managed-key-management) or [user-managed](keys.md#user-managed-key-management) mode is used, the *KMS* either holds the key encryption key (KEK) directly or calls an external service for key derivation.
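For a quick look at these services on a running cluster, you can list the DaemonSets in `kube-system`; a minimal sketch (exact resource names may vary between Constellation versions):
```sh
# List the Constellation services deployed as DaemonSets
kubectl get daemonsets -n kube-system
# See which nodes the pods are scheduled on
kubectl get pods -n kube-system -o wide
```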
## AccessManager
The *AccessManager* runs as a DaemonSet on each node.
It manages the users' SSH access to the nodes as specified in the config.

@ -1,59 +0,0 @@
# Manage SSH keys
Constellation allows you to create UNIX users that can connect to both control-plane and worker nodes over SSH. As the system partitions are read-only, users need to be re-created upon each restart of a node. This is automated by the *Access Manager*.
On cluster initialization, users defined in the `ssh-users` section of the Constellation configuration file are created and stored in the `ssh-users` ConfigMap in the `kube-system` namespace. For a running cluster, you can add or remove users by modifying the ConfigMap and restarting a node.
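To inspect what was created, you can display the resulting ConfigMap (names as described above):
```sh
# Show the ConfigMap holding the SSH users
kubectl get configmap ssh-users -n kube-system -o yaml
```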
## Access Manager
The Access Manager supports all OpenSSH key types. These are RSA, ECDSA (using the `nistp256`, `nistp384`, `nistp521` curves) and Ed25519.
:::note
All users are automatically created with `sudo` capabilities.
:::
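If you don't have a key of a supported type yet, you can generate one locally, for example an Ed25519 key (the file path is just an example):
```sh
# Generate an Ed25519 key pair (one of the supported types)
ssh-keygen -t ed25519 -f ~/.ssh/constellation_ed25519
# Print the public key for use in the Constellation config
cat ~/.ssh/constellation_ed25519.pub
```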
The Access Manager is deployed as a DaemonSet called `constellation-access-manager`, running as an `initContainer` followed by a `pause` container to avoid automatic restarts. While killing the Pod and letting it restart technically works for (re-)creating users, it doesn't remove deleted users. Thus, a node restart is required after making changes to the ConfigMap.
When a user is deleted from the ConfigMap, they won't be re-created after the next restart of a node. The home directories of the affected users are moved to `/var/evicted`.
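To verify the DaemonSet and, after a restart, check the evicted home directories, a sketch using the names from this page (the node name is an example):
```sh
# Verify the Access Manager DaemonSet is running
kubectl get daemonset constellation-access-manager -n kube-system
# Peek at evicted home directories through a node debug pod
kubectl debug node/constell-worker-xksa0-000000 -it --image=busybox -- ls /host/var/evicted
```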
You can update the ConfigMap by:
```bash
kubectl edit configmap -n kube-system ssh-users
```
Alternatively, you can modify the ConfigMap definition listed in the examples and re-apply it, as sketched below.
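A minimal sketch of that route, assuming you saved the definition to a hypothetical file `ssh-users-configmap.yaml`:
```sh
# Re-apply the modified ConfigMap definition
kubectl apply -f ssh-users-configmap.yaml
```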
## Examples
You can add a user `myuser` in `constellation-config.yaml` like this:
```yaml
# Create SSH users on Constellation nodes upon the first initialization of the cluster.
sshUsers:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
```
This user is then created upon the first initialization of the cluster, and translated into a ConfigMap as shown below:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-users
  namespace: kube-system
data:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
```
You can add users by adding `data` entries:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-users
  namespace: kube-system
data:
  myuser: "ssh-rsa AAAA...mgNJd9jc="
  anotheruser: "ssh-ed25519 AAAA...CldH"
```
Similarly, removing any entries causes users to be evicted upon the next restart of the node.
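For example, to remove the `anotheruser` entry shown above without opening an editor, a JSON patch works as well; a sketch:
```sh
# Remove a user entry; the user is evicted after the node restarts
kubectl patch configmap ssh-users -n kube-system --type=json \
  -p '[{"op": "remove", "path": "/data/anotheruser"}]'
```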

@ -45,3 +45,31 @@ Constellation uses the default bucket to store logs. Its [default retention peri
</tabItem>
</tabs>
## Connect to nodes via SSH
Debugging via a shell on a node is [directly supported by Kubernetes](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#node-shell-session).
1. Figure out which node to connect to:
```sh
kubectl get nodes
# or to see more information, such as IPs:
kubectl get nodes -o wide
```
2. Connect to the node:
```sh
kubectl debug node/constell-worker-xksa0-000000 -it --image=busybox
```
You will be presented with a prompt.
The node's file system is mounted at `/host`. A few typical first commands are sketched after these steps.
3. Once finished, clean up the debug pod:
```sh
kubectl delete pod node-debugger-constell-worker-xksa0-000000-bjthj
```
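Once inside the debug pod, a couple of typical first commands; a sketch, assuming the BusyBox shell started in step 2:
```sh
# Inspect the node's file system, which is mounted at /host
ls /host/var/log
# Optionally switch into the node's own root file system
chroot /host
```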

@ -135,11 +135,6 @@
"label": "Verify your cluster", "label": "Verify your cluster",
"id": "workflows/verify-cluster" "id": "workflows/verify-cluster"
}, },
{
"type": "doc",
"label": "Manage SSH keys",
"id": "workflows/ssh"
},
{
"type": "doc",
"label": "Use persistent storage",