docs: add release v2.14.0 (#2734)

Co-authored-by: burgerdev <burgerdev@users.noreply.github.com>
Co-authored-by: Adrian Stobbe <stobbe.adrian@gmail.com>
edgelessci 2023-12-19 17:05:40 +01:00 committed by GitHub
parent ae6b22a143
commit 6b2c00693c
73 changed files with 8573 additions and 4 deletions

@@ -0,0 +1,22 @@
# Emojivoto
[Emojivoto](https://github.com/BuoyantIO/emojivoto) is a simple and fun application that's well suited to test the basic functionality of your cluster.
<!-- vale off -->
<img src={require("../../_media/example-emojivoto.jpg").default} alt="emojivoto - Web UI" width="552"/>
<!-- vale on -->
1. Deploy the application:
```bash
kubectl apply -k github.com/BuoyantIO/emojivoto/kustomize/deployment
```
2. Wait until it becomes available:
```bash
kubectl wait --for=condition=available --timeout=60s -n emojivoto --all deployments
```
3. Forward the web service to your machine:
```bash
kubectl -n emojivoto port-forward svc/web-svc 8080:80
```
4. Visit [http://localhost:8080](http://localhost:8080)
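As a quick sanity check before opening the browser, you can curl the forwarded port (assuming the port-forward from step 3 is still running in another terminal):
```bash
# Returns exit code 0 and prints a message if the web UI responds
curl -sf -o /dev/null http://localhost:8080 && echo "emojivoto is reachable"
```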

@@ -0,0 +1,107 @@
# Deploying Filestash
Filestash is a web frontend for different storage backends, including S3.
It's a useful application to showcase s3proxy in action.
1. Deploy s3proxy as described in [Deployment](../../workflows/s3proxy.md#deployment).
2. Create a deployment file for Filestash with one pod:
```sh
cat << EOF > "deployment-filestash.yaml"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filestash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filestash
  template:
    metadata:
      labels:
        app: filestash
    spec:
      hostAliases:
        - ip: $(kubectl get svc s3proxy-service -o=jsonpath='{.spec.clusterIP}')
          hostnames:
            - "s3.us-east-1.amazonaws.com"
            - "s3.us-east-2.amazonaws.com"
            - "s3.us-west-1.amazonaws.com"
            - "s3.us-west-2.amazonaws.com"
            - "s3.eu-north-1.amazonaws.com"
            - "s3.eu-south-1.amazonaws.com"
            - "s3.eu-south-2.amazonaws.com"
            - "s3.eu-west-1.amazonaws.com"
            - "s3.eu-west-2.amazonaws.com"
            - "s3.eu-west-3.amazonaws.com"
            - "s3.eu-central-1.amazonaws.com"
            - "s3.eu-central-2.amazonaws.com"
            - "s3.ap-northeast-1.amazonaws.com"
            - "s3.ap-northeast-2.amazonaws.com"
            - "s3.ap-northeast-3.amazonaws.com"
            - "s3.ap-east-1.amazonaws.com"
            - "s3.ap-southeast-1.amazonaws.com"
            - "s3.ap-southeast-2.amazonaws.com"
            - "s3.ap-southeast-3.amazonaws.com"
            - "s3.ap-southeast-4.amazonaws.com"
            - "s3.ap-south-1.amazonaws.com"
            - "s3.ap-south-2.amazonaws.com"
            - "s3.me-south-1.amazonaws.com"
            - "s3.me-central-1.amazonaws.com"
            - "s3.il-central-1.amazonaws.com"
            - "s3.af-south-1.amazonaws.com"
            - "s3.ca-central-1.amazonaws.com"
            - "s3.sa-east-1.amazonaws.com"
      containers:
        - name: filestash
          image: machines/filestash:latest
          ports:
            - containerPort: 8334
          volumeMounts:
            - name: ca-cert
              mountPath: /etc/ssl/certs/kube-ca.crt
              subPath: kube-ca.crt
      volumes:
        - name: ca-cert
          secret:
            secretName: s3proxy-tls
            items:
              - key: ca.crt
                path: kube-ca.crt
EOF
```
The pod spec includes the `hostAliases` key, which adds an entry to the pod's `/etc/hosts`.
The entry forwards all requests for any of the currently defined AWS regions to the Kubernetes service `s3proxy-service`.
If you followed the s3proxy [Deployment](../../workflows/s3proxy.md#deployment) guide, this service points to an s3proxy pod.
The deployment specifies all regions explicitly to prevent accidental data leaks.
If one of your buckets is located in a region that's not part of the `hostAliases` key, traffic to that bucket isn't redirected to s3proxy.
Similarly, if you want to exclude data for specific regions from going through s3proxy, you can remove those regions from the deployment.
The spec also includes a volume mount for the TLS certificate and adds it to the pod's certificate trust store.
The volume is called `ca-cert`.
The key `ca.crt` of that volume is mounted to `/etc/ssl/certs/kube-ca.crt`, which is the default certificate trust store location for that container's OpenSSL library.
Not adding the CA certificate will result in TLS authentication errors.
3. Apply the file: `kubectl apply -f deployment-filestash.yaml`
Afterward, you can use a port forward to access the Filestash pod:
`kubectl port-forward pod/$(kubectl get pod --selector='app=filestash' -o=jsonpath='{.items[*].metadata.name}') 8334:8334`
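To confirm that both the host aliases and the CA certificate ended up in the pod, you can run a quick check inside the container (a minimal sketch, assuming the Filestash image ships a POSIX shell):
```bash
# Show the injected /etc/hosts entry and the mounted CA certificate
kubectl exec deploy/filestash -- sh -c \
  'grep amazonaws.com /etc/hosts && ls -l /etc/ssl/certs/kube-ca.crt'
```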
4. After browsing to `localhost:8334`, Filestash will ask you to set an administrator password.
After setting it, you can leave the admin area directly by clicking the blue cloud symbol in the top left corner.
Subsequently, you can select S3 as the storage backend and enter your credentials.
This will bring you to an overview of your buckets.
If you want to deploy Filestash in production, take a look at its [documentation](https://www.filestash.app/docs/).
5. To see the logs of s3proxy intercepting requests made to S3, run: `kubectl logs -f pod/$(kubectl get pod --selector='app=s3proxy' -o=jsonpath='{.items[*].metadata.name}')`
Look out for log messages labeled `intercepting`.
There is one such log entry for each request that's encrypted, decrypted, or blocked.
6. Once you've uploaded a file with Filestash, you can view and download it there as usual.
However, if you go to the AWS S3 [Web UI](https://s3.console.aws.amazon.com/s3/home) and download the file you just uploaded in Filestash, you won't be able to read it.
Another way to spot encrypted files without downloading them is to click on a file, scroll to the Metadata section, and look for the header named `x-amz-meta-constellation-encryption`.
This header holds the encrypted data encryption key of the object and is only present on objects that are encrypted by s3proxy.
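You can also inspect the header from the command line instead of the Web UI. The following is a hedged sketch using the AWS CLI; `<bucket>` and `<object-key>` are placeholders for your own values, and S3 returns `x-amz-meta-*` headers as entries of the `Metadata` map:
```bash
# Print the object's user-defined metadata; encrypted objects should show
# a "constellation-encryption" key
aws s3api head-object \
  --bucket <bucket> \
  --key <object-key> \
  --query Metadata
```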

@@ -0,0 +1,98 @@
# Horizontal Pod Autoscaling
This example demonstrates Constellation's autoscaling capabilities. It's based on the Kubernetes [HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). During the following steps, Constellation will spawn new VMs on demand, verify them, add them to the cluster, and delete them again when the load has settled down.
## Requirements
The cluster needs to be initialized with Kubernetes 1.23 or later. In addition, [autoscaling must be enabled](../../workflows/scale.md) so that Constellation can dynamically add new nodes.
For this example, the cluster should start with as few worker nodes as possible. Begin with a small cluster of only *one* low-powered control-plane node and *one* low-powered worker node.
:::info
We tested the example using the instance types `Standard_DC4as_v5` on Azure and `n2d-standard-4` on GCP.
:::
## Setup
1. Install the Kubernetes Metrics Server:
```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```
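The Metrics Server can take a minute to become ready. As a quick check, the following should print CPU and memory usage for your nodes once it's serving metrics:
```bash
kubectl top nodes
```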
2. Deploy the HPA example server that will be scaled under load.
This manifest is similar to the one from the Kubernetes HPA walkthrough, but with increased CPU requests and limits so that node scaling events trigger more easily.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
        - name: php-apache
          image: registry.k8s.io/hpa-example
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 900m
            requests:
              cpu: 600m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
    - port: 80
  selector:
    run: php-apache
EOF
```
3. Create a HorizontalPodAutoscaler.
Set the target average CPU usage across all Pods to 20% to see one additional worker node being created later. Note that this CPU usage isn't relative to the host's CPU, but to the requested CPU capacity (20% of the requested 600 milli-cores, averaged across all Pods). See the [original tutorial](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler) for more information on the HPA configuration.
```bash
kubectl autoscale deployment php-apache --cpu-percent=20 --min=1 --max=10
```
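If you prefer a declarative setup, the following manifest should be equivalent to the `kubectl autoscale` command above (a sketch assuming your cluster serves the `autoscaling/v2` API):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 20
EOF
```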
4. Create a Pod that generates load on the server:
```bash
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"
```
5. Wait a few minutes until new nodes are added to the cluster. You can [monitor](#monitoring) the state of the HorizontalPodAutoscaler, the list of nodes, and the behavior of the autoscaler.
6. To stop the load generator, press CTRL+C and run:
```bash
kubectl delete pod load-generator
```
7. The cluster-autoscaler checks every few minutes whether nodes are underutilized. It taints such candidates for removal and waits an additional 10 minutes before the nodes are eventually removed and deallocated. The whole process can take about 20 minutes.
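To spot nodes that have been tainted for removal, you can list each node's taints. The taint key `ToBeDeletedByClusterAutoscaler` is what the upstream cluster-autoscaler uses; treat the exact name as an assumption for your version:
```bash
# Lists every node together with the keys of its taints, if any
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```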
## Monitoring
:::tip
For better observability, run the listed commands in different tabs in your terminal.
:::
You can watch the status of the HorizontalPodAutoscaler, including the current CPU usage, the target CPU utilization, and the number of replicas:
```bash
kubectl get hpa php-apache --watch
```
From time to time, compare the list of nodes to check the autoscaler's behavior:
```bash
kubectl get nodes
```
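`kubectl get` also supports the `--watch` flag if you'd rather keep a live view of the node list than re-run the command:
```bash
kubectl get nodes --watch
```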
For deeper insights, see the logs of the autoscaler Pod, which contain more details about the scaling decision process:
```bash
kubectl logs -f deployment/constellation-cluster-autoscaler -n kube-system
```

@@ -0,0 +1,29 @@
# Online Boutique
[Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo) is an e-commerce demo application by Google consisting of 11 separate microservices. In this demo, Constellation automatically sets up a load balancer on your CSP, making it easy to expose services from your confidential cluster.
<!-- vale off -->
<img src={require("../../_media/example-online-boutique.jpg").default} alt="Online Boutique - Web UI" width="662"/>
<!-- vale on -->
1. Create a namespace:
```bash
kubectl create ns boutique
```
2. Deploy the application:
```bash
kubectl apply -n boutique -f https://github.com/GoogleCloudPlatform/microservices-demo/raw/main/release/kubernetes-manifests.yaml
```
3. Wait for all services to become available:
```bash
kubectl wait --for=condition=available --timeout=300s -n boutique --all deployments
```
4. Get the frontend's external IP address:
```shell-session
$ kubectl get service frontend-external -n boutique | awk '{print $4}'
EXTERNAL-IP
<your-ip>
```
(`<your-ip>` is a placeholder for the IP assigned by your CSP.)
5. Enter the IP from the result in your browser to browse the online shop.
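If you want just the address, for scripting or a quick reachability check, a jsonpath query avoids parsing the table output. Note that the `.ip` field is an assumption; some CSPs expose the load balancer under `.hostname` instead:
```bash
# Prints only the external IP of the frontend service
kubectl get service frontend-external -n boutique \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```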