mirror of
https://github.com/autistic-symposium/backend-and-orchestration-toolkit.git
synced 2025-06-08 15:02:55 -04:00
🛌 Commit progress before sleep break
This commit is contained in:
parent
119cc7f62c
commit
585ee80f5d
319 changed files with 29 additions and 23 deletions
1 code/kubernetes/python-cdk/.gitkeep Normal file
@@ -0,0 +1 @@
7 code/kubernetes/python-cdk/README.md Normal file
@@ -0,0 +1,7 @@
## examples using python cdk to orchestrate kubernetes

<br>

* [setting up a postgreSQL RDS with CDK](https://github.com/go-outside-labs/orchestration-toolkit/tree/master/kubernetes/python-cdk/python/PostgreSQL_example)
* [setting up a VPC with CDK](https://github.com/go-outside-labs/orchestration-toolkit/tree/master/kubernetes/python-cdk/python/VPC_example)
* [gitops, flux, and deploying services with CDK](https://github.com/go-outside-labs/orchestration-toolkit/blob/master/kubernetes/python-cdk/admin_guide_cdk.md)
416 code/kubernetes/python-cdk/admin_guide_cdk.md Normal file
@@ -0,0 +1,416 @@
# GitOps, Flux, and Deploying services with CDK

An introduction to some concepts and tools we will be using.

## What's **GitOps**

In general, there are two ways to deploy infrastructure changes:

- **Procedural way**: telling some tool what to do, e.g., Ansible. This is also known as a push model.
- **Declarative way**: telling some tool what you want to have done, also known as infrastructure as code, e.g., Terraform, Pulumi, CDK.

[GitOps](https://www.weave.works/technologies/gitops/) is a term coined by Weaveworks, and it works by using Git as the source of truth for declarative infrastructure and applications. Automated CI/CD pipelines roll out changes to your infrastructure after commits are pushed and approved in Git.

The GitOps methodology consists of describing the desired state of the system using a **declarative** specification for each environment (e.g., our Kubernetes cluster for a specific environment):

- A Git repo is the single source of truth for the desired state of the system.
- All changes to the desired state are Git commits.
- All specified properties of the cluster are also observable in the cluster, so that we can detect whether the desired and observed states are the same (converged) or different (diverged).

In GitOps you only push code. The developer interacts with the source control, which triggers the CI/CD tool (CircleCI), which in turn pushes the Docker image to the container registry (e.g., Docker Hub). You see the Docker image as an artifact.

To deploy that Docker image, you have a different config repository which contains the Kubernetes manifests. CircleCI sends a pull request, and when it is merged, a pod in the Kubernetes cluster pulls the image into the cluster (similar to `kubectl apply`, or even `helm update`). Everything is controlled through pull requests. You push code, not containers.

The pod referred to above runs a tool called [Flux](https://github.com/fluxcd/flux), which automatically ensures that the state of the cluster matches the config in Git. It uses an operator in the cluster to trigger deployments inside Kubernetes, which means you don't need a separate CircleCI deploy step. It monitors all relevant image repositories, detects new images, triggers deployments, and updates the desired running configuration based on that.

## Kubernetes

A Kubernetes cluster consists of a series of objects (a minimal manifest sketch follows this list):

- **Nodes**, which can be equated to servers, be they bare-metal or virtual machines running in a cloud.
- Nodes run **Pods**, which are collections of Docker containers. A Pod is the unit of deployment in Kubernetes. All containers in a Pod share the same network and can refer to each other as if they were running on the same host. The Kubernetes object responsible for launching and maintaining the desired number of Pods is called a **Deployment**.
- For Pods to communicate with other Pods, Kubernetes provides another kind of object called a **Service**.
- Services are tied to Deployments through **Selectors** and **Labels**, and are also exposed to external clients either by exposing a **NodePort** as a static port on each Kubernetes node or by creating a **LoadBalancer** object.
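To make these relationships concrete, here is a minimal sketch of a Deployment and a Service tied together by a label selector and exposed on a NodePort. The `hello` name, `nginx` image, and port numbers are illustrative assumptions, not anything from this repo:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello          # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.17
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello            # ties the Service to the Deployment's Pods
  ports:
    - port: 80
      targetPort: 80
```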
## Kustomize

Kustomize provides a **purely declarative approach** to configuration customization that adheres to and leverages the familiar and carefully designed Kubernetes API.

Kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. Kustomize targets Kubernetes: it understands and can patch Kubernetes-style API objects.

### How Kustomize works

For each service there are two directories, each with a `kustomization.yaml` file listing all the `yaml` files inside it (see the sketch after this list):

- `base/` - usually immutable.
- `overlay/` - where you add customizations and new code.
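As a sketch of this layout, assuming a hypothetical service whose base is just a deployment and a service (file names here are illustrative):

```
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
```

```
# overlay/staging/kustomization.yaml
bases:
  - ../../base
patchesStrategicMerge:
  - replica-patch.yaml
```

Running `kustomize build` on the overlay renders the base manifests with the overlay's patches applied, without ever editing the base files.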
---

# **Bootstrapping Services in an AWS EKS cluster**

## **Pre-requisites**

### **Install CLI tools**

- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
- [sops](https://github.com/mozilla/sops).
- [Kustomize](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/INSTALL.md).
- [fluxctl](https://www.weave.works/blog/install-fluxctl-and-manage-your-deployments-easily).

### **Get access to our AWS Cluster**

We spin up the clusters' resources using AWS CDK. This provisions a **developer EKS cluster** and an **MSK cluster**, together with the following resources: a dedicated **VPC**, a **VPN**, an **Elasticsearch cluster**, **CloudWatch dashboards**, and an **RDS Postgres instance configured for Hydra**.

The staging and dev clusters are already available for you in our AWS staging account. For full access you need:

- AWS credentials (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`).
- The VPN `.ovpn` file (can be downloaded from the dashboard) and the VPN client private key.
- A kubeconfig file.

However, if you would like to bootstrap an entirely new cluster, follow the instructions below.

## **Bootstrapping Step-by-step**

### **Update Kubeconfig**

Edit `./bootstrap/kubeconfig/aws-auth-configmap.yaml` with your account's `rolearn`.
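For reference, an EKS `aws-auth` ConfigMap generally has the shape sketched below; the role ARN is a placeholder to replace with your account's value:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account id>:role/<node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```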
Set the env variables:

    export REGION=<aws region>

Get the kubectl config:

    ./get_kubeconfig.sh

Remember, you can always change your kubeconfig context with:

    kubectl config use-context <context>

You can also use [kubectx](https://github.com/ahmetb/kubectx) for this.

### **Create Nginx ingress controller in the EKS cluster**

Create the Nginx ingress controller's namespaces, services, roles, deployments, etc. by running:

    kubectl apply -f ./bootstrap/nginx-ingress-alb/all-in-one.yaml

This is the output:

    namespace/kube-ingress created
    serviceaccount/nginx-ingress-controller created
    clusterrole.rbac.authorization.k8s.io/nginx-ingress-controller created
    role.rbac.authorization.k8s.io/nginx-ingress-controller created
    clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-controller created
    rolebinding.rbac.authorization.k8s.io/nginx-ingress-controller created
    service/nginx-default-backend created
    deployment.extensions/nginx-default-backend created
    configmap/ingress-nginx created
    service/ingress-nginx created
    deployment.extensions/ingress-nginx created
    priorityclass.scheduling.k8s.io/high-priority created

Check whether all the pods were created:

    kubectl get pods --namespace kube-ingress

The result should look like:

    NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
    kube-ingress   ingress-nginx-55966f5cf8-bpvwj          1/1     Running   0          7m53s
    kube-ingress   ingress-nginx-55966f5cf8-vssfl          1/1     Running   0          7m53s
    kube-ingress   ingress-nginx-55966f5cf8-xtkv9          1/1     Running   0          7m53s
    kube-ingress   nginx-default-backend-c4bbbc8b7-j5cnh   1/1     Running   0          7m57s

Check all the services created:

    kubectl get services --namespace kube-ingress

The result should look like:

    NAMESPACE      NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
    kube-ingress   ingress-nginx           NodePort    172.20.203.36   <none>        80:30080/TCP,443:30443/TCP   6m32s
    kube-ingress   nginx-default-backend   ClusterIP   172.20.128.3    <none>        80/TCP                       6m35s

Note that the `Service` type for the `ingress-nginx` service is `NodePort` and not `LoadBalancer`. We don't want AWS to create a new Load Balancer every time we recreate the ingress. Instead, we provision an ALB and send both HTTP and HTTPS traffic to a `Target Group` that targets port `30080` on the EKS worker nodes (which is the HTTP `nodePort` of the `ingress-nginx` Service); a sketch of the relevant stanza follows.
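The relevant part of that Service manifest looks roughly like this (the full `all-in-one.yaml` is not reproduced here); the `nodePort` values match the `80:30080` and `443:30443` mappings in the output above:

```
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: kube-ingress
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 30080   # static port the ALB target group points at
    - name: https
      port: 443
      nodePort: 30443
```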
### **Create a namespace in the EKS cluster**

This step is necessary in case you are creating an entirely new cluster namespace (i.e., if it's neither `dev` nor `staging`).

To add a new namespace, just follow the current examples in `/bootstrap/namespaces/overlays/` (a minimal sketch follows) and then apply the changes in the overlay:
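A minimal sketch of what such a namespace overlay can contain (file names modeled on the pattern above, not copied from the repo):

```
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
```

```
# kustomization.yaml
resources:
  - namespace.yaml
```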
    cd ./bootstrap/namespaces/overlays/<namespace>
    kustomize build . | kubectl apply -f -

You should see something like:

    namespace/<namespace> created
    namespace/logging created
    namespace/monitoring created
    namespace/observability created

Check whether it worked:

    kubectl get ns
### **Create secret for DockerHub credentials in EKS cluster**

All right, if you are working on an AWS account that is not staging, hold tight, because this step is a trip.

Currently, we use [sops](https://github.com/mozilla/sops) to manage secrets in Kubernetes.

There is a file named `./bootstrap/dockerhub-creds-secret/docker-hub.yaml` that holds the secret for the DockerHub credentials, and it's encrypted. So the first thing we need to do is decrypt it so we can use the secret for our cluster. The caveat is that you need to set your AWS creds to the `773713188930` (staging) account, or sops won't be able to grab the key to decrypt:

    sops -d docker-hub.yaml > dec.yaml

Take a look at `dec.yaml`; you will see something like this:

    apiVersion: v1
    type: kubernetes.io/dockerconfigjson
    kind: Secret
    metadata:
      name: docker-hub
    data:
      .dockerconfigjson: <Base64 1337 password>

Now, the next step is either to go to the [AWS KMS dashboard](https://us-east-2.console.aws.amazon.com/kms/home?region=us-east-2#/kms/keys) or run `aws kms create-key` to create a customer managed key.

KMS is a service that encrypts and decrypts data with AES_GCM, using keys that are never visible to users of the service. Each KMS master key has a set of role-based access controls, and individual roles are permitted to encrypt or decrypt using the master key. KMS helps solve the problem of distributing keys by shifting it into an access control problem that can be solved using AWS's trust model.

Once you have this ready, grab its ARN.

Create a new encrypted file with your new KMS key:

    sops --kms="ARN" --encrypt dec.yaml > docker-hub-<MY CLUSTER>.yaml

This Secret is created in several namespaces (default, monitoring, logging, flux-system).
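The reason this Secret has to exist in each of those namespaces is that any Pod pulling a private image references it from its own namespace. A minimal sketch of the relevant Pod spec fragment:

```
spec:
  imagePullSecrets:
    - name: docker-hub        # the Secret created above
  containers:
    - name: app
      image: <private dockerhub image>
```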
### **Apply overlay config for fluentd, jaeger-operator and prometheus-operator**

Follow the same procedure for each of these services: `./bootstrap/fluentd`, `./bootstrap/jaeger-operator`, and `./bootstrap/prometheus-operator`. Copy an overlay subdirectory for your namespace, replace `staging` with your namespace string everywhere it appears, and run:

    cd ./bootstrap/<service>/overlays/<namespace>
    kustomize build . | kubectl apply -f -

### **Install and configure Flux in EKS cluster**

This part is a little longer. [Here](https://docs.fluxcd.io/en/latest/tutorials/get-started-kustomize.html) is the official Flux documentation with Kustomize.

Flux (and memcached) are bootstrapped by following the instructions inside `bootstrap/flux/`. That directory should have the following structure:

    ├── base
    │   ├── flux
    │   └── memcached
    ├── overlays

The first step is creating an `overlay/<namespace>` directory for your deployment, similar to `overlay/staging`.

### **How Flux works**

Flux runs by looking at `./.flux.yaml` (a sketch follows this list). This calls `./generate_kustomize_output.sh` in a Docker container, which does the following:

1. Sets the environment (e.g. `staging`).
2. For each subdirectory in `kustomize/`, `cd`s into the environment's `overlays/` directory and runs `kustomize build`.
3. If there are `sops` secrets inside these directories, decrypts them as well.
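A sketch of such a `.flux.yaml`, using Flux's manifest-generation config format (the exact contents in this repo may differ):

```
version: 1
commandUpdated:
  generators:
    - command: ./generate_kustomize_output.sh
```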
### **Setting up Flux Docker image**

The default `Deployment` for Flux uses the `weaveworks/flux` Docker image.

You will need to push a Docker image to DockerHub for your namespace.

Once you have a Docker image in Docker Hub, grab its tag (e.g. `staging-af87bcc`).
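One way to point your overlay at that image is kustomize's `images` field; a sketch, where the image name is a placeholder rather than this repo's actual image:

```
# overlays/<namespace>/kustomization.yaml
images:
  - name: weaveworks/flux
    newName: <your dockerhub org>/flux
    newTag: staging-af87bcc
```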
### **Building and deploying**

Inside your overlay directory, run:

    cd bootstrap/flux/overlays/<namespace>
    kustomize build . | kubectl apply -f -

You should see the following:

    namespace/flux-system created
    serviceaccount/flux created
    podsecuritypolicy.policy/flux created
    role.rbac.authorization.k8s.io/flux created
    clusterrole.rbac.authorization.k8s.io/flux-psp created
    clusterrole.rbac.authorization.k8s.io/flux created
    clusterrole.rbac.authorization.k8s.io/flux-readonly created
    rolebinding.rbac.authorization.k8s.io/flux created
    clusterrolebinding.rbac.authorization.k8s.io/flux-psp created
    clusterrolebinding.rbac.authorization.k8s.io/flux created
    configmap/flux-kube-config-hmbbmcb469 created
    secret/flux-git-deploy created
    service/flux-memcached created
    deployment.apps/flux created
    deployment.apps/flux-memcached created

Wait for Flux and memcached to start:

    kubectl -n flux-system rollout status deployment.apps/flux

Check that the pods are up:

    kubectl get pods --namespace flux-system

You should see two pods, something like this:

    NAME                           READY   STATUS    RESTARTS   AGE
    flux-<some string>             1/1     Running   0          21m
    flux-memcached-<some string>   1/1     Running   0          60m

At any point you can debug your pod by running:

    kubectl describe pod flux-<some string> -n flux-system

### **Adding key to GitHub**

Generate a deployment key:

    fluxctl --k8s-fwd-ns=flux-system identity

Add the printed public key as a deploy key (with write access) in your Git repository's settings, so Flux can read from and push to the config repo.

Later on, when you have everything set, you can force a Flux `git pull` with:

    fluxctl sync --k8s-fwd-ns flux-system
### **Create k8s-developer-role in multiple namespaces in EKS cluster**

Similarly to the previous step, create an overlay for your namespace (e.g. dev) in the [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) kustomize resources. You can do this by copying the files from `./bootstrap/rbac/overlays/staging` and changing the namespace string from `staging` inside `k8s-developer-user.yaml`:

    ...
    metadata:
      name: k8s-developer-role
      namespace: <namespace>
    ...
    metadata:
      name: k8s-developer-rolebinding
      namespace: <namespace>
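For orientation, a namespaced developer Role and its RoleBinding generally have the shape below; the verbs, resources, and subject are assumptions for illustration, and the repo's `k8s-developer-user.yaml` defines the real ones:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k8s-developer-role
  namespace: <namespace>
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-developer-rolebinding
  namespace: <namespace>
subjects:
  - kind: User
    name: k8s-developer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k8s-developer-role
  apiGroup: rbac.authorization.k8s.io
```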
Apply the changes with:

    cd bootstrap/rbac/overlays/<namespace>
    kustomize build . | kubectl apply -f -

You should see the following:

    role.rbac.authorization.k8s.io/k8s-developer-role-default created
    role.rbac.authorization.k8s.io/k8s-developer-role created
    role.rbac.authorization.k8s.io/k8s-developer-role-monitoring created
    rolebinding.rbac.authorization.k8s.io/k8s-developer-rolebinding-default created
    rolebinding.rbac.authorization.k8s.io/k8s-developer-rolebinding created
    rolebinding.rbac.authorization.k8s.io/k8s-developer-rolebinding-monitoring created

---

# **Deploying Advanced services in an AWS EKS cluster**

## **Porting Hydra**

### **Customizing the overlay directory**

Inside `./kustomize/hydra`, create an `overlay/` subdirectory for your environment.

Create a KMS key (the same way as in the step *Create secret for DockerHub credentials in EKS cluster* in `./bootstrap`). Grab its ARN and add it to `./kustomize/hydra/overlays/.sops.yaml`.
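A `.sops.yaml` that pins an overlay's secrets to a KMS key looks roughly like this sketch (the regex and ARN are placeholders):

```
creation_rules:
  - path_regex: .*/<namespace>/\.sops/.*
    kms: "arn:aws:kms:<region>:<account id>:key/<key id>"
```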
Replace the `staging` string with your namespace (and set the correct host URLs) inside `kustomization.yaml` and `configmap.yaml`.

### **Creating sops secrets for Hydra**

We use sops to encrypt secret values for environment variables representing credentials, database connections, etc., so that Flux can pick up these secrets when it needs them.

We place these files inside a `.sops/` directory inside the overlay environment directory.

Grab the RDS Postgres data and create the secret string:

    echo -n "postgres://hydradbadmin:<hydra_password>@<hydra_db_endpoint>" > .sops/DATABASE_URL.enc
    sops -e -i .sops/DATABASE_URL.enc

Create the password salt:

    echo -n "<random_string>" > .sops/OIDC_SUBJECT_TYPE_PAIRWISE_SALT.enc
    sops -e -i .sops/OIDC_SUBJECT_TYPE_PAIRWISE_SALT.enc

Create the system secret:

    echo -n "<random_string>" > .sops/SYSTEM_SECRET.enc
    sops -e -i .sops/SYSTEM_SECRET.enc

Generate `secret.yaml`:

    npx --quiet --package @reactioncommerce/merge-sops-secrets@1.2.1 sops-to-secret secret-stub.yaml > secret.yaml

### **Building and applying**

Now, just run:

    cd ./kustomize/hydra/overlays/<namespace>
    kustomize build . | kubectl apply -f -

### **Create MongoDB database and user in Atlas**

Do this so that you have the MongoDB URL and MongoDB OPLOG URL for the next step.

### **Creating sops secrets**

Create the MongoDB URL secret:

    echo -n "<atlas url>" > .sops/MONGO_URL.enc
    sops -e -i .sops/MONGO_URL.enc

Create the MongoDB OPLOG URL secret:

    echo -n "<atlas oplog url>" > .sops/MONGO_OPLOG_URL.enc
    sops -e -i .sops/MONGO_OPLOG_URL.enc

Generate `secret.yaml`:

    npx --quiet --package @reactioncommerce/merge-sops-secrets@1.2.1 sops-to-secret secret-stub.yaml > secret.yaml

### **Building and applying**

Now, just run:

    cd ./kustomize/hydra/overlays/<namespace>
    kustomize build . | kubectl apply -f -

### **Testing pod**

    kubectl get pods -n test

Exec into the pod:

    kubectl exec -it <cdc-toolbox-HASH> -n test -- bash

## **Setting DNS Records**

### **Adding NS records**

First, add the nameserver records for `ENV.domain.io` in [Route53](https://console.aws.amazon.com/route53/home?region=us-east-2#hosted-zones:).

### **Adding Certificate**

You might have to add a new certificate for `*.ENV.domain.io` to [ACM](https://us-east-2.console.aws.amazon.com/acm/home?region=us-east-2#/), then add its record in Route53 (as a CNAME), and associate it with the load balancer.

In the [load balancer dashboard](https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#LoadBalancers:sort=loadBalancerName), go to listeners and make sure `HTTPS : 443` uses that certificate. Make sure the load balancer has the correct security groups.

### **Add all aliases**

Then add all the URLs above as IPv4 aliases pointing to the load balancer.
70 code/kubernetes/python-cdk/python/PostgreSQL_example/README.md Normal file
@@ -0,0 +1,70 @@
# Setting up a PostgreSQL RDS with CDK in Python

### Create a virtual environment and install dependencies:

```
virtualenv .env
source .env/bin/activate
pip3 install -r requirements.txt
```

### Define Your RDS DB

Add any constant variables to `cdk.json` and then define how you want your RDS instance in `postgre_sql_example/postgre_sql_example_stack.py`:

```
class PostgreSqlExampleStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Database instance
        instance = rds.DatabaseInstance(self,
            'examplepostgresdbinstance',
            master_username=master_username,
            engine=rds.DatabaseInstanceEngine.POSTGRES,
            instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
            vpc=self.vpc,
            auto_minor_version_upgrade=auto_minor_version_upgrade,
            availability_zone=availability_zone,
            database_name=database_name,
            enable_performance_insights=enable_performance_insights,
            storage_encrypted=storage_encrypted,
            multi_az=multi_az,
            backup_retention=backup_retention,
            monitoring_interval=monitoring_interval,
        )
```

### Create synthesized CloudFormation templates

```
cdk synth
```

You can check what changes are introduced into your current AWS resources with:

```
cdk diff --profile <AWS PROFILE>
```

### Deploy to AWS

If everything looks OK, deploy with:

```
cdk deploy --profile <AWS PROFILE>
```

To check all the stacks in the app:

```
cdk ls
```

### Clean up

To destroy/remove all the newly created resources, run:

```
cdk destroy --profile <AWS PROFILE>
```
11 code/kubernetes/python-cdk/python/PostgreSQL_example/app.py Normal file
@@ -0,0 +1,11 @@
#!/usr/bin/env python3

from aws_cdk import core

from postgre_sql_example.postgre_sql_example_stack import PostgreSqlExampleStack


app = core.App()
PostgreSqlExampleStack(app, "postgre-sql-example")

app.synth()
16 code/kubernetes/python-cdk/python/PostgreSQL_example/cdk.json Normal file
@@ -0,0 +1,16 @@
{
    "app": "python3 app.py",
    "context": {
        "rds.auto_minor_version_upgrade": false,
        "rds.availability_zone": "<availability zone>",
        "rds.backup_retention": "<days>",
        "rds.database_name": "postgres_db",
        "rds.enable_performance_insights": true,
        "rds.master_username": "postgres",
        "rds.monitoring_interval": 60,
        "rds.multi_az": false,
        "rds.storage_encrypted": false,
        "vpc.cidr": "10.0.0.0/16",
        "vpc.max_azs": "<max azs>"
    }
}
59 code/kubernetes/python-cdk/python/PostgreSQL_example/postgre_sql_example/postgre_sql_example_stack.py Normal file
@@ -0,0 +1,59 @@
import json
import sys

from aws_cdk import (
    aws_ec2 as ec2,
    aws_rds as rds,
    core as core,
)


# Python CDK does not have get_context yet.
def _get_context():
    CONTEXT_FILE = 'cdk.json'
    try:
        with open(CONTEXT_FILE, 'r') as f:
            return json.load(f)['context']
    except IOError:
        print('Could not open context file {}. Exiting...'.format(CONTEXT_FILE))
        sys.exit(1)
    except KeyError as e:
        print('Context file {0} is misconfigured {1}. Exiting...'.format(CONTEXT_FILE, e))
        sys.exit(1)


class PostgreSqlExampleStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Grab variables from cdk.json.
        context = _get_context()
        auto_minor_version_upgrade = context["rds.auto_minor_version_upgrade"]
        availability_zone = context["rds.availability_zone"]
        backup_retention = core.Duration.days(context["rds.backup_retention"])
        database_name = context["rds.database_name"]
        enable_performance_insights = context["rds.enable_performance_insights"]
        master_username = context["rds.master_username"]
        monitoring_interval = core.Duration.seconds(context["rds.monitoring_interval"])
        multi_az = context["rds.multi_az"]
        storage_encrypted = context["rds.storage_encrypted"]
        cidr = context["vpc.cidr"]
        max_azs = context["vpc.max_azs"]

        # Set the VPC the database will live in.
        self.vpc = ec2.Vpc(self, "VPCTest", cidr=cidr, max_azs=max_azs)

        # Database instance.
        instance = rds.DatabaseInstance(self,
            'storefrontrdspostgresdbinstance',
            master_username=master_username,
            engine=rds.DatabaseInstanceEngine.POSTGRES,
            instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
            vpc=self.vpc,
            auto_minor_version_upgrade=auto_minor_version_upgrade,
            availability_zone=availability_zone,
            database_name=database_name,
            enable_performance_insights=enable_performance_insights,
            storage_encrypted=storage_encrypted,
            multi_az=multi_az,
            backup_retention=backup_retention,
            monitoring_interval=monitoring_interval,
        )
34 code/kubernetes/python-cdk/python/PostgreSQL_example/requirements.txt Normal file
@@ -0,0 +1,34 @@
astroid==2.2.5
attrs==19.1.0
aws-cdk.assets==1.10.0
aws-cdk.aws-cloudwatch==1.10.0
aws-cdk.aws-ec2==1.10.0
aws-cdk.aws-events==1.10.0
aws-cdk.aws-iam==1.10.0
aws-cdk.aws-kms==1.10.0
aws-cdk.aws-lambda==1.10.0
aws-cdk.aws-logs==1.10.0
aws-cdk.aws-rds==1.10.0
aws-cdk.aws-s3==1.10.0
aws-cdk.aws-s3-assets==1.10.0
aws-cdk.aws-sam==1.10.0
aws-cdk.aws-secretsmanager==1.10.0
aws-cdk.aws-sqs==1.10.0
aws-cdk.aws-ssm==1.10.0
aws-cdk.core==1.10.0
aws-cdk.cx-api==1.10.0
aws-cdk.region-info==1.10.0
cattrs==0.9.0
isort==4.3.21
jsii==0.17.1
lazy-object-proxy==1.4.2
mccabe==0.6.1
pep8==1.7.1
publication==0.0.3
pylint==2.3.1
python-dateutil==2.8.0
six==1.12.0
typed-ast==1.4.0
typing-extensions==3.7.4
virtualenv==16.7.4
wrapt==1.11.2
45 code/kubernetes/python-cdk/python/PostgreSQL_example/setup.py Normal file
@@ -0,0 +1,45 @@
import setuptools


with open("README.md") as fp:
    long_description = fp.read()


setuptools.setup(
    name="postgre_sql_example",
    version="0.0.1",

    description="A postgres CDK Python example",
    long_description=long_description,
    long_description_content_type="text/markdown",

    author="author",

    package_dir={"": "postgre_sql_example"},
    packages=setuptools.find_packages(where="postgre_sql_example"),

    install_requires=[
        "aws-cdk.core",
    ],

    python_requires=">=3.6",

    classifiers=[
        "Development Status :: 4 - Beta",

        "Intended Audience :: Developers",

        "License :: OSI Approved :: Apache Software License",

        "Programming Language :: JavaScript",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",

        "Topic :: Software Development :: Code Generators",
        "Topic :: Utilities",

        "Typing :: Typed",
    ],
)
69 code/kubernetes/python-cdk/python/VPC_example/README.md Normal file
@@ -0,0 +1,69 @@
# Setting up a VPC with CDK in Python

[AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a very neat way to write infrastructure as code, enabling you to create and provision AWS infrastructure deployments predictably and repeatedly.

You choose your favorite language to code the resources (stacks) you want, and CDK synthesizes them to CloudFormation and helps you deploy them to AWS.

In this example we see how to set up a VPC using CDK in Python.

### Install AWS CDK

Follow [these instructions](https://github.com/aws/aws-cdk#at-a-glance).

### Create a virtual environment and install dependencies:

```
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
```

Note: if you are starting from a blank project with `cdk init app --language=python` instead, you will need to install resources manually, such as `pip install aws_cdk.aws_ec2`.

### Define Your VPC

Define how you want your VPC in the file `vpc_example/vpc_example_stack.py`:

```
class VpcExampleStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        vpc = aws_ec2.Vpc(self, "MiaVPCTest", cidr="10.0.0.0/16", max_azs=3)
```

### Create synthesized CloudFormation template

```
cdk synth
```

You can check what changes this introduces into your AWS account:

```
cdk diff --profile <AWS PROFILE>
```

### Deploy to AWS

If everything looks right, deploy:

```
cdk deploy --profile <AWS PROFILE>
```

To check all the stacks in the app:

```
cdk ls
```

### Clean up

To destroy/remove all the newly created resources, run:

```
cdk destroy --profile <AWS PROFILE>
```
11 code/kubernetes/python-cdk/python/VPC_example/app.py Normal file
@@ -0,0 +1,11 @@
#!/usr/bin/env python3

from aws_cdk import core

from vpc_example.vpc_example_stack import VpcExampleStack


app = core.App()
VpcExampleStack(app, "vpc-example")

app.synth()
3 code/kubernetes/python-cdk/python/VPC_example/cdk.json Normal file
@@ -0,0 +1,3 @@
{
    "app": "python3 app.py"
}
BIN code/kubernetes/python-cdk/python/VPC_example/imgs/vpc.png Normal file
Binary file not shown. (Size: 771 KiB)
14 code/kubernetes/python-cdk/python/VPC_example/requirements.txt Normal file
@@ -0,0 +1,14 @@
attrs==19.1.0
aws-cdk.aws-cloudwatch==1.10.0
aws-cdk.aws-ec2==1.10.0
aws-cdk.aws-iam==1.10.0
aws-cdk.aws-ssm==1.10.0
aws-cdk.core==1.10.0
aws-cdk.cx-api==1.10.0
aws-cdk.region-info==1.10.0
cattrs==0.9.0
jsii==0.17.1
publication==0.0.3
python-dateutil==2.8.0
six==1.12.0
typing-extensions==3.7.4
45 code/kubernetes/python-cdk/python/VPC_example/setup.py Normal file
@@ -0,0 +1,45 @@
import setuptools


with open("README.md") as fp:
    long_description = fp.read()


setuptools.setup(
    name="vpc_example",
    version="0.0.1",

    description="A VPC CDK Python example",
    long_description=long_description,
    long_description_content_type="text/markdown",

    author="author",

    package_dir={"": "vpc_example"},
    packages=setuptools.find_packages(where="vpc_example"),

    install_requires=[
        "aws-cdk.core",
    ],

    python_requires=">=3.6",

    classifiers=[
        "Development Status :: 4 - Beta",

        "Intended Audience :: Developers",

        "License :: OSI Approved :: Apache Software License",

        "Programming Language :: JavaScript",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",

        "Topic :: Software Development :: Code Generators",
        "Topic :: Utilities",

        "Typing :: Typed",
    ],
)
8 code/kubernetes/python-cdk/python/VPC_example/vpc_example/vpc_example_stack.py Normal file
@@ -0,0 +1,8 @@
from aws_cdk import core, aws_ec2

class VpcExampleStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        vpc = aws_ec2.Vpc(self, "MiaVPCTest", cidr="10.0.0.0/16", max_azs=3)
136 code/kubernetes/python-cdk/ts/.gitignore vendored Normal file
@@ -0,0 +1,136 @@
# CDK
*.js
!jest.config.js
*.d.ts
node_modules
.cdk.staging
cdk.out

# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.idea
.DS_Store

# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage
*.lcov

# nyc test coverage
.nyc_output

# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# Bower dependency directory (https://bower.io/)
bower_components

# node-waf configuration
.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release

# Dependency directories
node_modules/
jspm_packages/

# TypeScript v1 declaration files
typings/

# TypeScript cache
*.tsbuildinfo

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variables file
.env
.env.test
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# parcel-bundler cache (https://parceljs.org/)
.cache

# next.js build output
.next

# nuxt.js build output
.nuxt

# vuepress build output
.vuepress/dist

# Serverless directories
.serverless/

# FuseBox cache
.fusebox/

# DynamoDB Local files
.dynamodb/

# Kubernetes kubectl configuration
kubeconfig

# CDK
cdk.out/

# Python
__pycache__/
*.py[cod]
*$py.class
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg

# Certs
*.key
*.crt
easy-rsa/
61 code/kubernetes/python-cdk/ts/MSK_example/README.md Normal file
@@ -0,0 +1,61 @@
# CDK MSK Example

### Deploy VPC

[Amazon VPC](https://aws.amazon.com/vpc/) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.

These are the default values in `cdk.json`:

```
"vpc.cidr": "10.0.0.0/16",
"vpc.maxAzs": 3,
```

Deploy with:

```
cdk deploy VPCStack
```

### Deploy MSK

[Amazon MSK](https://aws.amazon.com/msk/) is a managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data.

These are the default values in `cdk.json`:

```
"msk.DevMskCluster": "MskCluster",
"msk.ClusterTag": "MSK cluster",
"msk.brokerNodeGroupBrokerAzDistribution": "DEFAULT",
"msk.enhancedMonitoring": "PER_BROKER",
"msk.brokerNodeGroupEBSVolumeSize": 100,
"msk.brokerNodeGroupInstanceType": "kafka.m5.large",
"msk.brokerPort": 9092,
"msk.kafkaVersion": "2.2.1",
"msk.numberOfBrokerNodes": 3
```

Deploy with:

```
cdk deploy MskClusterStack
```

#### Kafka CLI

Note that the CLI commands for MSK are given by the keyword `kafka`, for example:

```
aws kafka list-clusters
```

To retrieve `BootstrapBrokerStringTls`, run:

```
aws kafka get-bootstrap-brokers --cluster-arn <cluster ARN>
```

However, access and development within the cluster (e.g. creating topics, accessing brokers) needs to be done while connected to the VPN.
15 code/kubernetes/python-cdk/ts/MSK_example/bin/example.ts Normal file
@@ -0,0 +1,15 @@
#!/usr/bin/env node
import 'source-map-support/register';
import cdk = require('@aws-cdk/core');

import { VPCStack } from "../lib/Vpc";
import { MskClusterStack } from "../lib/MskCluster";

const app = new cdk.App();
const app_env = {
    region: "<account region>",
    account: "<account number>"
};

const vpcStack = new VPCStack(app, 'VPCStack', {env: app_env});
new MskClusterStack(app, 'MskClusterStack', {env: app_env, vpc: vpcStack.Vpc});
7 code/kubernetes/python-cdk/ts/MSK_example/cdk.context.json Normal file
@@ -0,0 +1,7 @@
{
    "availability-zones:account=<ACCOUNT NUMBER>:region=us-west-1": [
        "us-west-1a",
        "us-west-1b",
        "us-west-1c"
    ]
}
17 code/kubernetes/python-cdk/ts/MSK_example/cdk.json Normal file
@@ -0,0 +1,17 @@
{
    "app": "npx ts-node bin/example.ts",
    "context": {
        "env.type": "dev",
        "vpc.cidr": "10.0.0.0/16",
        "vpc.maxAzs": 3,
        "msk.DevMskCluster": "MskCluster",
        "msk.ClusterTag": "MSK cluster",
        "msk.brokerNodeGroupBrokerAzDistribution": "DEFAULT",
        "msk.enhancedMonitoring": "PER_BROKER",
        "msk.brokerNodeGroupEBSVolumeSize": 100,
        "msk.brokerNodeGroupInstanceType": "kafka.m5.large",
        "msk.brokerPort": 9092,
        "msk.kafkaVersion": "2.2.1",
        "msk.numberOfBrokerNodes": 3
    }
}
75 code/kubernetes/python-cdk/ts/MSK_example/lib/MskCluster.ts Normal file
@@ -0,0 +1,75 @@
import cdk = require("@aws-cdk/core");
import ec2 = require("@aws-cdk/aws-ec2");
import msk = require("@aws-cdk/aws-msk");


interface MSKStackProps extends cdk.StackProps {
    vpc: ec2.IVpc;
}

export class MskClusterStack extends cdk.Stack {
    private vpc: ec2.IVpc;

    constructor(scope: cdk.Construct, id: string, props?: MSKStackProps) {
        super(scope, id, props);
        const current_env = this.node.tryGetContext("env.type");

        //****************************** Context variables ******************************//
        const clusterName = this.node.tryGetContext("msk.DevMskCluster");
        const clusterTag = this.node.tryGetContext("msk.ClusterTag");
        const brokerNodeGroupBrokerAzDistribution = this.node.tryGetContext("msk.brokerNodeGroupBrokerAzDistribution");
        const brokerNodeGroupEBSVolumeSize = this.node.tryGetContext("msk.brokerNodeGroupEBSVolumeSize");
        const brokerNodeGroupInstanceType = this.node.tryGetContext("msk.brokerNodeGroupInstanceType");
        const brokerPort = this.node.tryGetContext("msk.brokerPort");
        const kafkaVersion = this.node.tryGetContext("msk.kafkaVersion");
        const numberOfBrokerNodes = this.node.tryGetContext("msk.numberOfBrokerNodes");
        const enhancedMonitoring = this.node.tryGetContext("msk.enhancedMonitoring");

        //****************************** VPC ********************************************//
        // Use the VPC passed in through props; otherwise look it up by name.
        if (props)
            this.vpc = props.vpc;
        else
            this.vpc = ec2.Vpc.fromLookup(this, current_env + "Vpc", {
                vpcName: "VPCStack/" + current_env + "Vpc"
            });

        //****************************** Security group *********************************//
        const description = "Allow access to " + current_env + " MSK Cluster";
        const SecurityGroup = new ec2.SecurityGroup(
            this,
            current_env + "MskClusterSG",
            {
                vpc: this.vpc,
                securityGroupName: current_env + "MskClusterSG",
                description: description,
                allowAllOutbound: true
            }
        );
        SecurityGroup.addIngressRule(
            ec2.Peer.anyIpv4(),
            ec2.Port.tcp(brokerPort),
            description
        );

        //****************************** MSK Cluster ************************************//
        const cluster = new msk.CfnCluster(this, "MskCluster", {
            brokerNodeGroupInfo: {
                clientSubnets: this.vpc.privateSubnets.map(x => x.subnetId),
                instanceType: brokerNodeGroupInstanceType,
                brokerAzDistribution: brokerNodeGroupBrokerAzDistribution,
                storageInfo: {
                    ebsStorageInfo: {
                        volumeSize: brokerNodeGroupEBSVolumeSize
                    }
                }
            },
            clusterName: clusterName,
            kafkaVersion: kafkaVersion,
            numberOfBrokerNodes: numberOfBrokerNodes,
            enhancedMonitoring: enhancedMonitoring,
            tags: {
                name: current_env + clusterTag,
            }
        });
    }
}
19 code/kubernetes/python-cdk/ts/MSK_example/lib/Vpc.ts Normal file
@@ -0,0 +1,19 @@
import cdk = require('@aws-cdk/core');
import ec2 = require("@aws-cdk/aws-ec2");

export class VPCStack extends cdk.Stack {
    readonly Vpc: ec2.IVpc;

    constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
        super(scope, id, props);
        const current_env = this.node.tryGetContext("env.type");

        const vpc_cidr = this.node.tryGetContext("vpc.cidr");
        const vpc_maxAzs = this.node.tryGetContext("vpc.maxAzs");
        const vpc = new ec2.Vpc(this, current_env + "Vpc", {
            cidr: vpc_cidr,
            maxAzs: vpc_maxAzs
        });
        this.Vpc = vpc;
    }
}
3206 code/kubernetes/python-cdk/ts/MSK_example/package-lock.json generated Normal file
File diff suppressed because it is too large.
28 code/kubernetes/python-cdk/ts/MSK_example/package.json Normal file
@@ -0,0 +1,28 @@
{
    "name": "dev",
    "version": "0.1.0",
    "bin": {
        "dev": "bin/dev.js"
    },
    "scripts": {
        "build": "tsc",
        "watch": "tsc -w",
        "test": "jest",
        "cdk": "cdk"
    },
    "devDependencies": {
        "@aws-cdk/assert": "^2.68.0",
        "@types/jest": "^24.0.18",
        "aws-cdk": "^1.176.0",
        "jest": "^29.6.1",
        "ts-jest": "^29.0.3",
        "ts-node": "^8.4.1",
        "typescript": "~3.6.4"
    },
    "dependencies": {
        "@aws-cdk/aws-ec2": "^1.12.0",
        "@aws-cdk/aws-msk": "^1.12.0",
        "@aws-cdk/core": "^1.12.0",
        "source-map-support": "^0.5.13"
    }
}
23 code/kubernetes/python-cdk/ts/MSK_example/tsconfig.json Normal file
@@ -0,0 +1,23 @@
{
    "compilerOptions": {
        "target": "ES2018",
        "module": "commonjs",
        "lib": ["es2016", "es2017.object", "es2017.string"],
        "declaration": true,
        "strict": true,
        "noImplicitAny": true,
        "strictNullChecks": true,
        "noImplicitThis": true,
        "alwaysStrict": true,
        "noUnusedLocals": false,
        "noUnusedParameters": false,
        "noImplicitReturns": true,
        "noFallthroughCasesInSwitch": false,
        "inlineSourceMap": true,
        "inlineSources": true,
        "experimentalDecorators": true,
        "strictPropertyInitialization": false,
        "typeRoots": ["./node_modules/@types"]
    },
    "exclude": ["cdk.out"]
}
3 code/kubernetes/python-cdk/ts/README.md Normal file
@@ -0,0 +1,3 @@
# CDK TypeScript Examples

* MSK + VPC