merge files from the blockchain infra repo (#59)

This commit is contained in:
autistic-symposium-helper 2024-11-17 17:03:20 -08:00 committed by GitHub
parent 23f56ef195
commit 2a6449bb85
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
346 changed files with 29097 additions and 132 deletions

6
code/aws/README.md Normal file
View file

@ -0,0 +1,6 @@
## resources on aws
<br>
* [eks](eks)
* [lambda](lambda-function)

480
code/aws/eks/README.md Normal file
View file

@ -0,0 +1,480 @@
# AWS EKS
## Tutorials & Articles
* [Provision a Kubernetes Cluster in Amazon EKS with Weaveworks eksctl and AWS CDK](https://blog.reactioncommerce.com/deploying-kubernetes-clusters-in-aws-eks-with-the-aws-cloud-development-kit/).
## Creating an EKS cluster using the eksctl CLI
```bash
eksctl create cluster \
  --name staging \
  --version 1.14 \
  --nodegroup-name staging-workers \
  --node-type m5.xlarge \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 10 \
  --node-ami auto
```
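Once the cluster is up, a quick sanity check might look like this (assuming `eksctl` wrote the kubeconfig, which it does by default):
```bash
# list the worker nodes that joined the cluster
kubectl get nodes
# confirm the nodegroup that eksctl created
eksctl get nodegroup --cluster staging
```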
### Create RDS PostgreSQL instance
Create the `hydra` database and the `hydradbadmin` user/role in the database:
```
hydra=> CREATE DATABASE hydra;
CREATE DATABASE
hydra=> \q
hydra=> CREATE ROLE hydradbadmin;
CREATE ROLE
hydra=> ALTER ROLE hydradbadmin LOGIN;
ALTER ROLE
hydra=> ALTER USER hydradbadmin PASSWORD 'PASS';
ALTER ROLE
```
DB connection string: `postgres://hydradbadmin:PASS@staging.cjwa4nveh3ws.us-west-2.rds.amazonaws.com:5432/hydra`
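To sanity-check the role and connection string end to end, `psql` can connect with the URI directly (a sketch; `PASS` is the placeholder above):
```bash
# list databases to confirm hydradbadmin can log in to the hydra database
psql "postgres://hydradbadmin:PASS@staging.cjwa4nveh3ws.us-west-2.rds.amazonaws.com:5432/hydra" -c '\l'
```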
### Create MongoDB database and user in Atlas
```
MONGO_OPLOG_URL: mongodb://domain:PASS@cluster0-shard-00-02-gk3cz.mongodb.net.:27017,cluster0-shard-00-01-gk3cz.mongodb.net.:27017,cluster0-shard-00-00-gk3cz.mongodb.net.:27017/local?authSource=admin&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true
MONGO_URL: mongodb://domain:PASS@cluster0-shard-00-02-gk3cz.mongodb.net.:27017,cluster0-shard-00-01-gk3cz.mongodb.net.:27017,cluster0-shard-00-00-gk3cz.mongodb.net.:27017/rc-staging?authSource=admin&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true
```
### Generate kubeconfig files for administrator and developer roles
Save the above file somewhere, then:
```bash
export KUBECONFIG=/path/to/file
export AWS_PROFILE=profilename
```
This configuration uses the `aws-iam-authenticator` binary (needs to exist locally)
and maps an IAM role to an internal Kubernetes RBAC role.
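For reference, the kubeconfig's user entry ends up invoking the authenticator roughly like this (the IAM role ARN below is an illustrative placeholder, not the actual role in this setup):
```bash
# emit a token for the staging cluster while assuming the developer IAM role
aws-iam-authenticator token -i staging -r arn:aws:iam::ACCOUNT_ID:role/k8s-developer
```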
The `k8s-developer-role` RBAC role that the developer user is bound to was created in the EKS cluster with:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: k8s-developer-role
  namespace: staging
rules:
- apiGroups:
  - ""
  - "apps"
  - "batch"
  - "extensions"
  resources:
  - "configmaps"
  - "cronjobs"
  - "deployments"
  - "events"
  - "ingresses"
  - "jobs"
  - "pods"
  - "pods/attach"
  - "pods/exec"
  - "pods/log"
  - "pods/portforward"
  - "secrets"
  - "services"
  verbs:
  - "create"
  - "delete"
  - "describe"
  - "get"
  - "list"
  - "patch"
  - "update"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: k8s-developer-rolebinding
  namespace: staging
subjects:
- kind: User
  name: k8s-developer-user
roleRef:
  kind: Role
  name: k8s-developer-role
  apiGroup: rbac.authorization.k8s.io
```
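The mapping from the IAM role to the `k8s-developer-user` Kubernetes user lives in the `aws-auth` ConfigMap; one way to add such an entry is with eksctl (a sketch, with a placeholder role ARN):
```bash
# map the developer IAM role to the k8s-developer-user RBAC subject
eksctl create iamidentitymapping \
  --cluster staging \
  --arn arn:aws:iam::ACCOUNT_ID:role/k8s-developer \
  --username k8s-developer-user
```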
### Install nginx ingress controller and create ALB in front of nginx ingress service
The `Service` type for the `ingress-nginx` service is `NodePort` and not `LoadBalancer`
because we don't want AWS to create a new Load Balancer every time we recreate the ingress.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: kube-ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    nodePort: 30080
    targetPort: http
  - name: https
    port: 443
    nodePort: 30443
    targetPort: https
```
Instead, we provision an ALB and send both HTTP and HTTPS traffic to a Target Group that targets port 30080 on
the EKS worker nodes (which is the `nodePort` in the manifest above for HTTP traffic).
**NOTE**: a rule needs to be added to the EKS worker node security group allowing the ALB's security group to reach port 30080.
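Adding that rule can look something like this (security group IDs are placeholders):
```bash
# allow the ALB security group to reach the nginx NodePort on the worker nodes
aws ec2 authorize-security-group-ingress \
  --group-id sg-EKS-WORKERS \
  --protocol tcp \
  --port 30080 \
  --source-group sg-ALB
```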
### Create Kubernetes Secret for DockerHub credentials (for pulling private images)
```yaml
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
metadata:
  name: reaction-docker-hub
data:
  .dockerconfigjson: BASE64_OF_DOCKERHUB_AUTH_STRING
```
where:
```
DOCKERHUB_AUTH_STRING={"auths":{"https://index.docker.io/v1/":{"username":"rck8s","password":"PASS","auth":"OBTAINED_FROM_DOCKER_CONFIG.JSON"}}}
```
This Secret was created in several namespaces (`default`, `staging`, `monitoring`, `logging`, `flux-system`)
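An equivalent way to create the same Secret in a given namespace is `kubectl create secret docker-registry` (a sketch, using the placeholder password from above):
```bash
# kubectl builds the .dockerconfigjson payload from the flags
kubectl -n staging create secret docker-registry reaction-docker-hub \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=rck8s \
  --docker-password=PASS
```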
### Install and customize Flux for GitOps workflow
Flux is installed in its own `flux-system` namespace.
To install it, we ran:
```bash
kustomize build overlays/staging | kubectl apply -f -
```
The default `Deployment` for Flux uses the `weaveworks/flux` Docker image, which as of its latest version ships an older `kustomize` binary, so we build a custom image on top of the upstream one. Here is its `Dockerfile`:
```dockerfile
FROM fluxcd/flux:1.15.0

ARG REACTION_ENVIRONMENT
ENV SOPS_VERSION 3.4.0
ENV REACTION_ENVIRONMENT=${REACTION_ENVIRONMENT}

RUN /sbin/apk add npm
RUN wget https://github.com/mozilla/sops/releases/download/${SOPS_VERSION}/sops-${SOPS_VERSION}.linux \
    -O /usr/local/bin/sops; chmod +x /usr/local/bin/sops
```
For now, the script `build_and_push_image_staging.sh` sets this variable to `staging`:
```bash
#!/bin/bash
COMMIT_TAG=$(git rev-parse --short HEAD)
docker build --build-arg REACTION_ENVIRONMENT=staging -t reaction-flux:staging .
docker tag reaction-flux:staging reactioncommerce/reaction-flux:staging-${COMMIT_TAG}
docker push reactioncommerce/reaction-flux:staging-${COMMIT_TAG}
```
Flux generates an ssh key upon startup. We need to obtain that key with `fluxctl` and add
it as a deploy key to the `reaction-gitops` GitHub repo:
```bash
fluxctl --k8s-fwd-ns=flux-system identity
```
The `manifest-generation=true` argument allows Flux to inspect and use a special configuration file called
`.flux.yaml` in the root of the associated Git repo. The contents of this file are:
```yaml
version: 1
commandUpdated:
  generators:
  - command: ./generate_kustomize_output.sh
```
Flux will `cd` into the `git-path` (set to `.` in our case in the args above), then will run the `command`
specified in the `.flux.yaml` file. The output of the command needs to be valid YAML, which Flux will apply
to the Kubernetes cluster via `kubectl apply -f -`.
We can run whatever commands we need, following whatever conventions we come up with, inside the `generate_kustomize_output.sh` script. Currently we do something along these lines:
```bash
#!/bin/bash

if [ -z $ENVIRONMENT ]; then
  echo Please set the ENVIRONMENT environment variable to a value such as staging before running this script.
  exit 1
fi

# this is necessary when running npm/npx inside a Docker container
npm config set unsafe-perm true

cd kustomize
for SUBDIR in `ls`; do
  if [ "$1" ] && [ "${SUBDIR}" != "$1" ]; then
    continue
  fi
  OVERLAY_DIR=${SUBDIR}/overlays/${ENVIRONMENT}
  if [ ! -d "${OVERLAY_DIR}" ]; then
    continue
  fi
  if [ -d "${OVERLAY_DIR}/.sops" ]; then
    # decrypt sops-encrypted values and merge them into stub manifests for Secret objects
    npx --quiet --package @reactioncommerce/merge-sops-secrets@1.2.1 sops-to-secret ${OVERLAY_DIR}/secret-stub.yaml > ${OVERLAY_DIR}/secret.yaml
  fi
  # generate kustomize output
  kustomize build ${OVERLAY_DIR}
  echo "---"
  rm -rf ${OVERLAY_DIR}/secret.yaml
done
```
Flux will do a `git pull` against the branch of the `reaction-gitops` repo specified in the
command-line args (`master` in our case) every 5 minutes, and it will run the `generate_kustomize_output.sh` script, then will run `kubectl apply -f -` against the output of that script, applying any manifests that have changed.
The Flux `git pull` can also be forced with `fluxctl sync`:
```bash
fluxctl sync --k8s-fwd-ns flux-system
```
To redeploy a Flux container for example when the underlying Docker image changes, do this in the
`reaction-gitops` root directory:
```bash
cd bootstrap/flux
kustomize build overlays/staging | kubectl apply -f -
```
### Management of Kubernetes secrets
We use sops to encrypt secret values for environment variables representing credentials, database connections, etc.
We create one file per secret in directories of the format `kustomize/SERVICE/overlays/ENVIRONMENT/.sops`.
We encrypt the files with a KMS key specified in `.sops.yaml` in the directory `kustomize/SERVICE/overlays/ENVIRONMENT`.
Example:
```bash
cd kustomize/hydra/overlays/staging
echo -n "postgres://hydradbadmin:PASS@staging.cjwa4nveh3ws.us-west-2.rds.amazonaws.com:5432/hydra" > .sops/DATABASE_URL.enc
sops -e -i .sops/DATABASE_URL.enc
```
We also create a `secret-stub.yaml` file in the directory `kustomize/SERVICE/overlays/ENVIRONMENT` similar to this:
```
$ cat overlays/staging/secret-stub.yaml
apiVersion: v1
kind: Secret
metadata:
  name: hydra
type: Opaque
data:
  DATABASE_URL: BASE64_OF_PLAIN_TEXT_SECRET
  OIDC_SUBJECT_TYPE_PAIRWISE_SALT: BASE64_OF_PLAIN_TEXT_SECRET
  SYSTEM_SECRET: BASE64_OF_PLAIN_TEXT_SECRET
```
The Flux container will call the `generate_kustomize_output.sh` script, which will decrypt the files via Pete's `@reactioncommerce/merge-sops-secrets@1.2.1 sops-to-secret` utility and will stitch their values inside `secret-stub.yaml`, saving the output in a `secret.yaml` file which will then be read by `kustomize`.
Here is the relevant section from the `generate_kustomize_output.sh` script:
```bash
npx --quiet \
  --package @reactioncommerce/merge-sops-secrets@1.2.1 \
  sops-to-secret ${OVERLAY_DIR}/secret-stub.yaml > ${OVERLAY_DIR}/secret.yaml
```
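To inspect a single encrypted value locally (assuming your AWS credentials are allowed to use the KMS key), sops can decrypt it directly:
```bash
cd kustomize/hydra/overlays/staging
# prints the decrypted plaintext to stdout without modifying the file
sops -d .sops/DATABASE_URL.enc
```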
The Flux container needs to be able to use the KMS key for decryption, so we had to create an IAM policy allowing access to this KMS key, then attach the policy to the EKS worker node IAM role.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:GetKeyPolicy",
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey*"
      ],
      "Resource": "arn:aws:kms:us-west-2:773713188930:key/a8d73206-e37a-4ddf-987e-dbfa6c2cd2f8"
    }
  ]
}
```
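Attaching it could look roughly like this (the policy and role names below are illustrative, not the actual ones):
```bash
# create the policy from the JSON document above, then attach it to the worker node role
aws iam create-policy \
  --policy-name flux-sops-kms-decrypt \
  --policy-document file://kms-decrypt-policy.json
aws iam attach-role-policy \
  --role-name STAGING-EKS-WORKER-NODE-ROLE \
  --policy-arn arn:aws:iam::773713188930:policy/flux-sops-kms-decrypt
```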
### Kubernetes manifest generation with Kustomize
We use Kustomize to generate Kubernetes manifests in YAML format.
There are several directories under the `kustomize` directory, one for each service to be deployed.
Example directory structure under `kustomize/reaction-storefront`:
```
|____overlays
| |____staging
| | |____patch-deployment-imagepullsecret.yaml
| | |____kustomization.yaml
| | |____hpa.yaml
| | |____secret-stub.yaml
| | |____.sops
| | | |____SESSION_SECRET.enc
| | | |____OAUTH2_CLIENT_SECRET.enc
| | |____configmap.yaml
| | |____.sops.yaml
|____base
| |____deployment.yaml
| |____ingress.yaml
| |____kustomization.yaml
| |____service.yaml
```
The manifests under the `base` directory define the various Kubernetes objects that will be created for `reaction-storefront` (similar to YAML manifests under the `templates` directory of a Helm chart, but with no templating). In this example we have a Deployment, a Service and an Ingress defined in their respective files.
The file `base/kustomization.yaml` specifies how these manifest files are collated and how other common information is appended:
```
$ cat base/kustomization.yaml
# Labels to add to all resources and selectors.
commonLabels:
  app.kubernetes.io/component: frontend
  app.kubernetes.io/instance: reaction-storefront
  app.kubernetes.io/name: reaction-storefront

# Value of this field is prepended to the
# names of all resources
#namePrefix: reaction-storefront

configMapGenerator:
- name: reaction-storefront

# List of resource files that kustomize reads, modifies
# and emits as a YAML string
resources:
- deployment.yaml
- ingress.yaml
- service.yaml
```
The customization for a specific environment such as `staging` happens in files in the directory `overlays/staging`. Here is the `kustomization.yaml` file from that directory:
```
$ cat overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: staging-
namespace: staging
images:
- name: docker.io/reactioncommerce/reaction-next-starterkit
  newTag: 4e1c281ec5de541ec6b22c52c38e6e2e6e072a1c
resources:
- secret.yaml
- ../../base
patchesJson6902:
- patch: |-
    - op: replace
      path: /spec/rules/0/host
      value: storefront.staging.reactioncommerce.io
  target:
    group: extensions
    kind: Ingress
    name: reaction-storefront
    version: v1beta1
patchesStrategicMerge:
- configmap.yaml
- patch-deployment-imagepullsecret.yaml
```
Some things to note:
- You can customize the Docker image and tag used for a container inside a pod
- You can specify a prefix to be added to all object names, so a deployment declared in the `base/deployment.yaml` file with the name `reaction-storefront` will get `staging-` in front and will become `staging-reaction-storefront`
- You can apply patches to the files under `base` and specify values specific to this environment
Patches can be declared either inline in the `kustomization.yaml` file (such as the Ingress patch above), or in separate YAML files (such as the files in the `patchesStrategicMerge` section).
Here is an example of a separate patch file:
```
$ cat overlays/staging/patch-deployment-imagepullsecret.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reaction-storefront
spec:
  template:
    spec:
      imagePullSecrets:
      - name: reaction-docker-hub
```
You need to specify enough information in the patch file for `kustomize` to identify the object to be patched. If you think of the YAML manifest as a graph with nodes specified by a succession of keys, then the patch needs to specify which node is to be modified or added, and what the new value for that key is. In the example above, we add a new key at `spec->template->spec->imagePullSecrets->0 (item index)->name` and set its value to `reaction-docker-hub`.
**Environment variables** for a specific environment are set in the `configmap.yaml` file in the `overlays/ENVIRONMENT` directory. Example for `reaction-storefront`:
```
$ cat overlays/staging/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: reaction-storefront
data:
  CANONICAL_URL: https://storefront.staging.reactioncommerce.io
  DEFAULT_CACHE_TTL: "3600"
  ELASTICSEARCH_URL: http://elasticsearch-client:9200
  EXTERNAL_GRAPHQL_URL: https://api.staging.reactioncommerce.io/graphql-beta
  HYDRA_ADMIN_URL: http://staging-hydra:4445
  INTERNAL_GRAPHQL_URL: http://staging-reaction-core/graphql-beta
  OAUTH2_ADMIN_PORT: "4445"
  OAUTH2_AUTH_URL: https://auth.staging.reactioncommerce.io/oauth2/auth
  OAUTH2_CLIENT_ID: staging-storefront
  OAUTH2_HOST: staging-hydra
  OAUTH2_IDP_HOST_URL: https://api.staging.reactioncommerce.io/
  OAUTH2_REDIRECT_URL: https://storefront.staging.reactioncommerce.io/callback
  OAUTH2_TOKEN_URL: http://staging-hydra:4444/oauth2/token
  PRINT_ERRORS: "false"
  SEARCH_ENABLED: "false"
  SESSION_MAX_AGE_MS: "2592000000"
```
Another example of a patch is adding `serviceMonitorNamespaceSelector` and `serviceMonitorSelector` sections to a Prometheus manifest file:
```
$ cat bootstrap/prometheus-operator/overlays/staging/patch-prometheus-application-selectors.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: application
  name: application
  namespace: monitoring
spec:
  serviceMonitorNamespaceSelector:
    matchExpressions:
    - key: name
      operator: In
      values:
      - staging
  serviceMonitorSelector:
    matchLabels:
      monitoring: application
```
**In short, the Kustomize patching mechanism is powerful, and it represents the main method for customizing manifests for a given environment while keeping intact the default manifests under the `base` directory.**
### Automated PR creation into reaction-gitops from example-storefront
We added a job to the CircleCI workflow for `reactioncommerce/example-storefront` (`master` branch) to create a PR automatically against `reactioncommerce/reaction-gitops`.
The PR contains a single modification of the `reaction-storefront/overlays/staging/kustomize.yaml` file. It sets the Docker image tag to the `CIRCLE_SHA1` of the current build by calling `kustomize edit set image docker.io/${DOCKER_REPOSITORY}:${CIRCLE_SHA1}`.
Details here:
[https://github.com/reactioncommerce/example-storefront/blob/master/.circleci/config.yml#L101](https://github.com/reactioncommerce/example-storefront/blob/master/.circleci/config.yml#L101)
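Roughly, the job does something like the following (a sketch only; the exact steps, branch names, and PR tooling live in the CircleCI config linked above):
```bash
# check out the GitOps repo and point the staging overlay at the freshly built image
git clone git@github.com:reactioncommerce/reaction-gitops.git
cd reaction-gitops/kustomize/reaction-storefront/overlays/staging
kustomize edit set image docker.io/${DOCKER_REPOSITORY}:${CIRCLE_SHA1}
# commit on a branch and open the PR with whatever tooling the job uses
git checkout -b storefront-${CIRCLE_SHA1}
git commit -am "Bump reaction-storefront image to ${CIRCLE_SHA1}"
git push origin storefront-${CIRCLE_SHA1}
gh pr create --title "Bump reaction-storefront image" --body "Automated image bump from CI"
```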
### Set up ElasticSearch and Fluentd for Kubernetes pod logging
Create IAM policy and add it to EKS worker node role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
```
Create ElasticSearch domain `staging-logs` and configure it to use Amazon Cognito for user authentication for Kibana.
Download `fluentd.yml` from [https://eksworkshop.com/logging/deploy.files/fluentd.yml](https://eksworkshop.com/logging/deploy.files/fluentd.yml), kustomize it, then install the `fluentd` manifests for staging:
```bash
$ kustomize build bootstrap/fluentd/overlays/staging | kubectl create -f -
```

View file

@ -0,0 +1,5 @@
# References for Lambda Functions
- [JavaScript CloudWatch logging test](https://github.com/go-outside-labs/Curated_Cloud_and_Orchestration/blob/master/lambda_function_examples/monitoring_example).
- [Python function triggered by SQS and responding to an SNS topic](https://github.com/go-outside-labs/Curated_Cloud_and_Orchestration/blob/master/lambda_function_examples/sqs-sns_example).

View file

@ -0,0 +1,4 @@
!.env.example
.env
node_modules
src/packaged-*.yaml

View file

@ -0,0 +1,37 @@
BASEDIR := "$(PWD)/src"
CMD := docker run -it --rm \
--volume "/var/run/docker.sock:/var/run/docker.sock" \
--volume "$(PWD)/src:/var/opt" \
--volume ~/.aws:/root/.aws \
--env-file .env
AWS_REGION := $(shell aws configure get region)
.PHONY: help
help:
@$(CMD)
.PHONY: build
build:
@$(CMD) build
.PHONY: validate
validate:
@$(CMD) validate
.PHONY: local
local:
@$(CMD) local invoke "MonitoringTest" \
-t "/var/opt/template.yaml" \
-e "/var/opt/event.json" \
--profile "$(AWS_PROFILE)" \
--docker-volume-basedir "$(BASEDIR)"
.PHONY: logs
logs:
@$(CMD) logs -n MonitoringTest --stack-name ${STACK_NAME} -t --region ${AWS_REGION} --profile ${AWS_PROFILE}
.PHONY: package
package:
@$(CMD) package --template-file ./template.yaml --output-template-file ./packaged-template.yaml --s3-bucket ${S3_BUCKET} --region ${AWS_REGION} --profile ${AWS_PROFILE}

View file

@ -0,0 +1,9 @@
### Monitoring Lambda Test Function
Lambda function that looks at its argument and just succeeds or fails based on the input.
This is used to test our monitoring graphs and alerting rules.
Install [aws-cli](https://aws.amazon.com/cli/) and [sam](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-logs.html).
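A minimal local invocation, mirroring what the Makefile's `local` target runs inside Docker (paths assume the function's `template.yaml` and a sample `event.json` live under `src/`):
```bash
# run the MonitoringTest function once against the sample event
sam local invoke "MonitoringTest" -t src/template.yaml -e src/event.json
```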

View file

@ -0,0 +1,4 @@
export AWS_REGION=
export S3_BUCKET=
export STACK_NAME=
export SERVICE_NAME=

View file

@ -0,0 +1,10 @@
{
"name": "monitoring",
"version": "1.0.0",
"description": "Lambda function that looks at its argument and just succeeds or fails based on the input.",
"main": "index.js",
"scripts": {
"locally": "node src/index.js"
},
"author": "Mia Stein"
}

View file

@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -o errexit # always exit on error
set -o errtrace # trap errors in functions as well
set -o pipefail # don't ignore exit codes when piping output
IFS=$'\n\t'
cd "$(dirname "${BASH_SOURCE[0]}")/.."
source "$1"
make --makefile=./scripts/deploy.mk all

View file

@ -0,0 +1,30 @@
SAM_INPUT_TEMPLATE=./src/template.yaml
SAM_OUTPUT_TEMPLATE=./src/packaged-template.yaml
.PHONY: validate-env
validate-env:
@./scripts/validate-env.sh \
AWS_ACCESS_KEY_ID \
AWS_REGION \
AWS_SECRET_ACCESS_KEY \
STACK_NAME \
S3_BUCKET
.PHONY: package
package: validate-env
@aws cloudformation package \
--template-file ${SAM_INPUT_TEMPLATE} \
--output-template-file ${SAM_OUTPUT_TEMPLATE} \
--s3-bucket ${S3_BUCKET} \
--region ${AWS_REGION}
.PHONY: deploy
deploy: validate-env package
aws cloudformation deploy \
--template-file ${SAM_OUTPUT_TEMPLATE} \
--stack-name ${STACK_NAME} \
--capabilities CAPABILITY_IAM \
--region ${AWS_REGION}
.PHONY: all
all: deploy

View file

@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -o errexit # always exit on error
set -o errtrace # trap errors in functions as well
set -o pipefail # don't ignore exit codes when piping output
set -o posix # more strict failures in subshells
IFS=$'\n\t'
##### RUNNING THE SCRIPT #####
# export FUNCTION=<name of the lambda function in AWS; can be found with "aws lambda list-functions">
# source .env
# ./scripts/invoke.sh {true|false} [count]
cd "$(dirname "${BASH_SOURCE[0]}")/.."
./scripts/validate-env.sh AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
function=$(aws lambda list-functions | jq -r '.Functions[].FunctionName' | grep -E '^monitoring-lambda' | head -1)
payload="{\"forceError\": ${1:-false}}"
outpath="/tmp/monitoring-lambda.out"
count="${2:-1}"
for i in $(seq "${count}"); do
aws lambda invoke \
--function-name "${function}" \
--invocation-type Event \
--payload "${payload}" \
"${outpath}"
done

View file

@ -0,0 +1 @@
sam logs -n REPLACE-LOGS-NAME --stack-name REPLACE-STACK-NAME -t

View file

@ -0,0 +1 @@
sam package --template-file template.yaml --output-template-file packaged.yaml --s3-bucket s3-test-deployment

View file

@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -o errexit # always exit on error
set -o errtrace # trap errors in functions as well
set -o pipefail # don't ignore exit codes when piping output
set -o posix # more strict failures in subshells
IFS=$'\n\t'
declare -a missing
for var in "$@"; do
if [[ -z "${!var}" ]]; then
echo "⚠️ ERROR: Missing required environment variable: ${var}" 1>&2
missing+=("${var}")
fi
done
if [[ -n "${missing[*]}" ]]; then
exit 1
fi

View file

@ -0,0 +1,3 @@
{
"forceError": true
}

View file

@ -0,0 +1,22 @@
/**
* @name monitoring
* @param {Object} event The event payload passed to the handler
* @return {Object} Object with a message and the original event
*/
exports.handler = async function(event) {
console.log("got event", event);
if (event.forceError) {
throw new Error ("Intentional Error.")
}
return {
message: "Work complete.",
event
};
}
if (require.main === module) {
const event = require("./event.json");
exports.handler(event);
}

View file

@ -0,0 +1,11 @@
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Monitoring test lambda
Resources:
MonitoringTest:
Type: 'AWS::Serverless::Function'
Properties:
Handler: index.handler
Runtime: nodejs8.10
Description: Monitoring test lambda
MemorySize: 256

View file

@ -0,0 +1,104 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/

View file

@ -0,0 +1,40 @@
install:
@python setup.py install && pip install -r requirements.txt
build:
@/bin/bash ./scripts/build_package.sh
clean:
@rm -rf /tmp/*.mp4 .coverage .tox build dist lib/*.pyc *.egg-info *pyc __pycache__/ ffmpeg* .pytest_cache /tmp/*mp4 /tmp/*jpg
doctoc:
@doctoc README.md
event:
@PYTHONPATH=$(shell pwd) ./scripts/create_test_event.py
invoke:
@PYTHONPATH=$(shell pwd) lambda invoke -v
lint:
@pep8 --exclude=build,venv,dist . && echo pep8: no linting errors
fixlint:
@autopep8 --in-place *py lib/*py lib/handlers/*py lib/routes/*py tests/*py scripts/*py
test:
@PYTHONPATH=$(shell pwd) py.test -v --color=yes --ignore=venv/
deploy:
@/bin/bash scripts/deploy_lambda.sh sandbox
sbox:
@/bin/cp .env.sample_sandbox .env
stag:
@/bin/cp .env.sample_staging .env
prod:
@/bin/cp .env.sample_prod .env
.PHONY: install clean doctoc lint invoke test build deploy event fixlint prod stag sbox

View file

@ -0,0 +1,289 @@
# AWS Lambda Function to Trim Videos with FFMPEG
An AWS Lambda function to trim videos served from an API endpoint, between two given NTP UTC timestamps.
The stack also uses SQS, SNS, and S3 resources.
----
# Table of Contents
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Introduction](#introduction)
- [Running Locally](#running-locally)
- [Create a virtual environment](#create-a-virtual-environment)
- [Configure the environment](#configure-the-environment)
- [Changes when moving to another environment](#changes-when-moving-to-another-environment)
- [Install the dependencies](#install-the-dependencies)
- [Create Sample SQS events](#create-sample-sqs-events)
- [Running the App locally](#running-the-app-locally)
- [AWS Deployment](#aws-deployment)
- [Running the App as a Lambda Function](#running-the-app-as-a-lambda-function)
- [Testing the flow in AWS](#testing-the-flow-in-aws)
- [Debugging Errors](#debugging-errors)
- [Contributing](#contributing)
- [Committing new code](#committing-new-code)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
----
# Introduction
This application performs the following steps:
1. Receive an SQS event requesting a clip for a given time interval. An example SQS event is the following:
```json
{
"Records": [
{
"body": "{'clipId': '1111111111111', 'retryTimestamps': [], 'cameraId': '1111111111111', 'startTimestampInMs': 1537119363000, 'endTimestampInMs': 1537119423000}",
"receiptHandle": "MessageReceiptHandle",
"md5OfBody": "7b270e59b47ff90a553787216d55d91d",
"eventSourceARN": "arn:aws:sqs:us-west-1:123456789012:MyQueue",
"eventSource": "aws:sqs",
"awsRegion": "us-west-1",
"messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
"attributes": {
"ApproximateFirstReceiveTimestamp": "1523232000001",
"SenderId": "123456789012",
"ApproximateReceiveCount": "1",
"SentTimestamp": "1523232000000"
},
"messageAttributes": {
"SentTimestamp": "1523232000000"
}
}
]
}
```
2. Call the camera API with the endpoint `/cameras/cameraID` to retrieve a camera alias for the given camera id.
3. Call the camera API with the endpoint `/cameras/recording/` to retrieve a list of cam rewind source files within the given time range.
This returns a response like:
```json
[{
"startDate":"2018-09-16T16:00:17.000Z",
"endDate":"2018-09-16T16:10:17.000Z",
"thumbLargeUrl":URL,
"recordingUrl":URL,
"thumbSmallUrl":URL,
"alias":"test"
}]
```
4. Retrieve the cam rewind source files from the origin S3 bucket (downloading them on disk).
5. Use ffmpeg to trim and merge the clips into a single clip and to create several thumbnails (a rough sketch of such commands appears after this list).
6. If the clips are available, store them in the destination S3 bucket.
7. If the clips are not available, send an SQS message back to the queue, similar to the initial message, with a visibility timeout.
8. Call the camera API endpoint `/cameras/clips` to update the information about the new clip, and send an SNS message with the resulting metadata. An example SNS message:
```json
{
"clipId": "1111111111111",
"cameraId": "1111111111111",
"startTimestampInMs": 1534305591000,
"endTimestampInMs": 1534305611000,
"status": "CLIP_AVAILABLE",
"bucket": "s3-test",
"clip": {
"url": URL,
"key": "/test.mp4"
},
"thumbnail": {
"url": "https://url_{size}.png",
"key": "/1111111111111/1111111111111{size}.png",
"sizes": [300, 640, 1500, 3000]
}
}
```
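The trimming and thumbnail generation in step 5 boil down to ffmpeg invocations along these lines (a sketch; the real offsets, filenames, and sizes are computed by the app):
```bash
# cut the requested interval out of a downloaded rewind source file without re-encoding
ffmpeg -i rewind-source.mp4 -ss 00:03:39.000 -to 00:13:01.000 -c copy trimmed-clip.mp4
# render one thumbnail at a 300px width (repeated for each configured size)
ffmpeg -i trimmed-clip.mp4 -vf "scale=300:-1" -frames:v 1 thumbnail_300.png
```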
# Running Locally
To add new features to this application, follow these steps:
### Create a virtual environment
```bash
virtualenv venv
source venv/bin/activate
```
### Configure the environment
```bash
cp .env.sample_{env} .env
vim .env
```
These are the global variables in this file:
| Constant | Definition |
| :----------------------|:-------------------------------------------------------------------------------------- |
| CLIP_DOWNLOAD_DEST | Where the clips are downloaded on disk |
| TIMESTAMP_FORMAT | The timestamp we will be parsing from the clip name strings |
| OLD_FILE_FORMAT | False if the clips to be downloaded have seconds encoded in their names (new format) |
| SQS_RETRY_LIMIT | The limit, in seconds, of retries for CLIP PENDING (default: 15 minutes) |
| OUT_OF_RANGE_LIMIT | The limit, in seconds, of how far back in the past clips can be retrieved (default: 3 days) |
| CAM_SERVICES_URL | The url where the camera services is available |
| CLIP_URL | The URL where the clips are posted to, according to the environment |
| RECORDINGS_URL | The url where the source recordings are retrieved. |
| THUMBNAIL_SIZES | List of values for which clip thumbnails need to be created |
| VIDEO_MAX_LEN | Maximum length allowed for a clip |
| S3_BUCKET_ORIGIN | AWS S3 bucket where the rewinds are available |
| S3_BUCKET_ORIGIN_DIR | AWS S3 'folder' where the rewinds are available |
| S3_BUCKET_DESTINATION | AWS S3 bucket where the clips will be uploaded to |
| AWS_SNS_TOPIC | AWS SNS topic arn |
| AWS_SQS_QUEUE | AWS SQS queue arn |
| AWS_SQS_QUEUE_URL | AWS SQS queue url |
| SQS_TIMEOUT | AWS SQS invisibility timeout in seconds |
#### Changes when moving to another environment
Whenever you move among the environments (prod, sandbox, or staging), you need to change the following variables:
| Constant | Possible value |
| :---------------------- |:------------------------------------------------- |
| CLIP_URL | https://camclips.{ENV}.test.com |
| S3_BUCKET_DESTINATION | cameras-service-clips-cdn-{ENV} |
| AWS_SNS_TOPIC | arn:aws:sns:test_{ENV} |
| AWS_SQS_QUEUE | arn:aws:sqs:test-sqs-{ENV} |
| AWS_SQS_QUEUE_URL | https://sqs.test-sqs-{ENV} |
### Install the dependencies
```bash
make install
```
### Create Sample SQS events
To create an `event.json` file to be tested in this application, run:
```bash
make event
```
Note that this command runs `./scripts/create_test_event.py` and assumes that the camera `test` is up. If it is down, you should add a valid camera to the global variables section in that script.
You can create testing `event.json` to test alternate flows such as:
* **Clip pending** (i.e. when the requested clip is within 15 minutes of the SQS message timestamp but has not been created yet):
```bash
python scripts/create_test_event.py -p
```
* **Clip not available** (i.e. when the requested clip is older than 15 minutes but within 3 days of the SQS message timestamp):
```bash
python scripts/create_test_event.py -n
```
* **Clip out of range** (i.e. when the requested clip is older than 3 days relative to the SQS message timestamp):
```bash
python scripts/create_test_event.py -o
```
### Running the App locally
```bash
make invoke
```
-----
# AWS Deployment
### Running the App as a Lambda Function
This creates a `.zip` package and deploys it to the lambda function:
```bash
make deploy
```
Check whether the package has the expected content:
```bash
unzip -l dist/cameras-service-generate-clip.zip
```
Note that the FFMPEG binaries are added manually, while the Python dependencies are built within a Dockerfile.
### Testing the flow in AWS
You can test this application flow in the sandbox and/or staging environments by following these steps:
1. In the [SQS dashboard](https://console.aws.amazon.com/sqs/home?region=us-west-1), select SQS queue and click `Queue action -> Send a Message`.
2. Type the value for `body`, similar to a message created in `event.json`. For instance:
```
{'clipId': '111111111111','retryTimestamps': [],'cameraId': '111111111111','startTimestampInMs': 1538412898000,'endTimestampInMs': 1538413498000}
```
3. This should trigger the lambda function, and you should see the clips and thumbnails in the environment's S3 bucket in around 20-40 seconds.
### Debugging Errors
Errors will be logged in [CloudWatch](https://us-west-1.console.aws.amazon.com/cloudwatch/home?region=us-west-1#logs:). To make sense of logs in the CLI, you should install [saw](https://github.com/TylerBrock/saw).
For instance, to check error logs for staging in the last hour:
```bash
saw get /aws/lambda/clip-function -1h --filter error
```
----
# Contributing
### Committing new code
Run unit tests with:
```bash
make test
```
When deploying scripts (or to report back to Github on PRs), we ensure that code follows style guidelines with:
```bash
make lint
```
To fix lint errors, use:
```bash
make fixlint
```
Update the documentation (README.md) with:
```bash
make doctoc
```

View file

@ -0,0 +1,4 @@
region: us-west-1
function_name: ffmpeg-trimmer
handler: service.handler
description: Lambda function for creating camera clips by two NTP UTC timestamps.

View file

@ -0,0 +1,66 @@
#!/usr/bin/env python2
#
# Create a clipId to be used in event.json
import requests
import subprocess
import json
import time
def put_request(url, data):
"""
Send the PUT request to create the id, returning
the clipId string.
"""
r = requests.post(url, json=data)
print('--------------------------------------------------------')
print('Request to {}'.format(url))
print('Data sent: {}'.format(data))
print('Status code: {}'.format(r.status_code))
if r.status_code == 200:
print(r.json())
return r.json()['clipId']
else:
return False
def create_timestamps():
"""
Create a timestamp to send in the PUT request.
"""
now = int(time.time()*1000)
sent_ts = str(now)
begin_ts = str(now - 600000)
end_ts = str(now - 600000 + 180000)
return sent_ts, begin_ts, end_ts
def create_data(cam_id, url, begin_ts, end_ts):
"""
Create the data that need to be sent to the
PUT request.
"""
data = {
"cameraId": cam_id,
"startTimestampInMs": begin_ts,
"endTimestampInMs": end_ts
}
return data
def main(url, cam_id):
sent_ts, begin_ts, end_ts = create_timestamps()
data = create_data(cam_id, url, begin_ts, end_ts)
clip_id = put_request(url, data)
print('clipId to be added to event.json: {}'.format(clip_id))
print('send ts, start, end: {0} {1} {2}'.format(
sent_ts, begin_ts, end_ts))

View file

@ -0,0 +1 @@
saw get /aws/lambda/ffmpeg-clip --start -24h --filter error

View file

@ -0,0 +1,20 @@
{
"Records": [
{
"attributes": {
"ApproximateFirstReceiveTimestamp": "XXXXXXXXXXXXXXXXXXX",
"ApproximateReceiveCount": "1",
"SenderId": "XXXXXXXXXXXXXXXXXXX",
"SentTimestamp": "1543318636000"
},
"awsRegion": "us-west-1",
"body": "{'clipId': '5bc67ace8e9c352780437d2c','retryTimestamps': [],'cameraId': '582356e81ee905c72145623e','startTimestampInMs': '1543318156000','endTimestampInMs': '1543318636000'}",
"eventSource": "aws:sqs",
"eventSourceARN": "XXXXXXXXXXXXXXXXXXX",
"md5OfBody": "XXXXXXXXXXXXXXXXXXX",
"messageAttributes": {},
"messageId": "XXXXXXXXXXXXXXXXXXX",
"receiptHandle": "XXXXXXXXXXXXXXXXXXX"
}
]
}

View file

@ -0,0 +1,31 @@
boto3==1.4.4
botocore==1.5.62
certifi==2023.7.22
chardet==3.0.4
click==6.6
docutils==0.12
futures==3.2.0
idna==2.7
jmespath==0.9.0
pyaml==15.8.2
python-dateutil==2.5.3
python-dotenv==0.9.1
python-lambda==3.2.2
PyYAML==5.4
requests==2.31.0
s3transfer==0.1.13
six==1.10.0
urllib3==1.26.5
autopep8==1.4
appdirs==1.4.3
packaging==16.8
pep8==1.7.0
py==1.11.0
pyaml==15.8.2
pyparsing==2.2.0
pytest==3.0.7
virtualenv==15.0.3
jmespath==0.9.0
mock==2.0.0
requests-mock==1.5.2
coverage==4.5.1

View file

@ -0,0 +1,4 @@
packages
lib
app
Dockerfile.build

View file

@ -0,0 +1,9 @@
FROM amazonlinux:1
WORKDIR /opt/app
ADD requirements.txt .
RUN \
yum install -y python27-pip && \
pip install --target=/opt/app -r requirements.txt

View file

@ -0,0 +1,46 @@
#!/usr/bin/env bash
# This script adds additional dependencies that are needed for the lambda function package.
set -x
PACKAGE_NAME=cameras-clip.zip
# If S3_BUCKET env var isn't set, default it
if [ -z "${S3_BUCKET}" ]; then
S3_BUCKET=s3-test
fi
# Set dist env and create initial zip file
ORIGIN=$(pwd)
rm -rf dist && mkdir dist
lambda build --local-package . && mv dist/*.zip dist/$PACKAGE_NAME
cd dist/
## Fetch & add binary for FFMPEG
aws s3 cp "s3://${S3_BUCKET}/ffmpeg/ffmpeg-release-64bit-static.tar.xz" . && tar xf ffmpeg-release-64bit-static.tar.xz
zip -j -r9 $PACKAGE_NAME ffmpeg-*-64bit-static/ffmpeg
zip -j -r9 $PACKAGE_NAME ffmpeg-*-64bit-static/ffprobe
# Add this App's source code
cp -r ../lib .
zip -r9 $PACKAGE_NAME lib
# Add dependencies from pip
mkdir packages
cp ../scripts/Dockerfile.build Dockerfile
cp ../scripts/.dockerignore .dockerignore
cp ../requirements.txt .
docker build --tag pillow-build .
CTNHASH="$(docker create pillow-build)"
docker cp "${CTNHASH}":/opt/app/ .
cp -rf app/* packages/
# Package everything
cd packages
zip -ur9 ../$PACKAGE_NAME *
cd ..
# Clean up
#rm -rf ffmpeg-release-64bit-static.tar.xz ffmpeg-*-64bit-static/ packages/ lib/
docker rm ${CTNHASH}
cd $ORIGIN

View file

@ -0,0 +1,177 @@
#!/usr/bin/env python2
#
# For integration tests, different SQS events are needed.
# This script generates events for alternate flows.
# Global variables are defined in main().
import time
import json
import argparse
import calendar
import datetime
def time_to_epoch(timestamp, timestamp_format):
"""
Given a timestamp string in seconds, return
the epoch timestamp string, in milliseconds.
"""
date = time.strptime(str(timestamp), timestamp_format)
return str(calendar.timegm(date)) + '000'
def generate_delta_time(delta, timestamp_format, now, days):
"""
Given a clip duration delta, and how many days back
from today, return a begin and end timestamp for the event.
"""
end = now - datetime.timedelta(days=days, minutes=0)
begin = now - datetime.timedelta(days=days, minutes=delta)
return begin.strftime(timestamp_format), end.strftime(timestamp_format)
def get_current_local_time(timestamp):
"""
Return the current time in a datetime object, a
human-readable string, and an epoch time integer.
"""
now = datetime.datetime.now()
human_now = now.strftime(timestamp)
epoch_now = time_to_epoch(human_now, timestamp)
return now, human_now, epoch_now
def create_event(begin, end, event_file, cam_id, epoch_now):
"""
Create an event.json SQS message file for
tests with the new timestamps and save it to the
destination in event_file.
"""
data = {'Records': [
{
"md5OfBody": "XXXXXXXXXXXXXXXXXXX",
"receiptHandle": "XXXXXXXXXXXXXXXXXXX",
"body": ("{'clipId': '1111111111111111',"
"'retryTimestamps': [],"
"'cameraId': '" + str(cam_id) + "',"
"'startTimestampInMs': '" + str(begin) + "',"
"'endTimestampInMs': '" + str(end) + "'}"),
"eventSourceARN": "XXXXXXXXXXXXXXXXXXX",
"eventSource": "aws:sqs",
"awsRegion": "us-west-1",
"messageId": "XXXXXXXXXXXXXXXXXXX",
"attributes": {
"ApproximateFirstReceiveTimestamp": "XXXXXXXXXXXXXXXXXXX",
"SenderId": "XXXXXXXXXXXXXXXXXXX",
"ApproximateReceiveCount": "1",
"SentTimestamp": epoch_now
},
"messageAttributes": {}
}
]
}
with open(event_file, 'w') as f:
json.dump(data, f, separators=(',', ': '), sort_keys=True, indent=2)
return data['Records'][0]['body']
def main():
# Global variables.
EVENT_FILE = 'event.json'
TIMESTAMP_FORMAT = '%d-%m-%Y %H:%M:%S'
DAYS_BEFORE_PENDING = 0
DAYS_BEFORE_AVAILABLE = 0
DAYS_BEFORE_NOT_AVAILABLE = 2
DAYS_BEFORE_OUT_OF_RANGE = 8
# Camera IDs used for tests, they should be checked whether
# they are currently down or not. For instance:
CAM_DOWN = '1111111111111111'
CAM_UP = '1111111111111111'
# This should not be more than 5 minutes (or the rewind clip generator
# app won't accept the event).
SESSION_DURATION_OK = 3
SESSION_DURATION_CLIP_TO_LONG = 8
# Get the time of event to be generated.
parser = argparse.ArgumentParser(
description='Clip duration you are looking for (in mins):')
parser.add_argument('-a', '--clip_available',
action='store_true', help='Event for <15 min')
parser.add_argument('-p', '--clip_pending',
action='store_true', help='Event cam down <15 min')
parser.add_argument('-o', '--clip_out_of_range',
action='store_true', help='Event for >3 days')
parser.add_argument('-n', '--clip_not_available',
action='store_true', help='Event cam down >3 days')
parser.add_argument('-t', '--clip_too_long',
action='store_true', help='Clips > 5 min')
args = parser.parse_args()
# Define what type of event we want.
if args.clip_pending:
days_before = DAYS_BEFORE_PENDING
cam_id = CAM_DOWN
session_duration = SESSION_DURATION_OK
elif args.clip_out_of_range:
days_before = DAYS_BEFORE_OUT_OF_RANGE
cam_id = CAM_UP
session_duration = SESSION_DURATION_OK
elif args.clip_not_available:
days_before = DAYS_BEFORE_NOT_AVAILABLE
cam_id = CAM_DOWN
session_duration = SESSION_DURATION_OK
elif args.clip_too_long:
days_before = DAYS_BEFORE_AVAILABLE
cam_id = CAM_UP
session_duration = SESSION_DURATION_CLIP_TO_LONG
else:
# Defaults to CLIP_AVAILABLE event.
days_before = DAYS_BEFORE_AVAILABLE
cam_id = CAM_UP
session_duration = SESSION_DURATION_OK
# Get current time in human string and epoch int.
now, human_now, epoch_now = get_current_local_time(TIMESTAMP_FORMAT)
# Generate begin and end times for the requested number of days back.
begin, end = generate_delta_time(
session_duration, TIMESTAMP_FORMAT, now, days_before)
# Convert these times to epoch timestamp and human time.
end_epoch = time_to_epoch(end, TIMESTAMP_FORMAT)
begin_epoch = time_to_epoch(begin, TIMESTAMP_FORMAT)
if begin_epoch and end_epoch:
# Creates the JSON file for the event.
body = create_event(begin_epoch, end_epoch,
EVENT_FILE, cam_id, epoch_now)
print('-----------------------------------------------------')
print('Event test saved at {}'.format(EVENT_FILE))
print('Camera id is {}'.format(cam_id))
print('Timestamp for {0} days ago, delta time is {1} mins'.format(
days_before, session_duration))
print('Begin: {0} -> End: {1}'.format(begin_epoch, end_epoch))
print('Begin: {0} -> End: {1}'.format(begin, end))
print('Time: {}'.format(human_now))
print('Body: ')
print(body)
print('-----------------------------------------------------')
else:
print('Could not create timestamps for {}'.format(session_duration))
if __name__ == '__main__':
main()

View file

@ -0,0 +1,58 @@
#!/bin/bash -ex
# Script that deploys this app to the AWS Lambda function, similar to what Jenkins does.
USAGE=$(cat <<-END
Usage:
deploy_lambda.sh <environment>
Examples:
deploy_lambda.sh staging
END
)
if [[ "$1" = "-h" ]]; then
echo "${USAGE}"
exit
fi
if [[ -n "$1" ]]; then
SERVER_GROUP=$1
else
echo '[ERROR] You must specify the env: production, sandbox, staging'
echo
echo "${USAGE}"
exit 1
fi
BUILD_ENVIRONMENT=$1
APP_NAME=cameras-service-generate-clip
export AWS_DEFAULT_REGION="us-west-1"
export AWS_REGION="us-west-1"
if [[ "${BUILD_ENVIRONMENT}" == "sandbox" ]]; then
S3_BUCKET=sl-artifacts-dev
else
S3_BUCKET="sl-artifacts-${BUILD_ENVIRONMENT}"
fi
S3_PREFIX="lambda-functions/${APP_NAME}"
S3_BUNDLE_KEY="sl-${APP_NAME}.zip"
S3_TAGGED_BUNDLE_KEY="sl-${APP_NAME}-${BUILD_TAG}.zip"
make clean
make install
make lint
make build
aws \
s3 cp "dist/${S3_BUNDLE_KEY}" "s3://${S3_BUCKET}/${S3_PREFIX}/${S3_BUNDLE_KEY}"
aws \
s3 cp "s3://${S3_BUCKET}/${S3_PREFIX}/${S3_BUNDLE_KEY}" "s3://${S3_BUCKET}/${S3_PREFIX}/${S3_TAGGED_BUNDLE_KEY}"
aws \
lambda update-function-code \
--function-name "sl-${APP_NAME}-${BUILD_ENVIRONMENT}" \
--s3-bucket "${S3_BUCKET}" \
--s3-key "${S3_PREFIX}/${S3_TAGGED_BUNDLE_KEY}"
echo "build description:${APP_NAME}|${BUILD_ENVIRONMENT}|${BUILD_TAG}|"

View file

@ -0,0 +1,3 @@
#!/usr/bin/env bash
curl -i "URL?startDate=$(date -v '-1H' +%s)000&endDate=$(date +%s)000"

View file

@ -0,0 +1,17 @@
# -*- coding: utf-8 -*-
"""
Service handler module for AWS Lambda function. 'HANDLERS' constant dict is
used to map route requests to correct handler.
"""
import logging
from lib.config import LOG_LEVEL
from lib.routes import root
if LOG_LEVEL in ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'):
level = logging.getLevelName(LOG_LEVEL)
else:
level = logging.INFO
logging.basicConfig(level=level)
handler = root.handler

View file

@ -0,0 +1,7 @@
from distutils.core import setup
setup(
name='rewind_clip_generator',
version='1.0',
packages=['lib', 'lib.routes', 'lib.handlers'],
)

View file

@ -0,0 +1 @@
# -*- coding: utf-8 -*-

View file

@ -0,0 +1,19 @@
{
"clipId": "11111111111",
"cameraId": "11111111111",
"startTimestampInMs": 1534305591000,
"endTimestampInMs": 1534305611000,
"status": "CLIP_AVAILABLE",
"bucket": "sl-cam-clip-archive-prod",
"clip": {
"url": "https://test.mp4",
"key": "/583499c4e411dc743a5d5296/11111111111.mp4"
},
"thumbnail": {
"url": "https://test_{size}.png",
"key": "/11111111111/1111111111_{size}.png",
"sizes": [300, 640, 1500, 3000]
}
}

View file

@ -0,0 +1,24 @@
{
"Records": [
{
"body": "{'clipId': '507f191e810c19729de860ea', 'retryTimestamps': [], 'cameraId': '583499c4e411dc743a5d5296', 'startTimestampInMs': 1537119363000, 'endTimestampInMs': 1537119423000}",
"receiptHandle": "MessageReceiptHandle",
"md5OfBody": "7b270e59b47ff90a553787216d55d91d",
"eventSourceARN": "arn:aws:sqs:us-west-1:123456789012:MyQueue",
"eventSource": "aws:sqs",
"awsRegion": "us-west-1",
"messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
"attributes": {
"ApproximateFirstReceiveTimestamp": "1523232000001",
"SenderId": "123456789012",
"ApproximateReceiveCount": "1",
"SentTimestamp": "1523232000000"
},
"messageAttributes": {
"SentTimestamp": "1523232000000"
}
}
]
}

View file

@ -0,0 +1,10 @@
[
{
"startDate":"2018-08-25T19:20:16.000Z",
"endDate":"2018-08-25T19:30:16.000Z",
"thumbLargeUrl":"https://test_full.jpg",
"recordingUrl":"https://test.mp4",
"thumbSmallUrl":"https://test_small.jpg",
"alias":"test"
}
]

View file

@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
""" Test Root service handler module for AWS Lambda function. """
import os
import json
import pytest
from lib.routes import root
fixtures_path = os.path.join(os.path.dirname(__file__), '..', 'fixtures')
@pytest.fixture
def sns_event_record():
sns_event_record_path = os.path.join(fixtures_path, 'SNS_contract.json')
with open(sns_event_record_path, 'r') as sns_event_record_json:
return json.load(sns_event_record_json)
@pytest.fixture
def context():
return {}
class TestHandler():
def test_type_error_for_bad_params(self, context):
try:
root.handler('', context)
except TypeError:
pass
else:
pytest.fail('Expected TypeError was not raised')

View file

@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
""" AWS Wrapper Test Module """
import unittest
import mock
import lib.aws_wrapper
class TestAwsWrapper(unittest.TestCase):
def setUp(self):
self.filename = 'filename_test'
self.destination = 'destination_test'
self.clip_metadata = {'test': 'test'}
self.aw = lib.aws_wrapper.AwsWrapper()
@mock.patch('lib.aws_wrapper.boto3')
def test_download_clip_boto(self, boto3):
self.aw.download_video(self.filename, self.destination)
boto3.resource.assert_called_with('s3')
@mock.patch('lib.aws_wrapper.boto3')
def test_upload_clip_boto(self, boto3):
self.aw.upload_asset(self.filename, self.destination)
boto3.client.assert_called_with('s3')
@mock.patch('lib.aws_wrapper.boto3')
def test_send_sns_msg_boto(self, boto3):
aw = lib.aws_wrapper.AwsWrapper()
aw.send_sns_msg(self.clip_metadata)
boto3.client.assert_called_with('sns')

View file

@ -0,0 +1,52 @@
# -*- coding: utf-8 -*-
""" Cam Wrapper Test Module """
import mock
import unittest
import pytest
import lib.cam_wrapper
import lib.utils
class TestCamWrapper(unittest.TestCase):
def setUp(self):
self.session_start_ms = '1535223360000'
self.session_end_ms = '1535224400000'
self.cameraId = '1111111111111111'
self.clipId = '1111111111111111'
self.metadata_test_clip_key = '/{0}/{1}.mp4'.format(
self.cameraId, self.clipId)
self.metadata_test_tb_key = '/{0}/{1}'.format(
self.cameraId, self.clipId) + '_{size}.jpg'
self.cw = lib.cam_wrapper.CamWrapper(
self.session_start_ms, self.session_end_ms,
self.cameraId, self.clipId)
@mock.patch('lib.utils.get_request')
def test_get_alias(self, mocked_method):
self.cw.get_alias()
self.assertTrue(mocked_method.called)
def test_metadata(self):
self.assertEqual(
self.cw.metadata['clip']['key'], self.metadata_test_clip_key)
self.assertEqual(
self.cw.metadata['thumbnail']['key'], self.metadata_test_tb_key)
@mock.patch('lib.utils.get_request')
def test_get_clip_names(self, mocked_method):
alias = self.cw.get_clip_names()
self.assertTrue(mocked_method.called)
@mock.patch('lib.utils.put_request')
def test_put_clip_metadata(self, mocked_method):
alias = self.cw.put_clip_metadata()
self.assertTrue(mocked_method.called)
def test_update_clip_status(self):
test_status = 'test'
self.cw.update_clip_status(test_status)
self.assertEqual(self.cw.metadata['status'], test_status)

View file

@ -0,0 +1,30 @@
# -*- coding: utf-8 -*-
""" Ffmpeg Wrapper Test Module """
import lib.ffmpeg_wrapper
import unittest
class TestFfmpegWrapper(unittest.TestCase):
def setUp(self):
self.epoch_video = 1.535884819e+12
self.crop_start = '03:39.000'
self.crop_end = '13:01.000'
self.session_start_ms = '1535884600000'
self.session_end_ms = '1535885600000'
self.alias = 'test'
self.clipId = '1111111111111111'
self.clips = []
self.fw = lib.ffmpeg_wrapper.FfmpegWrapper(
self.alias, self.clips,
self.session_start_ms,
self.session_end_ms,
self.clipId)
def test_calculate_crop_time(self):
crop_start, crop_end = self.fw.calculate_trim_time(self.epoch_video)
print crop_start, crop_end, self.crop_end, self.crop_start
self.assertEqual(crop_end, self.crop_end)
self.assertEqual(crop_start, self.crop_start)

View file

@ -0,0 +1,80 @@
# -*- coding: utf-8 -*-
""" Utils Test Module """
import os
import json
import pytest
import unittest
import mock
import requests
import requests_mock
import lib.utils
fixtures_path = os.path.join(os.path.dirname(__file__), 'fixtures')
@pytest.fixture
def get_fixture(fixture_json):
get_sqs_event = os.path.join(fixtures_path, fixture_json)
with open(get_sqs_event, 'r') as f:
return json.load(f)
class TestClipGeneratorTrigger(unittest.TestCase):
def setUp(self):
self.domain = 'http://test.com'
self.endpoint = 'filetest.mp4'
self.file_url = 'http://test.com/filetest.mp4'
self.clipname = 'camtest.20180815T140019.mp4'
self.epoch_in_ms = 1535224400000
self.timestamp = '20180825T191320'
self.timestamp_format = '%Y%m%dT%H%M%S'
self.msecs = 1807
self.resp = {'test1': 'test2'}
def test_url_join(self):
self.assertEqual('http://test.com/filetest.mp4',
lib.utils.url_join(self.domain,
self.endpoint), msg=None)
def test_get_request(self):
with requests_mock.Mocker() as m:
m.get(self.file_url, json=self.resp)
self.assertTrue(lib.utils.get_request(self.domain, self.endpoint))
def test_get_basename_str(self):
self.assertEqual('filetest.mp4', lib.utils.get_basename_str(
self.file_url), msg=None)
def test_get_timestamp_str(self):
self.assertEqual('20180815T140019000',
lib.utils.get_timestamp_str(self.clipname), msg=None)
def test_get_location_str(self):
self.assertEqual('hbpiernscam', lib.utils.get_location_str(
self.clipname), msg=None)
def test_timestamp_to_epoch(self):
self.assertEqual(self.epoch_in_ms, lib.utils.timestamp_to_epoch(
self.timestamp, self.timestamp_format), msg=None)
def test_epoch_to_timestamp(self):
self.assertEqual(self.timestamp, lib.utils.epoch_to_timestamp(
self.epoch_in_ms, self.timestamp_format), msg=None)
def test_humanize_delta_time(self):
self.assertEqual(
'00:01.807', lib.utils.humanize_delta_time(self.msecs), msg=None)
@mock.patch('lib.utils.os.remove')
def test_remove_file(self, mocked_remove):
lib.utils.remove_file(self.clipname)
self.assertTrue(mocked_remove.called)
@mock.patch('lib.utils.subprocess.check_output')
def test_run_subprocess(self, mocked_subprocess):
lib.utils.run_subprocess(['ls'], 'ok', 'err')
self.assertTrue(mocked_subprocess.called)