Merge the Complement testing Docker images into a single, multi-purpose image. (#12881)

Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
Authored by reivilibre on 2022-06-08 10:57:05 +01:00; committed by GitHub
parent c316fe8d4a
commit 67f51c84f8
18 changed files with 277 additions and 372 deletions

changelog.d/12881.misc (new file)

@@ -0,0 +1 @@
Merge the Complement testing Docker images into a single, multi-purpose image.


@@ -1,5 +1,6 @@
# Inherit from the official Synapse docker image
ARG SYNAPSE_VERSION=latest
FROM matrixdotorg/synapse:$SYNAPSE_VERSION

# Install deps
RUN \


@@ -8,13 +8,19 @@ docker images that can be run inside Complement for testing purposes.

Note that running Synapse's unit tests from within the docker image is not supported.

## Using the Complement launch script

`scripts-dev/complement.sh` is a script that will automatically build
and run Synapse against Complement.
Consult the [contributing guide][guideComplementSh] for instructions on how to use it.

[guideComplementSh]: https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-integration-tests-complement

## Building and running the images manually

Under some circumstances, you may wish to build the images manually.
The instructions below explain how to do that.

Start by building the base Synapse docker image. If you wish to run tests with the latest
release of Synapse, instead of your current checkout, you can skip this step. From the
@@ -24,12 +30,17 @@ root of the repository:

```sh
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```

Next, build the workerised Synapse docker image, which is a layer over the base
image.

```sh
docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
```

Finally, build the multi-purpose image for Complement, which is a layer over the workers image.

```sh
docker build -t complement-synapse -f docker/complement/Dockerfile docker/complement
```

This will build an image with the tag `complement-synapse`, which can be handed to
@@ -37,49 +48,9 @@ Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.

See [the Complement image README](./complement/README.md) for information about the
expected environment variables.
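For example, a minimal invocation might look like the following (a sketch, not part of this commit, assuming Complement is checked out in a sibling directory):

```sh
cd ../complement
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v ./tests/...
```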
The above docker image only supports running Synapse with SQLite and in a
single-process topology. The following instructions are used to build a Synapse image for
Complement that supports either single or multi-process topology with a PostgreSQL
database backend.
As with the single-process image, build the base Synapse docker image. If you wish to run
tests with the latest release of Synapse, instead of your current checkout, you can skip
this step. From the root of the repository:
```sh
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
This will build an image with the tag `matrixdotorg/synapse`.
Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`.
Again, from the root of the repository:
```sh
docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
```
This will build an image with the tag `matrixdotorg/synapse-workers`.
It's worth noting at this point that this image is fully functional, and
can be used for testing against locally. See instructions for using the container
under
[Running the Dockerfile-worker image standalone](#running-the-dockerfile-worker-image-standalone)
below.
Finally, build the Synapse image for Complement, which is based on
`matrixdotorg/synapse-workers`.
```sh
docker build -t matrixdotorg/complement-synapse-workers -f docker/complement/SynapseWorkers.Dockerfile docker/complement
```
This will build an image with the tag `complement-synapse-workers`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.
## Running the Dockerfile-worker image standalone
@@ -113,6 +84,9 @@ docker run -d --name synapse \

...substituting `POSTGRES*` variables for those that match a postgres host you have
available (usually a running postgres docker container).
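For instance, a throwaway postgres container matching those defaults could be started like this (a sketch, not part of this commit; the container name, password and network wiring are placeholders). Synapse expects a `C` locale for its database, hence the `POSTGRES_INITDB_ARGS`:

```sh
docker run -d --name synapse-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=somesecret \
  -e POSTGRES_INITDB_ARGS="--encoding=UTF8 --locale=C" \
  postgres:13
```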
### Workers
The `SYNAPSE_WORKER_TYPES` environment variable is a comma-separated list of workers to
use when running the container. All possible worker names are defined by the keys of the
`WORKERS_CONFIG` variable in [this script](configure_workers_and_start.py), which the
@@ -125,8 +99,11 @@ type, simply specify the type multiple times in `SYNAPSE_WORKER_TYPES`
(e.g. `SYNAPSE_WORKER_TYPES=event_creator,event_creator...`).

Otherwise, `SYNAPSE_WORKER_TYPES` can either be left empty or unset to spawn no workers
(leaving only the main process).
The container will only be configured to use Redis-based worker mode if there are
workers enabled.
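As an illustration (a sketch, not part of this commit; names and secrets are placeholders), a few workers could be enabled like so:

```sh
docker run -d --name synapse-with-workers \
  -v synapse-data:/data \
  -e SYNAPSE_SERVER_NAME=my.matrix.host \
  -e SYNAPSE_REPORT_STATS=no \
  -e POSTGRES_HOST=postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=somesecret \
  -e SYNAPSE_WORKER_TYPES=federation_sender,media_repository,synchrotron \
  -p 8008:8008 \
  matrixdotorg/synapse-workers
```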
### Logging
Logs for workers and the main process are logged to stdout and can be viewed with
standard `docker logs` tooling. Worker logs contain their worker name
@@ -136,3 +113,21 @@ Setting `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1` will cause worker logs to be written to
`<data_dir>/logs/<worker_name>.log`. Logs are kept for 1 week and rotate every day at 00:
00, according to the container's clock. Logging for the main process must still be
configured by modifying the homeserver's log config in your Synapse data volume.
### Application Services
Setting the `SYNAPSE_AS_REGISTRATION_DIR` environment variable to the path of
a directory (within the container) will cause the configuration script to scan
that directory for `.yaml`/`.yml` registration files.
Synapse will be configured to load these configuration files.
### TLS Termination
Nginx is present in the image to route requests to the appropriate workers,
but it does not serve TLS by default.
You can configure `SYNAPSE_TLS_CERT` and `SYNAPSE_TLS_KEY` to point to a
TLS certificate and key (respectively), both in PEM (textual) format.
In this case, Nginx will additionally serve using HTTPS on port 8448.
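Putting the two sections above together, a run command passing registration files and TLS material might look like this (a sketch, not part of this commit; host paths, names and secrets are placeholders):

```sh
docker run -d --name synapse-with-workers \
  -v synapse-data:/data \
  -v /path/to/registrations:/appservices \
  -v /path/to/tls:/tls \
  -e SYNAPSE_SERVER_NAME=my.matrix.host \
  -e SYNAPSE_REPORT_STATS=no \
  -e POSTGRES_HOST=postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=somesecret \
  -e SYNAPSE_AS_REGISTRATION_DIR=/appservices \
  -e SYNAPSE_TLS_CERT=/tls/server.crt \
  -e SYNAPSE_TLS_KEY=/tls/server.key \
  -p 8008:8008 -p 8448:8448 \
  matrixdotorg/synapse-workers
```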


@@ -1,22 +1,45 @@
# This dockerfile builds on top of 'docker/Dockerfile-workers' in matrix-org/synapse
# by including a built-in postgres instance, as well as setting up the homeserver so
# that it is ready for testing via Complement.
#
# Instructions for building this image from those it depends on is detailed in this guide:
# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
ARG SYNAPSE_VERSION=latest
FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION

# Install postgresql
RUN apt-get update && \
  DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y postgresql-13

# Configure a user and create a database for Synapse
RUN pg_ctlcluster 13 main start && su postgres -c "echo \
 \"ALTER USER postgres PASSWORD 'somesecret'; \
 CREATE DATABASE synapse \
  ENCODING 'UTF8' \
  LC_COLLATE='C' \
  LC_CTYPE='C' \
  template=template0;\" | psql" && pg_ctlcluster 13 main stop

# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other
# tweaks to get Synapse ready for testing.
# To do this, we copy the old template out of the way and then include it
# with Jinja2.
RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2
COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2

WORKDIR /data

COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf

# Copy the entrypoint
COPY conf/start_for_complement.sh /

# Expose nginx's listener ports
EXPOSE 8008 8448

ENTRYPOINT ["/start_for_complement.sh"]

# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
  CMD /bin/sh /healthcheck.sh
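Outside of Complement, the resulting image can be given a quick manual smoke test along these lines (a rough sketch, not part of this commit; it assumes you generate a throwaway CA for the entrypoint to sign the federation certificate with, since it expects one under `/complement/ca`):

```sh
mkdir -p /tmp/complement-ca
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/complement-ca/ca.key -out /tmp/complement-ca/ca.crt \
  -subj "/CN=Complement test CA"

docker run --rm -p 8008:8008 \
  -v /tmp/complement-ca:/complement/ca \
  -e SERVER_NAME=localhost \
  -e SYNAPSE_COMPLEMENT_DATABASE=sqlite \
  complement-synapse
```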


@@ -1 +1,32 @@
# Unified Complement image for Synapse
This is an image for testing Synapse with [the *Complement* integration test suite][complement].
It contains some insecure defaults that are only suitable for testing purposes,
so **please don't use this image for a production server**.
This multi-purpose image is built on top of `Dockerfile-workers` in the parent directory
and can be switched using environment variables between the following configurations:
- Monolithic Synapse with SQLite (`SYNAPSE_COMPLEMENT_DATABASE=sqlite`)
- Monolithic Synapse with Postgres (`SYNAPSE_COMPLEMENT_DATABASE=postgres`)
- Workerised Synapse with Postgres (`SYNAPSE_COMPLEMENT_DATABASE=postgres` and `SYNAPSE_COMPLEMENT_USE_WORKERS=true`)
The image is self-contained; it contains an integrated Postgres, Redis and Nginx.
## How to get Complement to pass the environment variables through
To pass these environment variables, use [Complement's `COMPLEMENT_SHARE_ENV_PREFIX`][complementEnv]
variable to configure an environment prefix to pass through, then prefix the above options
with that prefix.
Example:
```
COMPLEMENT_SHARE_ENV_PREFIX=PASS_ PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
```
Consult `scripts-dev/complement.sh` in the repository root for a real example.
[complement]: https://github.com/matrix-org/complement
[complementEnv]: https://github.com/matrix-org/complement/pull/382
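Putting it together, a manual invocation against this image might look like the following (a sketch, not part of this commit; it assumes a local Complement checkout):

```sh
cd /path/to/complement
COMPLEMENT_BASE_IMAGE=complement-synapse \
COMPLEMENT_SHARE_ENV_PREFIX=PASS_ \
PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres \
PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=true \
  go test -v ./tests/...
```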


@@ -1,40 +0,0 @@
# This dockerfile builds on top of 'docker/Dockerfile-worker' in matrix-org/synapse
# by including a built-in postgres instance, as well as setting up the homeserver so
# that it is ready for testing via Complement.
#
# Instructions for building this image from those it depends on is detailed in this guide:
# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
FROM matrixdotorg/synapse-workers
# Install postgresql
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y postgresql-13
# Configure a user and create a database for Synapse
RUN pg_ctlcluster 13 main start && su postgres -c "echo \
\"ALTER USER postgres PASSWORD 'somesecret'; \
CREATE DATABASE synapse \
ENCODING 'UTF8' \
LC_COLLATE='C' \
LC_CTYPE='C' \
template=template0;\" | psql" && pg_ctlcluster 13 main stop
# Modify the shared homeserver config with postgres support, certificate setup
# and the disabling of rate-limiting
COPY conf-workers/workers-shared.yaml /conf/workers/shared.yaml
WORKDIR /data
COPY conf-workers/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
# Copy the entrypoint
COPY conf-workers/start-complement-synapse-workers.sh /
# Expose nginx's listener ports
EXPOSE 8008 8448
ENTRYPOINT ["/start-complement-synapse-workers.sh"]
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD /bin/sh /healthcheck.sh


@@ -1,61 +0,0 @@
#!/bin/bash
#
# Default ENTRYPOINT for the docker image used for testing synapse with workers under complement
set -e
function log {
d=$(date +"%Y-%m-%d %H:%M:%S,%3N")
echo "$d $@"
}
# Set the server name of the homeserver
export SYNAPSE_SERVER_NAME=${SERVER_NAME}
# No need to report stats here
export SYNAPSE_REPORT_STATS=no
# Set postgres authentication details which will be placed in the homeserver config file
export POSTGRES_PASSWORD=somesecret
export POSTGRES_USER=postgres
export POSTGRES_HOST=localhost
# Specify the workers to test with
export SYNAPSE_WORKER_TYPES="\
event_persister, \
event_persister, \
background_worker, \
frontend_proxy, \
event_creator, \
user_dir, \
media_repository, \
federation_inbound, \
federation_reader, \
federation_sender, \
synchrotron, \
appservice, \
pusher"
# Add Complement's appservice registration directory, if there is one
# (It can be absent when there are no application services in this test!)
if [ -d /complement/appservice ]; then
export SYNAPSE_AS_REGISTRATION_DIR=/complement/appservice
fi
# Generate a TLS key, then generate a certificate by having Complement's CA sign it
# Note that both the key and certificate are in PEM format (not DER).
openssl genrsa -out /conf/server.tls.key 2048
openssl req -new -key /conf/server.tls.key -out /conf/server.tls.csr \
-subj "/CN=${SERVER_NAME}"
openssl x509 -req -in /conf/server.tls.csr \
-CA /complement/ca/ca.crt -CAkey /complement/ca/ca.key -set_serial 1 \
-out /conf/server.tls.crt
export SYNAPSE_TLS_CERT=/conf/server.tls.crt
export SYNAPSE_TLS_KEY=/conf/server.tls.key
# Run the script that writes the necessary config files and starts supervisord, which in turn
# starts everything else
exec /configure_workers_and_start.py


@@ -1,129 +0,0 @@
## Server ##
server_name: SERVER_NAME
log_config: /conf/log_config.yaml
report_stats: False
signing_key_path: /conf/server.signing.key
trusted_key_servers: []
enable_registration: true
enable_registration_without_verification: true

## Listeners ##
tls_certificate_path: /conf/server.tls.crt
tls_private_key_path: /conf/server.tls.key
bcrypt_rounds: 4
registration_shared_secret: complement

listeners:
  - port: 8448
    bind_addresses: ['::']
    type: http
    tls: true
    resources:
      - names: [federation]

  - port: 8008
    bind_addresses: ['::']
    type: http
    resources:
      - names: [client]

## Database ##
database:
  name: "sqlite3"
  args:
    # We avoid /data, as it is a volume and is not transferred when the container is committed,
    # which is a fundamental necessity in complement.
    database: "/conf/homeserver.db"

## Federation ##
# trust certs signed by the complement CA
federation_custom_ca_list:
  - /complement/ca/ca.crt

# unblacklist RFC1918 addresses
ip_range_blacklist: []

# Disable server rate-limiting
rc_federation:
  window_size: 1000
  sleep_limit: 10
  sleep_delay: 500
  reject_limit: 99999
  concurrent: 3

rc_message:
  per_second: 9999
  burst_count: 9999

rc_registration:
  per_second: 9999
  burst_count: 9999

rc_login:
  address:
    per_second: 9999
    burst_count: 9999
  account:
    per_second: 9999
    burst_count: 9999
  failed_attempts:
    per_second: 9999
    burst_count: 9999

rc_admin_redaction:
  per_second: 9999
  burst_count: 9999

rc_joins:
  local:
    per_second: 9999
    burst_count: 9999
  remote:
    per_second: 9999
    burst_count: 9999

rc_3pid_validation:
  per_second: 1000
  burst_count: 1000

rc_invites:
  per_room:
    per_second: 1000
    burst_count: 1000
  per_user:
    per_second: 1000
    burst_count: 1000

federation_rr_transactions_per_room_per_second: 9999

## API Configuration ##

# A list of application service config files to use
#
app_service_config_files:
AS_REGISTRATION_FILES

## Experimental Features ##

experimental_features:
  # Enable spaces support
  spaces_enabled: true
  # Enable history backfilling support
  msc2716_enabled: true
  # server-side support for partial state in /send_join responses
  msc3706_enabled: true
  # client-side support for partial state in /send_join responses
  faster_joins: true
  # Enable jump to date endpoint
  msc3030_enabled: true

server_notices:
  system_mxid_localpart: _server
  system_mxid_display_name: "Server Alert"
  system_mxid_avatar_url: ""
  room_name: "Server Alert"


@@ -1,24 +0,0 @@
version: 1

formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

filters:
  context:
    (): synapse.logging.context.LoggingContextFilter
    request: ""

handlers:
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]
    # log to stdout, for easier use with 'docker logs'
    stream: 'ext://sys.stdout'

root:
  level: INFO
  handlers: [console]

disable_existing_loggers: false


@@ -1,6 +1,9 @@
[program:postgres]
command=/usr/local/bin/prefix-log /usr/bin/pg_ctlcluster 13 main start --foreground

# Only start if START_POSTGRES=1
autostart=%(ENV_START_POSTGRES)s

# Lower priority number = starts first
priority=1


@@ -1,30 +0,0 @@
#!/bin/sh
set -e
sed -i "s/SERVER_NAME/${SERVER_NAME}/g" /conf/homeserver.yaml
# Add the application service registration files to the homeserver.yaml config
for filename in /complement/appservice/*.yaml; do
[ -f "$filename" ] || break
as_id=$(basename "$filename" .yaml)
# Insert the path to the registration file and the AS_REGISTRATION_FILES marker after
# so we can add the next application service in the next iteration of this for loop
sed -i "s/AS_REGISTRATION_FILES/ - \/complement\/appservice\/${as_id}.yaml\nAS_REGISTRATION_FILES/g" /conf/homeserver.yaml
done
# Remove the AS_REGISTRATION_FILES entry
sed -i "s/AS_REGISTRATION_FILES//g" /conf/homeserver.yaml
# generate an ssl key and cert for the server, signed by the complement CA
openssl genrsa -out /conf/server.tls.key 2048
openssl req -new -key /conf/server.tls.key -out /conf/server.tls.csr \
-subj "/CN=${SERVER_NAME}"
openssl x509 -req -in /conf/server.tls.csr \
-CA /complement/ca/ca.crt -CAkey /complement/ca/ca.key -set_serial 1 \
-out /conf/server.tls.crt
exec python -m synapse.app.homeserver -c /conf/homeserver.yaml "$@"


@@ -0,0 +1,90 @@
#!/bin/bash
#
# Default ENTRYPOINT for the docker image used for testing synapse with workers under complement
set -e
echo "Complement Synapse launcher"
echo " Args: $@"
echo " Env: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE SYNAPSE_COMPLEMENT_USE_WORKERS=$SYNAPSE_COMPLEMENT_USE_WORKERS"
function log {
d=$(date +"%Y-%m-%d %H:%M:%S,%3N")
echo "$d $@"
}
# Set the server name of the homeserver
export SYNAPSE_SERVER_NAME=${SERVER_NAME}
# No need to report stats here
export SYNAPSE_REPORT_STATS=no
case "$SYNAPSE_COMPLEMENT_DATABASE" in
postgres)
# Set postgres authentication details which will be placed in the homeserver config file
export POSTGRES_PASSWORD=somesecret
export POSTGRES_USER=postgres
export POSTGRES_HOST=localhost
# configure supervisord to start postgres
export START_POSTGRES=true
;;
sqlite)
# Configure supervisord not to start Postgres, as we don't need it
export START_POSTGRES=false
;;
*)
echo "Unknown Synapse database: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE" >&2
exit 1
;;
esac
if [[ -n "$SYNAPSE_COMPLEMENT_USE_WORKERS" ]]; then
# Specify the workers to test with
export SYNAPSE_WORKER_TYPES="\
event_persister, \
event_persister, \
background_worker, \
frontend_proxy, \
event_creator, \
user_dir, \
media_repository, \
federation_inbound, \
federation_reader, \
federation_sender, \
synchrotron, \
appservice, \
pusher"
else
# Empty string here means 'main process only'
export SYNAPSE_WORKER_TYPES=""
fi
# Add Complement's appservice registration directory, if there is one
# (It can be absent when there are no application services in this test!)
if [ -d /complement/appservice ]; then
export SYNAPSE_AS_REGISTRATION_DIR=/complement/appservice
fi
# Generate a TLS key, then generate a certificate by having Complement's CA sign it
# Note that both the key and certificate are in PEM format (not DER).
openssl genrsa -out /conf/server.tls.key 2048
openssl req -new -key /conf/server.tls.key -out /conf/server.tls.csr \
-subj "/CN=${SERVER_NAME}"
openssl x509 -req -in /conf/server.tls.csr \
-CA /complement/ca/ca.crt -CAkey /complement/ca/ca.key -set_serial 1 \
-out /conf/server.tls.crt
export SYNAPSE_TLS_CERT=/conf/server.tls.crt
export SYNAPSE_TLS_KEY=/conf/server.tls.key
# Run the script that writes the necessary config files and starts supervisord, which in turn
# starts everything else
exec /configure_workers_and_start.py


@@ -1,3 +1,11 @@
{#
  This file extends the default 'shared' configuration file (from the 'synapse-workers'
  docker image) with Complement-specific tweaks.
  The base configuration is moved out of the default path to `shared-orig.yaml.j2`
  in the Complement Dockerfile and below we include that original file.
#}

## Server ##
report_stats: False
trusted_key_servers: []
@@ -76,10 +84,16 @@ federation_rr_transactions_per_room_per_second: 9999

## Experimental Features ##

experimental_features:
  # Enable spaces support
  spaces_enabled: true
  # Enable history backfilling support
  msc2716_enabled: true
  # server-side support for partial state in /send_join responses
  msc3706_enabled: true
  {% if not workers_in_use %}
  # client-side support for partial state in /send_join responses
  faster_joins: true
  {% endif %}
  # Enable jump to date endpoint
  msc3030_enabled: true

@@ -88,3 +102,5 @@ server_notices:
  system_mxid_display_name: "Server Alert"
  system_mxid_avatar_url: ""
  room_name: "Server Alert"

{% include "shared-orig.yaml.j2" %}


@@ -3,8 +3,10 @@
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.

{% if enable_redis %}
redis:
  enabled: true
{% endif %}

{% if appservice_registrations is not none %}
## Application Services ##


@@ -28,6 +28,9 @@ stderr_logfile_maxbytes=0
username=redis
autorestart=true

# Redis can be disabled if the image is being used without workers
autostart={{ enable_redis }}

[program:synapse_main]
command=/usr/local/bin/prefix-log /usr/local/bin/python -m synapse.app.homeserver --config-path="{{ main_config_path }}" --config-path=/conf/workers/shared.yaml
priority=10
@@ -41,4 +44,4 @@ autorestart=unexpected
exitcodes=0

# Additional process blocks
{{ worker_config }}


@@ -37,8 +37,8 @@ import sys
from pathlib import Path
from typing import Any, Dict, List, Mapping, MutableMapping, NoReturn, Set

import yaml
from jinja2 import Environment, FileSystemLoader

MAIN_PROCESS_HTTP_LISTENER_PORT = 8080

@@ -236,12 +236,13 @@ def convert(src: str, dst: str, **template_vars: object) -> None:
        template_vars: The arguments to replace placeholder variables in the template with.
    """
    # Read the template file
    # We disable autoescape to prevent template variables from being escaped,
    # as we're not using HTML.
    env = Environment(loader=FileSystemLoader(os.path.dirname(src)), autoescape=False)
    template = env.get_template(os.path.basename(src))

    # Generate a string from the template.
    rendered = template.render(**template_vars)

    # Write the generated contents to a file
    #
@@ -378,8 +379,8 @@ def generate_worker_files(
    nginx_locations = {}

    # Read the desired worker configuration from the environment
    worker_types_env = environ.get("SYNAPSE_WORKER_TYPES", "").strip()
    if not worker_types_env:
        # No workers, just the main process
        worker_types = []
    else:
@@ -506,12 +507,16 @@ def generate_worker_files(
        if reg_path.suffix.lower() in (".yaml", ".yml")
    ]

    workers_in_use = len(worker_types) > 0

    # Shared homeserver config
    convert(
        "/conf/shared.yaml.j2",
        "/conf/workers/shared.yaml",
        shared_worker_config=yaml.dump(shared_config),
        appservice_registrations=appservice_registrations,
        enable_redis=workers_in_use,
        workers_in_use=workers_in_use,
    )

    # Nginx config
@@ -531,6 +536,7 @@
        "/etc/supervisor/supervisord.conf",
        main_config_path=config_path,
        worker_config=supervisord_config,
        enable_redis=workers_in_use,
    )

    # healthcheck config


@@ -304,6 +304,11 @@ To run a specific test, you can specify the whole name structure:

COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages/parallel/Historical_events_resolve_in_the_correct_order
```

The above will run a monolithic (single-process) Synapse with SQLite as the database. For other configurations, try:

- Passing `POSTGRES=1` as an environment variable to use the Postgres database instead.
- Passing `WORKERS=1` as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
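For example (sketches; adjust the Complement checkout path to your setup):

```sh
# Monolithic Synapse backed by Postgres:
POSTGRES=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh

# Workerised Synapse (Postgres is implied):
WORKERS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh
```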
### Access database for homeserver after Complement test runs. ### Access database for homeserver after Complement test runs.


@@ -43,17 +43,29 @@ fi

# Build the base Synapse image from the local checkout
docker build -t matrixdotorg/synapse -f "docker/Dockerfile" .

# Build the workers docker image (from the base Synapse image we just built).
docker build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .

# Build the unified Complement image (from the worker Synapse image we just built).
docker build -t complement-synapse \
  -f "docker/complement/Dockerfile" "docker/complement"

export COMPLEMENT_BASE_IMAGE=complement-synapse

extra_test_args=()

test_tags="synapse_blacklist,msc2716,msc3030,msc3787"

# All environment variables starting with PASS_ will be shared.
# (The prefix is stripped off before reaching the container.)
export COMPLEMENT_SHARE_ENV_PREFIX=PASS_

if [[ -n "$WORKERS" ]]; then
  # Use workers.
  export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=true

  # Workers can only use Postgres as a database.
  export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres

  # And provide some more configuration to complement.
@@ -65,17 +77,18 @@ if [[ -n "$WORKERS" ]]; then
  # ... and it takes longer than 10m to run the whole suite.
  extra_test_args+=("-timeout=60m")
else
  export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=

  if [[ -n "$POSTGRES" ]]; then
    export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
  else
    export PASS_SYNAPSE_COMPLEMENT_DATABASE=sqlite
  fi

  # We only test faster room joins on monoliths, because they are purposefully
  # being developed without worker support to start with.
  test_tags="$test_tags,faster_joins"
fi

# Run the tests!
echo "Images built; running complement"
cd "$COMPLEMENT_DIR"