Merge remote-tracking branch 'upstream/release-v1.47'

Commit d61a2339b1 by Tulir Asokan, 2021-11-09 16:03:12 +02:00
167 changed files with 3999 additions and 1283 deletions


@@ -3,7 +3,7 @@
 # Test for the export-data admin command against sqlite and postgres
 set -xe
-cd `dirname $0`/../..
+cd "$(dirname "$0")/../.."
 echo "--- Install dependencies"


@@ -7,7 +7,7 @@
 set -xe
-cd `dirname $0`/../..
+cd "$(dirname "$0")/../.."
 echo "--- Install dependencies"


@ -1,12 +1,13 @@
### Pull Request Checklist ### Pull Request Checklist
<!-- Please read CONTRIBUTING.md before submitting your pull request --> <!-- Please read https://matrix-org.github.io/synapse/latest/development/contributing_guide.html before submitting your pull request -->
* [ ] Pull request is based on the develop branch * [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog). The entry should: * [ ] Pull request includes a [changelog file](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#changelog). The entry should:
- Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.". - Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
- Use markdown where necessary, mostly for `code blocks`. - Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!). - End with either a period (.) or an exclamation mark (!).
- Start with a capital letter. - Start with a capital letter.
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#sign-off) * [ ] Pull request includes a [sign off](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#code-style)) * [ ] [Code style](https://matrix-org.github.io/synapse/latest/code_style.html) is correct
(run the [linters](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
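For contributors who have not added one before: the changelog entry the checklist asks for is a one-line file under `changelog.d/`, named after the PR number and change type. A hypothetical example (the PR number `1234` is made up):

```sh
# changelog.d/1234.bugfix: the filename encodes the PR number and change type.
echo "Fixed a bug that prevented receiving messages from other servers." > changelog.d/1234.bugfix
```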


@ -1,3 +1,90 @@
Synapse 1.47.0rc1 (2021-11-09)
==============================
Deprecations and Removals
-------------------------
- The `user_may_create_room_with_invites` module callback is now deprecated. Please refer to the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#upgrading-to-v1470) for more information. ([\#11206](https://github.com/matrix-org/synapse/issues/11206))
- Remove deprecated admin API to delete rooms (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). ([\#11213](https://github.com/matrix-org/synapse/issues/11213))
Features
--------
- Advertise support for Client-Server API r0.6.1. ([\#11097](https://github.com/matrix-org/synapse/issues/11097))
- Add search by room ID and room alias to the List Room admin API. ([\#11099](https://github.com/matrix-org/synapse/issues/11099))
- Add an `on_new_event` third-party rules callback to allow Synapse modules to act after an event has been sent into a room. ([\#11126](https://github.com/matrix-org/synapse/issues/11126))
- Add a module API method to update a user's membership in a room. ([\#11147](https://github.com/matrix-org/synapse/issues/11147))
- Add metrics for thread pool usage. ([\#11178](https://github.com/matrix-org/synapse/issues/11178))
- Support the stable room type field for [MSC3288](https://github.com/matrix-org/matrix-doc/pull/3288). ([\#11187](https://github.com/matrix-org/synapse/issues/11187))
- Add a module API method to retrieve the current state of a room. ([\#11204](https://github.com/matrix-org/synapse/issues/11204))
- Calculate a default value for `public_baseurl` based on `server_name`. ([\#11210](https://github.com/matrix-org/synapse/issues/11210))
- Add support for serving `/.well-known/matrix/server` files, to redirect federation traffic to port 443. ([\#11211](https://github.com/matrix-org/synapse/issues/11211))
- Add admin APIs to pause, start and check the status of background updates. ([\#11263](https://github.com/matrix-org/synapse/issues/11263))
Bugfixes
--------
- Fix a long-standing bug which allowed hidden devices to receive to-device messages, resulting in unnecessary database bloat. ([\#10097](https://github.com/matrix-org/synapse/issues/10097))
- Fix a long-standing bug where messages in the `device_inbox` table for deleted devices would persist indefinitely. Contributed by @dklimpel and @JohannesKleine. ([\#10969](https://github.com/matrix-org/synapse/issues/10969), [\#11212](https://github.com/matrix-org/synapse/issues/11212))
- Do not accept events if a third-party rule `check_event_allowed` callback raises an exception. ([\#11033](https://github.com/matrix-org/synapse/issues/11033))
- Fix long-standing bug where verification requests could fail in certain cases if a federation whitelist was in place but did not include your own homeserver. ([\#11129](https://github.com/matrix-org/synapse/issues/11129))
- Allow an empty list of `state_events_at_start` to be sent when using the [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) `/batch_send` endpoint and the author of the historical messages is already part of the current room state at the given `?prev_event_id`. ([\#11188](https://github.com/matrix-org/synapse/issues/11188))
- Fix a bug introduced in Synapse 1.45.0 which prevented the `synapse_review_recent_signups` script from running. Contributed by @samuel-p. ([\#11191](https://github.com/matrix-org/synapse/issues/11191))
- Delete `to_device` messages for hidden devices that will never be read, reducing database size. ([\#11199](https://github.com/matrix-org/synapse/issues/11199))
- Fix a long-standing bug wherein a missing `Content-Type` header when downloading remote media would cause Synapse to throw an error. ([\#11200](https://github.com/matrix-org/synapse/issues/11200))
- Fix a long-standing bug which could result in serialization errors and potentially duplicate transaction data when sending ephemeral events to application services. Contributed by @Fizzadar at Beeper. ([\#11207](https://github.com/matrix-org/synapse/issues/11207))
- Fix a bug introduced in Synapse 1.35.0 which made it impossible to join rooms that return a `send_join` response containing floats. ([\#11217](https://github.com/matrix-org/synapse/issues/11217))
- Fix long-standing bug where cross signing keys were not included in the response to `/r0/keys/query` the first time a remote user was queried. ([\#11234](https://github.com/matrix-org/synapse/issues/11234))
- Fix a long-standing bug where all requests that read events from the database could get stuck as a result of losing the database connection. ([\#11240](https://github.com/matrix-org/synapse/issues/11240))
- Fix a bug preventing Synapse from being rolled back to an earlier version when using workers. ([\#11255](https://github.com/matrix-org/synapse/issues/11255), [\#11276](https://github.com/matrix-org/synapse/issues/11276))
- Fix a bug introduced in Synapse 1.37.1 which caused a remote event being processed by a worker to not get processed on restart if the worker was killed. ([\#11262](https://github.com/matrix-org/synapse/issues/11262))
- Only allow old Element/Riot Android clients to send read receipts without a request body. All other clients must include a request body as required by the specification. Contributed by @rogersheu. ([\#11157](https://github.com/matrix-org/synapse/issues/11157))
Updates to the Docker image
---------------------------
- Avoid changing user ID when started as a non-root user, and no explicit `UID` is set. ([\#11209](https://github.com/matrix-org/synapse/issues/11209))
Improved Documentation
----------------------
- Improve example HAProxy config in the docs to properly handle HTTP `Host` headers with port information. This is required for federation over port 443 to work correctly. ([\#11128](https://github.com/matrix-org/synapse/issues/11128))
- Add documentation for using Authentik as an OpenID Connect Identity Provider. Contributed by @samip5. ([\#11151](https://github.com/matrix-org/synapse/issues/11151))
- Clarify lack of support for Windows. ([\#11198](https://github.com/matrix-org/synapse/issues/11198))
- Improve code formatting and fix a few typos in docs. Contributed by @sumnerevans at Beeper. ([\#11221](https://github.com/matrix-org/synapse/issues/11221))
- Add documentation for using LemonLDAP as an OpenID Connect Identity Provider. Contributed by @l00ptr. ([\#11257](https://github.com/matrix-org/synapse/issues/11257))
Internal Changes
----------------
- Add type annotations for the `log_function` decorator. ([\#10943](https://github.com/matrix-org/synapse/issues/10943))
- Add type hints to `synapse.events`. ([\#11098](https://github.com/matrix-org/synapse/issues/11098))
- Remove and document unnecessary `RoomStreamToken` checks in application service ephemeral event code. ([\#11137](https://github.com/matrix-org/synapse/issues/11137))
- Add type hints so that `synapse.http` passes `mypy` checks. ([\#11164](https://github.com/matrix-org/synapse/issues/11164))
- Update scripts to pass Shellcheck lints. ([\#11166](https://github.com/matrix-org/synapse/issues/11166))
- Add knock information in admin export. Contributed by Rafael Gonçalves. ([\#11171](https://github.com/matrix-org/synapse/issues/11171))
- Add tests to check that `ClientIpStore.get_last_client_ip_by_device` and `get_user_ip_and_agents` combine database and in-memory data correctly. ([\#11179](https://github.com/matrix-org/synapse/issues/11179))
- Refactor `Filter` to check different fields depending on the data type. ([\#11194](https://github.com/matrix-org/synapse/issues/11194))
- Improve type hints for the relations datastore. ([\#11205](https://github.com/matrix-org/synapse/issues/11205))
- Replace outdated links in the pull request checklist with links to the rendered documentation. ([\#11225](https://github.com/matrix-org/synapse/issues/11225))
- Fix a bug in unit test `test_block_room_and_not_purge`. ([\#11226](https://github.com/matrix-org/synapse/issues/11226))
- In `ObservableDeferred`, run observers in the order they were registered. ([\#11229](https://github.com/matrix-org/synapse/issues/11229))
- Minor speed up to start up times and getting updates for groups by adding missing index to `local_group_updates.stream_id`. ([\#11231](https://github.com/matrix-org/synapse/issues/11231))
- Add `twine` and `towncrier` as dev dependencies, as they're used by the release script. ([\#11233](https://github.com/matrix-org/synapse/issues/11233))
- Allow `stream_writers.typing` config to be a list of one worker. ([\#11237](https://github.com/matrix-org/synapse/issues/11237))
- Remove debugging statement in tests. ([\#11239](https://github.com/matrix-org/synapse/issues/11239))
- Fix [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical messages backfilling in random order on remote homeservers. ([\#11244](https://github.com/matrix-org/synapse/issues/11244))
- Add an additional test for the `cachedList` method decorator. ([\#11246](https://github.com/matrix-org/synapse/issues/11246))
- Make minor correction to the type of `auth_checkers` callbacks. ([\#11253](https://github.com/matrix-org/synapse/issues/11253))
- Clean up trivial aspects of the Debian package build tooling. ([\#11269](https://github.com/matrix-org/synapse/issues/11269), [\#11273](https://github.com/matrix-org/synapse/issues/11273))
- Blacklist new SyTest that checks that key uploads are valid pending the validation being implemented in Synapse. ([\#11270](https://github.com/matrix-org/synapse/issues/11270))
Synapse 1.46.0 (2021-11-02) Synapse 1.46.0 (2021-11-02)
=========================== ===========================


@@ -84,7 +84,9 @@ AUTH="Authorization: Bearer $TOKEN"
 ###################################################################################################
 # finally start pruning the room:
 ###################################################################################################
-POSTDATA='{"delete_local_events":"true"}' # this will really delete local events, so the messages in the room really disappear unless they are restored by remote federation
+# this will really delete local events, so the messages in the room really
+# disappear unless they are restored by remote federation. This is because
+# we pass {"delete_local_events":true} to the curl invocation below.
 
 for ROOM in "${ROOMS_ARRAY[@]}"; do
     echo "########################################### $(date) ################# "
@@ -104,7 +106,7 @@ for ROOM in "${ROOMS_ARRAY[@]}"; do
     SLEEP=2
     set -x
     # call purge
-    OUT=$(curl --header "$AUTH" -s -d $POSTDATA POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
+    OUT=$(curl --header "$AUTH" -s -d '{"delete_local_events":true}' POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
     PURGE_ID=$(echo "$OUT" |grep purge_id|cut -d'"' -f4 )
     if [ "$PURGE_ID" == "" ]; then
         # probably the history purge is already in progress for $ROOM
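For reference, the `purge_id` captured above is what the rest of this script polls against the purge-status endpoint; a sketch of that call, using the variables defined earlier in the script:

```sh
# Illustrative: poll the status of the purge started above.
OUT=$(curl --header "$AUTH" -s GET "$API_URL/admin/purge_history_status/$PURGE_ID")
# The JSON response contains "status": one of "active", "complete" or "failed".
```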


@@ -15,7 +15,7 @@ export DH_VIRTUALENV_INSTALL_ROOT=/opt/venvs
 # python won't look in the right directory. At least this way, the error will
 # be a *bit* more obvious.
 #
-SNAKE=`readlink -e /usr/bin/python3`
+SNAKE=$(readlink -e /usr/bin/python3)
 
 # try to set the CFLAGS so any compiled C extensions are compiled with the most
 # generic as possible x64 instructions, so that compiling it on a new Intel chip
@@ -24,7 +24,7 @@ SNAKE=`readlink -e /usr/bin/python3`
 # TODO: add similar things for non-amd64, or figure out a more generic way to
 # do this.
-case `dpkg-architecture -q DEB_HOST_ARCH` in
+case $(dpkg-architecture -q DEB_HOST_ARCH) in
     amd64)
         export CFLAGS=-march=x86-64
         ;;
@@ -40,6 +40,7 @@ dh_virtualenv \
     --upgrade-pip \
     --preinstall="lxml" \
     --preinstall="mock" \
+    --preinstall="wheel" \
     --extra-pip-arg="--no-cache-dir" \
     --extra-pip-arg="--compile" \
     --extras="all,systemd,test"
@@ -56,8 +57,8 @@ case "$DEB_BUILD_OPTIONS" in
     *)
         # Copy tests to a temporary directory so that we can put them on the
         # PYTHONPATH without putting the uninstalled synapse on the pythonpath.
-        tmpdir=`mktemp -d`
-        trap "rm -r $tmpdir" EXIT
+        tmpdir=$(mktemp -d)
+        trap 'rm -r $tmpdir' EXIT
 
         cp -r tests "$tmpdir"
@@ -98,7 +99,7 @@ esac
     --output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
 
 # add a dependency on the right version of python to substvars.
-PYPKG=`basename $SNAKE`
+PYPKG=$(basename "$SNAKE")
 echo "synapse:pydepends=$PYPKG" >> debian/matrix-synapse-py3.substvars

debian/changelog

@@ -1,3 +1,14 @@
+matrix-synapse-py3 (1.47.0+nmu1) stable; urgency=medium
+
+  [ Dan Callahan ]
+  * Update scripts to pass Shellcheck lints.
+  * Remove unused Vagrant scripts from debian/ directory.
+  * Allow building Debian packages for any architecture, not just amd64.
+  * Preinstall the "wheel" package when building virtualenvs.
+  * Do not error if /etc/default/matrix-synapse is missing.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 09 Nov 2021 12:16:43 +0000
+
 matrix-synapse-py3 (1.46.0) stable; urgency=medium
 
   [ Richard van der Hoff ]

debian/control

@@ -19,7 +19,7 @@ Standards-Version: 3.9.8
 Homepage: https://github.com/matrix-org/synapse
 
 Package: matrix-synapse-py3
-Architecture: amd64
+Architecture: any
 Provides: matrix-synapse
 Conflicts:
  matrix-synapse (<< 0.34.0.1-0matrix2),


@@ -2,6 +2,7 @@
 set -e
 
+# shellcheck disable=SC1091
 . /usr/share/debconf/confmodule
 
 # try to update the debconf db according to whatever is in the config files


@@ -1,5 +1,6 @@
 #!/bin/sh -e
 
+# shellcheck disable=SC1091
 . /usr/share/debconf/confmodule
 
 CONFIGFILE_SERVERNAME="/etc/matrix-synapse/conf.d/server_name.yaml"


@@ -5,7 +5,7 @@ Description=Synapse Matrix homeserver
 Type=notify
 User=matrix-synapse
 WorkingDirectory=/var/lib/matrix-synapse
-EnvironmentFile=/etc/default/matrix-synapse
+EnvironmentFile=-/etc/default/matrix-synapse
 ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys
 ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/
 ExecReload=/bin/kill -HUP $MAINPID
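The added `-` prefix on `EnvironmentFile=` tells systemd to continue silently if the file is missing instead of failing the unit. When the file does exist, it simply sets environment variables; an illustrative (not shipped) example:

```sh
# /etc/default/matrix-synapse (optional overrides, read at service start)
SYNAPSE_CACHE_FACTOR=2.0
```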


@ -1,2 +0,0 @@
.vagrant
*.log


@@ -1,23 +0,0 @@
-#!/bin/bash
-#
-# provisioning script for vagrant boxes for testing the matrix-synapse debs.
-#
-# Will install the most recent matrix-synapse-py3 deb for this platform from
-# the /debs directory.
-
-set -e
-
-apt-get update
-apt-get install -y lsb-release
-
-deb=`ls /debs/matrix-synapse-py3_*+$(lsb_release -cs)*.deb | sort | tail -n1`
-
-debconf-set-selections <<EOF
-matrix-synapse matrix-synapse/report-stats boolean false
-matrix-synapse matrix-synapse/server-name string localhost:18448
-EOF
-
-dpkg -i "$deb"
-
-sed -i -e '/port: 8...$/{s/8448/18448/; s/8008/18008/}' -e '$aregistration_shared_secret: secret' /etc/matrix-synapse/homeserver.yaml
-systemctl restart matrix-synapse


@@ -1,13 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-ver = `cd ../../..; dpkg-parsechangelog -S Version`.strip()
-
-Vagrant.configure("2") do |config|
-  config.vm.box = "debian/stretch64"
-
-  config.vm.synced_folder ".", "/vagrant", disabled: true
-  config.vm.synced_folder "../../../../debs", "/debs", type: "nfs"
-
-  config.vm.provision "shell", path: "../provision.sh"
-end


@@ -1,10 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-Vagrant.configure("2") do |config|
-  config.vm.box = "ubuntu/xenial64"
-
-  config.vm.synced_folder ".", "/vagrant", disabled: true
-  config.vm.synced_folder "../../../../debs", "/debs"
-  config.vm.provision "shell", path: "../provision.sh"
-end


@@ -6,14 +6,14 @@ DIR="$( cd "$( dirname "$0" )" && pwd )"
 
 PID_FILE="$DIR/servers.pid"
 
-if [ -f $PID_FILE ]; then
+if [ -f "$PID_FILE" ]; then
     echo "servers.pid exists!"
     exit 1
 fi
 
 for port in 8080 8081 8082; do
-    rm -rf $DIR/$port
-    rm -rf $DIR/media_store.$port
+    rm -rf "${DIR:?}/$port"
+    rm -rf "$DIR/media_store.$port"
 done
 
-rm -rf $DIR/etc
+rm -rf "${DIR:?}/etc"
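The `${DIR:?}` expansions introduced here are a safety guard: if `DIR` is ever unset or empty, the shell aborts with an error instead of letting `rm -rf` operate on `/`. For example:

```sh
# With DIR empty, "${DIR:?}" makes the shell exit with an error,
# where an unguarded "$DIR/etc" would have expanded to "/etc".
DIR=""
rm -rf "${DIR:?}/etc"   # -> DIR: parameter null or not set
```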


@@ -4,21 +4,22 @@ DIR="$( cd "$( dirname "$0" )" && pwd )"
 
 CWD=$(pwd)
 
-cd "$DIR/.."
+cd "$DIR/.." || exit
 
 mkdir -p demo/etc
 
-export PYTHONPATH=$(readlink -f $(pwd))
+PYTHONPATH=$(readlink -f "$(pwd)")
+export PYTHONPATH
 
-echo $PYTHONPATH
+echo "$PYTHONPATH"
 
 for port in 8080 8081 8082; do
     echo "Starting server on port $port... "
 
     https_port=$((port + 400))
    mkdir -p demo/$port
-    pushd demo/$port
+    pushd demo/$port || exit
 
     #rm $DIR/etc/$port.config
     python3 -m synapse.app.homeserver \
@@ -27,75 +28,78 @@ for port in 8080 8081 8082; do
         --config-path "$DIR/etc/$port.config" \
         --report-stats no
 
-    if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
-        printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
-        echo "public_baseurl: http://localhost:$port/" >> $DIR/etc/$port.config
-        echo 'enable_registration: true' >> $DIR/etc/$port.config
-
-        # Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
-        # accidentaly bork me with your fancy settings.
-        listeners=$(cat <<-PORTLISTENERS
-        # Configure server to listen on both $https_port and $port
-        # This overides some of the default settings above
-        listeners:
-          - port: $https_port
-            type: http
-            tls: true
-            resources:
-              - names: [client, federation]
-
-          - port: $port
-            tls: false
-            bind_addresses: ['::1', '127.0.0.1']
-            type: http
-            x_forwarded: true
-            resources:
-              - names: [client, federation]
-                compress: false
-        PORTLISTENERS
-        )
-        echo "${listeners}" >> $DIR/etc/$port.config
-
-        # Disable tls for the servers
-        printf '\n\n# Disable tls on the servers.' >> $DIR/etc/$port.config
-        echo '# DO NOT USE IN PRODUCTION' >> $DIR/etc/$port.config
-        echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true' >> $DIR/etc/$port.config
-        echo 'federation_verify_certificates: false' >> $DIR/etc/$port.config
-
-        # Set tls paths
-        echo "tls_certificate_path: \"$DIR/etc/localhost:$https_port.tls.crt\"" >> $DIR/etc/$port.config
-        echo "tls_private_key_path: \"$DIR/etc/localhost:$https_port.tls.key\"" >> $DIR/etc/$port.config
-
-        # Generate tls keys
-        openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
-
-        # Ignore keys from the trusted keys server
-        echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
-        echo 'trusted_key_servers:' >> $DIR/etc/$port.config
-        echo '  - server_name: "matrix.org"' >> $DIR/etc/$port.config
-        echo '    accept_keys_insecurely: true' >> $DIR/etc/$port.config
-
-        # Reduce the blacklist
-        blacklist=$(cat <<-BLACK
-        # Set the blacklist so that it doesn't include 127.0.0.1, ::1
-        federation_ip_range_blacklist:
-          - '10.0.0.0/8'
-          - '172.16.0.0/12'
-          - '192.168.0.0/16'
-          - '100.64.0.0/10'
-          - '169.254.0.0/16'
-          - 'fe80::/64'
-          - 'fc00::/7'
-        BLACK
-        )
-        echo "${blacklist}" >> $DIR/etc/$port.config
+    if ! grep -F "Customisation made by demo/start.sh" -q "$DIR/etc/$port.config"; then
+        # Generate tls keys
+        openssl req -x509 -newkey rsa:4096 -keyout "$DIR/etc/localhost:$https_port.tls.key" -out "$DIR/etc/localhost:$https_port.tls.crt" -days 365 -nodes -subj "/O=matrix"
+
+        # Regenerate configuration
+        {
+            printf '\n\n# Customisation made by demo/start.sh\n'
+            echo "public_baseurl: http://localhost:$port/"
+            echo 'enable_registration: true'
+
+            # Warning, this heredoc depends on the interaction of tabs and spaces.
+            # Please don't accidentaly bork me with your fancy settings.
+            listeners=$(cat <<-PORTLISTENERS
+            # Configure server to listen on both $https_port and $port
+            # This overides some of the default settings above
+            listeners:
+              - port: $https_port
+                type: http
+                tls: true
+                resources:
+                  - names: [client, federation]
+
+              - port: $port
+                tls: false
+                bind_addresses: ['::1', '127.0.0.1']
+                type: http
+                x_forwarded: true
+                resources:
+                  - names: [client, federation]
+                    compress: false
+            PORTLISTENERS
+            )
+            echo "${listeners}"
+
+            # Disable tls for the servers
+            printf '\n\n# Disable tls on the servers.'
+            echo '# DO NOT USE IN PRODUCTION'
+            echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true'
+            echo 'federation_verify_certificates: false'
+
+            # Set tls paths
+            echo "tls_certificate_path: \"$DIR/etc/localhost:$https_port.tls.crt\""
+            echo "tls_private_key_path: \"$DIR/etc/localhost:$https_port.tls.key\""
+
+            # Ignore keys from the trusted keys server
+            echo '# Ignore keys from the trusted keys server'
+            echo 'trusted_key_servers:'
+            echo '  - server_name: "matrix.org"'
+            echo '    accept_keys_insecurely: true'
+
+            # Reduce the blacklist
+            blacklist=$(cat <<-BLACK
+            # Set the blacklist so that it doesn't include 127.0.0.1, ::1
+            federation_ip_range_blacklist:
+              - '10.0.0.0/8'
+              - '172.16.0.0/12'
+              - '192.168.0.0/16'
+              - '100.64.0.0/10'
+              - '169.254.0.0/16'
+              - 'fe80::/64'
+              - 'fc00::/7'
+            BLACK
+            )
+            echo "${blacklist}"
+        } >> "$DIR/etc/$port.config"
     fi
 
     # Check script parameters
     if [ $# -eq 1 ]; then
-        if [ $1 = "--no-rate-limit" ]; then
+        if [ "$1" = "--no-rate-limit" ]; then
 
             # Disable any rate limiting
             ratelimiting=$(cat <<-RC
@@ -137,22 +141,22 @@ for port in 8080 8081 8082; do
             burst_count: 1000
             RC
             )
-            echo "${ratelimiting}" >> $DIR/etc/$port.config
+            echo "${ratelimiting}" >> "$DIR/etc/$port.config"
         fi
     fi
 
-    if ! grep -F "full_twisted_stacktraces" -q $DIR/etc/$port.config; then
-        echo "full_twisted_stacktraces: true" >> $DIR/etc/$port.config
+    if ! grep -F "full_twisted_stacktraces" -q "$DIR/etc/$port.config"; then
+        echo "full_twisted_stacktraces: true" >> "$DIR/etc/$port.config"
     fi
-    if ! grep -F "report_stats" -q $DIR/etc/$port.config ; then
-        echo "report_stats: false" >> $DIR/etc/$port.config
+    if ! grep -F "report_stats" -q "$DIR/etc/$port.config" ; then
+        echo "report_stats: false" >> "$DIR/etc/$port.config"
     fi
 
     python3 -m synapse.app.homeserver \
         --config-path "$DIR/etc/$port.config" \
         -D \
 
-    popd
+    popd || exit
 done
 
-cd "$CWD"
+cd "$CWD" || exit


@@ -8,7 +8,7 @@ for pid_file in $FILES; do
     pid=$(cat "$pid_file")
     if [[ $pid ]]; then
         echo "Killing $pid_file with $pid"
-        kill $pid
+        kill "$pid"
     fi
 done


@@ -65,7 +65,8 @@ The following environment variables are supported in `generate` mode:
 * `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
   such as the database and media store. Defaults to `/data`.
 * `UID`, `GID`: the user id and group id to use for creating the data
-  directories. Defaults to `991`, `991`.
+  directories. If unset, and no user is set via `docker run --user`, defaults
+  to `991`, `991`.
 
 ## Running synapse
@@ -97,7 +98,9 @@ The following environment variables are supported in `run` mode:
   `<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
 * `SYNAPSE_WORKER`: module to execute, used when running synapse with workers.
   Defaults to `synapse.app.homeserver`, which is suitable for non-worker mode.
-* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
+* `UID`, `GID`: the user and group id to run Synapse as. If unset, and no user
+  is set via `docker run --user`, defaults to `991`, `991`. Note that this user
+  must have permission to read the config files, and write to the data directories.
 * `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
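To make the new precedence concrete, here is a hypothetical invocation (paths are placeholders; `matrixdotorg/synapse` is the project's published image):

```sh
# Run as uid/gid 1000 via the environment variables described above…
docker run -d --name synapse -v /path/to/data:/data \
    -e UID=1000 -e GID=1000 matrixdotorg/synapse:latest

# …or via docker's own flag; with UID/GID unset, the container now
# keeps this user rather than switching to 991:991.
docker run -d --name synapse -v /path/to/data:/data \
    --user 1000:1000 matrixdotorg/synapse:latest
```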
 For more complex setups (e.g. for workers) you can also pass your args directly to synapse using `run` mode. For example like this:
 
@@ -186,7 +189,7 @@ point to another Dockerfile.
 ## Disabling the healthcheck
 
 If you are using a non-standard port or tls inside docker you can disable the healthcheck
 whilst running the above `docker run` commands.
 
 ```
 --no-healthcheck
@@ -212,7 +215,7 @@ If you wish to point the healthcheck at a different port with docker command, add the following
 ## Setting the healthcheck in docker-compose file
 
 You can add the following to set a custom healthcheck in a docker compose file.
 You will need docker-compose version >2.1 for this to work.
 
 ```
 healthcheck:
@@ -226,5 +229,5 @@ healthcheck:
 ## Using jemalloc
 
 Jemalloc is embedded in the image and will be used instead of the default allocator.
 You can read about jemalloc by reading the Synapse
 [README](https://github.com/matrix-org/synapse/blob/HEAD/README.rst#help-synapse-is-slow-and-eats-all-my-ram-cpu).


@@ -5,7 +5,7 @@
 set -ex
 
 # Get the codename from distro env
-DIST=`cut -d ':' -f2 <<< $distro`
+DIST=$(cut -d ':' -f2 <<< "${distro:?}")
 
 # we get a read-only copy of the source: make a writeable copy
 cp -aT /synapse/source /synapse/build
@@ -17,7 +17,7 @@ cd /synapse/build
 # Section to determine which "component" it should go into (see
 # https://manpages.debian.org/stretch/reprepro/reprepro.1.en.html#GUESSING)
 
-DEB_VERSION=`dpkg-parsechangelog -SVersion`
+DEB_VERSION=$(dpkg-parsechangelog -SVersion)
 
 case $DEB_VERSION in
     *~rc*|*~a*|*~b*|*~c*)
         sed -ie '/^Section:/c\Section: prerelease' debian/control


@@ -120,6 +120,7 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
     ]
 
     if ownership is not None:
+        log(f"Setting ownership on /data to {ownership}")
         subprocess.check_output(["chown", "-R", ownership, "/data"])
         args = ["gosu", ownership] + args
 
@@ -144,12 +145,18 @@ def run_generate_config(environ, ownership):
     config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
     data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
 
+    if ownership is not None:
+        # make sure that synapse has perms to write to the data dir.
+        log(f"Setting ownership on {data_dir} to {ownership}")
+        subprocess.check_output(["chown", ownership, data_dir])
+
     # create a suitable log config from our template
     log_config_file = "%s/%s.log.config" % (config_dir, server_name)
     if not os.path.exists(log_config_file):
         log("Creating log config %s" % (log_config_file,))
         convert("/conf/log.config", log_config_file, environ)
 
+    # generate the main config file, and a signing key.
     args = [
         "python",
         "-m",
@@ -168,29 +175,23 @@ def run_generate_config(environ, ownership):
         "--open-private-ports",
     ]
     # log("running %s" % (args, ))
-
-    if ownership is not None:
-        # make sure that synapse has perms to write to the data dir.
-        subprocess.check_output(["chown", ownership, data_dir])
-
-        args = ["gosu", ownership] + args
-        os.execv("/usr/sbin/gosu", args)
-    else:
-        os.execv("/usr/local/bin/python", args)
+    os.execv("/usr/local/bin/python", args)
 
 
 def main(args, environ):
     mode = args[1] if len(args) > 1 else "run"
-    desired_uid = int(environ.get("UID", "991"))
-    desired_gid = int(environ.get("GID", "991"))
-    synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")
-    if (desired_uid == os.getuid()) and (desired_gid == os.getgid()):
-        ownership = None
-    else:
-        ownership = "{}:{}".format(desired_uid, desired_gid)
 
-    if ownership is None:
-        log("Will not perform chmod/gosu as UserID already matches request")
+    # if we were given an explicit user to switch to, do so
+    ownership = None
+    if "UID" in environ:
+        desired_uid = int(environ["UID"])
+        desired_gid = int(environ.get("GID", "991"))
+        ownership = f"{desired_uid}:{desired_gid}"
+    elif os.getuid() == 0:
+        # otherwise, if we are running as root, use user 991
+        ownership = "991:991"
+
+    synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")
 
     # In generate mode, generate a configuration and missing keys, then exit
     if mode == "generate":


@@ -15,12 +15,12 @@ in `homeserver.yaml`, to the list of authorized domains. If you have not set
 1. Agree to the terms of service and submit.
 1. Copy your site key and secret key and add them to your `homeserver.yaml`
    configuration file
-   ```
+   ```yaml
    recaptcha_public_key: YOUR_SITE_KEY
    recaptcha_private_key: YOUR_SECRET_KEY
    ```
 1. Enable the CAPTCHA for new registrations
-   ```
+   ```yaml
    enable_registration_captcha: true
    ```
 1. Go to the settings page for the CAPTCHA you just created


@@ -51,6 +51,7 @@
 - [Administration](usage/administration/README.md)
   - [Admin API](usage/administration/admin_api/README.md)
     - [Account Validity](admin_api/account_validity.md)
+    - [Background Updates](usage/administration/admin_api/background_updates.md)
     - [Delete Group](admin_api/delete_group.md)
     - [Event Reports](admin_api/event_reports.md)
     - [Media](admin_api/media_admin_api.md)


@@ -99,7 +99,7 @@ server admin: see [Admin API](../usage/administration/admin_api).
 
 It returns a JSON body like the following:
 
-```jsonc
+```json
 {
     "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
     "event_json": {
@@ -132,7 +132,7 @@ It returns a JSON body like the following:
         },
         "type": "m.room.message",
         "unsigned": {
-            "age_ts": 1592291711430,
+            "age_ts": 1592291711430
         }
     },
     "id": <report_id>,


@@ -27,7 +27,7 @@ Room state data (such as joins, leaves, topic) is always preserved.
 
 To delete local message events as well, set `delete_local_events` in the body:
 
-```
+```json
 {
    "delete_local_events": true
 }
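A complete request with this body might look like the following sketch (host, token and identifiers are placeholders):

```sh
curl -X POST --header "Authorization: Bearer <admin_access_token>" \
    -d '{"delete_local_events": true}' \
    "http://localhost:8008/_synapse/admin/v1/purge_history/<room_id>/<event_id>"
```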


@@ -28,7 +28,7 @@ server admin: see [Admin API](../usage/administration/admin_api).
 
 Response:
 
-```
+```json
 {
     "room_id": "!636q39766251:server.com"
 }


@@ -38,9 +38,14 @@ The following query parameters are available:
   - `history_visibility` - Rooms are ordered alphabetically by visibility of history of the room.
   - `state_events` - Rooms are ordered by number of state events. Largest to smallest.
 * `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
   this value to `b` will reverse the above sort order. Defaults to `f`.
-* `search_term` - Filter rooms by their room name. Search term can be contained in any
-  part of the room name. Defaults to no filtering.
+* `search_term` - Filter rooms by their room name, canonical alias and room id.
+  Specifically, rooms are selected if the search term is contained in
+  - the room's name,
+  - the local part of the room's canonical alias, or
+  - the complete (local and server part) room's id (case sensitive).
+
+  Defaults to no filtering.
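As an illustration of the new behaviour, a query such as the following (the search term is assumed) would match a room named "Synapse Admins", a canonical alias like `#synapse:example.com`, or a full room id containing the term:

```
GET /_synapse/admin/v1/rooms?search_term=synapse
```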
 **Response**
 
@@ -87,7 +92,7 @@ GET /_synapse/admin/v1/rooms
 
 A response body like the following is returned:
 
-```jsonc
+```json
 {
   "rooms": [
     {
@@ -170,7 +175,7 @@ GET /_synapse/admin/v1/rooms?order_by=size
 
 A response body like the following is returned:
 
-```jsonc
+```json
 {
   "rooms": [
     {
@@ -208,7 +213,7 @@ A response body like the following is returned:
       }
     ],
     "offset": 0,
-    "total_rooms": 150
+    "total_rooms": 150,
     "next_token": 100
   }
 ```
@@ -224,7 +229,7 @@ GET /_synapse/admin/v1/rooms?order_by=size&from=100
 
 A response body like the following is returned:
 
-```jsonc
+```json
 {
   "rooms": [
     {
@@ -380,7 +385,7 @@ A response body like the following is returned:
 # Delete Room API
 
-The Delete Room admin API allows server admins to remove rooms from server
+The Delete Room admin API allows server admins to remove rooms from the server
 and block these rooms.
 
 Shuts down a room. Moves all local users and room aliases automatically to a
@@ -520,16 +525,6 @@ With all that being said, if you still want to try and recover the room:
 4. If `new_room_user_id` was given, a 'Content Violation' will have been
    created. Consider whether you want to delete that room.
 
-## Deprecated endpoint
-
-The previous deprecated API will be removed in a future release, it was:
-
-```
-POST /_synapse/admin/v1/rooms/<room_id>/delete
-```
-
-It behaves the same way than the current endpoint except the path and the method.
-
 # Make Room Admin API
 
 Grants another user the highest power available to a local user who is in the room.


@@ -10,7 +10,9 @@ The necessary tools are detailed below.
 
 First install them with:
 
-    pip install -e ".[lint,mypy]"
+```sh
+pip install -e ".[lint,mypy]"
+```
 
 - **black**
 
@@ -21,7 +23,9 @@ First install them with:
   Have `black` auto-format your code (it shouldn't change any
   functionality) with:
 
-      black . --exclude="\.tox|build|env"
+  ```sh
+  black . --exclude="\.tox|build|env"
+  ```
 
 - **flake8**
 
@@ -30,7 +34,9 @@ First install them with:
   Check all application and test code with:
 
-      flake8 synapse tests
+  ```sh
+  flake8 synapse tests
+  ```
 
 - **isort**
 
@@ -39,7 +45,9 @@ First install them with:
   Auto-fix imports with:
 
-      isort -rc synapse tests
+  ```sh
+  isort -rc synapse tests
+  ```
 
   `-rc` means to recursively search the given directories.
 
@@ -66,15 +74,19 @@ save as it takes a while and is very resource intensive.
 
   Example:
 
-      from synapse.types import UserID
-      ...
-      user_id = UserID(local, server)
+  ```python
+  from synapse.types import UserID
+  ...
+  user_id = UserID(local, server)
+  ```
 
   is preferred over:
 
-      from synapse import types
-      ...
-      user_id = types.UserID(local, server)
+  ```python
+  from synapse import types
+  ...
+  user_id = types.UserID(local, server)
+  ```
 
   (or any other variant).
 
@@ -134,28 +146,30 @@ Some guidelines follow:
 
 Example:
 
-    ## Frobnication ##
+```yaml
+## Frobnication ##
 
-    # The frobnicator will ensure that all requests are fully frobnicated.
-    # To enable it, uncomment the following.
-    #
-    #frobnicator_enabled: true
+# The frobnicator will ensure that all requests are fully frobnicated.
+# To enable it, uncomment the following.
+#
+#frobnicator_enabled: true
 
-    # By default, the frobnicator will frobnicate with the default frobber.
-    # The following will make it use an alternative frobber.
-    #
-    #frobincator_frobber: special_frobber
+# By default, the frobnicator will frobnicate with the default frobber.
+# The following will make it use an alternative frobber.
+#
+#frobincator_frobber: special_frobber
 
-    # Settings for the frobber
-    #
-    frobber:
-      # frobbing speed. Defaults to 1.
-      #
-      #speed: 10
+# Settings for the frobber
+#
+frobber:
+  # frobbing speed. Defaults to 1.
+  #
+  #speed: 10
 
-      # frobbing distance. Defaults to 1000.
-      #
-      #distance: 100
+  # frobbing distance. Defaults to 1000.
+  #
+  #distance: 100
+```
 
 Note that the sample configuration is generated from the synapse code
 and is maintained by a script, `scripts-dev/generate_sample_config`.


@@ -99,7 +99,7 @@ construct URIs where users can give their consent.
    see if an unauthenticated user is viewing the page. This is typically
    wrapped around the form that would be used to actually agree to the document:
 
-   ```
+   ```html
    {% if not public_version %}
      <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
      <form method="post" action="consent">


@@ -1,4 +1,8 @@
-# Delegation
+# Delegation of incoming federation traffic
+
+In the following documentation, we use the term `server_name` to refer to that setting
+in your homeserver configuration file. It appears at the ends of user ids, and tells
+other homeservers where they can find your server.
 
 By default, other homeservers will expect to be able to reach yours via
 your `server_name`, on port 8448. For example, if you set your `server_name`
@@ -12,13 +16,21 @@ to a different server and/or port (e.g. `synapse.example.com:443`).
 
 ## .well-known delegation
 
-To use this method, you need to be able to alter the
-`server_name` 's https server to serve the `/.well-known/matrix/server`
-URL. Having an active server (with a valid TLS certificate) serving your
-`server_name` domain is out of the scope of this documentation.
+To use this method, you need to be able to configure the server at
+`https://<server_name>` to serve a file at
+`https://<server_name>/.well-known/matrix/server`. There are two ways to do this, shown below.
 
-The URL `https://<server_name>/.well-known/matrix/server` should
-return a JSON structure containing the key `m.server` like so:
+Note that the `.well-known` file is hosted on the default port for `https` (port 443).
+
+### External server
+
+For maximum flexibility, you need to configure an external server such as nginx, Apache
+or HAProxy to serve the `https://<server_name>/.well-known/matrix/server` file. Setting
+up such a server is out of the scope of this documentation, but note that it is often
+possible to configure your [reverse proxy](reverse_proxy.md) for this.
+
+The URL `https://<server_name>/.well-known/matrix/server` should be configured to
+return a JSON structure containing the key `m.server` like this:
 
 ```json
 {
@@ -26,8 +38,9 @@ return a JSON structure containing the key `m.server` like so:
 }
 ```
 
-In our example, this would mean that URL `https://example.com/.well-known/matrix/server`
-should return:
+In our example (where we want federation traffic to be routed to
+`https://synapse.example.com`, on port 443), this would mean that
+`https://example.com/.well-known/matrix/server` should return:
 
 ```json
 {
@@ -38,16 +51,29 @@ should return:
 Note, specifying a port is optional. If no port is specified, then it defaults
 to 8448.
 
-With .well-known delegation, federating servers will check for a valid TLS
-certificate for the delegated hostname (in our example: `synapse.example.com`).
+### Serving a `.well-known/matrix/server` file with Synapse
+
+If you are able to set up your domain so that `https://<server_name>` is routed to
+Synapse (i.e., the only change needed is to direct federation traffic to port 443
+instead of port 8448), then it is possible to configure Synapse to serve a suitable
+`.well-known/matrix/server` file. To do so, add the following to your `homeserver.yaml`
+file:
+
+```yaml
+serve_server_wellknown: true
+```
+
+**Note**: this *only* works if `https://<server_name>` is routed to Synapse, so is
+generally not suitable if Synapse is hosted at a subdomain such as
+`https://synapse.example.com`.
 
 ## SRV DNS record delegation
 
-It is also possible to do delegation using a SRV DNS record. However, that is
-considered an advanced topic since it's a bit complex to set up, and `.well-known`
-delegation is already enough in most cases.
+It is also possible to do delegation using a SRV DNS record. However, that is generally
+not recommended, as it can be difficult to configure the TLS certificates correctly in
+this case, and it offers little advantage over `.well-known` delegation.
 
-However, if you really need it, you can find some documentation on how such a
+However, if you really need it, you can find some documentation on what such a
 record should look like and how Synapse will use it in [the Matrix
 specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names).
 
@@ -68,27 +94,9 @@ wouldn't need any delegation set up.
 domain `server_name` points to, you will need to let other servers know how to
 find it using delegation.
 
-### Do you still recommend against using a reverse proxy on the federation port?
+### Should I use a reverse proxy for federation traffic?
 
-We no longer actively recommend against using a reverse proxy. Many admins will
-find it easier to direct federation traffic to a reverse proxy and manage their
-own TLS certificates, and this is a supported configuration.
-
-See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
+Generally, using a reverse proxy for both the federation and client traffic is a good
+idea, since it saves handling TLS traffic in Synapse. See
+[the reverse proxy documentation](reverse_proxy.md) for information on setting up a
 reverse proxy.
-
-### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
-
-This is no longer necessary. If you are using a reverse proxy for all of your
-TLS traffic, then you can set `no_tls: True` in the Synapse config.
-
-In that case, the only reason Synapse needs the certificate is to populate a legacy
-`tls_fingerprints` field in the federation API. This is ignored by Synapse 0.99.0
-and later, and the only time pre-0.99 Synapses will check it is when attempting to
-fetch the server keys - and generally this is delegated via `matrix.org`, which
-is running a modern version of Synapse.
-
-### Do I need the same certificate for the client and federation port?
-
-No. There is nothing stopping you from using different certificates,
-particularly if you are using a reverse proxy.
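Whichever delegation method is used, the result can be checked by fetching the file on the default https port; a sketch using the example hostnames above:

```sh
curl https://example.com/.well-known/matrix/server
# Expected output: {"m.server": "synapse.example.com:443"}
```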


@@ -8,23 +8,23 @@ easy to run CAS implementation built on top of Django.
 1. Create a new virtualenv: `python3 -m venv <your virtualenv>`
 2. Activate your virtualenv: `source /path/to/your/virtualenv/bin/activate`
 3. Install Django and django-mama-cas:
-   ```
+   ```sh
    python -m pip install "django<3" "django-mama-cas==2.4.0"
    ```
 4. Create a Django project in the current directory:
-   ```
+   ```sh
    django-admin startproject cas_test .
    ```
 5. Follow the [install directions](https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring) for django-mama-cas
 6. Setup the SQLite database: `python manage.py migrate`
 7. Create a user:
-   ```
+   ```sh
    python manage.py createsuperuser
    ```
    1. Use whatever you want as the username and password.
    2. Leave the other fields blank.
 8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
-   ```
+   ```sh
    python manage.py runserver
    ```


@@ -15,6 +15,11 @@ license - in our case, this is almost always Apache Software License v2 (see
 
 # 2. What do I need?
 
+If you are running Windows, the Windows Subsystem for Linux (WSL) is strongly
+recommended for development. More information about WSL can be found at
+<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
+on Windows is not officially supported.
+
 The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
 
 The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
 
@@ -41,8 +46,6 @@ can find many good git tutorials on the web.
 
 # 4. Install the dependencies
 
-## Under Unix (macOS, Linux, BSD, ...)
-
 Once you have installed Python 3 and added the source, please open a terminal and
 setup a *virtualenv*, as follows:
 
@@ -56,10 +59,6 @@ pip install tox
 
 This will install the developer dependencies for the project.
 
-## Under Windows
-
-TBD
-
 # 5. Get in touch.
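For orientation, the virtualenv setup referred to above typically looks like the following sketch (the exact package extras may differ from the guide's current wording):

```sh
python3 -m venv ./env
source ./env/bin/activate
pip install -e ".[all,dev]"
pip install tox
```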


@@ -89,7 +89,9 @@ To do so, use `scripts-dev/make_full_schema.sh`. This will produce new
 Ensure postgres is installed, then run:
 
-    ./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
+```sh
+./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
+```
 
 NB at the time of writing, this script predates the split into separate `state`/`main`
 databases so will require updates to handle that correctly.


@@ -15,7 +15,7 @@ To make Synapse (and therefore Element) use it:
    sp_config:
      allow_unknown_attributes: true  # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
      metadata:
-       local: ["samling.xml"]
+      local: ["samling.xml"]
    ```
 5. Ensure that your `homeserver.yaml` has a setting for `public_baseurl`:
    ```yaml


@@ -69,9 +69,9 @@ A default policy can be defined as such, in the `retention` section of
 the configuration file:
 
 ```yaml
 default_policy:
-   min_lifetime: 1d
-   max_lifetime: 1y
+  min_lifetime: 1d
+  max_lifetime: 1y
 ```
 
 Here, `min_lifetime` and `max_lifetime` have the same meaning and level
@@ -95,14 +95,14 @@ depending on an event's room's policy. This can be done by setting the
 file. An example of such configuration could be:
 
 ```yaml
 purge_jobs:
-   - longest_max_lifetime: 3d
-     interval: 12h
-   - shortest_max_lifetime: 3d
-     longest_max_lifetime: 1w
-     interval: 1d
-   - shortest_max_lifetime: 1w
-     interval: 2d
+  - longest_max_lifetime: 3d
+    interval: 12h
+  - shortest_max_lifetime: 3d
+    longest_max_lifetime: 1w
+    interval: 1d
+  - shortest_max_lifetime: 1w
+    interval: 2d
 ```
 
 In this example, we define three jobs:
@@ -141,8 +141,8 @@ purging old events in a room. These limits can be defined as such in the
 `retention` section of the configuration file:
 
 ```yaml
-  allowed_lifetime_min: 1d
-  allowed_lifetime_max: 1y
+allowed_lifetime_min: 1d
+allowed_lifetime_max: 1y
 ```
 
 The limits are considered when running purge jobs. If necessary, the


@@ -10,8 +10,8 @@ registered by using the Module API's `register_password_auth_provider_callbacks`
_First introduced in Synapse v1.46.0_

```python
auth_checkers: Dict[Tuple[str, Tuple[str, ...]], Callable]
```

A dict mapping from tuples of a login type identifier (such as `m.login.password`) and a
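For instance, a module could register a checker for the standard `m.login.password` login type. The sketch below is illustrative only: the `MyAuthProvider` class name and the hard-coded credential check are assumptions for the example, not part of the API.

```python
from typing import Optional, Tuple


class MyAuthProvider:
    def __init__(self, config: dict, api):
        self.api = api
        # The key is (login type, tuple of field names expected in the login body).
        api.register_password_auth_provider_callbacks(
            auth_checkers={
                ("m.login.password", ("password",)): self.check_my_password,
            },
        )

    async def check_my_password(
        self, username: str, login_type: str, login_dict: dict
    ) -> Optional[Tuple[str, None]]:
        # Illustrative check only; a real module would verify against an
        # external backend. Returning None lets Synapse fall through to the
        # next configured checker.
        if login_dict.get("password") != "letmein":
            return None
        return (self.api.get_qualified_user_id(username), None)
```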

View File

@@ -123,42 +123,6 @@ callback returns `True`, Synapse falls through to the next one. The value of the
callback that does not return `True` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
### `user_may_create_room_alias`

_First introduced in Synapse v1.37.0_

View File

@@ -43,6 +43,14 @@ event with new data by returning the new event's data as a dictionary. In order
that, it is recommended the module calls `event.get_dict()` to get the current event as a
dictionary, and modify the returned dictionary accordingly.

If `check_event_allowed` raises an exception, the module is assumed to have failed.
The event will not be accepted but is not treated as explicitly rejected, either.
An HTTP request causing the module check will likely result in a 500 Internal
Server Error.

When the boolean returned by the module is `False`, the event is rejected.
(Module developers should not use exceptions for rejection.)

Note that replacing the event only works for events sent by local users, not for events
received over federation.

@@ -119,6 +127,27 @@ callback returns `True`, Synapse falls through to the next one. The value of the
callback that does not return `True` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.

### `on_new_event`

_First introduced in Synapse v1.47.0_

```python
async def on_new_event(
    event: "synapse.events.EventBase",
    state_events: "synapse.types.StateMap",
) -> None:
```

Called after sending an event into a room. The module is passed the event, as well
as the state of the room _after_ the event. This means that if the event is a state event,
it will be included in this state.

Note that this callback is called when the event has already been processed and stored
into the room, which means this callback cannot be used to deny persisting the event. To
deny an incoming event, see [`check_event_for_spam`](spam_checker_callbacks.md#check_event_for_spam) instead.

If multiple modules implement this callback, Synapse runs them all in order.
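As a quick illustration (a minimal sketch; the `RoomNameLogger` class and its logging behaviour are assumptions for the example), a module could use this callback to observe room name changes via the post-event state:

```python
import logging

logger = logging.getLogger(__name__)


class RoomNameLogger:
    def __init__(self, config: dict, api):
        api.register_third_party_rules_callbacks(on_new_event=self.on_new_event)

    async def on_new_event(self, event, state_events) -> None:
        # state_events maps (event type, state key) -> event, and already
        # includes `event` itself if it is a state event.
        name_event = state_events.get(("m.room.name", ""))
        if name_event is not None:
            logger.info(
                "Room %s is now named %r (after event %s)",
                event.room_id,
                name_event.content.get("name"),
                event.event_id,
            )
```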
## Example

The example below is a module that implements the third-party rules callback

View File

@@ -21,6 +21,8 @@ such as [Github][github-idp].
[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
[auth0]: https://auth0.com/
[authentik]: https://goauthentik.io/
[lemonldap]: https://lemonldap-ng.org/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[keycloak-idp]: https://www.keycloak.org/docs/latest/server_admin/#sso-protocols
@@ -209,6 +211,76 @@ oidc_providers:
        display_name_template: "{{ user.name }}"
   ```
### Authentik

[Authentik][authentik] is an open-source IdP solution.

1. Create a provider in Authentik, with type OAuth2/OpenID.
2. The parameters are:
   - Client Type: Confidential
   - JWT Algorithm: RS256
   - Scopes: OpenID, Email and Profile
   - RSA Key: Select any available key
   - Redirect URIs: `[synapse public baseurl]/_synapse/client/oidc/callback`
3. Create an application for synapse in Authentik and link it to the provider.
4. Note the slug of your application, Client ID and Client Secret.

Synapse config:
```yaml
oidc_providers:
  - idp_id: authentik
    idp_name: authentik
    discover: true
    issuer: "https://your.authentik.example.org/application/o/your-app-slug/" # TO BE FILLED: domain and slug
    client_id: "your client id" # TO BE FILLED
    client_secret: "your client secret" # TO BE FILLED
    scopes:
      - "openid"
      - "profile"
      - "email"
    user_mapping_provider:
      config:
        localpart_template: "{{ user.preferred_username }}"
        display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: if your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
```
### LemonLDAP

[LemonLDAP::NG][lemonldap] is an open-source IdP solution.

1. Create an OpenID Connect Relying Party in LemonLDAP::NG.
2. The parameters are:
   - Client ID under the basic menu of the new Relying Party (`Options > Basic >
     Client ID`)
   - Client secret (`Options > Basic > Client secret`)
   - JWT Algorithm: RS256 within the security menu of the new Relying Party
     (`Options > Security > ID Token signature algorithm` and `Options > Security >
     Access Token signature algorithm`)
   - Scopes: OpenID, Email and Profile
   - Allowed redirection addresses for login (`Options > Basic > Allowed
     redirection addresses for login`):
     `[synapse public baseurl]/_synapse/client/oidc/callback`

Synapse config:
```yaml
oidc_providers:
  - idp_id: lemonldap
    idp_name: lemonldap
    discover: true
    issuer: "https://auth.example.org/" # TO BE FILLED: replace with your domain
    client_id: "your client id" # TO BE FILLED
    client_secret: "your client secret" # TO BE FILLED
    scopes:
      - "openid"
      - "profile"
      - "email"
    user_mapping_provider:
      config:
        localpart_template: "{{ user.preferred_username }}"
        # TO BE FILLED: if your users have names in LemonLDAP::NG and you want those in Synapse, this should be replaced with user.name|capitalize or any valid filter.
        display_name_template: "{{ user.preferred_username|capitalize }}"
```
### GitHub

[GitHub][github-idp] is a bit special as it is not an OpenID Connect compliant provider, but

View File

@@ -29,16 +29,20 @@ connect to a postgres database.
Assuming your PostgreSQL database user is called `postgres`, first authenticate as the database user with:

```sh
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
```

Then, create a postgres user and a database with:

```sh
# this will prompt for a password for the new user
createuser --pwprompt synapse_user
createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
```

The above will create a user called `synapse_user`, and a database called
`synapse`.
@@ -145,20 +149,26 @@ Firstly, shut down the currently running synapse server and copy its
database file (typically `homeserver.db`) to another location. Once the
copy is complete, restart synapse. For instance:

```sh
./synctl stop
cp homeserver.db homeserver.db.snapshot
./synctl start
```

Copy the old config file into a new config file:

```sh
cp homeserver.yaml homeserver-postgres.yaml
```

Edit the database section as described in the section *Synapse config*
above and with the SQLite snapshot located at `homeserver.db.snapshot`
simply run:

```sh
synapse_port_db --sqlite-database homeserver.db.snapshot \
  --postgres-config homeserver-postgres.yaml
```

The flag `--curses` displays a coloured curses progress UI.

@@ -170,16 +180,20 @@ To complete the conversion shut down the synapse server and run the port
script one last time, e.g. if the SQLite database is at `homeserver.db`
run:

```sh
synapse_port_db --sqlite-database homeserver.db \
  --postgres-config homeserver-postgres.yaml
```

Once that has completed, change the synapse config to point at the
PostgreSQL database configuration file `homeserver-postgres.yaml`:

```sh
./synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml
mv homeserver-postgres.yaml homeserver.yaml
./synctl start
```

Synapse should now be running against PostgreSQL.

View File

@@ -52,7 +52,7 @@ to proxied traffic.)
### nginx

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
@@ -141,7 +141,7 @@ matrix.example.com {
### Apache

```apache
<VirtualHost *:443>
    SSLEngine on
    ServerName matrix.example.com
@@ -170,7 +170,7 @@ matrix.example.com {
**NOTE 2**: It appears that Synapse is currently incompatible with the ModSecurity module for Apache (`mod_security2`). If you need it enabled for other services on your web server, you can disable it for Synapse's two VirtualHosts by including the following lines before each of the two `</VirtualHost>` above:

```apache
<IfModule security2_module>
    SecRuleEngine off
</IfModule>
@@ -188,7 +188,7 @@ frontend https
    http-request set-header X-Forwarded-For %[src]

    # Matrix client traffic
    acl matrix-host hdr(host) -i matrix.example.com matrix.example.com:443
    acl matrix-path path_beg /_matrix
    acl matrix-path path_beg /_synapse/client

View File

@@ -91,8 +91,28 @@ pid_file: DATADIR/homeserver.pid
# Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
# 'listeners' below).
#
# Defaults to 'https://<server_name>/'.
#
#public_baseurl: https://example.com/

# Uncomment the following to tell other servers to send federation traffic on
# port 443.
#
# By default, other servers will try to reach our server on port 8448, which can
# be inconvenient in some environments.
#
# Provided 'https://<server_name>/' on port 443 is routed to Synapse, this
# option configures Synapse to serve a file at
# 'https://<server_name>/.well-known/matrix/server'. This will tell other
# servers to send traffic to port 443 instead.
#
# See https://matrix-org.github.io/synapse/latest/delegate.html for more
# information.
#
# Defaults to 'false'.
#
#serve_server_wellknown: true
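# (An illustrative check, not part of this config: with the option enabled
# and 'https://<server_name>/' routed to Synapse, fetching
# 'https://<server_name>/.well-known/matrix/server' should return
# {"m.server": "<server_name>:443"}.)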
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
@@ -1247,7 +1267,7 @@ oembed:
# in on this server.
#
# (By default, no suggestion is made, so it is left up to the client.
# This setting is ignored unless public_baseurl is also explicitly set.)
#
#default_identity_server: https://matrix.org
@@ -1272,8 +1292,6 @@ oembed:
# by the Matrix Identity Service API specification:
# https://matrix.org/docs/spec/identity_service/latest
#
account_threepid_delegates:
    #email: https://example.com     # Delegate email sending to example.com
    #msisdn: http://localhost:8090  # Delegate SMS sending to this local process
@@ -1963,11 +1981,10 @@ sso:
    # phishing attacks from evil.site. To avoid this, include a slash after the
    # hostname: "https://my.client/".
    #
    # The login fallback page (used by clients that don't natively support the
    # required login flows) is whitelisted in addition to any URLs in this list.
    #
    # By default, this list contains only the login fallback page.
    #
    #client_whitelist:
    #  - https://riot.im/develop

View File

@@ -356,12 +356,14 @@ make install
##### Windows

Running Synapse natively on Windows is not officially supported.

If you wish to run or develop Synapse on Windows, the Windows Subsystem for
Linux provides a Linux environment which is capable of using the Debian, Fedora,
or source installation methods. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install> for Windows 10/11 and
<https://docs.microsoft.com/en-us/windows/wsl/install-on-server> for
Windows Server.

## Setting up Synapse

View File

@@ -20,7 +20,9 @@ Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:

```sh
synctl -a $CONFIG/workers start
```

Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
@@ -29,4 +31,6 @@ notifications.
To manipulate a specific worker, you pass the -w option to synctl:

```sh
synctl -w $CONFIG/workers/worker1.yaml restart
```

View File

@@ -15,7 +15,7 @@ Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=-/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always

View File

@@ -10,7 +10,7 @@ Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=-/etc/default/matrix-synapse
ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/
ExecReload=/bin/kill -HUP $MAINPID

View File

@@ -40,7 +40,9 @@ This will install and start a systemd service called `coturn`.

1. Configure it:

   ```sh
   ./configure
   ```

   You may need to install `libevent2`: if so, you should do so in
   the way recommended by your operating system. You can ignore
@@ -49,22 +51,28 @@ This will install and start a systemd service called `coturn`.

1. Build and install it:

   ```sh
   make
   make install
   ```

### Configuration

1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
   lines, with example values, are:

   ```
   use-auth-secret
   static-auth-secret=[your secret key here]
   realm=turn.myserver.org
   ```

   See `turnserver.conf` for explanations of the options. One way to generate
   the `static-auth-secret` is with `pwgen`:

   ```sh
   pwgen -s 64 1
   ```

   A `realm` must be specified, but its value is somewhat arbitrary. (It is
   sent to clients as part of the authentication flow.) It is conventional to
@@ -73,7 +81,9 @@ This will install and start a systemd service called `coturn`.

1. You will most likely want to configure coturn to write logs somewhere. The
   easiest way is normally to send them to the syslog:

   ```sh
   syslog
   ```

   (in which case, the logs will be available via `journalctl -u coturn` on a
   systemd system). Alternatively, coturn can be configured to write to a
@@ -83,31 +93,35 @@ This will install and start a systemd service called `coturn`.
   connect to arbitrary IP addresses and ports. The following configuration is
   suggested as a minimum starting point:

   ```
   # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
   no-tcp-relay

   # don't let the relay ever try to connect to private IP address ranges within your network (if any)
   # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
   denied-peer-ip=10.0.0.0-10.255.255.255
   denied-peer-ip=192.168.0.0-192.168.255.255
   denied-peer-ip=172.16.0.0-172.31.255.255

   # special case the turn server itself so that client->TURN->TURN->client flows work
   allowed-peer-ip=10.0.0.1

   # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
   user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
   total-quota=1200
   ```

1. Also consider supporting TLS/DTLS. To do this, add the following settings
   to `turnserver.conf`:

   ```
   # TLS certificates, including intermediate certs.
   # For Let's Encrypt certificates, use `fullchain.pem` here.
   cert=/path/to/fullchain.pem

   # TLS private key file
   pkey=/path/to/privkey.pem
   ```

   In this case, replace the `turn:` schemes in the `turn_uri` settings below
   with `turns:`.
@@ -126,7 +140,9 @@ This will install and start a systemd service called `coturn`.
   If you want to try it anyway, you will at least need to tell coturn its
   external IP address:

   ```
   external-ip=192.88.99.1
   ```

   ... and your NAT gateway must forward all of the relayed ports directly
   (eg, port 56789 on the external IP must always be forwarded to port
@@ -186,7 +202,7 @@ After updating the homeserver configuration, you must restart synapse:

   ./synctl restart
   ```

* If you use systemd:

   ```sh
   systemctl restart matrix-synapse.service
   ```

... and then reload any clients (or wait an hour for them to refresh their

View File

@@ -85,6 +85,29 @@ process, for example:

    dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
    ```

# Upgrading to v1.47.0

## Removal of old Room Admin API

The following admin APIs were deprecated in [Synapse 1.34](https://github.com/matrix-org/synapse/blob/v1.34.0/CHANGES.md#deprecations-and-removals)
(released on 2021-05-17) and have now been removed:

- `POST /_synapse/admin/v1/<room_id>/delete`

Any scripts still using the above APIs should be converted to use the
[Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api).
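For example, a script that previously POSTed to the removed endpoint might be updated along these lines (a sketch, assuming the `requests` library; the base URL, token, room ID and request body shown are placeholders taken from the Delete Room API documentation):

```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder: your homeserver
ADMIN_TOKEN = "<admin access token>"     # placeholder: a server admin's token
ROOM_ID = "!roomid:example.com"          # placeholder: the room to delete

# Previously: POST {BASE_URL}/_synapse/admin/v1/rooms/{ROOM_ID}/delete
resp = requests.delete(
    f"{BASE_URL}/_synapse/admin/v1/rooms/{ROOM_ID}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"block": True, "purge": True},
)
resp.raise_for_status()
```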
## Deprecation of the `user_may_create_room_with_invites` module callback

The `user_may_create_room_with_invites` callback is deprecated and will be removed in a future
version of Synapse. Modules implementing this callback can instead implement
[`user_may_invite`](https://matrix-org.github.io/synapse/latest/modules/spam_checker_callbacks.html#user_may_invite)
and use the [`get_room_state`](https://github.com/matrix-org/synapse/blob/872f23b95fa980a61b0866c1475e84491991fa20/synapse/module_api/__init__.py#L869-L876)
module API method to infer whether the invite is happening in the context of creating a
room.

We plan to remove this callback in January 2022.
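A migration might look roughly like the following (a sketch under stated assumptions: the callback signature matches the spam checker documentation, and the room-creation heuristic shown, "the inviter is the creator and still the only joined member", is one possible interpretation, not prescribed by Synapse):

```python
class InviteRulesModule:
    def __init__(self, config: dict, api):
        self.api = api
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(self, inviter: str, invitee: str, room_id: str) -> bool:
        # Inspect the room's current state to infer whether this invite is
        # part of the room being created.
        state = await self.api.get_room_state(room_id)
        create_event = state.get(("m.room.create", ""))
        joined = [
            state_key
            for (ev_type, state_key), ev in state.items()
            if ev_type == "m.room.member" and ev.membership == "join"
        ]
        if create_event is not None and create_event.sender == inviter and joined == [inviter]:
            # Apply the policy previously implemented in
            # user_may_create_room_with_invites here.
            return self.creation_invite_allowed(inviter, invitee)
        return True

    def creation_invite_allowed(self, inviter: str, invitee: str) -> bool:
        return True  # placeholder policy
```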
# Upgrading to v1.45.0

## Changes required to media storage provider modules when reading from the Synapse configuration object
@@ -1163,16 +1186,20 @@ For more information on configuring TLS certificates see the
For users who have installed Synapse into a virtualenv, we recommend
doing this by creating a new virtualenv. For example:

```sh
virtualenv -p python3 ~/synapse/env3
source ~/synapse/env3/bin/activate
pip install matrix-synapse
```

You can then start synapse as normal, having activated the new
virtualenv:

```sh
cd ~/synapse
source env3/bin/activate
synctl start
```

Users who have installed from distribution packages should see the
relevant package documentation. See below for notes on Debian
@@ -1184,34 +1211,38 @@ For more information on configuring TLS certificates see the
`<server>.log.config` file. For example, if your `log.config`
file contains:

```yaml
handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: homeserver.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]
```

Then you should update this to be:

```yaml
handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: homeserver.log
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
    encoding: utf8
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]
```

There is no need to revert this change if downgrading to
Python 2.
@@ -1297,24 +1328,28 @@ with the HS remotely has been removed.
It has been replaced by specifying a list of application service
registrations in `homeserver.yaml`:

```yaml
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
```

Where `registration-01.yaml` looks like:

```yaml
url: <String>  # e.g. "https://my.application.service.com"
as_token: <String>
hs_token: <String>
sender_localpart: <String>  # This is a new field which denotes the user_id localpart when using the AS token
namespaces:
  users:
    - exclusive: <Boolean>
      regex: <String>  # e.g. "@prefix_.*"
  aliases:
    - exclusive: <Boolean>
      regex: <String>
  rooms:
    - exclusive: <Boolean>
      regex: <String>
```

# Upgrading to v0.8.0

View File

@@ -0,0 +1,84 @@
# Background Updates API

This API allows a server administrator to manage the background updates being
run against the database.

## Status

This API gets the current status of the background updates.

The API is:

```
GET /_synapse/admin/v1/background_updates/status
```

Returning:

```json
{
    "enabled": true,
    "current_updates": {
        "<db_name>": {
            "name": "<background_update_name>",
            "total_item_count": 50,
            "total_duration_ms": 10000.0,
            "average_items_per_ms": 2.2
        }
    }
}
```

`enabled`: whether the background updates are enabled or disabled.

`db_name`: the database name (usually Synapse is configured with a single database named 'master').

For each update:

`name`: the name of the update.
`total_item_count`: total number of "items" processed (the meaning of 'items' depends on the update in question).
`total_duration_ms`: how long the background process has been running, not including time spent sleeping.
`average_items_per_ms`: how many items are processed per millisecond based on an exponential average.

## Enabled

This API allows pausing background updates.

Background updates should *not* be paused for significant periods of time, as
this can affect the performance of Synapse.

*Note*: This won't persist over restarts.

*Note*: This won't cancel any update query that is currently running. This is
usually fine since most queries are short lived, except for `CREATE INDEX`
background updates which won't be cancelled once started.

The API is:

```
POST /_synapse/admin/v1/background_updates/enabled
```

with the following body:

```json
{
    "enabled": false
}
```

`enabled` sets whether the background updates are enabled or disabled.

The API returns the `enabled` param.

```json
{
    "enabled": false
}
```

There is also a `GET` version which returns the `enabled` state.
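Putting the two endpoints together, a small admin script (a sketch, assuming the `requests` library; the base URL and token are placeholders) could pause the updates around a maintenance window:

```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder: your homeserver
HEADERS = {"Authorization": "Bearer <admin access token>"}  # placeholder token

# Check what is currently running.
status = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/background_updates/status", headers=HEADERS
).json()
print(status["enabled"], list(status["current_updates"]))

# Pause the updates (this is not persisted over restarts)...
requests.post(
    f"{BASE_URL}/_synapse/admin/v1/background_updates/enabled",
    headers=HEADERS,
    json={"enabled": False},
).raise_for_status()

# ...do the latency-sensitive work, then re-enable them.
requests.post(
    f"{BASE_URL}/_synapse/admin/v1/background_updates/enabled",
    headers=HEADERS,
    json={"enabled": True},
).raise_for_status()
```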

View File

@@ -443,19 +443,19 @@ In the `media_repository` worker configuration file, configure the http listener to
expose the `media` resource. For example:

```yaml
worker_listeners:
  - type: http
    port: 8085
    resources:
      - names:
          - media
```

Note that if running multiple media repositories they must be on the same server
and you must configure a single instance to run the background tasks, e.g.:

```yaml
media_instance_running_background_jobs: "media-repository-1"
```

Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
@@ -492,7 +492,9 @@ must therefore be configured with the location of the main instance, via
the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
file. For example:

```yaml
worker_main_http_uri: http://127.0.0.1:8008
```

### Historical apps

View File

@@ -16,31 +16,17 @@ no_implicit_optional = True
files =
  scripts-dev/sign_json,
  synapse/api,
  synapse/appservice,
  synapse/config,
  synapse/crypto,
  synapse/event_auth.py,
  synapse/events,
  synapse/federation,
  synapse/groups,
  synapse/handlers,
  synapse/http,
  synapse/logging,
  synapse/metrics,
  synapse/module_api,
@@ -61,6 +47,7 @@ files =
  synapse/storage/databases/main/keys.py,
  synapse/storage/databases/main/pusher.py,
  synapse/storage/databases/main/registration.py,
  synapse/storage/databases/main/relations.py,
  synapse/storage/databases/main/session.py,
  synapse/storage/databases/main/stream.py,
  synapse/storage/databases/main/ui_auth.py,

View File

@@ -42,10 +42,10 @@ echo "--------------------------"
echo

matched=0
for f in $(git diff --name-only FETCH_HEAD... -- changelog.d); do
    # check that any modified newsfiles on this branch end with a full stop.
    lastchar=$(tr -d '\n' < "$f" | tail -c 1)
    if [ "$lastchar" != '.' ] && [ "$lastchar" != '!' ]; then
        echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
        echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
        exit 1

View File

@@ -25,7 +25,7 @@
# terminators are found, 0 otherwise.

# cd to the root of the repository
cd "$(dirname "$0")/.." || exit

# Find and print files with non-unix line terminators
if find . -path './.git/*' -prune -o -type f -print0 | xargs -0 grep -I -l $'\r$'; then

View File

@@ -24,7 +24,7 @@
set -e

# Change to the repository root
cd "$(dirname "$0")/.."

# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
@@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
EXTRA_COMPLEMENT_ARGS=()
if [[ -n "$1" ]]; then
    # A test name regex has been set, supply it to Complement
    EXTRA_COMPLEMENT_ARGS=(-run "$1")
fi

# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...

View File

@@ -3,7 +3,7 @@
# Exits with 0 if there are no problems, or another code otherwise.

# cd to the root of the repository
cd "$(dirname "$0")/.." || exit

# Restore backup of sample config upon script exit
trap "mv docs/sample_config.yaml.bak docs/sample_config.yaml" EXIT

View File

@@ -60,5 +60,5 @@ DEBIAN_FRONTEND=noninteractive apt-get install -y devscripts

# Update the Debian changelog.
ver=${1}
dch -M -v "$(sed -Ee 's/(rc|a|b|c)/~\1/' <<<"$ver")" "New synapse release $ver."
dch -M -r -D stable ""

View File

@@ -4,7 +4,7 @@
set -e

cd "$(dirname "$0")/.."

SAMPLE_CONFIG="docs/sample_config.yaml"
SAMPLE_LOG_CONFIG="docs/sample_log_config.yaml"

View File

@@ -4,6 +4,6 @@ set -e

# Fetch the current GitHub issue number, add one to it -- presto! The likely
# next PR number.
CURRENT_NUMBER=$(curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number")
CURRENT_NUMBER=$((CURRENT_NUMBER+1))
echo $CURRENT_NUMBER

View File

@@ -43,6 +43,7 @@ from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyBackgroundStore
from synapse.storage.databases.main.events_bg_updates import (
    EventsBackgroundUpdatesStore,
)
from synapse.storage.databases.main.group_server import GroupServerWorkerStore
from synapse.storage.databases.main.media_repository import (
    MediaRepositoryBackgroundUpdateStore,
)
@@ -181,6 +182,7 @@ class Store(
    StatsStore,
    PusherWorkerStore,
    PresenceBackgroundUpdateStore,
    GroupServerWorkerStore,
):
    def execute(self, f, *args, **kwargs):
        return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)

View File

@@ -132,6 +132,9 @@ CONDITIONAL_REQUIREMENTS["dev"] = (
        "GitPython==3.1.14",
        "commonmark==0.9.1",
        "pygithub==1.55",
        # The following are executed as commands by the release script.
        "twine",
        "towncrier",
    ]
)

View File

@@ -20,7 +20,12 @@ from typing import List

import attr

from synapse.config._base import (
    Config,
    RootConfig,
    find_config_files,
    read_config_files,
)
from synapse.config.database import DatabaseConfig
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.engines import create_engine
@@ -126,7 +131,7 @@ def main():
        config_dict,
    )

    since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
    exclude_users_with_email = config_args.exclude_emails
    include_context = not config_args.only_users

View File

@@ -596,3 +596,10 @@ class ShadowBanError(Exception):

    This should be caught and a proper "fake" success response sent to the user.
    """


class ModuleFailedException(Exception):
    """
    Raised when a module API callback fails, for example because it raised an
    exception.
    """

View File

@@ -18,7 +18,8 @@ import json
from typing import (
    TYPE_CHECKING,
    Awaitable,
    Callable,
    Dict,
    Iterable,
    List,
    Optional,
@@ -217,19 +218,19 @@ class FilterCollection:
        return self._filter_json

    def timeline_limit(self) -> int:
        return self._room_timeline_filter.limit

    def presence_limit(self) -> int:
        return self._presence_filter.limit

    def ephemeral_limit(self) -> int:
        return self._room_ephemeral_filter.limit

    def lazy_load_members(self) -> bool:
        return self._room_state_filter.lazy_load_members

    def include_redundant_members(self) -> bool:
        return self._room_state_filter.include_redundant_members

    def filter_presence(
        self, events: Iterable[UserPresenceState]
@@ -276,19 +277,25 @@ class Filter:
    def __init__(self, filter_json: JsonDict):
        self.filter_json = filter_json

        self.limit = filter_json.get("limit", 10)
        self.lazy_load_members = filter_json.get("lazy_load_members", False)
        self.include_redundant_members = filter_json.get(
            "include_redundant_members", False
        )

        self.types = filter_json.get("types", None)
        self.not_types = filter_json.get("not_types", [])

        self.rooms = filter_json.get("rooms", None)
        self.not_rooms = filter_json.get("not_rooms", [])

        self.senders = filter_json.get("senders", None)
        self.not_senders = filter_json.get("not_senders", [])

        self.contains_url = filter_json.get("contains_url", None)

        self.labels = filter_json.get("org.matrix.labels", None)
        self.not_labels = filter_json.get("org.matrix.not_labels", [])

    def filters_all_types(self) -> bool:
        return "*" in self.not_types
@@ -302,76 +309,95 @@ class Filter:
    def check(self, event: FilterEvent) -> bool:
        """Checks whether the filter matches the given event.

        Args:
            event: The event, account data, or presence to check against this
                filter.

        Returns:
            True if the event matches the filter.
        """
        # We usually get the full "events" as dictionaries coming through,
        # except for presence which actually gets passed around as its own
        # namedtuple type.
        if isinstance(event, UserPresenceState):
            user_id = event.user_id
            field_matchers = {
                "senders": lambda v: user_id == v,
                "types": lambda v: "m.presence" == v,
            }
            return self._check_fields(field_matchers)
        else:
            content = event.get("content")
            # Content is assumed to be a dict below, so ensure it is. This should
            # always be true for events, but account_data has been allowed to
            # have non-dict content.
            if not isinstance(content, dict):
                content = {}

            sender = event.get("sender", None)
            if not sender:
                # Presence events had their 'sender' in content.user_id, but are
                # now handled above. We don't know if anything else uses this
                # form. TODO: Check this and probably remove it.
                sender = content.get("user_id")

            room_id = event.get("room_id", None)
            ev_type = event.get("type", None)

            labels = content.get(EventContentFields.LABELS, [])

            field_matchers = {
                "rooms": lambda v: room_id == v,
                "senders": lambda v: sender == v,
                "types": lambda v: _matches_wildcard(ev_type, v),
                "labels": lambda v: v in labels,
            }

            result = self._check_fields(field_matchers)
            if not result:
                return result

            # check if there is a string url field in the content for filtering purposes
            contains_url_filter = self.contains_url
            if contains_url_filter is not None:
                contains_url = isinstance(content.get("url"), str)
                if contains_url_filter != contains_url:
                    return False

            return True

    def _check_fields(self, field_matchers: Dict[str, Callable[[str], bool]]) -> bool:
        """Checks whether the filter matches the given event fields.

        Args:
            field_matchers: A map of attribute name to callable to use for checking
                particular fields.

                The attribute name and an inverse (not_<attribute name>) must
                exist on the Filter.

                The callable should return true if the event's value matches the
                filter's value.

        Returns:
            True if the event fields match
        """

        for name, match_func in field_matchers.items():
            # If the event matches one of the disallowed values, reject it.
            not_name = "not_%s" % (name,)
            disallowed_values = getattr(self, not_name)
            if any(map(match_func, disallowed_values)):
                return False

            # Otherwise, if the event does not match at least one of the allowed
            # values, reject it.
            allowed_values = getattr(self, name)
            if allowed_values is not None:
                if not any(map(match_func, allowed_values)):
                    return False

        # Otherwise, accept it.
        return True

    def filter_rooms(self, room_ids: Iterable[str]) -> Set[str]:
@@ -385,10 +411,10 @@ class Filter:
        """
        room_ids = set(room_ids)

        disallowed_rooms = set(self.not_rooms)
        room_ids -= disallowed_rooms

        allowed_rooms = self.rooms
        if allowed_rooms is not None:
            room_ids &= set(allowed_rooms)

@@ -397,15 +423,6 @@ class Filter:
    def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
        return list(filter(self.check, events))

    def with_room_ids(self, room_ids: Iterable[str]) -> "Filter":
        """Returns a new filter with the given room IDs appended.

View File

@@ -38,9 +38,6 @@ class ConsentURIBuilder:
    def __init__(self, hs_config: HomeServerConfig):
        if hs_config.key.form_secret is None:
            raise ConfigError("form_secret not set in config")

        self._hmac_secret = hs_config.key.form_secret.encode("utf-8")
        self._public_baseurl = hs_config.server.public_baseurl

View File

@@ -45,6 +45,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
@@ -351,6 +352,10 @@ async def start(hs: "HomeServer"):
        GAIResolver(reactor, getThreadPool=lambda: resolver_threadpool)
    )

    # Register the threadpools with our metrics.
    register_threadpool("default", reactor.getThreadPool())
    register_threadpool("gai_resolver", resolver_threadpool)

    # Set up the SIGHUP machinery.
    if hasattr(signal, "SIGHUP"):

View File

@@ -145,6 +145,20 @@ class FileExfiltrationWriter(ExfiltrationWriter):
            for event in state.values():
                print(json.dumps(event), file=f)

    def write_knock(self, room_id, event, state):
        self.write_events(room_id, [event])

        # We write the knock state somewhere else as they aren't full events
        # and are only a subset of the state at the event.
        room_directory = os.path.join(self.base_directory, "rooms", room_id)
        os.makedirs(room_directory, exist_ok=True)

        knock_state = os.path.join(room_directory, "knock_state")
        with open(knock_state, "a") as f:
            for event in state.values():
                print(json.dumps(event), file=f)

    def finished(self):
        return self.base_directory

View File

@@ -100,6 +100,7 @@ from synapse.rest.client.register import (
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
@@ -320,6 +321,8 @@ class GenericWorkerServer(HomeServer):
                    resources.update({CLIENT_API_PREFIX: resource})

                    resources.update(build_synapse_client_resource_tree(self))
                    resources.update({"/.well-known": well_known_resource(self)})

                elif name == "federation":
                    resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
                elif name == "media":

View File

@@ -66,7 +66,7 @@ from synapse.rest.admin import AdminRestResource
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.httpresourcetree import create_resource_tree
@@ -189,7 +189,7 @@ class SynapseHomeServer(HomeServer):
            "/_matrix/client/unstable": client_resource,
            "/_matrix/client/v2_alpha": client_resource,
            "/_matrix/client/versions": client_resource,
            "/.well-known": well_known_resource(self),
            "/_synapse/admin": AdminRestResource(self),
            **build_synapse_client_resource_tree(self),
        }

View File

@@ -75,10 +75,6 @@ class AccountValidityConfig(Config):
                 self.account_validity_period * 10.0 / 100.0
             )

-        if self.account_validity_renew_by_email_enabled:
-            if not self.root.server.public_baseurl:
-                raise ConfigError("Can't send renewal emails without 'public_baseurl'")
-
         # Load account validity templates.
         account_validity_template_dir = account_validity_config.get("template_dir")
         if account_validity_template_dir is not None:

View File

@@ -16,7 +16,7 @@ from typing import Any, List

 from synapse.config.sso import SsoAttributeRequirement

-from ._base import Config, ConfigError
+from ._base import Config
 from ._util import validate_config
@@ -35,14 +35,10 @@ class CasConfig(Config):
         if self.cas_enabled:
             self.cas_server_url = cas_config["server_url"]

-            # The public baseurl is required because it is used by the redirect
-            # template.
-            public_baseurl = self.root.server.public_baseurl
-            if not public_baseurl:
-                raise ConfigError("cas_config requires a public_baseurl to be set")
-
             # TODO Update this to a _synapse URL.
+            public_baseurl = self.root.server.public_baseurl
             self.cas_service_url = public_baseurl + "_matrix/client/r0/login/cas/ticket"
+
             self.cas_displayname_attribute = cas_config.get("displayname_attribute")

             required_attributes = cas_config.get("required_attributes") or {}
             self.cas_required_attributes = _parsed_required_attributes_def(

View File

@@ -186,11 +186,6 @@ class EmailConfig(Config):
             if not self.email_notif_from:
                 missing.append("email.notif_from")

-            # public_baseurl is required to build password reset and validation links that
-            # will be emailed to users
-            if config.get("public_baseurl") is None:
-                missing.append("public_baseurl")
-
             if missing:
                 raise ConfigError(
                     MISSING_PASSWORD_RESET_CONFIG_ERROR % (", ".join(missing),)
@@ -296,9 +291,6 @@ class EmailConfig(Config):
             if not self.email_notif_from:
                 missing.append("email.notif_from")

-            if config.get("public_baseurl") is None:
-                missing.append("public_baseurl")
-
             if missing:
                 raise ConfigError(
                     "email.enable_notifs is True but required keys are missing: %s"

View File

@@ -59,8 +59,6 @@ class OIDCConfig(Config):
         )

         public_baseurl = self.root.server.public_baseurl
-        if public_baseurl is None:
-            raise ConfigError("oidc_config requires a public_baseurl to be set")
         self.oidc_callback_url = public_baseurl + "_synapse/client/oidc/callback"

     @property

View File

@@ -45,17 +45,6 @@ class RegistrationConfig(Config):
         account_threepid_delegates = config.get("account_threepid_delegates") or {}
         self.account_threepid_delegate_email = account_threepid_delegates.get("email")
         self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
-        if (
-            self.account_threepid_delegate_msisdn
-            and not self.root.server.public_baseurl
-        ):
-            raise ConfigError(
-                "The configuration option `public_baseurl` is required if "
-                "`account_threepid_delegate.msisdn` is set, such that "
-                "clients know where to submit validation tokens to. Please "
-                "configure `public_baseurl`."
-            )
-
         self.default_identity_server = config.get("default_identity_server")
         self.allow_guest_access = config.get("allow_guest_access", False)
@@ -240,7 +229,7 @@ class RegistrationConfig(Config):
        # in on this server.
        #
        # (By default, no suggestion is made, so it is left up to the client.
-       # This setting is ignored unless public_baseurl is also set.)
+       # This setting is ignored unless public_baseurl is also explicitly set.)
        #
        #default_identity_server: https://matrix.org
@@ -265,8 +254,6 @@ class RegistrationConfig(Config):
        # by the Matrix Identity Service API specification:
        # https://matrix.org/docs/spec/identity_service/latest
        #
-       # If a delegate is specified, the config option public_baseurl must also be filled out.
-       #
        account_threepid_delegates:
            #email: https://example.com     # Delegate email sending to example.com
            #msisdn: http://localhost:8090  # Delegate SMS sending to this local process

View File

@@ -199,14 +199,11 @@ class SAML2Config(Config):
         """
         import saml2

-        public_baseurl = self.root.server.public_baseurl
-        if public_baseurl is None:
-            raise ConfigError("saml2_config requires a public_baseurl to be set")
-
         if self.saml2_grandfathered_mxid_source_attribute:
             optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute)
         optional_attributes -= required_attributes

+        public_baseurl = self.root.server.public_baseurl
         metadata_url = public_baseurl + "_synapse/client/saml2/metadata.xml"
         response_url = public_baseurl + "_synapse/client/saml2/authn_response"
         return {

View File

@@ -16,6 +16,7 @@ import itertools
 import logging
 import os.path
 import re
+import urllib.parse
 from textwrap import indent
 from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union
@@ -262,11 +263,46 @@ class ServerConfig(Config):
         self.print_pidfile = config.get("print_pidfile")
         self.user_agent_suffix = config.get("user_agent_suffix")
         self.use_frozen_dicts = config.get("use_frozen_dicts", False)
+        self.serve_server_wellknown = config.get("serve_server_wellknown", False)

-        self.public_baseurl = config.get("public_baseurl")
-        if self.public_baseurl is not None:
-            if self.public_baseurl[-1] != "/":
-                self.public_baseurl += "/"
+        # Whether we should serve a "client well-known":
+        #  (a) at .well-known/matrix/client on our client HTTP listener
+        #  (b) in the response to /login
+        #
+        # ... which together help ensure that clients use our public_baseurl instead of
+        # whatever they were told by the user.
+        #
+        # For the sake of backwards compatibility with existing installations, this is
+        # True if public_baseurl is specified explicitly, and otherwise False. (The
+        # reasoning here is that we have no way of knowing that the default
+        # public_baseurl is actually correct for existing installations - many things
+        # will not work correctly, but that's (probably?) better than sending clients
+        # to a completely broken URL.
+        self.serve_client_wellknown = False
+
+        public_baseurl = config.get("public_baseurl")
+        if public_baseurl is None:
+            public_baseurl = f"https://{self.server_name}/"
+            logger.info("Using default public_baseurl %s", public_baseurl)
+        else:
+            self.serve_client_wellknown = True
+            if public_baseurl[-1] != "/":
+                public_baseurl += "/"
+        self.public_baseurl = public_baseurl
+
+        # check that public_baseurl is valid
+        try:
+            splits = urllib.parse.urlsplit(self.public_baseurl)
+        except Exception as e:
+            raise ConfigError(f"Unable to parse URL: {e}", ("public_baseurl",))
+        if splits.scheme not in ("https", "http"):
+            raise ConfigError(
+                f"Invalid scheme '{splits.scheme}': only https and http are supported"
+            )
+        if splits.query or splits.fragment:
+            raise ConfigError(
+                "public_baseurl cannot contain query parameters or a #-fragment"
+            )

         # Whether to enable user presence.
         presence_config = config.get("presence") or {}
@@ -772,8 +808,28 @@ class ServerConfig(Config):
        # Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
        # 'listeners' below).
        #
+       # Defaults to 'https://<server_name>/'.
+       #
        #public_baseurl: https://example.com/

+       # Uncomment the following to tell other servers to send federation traffic on
+       # port 443.
+       #
+       # By default, other servers will try to reach our server on port 8448, which can
+       # be inconvenient in some environments.
+       #
+       # Provided 'https://<server_name>/' on port 443 is routed to Synapse, this
+       # option configures Synapse to serve a file at
+       # 'https://<server_name>/.well-known/matrix/server'. This will tell other
+       # servers to send traffic to port 443 instead.
+       #
+       # See https://matrix-org.github.io/synapse/latest/delegate.html for more
+       # information.
+       #
+       # Defaults to 'false'.
+       #
+       #serve_server_wellknown: true
+
        # Set the soft limit on the number of file descriptors synapse can use
        # Zero is used to indicate synapse should set the soft limit to the
        # hard limit.
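As a quick sanity check after enabling serve_server_wellknown: true, the served document can be fetched over HTTPS; per the Matrix server-server spec it carries an "m.server" key. A minimal sketch, with example.com standing in as a placeholder for your server_name:

    # Sketch: fetch the server well-known that serve_server_wellknown exposes.
    # "example.com" is a placeholder for the homeserver's server_name.
    import json
    import urllib.request

    url = "https://example.com/.well-known/matrix/server"
    with urllib.request.urlopen(url) as resp:
        doc = json.load(resp)

    # Expected shape, per the federation spec: {"m.server": "example.com:443"}
    print(doc["m.server"])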

View File

@@ -101,13 +101,10 @@ class SSOConfig(Config):
         # gracefully to the client). This would make it pointless to ask the user for
         # confirmation, since the URL the confirmation page would be showing wouldn't be
         # the client's.
-        # public_baseurl is an optional setting, so we only add the fallback's URL to the
-        # list if it's provided (because we can't figure out what that URL is otherwise).
-        if self.root.server.public_baseurl:
-            login_fallback_url = (
-                self.root.server.public_baseurl + "_matrix/static/client/login"
-            )
-            self.sso_client_whitelist.append(login_fallback_url)
+        login_fallback_url = (
+            self.root.server.public_baseurl + "_matrix/static/client/login"
+        )
+        self.sso_client_whitelist.append(login_fallback_url)

     def generate_config_section(self, **kwargs):
         return """\
@@ -128,11 +125,10 @@ class SSOConfig(Config):
        # phishing attacks from evil.site. To avoid this, include a slash after the
        # hostname: "https://my.client/".
        #
-       # If public_baseurl is set, then the login fallback page (used by clients
-       # that don't natively support the required login flows) is whitelisted in
-       # addition to any URLs in this list.
+       # The login fallback page (used by clients that don't natively support the
+       # required login flows) is whitelisted in addition to any URLs in this list.
        #
-       # By default, this list is empty.
+       # By default, this list contains only the login fallback page.
        #
        #client_whitelist:
        #  - https://riot.im/develop

View File

@@ -63,7 +63,8 @@ class WriterLocations:

     Attributes:
         events: The instances that write to the event and backfill streams.
-        typing: The instance that writes to the typing stream.
+        typing: The instances that write to the typing stream. Currently
+            can only be a single instance.
         to_device: The instances that write to the to_device stream. Currently
             can only be a single instance.
         account_data: The instances that write to the account data streams. Currently
@@ -75,9 +76,15 @@ class WriterLocations:
     """

     events = attr.ib(
-        default=["master"], type=List[str], converter=_instance_to_list_converter
+        default=["master"],
+        type=List[str],
+        converter=_instance_to_list_converter,
+    )
+    typing = attr.ib(
+        default=["master"],
+        type=List[str],
+        converter=_instance_to_list_converter,
     )
-    typing = attr.ib(default="master", type=str)
     to_device = attr.ib(
         default=["master"],
         type=List[str],
@@ -217,6 +224,11 @@ class WorkerConfig(Config):
                         % (instance, stream)
                     )

+        if len(self.writers.typing) != 1:
+            raise ConfigError(
+                "Must only specify one instance to handle `typing` messages."
+            )
+
         if len(self.writers.to_device) != 1:
             raise ConfigError(
                 "Must only specify one instance to handle `to_device` messages."

View File

@ -22,6 +22,7 @@ import attr
from signedjson.key import ( from signedjson.key import (
decode_verify_key_bytes, decode_verify_key_bytes,
encode_verify_key_base64, encode_verify_key_base64,
get_verify_key,
is_signing_algorithm_supported, is_signing_algorithm_supported,
) )
from signedjson.sign import ( from signedjson.sign import (
@ -30,6 +31,7 @@ from signedjson.sign import (
signature_ids, signature_ids,
verify_signed_json, verify_signed_json,
) )
from signedjson.types import VerifyKey
from unpaddedbase64 import decode_base64 from unpaddedbase64 import decode_base64
from twisted.internet import defer from twisted.internet import defer
@ -177,6 +179,8 @@ class Keyring:
clock=hs.get_clock(), clock=hs.get_clock(),
process_batch_callback=self._inner_fetch_key_requests, process_batch_callback=self._inner_fetch_key_requests,
) )
self.verify_key = get_verify_key(hs.signing_key)
self.hostname = hs.hostname
async def verify_json_for_server( async def verify_json_for_server(
self, self,
@ -196,6 +200,7 @@ class Keyring:
validity_time: timestamp at which we require the signing key to validity_time: timestamp at which we require the signing key to
be valid. (0 implies we don't care) be valid. (0 implies we don't care)
""" """
request = VerifyJsonRequest.from_json_object( request = VerifyJsonRequest.from_json_object(
server_name, server_name,
json_object, json_object,
@ -262,6 +267,11 @@ class Keyring:
Codes.UNAUTHORIZED, Codes.UNAUTHORIZED,
) )
# If we are the originating server don't fetch verify key for self over federation
if verify_request.server_name == self.hostname:
await self._process_json(self.verify_key, verify_request)
return
# Add the keys we need to verify to the queue for retrieval. We queue # Add the keys we need to verify to the queue for retrieval. We queue
# up requests for the same server so we don't end up with many in flight # up requests for the same server so we don't end up with many in flight
# requests for the same keys. # requests for the same keys.
@ -285,35 +295,8 @@ class Keyring:
if key_result.valid_until_ts < verify_request.minimum_valid_until_ts: if key_result.valid_until_ts < verify_request.minimum_valid_until_ts:
continue continue
verify_key = key_result.verify_key await self._process_json(key_result.verify_key, verify_request)
json_object = verify_request.get_json_object() verified = True
try:
verify_signed_json(
json_object,
verify_request.server_name,
verify_key,
)
verified = True
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
verify_request.server_name,
verify_key.alg,
verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s: %s"
% (
verify_request.server_name,
verify_key.alg,
verify_key.version,
str(e),
),
Codes.UNAUTHORIZED,
)
if not verified: if not verified:
raise SynapseError( raise SynapseError(
@ -322,6 +305,39 @@ class Keyring:
Codes.UNAUTHORIZED, Codes.UNAUTHORIZED,
) )
async def _process_json(
self, verify_key: VerifyKey, verify_request: VerifyJsonRequest
) -> None:
"""Processes the `VerifyJsonRequest`. Raises if the signature can't be
verified.
"""
try:
verify_signed_json(
verify_request.get_json_object(),
verify_request.server_name,
verify_key,
)
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
verify_request.server_name,
verify_key.alg,
verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s: %s"
% (
verify_request.server_name,
verify_key.alg,
verify_key.version,
str(e),
),
Codes.UNAUTHORIZED,
)
async def _inner_fetch_key_requests( async def _inner_fetch_key_requests(
self, requests: List[_FetchKeyRequest] self, requests: List[_FetchKeyRequest]
) -> Dict[str, Dict[str, FetchKeyResult]]: ) -> Dict[str, Dict[str, FetchKeyResult]]:
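The new short-circuit works because verifying our own signature needs nothing from federation: the verify key is derived directly from the local signing key. A self-contained sketch of that round trip using the same signedjson primitives, with example.com as a placeholder server name:

    # Sketch: sign a JSON object and verify it locally, which is what the
    # Keyring now does for requests whose server_name matches our hostname.
    from signedjson.key import generate_signing_key, get_verify_key
    from signedjson.sign import sign_json, verify_signed_json

    signing_key = generate_signing_key("key1")
    verify_key = get_verify_key(signing_key)

    signed = sign_json({"foo": "bar"}, "example.com", signing_key)
    # Raises SignatureVerifyException if the signature does not check out.
    verify_signed_json(signed, "example.com", verify_key)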

View File

@@ -16,8 +16,23 @@

 import abc
 import os
-from typing import Dict, Optional, Tuple, Type
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Dict,
+    Generic,
+    Iterable,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    Type,
+    TypeVar,
+    Union,
+    overload,
+)

+from typing_extensions import Literal
 from unpaddedbase64 import encode_base64

 from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
@@ -26,6 +41,9 @@ from synapse.util.caches import intern_dict
 from synapse.util.frozenutils import freeze
 from synapse.util.stringutils import strtobool

+if TYPE_CHECKING:
+    from synapse.events.builder import EventBuilder
+
 # Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
 # bugs where we accidentally share e.g. signature dicts. However, converting a
 # dict to frozen_dicts is expensive.
@@ -37,7 +55,23 @@ from synapse.util.stringutils import strtobool
 USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))


-class DictProperty:
+T = TypeVar("T")
+
+
+# DictProperty (and DefaultDictProperty) require the classes they're used with to
+# have a _dict property to pull properties from.
+#
+# TODO _DictPropertyInstance should not include EventBuilder but due to
+# https://github.com/python/mypy/issues/5570 it thinks the DictProperty and
+# DefaultDictProperty get applied to EventBuilder when it is in a Union with
+# EventBase. This is the least invasive hack to get mypy to comply.
+#
+# Note that DictProperty/DefaultDictProperty cannot actually be used with
+# EventBuilder as it lacks a _dict property.
+_DictPropertyInstance = Union["_EventInternalMetadata", "EventBase", "EventBuilder"]
+
+
+class DictProperty(Generic[T]):
     """An object property which delegates to the `_dict` within its parent object."""

     __slots__ = ["key"]
@@ -45,12 +79,33 @@ class DictProperty:
     def __init__(self, key: str):
         self.key = key

-    def __get__(self, instance, owner=None):
+    @overload
+    def __get__(
+        self,
+        instance: Literal[None],
+        owner: Optional[Type[_DictPropertyInstance]] = None,
+    ) -> "DictProperty":
+        ...
+
+    @overload
+    def __get__(
+        self,
+        instance: _DictPropertyInstance,
+        owner: Optional[Type[_DictPropertyInstance]] = None,
+    ) -> T:
+        ...
+
+    def __get__(
+        self,
+        instance: Optional[_DictPropertyInstance],
+        owner: Optional[Type[_DictPropertyInstance]] = None,
+    ) -> Union[T, "DictProperty"]:
         # if the property is accessed as a class property rather than an instance
         # property, return the property itself rather than the value
         if instance is None:
             return self
         try:
+            assert isinstance(instance, (EventBase, _EventInternalMetadata))
             return instance._dict[self.key]
         except KeyError as e1:
             # We want this to look like a regular attribute error (mostly so that
@@ -65,10 +120,12 @@ class DictProperty:
                 "'%s' has no '%s' property" % (type(instance), self.key)
             ) from e1.__context__

-    def __set__(self, instance, v):
+    def __set__(self, instance: _DictPropertyInstance, v: T) -> None:
+        assert isinstance(instance, (EventBase, _EventInternalMetadata))
         instance._dict[self.key] = v

-    def __delete__(self, instance):
+    def __delete__(self, instance: _DictPropertyInstance) -> None:
+        assert isinstance(instance, (EventBase, _EventInternalMetadata))
         try:
             del instance._dict[self.key]
         except KeyError as e1:
@@ -77,7 +134,7 @@ class DictProperty:
             ) from e1.__context__


-class DefaultDictProperty(DictProperty):
+class DefaultDictProperty(DictProperty, Generic[T]):
     """An extension of DictProperty which provides a default if the property is
     not present in the parent's _dict.

@@ -86,13 +143,34 @@ class DefaultDictProperty(DictProperty):

     __slots__ = ["default"]

-    def __init__(self, key, default):
+    def __init__(self, key: str, default: T):
         super().__init__(key)
         self.default = default

-    def __get__(self, instance, owner=None):
+    @overload
+    def __get__(
+        self,
+        instance: Literal[None],
+        owner: Optional[Type[_DictPropertyInstance]] = None,
+    ) -> "DefaultDictProperty":
+        ...
+
+    @overload
+    def __get__(
+        self,
+        instance: _DictPropertyInstance,
+        owner: Optional[Type[_DictPropertyInstance]] = None,
+    ) -> T:
+        ...
+
+    def __get__(
+        self,
+        instance: Optional[_DictPropertyInstance],
+        owner: Optional[Type[_DictPropertyInstance]] = None,
+    ) -> Union[T, "DefaultDictProperty"]:
         if instance is None:
             return self
+        assert isinstance(instance, (EventBase, _EventInternalMetadata))
         return instance._dict.get(self.key, self.default)
@@ -111,22 +189,22 @@ class _EventInternalMetadata:
         # in the DAG)
         self.outlier = False

-    out_of_band_membership: bool = DictProperty("out_of_band_membership")
-    send_on_behalf_of: str = DictProperty("send_on_behalf_of")
-    recheck_redaction: bool = DictProperty("recheck_redaction")
-    soft_failed: bool = DictProperty("soft_failed")
-    proactively_send: bool = DictProperty("proactively_send")
-    redacted: bool = DictProperty("redacted")
-    txn_id: str = DictProperty("txn_id")
-    token_id: int = DictProperty("token_id")
-    historical: bool = DictProperty("historical")
+    out_of_band_membership: DictProperty[bool] = DictProperty("out_of_band_membership")
+    send_on_behalf_of: DictProperty[str] = DictProperty("send_on_behalf_of")
+    recheck_redaction: DictProperty[bool] = DictProperty("recheck_redaction")
+    soft_failed: DictProperty[bool] = DictProperty("soft_failed")
+    proactively_send: DictProperty[bool] = DictProperty("proactively_send")
+    redacted: DictProperty[bool] = DictProperty("redacted")
+    txn_id: DictProperty[str] = DictProperty("txn_id")
+    token_id: DictProperty[int] = DictProperty("token_id")
+    historical: DictProperty[bool] = DictProperty("historical")

     # XXX: These are set by StreamWorkerStore._set_before_and_after.
     # I'm pretty sure that these are never persisted to the database, so shouldn't
     # be here
-    before: RoomStreamToken = DictProperty("before")
-    after: RoomStreamToken = DictProperty("after")
-    order: Tuple[int, int] = DictProperty("order")
+    before: DictProperty[RoomStreamToken] = DictProperty("before")
+    after: DictProperty[RoomStreamToken] = DictProperty("after")
+    order: DictProperty[Tuple[int, int]] = DictProperty("order")

     def get_dict(self) -> JsonDict:
         return dict(self._dict)
@@ -162,9 +240,6 @@ class _EventInternalMetadata:
         If the sender of the redaction event is allowed to redact any event
         due to auth rules, then this will always return false.
-
-        Returns:
-            bool
         """
         return self._dict.get("recheck_redaction", False)
@@ -176,32 +251,23 @@ class _EventInternalMetadata:
            sent to clients.
         2. They should not be added to the forward extremities (and
            therefore not to current state).
-
-        Returns:
-            bool
         """
         return self._dict.get("soft_failed", False)

-    def should_proactively_send(self):
+    def should_proactively_send(self) -> bool:
         """Whether the event, if ours, should be sent to other clients and
         servers.

         This is used for sending dummy events internally. Servers and clients
         can still explicitly fetch the event.
-
-        Returns:
-            bool
         """
         return self._dict.get("proactively_send", True)

-    def is_redacted(self):
+    def is_redacted(self) -> bool:
         """Whether the event has been redacted.

         This is used for efficiently checking whether an event has been
         marked as redacted without needing to make another database call.
-
-        Returns:
-            bool
         """
         return self._dict.get("redacted", False)
@@ -241,29 +307,31 @@ class EventBase(metaclass=abc.ABCMeta):

         self.internal_metadata = _EventInternalMetadata(internal_metadata_dict)

-    auth_events = DictProperty("auth_events")
-    depth = DictProperty("depth")
-    content = DictProperty("content")
-    hashes = DictProperty("hashes")
-    origin = DictProperty("origin")
-    origin_server_ts = DictProperty("origin_server_ts")
-    prev_events = DictProperty("prev_events")
-    redacts = DefaultDictProperty("redacts", None)
-    room_id = DictProperty("room_id")
-    sender = DictProperty("sender")
-    state_key = DictProperty("state_key")
-    type = DictProperty("type")
-    user_id = DictProperty("sender")
+    depth: DictProperty[int] = DictProperty("depth")
+    content: DictProperty[JsonDict] = DictProperty("content")
+    hashes: DictProperty[Dict[str, str]] = DictProperty("hashes")
+    origin: DictProperty[str] = DictProperty("origin")
+    origin_server_ts: DictProperty[int] = DictProperty("origin_server_ts")
+    redacts: DefaultDictProperty[Optional[str]] = DefaultDictProperty("redacts", None)
+    room_id: DictProperty[str] = DictProperty("room_id")
+    sender: DictProperty[str] = DictProperty("sender")
+    # TODO state_key should be Optional[str], this is generally asserted in Synapse
+    # by calling is_state() first (which ensures this), but it is hard (not possible?)
+    # to properly annotate that calling is_state() asserts that state_key exists
+    # and is non-None.
+    state_key: DictProperty[str] = DictProperty("state_key")
+    type: DictProperty[str] = DictProperty("type")
+    user_id: DictProperty[str] = DictProperty("sender")

     @property
     def event_id(self) -> str:
         raise NotImplementedError()

     @property
-    def membership(self):
+    def membership(self) -> str:
         return self.content["membership"]

-    def is_state(self):
+    def is_state(self) -> bool:
         return hasattr(self, "state_key") and self.state_key is not None

     def get_dict(self) -> JsonDict:
@@ -272,13 +340,13 @@ class EventBase(metaclass=abc.ABCMeta):
         return d

-    def get(self, key, default=None):
+    def get(self, key: str, default: Optional[Any] = None) -> Any:
         return self._dict.get(key, default)

-    def get_internal_metadata_dict(self):
+    def get_internal_metadata_dict(self) -> JsonDict:
         return self.internal_metadata.get_dict()

-    def get_pdu_json(self, time_now=None) -> JsonDict:
+    def get_pdu_json(self, time_now: Optional[int] = None) -> JsonDict:
         pdu_json = self.get_dict()

         if time_now is not None and "age_ts" in pdu_json["unsigned"]:
@@ -305,49 +373,46 @@ class EventBase(metaclass=abc.ABCMeta):

         return template_json

-    def __set__(self, instance, value):
-        raise AttributeError("Unrecognized attribute %s" % (instance,))
-
-    def __getitem__(self, field):
+    def __getitem__(self, field: str) -> Optional[Any]:
         return self._dict[field]

-    def __contains__(self, field):
+    def __contains__(self, field: str) -> bool:
         return field in self._dict

-    def items(self):
+    def items(self) -> List[Tuple[str, Optional[Any]]]:
         return list(self._dict.items())

-    def keys(self):
+    def keys(self) -> Iterable[str]:
         return self._dict.keys()

-    def prev_event_ids(self):
+    def prev_event_ids(self) -> Sequence[str]:
         """Returns the list of prev event IDs. The order matches the order
         specified in the event, though there is no meaning to it.

         Returns:
-            list[str]: The list of event IDs of this event's prev_events
+            The list of event IDs of this event's prev_events
         """
-        return [e for e, _ in self.prev_events]
+        return [e for e, _ in self._dict["prev_events"]]

-    def auth_event_ids(self):
+    def auth_event_ids(self) -> Sequence[str]:
         """Returns the list of auth event IDs. The order matches the order
         specified in the event, though there is no meaning to it.

         Returns:
-            list[str]: The list of event IDs of this event's auth_events
+            The list of event IDs of this event's auth_events
         """
-        return [e for e, _ in self.auth_events]
+        return [e for e, _ in self._dict["auth_events"]]

-    def freeze(self):
+    def freeze(self) -> None:
         """'Freeze' the event dict, so it cannot be modified by accident"""

         # this will be a no-op if the event dict is already frozen.
         self._dict = freeze(self._dict)

-    def __str__(self):
+    def __str__(self) -> str:
         return self.__repr__()

-    def __repr__(self):
+    def __repr__(self) -> str:
         rejection = f"REJECTED={self.rejected_reason}, " if self.rejected_reason else ""

         return (
@@ -443,7 +508,7 @@ class FrozenEventV2(EventBase):
         else:
             frozen_dict = event_dict

-        self._event_id = None
+        self._event_id: Optional[str] = None

         super().__init__(
             frozen_dict,
@@ -455,7 +520,7 @@ class FrozenEventV2(EventBase):
         )

     @property
-    def event_id(self):
+    def event_id(self) -> str:
         # We have to import this here as otherwise we get an import loop which
         # is hard to break.
         from synapse.crypto.event_signing import compute_event_reference_hash
@@ -465,23 +530,23 @@ class FrozenEventV2(EventBase):
             self._event_id = "$" + encode_base64(compute_event_reference_hash(self)[1])
         return self._event_id

-    def prev_event_ids(self):
+    def prev_event_ids(self) -> Sequence[str]:
         """Returns the list of prev event IDs. The order matches the order
         specified in the event, though there is no meaning to it.

         Returns:
-            list[str]: The list of event IDs of this event's prev_events
+            The list of event IDs of this event's prev_events
         """
-        return self.prev_events
+        return self._dict["prev_events"]

-    def auth_event_ids(self):
+    def auth_event_ids(self) -> Sequence[str]:
         """Returns the list of auth event IDs. The order matches the order
         specified in the event, though there is no meaning to it.

         Returns:
-            list[str]: The list of event IDs of this event's auth_events
+            The list of event IDs of this event's auth_events
         """
-        return self.auth_events
+        return self._dict["auth_events"]


 class FrozenEventV3(FrozenEventV2):
@@ -490,7 +555,7 @@ class FrozenEventV3(FrozenEventV2):
     format_version = EventFormatVersions.V3  # All events of this type are V3

     @property
-    def event_id(self):
+    def event_id(self) -> str:
         # We have to import this here as otherwise we get an import loop which
         # is hard to break.
         from synapse.crypto.event_signing import compute_event_reference_hash
@@ -503,12 +568,14 @@ class FrozenEventV3(FrozenEventV2):
         return self._event_id


-def _event_type_from_format_version(format_version: int) -> Type[EventBase]:
+def _event_type_from_format_version(
+    format_version: int,
+) -> Type[Union[FrozenEvent, FrozenEventV2, FrozenEventV3]]:
     """Returns the python type to use to construct an Event object for the
     given event format version.

     Args:
-        format_version (int): The event format version
+        format_version: The event format version

     Returns:
         type: A type that can be initialized as per the initializer of
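The shape of the annotation may be easier to see in isolation. A reduced sketch of the same pattern, without the overloads and asserts the real class needs to satisfy mypy (the names here are illustrative, not Synapse's):

    # Reduced sketch of the DictProperty[T] pattern: a descriptor that
    # delegates attribute access to the owner's _dict, parameterised by the
    # value type so that annotations like DictProperty[bool] type-check.
    from typing import Any, Dict, Generic, Optional, TypeVar, Union

    T = TypeVar("T")

    class DictProp(Generic[T]):
        def __init__(self, key: str) -> None:
            self.key = key

        def __get__(
            self, instance: Any, owner: Optional[type] = None
        ) -> Union[T, "DictProp"]:
            if instance is None:
                # Class-level access returns the descriptor itself.
                return self
            return instance._dict[self.key]

    class Event:
        sender: DictProp[str] = DictProp("sender")

        def __init__(self, d: Dict[str, Any]) -> None:
            self._dict = d

    assert Event({"sender": "@alice:example.com"}).sender == "@alice:example.com"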

View File

@@ -14,7 +14,7 @@
 import logging
 from typing import TYPE_CHECKING, Any, Awaitable, Callable, List, Optional, Tuple

-from synapse.api.errors import SynapseError
+from synapse.api.errors import ModuleFailedException, SynapseError
 from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
 from synapse.types import Requester, StateMap
@@ -36,6 +36,7 @@ CHECK_THREEPID_CAN_BE_INVITED_CALLBACK = Callable[
 CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK = Callable[
     [str, StateMap[EventBase], str], Awaitable[bool]
 ]
+ON_NEW_EVENT_CALLBACK = Callable[[EventBase, StateMap[EventBase]], Awaitable]


 def load_legacy_third_party_event_rules(hs: "HomeServer") -> None:
@@ -152,6 +153,7 @@ class ThirdPartyEventRules:
         self._check_visibility_can_be_modified_callbacks: List[
             CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
         ] = []
+        self._on_new_event_callbacks: List[ON_NEW_EVENT_CALLBACK] = []

     def register_third_party_rules_callbacks(
         self,
@@ -163,6 +165,7 @@ class ThirdPartyEventRules:
         check_visibility_can_be_modified: Optional[
             CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
         ] = None,
+        on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
     ) -> None:
         """Register callbacks from modules for each hook."""
         if check_event_allowed is not None:
@@ -181,6 +184,9 @@ class ThirdPartyEventRules:
                 check_visibility_can_be_modified,
             )

+        if on_new_event is not None:
+            self._on_new_event_callbacks.append(on_new_event)
+
     async def check_event_allowed(
         self, event: EventBase, context: EventContext
     ) -> Tuple[bool, Optional[dict]]:
@@ -227,9 +233,10 @@ class ThirdPartyEventRules:
                 # This module callback needs a rework so that hacks such as
                 # this one are not necessary.
                 raise e
-            except Exception as e:
-                logger.warning("Failed to run module API callback %s: %s", callback, e)
-                continue
+            except Exception:
+                raise ModuleFailedException(
+                    "Failed to run `check_event_allowed` module API callback"
+                )

         # Return if the event shouldn't be allowed or if the module came up with a
         # replacement dict for the event.
@@ -321,6 +328,31 @@ class ThirdPartyEventRules:

         return True

+    async def on_new_event(self, event_id: str) -> None:
+        """Let modules act on events after they've been sent (e.g. auto-accepting
+        invites, etc.)
+
+        Args:
+            event_id: The ID of the event.
+
+        Raises:
+            ModuleFailureError if a callback raised any exception.
+        """
+        # Bail out early without hitting the store if we don't have any callbacks
+        if len(self._on_new_event_callbacks) == 0:
+            return
+
+        event = await self.store.get_event(event_id)
+        state_events = await self._get_state_map_for_room(event.room_id)
+
+        for callback in self._on_new_event_callbacks:
+            try:
+                await callback(event, state_events)
+            except Exception as e:
+                logger.exception(
+                    "Failed to run module API callback %s: %s", callback, e
+                )
+
     async def _get_state_map_for_room(self, room_id: str) -> StateMap[EventBase]:
         """Given a room ID, return the state events of that room.

View File

@@ -55,7 +55,7 @@ class EventValidator:
         ]

         for k in required:
-            if not hasattr(event, k):
+            if k not in event:
                 raise SynapseError(400, "Event does not have key %s" % (k,))

         # Check that the following keys have string values

View File

@@ -227,7 +227,7 @@ class FederationClient(FederationBase):
         )

     async def backfill(
-        self, dest: str, room_id: str, limit: int, extremities: Iterable[str]
+        self, dest: str, room_id: str, limit: int, extremities: Collection[str]
     ) -> Optional[List[EventBase]]:
         """Requests some more historic PDUs for the given room from the
         given destination server.
@@ -237,6 +237,8 @@ class FederationClient(FederationBase):
             room_id: The room_id to backfill.
             limit: The maximum number of events to return.
             extremities: our current backwards extremities, to backfill from
+                Must be a Collection that is falsy when empty.
+                (Iterable is not enough here!)
         """
         logger.debug("backfill extrem=%s", extremities)
@@ -250,11 +252,22 @@ class FederationClient(FederationBase):

         logger.debug("backfill transaction_data=%r", transaction_data)

+        if not isinstance(transaction_data, dict):
+            # TODO we probably want an exception type specific to federation
+            # client validation.
+            raise TypeError("Backfill transaction_data is not a dict.")
+
+        transaction_data_pdus = transaction_data.get("pdus")
+        if not isinstance(transaction_data_pdus, list):
+            # TODO we probably want an exception type specific to federation
+            # client validation.
+            raise TypeError("transaction_data.pdus is not a list.")
+
         room_version = await self.store.get_room_version(room_id)

         pdus = [
             event_from_pdu_json(p, room_version, outlier=False)
-            for p in transaction_data["pdus"]
+            for p in transaction_data_pdus
         ]

         # Check signatures and hash of pdus, removing any from the list that fail checks
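The "falsy when empty" caveat is the whole reason for the Iterable-to-Collection change: an exhausted generator is still truthy, so an emptiness check on a plain Iterable can silently pass. A tiny illustration:

    # Why the annotation insists on Collection rather than Iterable:
    # emptiness checks only work on the former.
    empty_list: list = []
    empty_gen = (x for x in [])

    print(not empty_list)  # True  - collections are falsy when empty
    print(not empty_gen)   # False - a generator is always truthy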

View File

@@ -213,6 +213,11 @@ class FederationServer(FederationBase):
             self._started_handling_of_staged_events = True
             self._handle_old_staged_events()

+            # Start a periodic check for old staged events. This is to handle
+            # the case where locks time out, e.g. if another process gets killed
+            # without dropping its locks.
+            self._clock.looping_call(self._handle_old_staged_events, 60 * 1000)
+
         # keep this as early as possible to make the calculated origin ts as
         # accurate as possible.
         request_time = self._clock.time_msec()
@@ -295,14 +300,16 @@ class FederationServer(FederationBase):
         Returns:
             HTTP response code and body
         """
-        response = await self.transaction_actions.have_responded(origin, transaction)
+        existing_response = await self.transaction_actions.have_responded(
+            origin, transaction
+        )

-        if response:
+        if existing_response:
             logger.debug(
                 "[%s] We've already responded to this request",
                 transaction.transaction_id,
             )
-            return response
+            return existing_response

         logger.debug("[%s] Transaction is new", transaction.transaction_id)
@@ -632,7 +639,7 @@ class FederationServer(FederationBase):

     async def on_make_knock_request(
         self, origin: str, room_id: str, user_id: str, supported_versions: List[str]
-    ) -> Dict[str, Union[EventBase, str]]:
+    ) -> JsonDict:
         """We've received a /make_knock/ request, so we create a partial knock
         event for the room and hand that back, along with the room version, to the knocking
         homeserver. We do *not* persist or process this event until the other server has
@@ -1230,10 +1237,6 @@ class FederationHandlerRegistry:

         self.query_handlers[query_type] = handler

-    def register_instance_for_edu(self, edu_type: str, instance_name: str) -> None:
-        """Register that the EDU handler is on a different instance than master."""
-        self._edu_type_to_instance[edu_type] = [instance_name]
-
     def register_instances_for_edu(
         self, edu_type: str, instance_names: List[str]
     ) -> None:

View File

@@ -149,7 +149,6 @@ class TransactionManager:
             )
         except HttpResponseException as e:
             code = e.code
-            response = e.response

             set_tag(tags.ERROR, True)

View File

@@ -15,7 +15,19 @@

 import logging
 import urllib
-from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple, Union
+from typing import (
+    Any,
+    Awaitable,
+    Callable,
+    Collection,
+    Dict,
+    Iterable,
+    List,
+    Mapping,
+    Optional,
+    Tuple,
+    Union,
+)

 import attr
 import ijson
@@ -100,7 +112,7 @@ class TransportLayerClient:

     @log_function
     async def backfill(
-        self, destination: str, room_id: str, event_tuples: Iterable[str], limit: int
+        self, destination: str, room_id: str, event_tuples: Collection[str], limit: int
     ) -> Optional[JsonDict]:
         """Requests `limit` previous PDUs in a given context before list of
         PDUs.
@@ -108,7 +120,9 @@ class TransportLayerClient:
         Args:
             destination
             room_id
-            event_tuples
+            event_tuples:
+                Must be a Collection that is falsy when empty.
+                (Iterable is not enough here!)
             limit

         Returns:
@@ -786,7 +800,7 @@ class TransportLayerClient:
     @log_function
     def join_group(
         self, destination: str, group_id: str, user_id: str, content: JsonDict
-    ) -> JsonDict:
+    ) -> Awaitable[JsonDict]:
         """Attempts to join a group"""

         path = _create_v1_path("/groups/%s/users/%s/join", group_id, user_id)
@@ -1296,14 +1310,17 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
         self._coro_state = ijson.items_coro(
             _event_list_parser(room_version, self._response.state),
             prefix + "state.item",
+            use_float=True,
         )
         self._coro_auth = ijson.items_coro(
             _event_list_parser(room_version, self._response.auth_events),
             prefix + "auth_chain.item",
+            use_float=True,
         )
         self._coro_event = ijson.kvitems_coro(
             _event_parser(self._response.event_dict),
             prefix + "org.matrix.msc3083.v2.event",
+            use_float=True,
         )

     def write(self, data: bytes) -> int:
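use_float=True changes how ijson yields JSON numbers: without it, non-integer numbers come back as decimal.Decimal, which the stdlib json module refuses to serialise. A small sketch of the difference, using ijson's pull API rather than the coroutine API above:

    # Sketch: the effect of use_float on how ijson parses JSON numbers.
    import io
    import ijson

    data = io.BytesIO(b'{"origin_server_ts": 1.5}')
    print(next(ijson.items(data, "origin_server_ts")))  # Decimal('1.5')

    data.seek(0)
    print(next(ijson.items(data, "origin_server_ts", use_float=True)))  # 1.5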

View File

@@ -90,6 +90,7 @@ class AdminHandler:
                 Membership.LEAVE,
                 Membership.BAN,
                 Membership.INVITE,
+                Membership.KNOCK,
             ),
         )
@@ -122,6 +123,13 @@ class AdminHandler:
                         invited_state = invite.unsigned["invite_room_state"]
                         writer.write_invite(room_id, invite, invited_state)

+                if room.membership == Membership.KNOCK:
+                    event_id = room.event_id
+                    knock = await self.store.get_event(event_id, allow_none=True)
+                    if knock:
+                        knock_state = knock.unsigned["knock_room_state"]
+                        writer.write_knock(room_id, knock, knock_state)
+
                 continue

             # We only want to bother fetching events up to the last time they
@@ -238,6 +246,20 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):
         """
         raise NotImplementedError()

+    @abc.abstractmethod
+    def write_knock(
+        self, room_id: str, event: EventBase, state: StateMap[dict]
+    ) -> None:
+        """Write a knock for the room, with associated knock state.
+
+        Args:
+            room_id: The room ID the knock is for.
+            event: The knock event.
+            state: A subset of the state at the knock, with a subset of the
+                event keys (type, state_key content and sender).
+        """
+        raise NotImplementedError()
+
     @abc.abstractmethod
     def finished(self) -> Any:
         """Called when all data has successfully been exported and written.

View File

@ -34,6 +34,7 @@ from synapse.metrics.background_process_metrics import (
) )
from synapse.storage.databases.main.directory import RoomAliasMapping from synapse.storage.databases.main.directory import RoomAliasMapping
from synapse.types import JsonDict, RoomAlias, RoomStreamToken, UserID from synapse.types import JsonDict, RoomAlias, RoomStreamToken, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.metrics import Measure from synapse.util.metrics import Measure
if TYPE_CHECKING: if TYPE_CHECKING:
@ -58,6 +59,10 @@ class ApplicationServicesHandler:
self.current_max = 0 self.current_max = 0
self.is_processing = False self.is_processing = False
self._ephemeral_events_linearizer = Linearizer(
name="appservice_ephemeral_events"
)
def notify_interested_services(self, max_token: RoomStreamToken) -> None: def notify_interested_services(self, max_token: RoomStreamToken) -> None:
"""Notifies (pushes) all application services interested in this event. """Notifies (pushes) all application services interested in this event.
@ -182,7 +187,7 @@ class ApplicationServicesHandler:
def notify_interested_services_ephemeral( def notify_interested_services_ephemeral(
self, self,
stream_key: str, stream_key: str,
new_token: Optional[int], new_token: Union[int, RoomStreamToken],
users: Optional[Collection[Union[str, UserID]]] = None, users: Optional[Collection[Union[str, UserID]]] = None,
) -> None: ) -> None:
""" """
@ -203,7 +208,7 @@ class ApplicationServicesHandler:
Appservices will only receive ephemeral events that fall within their Appservices will only receive ephemeral events that fall within their
registered user and room namespaces. registered user and room namespaces.
new_token: The latest stream token. new_token: The stream token of the event.
users: The users that should be informed of the new event, if any. users: The users that should be informed of the new event, if any.
""" """
if not self.notify_appservices: if not self.notify_appservices:
@ -212,6 +217,19 @@ class ApplicationServicesHandler:
if stream_key not in ("typing_key", "receipt_key", "presence_key"): if stream_key not in ("typing_key", "receipt_key", "presence_key"):
return return
# Assert that new_token is an integer (and not a RoomStreamToken).
# All of the supported streams that this function handles use an
# integer to track progress (rather than a RoomStreamToken - a
# vector clock implementation) as they don't support multiple
# stream writers.
#
# As a result, we simply assert that new_token is an integer.
# If we do end up needing to pass a RoomStreamToken down here
# in the future, using RoomStreamToken.stream (the minimum stream
# position) to convert to an ascending integer value should work.
# Additional context: https://github.com/matrix-org/synapse/pull/11137
assert isinstance(new_token, int)
         services = [
             service
             for service in self.store.get_app_services()
@@ -231,14 +249,13 @@ class ApplicationServicesHandler:
         self,
         services: List[ApplicationService],
         stream_key: str,
-        new_token: Optional[int],
+        new_token: int,
         users: Collection[Union[str, UserID]],
     ) -> None:
         logger.debug("Checking interested services for %s" % (stream_key))
 
         with Measure(self.clock, "notify_interested_services_ephemeral"):
             for service in services:
-                # Only handle typing if we have the latest token
-                if stream_key == "typing_key" and new_token is not None:
+                if stream_key == "typing_key":
                     # Note that we don't persist the token (via set_type_stream_id_for_appservice)
                     # for typing_key due to performance reasons and due to their highly
                     # ephemeral nature.
@@ -248,26 +265,37 @@ class ApplicationServicesHandler:
                     events = await self._handle_typing(service, new_token)
                     if events:
                         self.scheduler.submit_ephemeral_events_for_as(service, events)
                     continue
 
-                elif stream_key == "receipt_key":
-                    events = await self._handle_receipts(service)
-                    if events:
-                        self.scheduler.submit_ephemeral_events_for_as(service, events)
-
-                    # Persist the latest handled stream token for this appservice
-                    await self.store.set_type_stream_id_for_appservice(
-                        service, "read_receipt", new_token
-                    )
-
-                elif stream_key == "presence_key":
-                    events = await self._handle_presence(service, users)
-                    if events:
-                        self.scheduler.submit_ephemeral_events_for_as(service, events)
-
-                    # Persist the latest handled stream token for this appservice
-                    await self.store.set_type_stream_id_for_appservice(
-                        service, "presence", new_token
-                    )
+                # Since we read/update the stream position for this AS/stream
+                with (
+                    await self._ephemeral_events_linearizer.queue(
+                        (service.id, stream_key)
+                    )
+                ):
+                    if stream_key == "receipt_key":
+                        events = await self._handle_receipts(service, new_token)
+                        if events:
+                            self.scheduler.submit_ephemeral_events_for_as(
+                                service, events
+                            )
+
+                        # Persist the latest handled stream token for this appservice
+                        await self.store.set_type_stream_id_for_appservice(
+                            service, "read_receipt", new_token
+                        )
+
+                    elif stream_key == "presence_key":
+                        events = await self._handle_presence(service, users, new_token)
+                        if events:
+                            self.scheduler.submit_ephemeral_events_for_as(
+                                service, events
+                            )
+
+                        # Persist the latest handled stream token for this appservice
+                        await self.store.set_type_stream_id_for_appservice(
+                            service, "presence", new_token
+                        )
 
     async def _handle_typing(
         self, service: ApplicationService, new_token: int
@@ -304,7 +332,9 @@ class ApplicationServicesHandler:
         )
         return typing
 
-    async def _handle_receipts(self, service: ApplicationService) -> List[JsonDict]:
+    async def _handle_receipts(
+        self, service: ApplicationService, new_token: Optional[int]
+    ) -> List[JsonDict]:
         """
         Return the latest read receipts that the given application service should receive.
@@ -323,6 +353,12 @@ class ApplicationServicesHandler:
         from_key = await self.store.get_type_stream_id_for_appservice(
             service, "read_receipt"
         )
+        if new_token is not None and new_token <= from_key:
+            logger.debug(
+                "Rejecting token lower than or equal to stored: %s" % (new_token,)
+            )
+            return []
+
         receipts_source = self.event_sources.sources.receipt
         receipts, _ = await receipts_source.get_new_events_as(
             service=service, from_key=from_key
@@ -330,7 +366,10 @@ class ApplicationServicesHandler:
         return receipts
 
     async def _handle_presence(
-        self, service: ApplicationService, users: Collection[Union[str, UserID]]
+        self,
+        service: ApplicationService,
+        users: Collection[Union[str, UserID]],
+        new_token: Optional[int],
     ) -> List[JsonDict]:
         """
         Return the latest presence updates that the given application service should receive.
@@ -353,6 +392,12 @@ class ApplicationServicesHandler:
         from_key = await self.store.get_type_stream_id_for_appservice(
             service, "presence"
         )
+        if new_token is not None and new_token <= from_key:
+            logger.debug(
+                "Rejecting token lower than or equal to stored: %s" % (new_token,)
+            )
+            return []
+
         for user in users:
             if isinstance(user, str):
                 user = UserID.from_string(user)
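Two ideas in the hunk above are easy to miss in diff form: ephemeral-event handling is now serialised per (appservice, stream) pair before the stored stream position is read or written, and `_handle_receipts`/`_handle_presence` drop any notification whose token is at or below the stored position. A minimal standalone sketch of that combination, with simplified stand-ins (an `asyncio.Lock` instead of Synapse's `Linearizer`, an in-memory dict instead of the store):

    import asyncio
    from collections import defaultdict
    from typing import Dict, Tuple

    class EphemeralStreamTracker:
        """Toy model of the per-(service, stream) lock plus stale-token guard."""

        def __init__(self) -> None:
            # One lock per (service_id, stream_key), standing in for the Linearizer.
            self._locks: Dict[Tuple[str, str], asyncio.Lock] = defaultdict(asyncio.Lock)
            # Last stream position handled for each (service_id, stream_key).
            self._positions: Dict[Tuple[str, str], int] = defaultdict(int)

        async def handle(self, service_id: str, stream_key: str, new_token: int) -> bool:
            key = (service_id, stream_key)
            async with self._locks[key]:  # serialise read/update of the position
                if new_token <= self._positions[key]:
                    # Token at or below what we already handled: reject it.
                    return False
                # ... fetch and push the events in (old position, new_token] ...
                self._positions[key] = new_token  # persist the new position
                return True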

View File: synapse/handlers/auth.py

@@ -1989,7 +1989,9 @@ class PasswordAuthProvider:
         self,
         check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
         on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
-        auth_checkers: Optional[Dict[Tuple[str, Tuple], CHECK_AUTH_CALLBACK]] = None,
+        auth_checkers: Optional[
+            Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
+        ] = None,
     ) -> None:
         # Register check_3pid_auth callback
         if check_3pid_auth is not None:
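The only change above tightens the bare `Tuple` in the registry key to `Tuple[str, ...]`: auth checkers are keyed on a login type plus a variable-length tuple of field names, and the bare form would accept non-string members. A hedged illustration of the difference (the checker and its fields are invented for the example):

    from typing import Awaitable, Callable, Dict, Optional, Tuple

    # Keyed on (login_type, expected_field_names). `Tuple[str, ...]` means
    # "a tuple of any length whose members are all strings"; a bare `Tuple`
    # would let keys like ("m.login.password", (1, 2)) pass the type checker.
    AuthCheckers = Dict[
        Tuple[str, Tuple[str, ...]],
        Callable[..., Awaitable[Optional[str]]],
    ]

    async def check_password(username: str, login_type: str, fields: dict) -> Optional[str]:
        # Hypothetical checker: user ID on success, None on failure.
        return "@alice:example.org" if fields.get("password") == "hunter2" else None

    checkers: AuthCheckers = {
        ("m.login.password", ("password",)): check_password,
    }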

View File: synapse/handlers/directory.py

@@ -247,7 +247,7 @@ class DirectoryHandler:
             servers = result.servers
         else:
             try:
-                fed_result = await self.federation.make_query(
+                fed_result: Optional[JsonDict] = await self.federation.make_query(
                     destination=room_alias.domain,
                     query_type="directory",
                     args={"room_alias": room_alias.to_string()},

View File: synapse/handlers/e2e_keys.py

@@ -201,95 +201,19 @@ class E2eKeysHandler:
                 r[user_id] = remote_queries[user_id]
 
         # Now fetch any devices that we don't have in our cache
-        @trace
-        async def do_remote_query(destination: str) -> None:
-            """This is called when we are querying the device list of a user on
-            a remote homeserver and their device list is not in the device list
-            cache. If we share a room with this user and we're not querying for
-            specific user we will update the cache with their device list.
-            """
-
-            destination_query = remote_queries_not_in_cache[destination]
-
-            # We first consider whether we wish to update the device list cache with
-            # the users device list. We want to track a user's devices when the
-            # authenticated user shares a room with the queried user and the query
-            # has not specified a particular device.
-            # If we update the cache for the queried user we remove them from further
-            # queries. We use the more efficient batched query_client_keys for all
-            # remaining users
-
-            user_ids_updated = []
-            for (user_id, device_list) in destination_query.items():
-                if user_id in user_ids_updated:
-                    continue
-
-                if device_list:
-                    continue
-
-                room_ids = await self.store.get_rooms_for_user(user_id)
-                if not room_ids:
-                    continue
-
-                # We've decided we're sharing a room with this user and should
-                # probably be tracking their device lists. However, we haven't
-                # done an initial sync on the device list so we do it now.
-                try:
-                    if self._is_master:
-                        user_devices = await self.device_handler.device_list_updater.user_device_resync(
-                            user_id
-                        )
-                    else:
-                        user_devices = await self._user_device_resync_client(
-                            user_id=user_id
-                        )
-
-                    user_devices = user_devices["devices"]
-                    user_results = results.setdefault(user_id, {})
-                    for device in user_devices:
-                        user_results[device["device_id"]] = device["keys"]
-                    user_ids_updated.append(user_id)
-                except Exception as e:
-                    failures[destination] = _exception_to_failure(e)
-
-            if len(destination_query) == len(user_ids_updated):
-                # We've updated all the users in the query and we do not need to
-                # make any further remote calls.
-                return
-
-            # Remove all the users from the query which we have updated
-            for user_id in user_ids_updated:
-                destination_query.pop(user_id)
-
-            try:
-                remote_result = await self.federation.query_client_keys(
-                    destination, {"device_keys": destination_query}, timeout=timeout
-                )
-
-                for user_id, keys in remote_result["device_keys"].items():
-                    if user_id in destination_query:
-                        results[user_id] = keys
-
-                if "master_keys" in remote_result:
-                    for user_id, key in remote_result["master_keys"].items():
-                        if user_id in destination_query:
-                            cross_signing_keys["master_keys"][user_id] = key
-
-                if "self_signing_keys" in remote_result:
-                    for user_id, key in remote_result["self_signing_keys"].items():
-                        if user_id in destination_query:
-                            cross_signing_keys["self_signing_keys"][user_id] = key
-            except Exception as e:
-                failure = _exception_to_failure(e)
-                failures[destination] = failure
-                set_tag("error", True)
-                set_tag("reason", failure)
-
         await make_deferred_yieldable(
             defer.gatherResults(
                 [
-                    run_in_background(do_remote_query, destination)
-                    for destination in remote_queries_not_in_cache
+                    run_in_background(
+                        self._query_devices_for_destination,
+                        results,
+                        cross_signing_keys,
+                        failures,
+                        destination,
+                        queries,
+                        timeout,
+                    )
+                    for destination, queries in remote_queries_not_in_cache.items()
                 ],
                 consumeErrors=True,
             ).addErrback(unwrapFirstError)
@@ -301,6 +225,121 @@ class E2eKeysHandler:
 
         return ret
 
+    @trace
+    async def _query_devices_for_destination(
+        self,
+        results: JsonDict,
+        cross_signing_keys: JsonDict,
+        failures: Dict[str, JsonDict],
+        destination: str,
+        destination_query: Dict[str, Iterable[str]],
+        timeout: int,
+    ) -> None:
+        """This is called when we are querying the device list of a user on
+        a remote homeserver and their device list is not in the device list
+        cache. If we share a room with this user and we're not querying for
+        specific user we will update the cache with their device list.
+
+        Args:
+            results: A map from user ID to their device keys, which gets
+                updated with the newly fetched keys.
+            cross_signing_keys: Map from user ID to their cross signing keys,
+                which gets updated with the newly fetched keys.
+            failures: Map of destinations to failures that have occurred while
+                attempting to fetch keys.
+            destination: The remote server to query
+            destination_query: The query dict of devices to query the remote
+                server for.
+            timeout: The timeout for remote HTTP requests.
+        """
+        # We first consider whether we wish to update the device list cache with
+        # the users device list. We want to track a user's devices when the
+        # authenticated user shares a room with the queried user and the query
+        # has not specified a particular device.
+        # If we update the cache for the queried user we remove them from further
+        # queries. We use the more efficient batched query_client_keys for all
+        # remaining users
+
+        user_ids_updated = []
+        for (user_id, device_list) in destination_query.items():
+            if user_id in user_ids_updated:
+                continue
+
+            if device_list:
+                continue
+
+            room_ids = await self.store.get_rooms_for_user(user_id)
+            if not room_ids:
+                continue
+
+            # We've decided we're sharing a room with this user and should
+            # probably be tracking their device lists. However, we haven't
+            # done an initial sync on the device list so we do it now.
+            try:
+                if self._is_master:
+                    resync_results = await self.device_handler.device_list_updater.user_device_resync(
+                        user_id
+                    )
+                else:
+                    resync_results = await self._user_device_resync_client(
+                        user_id=user_id
+                    )
+
+                # Add the device keys to the results.
+                user_devices = resync_results["devices"]
+                user_results = results.setdefault(user_id, {})
+                for device in user_devices:
+                    user_results[device["device_id"]] = device["keys"]
+                user_ids_updated.append(user_id)
+
+                # Add any cross signing keys to the results.
+                master_key = resync_results.get("master_key")
+                self_signing_key = resync_results.get("self_signing_key")
+
+                if master_key:
+                    cross_signing_keys["master_keys"][user_id] = master_key
+
+                if self_signing_key:
+                    cross_signing_keys["self_signing_keys"][user_id] = self_signing_key
+            except Exception as e:
+                failures[destination] = _exception_to_failure(e)
+
+        if len(destination_query) == len(user_ids_updated):
+            # We've updated all the users in the query and we do not need to
+            # make any further remote calls.
+            return
+
+        # Remove all the users from the query which we have updated
+        for user_id in user_ids_updated:
+            destination_query.pop(user_id)
+
+        try:
+            remote_result = await self.federation.query_client_keys(
+                destination, {"device_keys": destination_query}, timeout=timeout
+            )
+
+            for user_id, keys in remote_result["device_keys"].items():
+                if user_id in destination_query:
+                    results[user_id] = keys
+
+            if "master_keys" in remote_result:
+                for user_id, key in remote_result["master_keys"].items():
+                    if user_id in destination_query:
+                        cross_signing_keys["master_keys"][user_id] = key
+
+            if "self_signing_keys" in remote_result:
+                for user_id, key in remote_result["self_signing_keys"].items():
+                    if user_id in destination_query:
+                        cross_signing_keys["self_signing_keys"][user_id] = key
+        except Exception as e:
+            failure = _exception_to_failure(e)
+            failures[destination] = failure
+            set_tag("error", True)
+            set_tag("reason", failure)
+
+        return
+
     async def get_cross_signing_keys_from_cache(
         self, query: Iterable[str], from_user_id: Optional[str]
     ) -> Dict[str, Dict[str, dict]]:
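Behaviourally this file is a pure refactor: the `do_remote_query` closure becomes the `_query_devices_for_destination` method, with everything it previously captured from the enclosing scope passed in explicitly, and the per-destination queries are still run in parallel. A minimal model of that fan-out shape in plain asyncio (Synapse itself uses Twisted's `gatherResults` with `run_in_background`; the names below are stand-ins):

    import asyncio
    from typing import Dict, List

    async def query_one_destination(
        results: Dict[str, List[str]],
        failures: Dict[str, str],
        destination: str,
        query: List[str],
    ) -> None:
        """Mutates the shared dicts in place, like the extracted method."""
        try:
            await asyncio.sleep(0)  # stand-in for the federation request
            results[destination] = query
        except Exception as e:
            failures[destination] = str(e)

    async def query_all(queries: Dict[str, List[str]]) -> Dict[str, List[str]]:
        results: Dict[str, List[str]] = {}
        failures: Dict[str, str] = {}
        # One concurrent task per destination, errors collected per destination.
        await asyncio.gather(
            *(
                query_one_destination(results, failures, destination, query)
                for destination, query in queries.items()
            )
        )
        return results

    print(asyncio.run(query_all({"example.org": ["@alice:example.org"]})))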

View File: synapse/handlers/federation_event.py

@@ -477,7 +477,7 @@ class FederationEventHandler:
     @log_function
     async def backfill(
-        self, dest: str, room_id: str, limit: int, extremities: Iterable[str]
+        self, dest: str, room_id: str, limit: int, extremities: Collection[str]
     ) -> None:
         """Trigger a backfill request to `dest` for the given `room_id`
@@ -1643,7 +1643,7 @@ class FederationEventHandler:
             event: the event whose auth_events we want
 
         Returns:
-            all of the events in `event.auth_events`, after deduplication
+            all of the events listed in `event.auth_events_ids`, after deduplication
 
         Raises:
             AuthError if we were unable to fetch the auth_events for any reason.
@@ -1916,7 +1916,7 @@ class FederationEventHandler:
         event_pos = PersistedEventPosition(
             self._instance_name, event.internal_metadata.stream_ordering
         )
-        self._notifier.on_new_room_event(
+        await self._notifier.on_new_room_event(
             event, event_pos, max_stream_token, extra_users=extra_users
         )

View File: synapse/handlers/identity.py

@@ -537,10 +537,6 @@ class IdentityHandler:
         except RequestTimedOutError:
             raise SynapseError(500, "Timed out contacting identity server")
 
-        # It is already checked that public_baseurl is configured since this code
-        # should only be used if account_threepid_delegate_msisdn is true.
-        assert self.hs.config.server.public_baseurl
-
         # we need to tell the client to send the token back to us, since it doesn't
         # otherwise know where to send it, so add submit_url response parameter
         # (see also MSC2078)
@@ -879,6 +875,8 @@ class IdentityHandler:
         }
 
         if room_type is not None:
+            invite_config["room_type"] = room_type
+            # TODO The unstable field is deprecated and should be removed in the future.
             invite_config["org.matrix.msc3288.room_type"] = room_type
 
         # If a custom web client location is available, include it in the request.
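The second hunk above is the usual MSC graduation pattern: once a field name is stabilised, both the stable name and the `org.matrix.mscXXXX`-prefixed alias are written so that older consumers keep working until the alias is removed. In isolation (a sketch of the pattern, not Synapse's handler):

    from typing import Any, Dict, Optional

    def build_invite_config(room_type: Optional[str]) -> Dict[str, Any]:
        """Write a newly stabilised field under its stable and unstable names."""
        invite_config: Dict[str, Any] = {}
        if room_type is not None:
            invite_config["room_type"] = room_type
            # Unstable alias kept for older recipients; to be dropped later.
            invite_config["org.matrix.msc3288.room_type"] = room_type
        return invite_config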

View File: synapse/handlers/message.py

@@ -1320,6 +1320,8 @@ class EventCreationHandler:
         # user is actually admin or not).
         is_admin_redaction = False
         if event.type == EventTypes.Redaction:
+            assert event.redacts is not None
+
             original_event = await self.store.get_event(
                 event.redacts,
                 redact_behaviour=EventRedactBehaviour.AS_IS,
@@ -1418,6 +1420,8 @@ class EventCreationHandler:
         )
 
         if event.type == EventTypes.Redaction:
+            assert event.redacts is not None
+
             original_event = await self.store.get_event(
                 event.redacts,
                 redact_behaviour=EventRedactBehaviour.AS_IS,
@@ -1505,11 +1509,13 @@ class EventCreationHandler:
                 next_batch_id = event.content.get(
                     EventContentFields.MSC2716_NEXT_BATCH_ID
                 )
-                conflicting_insertion_event_id = (
-                    await self.store.get_insertion_event_by_batch_id(
-                        event.room_id, next_batch_id
+                conflicting_insertion_event_id = None
+                if next_batch_id:
+                    conflicting_insertion_event_id = (
+                        await self.store.get_insertion_event_id_by_batch_id(
+                            event.room_id, next_batch_id
+                        )
                     )
-                )
                 if conflicting_insertion_event_id is not None:
                     # The current insertion event that we're processing is invalid
                     # because an insertion event already exists in the room with the
@@ -1542,13 +1548,16 @@ class EventCreationHandler:
         # If there's an expiry timestamp on the event, schedule its expiry.
         self._message_handler.maybe_schedule_expiry(event)
 
-        def _notify() -> None:
+        async def _notify() -> None:
             try:
-                self.notifier.on_new_room_event(
+                await self.notifier.on_new_room_event(
                     event, event_pos, max_stream_token, extra_users=extra_users
                 )
             except Exception:
-                logger.exception("Error notifying about new room event")
+                logger.exception(
+                    "Error notifying about new room event %s",
+                    event.event_id,
+                )
 
         run_in_background(_notify)
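`_notify` turns into a coroutine above because `notifier.on_new_room_event` is now awaitable, but it is still launched without the caller waiting on it. A simplified model of that fire-and-forget shape with per-event error logging, using plain asyncio tasks in place of `run_in_background`:

    import asyncio
    import logging

    logger = logging.getLogger(__name__)

    async def on_new_room_event(event_id: str) -> None:
        await asyncio.sleep(0)  # stand-in for the (now async) notifier call

    async def _notify(event_id: str) -> None:
        try:
            await on_new_room_event(event_id)
        except Exception:
            # Include the event ID so a failure is traceable to one event.
            logger.exception("Error notifying about new room event %s", event_id)

    async def persist_event(event_id: str) -> None:
        # ... persist the event ...
        # Fire and forget: the caller returns without awaiting the notification.
        asyncio.create_task(_notify(event_id))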

View File: synapse/handlers/pagination.py

@@ -438,7 +438,7 @@ class PaginationHandler:
             }
 
         state = None
-        if event_filter and event_filter.lazy_load_members() and len(events) > 0:
+        if event_filter and event_filter.lazy_load_members and len(events) > 0:
             # TODO: remove redundant members
             # FIXME: we also care about invite targets etc.

View File: synapse/handlers/presence.py

@@ -52,6 +52,7 @@ import synapse.metrics
 from synapse.api.constants import EventTypes, Membership, PresenceState
 from synapse.api.errors import SynapseError
 from synapse.api.presence import UserPresenceState
+from synapse.appservice import ApplicationService
 from synapse.events.presence_router import PresenceRouter
 from synapse.logging.context import run_in_background
 from synapse.logging.utils import log_function
@@ -1551,6 +1552,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
         is_guest: bool = False,
         explicit_room_id: Optional[str] = None,
         include_offline: bool = True,
+        service: Optional[ApplicationService] = None,
     ) -> Tuple[List[UserPresenceState], int]:
         # The process for getting presence events are:
         #  1. Get the rooms the user is in.

View File: synapse/handlers/profile.py

@@ -456,7 +456,11 @@ class ProfileHandler:
                 continue
 
             new_name = profile.get("displayname")
+            if not isinstance(new_name, str):
+                new_name = None
             new_avatar = profile.get("avatar_url")
+            if not isinstance(new_avatar, str):
+                new_avatar = None
 
             # We always hit update to update the last_check timestamp
             await self.store.update_remote_profile_cache(user_id, new_name, new_avatar)
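The added `isinstance` checks treat remote profile data as untrusted: a federated homeserver may return any JSON value for `displayname` or `avatar_url`, so anything that is not a string is coerced to `None` before the cache update. The same guard in isolation (a sketch):

    from typing import Any, Dict, Optional, Tuple

    def sanitise_remote_profile(
        profile: Dict[str, Any]
    ) -> Tuple[Optional[str], Optional[str]]:
        """Coerce untrusted remote profile fields to str-or-None before caching."""
        new_name = profile.get("displayname")
        if not isinstance(new_name, str):
            new_name = None  # e.g. a misbehaving server returned a dict or an int
        new_avatar = profile.get("avatar_url")
        if not isinstance(new_avatar, str):
            new_avatar = None
        return new_name, new_avatar

    assert sanitise_remote_profile({"displayname": 42}) == (None, None)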

View File: synapse/handlers/room.py

@@ -525,7 +525,7 @@ class RoomCreationHandler:
         ):
             await self.room_member_handler.update_membership(
                 requester,
-                UserID.from_string(old_event["state_key"]),
+                UserID.from_string(old_event.state_key),
                 new_room_id,
                 "ban",
                 ratelimit=False,
@@ -1183,7 +1183,7 @@ class RoomContextHandler:
         else:
             last_event_id = event_id
 
-        if event_filter and event_filter.lazy_load_members():
+        if event_filter and event_filter.lazy_load_members:
             state_filter = StateFilter.from_lazy_load_member_list(
                 ev.sender
                 for ev in itertools.chain(

View File: synapse/handlers/room_batch.py

@@ -357,7 +357,7 @@ class RoomBatchHandler:
         for (event, context) in reversed(events_to_persist):
             await self.event_creation_handler.handle_new_client_event(
                 await self.create_requester_for_user_id_from_app_service(
-                    event["sender"], app_service_requester.app_service
+                    event.sender, app_service_requester.app_service
                 ),
                 event=event,
                 context=context,

View File: synapse/handlers/room_member.py

@@ -1669,7 +1669,9 @@ class RoomMemberMasterHandler(RoomMemberHandler):
         #
         # the prev_events consist solely of the previous membership event.
         prev_event_ids = [previous_membership_event.event_id]
-        auth_event_ids = previous_membership_event.auth_event_ids() + prev_event_ids
+        auth_event_ids = (
+            list(previous_membership_event.auth_event_ids()) + prev_event_ids
+        )
 
         event, context = await self.event_creation_handler.create_event(
             requester,
View File: synapse/handlers/search.py

@@ -249,7 +249,7 @@ class SearchHandler:
             )
 
             events.sort(key=lambda e: -rank_map[e.event_id])
-            allowed_events = events[: search_filter.limit()]
+            allowed_events = events[: search_filter.limit]
 
             for e in allowed_events:
                 rm = room_groups.setdefault(
@@ -271,13 +271,13 @@ class SearchHandler:
             # We keep looping and we keep filtering until we reach the limit
             # or we run out of things.
             # But only go around 5 times since otherwise synapse will be sad.
-            while len(room_events) < search_filter.limit() and i < 5:
+            while len(room_events) < search_filter.limit and i < 5:
                 i += 1
 
                 search_result = await self.store.search_rooms(
                     room_ids,
                     search_term,
                     keys,
-                    search_filter.limit() * 2,
+                    search_filter.limit * 2,
                     pagination_token=pagination_token,
                 )
@@ -299,9 +299,9 @@ class SearchHandler:
                 )
 
                 room_events.extend(events)
-                room_events = room_events[: search_filter.limit()]
+                room_events = room_events[: search_filter.limit]
 
-                if len(results) < search_filter.limit() * 2:
+                if len(results) < search_filter.limit * 2:
                     pagination_token = None
                     break
                 else:
@@ -311,7 +311,7 @@ class SearchHandler:
                 group = room_groups.setdefault(event.room_id, {"results": []})
                 group["results"].append(event.event_id)
 
-            if room_events and len(room_events) >= search_filter.limit():
+            if room_events and len(room_events) >= search_filter.limit:
                 last_event_id = room_events[-1].event_id
                 pagination_token = results_map[last_event_id]["pagination_token"]
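Every change in this file is the same mechanical rewrite: `search_filter.limit` (like `lazy_load_members` in the pagination and room hunks above) went from a method to a plain attribute or property, so the call parentheses are dropped at each use site. A sketch of the pattern on a simplified filter class (not Synapse's actual `Filter`):

    from typing import Any, Dict

    class Filter:
        def __init__(self, filter_json: Dict[str, Any]) -> None:
            self.filter_json = filter_json
            # Computed once at construction instead of on every call.
            self.limit: int = filter_json.get("limit", 10)
            self.lazy_load_members: bool = filter_json.get("lazy_load_members", False)

    f = Filter({"limit": 20})
    assert f.limit == 20  # attribute access, no trailing ()
    assert not f.lazy_load_members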

Some files were not shown because too many files have changed in this diff.