Merge remote-tracking branch 'upstream/release-v1.31.0'

Tulir Asokan 2021-03-30 14:03:23 +03:00
commit 7def4428cf
172 changed files with 2588 additions and 787 deletions

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # this script is run by buildkite in a plain `xenial` container; it installs the
 # minimal requirements for tox and hands over to the py35-old tox environment.

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
 # with additional dependencies needed for the test (such as coverage or the PostgreSQL

View file

@@ -1,3 +1,71 @@
+Synapse 1.31.0rc1 (2021-03-30)
+==============================
+
+**Note:** As announced in v1.25.0, and in line with the deprecation policy for platform dependencies, this is the last release to support Python 3.5 and PostgreSQL 9.5. Future versions of Synapse will require Python 3.6+ and PostgreSQL 9.6+.
+
+This is also the last release that the Synapse team will be publishing packages for Debian Stretch and Ubuntu Xenial.
+
+
+Features
+--------
+
+- Add support to OpenID Connect login for requiring attributes on the `userinfo` response. Contributed by Hubbe King. ([\#9609](https://github.com/matrix-org/synapse/issues/9609))
+- Add initial experimental support for a "space summary" API. ([\#9643](https://github.com/matrix-org/synapse/issues/9643), [\#9652](https://github.com/matrix-org/synapse/issues/9652), [\#9653](https://github.com/matrix-org/synapse/issues/9653))
+- Add support for the busy presence state as described in [MSC3026](https://github.com/matrix-org/matrix-doc/pull/3026). ([\#9644](https://github.com/matrix-org/synapse/issues/9644))
+- Add support for credentials for proxy authentication in the `HTTPS_PROXY` environment variable. ([\#9657](https://github.com/matrix-org/synapse/issues/9657))
+
+
+Bugfixes
+--------
+
+- Fix a longstanding bug that could cause issues when editing a reply to a message. ([\#9585](https://github.com/matrix-org/synapse/issues/9585))
+- Fix the `/capabilities` endpoint to return `m.change_password` as disabled if the local password database is not used for authentication. Contributed by @dklimpel. ([\#9588](https://github.com/matrix-org/synapse/issues/9588))
+- Check if local passwords are enabled before setting them for the user. ([\#9636](https://github.com/matrix-org/synapse/issues/9636))
+- Fix a bug where federation sending can stall due to `concurrent access` database exceptions when it falls behind. ([\#9639](https://github.com/matrix-org/synapse/issues/9639))
+- Fix a bug introduced in Synapse 1.30.1 which meant the suggested `pip` incantation to install an updated `cryptography` was incorrect. ([\#9699](https://github.com/matrix-org/synapse/issues/9699))
+
+
+Updates to the Docker image
+---------------------------
+
+- Speed up Docker builds and make it nicer to test against Complement while developing (install all dependencies before copying the project). ([\#9610](https://github.com/matrix-org/synapse/issues/9610))
+- Include [opencontainers labels](https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys) in the Docker image. ([\#9612](https://github.com/matrix-org/synapse/issues/9612))
+
+
+Improved Documentation
+----------------------
+
+- Clarify that `register_new_matrix_user` is present also when installed via non-pip package. ([\#9074](https://github.com/matrix-org/synapse/issues/9074))
+- Update source install documentation to mention platform prerequisites before the source install steps. ([\#9667](https://github.com/matrix-org/synapse/issues/9667))
+- Improve worker documentation for fallback/web auth endpoints. ([\#9679](https://github.com/matrix-org/synapse/issues/9679))
+- Update the sample configuration for OIDC authentication. ([\#9695](https://github.com/matrix-org/synapse/issues/9695))
+
+
+Internal Changes
+----------------
+
+- Preparatory steps for removing redundant `outlier` data from `event_json.internal_metadata` column. ([\#9411](https://github.com/matrix-org/synapse/issues/9411))
+- Add type hints to the caching module. ([\#9442](https://github.com/matrix-org/synapse/issues/9442))
+- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9499](https://github.com/matrix-org/synapse/issues/9499), [\#9659](https://github.com/matrix-org/synapse/issues/9659))
+- Add additional type hints to the Homeserver object. ([\#9631](https://github.com/matrix-org/synapse/issues/9631), [\#9638](https://github.com/matrix-org/synapse/issues/9638), [\#9675](https://github.com/matrix-org/synapse/issues/9675), [\#9681](https://github.com/matrix-org/synapse/issues/9681))
+- Only save remote cross-signing and device keys if they're different from the current ones. ([\#9634](https://github.com/matrix-org/synapse/issues/9634))
+- Rename storage function to fix spelling and not conflict with another function's name. ([\#9637](https://github.com/matrix-org/synapse/issues/9637))
+- Improve performance of federation catch up by sending the latest events in the room to the remote, rather than just the last event sent by the local server. ([\#9640](https://github.com/matrix-org/synapse/issues/9640), [\#9664](https://github.com/matrix-org/synapse/issues/9664))
+- In the `federation_client` commandline client, stop automatically adding the URL prefix, so that servlets on other prefixes can be tested. ([\#9645](https://github.com/matrix-org/synapse/issues/9645))
+- In the `federation_client` commandline client, handle inline `signing_key`s in `homeserver.yaml`. ([\#9647](https://github.com/matrix-org/synapse/issues/9647))
+- Fixed some antipattern issues to improve code quality. ([\#9649](https://github.com/matrix-org/synapse/issues/9649))
+- Add a storage method for pulling all current user presence state from the database. ([\#9650](https://github.com/matrix-org/synapse/issues/9650))
+- Import `HomeServer` from the proper module. ([\#9665](https://github.com/matrix-org/synapse/issues/9665))
+- Increase default join ratelimiting burst rate. ([\#9674](https://github.com/matrix-org/synapse/issues/9674))
+- Add type hints to third party event rules and visibility modules. ([\#9676](https://github.com/matrix-org/synapse/issues/9676))
+- Bump mypy-zope to 0.2.13 to fix "Cannot determine consistent method resolution order (MRO)" errors when running mypy a second time. ([\#9678](https://github.com/matrix-org/synapse/issues/9678))
+- Use interpreter from `$PATH` via `/usr/bin/env` instead of absolute paths in various scripts. ([\#9689](https://github.com/matrix-org/synapse/issues/9689))
+- Make it possible to use `dmypy`. ([\#9692](https://github.com/matrix-org/synapse/issues/9692))
+- Suppress "CryptographyDeprecationWarning: int_from_bytes is deprecated". ([\#9698](https://github.com/matrix-org/synapse/issues/9698))
+- Use `dmypy run` in lint script for improved performance in type-checking while developing. ([\#9701](https://github.com/matrix-org/synapse/issues/9701))
+- Fix undetected mypy error when using Python 3.6. ([\#9703](https://github.com/matrix-org/synapse/issues/9703))
+- Fix type-checking CI on develop. ([\#9709](https://github.com/matrix-org/synapse/issues/9709))
+
+
 Synapse 1.30.1 (2021-03-26)
 ===========================
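
To illustrate the proxy-authentication feature above (#9657): credentials embedded in the standard `HTTPS_PROXY` environment variable are now used to authenticate to the proxy. A minimal sketch — the proxy host, port, and credentials below are placeholders, not taken from this commit:

```sh
# Hypothetical forward proxy with basic-auth credentials; Synapse reads
# this from the environment for outbound HTTPS requests.
export HTTPS_PROXY=http://someuser:somepassword@proxy.example.com:3128
synctl start
```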

View file

@@ -6,7 +6,7 @@ There are 3 steps to follow under **Installation Instructions**.
 - [Choosing your server name](#choosing-your-server-name)
 - [Installing Synapse](#installing-synapse)
   - [Installing from source](#installing-from-source)
-    - [Platform-Specific Instructions](#platform-specific-instructions)
+    - [Platform-specific prerequisites](#platform-specific-prerequisites)
       - [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
       - [ArchLinux](#archlinux)
       - [CentOS/Fedora](#centosfedora)
@@ -60,17 +60,14 @@ that your email address is probably `user@example.com` rather than
 (Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
 
+When installing from source please make sure that the [Platform-specific prerequisites](#platform-specific-prerequisites) are already installed.
+
 System requirements:
 
 - POSIX-compliant system (tested on Linux & OS X)
 - Python 3.5.2 or later, up to Python 3.9.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
 
-Synapse is written in Python but some of the libraries it uses are written in
-C. So before we can install Synapse itself we need a working C compiler and the
-header files for Python C extensions. See [Platform-Specific
-Instructions](#platform-specific-instructions) for information on installing
-these on various platforms.
-
 To install the Synapse homeserver run:
@@ -128,7 +125,11 @@ source env/bin/activate
 synctl start
 ```
 
-#### Platform-Specific Instructions
+#### Platform-specific prerequisites
+
+Synapse is written in Python but some of the libraries it uses are written in
+C. So before we can install Synapse itself we need a working C compiler and the
+header files for Python C extensions.
 
 ##### Debian/Ubuntu/Raspbian
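
As a concrete illustration of the prerequisites paragraph just added: on Debian-based systems the compiler and header files typically come from packages along these lines (a sketch based on the Debian/Ubuntu section of INSTALL.md; package names may differ on other releases):

```sh
# Assumed Debian/Ubuntu package set: a C toolchain plus the Python and
# library headers needed to build Synapse's C extensions.
sudo apt install build-essential python3-dev libffi-dev \
    python3-pip python3-setuptools sqlite3 \
    libssl-dev virtualenv libjpeg-dev libxslt1-dev
```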
@@ -526,14 +527,24 @@ email will be disabled.
 The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
 
-Alternatively you can do so from the command line if you have installed via pip.
+Alternatively, you can do so from the command line. This can be done as follows:
 
-This can be done as follows:
-
-```sh
-$ source ~/synapse/env/bin/activate
-$ synctl start # if not already running
-$ register_new_matrix_user -c homeserver.yaml http://localhost:8008
+1. If synapse was installed via pip, activate the virtualenv as follows (if Synapse was
+   installed via a prebuilt package, `register_new_matrix_user` should already be
+   on the search path):
+   ```sh
+   cd ~/synapse
+   source env/bin/activate
+   synctl start # if not already running
+   ```
+2. Run the following command:
+   ```sh
+   register_new_matrix_user -c homeserver.yaml http://localhost:8008
+   ```
+
+This will prompt you to add details for the new user, and will then connect to
+the running Synapse to create the new user. For example:
+```
 New user localpart: erikj
 Password:
 Confirm password:

View file

@@ -98,9 +98,12 @@ will log a warning on each received request.
 To avoid the warning, administrators using a reverse proxy should ensure that
 the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
-indicate the protocol used by the client. See the `reverse proxy documentation
-<docs/reverse_proxy.md>`_, where the example configurations have been updated to
-show how to set this header.
+indicate the protocol used by the client.
+
+Synapse also requires the `Host` header to be preserved.
+
+See the `reverse proxy documentation <docs/reverse_proxy.md>`_, where the
+example configurations have been updated to show how to set these headers.
 
 (Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
 sets `X-Forwarded-Proto` by default.)
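
A minimal Apache vhost satisfying both requirements might look like the sketch below; it mirrors the `docs/reverse_proxy.md` changes later in this commit (addresses and ports are the documentation defaults, not a recommendation):

```
<VirtualHost *:443>
    SSLEngine on
    ServerName matrix.example.com

    # Preserve the original Host header and advertise the client's protocol
    ProxyPreserveHost on
    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}

    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
```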

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # this script will use the api:
 # https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 DOMAIN=yourserver.tld
 # add this user as admin in your home server:

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 set -e

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 DIR="$( cd "$( dirname "$0" )" && pwd )"

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 DIR="$( cd "$( dirname "$0" )" && pwd )"

View file

@@ -18,44 +18,47 @@ ARG PYTHON_VERSION=3.8
 ###
 FROM docker.io/python:${PYTHON_VERSION}-slim as builder
 
+LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
+LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
+LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
+LABEL org.opencontainers.image.licenses='Apache-2.0'
+
 # install the OS build deps
 RUN apt-get update && apt-get install -y \
     build-essential \
    libffi-dev \
     libjpeg-dev \
     libpq-dev \
     libssl-dev \
     libwebp-dev \
     libxml++2.6-dev \
     libxslt1-dev \
     openssl \
     rustc \
     zlib1g-dev \
  && rm -rf /var/lib/apt/lists/*
 
-# Build dependencies that are not available as wheels, to speed up rebuilds
-RUN pip install --prefix="/install" --no-warn-script-location \
-    cryptography \
-    frozendict \
-    jaeger-client \
-    opentracing \
-    # Match the version constraints of Synapse
-    "prometheus_client>=0.4.0" \
-    psycopg2 \
-    pycparser \
-    pyrsistent \
-    pyyaml \
-    simplejson \
-    threadloop \
-    thrift
-
-# now install synapse and all of the python deps to /install.
-COPY synapse /synapse/synapse/
+# Copy just what we need to pip install
 COPY scripts /synapse/scripts/
 COPY MANIFEST.in README.rst setup.py synctl /synapse/
+COPY synapse/__init__.py /synapse/synapse/__init__.py
+COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
+
+# To speed up rebuilds, install all of the dependencies before we copy over
+# the whole synapse project, so that this layer in the Docker cache can be
+# used while you develop on the source
+#
+# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
 RUN pip install --prefix="/install" --no-warn-script-location \
     /synapse[all]
 
+# Copy over the rest of the project
+COPY synapse /synapse/synapse/
+
+# Install the synapse package itself and all of its children packages.
+#
+# This is aiming at installing only the `packages=find_packages(...)` from `setup.py`
+RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
+
 ###
 ### Stage 1: runtime
@@ -64,16 +67,16 @@ RUN pip install --prefix="/install" --no-warn-script-location \
 FROM docker.io/python:${PYTHON_VERSION}-slim
 
 RUN apt-get update && apt-get install -y \
     curl \
     gosu \
     libjpeg62-turbo \
     libpq5 \
     libwebp6 \
     xmlsec1 \
     libjemalloc2 \
     libssl-dev \
     openssl \
  && rm -rf /var/lib/apt/lists/*
 
 COPY --from=builder /install /usr/local
 COPY ./docker/start.py /start.py
@@ -86,4 +89,4 @@ EXPOSE 8008/tcp 8009/tcp 8448/tcp
 ENTRYPOINT ["/start.py"]
 
 HEALTHCHECK --interval=1m --timeout=5s \
     CMD curl -fSs http://localhost:8008/health || exit 1

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # The script to build the Debian package, as run inside the Docker image.

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # This script runs the PostgreSQL tests inside a Docker container. It expects
 # the relevant source files to be mounted into /src (done automatically by the

View file

@@ -104,10 +104,11 @@ example.com:8448 {
 ```
 <VirtualHost *:443>
     SSLEngine on
-    ServerName matrix.example.com;
+    ServerName matrix.example.com
 
     RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
     AllowEncodedSlashes NoDecode
+    ProxyPreserveHost on
     ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
     ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
     ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
@@ -116,7 +117,7 @@ example.com:8448 {
 
 <VirtualHost *:8448>
     SSLEngine on
-    ServerName example.com;
+    ServerName example.com
 
     RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
     AllowEncodedSlashes NoDecode
@@ -135,6 +136,8 @@ example.com:8448 {
 </IfModule>
 ```
 
+**NOTE 3**: Missing `ProxyPreserveHost on` can lead to a redirect loop.
+
 ### HAProxy
 
 ```

View file

@@ -869,10 +869,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 #rc_joins:
 #  local:
 #    per_second: 0.1
-#    burst_count: 3
+#    burst_count: 10
 #  remote:
 #    per_second: 0.01
-#    burst_count: 3
+#    burst_count: 10
 #
 #rc_3pid_validation:
 #  per_second: 0.003
@@ -1758,6 +1758,9 @@ saml2_config:
 #       Note that, if this is changed, users authenticating via that provider
 #       will no longer be recognised as the same user!
 #
+#       (Use "oidc" here if you are migrating from an old "oidc_config"
+#       configuration.)
+#
 #   idp_name: A user-facing name for this identity provider, which is used to
 #       offer the user a choice of login mechanisms.
 #
@@ -1873,6 +1876,24 @@ saml2_config:
 #       which is set to the claims returned by the UserInfo Endpoint and/or
 #       in the ID Token.
 #
+#       It is possible to configure Synapse to only allow logins if certain attributes
+#       match particular values in the OIDC userinfo. The requirements can be listed under
+#       `attribute_requirements` as shown below. All of the listed attributes must
+#       match for the login to be permitted. Additional attributes can be added to
+#       userinfo by expanding the `scopes` section of the OIDC config to retrieve
+#       additional information from the OIDC provider.
+#
+#       If the OIDC claim is a list, then the attribute must match any value in the list.
+#       Otherwise, it must exactly match the value of the claim. Using the example
+#       below, the `family_name` claim MUST be "Stephensson", but the `groups`
+#       claim MUST contain "admin".
+#
+#       attribute_requirements:
+#         - attribute: family_name
+#           value: "Stephensson"
+#         - attribute: groups
+#           value: "admin"
+#
 # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
 # for information on how to configure these options.
 #
@@ -1905,34 +1926,9 @@ oidc_providers:
 #      localpart_template: "{{ user.login }}"
 #      display_name_template: "{{ user.name }}"
 #      email_template: "{{ user.email }}"
-
-# For use with Keycloak
-#
-#- idp_id: keycloak
-#  idp_name: Keycloak
-#  issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
-#  client_id: "synapse"
-#  client_secret: "copy secret generated in Keycloak UI"
-#  scopes: ["openid", "profile"]
-
-# For use with Github
-#
-#- idp_id: github
-#  idp_name: Github
-#  idp_brand: github
-#  discover: false
-#  issuer: "https://github.com/"
-#  client_id: "your-client-id" # TO BE FILLED
-#  client_secret: "your-client-secret" # TO BE FILLED
-#  authorization_endpoint: "https://github.com/login/oauth/authorize"
-#  token_endpoint: "https://github.com/login/oauth/access_token"
-#  userinfo_endpoint: "https://api.github.com/user"
-#  scopes: ["read:user"]
-#  user_mapping_provider:
-#    config:
-#      subject_claim: "id"
-#      localpart_template: "{{ user.login }}"
-#      display_name_template: "{{ user.name }}"
+#  attribute_requirements:
+#    - attribute: userGroup
+#      value: "synapseUsers"
 
 # Enable Central Authentication Service (CAS) for registration and login.

View file

@@ -232,7 +232,6 @@ expressions:
 
     # Registration/login requests
     ^/_matrix/client/(api/v1|r0|unstable)/login$
     ^/_matrix/client/(r0|unstable)/register$
-    ^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
 
     # Event sending requests
     ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact

View file

@@ -1,12 +1,13 @@
 [mypy]
 namespace_packages = True
 plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
-follow_imports = silent
+follow_imports = normal
 check_untyped_defs = True
 show_error_codes = True
 show_traceback = True
 mypy_path = stubs
 warn_unreachable = True
+local_partial_types = True
 
 # To find all folders that pass mypy you run:
 #
@@ -20,8 +21,9 @@ files =
   synapse/crypto,
   synapse/event_auth.py,
   synapse/events/builder.py,
-  synapse/events/validator.py,
   synapse/events/spamcheck.py,
+  synapse/events/third_party_rules.py,
+  synapse/events/validator.py,
   synapse/federation,
   synapse/groups,
   synapse/handlers,
@@ -38,6 +40,7 @@ files =
   synapse/push,
   synapse/replication,
   synapse/rest,
+  synapse/secrets.py,
   synapse/server.py,
   synapse/server_notices,
   synapse/spam_checker_api,
@@ -71,6 +74,7 @@ files =
   synapse/util/metrics.py,
   synapse/util/macaroons.py,
   synapse/util/stringutils.py,
+  synapse/visibility.py,
   tests/replication,
   tests/test_utils,
   tests/handlers/test_password_providers.py,

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # A script which checks that an appropriate news file has been added on this
 # branch.

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 # Find linting errors in Synapse's default config file.
 # Exits with 0 if there are no problems, or another code otherwise.

View file

@@ -22,8 +22,8 @@ import sys
 from typing import Any, Optional
 from urllib import parse as urlparse
 
-import nacl.signing
 import requests
+import signedjson.key
 import signedjson.types
 import srvlookup
 import yaml
@@ -44,18 +44,6 @@ def encode_base64(input_bytes):
     return output_string
 
 
-def decode_base64(input_string):
-    """Decode a base64 string to bytes inferring padding from the length of the
-    string."""
-
-    input_bytes = input_string.encode("ascii")
-    input_len = len(input_bytes)
-    padding = b"=" * (3 - ((input_len + 3) % 4))
-    output_len = 3 * ((input_len + 2) // 4) + (input_len + 2) % 4 - 2
-    output_bytes = base64.b64decode(input_bytes + padding)
-    return output_bytes[:output_len]
-
-
 def encode_canonical_json(value):
     return json.dumps(
         value,
@@ -88,42 +76,6 @@ def sign_json(
     return json_object
 
 
-NACL_ED25519 = "ed25519"
-
-
-def decode_signing_key_base64(algorithm, version, key_base64):
-    """Decode a base64 encoded signing key
-
-    Args:
-        algorithm (str): The algorithm the key is for (currently "ed25519").
-        version (str): Identifies this key out of the keys for this entity.
-        key_base64 (str): Base64 encoded bytes of the key.
-
-    Returns:
-        A SigningKey object.
-    """
-    if algorithm == NACL_ED25519:
-        key_bytes = decode_base64(key_base64)
-        key = nacl.signing.SigningKey(key_bytes)
-        key.version = version
-        key.alg = NACL_ED25519
-        return key
-    else:
-        raise ValueError("Unsupported algorithm %s" % (algorithm,))
-
-
-def read_signing_keys(stream):
-    """Reads a list of keys from a stream
-
-    Args:
-        stream : A stream to iterate for keys.
-
-    Returns:
-        list of SigningKey objects.
-    """
-    keys = []
-    for line in stream:
-        algorithm, version, key_base64 = line.split()
-        keys.append(decode_signing_key_base64(algorithm, version, key_base64))
-    return keys
-
-
 def request(
     method: Optional[str],
     origin_name: str,
@@ -223,23 +175,28 @@ def main():
     parser.add_argument("--body", help="Data to send as the body of the HTTP request")
 
     parser.add_argument(
-        "path", help="request path. We will add '/_matrix/federation/v1/' to this."
+        "path", help="request path, including the '/_matrix/federation/...' prefix."
     )
 
     args = parser.parse_args()
 
-    if not args.server_name or not args.signing_key_path:
+    args.signing_key = None
+    if args.signing_key_path:
+        with open(args.signing_key_path) as f:
+            args.signing_key = f.readline()
+
+    if not args.server_name or not args.signing_key:
         read_args_from_config(args)
 
-    with open(args.signing_key_path) as f:
-        key = read_signing_keys(f)[0]
+    algorithm, version, key_base64 = args.signing_key.split()
+    key = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64)
 
     result = request(
         args.method,
         args.server_name,
         key,
         args.destination,
-        "/_matrix/federation/v1/" + args.path,
+        args.path,
         content=args.body,
     )
@@ -255,10 +212,16 @@ def main():
 def read_args_from_config(args):
     with open(args.config, "r") as fh:
         config = yaml.safe_load(fh)
 
         if not args.server_name:
             args.server_name = config["server_name"]
-        if not args.signing_key_path:
-            args.signing_key_path = config["signing_key_path"]
+
+        if not args.signing_key:
+            if "signing_key" in config:
+                args.signing_key = config["signing_key"]
+            else:
+                with open(config["signing_key_path"]) as f:
+                    args.signing_key = f.readline()
 
 
 class MatrixConnectionAdapter(HTTPAdapter):
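
The net effect of these hunks is that `federation_client.py` now accepts the signing key either inline or via a path in `homeserver.yaml`. A sketch of the two accepted forms (the key material below is a placeholder, not a real key):

```yaml
# Option 1: point at a key file (the pre-existing behaviour).
signing_key_path: "/data/example.com.signing.key"

# Option 2: embed the key inline; the script splits the value into
# "<algorithm> <version> <base64 key>", the same format as a key file line.
signing_key: "ed25519 a_fake_key_id bm90LWEtcmVhbC1rZXk"
```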

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # Update/check the docs/sample_config.yaml

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # Runs linting scripts over the local Synapse checkout
 # isort - sorts import statements
@@ -95,4 +95,4 @@ isort "${files[@]}"
 python3 -m black "${files[@]}"
 ./scripts-dev/config-lint.sh
 flake8 "${files[@]}"
-mypy
+dmypy run

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # This script generates SQL files for creating a brand new Synapse DB with the latest
 # schema, on both SQLite3 and Postgres.

View file

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 
 set -e

View file

@@ -51,7 +51,7 @@ def main(src_repo, dest_repo):
         parts = line.split("|")
         if len(parts) != 2:
             print("Unable to parse input line %s" % line, file=sys.stderr)
-            exit(1)
+            sys.exit(1)
 
         move_media(parts[0], parts[1], src_paths, dest_paths)

View file

@@ -18,7 +18,8 @@ ignore =
 # E203: whitespace before ':' (which is contrary to pep8?)
 # E731: do not assign a lambda expression, use a def
 # E501: Line too long (black enforces this for us)
-ignore=W503,W504,E203,E731,E501
+# B00*: Subsection of the bugbear suite (TODO: add in remaining fixes)
+ignore=W503,W504,E203,E731,E501,B006,B007,B008
 
 [isort]
 line_length = 88
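
For context on the three `B00*` codes now suppressed: they are flake8-bugbear's mutable-default and loop-variable checks (B006: mutable argument default, B007: unused loop control variable, B008: function call in argument default). A hypothetical snippet, not from Synapse, that would trip B006:

```python
# B006: the mutable default list is created once and shared across calls,
# so it keeps growing between invocations - almost always a bug.
def add_item(item, items=[]):
    items.append(item)
    return items
```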

View file

@@ -99,10 +99,11 @@ CONDITIONAL_REQUIREMENTS["lint"] = [
     "isort==5.7.0",
     "black==20.8b1",
     "flake8-comprehensions",
+    "flake8-bugbear",
     "flake8",
 ]
 
-CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
+CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.13"]
 
 # Dependencies which are exclusively required by unit test code. This is
 # NOT a list of all modules that are necessary to run the unit tests.

View file

@@ -48,7 +48,7 @@ try:
 except ImportError:
     pass
 
-__version__ = "1.30.1"
+__version__ = "1.31.0rc1"
 
 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
View file

@@ -558,6 +558,9 @@ class Auth:
         Returns:
             bool: False if no access_token was given, True otherwise.
         """
+        # This will always be set by the time Twisted calls us.
+        assert request.args is not None
+
         query_params = request.args.get(b"access_token")
         auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
         return bool(query_params) or bool(auth_headers)
@@ -574,6 +577,8 @@ class Auth:
             MissingClientTokenError: If there isn't a single access_token in the
             request
         """
+        # This will always be set by the time Twisted calls us.
+        assert request.args is not None
         auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
         query_params = request.args.get(b"access_token")

View file

@@ -51,6 +51,7 @@ class PresenceState:
     OFFLINE = "offline"
     UNAVAILABLE = "unavailable"
     ONLINE = "online"
+    BUSY = "org.matrix.msc3026.busy"
 
 
 class JoinRules:
@@ -100,6 +101,9 @@ class EventTypes:
 
     Dummy = "org.matrix.dummy_event"
 
+    MSC1772_SPACE_CHILD = "org.matrix.msc1772.space.child"
+    MSC1772_SPACE_PARENT = "org.matrix.msc1772.space.parent"
+
 
 class EduTypes:
     Presence = "m.presence"
@@ -160,6 +164,9 @@ class EventContentFields:
     # cf https://github.com/matrix-org/matrix-doc/pull/2228
     SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
 
+    # cf https://github.com/matrix-org/matrix-doc/pull/1772
+    MSC1772_ROOM_TYPE = "org.matrix.msc1772.type"
+
 
 class RoomEncryptionAlgorithms:
     MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"

View file

@@ -22,7 +22,9 @@ logger = logging.getLogger(__name__)
 try:
     python_dependencies.check_requirements()
 except python_dependencies.DependencyException as e:
-    sys.stderr.writelines(e.message)
+    sys.stderr.writelines(
+        e.message  # noqa: B306, DependencyException.message is a property
+    )
     sys.exit(1)

View file

@@ -21,8 +21,10 @@ import signal
 import socket
 import sys
 import traceback
+import warnings
 from typing import Awaitable, Callable, Iterable
 
+from cryptography.utils import CryptographyDeprecationWarning
 from typing_extensions import NoReturn
 
 from twisted.internet import defer, error, reactor
@@ -195,6 +197,25 @@ def listen_metrics(bind_addresses, port):
     start_http_server(port, addr=host, registry=RegistryProxy)
 
 
+def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: dict):
+    # twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
+    # warning. It's fixed by https://github.com/twisted/twisted/pull/1522, so
+    # suppress the warning for now.
+    warnings.filterwarnings(
+        action="ignore",
+        category=CryptographyDeprecationWarning,
+        message="int_from_bytes is deprecated",
+    )
+
+    from synapse.util.manhole import manhole
+
+    listen_tcp(
+        bind_addresses,
+        port,
+        manhole(username="matrix", password="rabbithole", globals=manhole_globals),
+    )
+
+
 def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
     """
     Create a TCP socket for a port and several addresses
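
For completeness, the new `listen_manhole` helper is driven by a listener of type `manhole` in `homeserver.yaml`. A sketch of such a listener block, bound to loopback only since the username and password are hard-coded (this config shape is assumed from Synapse's manhole documentation, not from this hunk):

```yaml
listeners:
  - port: 9000
    bind_addresses: ['127.0.0.1']
    type: manhole
```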

View file

@@ -147,7 +147,6 @@ from synapse.storage.databases.main.user_directory import UserDirectoryStore
 from synapse.types import ReadReceipt
 from synapse.util.async_helpers import Linearizer
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 
 logger = logging.getLogger("synapse.app.generic_worker")
@@ -302,6 +301,8 @@ class GenericWorkerPresence(BasePresenceHandler):
             self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
         )
 
+        self._busy_presence_enabled = hs.config.experimental.msc3026_enabled
+
         hs.get_reactor().addSystemEventTrigger(
             "before",
             "shutdown",
@@ -439,8 +440,12 @@ class GenericWorkerPresence(BasePresenceHandler):
             PresenceState.ONLINE,
             PresenceState.UNAVAILABLE,
             PresenceState.OFFLINE,
+            PresenceState.BUSY,
         )
-        if presence not in valid_presence:
+
+        if presence not in valid_presence or (
+            presence == PresenceState.BUSY and not self._busy_presence_enabled
+        ):
             raise SynapseError(400, "Invalid presence state")
 
         user_id = target_user.to_string()
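
With the experimental flag enabled (see the `experimental.msc3026_enabled` config read above), a client can advertise the busy state through the ordinary presence endpoint. A hypothetical request — user ID, token, and URL are placeholders:

```sh
# PUT the MSC3026 busy state; the server rejects it with a 400
# "Invalid presence state" unless msc3026_enabled is set.
curl -X PUT \
    -d '{"presence": "org.matrix.msc3026.busy", "status_msg": "In a call"}' \
    "http://localhost:8008/_matrix/client/r0/presence/@alice:example.com/status?access_token=ACCESS_TOKEN"
```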
@@ -634,12 +639,8 @@ class GenericWorkerServer(HomeServer):
             if listener.type == "http":
                 self._listen_http(listener)
             elif listener.type == "manhole":
-                _base.listen_tcp(
-                    listener.bind_addresses,
-                    listener.port,
-                    manhole(
-                        username="matrix", password="rabbithole", globals={"hs": self}
-                    ),
+                _base.listen_manhole(
+                    listener.bind_addresses, listener.port, manhole_globals={"hs": self}
                 )
             elif listener.type == "metrics":
                 if not self.get_config().enable_metrics:
@@ -786,13 +787,6 @@ class FederationSenderHandler:
         self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
 
-    def on_start(self):
-        # There may be some events that are persisted but haven't been sent,
-        # so send them now.
-        self.federation_sender.notify_new_events(
-            self.store.get_room_max_stream_ordering()
-        )
-
     def wake_destination(self, server: str):
         self.federation_sender.wake_destination(server)

View file

@@ -67,7 +67,6 @@ from synapse.storage import DataStore
 from synapse.storage.engines import IncorrectDatabaseSetup
 from synapse.storage.prepare_database import UpgradeDatabaseException
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.manhole import manhole
 from synapse.util.module_loader import load_module
 from synapse.util.versionstring import get_version_string
 
@@ -288,12 +287,8 @@ class SynapseHomeServer(HomeServer):
             if listener.type == "http":
                 self._listening_services.extend(self._listener_http(config, listener))
             elif listener.type == "manhole":
-                listen_tcp(
-                    listener.bind_addresses,
-                    listener.port,
-                    manhole(
-                        username="matrix", password="rabbithole", globals={"hs": self}
-                    ),
+                _base.listen_manhole(
+                    listener.bind_addresses, listener.port, manhole_globals={"hs": self}
                 )
             elif listener.type == "replication":
                 services = listen_tcp(

View file

@@ -24,7 +24,7 @@ from ._base import Config, ConfigError
 _CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
 
 # Map from canonicalised cache name to cache.
-_CACHES = {}
+_CACHES = {}  # type: Dict[str, Callable[[float], None]]
 
 # a lock on the contents of _CACHES
 _CACHES_LOCK = threading.Lock()
@@ -59,7 +59,9 @@ def _canonicalise_cache_name(cache_name: str) -> str:
     return cache_name.lower()
 
 
-def add_resizable_cache(cache_name: str, cache_resize_callback: Callable):
+def add_resizable_cache(
+    cache_name: str, cache_resize_callback: Callable[[float], None]
+):
     """Register a cache whose size can dynamically change
 
     Args:
View file

@@ -27,3 +27,7 @@ class ExperimentalConfig(Config):
 
         # MSC2858 (multiple SSO identity providers)
         self.msc2858_enabled = experimental.get("msc2858_enabled", False)  # type: bool
+
+        # Spaces (MSC1772, MSC2946, etc)
+        self.spaces_enabled = experimental.get("spaces_enabled", False)  # type: bool
+
+        # MSC3026 (busy presence state)
+        self.msc3026_enabled = experimental.get("msc3026_enabled", False)  # type: bool
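
Assuming these options live under the usual `experimental_features` section of `homeserver.yaml` (the section name is not visible in this hunk), the two new flags would be enabled like so:

```yaml
# Sketch: both options default to false.
experimental_features:
  spaces_enabled: true
  msc3026_enabled: true
```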

View file

@@ -404,7 +404,11 @@ def _parse_key_servers(key_servers, federation_verify_certificates):
     try:
         jsonschema.validate(key_servers, TRUSTED_KEY_SERVERS_SCHEMA)
     except jsonschema.ValidationError as e:
-        raise ConfigError("Unable to parse 'trusted_key_servers': " + e.message)
+        raise ConfigError(
+            "Unable to parse 'trusted_key_servers': {}".format(
+                e.message  # noqa: B306, jsonschema.ValidationError.message is a valid attribute
+            )
+        )
 
     for server in key_servers:
         server_name = server["server_name"]

View file

@@ -56,7 +56,9 @@ class MetricsConfig(Config):
         try:
             check_requirements("sentry")
         except DependencyException as e:
-            raise ConfigError(e.message)
+            raise ConfigError(
+                e.message  # noqa: B306, DependencyException.message is a property
+            )
 
         self.sentry_dsn = config["sentry"].get("dsn")
         if not self.sentry_dsn:

View file

@@ -15,11 +15,12 @@
 # limitations under the License.
 
 from collections import Counter
-from typing import Iterable, Mapping, Optional, Tuple, Type
+from typing import Iterable, List, Mapping, Optional, Tuple, Type
 
 import attr
 
 from synapse.config._util import validate_config
+from synapse.config.sso import SsoAttributeRequirement
 from synapse.python_dependencies import DependencyException, check_requirements
 from synapse.types import Collection, JsonDict
 from synapse.util.module_loader import load_module
@@ -41,7 +42,9 @@ class OIDCConfig(Config):
         try:
             check_requirements("oidc")
         except DependencyException as e:
-            raise ConfigError(e.message) from e
+            raise ConfigError(
+                e.message  # noqa: B306, DependencyException.message is a property
+            ) from e
 
         # check we don't have any duplicate idp_ids now. (The SSO handler will also
         # check for duplicates when the REST listeners get registered, but that happens
@@ -76,6 +79,9 @@ class OIDCConfig(Config):
         #       Note that, if this is changed, users authenticating via that provider
         #       will no longer be recognised as the same user!
         #
+        #       (Use "oidc" here if you are migrating from an old "oidc_config"
+        #       configuration.)
+        #
         #   idp_name: A user-facing name for this identity provider, which is used to
         #       offer the user a choice of login mechanisms.
         #
@@ -191,6 +197,24 @@ class OIDCConfig(Config):
         #       which is set to the claims returned by the UserInfo Endpoint and/or
         #       in the ID Token.
         #
+        #       It is possible to configure Synapse to only allow logins if certain attributes
+        #       match particular values in the OIDC userinfo. The requirements can be listed under
+        #       `attribute_requirements` as shown below. All of the listed attributes must
+        #       match for the login to be permitted. Additional attributes can be added to
+        #       userinfo by expanding the `scopes` section of the OIDC config to retrieve
+        #       additional information from the OIDC provider.
+        #
+        #       If the OIDC claim is a list, then the attribute must match any value in the list.
+        #       Otherwise, it must exactly match the value of the claim. Using the example
+        #       below, the `family_name` claim MUST be "Stephensson", but the `groups`
+        #       claim MUST contain "admin".
+        #
+        #       attribute_requirements:
+        #         - attribute: family_name
+        #           value: "Stephensson"
+        #         - attribute: groups
+        #           value: "admin"
+        #
         # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
         # for information on how to configure these options.
         #
@@ -223,34 +247,9 @@ class OIDCConfig(Config):
         #      localpart_template: "{{{{ user.login }}}}"
         #      display_name_template: "{{{{ user.name }}}}"
         #      email_template: "{{{{ user.email }}}}"
-
-        # For use with Keycloak
-        #
-        #- idp_id: keycloak
-        #  idp_name: Keycloak
-        #  issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
-        #  client_id: "synapse"
-        #  client_secret: "copy secret generated in Keycloak UI"
-        #  scopes: ["openid", "profile"]
-
-        # For use with Github
-        #
-        #- idp_id: github
-        #  idp_name: Github
-        #  idp_brand: github
-        #  discover: false
-        #  issuer: "https://github.com/"
-        #  client_id: "your-client-id" # TO BE FILLED
-        #  client_secret: "your-client-secret" # TO BE FILLED
-        #  authorization_endpoint: "https://github.com/login/oauth/authorize"
-        #  token_endpoint: "https://github.com/login/oauth/access_token"
-        #  userinfo_endpoint: "https://api.github.com/user"
-        #  scopes: ["read:user"]
-        #  user_mapping_provider:
-        #    config:
-        #      subject_claim: "id"
-        #      localpart_template: "{{{{ user.login }}}}"
-        #      display_name_template: "{{{{ user.name }}}}"
+        #  attribute_requirements:
+        #    - attribute: userGroup
+        #      value: "synapseUsers"
         """.format(
             mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
         )
@@ -329,6 +328,10 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
         },
         "allow_existing_users": {"type": "boolean"},
         "user_mapping_provider": {"type": ["object", "null"]},
+        "attribute_requirements": {
+            "type": "array",
+            "items": SsoAttributeRequirement.JSON_SCHEMA,
+        },
     },
 }
@@ -465,6 +468,11 @@ def _parse_oidc_config_dict(
             jwt_header=client_secret_jwt_key_config["jwt_header"],
             jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}),
         )
+    # parse attribute_requirements from config (list of dicts) into a list of SsoAttributeRequirement
+    attribute_requirements = [
+        SsoAttributeRequirement(**x)
+        for x in oidc_config.get("attribute_requirements", [])
+    ]
 
     return OidcProviderConfig(
         idp_id=idp_id,
@@ -488,6 +496,7 @@ def _parse_oidc_config_dict(
         allow_existing_users=oidc_config.get("allow_existing_users", False),
         user_mapping_provider_class=user_mapping_provider_class,
         user_mapping_provider_config=user_mapping_provider_config,
+        attribute_requirements=attribute_requirements,
     )
@@ -577,3 +586,6 @@ class OidcProviderConfig:
 
     # the config of the user mapping provider
     user_mapping_provider_config = attr.ib()
+
+    # attributes to require in userinfo to allow login/registration
+    attribute_requirements = attr.ib(type=List[SsoAttributeRequirement])

View file

@@ -95,11 +95,11 @@ class RatelimitConfig(Config):
 
         self.rc_joins_local = RateLimitConfig(
             config.get("rc_joins", {}).get("local", {}),
-            defaults={"per_second": 0.1, "burst_count": 3},
+            defaults={"per_second": 0.1, "burst_count": 10},
         )
         self.rc_joins_remote = RateLimitConfig(
             config.get("rc_joins", {}).get("remote", {}),
-            defaults={"per_second": 0.01, "burst_count": 3},
+            defaults={"per_second": 0.01, "burst_count": 10},
         )
 
         # Ratelimit cross-user key requests:
@@ -187,10 +187,10 @@ class RatelimitConfig(Config):
         #rc_joins:
         #  local:
         #    per_second: 0.1
-        #    burst_count: 3
+        #    burst_count: 10
         #  remote:
         #    per_second: 0.01
-        #    burst_count: 3
+        #    burst_count: 10
         #
         #rc_3pid_validation:
         #  per_second: 0.003

View file

@@ -177,7 +177,9 @@ class ContentRepositoryConfig(Config):
             check_requirements("url_preview")
 
         except DependencyException as e:
-            raise ConfigError(e.message)
+            raise ConfigError(
+                e.message  # noqa: B306, DependencyException.message is a property
+            )
 
         if "url_preview_ip_range_blacklist" not in config:
             raise ConfigError(

View file

@@ -76,7 +76,9 @@ class SAML2Config(Config):
         try:
             check_requirements("saml2")
         except DependencyException as e:
-            raise ConfigError(e.message)
+            raise ConfigError(
+                e.message  # noqa: B306, DependencyException.message is a property
+            )
 
         self.saml2_enabled = True

View file

@@ -39,7 +39,9 @@ class TracerConfig(Config):
         try:
             check_requirements("opentracing")
         except DependencyException as e:
-            raise ConfigError(e.message)
+            raise ConfigError(
+                e.message  # noqa: B306, DependencyException.message is a property
+            )
 
         # The tracer is enabled so sanitize the config

View file

@@ -191,7 +191,7 @@ def _context_info_cb(ssl_connection, where, ret):
         # ... we further assume that SSLClientConnectionCreator has set the
         # '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
         tls_protocol._synapse_tls_verifier.verify_context_info_cb(ssl_connection, where)
-    except:  # noqa: E722, taken from the twisted implementation
+    except BaseException:  # taken from the twisted implementation
         logger.exception("Error during info_callback")
        f = Failure()
         tls_protocol.failVerification(f)
@@ -219,7 +219,7 @@ class SSLClientConnectionCreator:
         # ... and we also gut-wrench a '_synapse_tls_verifier' attribute into the
         # tls_protocol so that the SSL context's info callback has something to
         # call to do the cert verification.
-        setattr(tls_protocol, "_synapse_tls_verifier", self._verifier)
+        tls_protocol._synapse_tls_verifier = self._verifier
         return connection

View file

@@ -57,7 +57,7 @@ from synapse.util.metrics import Measure
 from synapse.util.retryutils import NotRetryingDestination
 
 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer
 
 logger = logging.getLogger(__name__)

View file

@@ -98,7 +98,7 @@ class DefaultDictProperty(DictProperty):
 class _EventInternalMetadata:
-    __slots__ = ["_dict", "stream_ordering"]
+    __slots__ = ["_dict", "stream_ordering", "outlier"]
 
     def __init__(self, internal_metadata_dict: JsonDict):
         # we have to copy the dict, because it turns out that the same dict is
@@ -108,7 +108,10 @@ class _EventInternalMetadata:
         # the stream ordering of this event. None, until it has been persisted.
         self.stream_ordering = None  # type: Optional[int]
 
-    outlier = DictProperty("outlier")  # type: bool
+        # whether this event is an outlier (ie, whether we have the state at that point
+        # in the DAG)
+        self.outlier = False
+
     out_of_band_membership = DictProperty("out_of_band_membership")  # type: bool
     send_on_behalf_of = DictProperty("send_on_behalf_of")  # type: str
     recheck_redaction = DictProperty("recheck_redaction")  # type: bool
@@ -129,7 +132,7 @@ class _EventInternalMetadata:
         return dict(self._dict)
 
     def is_outlier(self) -> bool:
-        return self._dict.get("outlier", False)
+        return self.outlier
 
     def is_out_of_band_membership(self) -> bool:
         """Whether this is an out of band membership, like an invite or an invite

View file

@@ -13,12 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from typing import Callable, Union
+from typing import TYPE_CHECKING, Union
 
 from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
 from synapse.types import Requester, StateMap
 
+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
 
 class ThirdPartyEventRules:
     """Allows server admins to provide a Python module implementing an extra
@@ -28,7 +31,7 @@ class ThirdPartyEventRules:
     behaviours.
     """
 
-    def __init__(self, hs):
+    def __init__(self, hs: "HomeServer"):
         self.third_party_rules = None
 
         self.store = hs.get_datastore()
@@ -95,10 +98,9 @@ class ThirdPartyEventRules:
         if self.third_party_rules is None:
             return True
 
-        ret = await self.third_party_rules.on_create_room(
+        return await self.third_party_rules.on_create_room(
             requester, config, is_requester_admin
         )
-        return ret
 
     async def check_threepid_can_be_invited(
         self, medium: str, address: str, room_id: str
@@ -119,10 +121,9 @@ class ThirdPartyEventRules:
 
         state_events = await self._get_state_map_for_room(room_id)
 
-        ret = await self.third_party_rules.check_threepid_can_be_invited(
+        return await self.third_party_rules.check_threepid_can_be_invited(
             medium, address, state_events
         )
-        return ret
 
     async def check_visibility_can_be_modified(
         self, room_id: str, new_visibility: str
@@ -143,7 +144,7 @@ class ThirdPartyEventRules:
         check_func = getattr(
             self.third_party_rules, "check_visibility_can_be_modified", None
         )
-        if not check_func or not isinstance(check_func, Callable):
+        if not check_func or not callable(check_func):
             return True
 
         state_events = await self._get_state_map_for_room(room_id)

View file

@@ -22,6 +22,7 @@ from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.util.async_helpers import yieldable_gather_results
+from synapse.util.frozenutils import unfreeze

from . import EventBase
@@ -54,6 +55,8 @@ def prune_event(event: EventBase) -> EventBase:
        event.internal_metadata.stream_ordering
    )

+    pruned_event.internal_metadata.outlier = event.internal_metadata.outlier
+
    # Mark the event as redacted
    pruned_event.internal_metadata.redacted = True
@@ -400,10 +403,19 @@ class EventClientSerializer:
            # If there is an edit replace the content, preserving existing
            # relations.
+            # Ensure we take copies of the edit content, otherwise we risk modifying
+            # the original event.
+            edit_content = edit.content.copy()
+
+            # Unfreeze the event content if necessary, so that we may modify it below
+            edit_content = unfreeze(edit_content)
+            serialized_event["content"] = edit_content.get("m.new_content", {})
+
+            # Check for existing relations
            relations = event.content.get("m.relates_to")
-            serialized_event["content"] = edit.content.get("m.new_content", {})
            if relations:
-                serialized_event["content"]["m.relates_to"] = relations
+                # Keep the relations, ensuring we use a dict copy of the original
+                serialized_event["content"]["m.relates_to"] = relations.copy()
            else:
                serialized_event["content"].pop("m.relates_to", None)
View file
@@ -27,11 +27,13 @@ from typing import (
    List,
    Mapping,
    Optional,
+    Sequence,
    Tuple,
    TypeVar,
    Union,
)

+import attr
from prometheus_client import Counter

from twisted.internet import defer
@@ -62,7 +64,7 @@ from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)
@@ -455,6 +457,7 @@ class FederationClient(FederationBase):
        description: str,
        destinations: Iterable[str],
        callback: Callable[[str], Awaitable[T]],
+        failover_on_unknown_endpoint: bool = False,
    ) -> T:
        """Try an operation on a series of servers, until it succeeds
@@ -474,6 +477,10 @@ class FederationClient(FederationBase):
                next server tried. Normally the stacktrace is logged but this is
                suppressed if the exception is an InvalidResponseError.

+            failover_on_unknown_endpoint: if True, we will try other servers if it looks
+                like a server doesn't support the endpoint. This is typically useful
+                if the endpoint in question is new or experimental.
+
        Returns:
            The result of callback, if it succeeds
@@ -493,16 +500,31 @@ class FederationClient(FederationBase):
            except UnsupportedRoomVersionError:
                raise
            except HttpResponseException as e:
-                if not 500 <= e.code < 600:
-                    raise e.to_synapse_error()
-                else:
-                    logger.warning(
-                        "Failed to %s via %s: %i %s",
-                        description,
-                        destination,
-                        e.code,
-                        e.args[0],
-                    )
+                synapse_error = e.to_synapse_error()
+                failover = False
+
+                if 500 <= e.code < 600:
+                    failover = True
+
+                elif failover_on_unknown_endpoint:
+                    # there is no good way to detect an "unknown" endpoint. Dendrite
+                    # returns a 404 (with no body); synapse returns a 400
+                    # with M_UNRECOGNISED.
+                    if e.code == 404 or (
+                        e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED
+                    ):
+                        failover = True
+
+                if not failover:
+                    raise synapse_error from e
+
+                logger.warning(
+                    "Failed to %s via %s: %i %s",
+                    description,
+                    destination,
+                    e.code,
+                    e.args[0],
+                )
            except Exception:
                logger.warning(
                    "Failed to %s via %s", description, destination, exc_info=True
@@ -1042,3 +1064,141 @@ class FederationClient(FederationBase):
        # If we don't manage to find it, return None. It's not an error if a
        # server doesn't give it to us.
        return None

    async def get_space_summary(
        self,
        destinations: Iterable[str],
        room_id: str,
        suggested_only: bool,
        max_rooms_per_space: Optional[int],
        exclude_rooms: List[str],
    ) -> "FederationSpaceSummaryResult":
        """
        Call other servers to get a summary of the given space

        Args:
            destinations: The remote servers. We will try them in turn, omitting any
                that have been blacklisted.
            room_id: ID of the space to be queried
            suggested_only: If true, ask the remote server to only return children
                with the "suggested" flag set
            max_rooms_per_space: A limit on the number of children to return for each
                space
            exclude_rooms: A list of room IDs to tell the remote server to skip

        Returns:
            a parsed FederationSpaceSummaryResult

        Raises:
            SynapseError if we were unable to get a valid summary from any of the
                remote servers
        """

        async def send_request(destination: str) -> FederationSpaceSummaryResult:
            res = await self.transport_layer.get_space_summary(
                destination=destination,
                room_id=room_id,
                suggested_only=suggested_only,
                max_rooms_per_space=max_rooms_per_space,
                exclude_rooms=exclude_rooms,
            )

            try:
                return FederationSpaceSummaryResult.from_json_dict(res)
            except ValueError as e:
                raise InvalidResponseError(str(e))

        return await self._try_destination_list(
            "fetch space summary",
            destinations,
            send_request,
            failover_on_unknown_endpoint=True,
        )


@attr.s(frozen=True, slots=True)
class FederationSpaceSummaryEventResult:
    """Represents a single event in the result of a successful get_space_summary call.

    It's essentially just a serialised event object, but we do a bit of parsing and
    validation in `from_json_dict` and store some of the validated properties in
    object attributes.
    """

    event_type = attr.ib(type=str)
    state_key = attr.ib(type=str)
    via = attr.ib(type=Sequence[str])

    # the raw data, including the above keys
    data = attr.ib(type=JsonDict)

    @classmethod
    def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryEventResult":
        """Parse an event within the result of a /spaces/ request

        Args:
            d: json object to be parsed

        Raises:
            ValueError if d is not a valid event
        """
        event_type = d.get("type")
        if not isinstance(event_type, str):
            raise ValueError("Invalid event: 'event_type' must be a str")

        state_key = d.get("state_key")
        if not isinstance(state_key, str):
            raise ValueError("Invalid event: 'state_key' must be a str")

        content = d.get("content")
        if not isinstance(content, dict):
            raise ValueError("Invalid event: 'content' must be a dict")

        via = content.get("via")
        if not isinstance(via, Sequence):
            raise ValueError("Invalid event: 'via' must be a list")
        if any(not isinstance(v, str) for v in via):
            raise ValueError("Invalid event: 'via' must be a list of strings")

        return cls(event_type, state_key, via, d)


@attr.s(frozen=True, slots=True)
class FederationSpaceSummaryResult:
    """Represents the data returned by a successful get_space_summary call."""

    rooms = attr.ib(type=Sequence[JsonDict])
    events = attr.ib(type=Sequence[FederationSpaceSummaryEventResult])

    @classmethod
    def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryResult":
        """Parse the result of a /spaces/ request

        Args:
            d: json object to be parsed

        Raises:
            ValueError if d is not a valid /spaces/ response
        """
        rooms = d.get("rooms")
        if not isinstance(rooms, Sequence):
            raise ValueError("'rooms' must be a list")
        if any(not isinstance(r, dict) for r in rooms):
            raise ValueError("Invalid room in 'rooms' list")

        events = d.get("events")
        if not isinstance(events, Sequence):
            raise ValueError("'events' must be a list")
        if any(not isinstance(e, dict) for e in events):
            raise ValueError("Invalid event in 'events' list")

        parsed_events = [
            FederationSpaceSummaryEventResult.from_json_dict(e) for e in events
        ]

        return cls(rooms, parsed_events)
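To illustrate the parsing above, here is a sketch of an entirely made-up (but MSC2946-shaped) /spaces/ response passing through `from_json_dict`, and a malformed one being rejected:

# Illustrative payload; the room IDs and names are invented.
response = {
    "rooms": [{"room_id": "!space:example.org", "name": "A space"}],
    "events": [
        {
            "type": "m.space.child",
            "state_key": "!child:example.org",
            "content": {"via": ["example.org"]},
        }
    ],
}

summary = FederationSpaceSummaryResult.from_json_dict(response)
assert summary.events[0].via == ["example.org"]

# Anything malformed surfaces as ValueError, which get_space_summary turns
# into InvalidResponseError so that _try_destination_list moves on.
try:
    FederationSpaceSummaryResult.from_json_dict({"rooms": [42]})
except ValueError:
    pass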
View file
@@ -35,7 +35,7 @@ from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure

-from synapse.api.constants import EduTypes, EventTypes, Membership
+from synapse.api.constants import EduTypes, EventTypes
from synapse.api.errors import (
    AuthError,
    Codes,
@@ -63,7 +63,7 @@ from synapse.replication.http.federation import (
    ReplicationFederationSendEduRestServlet,
    ReplicationGetQueryRestServlet,
)
-from synapse.types import JsonDict, get_domain_from_id
+from synapse.types import JsonDict
from synapse.util import glob_to_regex, json_decoder, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
@@ -727,27 +727,6 @@ class FederationServer(FederationBase):
            if the event was unacceptable for any other reason (eg, too large,
            too many prev_events, couldn't find the prev_events)
        """
-        # check that it's actually being sent from a valid destination to
-        # workaround bug #1753 in 0.18.5 and 0.18.6
-        if origin != get_domain_from_id(pdu.sender):
-            # We continue to accept join events from any server; this is
-            # necessary for the federation join dance to work correctly.
-            # (When we join over federation, the "helper" server is
-            # responsible for sending out the join event, rather than the
-            # origin. See bug #1893. This is also true for some third party
-            # invites).
-            if not (
-                pdu.type == "m.room.member"
-                and pdu.content
-                and pdu.content.get("membership", None)
-                in (Membership.JOIN, Membership.INVITE)
-            ):
-                logger.info(
-                    "Discarding PDU %s from invalid origin %s", pdu.event_id, origin
-                )
-                return
-            else:
-                logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)

        # We've already checked that we know the room version by this point
        room_version = await self.store.get_room_version(pdu.room_id)

View file
@@ -31,25 +31,39 @@ Events are replicated via a separate events stream.

import logging
from collections import namedtuple
-from typing import Dict, List, Tuple, Type
+from typing import (
+    TYPE_CHECKING,
+    Dict,
+    Hashable,
+    Iterable,
+    List,
+    Optional,
+    Sized,
+    Tuple,
+    Type,
+)

from sortedcontainers import SortedDict

-from twisted.internet import defer
-
from synapse.api.presence import UserPresenceState
+from synapse.federation.sender import AbstractFederationSender, FederationSender
from synapse.metrics import LaterGauge
+from synapse.replication.tcp.streams.federation import FederationStream
+from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
from synapse.util.metrics import Measure

from .units import Edu

+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
logger = logging.getLogger(__name__)


-class FederationRemoteSendQueue:
+class FederationRemoteSendQueue(AbstractFederationSender):
    """A drop in replacement for FederationSender"""

-    def __init__(self, hs):
+    def __init__(self, hs: "HomeServer"):
        self.server_name = hs.hostname
        self.clock = hs.get_clock()
        self.notifier = hs.get_notifier()
@@ -58,7 +72,7 @@ class FederationRemoteSendQueue:
        # We may have multiple federation sender instances, so we need to track
        # their positions separately.
        self._sender_instances = hs.config.worker.federation_shard_config.instances
-        self._sender_positions = {}
+        self._sender_positions = {}  # type: Dict[str, int]

        # Pending presence map user_id -> UserPresenceState
        self.presence_map = {}  # type: Dict[str, UserPresenceState]
@@ -71,7 +85,7 @@ class FederationRemoteSendQueue:
        # Stream position -> (user_id, destinations)
        self.presence_destinations = (
            SortedDict()
-        )  # type: SortedDict[int, Tuple[str, List[str]]]
+        )  # type: SortedDict[int, Tuple[str, Iterable[str]]]

        # (destination, key) -> EDU
        self.keyed_edu = {}  # type: Dict[Tuple[str, tuple], Edu]
@@ -94,7 +108,7 @@ class FederationRemoteSendQueue:
        # we need to make a new function here, so that the inner lambda binds
        # to the queue rather than to the name of the queue, which changes.
        # ARGH.
-        def register(name, queue):
+        def register(name: str, queue: Sized) -> None:
            LaterGauge(
                "synapse_federation_send_queue_%s_size" % (queue_name,),
                "",
@@ -115,13 +129,13 @@ class FederationRemoteSendQueue:

        self.clock.looping_call(self._clear_queue, 30 * 1000)

-    def _next_pos(self):
+    def _next_pos(self) -> int:
        pos = self.pos
        self.pos += 1
        self.pos_time[self.clock.time_msec()] = pos
        return pos

-    def _clear_queue(self):
+    def _clear_queue(self) -> None:
        """Clear the queues for anything older than N minutes"""

        FIVE_MINUTES_AGO = 5 * 60 * 1000
@@ -138,7 +152,7 @@ class FederationRemoteSendQueue:

        self._clear_queue_before_pos(position_to_delete)

-    def _clear_queue_before_pos(self, position_to_delete):
+    def _clear_queue_before_pos(self, position_to_delete: int) -> None:
        """Clear all the queues from before a given position"""
        with Measure(self.clock, "send_queue._clear"):
            # Delete things out of presence maps
@@ -188,13 +202,18 @@ class FederationRemoteSendQueue:
            for key in keys[:i]:
                del self.edus[key]

-    def notify_new_events(self, max_token):
+    def notify_new_events(self, max_token: RoomStreamToken) -> None:
        """As per FederationSender"""
-        # We don't need to replicate this as it gets sent down a different
-        # stream.
-        pass
+        # This should never get called.
+        raise NotImplementedError()

-    def build_and_send_edu(self, destination, edu_type, content, key=None):
+    def build_and_send_edu(
+        self,
+        destination: str,
+        edu_type: str,
+        content: JsonDict,
+        key: Optional[Hashable] = None,
+    ) -> None:
        """As per FederationSender"""
        if destination == self.server_name:
            logger.info("Not sending EDU to ourselves")
@@ -218,38 +237,39 @@ class FederationRemoteSendQueue:

        self.notifier.on_new_replication_data()

-    def send_read_receipt(self, receipt):
+    async def send_read_receipt(self, receipt: ReadReceipt) -> None:
        """As per FederationSender

        Args:
-            receipt (synapse.types.ReadReceipt):
+            receipt:
        """
        # nothing to do here: the replication listener will handle it.
-        return defer.succeed(None)

-    def send_presence(self, states):
+    def send_presence(self, states: List[UserPresenceState]) -> None:
        """As per FederationSender

        Args:
-            states (list(UserPresenceState))
+            states
        """
        pos = self._next_pos()

        # We only want to send presence for our own users, so lets always just
        # filter here just in case.
-        local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
+        local_states = [s for s in states if self.is_mine_id(s.user_id)]

        self.presence_map.update({state.user_id: state for state in local_states})
        self.presence_changed[pos] = [state.user_id for state in local_states]

        self.notifier.on_new_replication_data()

-    def send_presence_to_destinations(self, states, destinations):
+    def send_presence_to_destinations(
+        self, states: Iterable[UserPresenceState], destinations: Iterable[str]
+    ) -> None:
        """As per FederationSender

        Args:
-            states (list[UserPresenceState])
-            destinations (list[str])
+            states
+            destinations
        """
        for state in states:
            pos = self._next_pos()
@@ -258,15 +278,18 @@ class FederationRemoteSendQueue:

        self.notifier.on_new_replication_data()

-    def send_device_messages(self, destination):
+    def send_device_messages(self, destination: str) -> None:
        """As per FederationSender"""
        # We don't need to replicate this as it gets sent down a different
        # stream.

-    def get_current_token(self):
+    def wake_destination(self, server: str) -> None:
+        pass
+
+    def get_current_token(self) -> int:
        return self.pos - 1

-    def federation_ack(self, instance_name, token):
+    def federation_ack(self, instance_name: str, token: int) -> None:
        if self._sender_instances:
            # If we have configured multiple federation sender instances we need
            # to track their positions separately, and only clear the queue up
@@ -504,13 +527,16 @@ ParsedFederationStreamData = namedtuple(
)


-def process_rows_for_federation(transaction_queue, rows):
+def process_rows_for_federation(
+    transaction_queue: FederationSender,
+    rows: List[FederationStream.FederationStreamRow],
+) -> None:
    """Parse a list of rows from the federation stream and put them in the
    transaction queue ready for sending to the relevant homeservers.

    Args:
-        transaction_queue (FederationSender)
-        rows (list(synapse.replication.tcp.streams.federation.FederationStream.FederationStreamRow))
+        transaction_queue
+        rows
    """

    # The federation stream contains a bunch of different types of

View file
@@ -13,14 +13,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.

+import abc
import logging
-from typing import Dict, Hashable, Iterable, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple

from prometheus_client import Counter

from twisted.internet import defer

-import synapse
import synapse.metrics
from synapse.api.presence import UserPresenceState
from synapse.events import EventBase
@@ -40,9 +40,12 @@ from synapse.metrics import (
    events_processed_counter,
)
from synapse.metrics.background_process_metrics import run_as_background_process
-from synapse.types import ReadReceipt, RoomStreamToken
+from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
from synapse.util.metrics import Measure, measure_func

+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
logger = logging.getLogger(__name__)

sent_pdus_destination_dist_count = Counter(
@@ -65,8 +68,91 @@ CATCH_UP_STARTUP_DELAY_SEC = 15
CATCH_UP_STARTUP_INTERVAL_SEC = 5


-class FederationSender:
-    def __init__(self, hs: "synapse.server.HomeServer"):
+class AbstractFederationSender(metaclass=abc.ABCMeta):
+    @abc.abstractmethod
+    def notify_new_events(self, max_token: RoomStreamToken) -> None:
+        """This gets called when we have some new events we might want to
+        send out to other servers.
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    async def send_read_receipt(self, receipt: ReadReceipt) -> None:
+        """Send a RR to any other servers in the room
+
+        Args:
+            receipt: receipt to be sent
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def send_presence(self, states: List[UserPresenceState]) -> None:
+        """Send the new presence states to the appropriate destinations.
+
+        This actually queues up the presence states ready for sending and
+        triggers a background task to process them and send out the transactions.
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def send_presence_to_destinations(
+        self, states: Iterable[UserPresenceState], destinations: Iterable[str]
+    ) -> None:
+        """Send the given presence states to the given destinations.
+
+        Args:
+            states: the presence states to send
+            destinations: the destinations to send them to
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def build_and_send_edu(
+        self,
+        destination: str,
+        edu_type: str,
+        content: JsonDict,
+        key: Optional[Hashable] = None,
+    ) -> None:
+        """Construct an Edu object, and queue it for sending
+
+        Args:
+            destination: name of server to send to
+            edu_type: type of EDU to send
+            content: content of EDU
+            key: clobbering key for this edu
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def send_device_messages(self, destination: str) -> None:
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def wake_destination(self, destination: str) -> None:
+        """Called when we want to retry sending transactions to a remote.
+
+        This is mainly useful if the remote server has been down and we think it
+        might have come back.
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def get_current_token(self) -> int:
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def federation_ack(self, instance_name: str, token: int) -> None:
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    async def get_replication_rows(
+        self, instance_name: str, from_token: int, to_token: int, target_row_count: int
+    ) -> Tuple[List[Tuple[int, Tuple]], int, bool]:
+        raise NotImplementedError()
+
+
+class FederationSender(AbstractFederationSender):
+    def __init__(self, hs: "HomeServer"):
        self.hs = hs

        self.server_name = hs.hostname
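The abstract base class makes the "drop in replacement" claim above checkable by the interpreter: both implementations must provide the same surface, and a forgotten override fails at instantiation rather than at call time. A sketch with simplified stand-in classes (not Synapse's real ones):

import abc

class Sender(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def send_device_messages(self, destination: str) -> None:
        raise NotImplementedError()

class RealSender(Sender):
    def send_device_messages(self, destination: str) -> None:
        print("sending to", destination)

class QueueOnlySender(Sender):
    pass  # forgot to implement the abstract method

RealSender()  # fine
try:
    QueueOnlySender()  # TypeError: abstract method not overridden
except TypeError:
    pass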
@@ -432,7 +518,7 @@ class FederationSender:
            queue.flush_read_receipts_for_room(room_id)

    @preserve_fn  # the caller should not yield on this
-    async def send_presence(self, states: List[UserPresenceState]):
+    async def send_presence(self, states: List[UserPresenceState]) -> None:
        """Send the new presence states to the appropriate destinations.

        This actually queues up the presence states ready for sending and
@@ -494,7 +580,7 @@ class FederationSender:
            self._get_per_destination_queue(destination).send_presence(states)

    @measure_func("txnqueue._process_presence")
-    async def _process_presence_inner(self, states: List[UserPresenceState]):
+    async def _process_presence_inner(self, states: List[UserPresenceState]) -> None:
        """Given a list of states populate self.pending_presence_by_dest and
        poke to send a new transaction to each destination
        """
@@ -516,9 +602,9 @@ class FederationSender:
        self,
        destination: str,
        edu_type: str,
-        content: dict,
+        content: JsonDict,
        key: Optional[Hashable] = None,
-    ):
+    ) -> None:
        """Construct an Edu object, and queue it for sending

        Args:
@@ -545,7 +631,7 @@ class FederationSender:

        self.send_edu(edu, key)

-    def send_edu(self, edu: Edu, key: Optional[Hashable]):
+    def send_edu(self, edu: Edu, key: Optional[Hashable]) -> None:
        """Queue an EDU for sending

        Args:
@@ -563,7 +649,7 @@ class FederationSender:
        else:
            queue.send_edu(edu)

-    def send_device_messages(self, destination: str):
+    def send_device_messages(self, destination: str) -> None:
        if destination == self.server_name:
            logger.warning("Not sending device update to ourselves")
            return
@@ -575,7 +661,7 @@ class FederationSender:

        self._get_per_destination_queue(destination).attempt_new_transaction()

-    def wake_destination(self, destination: str):
+    def wake_destination(self, destination: str) -> None:
        """Called when we want to retry sending transactions to a remote.

        This is mainly useful if the remote server has been down and we think it
@@ -599,6 +685,10 @@ class FederationSender:
        # to a worker.
        return 0

+    def federation_ack(self, instance_name: str, token: int) -> None:
+        # It is not expected that this gets called on FederationSender.
+        raise NotImplementedError()
+
    @staticmethod
    async def get_replication_rows(
        instance_name: str, from_token: int, to_token: int, target_row_count: int
@@ -607,7 +697,7 @@ class FederationSender:
        # to a worker.
        return [], 0, False

-    async def _wake_destinations_needing_catchup(self):
+    async def _wake_destinations_needing_catchup(self) -> None:
        """
        Wakes up destinations that need catch-up and are not currently being
        backed off from.

View file
@@ -15,7 +15,7 @@
# limitations under the License.
import datetime
import logging
-from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, cast
+from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple

import attr
from prometheus_client import Counter
@@ -77,6 +77,7 @@ class PerDestinationQueue:
        self._transaction_manager = transaction_manager
        self._instance_name = hs.get_instance_name()
        self._federation_shard_config = hs.config.worker.federation_shard_config
+        self._state = hs.get_state_handler()

        self._should_send_on_this_instance = True
        if not self._federation_shard_config.should_handle(
@@ -415,22 +416,97 @@ class PerDestinationQueue:
                "This should not happen." % event_ids
            )

-            if logger.isEnabledFor(logging.INFO):
-                rooms = [p.room_id for p in catchup_pdus]
-                logger.info("Catching up rooms to %s: %r", self._destination, rooms)
-
-            await self._transaction_manager.send_new_transaction(
-                self._destination, catchup_pdus, []
-            )
-
-            sent_transactions_counter.inc()
-            final_pdu = catchup_pdus[-1]
-            self._last_successful_stream_ordering = cast(
-                int, final_pdu.internal_metadata.stream_ordering
-            )
-            await self._store.set_destination_last_successful_stream_ordering(
-                self._destination, self._last_successful_stream_ordering
-            )
+            # We send transactions with events from one room only, as it's likely
+            # that the remote will have to do additional processing, which may
+            # take some time. It's better to give it small amounts of work
+            # rather than risk the request timing out and repeatedly being
+            # retried, and not making any progress.
+            #
+            # Note: `catchup_pdus` will have exactly one PDU per room.
+            for pdu in catchup_pdus:
+                # The PDU from the DB will be the last PDU in the room from
+                # *this server* that wasn't sent to the remote. However, other
+                # servers may have sent lots of events since then, and we want
+                # to try and tell the remote only about the *latest* events in
+                # the room. This is so that it doesn't get inundated by events
+                # from various parts of the DAG, which all need to be processed.
+                #
+                # Note: this does mean that in large rooms a server coming back
+                # online will get sent the same events from all the different
+                # servers, but the remote will correctly deduplicate them and
+                # handle it only once.

+                # Step 1, fetch the current extremities
+                extrems = await self._store.get_prev_events_for_room(pdu.room_id)
+
+                if pdu.event_id in extrems:
+                    # If the event is in the extremities, then great! We can just
+                    # use that without having to do further checks.
+                    room_catchup_pdus = [pdu]
+                else:
+                    # If not, fetch the extremities and figure out which we can
+                    # send.
+                    extrem_events = await self._store.get_events_as_list(extrems)
+
+                    new_pdus = []
+                    for p in extrem_events:
+                        # We pulled this from the DB, so it'll be non-null
+                        assert p.internal_metadata.stream_ordering
+
+                        # Filter out events that happened before the remote went
+                        # offline
+                        if (
+                            p.internal_metadata.stream_ordering
+                            < self._last_successful_stream_ordering
+                        ):
+                            continue
+
+                        # Filter out events where the server is not in the room,
+                        # e.g. it may have left/been kicked. *Ideally* we'd pull
+                        # out the kick and send that, but it's a rare edge case
+                        # so we don't bother for now (the server that sent the
+                        # kick should send it out if it's online).
+                        hosts = await self._state.get_hosts_in_room_at_events(
+                            p.room_id, [p.event_id]
+                        )
+                        if self._destination not in hosts:
+                            continue
+                        new_pdus.append(p)
+
+                    # If we've filtered out all the extremities, fall back to
+                    # sending the original event. This should ensure that the
+                    # server gets at least some of the missed events (especially
+                    # if the other sending servers are up).
+                    if new_pdus:
+                        room_catchup_pdus = new_pdus
+                    else:
+                        room_catchup_pdus = [pdu]
+
+                logger.info(
+                    "Catching up rooms to %s: %r", self._destination, pdu.room_id
+                )
+
+                await self._transaction_manager.send_new_transaction(
+                    self._destination, room_catchup_pdus, []
+                )
+
+                sent_transactions_counter.inc()
+
+                # We pulled this from the DB, so it'll be non-null
+                assert pdu.internal_metadata.stream_ordering
+
+                # Note that we mark the last successful stream ordering as that
+                # from the *original* PDU, rather than the PDU(s) we actually
+                # send. This is because we use it to mark our position in the
+                # queue of missed PDUs to process.
+                self._last_successful_stream_ordering = (
+                    pdu.internal_metadata.stream_ordering
+                )
+
+                await self._store.set_destination_last_successful_stream_ordering(
+                    self._destination, self._last_successful_stream_ordering
+                )

    def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:
        if not self._pending_rrs:
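The per-room filtering above can be summarised as a small pure function: keep the forward extremities that are newer than the destination's last successful stream ordering and that the destination can actually see, and fall back to the original missed PDU if nothing survives. A sketch under simplified, hypothetical types (`Pdu` and `pick_catchup_pdus` are illustrative names only):

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pdu:  # hypothetical stand-in for a Synapse event
    event_id: str
    stream_ordering: int

def pick_catchup_pdus(
    pdu: Pdu,
    extremities: List[Pdu],
    last_successful_stream_ordering: int,
    destination_in_room_at: Callable[[Pdu], bool],
) -> List[Pdu]:
    # Prefer current extremities the remote hasn't seen and may see;
    # otherwise fall back to the original PDU so catch-up still progresses.
    fresh = [
        p
        for p in extremities
        if p.stream_ordering >= last_successful_stream_ordering
        and destination_in_room_at(p)
    ]
    return fresh or [pdu]

extrems = [Pdu("$new", 10), Pdu("$stale", 3)]
picked = pick_catchup_pdus(Pdu("$missed", 5), extrems, 5, lambda p: p.event_id != "$stale")
assert [p.event_id for p in picked] == ["$new"]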
View file

@@ -16,7 +16,7 @@

import logging
import urllib
-from typing import Any, Dict, Optional
+from typing import Any, Dict, List, Optional

from synapse.api.constants import Membership
from synapse.api.errors import Codes, HttpResponseException, SynapseError
@@ -26,6 +26,7 @@ from synapse.api.urls import (
    FEDERATION_V2_PREFIX,
)
from synapse.logging.utils import log_function
+from synapse.types import JsonDict

logger = logging.getLogger(__name__)
@@ -978,6 +979,38 @@ class TransportLayerClient:

        return self.client.get_json(destination=destination, path=path)

    async def get_space_summary(
        self,
        destination: str,
        room_id: str,
        suggested_only: bool,
        max_rooms_per_space: Optional[int],
        exclude_rooms: List[str],
    ) -> JsonDict:
        """
        Args:
            destination: The remote server
            room_id: The room ID to ask about.
            suggested_only: if True, only suggested rooms will be returned
            max_rooms_per_space: an optional limit to the number of children to be
                returned per space
            exclude_rooms: a list of any rooms we can skip
        """
        path = _create_path(
            FEDERATION_UNSTABLE_PREFIX, "/org.matrix.msc2946/spaces/%s", room_id
        )

        params = {
            "suggested_only": suggested_only,
            "exclude_rooms": exclude_rooms,
        }
        if max_rooms_per_space is not None:
            params["max_rooms_per_space"] = max_rooms_per_space

        return await self.client.post_json(
            destination=destination, path=path, data=params
        )


def _create_path(federation_prefix, path, *args):
    """

View file
@@ -18,7 +18,7 @@
import functools
import logging
import re
-from typing import Optional, Tuple, Type
+from typing import Container, Mapping, Optional, Sequence, Tuple, Type

import synapse
from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH, MAX_GROUP_ROLEID_LENGTH
@@ -29,7 +29,7 @@ from synapse.api.urls import (
    FEDERATION_V1_PREFIX,
    FEDERATION_V2_PREFIX,
)
-from synapse.http.server import JsonResource
+from synapse.http.server import HttpServer, JsonResource
from synapse.http.servlet import (
    parse_boolean_from_args,
    parse_integer_from_args,
@@ -44,7 +44,8 @@ from synapse.logging.opentracing import (
    whitelisted_homeserver,
)
from synapse.server import HomeServer
-from synapse.types import ThirdPartyInstanceID, get_domain_from_id
+from synapse.types import JsonDict, ThirdPartyInstanceID, get_domain_from_id
+from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.stringutils import parse_and_validate_server_name
from synapse.util.versionstring import get_version_string
@@ -1376,6 +1377,40 @@ class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
        return 200, new_content


class FederationSpaceSummaryServlet(BaseFederationServlet):
    PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
    PATH = "/spaces/(?P<room_id>[^/]*)"

    async def on_POST(
        self,
        origin: str,
        content: JsonDict,
        query: Mapping[bytes, Sequence[bytes]],
        room_id: str,
    ) -> Tuple[int, JsonDict]:
        suggested_only = content.get("suggested_only", False)
        if not isinstance(suggested_only, bool):
            raise SynapseError(
                400, "'suggested_only' must be a boolean", Codes.BAD_JSON
            )

        exclude_rooms = content.get("exclude_rooms", [])
        if not isinstance(exclude_rooms, list) or any(
            not isinstance(x, str) for x in exclude_rooms
        ):
            raise SynapseError(400, "bad value for 'exclude_rooms'", Codes.BAD_JSON)

        max_rooms_per_space = content.get("max_rooms_per_space")
        if max_rooms_per_space is not None and not isinstance(max_rooms_per_space, int):
            raise SynapseError(
                400, "bad value for 'max_rooms_per_space'", Codes.BAD_JSON
            )

        return 200, await self.handler.federation_space_summary(
            room_id, suggested_only, max_rooms_per_space, exclude_rooms
        )


class RoomComplexityServlet(BaseFederationServlet):
    """
    Indicates to other servers how complex (and therefore likely
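For reference, an illustrative request body (room IDs invented) that passes the validation in the new servlet above:

# Illustrative POST body for the (unstable) MSC2946 spaces endpoint.
body = {
    "suggested_only": True,                  # must be a bool
    "exclude_rooms": ["!seen:example.org"],  # must be a list of strings
    "max_rooms_per_space": 10,               # optional; must be an int if present
}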
@@ -1474,18 +1509,24 @@ DEFAULT_SERVLET_GROUPS = (
)


-def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=None):
+def register_servlets(
+    hs: HomeServer,
+    resource: HttpServer,
+    authenticator: Authenticator,
+    ratelimiter: FederationRateLimiter,
+    servlet_groups: Optional[Container[str]] = None,
+):
    """Initialize and register servlet classes.

    Will by default register all servlets. For custom behaviour, pass in
    a list of servlet_groups to register.

    Args:
-        hs (synapse.server.HomeServer): homeserver
-        resource (JsonResource): resource class to register to
-        authenticator (Authenticator): authenticator to use
-        ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use
-        servlet_groups (list[str], optional): List of servlet groups to register.
+        hs: homeserver
+        resource: resource class to register to
+        authenticator: authenticator to use
+        ratelimiter: ratelimiter to use
+        servlet_groups: List of servlet groups to register.
            Defaults to ``DEFAULT_SERVLET_GROUPS``.
    """
    if not servlet_groups:
@@ -1500,6 +1541,14 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=None):
            server_name=hs.hostname,
        ).register(resource)

+    if hs.config.experimental.spaces_enabled:
+        FederationSpaceSummaryServlet(
+            handler=hs.get_space_summary_handler(),
+            authenticator=authenticator,
+            ratelimiter=ratelimiter,
+            server_name=hs.hostname,
+        ).register(resource)
+
    if "openid" in servlet_groups:
        for servletclass in OPENID_SERVLET_CLASSES:
            servletclass(

View file
@@ -46,7 +46,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, get_domain_from_id

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -25,7 +25,7 @@ from synapse.types import GroupID, JsonDict, RoomID, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -24,7 +24,7 @@ from synapse.api.ratelimiting import Ratelimiter
from synapse.types import UserID

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -25,7 +25,7 @@ from synapse.replication.http.account_data import (
from synapse.types import JsonDict, UserID

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer


class AccountDataHandler:

View file

@@ -27,7 +27,7 @@ from synapse.types import UserID
from synapse.util import stringutils

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -24,7 +24,7 @@ from twisted.web.resource import Resource
from synapse.app import check_bind_error

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -25,7 +25,7 @@ from synapse.visibility import filter_events_for_client
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -38,7 +38,7 @@ from synapse.types import Collection, JsonDict, RoomAlias, RoomStreamToken, User
from synapse.util.metrics import Measure

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file
@@ -70,7 +70,7 @@ from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.threepids import canonicalise_email

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)
@@ -886,6 +886,19 @@ class AuthHandler(BaseHandler):
            )
        return result

+    def can_change_password(self) -> bool:
+        """Get whether users on this server are allowed to change or set a password.
+
+        Both `config.password_enabled` and `config.password_localdb_enabled` must be true.
+
+        Note that any account (even an SSO account) is allowed to add a password if the
+        above is true.
+
+        Returns:
+            Whether users on this server are allowed to change or set a password
+        """
+        return self._password_enabled and self._password_localdb_enabled
+
    def get_supported_login_types(self) -> Iterable[str]:
        """Get the login types supported for the /login API

View file
@@ -27,7 +27,7 @@ from synapse.http.site import SynapseRequest
from synapse.types import UserID, map_username_to_mxid_localpart

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -23,7 +23,7 @@ from synapse.types import Requester, UserID, create_requester
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file
@@ -45,7 +45,7 @@ from synapse.util.retryutils import NotRetryingDestination
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)
@@ -166,7 +166,7 @@ class DeviceWorkerHandler(BaseHandler):

        # Fetch the current state at the time.
        try:
-            event_ids = await self.store.get_forward_extremeties_for_room(
+            event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
                room_id, stream_ordering=stream_ordering
            )
        except errors.StoreError:
@@ -907,6 +907,7 @@ class DeviceListUpdater:
        master_key = result.get("master_key")
        self_signing_key = result.get("self_signing_key")

+        ignore_devices = False
        # If the remote server has more than ~1000 devices for this user
        # we assume that something is going horribly wrong (e.g. a bot
        # that logs in and creates a new device every time it tries to
@@ -925,6 +926,12 @@ class DeviceListUpdater:
                len(devices),
            )
            devices = []
+            ignore_devices = True
+        else:
+            cached_devices = await self.store.get_cached_devices_for_user(user_id)
+            if cached_devices == {d["device_id"]: d for d in devices}:
+                devices = []
+                ignore_devices = True

        for device in devices:
            logger.debug(
@@ -934,7 +941,10 @@ class DeviceListUpdater:
                stream_id,
            )

-        await self.store.update_remote_device_list_cache(user_id, devices, stream_id)
+        if not ignore_devices:
+            await self.store.update_remote_device_list_cache(
+                user_id, devices, stream_id
+            )
        device_ids = [device["device_id"] for device in devices]

        # Handle cross-signing keys.
@@ -945,7 +955,8 @@ class DeviceListUpdater:
        )
        device_ids = device_ids + cross_signing_device_ids

-        await self.device_handler.notify_device_update(user_id, device_ids)
+        if device_ids:
+            await self.device_handler.notify_device_update(user_id, device_ids)

        # We clobber the seen updates since we've re-synced from a given
        # point.
@@ -973,14 +984,17 @@ class DeviceListUpdater:
        """
        device_ids = []

-        if master_key:
+        current_keys_map = await self.store.get_e2e_cross_signing_keys_bulk([user_id])
+        current_keys = current_keys_map.get(user_id) or {}
+
+        if master_key and master_key != current_keys.get("master"):
            await self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
            _, verify_key = get_verify_key_from_cross_signing_key(master_key)
            # verify_key is a VerifyKey from signedjson, which uses
            # .version to denote the portion of the key ID after the
            # algorithm and colon, which is the device ID
            device_ids.append(verify_key.version)
-        if self_signing_key:
+        if self_signing_key and self_signing_key != current_keys.get("self_signing"):
            await self.store.set_e2e_cross_signing_key(
                user_id, "self_signing", self_signing_key
            )
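A sketch of the short-circuit being added above: when the remote's device list already matches our cache, the resync becomes a no-op, avoiding a pointless cache rewrite and device-update notification (hypothetical data):

cached_devices = {"DEV1": {"device_id": "DEV1", "name": "phone"}}
remote_devices = [{"device_id": "DEV1", "name": "phone"}]

ignore_devices = cached_devices == {d["device_id"]: d for d in remote_devices}
assert ignore_devices  # identical -> skip cache write and notification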
View file
@@ -32,7 +32,7 @@ from synapse.util import json_encoder
from synapse.util.stringutils import random_string

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -42,7 +42,7 @@ from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -29,7 +29,7 @@ from synapse.types import JsonDict
from synapse.util.async_helpers import Linearizer

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -21,7 +21,7 @@ from synapse.api.errors import HttpResponseException, RequestSendFailed, Synapse
from synapse.types import GroupID, JsonDict, get_domain_from_id

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file
@@ -149,6 +149,9 @@ class OidcHandler:
        Args:
            request: the incoming request from the browser.
        """
+        # This will always be set by the time Twisted calls us.
+        assert request.args is not None
+
        # The provider might redirect with an error.
        # In that case, just display it as-is.
        if b"error" in request.args:
@@ -280,6 +283,7 @@ class OidcProvider:
        self._config = provider
        self._callback_url = hs.config.oidc_callback_url  # type: str

+        self._oidc_attribute_requirements = provider.attribute_requirements
        self._scopes = provider.scopes
        self._user_profile_method = provider.user_profile_method
@@ -859,6 +863,18 @@ class OidcProvider:
            )

        # otherwise, it's a login
+        logger.debug("Userinfo for OIDC login: %s", userinfo)
+
+        # Ensure that the attributes of the logged in user meet the required
+        # attributes by checking the userinfo against attribute_requirements.
+        # In order to deal with the fact that OIDC userinfo can contain many
+        # types of data, we wrap non-list values in lists.
+        if not self._sso_handler.check_required_attributes(
+            request,
+            {k: v if isinstance(v, list) else [v] for k, v in userinfo.items()},
+            self._oidc_attribute_requirements,
+        ):
+            return

        # Call the mapper to register/login the user
        try:
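The dict comprehension above normalises userinfo claims so that requirement checking can treat every value as a list: scalar claims get wrapped, while list-valued claims (such as group memberships) pass through unchanged. A sketch with made-up claims:

userinfo = {"sub": "alice", "groups": ["staff", "admins"], "email_verified": True}

normalised = {k: v if isinstance(v, list) else [v] for k, v in userinfo.items()}

assert normalised == {
    "sub": ["alice"],
    "groups": ["staff", "admins"],
    "email_verified": [True],
}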
View file
@@ -21,7 +21,7 @@ from typing import TYPE_CHECKING
from synapse.api.errors import Codes, PasswordRefusedError

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file
@@ -104,6 +104,8 @@ class BasePresenceHandler(abc.ABC):
        self.clock = hs.get_clock()
        self.store = hs.get_datastore()

+        self._busy_presence_enabled = hs.config.experimental.msc3026_enabled
+
        active_presence = self.store.take_presence_startup_info()
        self.user_to_current_state = {state.user_id: state for state in active_presence}
@@ -730,8 +732,12 @@ class PresenceHandler(BasePresenceHandler):
            PresenceState.ONLINE,
            PresenceState.UNAVAILABLE,
            PresenceState.OFFLINE,
+            PresenceState.BUSY,
        )
-        if presence not in valid_presence:
+
+        if presence not in valid_presence or (
+            presence == PresenceState.BUSY and not self._busy_presence_enabled
+        ):
            raise SynapseError(400, "Invalid presence state")

        user_id = target_user.to_string()
@@ -744,7 +750,9 @@ class PresenceHandler(BasePresenceHandler):
            msg = status_msg if presence != PresenceState.OFFLINE else None
            new_fields["status_msg"] = msg

-        if presence == PresenceState.ONLINE:
+        if presence == PresenceState.ONLINE or (
+            presence == PresenceState.BUSY and self._busy_presence_enabled
+        ):
            new_fields["last_active_ts"] = self.clock.time_msec()

        await self._update_states([prev_state.copy_and_replace(**new_fields)])
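A sketch of the new gating: `busy` is accepted only when the experimental MSC3026 flag is enabled and, like `online`, it refreshes the last-active timestamp (simplified, with plain strings standing in for the `PresenceState` constants):

def validate_presence(presence: str, busy_enabled: bool) -> None:
    valid = ("online", "unavailable", "offline", "busy")
    if presence not in valid or (presence == "busy" and not busy_enabled):
        raise ValueError("Invalid presence state")

validate_presence("busy", busy_enabled=True)       # accepted
try:
    validate_presence("busy", busy_enabled=False)  # rejected without the flag
except ValueError:
    pass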
View file
@@ -36,7 +36,7 @@ from synapse.types import (
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -21,7 +21,7 @@ from synapse.util.async_helpers import Linearizer
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file

@@ -20,7 +20,7 @@ from synapse.handlers._base import BaseHandler
from synapse.types import JsonDict, ReadReceipt, get_domain_from_id

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file
@@ -38,7 +38,7 @@ from synapse.types import RoomAlias, UserID, create_requester
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)
@@ -444,10 +444,10 @@ class RegistrationHandler(BaseHandler):
                if RoomAlias.is_valid(r):
                    (
-                        room_id,
+                        room,
                        remote_room_hosts,
                    ) = await room_member_handler.lookup_room_alias(room_alias)
-                    room_id = room_id.to_string()
+                    room_id = room.to_string()
                else:
                    raise SynapseError(
                        400, "%s was not legal room ID or room alias" % (r,)

View file

@@ -29,7 +29,7 @@ from synapse.util.caches.response_cache import ResponseCache
from ._base import BaseHandler

if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

logger = logging.getLogger(__name__)

View file
@ -155,6 +155,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
""" """
raise NotImplementedError() raise NotImplementedError()
@abc.abstractmethod
async def forget(self, user: UserID, room_id: str) -> None:
raise NotImplementedError()
def ratelimit_invite(self, room_id: Optional[str], invitee_user_id: str): def ratelimit_invite(self, room_id: Optional[str], invitee_user_id: str):
"""Ratelimit invites by room and by target user. """Ratelimit invites by room and by target user.

View file

@@ -14,7 +14,7 @@
 # limitations under the License.
 import logging
-from typing import List, Optional, Tuple
+from typing import TYPE_CHECKING, List, Optional, Tuple

 from synapse.api.errors import SynapseError
 from synapse.handlers.room_member import RoomMemberHandler
@@ -25,11 +25,14 @@ from synapse.replication.http.membership import (
 )
 from synapse.types import Requester, UserID

+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
 logger = logging.getLogger(__name__)


 class RoomMemberWorkerHandler(RoomMemberHandler):
-    def __init__(self, hs):
+    def __init__(self, hs: "HomeServer"):
         super().__init__(hs)

         self._remote_join_client = ReplRemoteJoin.make_client(hs)
@@ -83,3 +86,6 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
         await self._notify_change_client(
             user_id=target.to_string(), room_id=room_id, change="left"
         )
+
+    async def forget(self, target: UserID, room_id: str) -> None:
+        raise RuntimeError("Cannot forget rooms on workers.")

View file

@@ -30,7 +30,7 @@ from synapse.visibility import filter_events_for_client
 from ._base import BaseHandler

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -21,7 +21,7 @@ from synapse.types import Requester
 from ._base import BaseHandler

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)
@@ -41,7 +41,7 @@ class SetPasswordHandler(BaseHandler):
         logout_devices: bool,
         requester: Optional[Requester] = None,
     ) -> None:
-        if not self.hs.config.password_localdb_enabled:
+        if not self._auth_handler.can_change_password():
             raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)

         try:
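For intuition, a tiny self-contained sketch of the rule the new `can_change_password()` check is expected to encode. The helper below is hypothetical; the real method lives on Synapse's auth handler and its body is not shown in this diff:

def can_change_password(password_enabled: bool, password_localdb_enabled: bool) -> bool:
    # Assumed semantics: password changes only make sense when password auth is
    # enabled AND the local password database is the authentication backend.
    return password_enabled and password_localdb_enabled


assert can_change_password(True, True)
assert not can_change_password(True, False)  # e.g. auth is delegated to SSO/OIDC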

View file

@@ -0,0 +1,395 @@
+# -*- coding: utf-8 -*-
+# Copyright 2021 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import itertools
+import logging
+from collections import deque
+from typing import TYPE_CHECKING, Iterable, List, Optional, Sequence, Set, Tuple, cast
+
+import attr
+
+from synapse.api.constants import EventContentFields, EventTypes, HistoryVisibility
+from synapse.api.errors import AuthError
+from synapse.events import EventBase
+from synapse.events.utils import format_event_for_client_v2
+from synapse.types import JsonDict
+
+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
+logger = logging.getLogger(__name__)
+
+# number of rooms to return. We'll stop once we hit this limit.
+# TODO: allow clients to reduce this with a request param.
+MAX_ROOMS = 50
+
+# max number of events to return per room.
+MAX_ROOMS_PER_SPACE = 50
+
+# max number of federation servers to hit per room
+MAX_SERVERS_PER_SPACE = 3
+
+
+class SpaceSummaryHandler:
+    def __init__(self, hs: "HomeServer"):
+        self._clock = hs.get_clock()
+        self._auth = hs.get_auth()
+        self._room_list_handler = hs.get_room_list_handler()
+        self._state_handler = hs.get_state_handler()
+        self._store = hs.get_datastore()
+        self._event_serializer = hs.get_event_client_serializer()
+        self._server_name = hs.hostname
+        self._federation_client = hs.get_federation_client()
+
+    async def get_space_summary(
+        self,
+        requester: str,
+        room_id: str,
+        suggested_only: bool = False,
+        max_rooms_per_space: Optional[int] = None,
+    ) -> JsonDict:
+        """
+        Implementation of the space summary C-S API
+
+        Args:
+            requester: user id of the user making this request
+
+            room_id: room id to start the summary at
+
+            suggested_only: whether we should only return children with the "suggested"
+                flag set.
+
+            max_rooms_per_space: an optional limit on the number of child rooms we will
+                return. This does not apply to the root room (ie, room_id), and
+                is overridden by MAX_ROOMS_PER_SPACE.
+
+        Returns:
+            summary dict to return
+        """
+        # first of all, check that the user is in the room in question (or it's
+        # world-readable)
+        await self._auth.check_user_in_room_or_world_readable(room_id, requester)
+
+        # the queue of rooms to process
+        room_queue = deque((_RoomQueueEntry(room_id, ()),))
+
+        # rooms we have already processed
+        processed_rooms = set()  # type: Set[str]
+
+        # events we have already processed. We don't necessarily have their event ids,
+        # so instead we key on (room id, state key)
+        processed_events = set()  # type: Set[Tuple[str, str]]
+
+        rooms_result = []  # type: List[JsonDict]
+        events_result = []  # type: List[JsonDict]
+
+        while room_queue and len(rooms_result) < MAX_ROOMS:
+            queue_entry = room_queue.popleft()
+            room_id = queue_entry.room_id
+            if room_id in processed_rooms:
+                # already done this room
+                continue
+
+            logger.debug("Processing room %s", room_id)
+
+            is_in_room = await self._store.is_host_joined(room_id, self._server_name)
+
+            # The client-specified max_rooms_per_space limit doesn't apply to the
+            # room_id specified in the request, so we ignore it if this is the
+            # first room we are processing.
+            max_children = max_rooms_per_space if processed_rooms else None
+
+            if is_in_room:
+                rooms, events = await self._summarize_local_room(
+                    requester, room_id, suggested_only, max_children
+                )
+            else:
+                rooms, events = await self._summarize_remote_room(
+                    queue_entry,
+                    suggested_only,
+                    max_children,
+                    exclude_rooms=processed_rooms,
+                )
+
+            logger.debug(
+                "Query of %s returned rooms %s, events %s",
+                queue_entry.room_id,
+                [room.get("room_id") for room in rooms],
+                ["%s->%s" % (ev["room_id"], ev["state_key"]) for ev in events],
+            )
+
+            rooms_result.extend(rooms)
+
+            # any rooms returned don't need visiting again
+            processed_rooms.update(cast(str, room.get("room_id")) for room in rooms)
+
+            # the room we queried may or may not have been returned, but don't process
+            # it again, anyway.
+            processed_rooms.add(room_id)
+
+            # XXX: is it ok that we blindly iterate through any events returned by
+            #   a remote server, whether or not they actually link to any rooms in our
+            #   tree?
+            for ev in events:
+                # remote servers might return events we have already processed
+                # (eg, Dendrite returns inward pointers as well as outward ones), so
+                # we need to filter them out, to avoid returning duplicate links to the
+                # client.
+                ev_key = (ev["room_id"], ev["state_key"])
+                if ev_key in processed_events:
+                    continue
+                events_result.append(ev)
+
+                # add the child to the queue. we have already validated
+                # that the vias are a list of server names.
+                room_queue.append(
+                    _RoomQueueEntry(ev["state_key"], ev["content"]["via"])
+                )
+                processed_events.add(ev_key)
+
+        return {"rooms": rooms_result, "events": events_result}
+
+    async def federation_space_summary(
+        self,
+        room_id: str,
+        suggested_only: bool,
+        max_rooms_per_space: Optional[int],
+        exclude_rooms: Iterable[str],
+    ) -> JsonDict:
+        """
+        Implementation of the space summary Federation API
+
+        Args:
+            room_id: room id to start the summary at
+
+            suggested_only: whether we should only return children with the "suggested"
+                flag set.
+
+            max_rooms_per_space: an optional limit on the number of child rooms we will
+                return. Unlike the C-S API, this applies to the root room (room_id).
+                It is clipped to MAX_ROOMS_PER_SPACE.
+
+            exclude_rooms: a list of rooms to skip over (presumably because the
+                calling server has already seen them).
+
+        Returns:
+            summary dict to return
+        """
+        # the queue of rooms to process
+        room_queue = deque((room_id,))
+
+        # the set of rooms that we should not walk further. Initialise it with the
+        # excluded-rooms list; we will add other rooms as we process them so that
+        # we do not loop.
+        processed_rooms = set(exclude_rooms)  # type: Set[str]
+
+        rooms_result = []  # type: List[JsonDict]
+        events_result = []  # type: List[JsonDict]
+
+        while room_queue and len(rooms_result) < MAX_ROOMS:
+            room_id = room_queue.popleft()
+            if room_id in processed_rooms:
+                # already done this room
+                continue
+
+            logger.debug("Processing room %s", room_id)
+
+            rooms, events = await self._summarize_local_room(
+                None, room_id, suggested_only, max_rooms_per_space
+            )
+
+            processed_rooms.add(room_id)
+
+            rooms_result.extend(rooms)
+            events_result.extend(events)
+
+            # add any children to the queue
+            room_queue.extend(edge_event["state_key"] for edge_event in events)
+
+        return {"rooms": rooms_result, "events": events_result}
+
+    async def _summarize_local_room(
+        self,
+        requester: Optional[str],
+        room_id: str,
+        suggested_only: bool,
+        max_children: Optional[int],
+    ) -> Tuple[Sequence[JsonDict], Sequence[JsonDict]]:
+        if not await self._is_room_accessible(room_id, requester):
+            return (), ()
+
+        room_entry = await self._build_room_entry(room_id)
+
+        # look for child rooms/spaces.
+        child_events = await self._get_child_events(room_id)
+
+        if suggested_only:
+            # we only care about suggested children
+            child_events = filter(_is_suggested_child_event, child_events)
+
+        if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
+            max_children = MAX_ROOMS_PER_SPACE
+
+        now = self._clock.time_msec()
+        events_result = []  # type: List[JsonDict]
+        for edge_event in itertools.islice(child_events, max_children):
+            events_result.append(
+                await self._event_serializer.serialize_event(
+                    edge_event,
+                    time_now=now,
+                    event_format=format_event_for_client_v2,
+                )
+            )
+        return (room_entry,), events_result
+
+    async def _summarize_remote_room(
+        self,
+        room: "_RoomQueueEntry",
+        suggested_only: bool,
+        max_children: Optional[int],
+        exclude_rooms: Iterable[str],
+    ) -> Tuple[Sequence[JsonDict], Sequence[JsonDict]]:
+        room_id = room.room_id
+        logger.info("Requesting summary for %s via %s", room_id, room.via)
+
+        # we need to make the exclusion list json-serialisable
+        exclude_rooms = list(exclude_rooms)
+
+        via = itertools.islice(room.via, MAX_SERVERS_PER_SPACE)
+        try:
+            res = await self._federation_client.get_space_summary(
+                via,
+                room_id,
+                suggested_only=suggested_only,
+                max_rooms_per_space=max_children,
+                exclude_rooms=exclude_rooms,
+            )
+        except Exception as e:
+            logger.warning(
+                "Unable to get summary of %s via federation: %s",
+                room_id,
+                e,
+                exc_info=logger.isEnabledFor(logging.DEBUG),
+            )
+            return (), ()
+
+        return res.rooms, tuple(
+            ev.data
+            for ev in res.events
+            if ev.event_type == EventTypes.MSC1772_SPACE_CHILD
+        )
+
+    async def _is_room_accessible(self, room_id: str, requester: Optional[str]) -> bool:
+        # if we have an authenticated requesting user, first check if they are in the
+        # room
+        if requester:
+            try:
+                await self._auth.check_user_in_room(room_id, requester)
+                return True
+            except AuthError:
+                pass
+
+        # otherwise, check if the room is peekable
+        hist_vis_ev = await self._state_handler.get_current_state(
+            room_id, EventTypes.RoomHistoryVisibility, ""
+        )
+        if hist_vis_ev:
+            hist_vis = hist_vis_ev.content.get("history_visibility")
+            if hist_vis == HistoryVisibility.WORLD_READABLE:
+                return True
+
+        logger.info(
+            "room %s is unpeekable and user %s is not a member, omitting from summary",
+            room_id,
+            requester,
+        )
+        return False
+
+    async def _build_room_entry(self, room_id: str) -> JsonDict:
+        """Generate an entry suitable for the 'rooms' list in the summary response"""
+        stats = await self._store.get_room_with_stats(room_id)
+
+        # currently this should be impossible because we call
+        # check_user_in_room_or_world_readable on the room before we get here, so
+        # there should always be an entry
+        assert stats is not None, "unable to retrieve stats for %s" % (room_id,)
+
+        current_state_ids = await self._store.get_current_state_ids(room_id)
+        create_event = await self._store.get_event(
+            current_state_ids[(EventTypes.Create, "")]
+        )
+
+        # TODO: update once MSC1772 lands
+        room_type = create_event.content.get(EventContentFields.MSC1772_ROOM_TYPE)
+
+        entry = {
+            "room_id": stats["room_id"],
+            "name": stats["name"],
+            "topic": stats["topic"],
+            "canonical_alias": stats["canonical_alias"],
+            "num_joined_members": stats["joined_members"],
+            "avatar_url": stats["avatar"],
+            "world_readable": (
+                stats["history_visibility"] == HistoryVisibility.WORLD_READABLE
+            ),
+            "guest_can_join": stats["guest_access"] == "can_join",
+            "room_type": room_type,
+        }
+
+        # Filter out Nones; rather omit the field altogether
+        room_entry = {k: v for k, v in entry.items() if v is not None}
+
+        return room_entry
+
+    async def _get_child_events(self, room_id: str) -> Iterable[EventBase]:
+        # look for child rooms/spaces.
+        current_state_ids = await self._store.get_current_state_ids(room_id)
+
+        events = await self._store.get_events_as_list(
+            [
+                event_id
+                for key, event_id in current_state_ids.items()
+                # TODO: update once MSC1772 lands
+                if key[0] == EventTypes.MSC1772_SPACE_CHILD
+            ]
+        )
+
+        # filter out any events without a "via" (which implies it has been redacted)
+        return (e for e in events if _has_valid_via(e))
+
+
+@attr.s(frozen=True, slots=True)
+class _RoomQueueEntry:
+    room_id = attr.ib(type=str)
+    via = attr.ib(type=Sequence[str])
+
+
+def _has_valid_via(e: EventBase) -> bool:
+    via = e.content.get("via")
+    if not via or not isinstance(via, Sequence):
+        return False
+    for v in via:
+        if not isinstance(v, str):
+            logger.debug("Ignoring edge event %s with invalid via entry", e.event_id)
+            return False
+    return True
+
+
+def _is_suggested_child_event(edge_event: EventBase) -> bool:
+    suggested = edge_event.content.get("suggested")
+    if isinstance(suggested, bool) and suggested:
+        return True
+    logger.debug("Ignoring not-suggested child %s", edge_event.state_key)
+    return False
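The summary above is a plain breadth-first walk over the space hierarchy with a visited set and a global room cap. A self-contained sketch of the same traversal pattern (function and variable names here are illustrative, not part of Synapse):

from collections import deque
from typing import Callable, Iterable, List, Set

def walk_space(
    root: str, children_of: Callable[[str], Iterable[str]], max_rooms: int = 50
) -> List[str]:
    """Breadth-first walk over a space tree, skipping rooms already seen."""
    queue = deque([root])
    processed = set()  # type: Set[str]
    result = []  # type: List[str]
    while queue and len(result) < max_rooms:
        room_id = queue.popleft()
        if room_id in processed:
            continue  # a room can be a child of several spaces
        processed.add(room_id)
        result.append(room_id)
        queue.extend(children_of(room_id))
    return result

# Example: a small tree with a shared child is visited exactly once per room.
tree = {"!space:a": ["!room1:a", "!room2:a"], "!room1:a": ["!room2:a"], "!room2:a": []}
assert walk_space("!space:a", lambda r: tree.get(r, [])) == ["!space:a", "!room1:a", "!room2:a"]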

View file

@@ -17,7 +17,7 @@ import logging
 from typing import TYPE_CHECKING, Optional

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -24,7 +24,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import JsonDict

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -80,7 +80,7 @@ class SyncConfig:
     filter_collection = attr.ib(type=FilterCollection)
     is_guest = attr.ib(type=bool)
     request_key = attr.ib(type=Tuple[Any, ...])
-    device_id = attr.ib(type=str)
+    device_id = attr.ib(type=Optional[str])


 @attr.s(slots=True, frozen=True)
@@ -723,7 +723,9 @@ class SyncHandler:
         return summary

-    def get_lazy_loaded_members_cache(self, cache_key: Tuple[str, str]) -> LruCache:
+    def get_lazy_loaded_members_cache(
+        self, cache_key: Tuple[str, Optional[str]]
+    ) -> LruCache:
         cache = self.lazy_loaded_members_cache.get(cache_key)
         if cache is None:
             logger.debug("creating LruCache for %r", cache_key)
@@ -1978,8 +1980,10 @@ class SyncHandler:
             logger.info("User joined room after current token: %s", room_id)

-            extrems = await self.store.get_forward_extremeties_for_room(
-                room_id, event_pos.stream
+            extrems = (
+                await self.store.get_forward_extremities_for_room_at_stream_ordering(
+                    room_id, event_pos.stream
+                )
             )
             users_in_room = await self.state.get_current_users_in_room(room_id, extrems)
             if user_id in users_in_room:

View file

@@ -25,7 +25,7 @@ from synapse.types import JsonDict
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -77,7 +77,7 @@ from synapse.util import json_decoder
 from synapse.util.async_helpers import timeout_deferred

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -19,9 +19,10 @@ from zope.interface import implementer

 from twisted.internet import defer, protocol
 from twisted.internet.error import ConnectError
-from twisted.internet.interfaces import IStreamClientEndpoint
-from twisted.internet.protocol import connectionDone
+from twisted.internet.interfaces import IReactorCore, IStreamClientEndpoint
+from twisted.internet.protocol import ClientFactory, Protocol, connectionDone
 from twisted.web import http
+from twisted.web.http_headers import Headers

 logger = logging.getLogger(__name__)
@@ -43,23 +44,33 @@ class HTTPConnectProxyEndpoint:

     Args:
         reactor: the Twisted reactor to use for the connection
-        proxy_endpoint (IStreamClientEndpoint): the endpoint to use to connect to the
-            proxy
-        host (bytes): hostname that we want to CONNECT to
-        port (int): port that we want to connect to
+        proxy_endpoint: the endpoint to use to connect to the proxy
+        host: hostname that we want to CONNECT to
+        port: port that we want to connect to
+        headers: Extra HTTP headers to include in the CONNECT request
     """

-    def __init__(self, reactor, proxy_endpoint, host, port):
+    def __init__(
+        self,
+        reactor: IReactorCore,
+        proxy_endpoint: IStreamClientEndpoint,
+        host: bytes,
+        port: int,
+        headers: Headers,
+    ):
         self._reactor = reactor
         self._proxy_endpoint = proxy_endpoint
         self._host = host
         self._port = port
+        self._headers = headers

     def __repr__(self):
         return "<HTTPConnectProxyEndpoint %s>" % (self._proxy_endpoint,)

-    def connect(self, protocolFactory):
-        f = HTTPProxiedClientFactory(self._host, self._port, protocolFactory)
+    def connect(self, protocolFactory: ClientFactory):
+        f = HTTPProxiedClientFactory(
+            self._host, self._port, protocolFactory, self._headers
+        )
         d = self._proxy_endpoint.connect(f)
         # once the tcp socket connects successfully, we need to wait for the
         # CONNECT to complete.
@@ -74,15 +85,23 @@ class HTTPProxiedClientFactory(protocol.ClientFactory):
     HTTP Protocol object and run the rest of the connection.

     Args:
-        dst_host (bytes): hostname that we want to CONNECT to
-        dst_port (int): port that we want to connect to
-        wrapped_factory (protocol.ClientFactory): The original Factory
+        dst_host: hostname that we want to CONNECT to
+        dst_port: port that we want to connect to
+        wrapped_factory: The original Factory
+        headers: Extra HTTP headers to include in the CONNECT request
     """

-    def __init__(self, dst_host, dst_port, wrapped_factory):
+    def __init__(
+        self,
+        dst_host: bytes,
+        dst_port: int,
+        wrapped_factory: ClientFactory,
+        headers: Headers,
+    ):
         self.dst_host = dst_host
         self.dst_port = dst_port
         self.wrapped_factory = wrapped_factory
+        self.headers = headers
         self.on_connection = defer.Deferred()

     def startedConnecting(self, connector):
@@ -92,7 +111,11 @@ class HTTPProxiedClientFactory(protocol.ClientFactory):
         wrapped_protocol = self.wrapped_factory.buildProtocol(addr)

         return HTTPConnectProtocol(
-            self.dst_host, self.dst_port, wrapped_protocol, self.on_connection
+            self.dst_host,
+            self.dst_port,
+            wrapped_protocol,
+            self.on_connection,
+            self.headers,
         )

     def clientConnectionFailed(self, connector, reason):
@@ -112,24 +135,37 @@ class HTTPConnectProtocol(protocol.Protocol):
     """Protocol that wraps an existing Protocol to do a CONNECT handshake at connect

     Args:
-        host (bytes): The original HTTP(s) hostname or IPv4 or IPv6 address literal
+        host: The original HTTP(s) hostname or IPv4 or IPv6 address literal
             to put in the CONNECT request

-        port (int): The original HTTP(s) port to put in the CONNECT request
+        port: The original HTTP(s) port to put in the CONNECT request

-        wrapped_protocol (interfaces.IProtocol): the original protocol (probably
-            HTTPChannel or TLSMemoryBIOProtocol, but could be anything really)
+        wrapped_protocol: the original protocol (probably HTTPChannel or
+            TLSMemoryBIOProtocol, but could be anything really)

-        connected_deferred (Deferred): a Deferred which will be callbacked with
+        connected_deferred: a Deferred which will be callbacked with
             wrapped_protocol when the CONNECT completes
+
+        headers: Extra HTTP headers to include in the CONNECT request
     """

-    def __init__(self, host, port, wrapped_protocol, connected_deferred):
+    def __init__(
+        self,
+        host: bytes,
+        port: int,
+        wrapped_protocol: Protocol,
+        connected_deferred: defer.Deferred,
+        headers: Headers,
+    ):
         self.host = host
         self.port = port
         self.wrapped_protocol = wrapped_protocol
         self.connected_deferred = connected_deferred
-        self.http_setup_client = HTTPConnectSetupClient(self.host, self.port)
+        self.headers = headers
+        self.http_setup_client = HTTPConnectSetupClient(
+            self.host, self.port, self.headers
+        )
         self.http_setup_client.on_connected.addCallback(self.proxyConnected)

     def connectionMade(self):
@@ -154,7 +190,7 @@ class HTTPConnectProtocol(protocol.Protocol):
         if buf:
             self.wrapped_protocol.dataReceived(buf)

-    def dataReceived(self, data):
+    def dataReceived(self, data: bytes):
         # if we've set up the HTTP protocol, we can send the data there
         if self.wrapped_protocol.connected:
             return self.wrapped_protocol.dataReceived(data)
@@ -168,21 +204,29 @@ class HTTPConnectSetupClient(http.HTTPClient):
     """HTTPClient protocol to send a CONNECT message for proxies and read the response.

     Args:
-        host (bytes): The hostname to send in the CONNECT message
-        port (int): The port to send in the CONNECT message
+        host: The hostname to send in the CONNECT message
+        port: The port to send in the CONNECT message
+        headers: Extra headers to send with the CONNECT message
     """

-    def __init__(self, host, port):
+    def __init__(self, host: bytes, port: int, headers: Headers):
         self.host = host
         self.port = port
+        self.headers = headers
         self.on_connected = defer.Deferred()

     def connectionMade(self):
         logger.debug("Connected to proxy, sending CONNECT")
         self.sendCommand(b"CONNECT", b"%s:%d" % (self.host, self.port))
+
+        # Send any additional specified headers
+        for name, values in self.headers.getAllRawHeaders():
+            for value in values:
+                self.sendHeader(name, value)
+
         self.endHeaders()

-    def handleStatus(self, version, status, message):
+    def handleStatus(self, version: bytes, status: bytes, message: bytes):
         logger.debug("Got Status: %s %s %s", status, message, version)
         if status != b"200":
             raise ProxyConnectError("Unexpected status on CONNECT: %s" % status)
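For reference, a small sketch of how a caller can build the extra CONNECT headers consumed above. This mirrors what ProxyAgent does when the proxy string carries credentials; the "user:pass" credential is a made-up example, and only standard library and Twisted APIs are used:

import base64

from twisted.web.http_headers import Headers

# Base64-encode "user:pass" and attach it as a Proxy-Authorization header.
credentials = base64.b64encode(b"user:pass")
connect_headers = Headers()
connect_headers.addRawHeader(b"Proxy-Authorization", b"Basic " + credentials)

# HTTPConnectSetupClient then replays these on the CONNECT request:
for name, values in connect_headers.getAllRawHeaders():
    for value in values:
        print(name, value)  # b'Proxy-Authorization' b'Basic dXNlcjpwYXNz'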

View file

@@ -71,8 +71,10 @@ WELL_KNOWN_RETRY_ATTEMPTS = 3

 logger = logging.getLogger(__name__)

-_well_known_cache = TTLCache("well-known")
-_had_valid_well_known_cache = TTLCache("had-valid-well-known")
+_well_known_cache = TTLCache("well-known")  # type: TTLCache[bytes, Optional[bytes]]
+_had_valid_well_known_cache = TTLCache(
+    "had-valid-well-known"
+)  # type: TTLCache[bytes, bool]


 @attr.s(slots=True, frozen=True)
@@ -88,8 +90,8 @@ class WellKnownResolver:
         reactor: IReactorTime,
         agent: IAgent,
         user_agent: bytes,
-        well_known_cache: Optional[TTLCache] = None,
-        had_well_known_cache: Optional[TTLCache] = None,
+        well_known_cache: Optional[TTLCache[bytes, Optional[bytes]]] = None,
+        had_well_known_cache: Optional[TTLCache[bytes, bool]] = None,
     ):
         self._reactor = reactor
         self._clock = Clock(reactor)

View file

@@ -12,10 +12,13 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import base64
 import logging
 import re
+from typing import Optional, Tuple
 from urllib.request import getproxies_environment, proxy_bypass_environment

+import attr
 from zope.interface import implementer

 from twisted.internet import defer
@@ -23,6 +26,7 @@ from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
 from twisted.python.failure import Failure
 from twisted.web.client import URI, BrowserLikePolicyForHTTPS, _AgentBase
 from twisted.web.error import SchemeNotSupported
+from twisted.web.http_headers import Headers
 from twisted.web.iweb import IAgent

 from synapse.http.connectproxyclient import HTTPConnectProxyEndpoint
@@ -32,6 +36,22 @@ logger = logging.getLogger(__name__)

 _VALID_URI = re.compile(br"\A[\x21-\x7e]+\Z")


+@attr.s
+class ProxyCredentials:
+    username_password = attr.ib(type=bytes)
+
+    def as_proxy_authorization_value(self) -> bytes:
+        """
+        Return the value for a Proxy-Authorization header (i.e. 'Basic abdef==').
+
+        Returns:
+            The authentication string, base64-encoded as the value for
+            a Proxy-Authorization header.
+        """
+        # Encode as base64 and prepend the authorization type
+        return b"Basic " + base64.encodebytes(self.username_password)
+
+
 @implementer(IAgent)
 class ProxyAgent(_AgentBase):
     """An Agent implementation which will use an HTTP proxy if one was requested
@@ -96,6 +116,9 @@ class ProxyAgent(_AgentBase):
         https_proxy = proxies["https"].encode() if "https" in proxies else None
         no_proxy = proxies["no"] if "no" in proxies else None

+        # Parse credentials from https proxy connection string if present
+        self.https_proxy_creds, https_proxy = parse_username_password(https_proxy)
+
         self.http_proxy_endpoint = _http_proxy_endpoint(
             http_proxy, self.proxy_reactor, **self._endpoint_kwargs
         )
@@ -175,11 +198,22 @@ class ProxyAgent(_AgentBase):
             and self.https_proxy_endpoint
             and not should_skip_proxy
         ):
+            connect_headers = Headers()
+
+            # Determine whether we need to set Proxy-Authorization headers
+            if self.https_proxy_creds:
+                # Set a Proxy-Authorization header
+                connect_headers.addRawHeader(
+                    b"Proxy-Authorization",
+                    self.https_proxy_creds.as_proxy_authorization_value(),
+                )
+
             endpoint = HTTPConnectProxyEndpoint(
                 self.proxy_reactor,
                 self.https_proxy_endpoint,
                 parsed_uri.host,
                 parsed_uri.port,
+                headers=connect_headers,
             )
         else:
             # not using a proxy
@@ -208,12 +242,16 @@ class ProxyAgent(_AgentBase):
     )


-def _http_proxy_endpoint(proxy, reactor, **kwargs):
+def _http_proxy_endpoint(proxy: Optional[bytes], reactor, **kwargs):
     """Parses an http proxy setting and returns an endpoint for the proxy

     Args:
-        proxy (bytes|None): the proxy setting
+        proxy: the proxy setting in the form: [<username>:<password>@]<host>[:<port>]
+            Note that compared to other apps, this function currently lacks support
+            for specifying a protocol schema (i.e. protocol://...).

         reactor: reactor to be used to connect to the proxy

         kwargs: other args to be passed to HostnameEndpoint

     Returns:
@@ -223,16 +261,43 @@ def _http_proxy_endpoint(proxy, reactor, **kwargs):
     if proxy is None:
         return None

-    # currently we only support hostname:port. Some apps also support
-    # protocol://<host>[:port], which allows a way of requiring a TLS connection to the
-    # proxy.
+    # Parse the connection string
     host, port = parse_host_port(proxy, default_port=1080)
     return HostnameEndpoint(reactor, host, port, **kwargs)


-def parse_host_port(hostport, default_port=None):
-    # could have sworn we had one of these somewhere else...
+def parse_username_password(proxy: bytes) -> Tuple[Optional[ProxyCredentials], bytes]:
+    """
+    Parses the username and password from a proxy declaration, e.g.
+    username:password@hostname:port.
+
+    Args:
+        proxy: The proxy connection string.
+
+    Returns:
+        An instance of ProxyCredentials and the proxy connection string with any
+        credentials stripped, i.e. u:p@host:port -> host:port. If no credentials
+        were found, the ProxyCredentials instance is replaced with None.
+    """
+    if proxy and b"@" in proxy:
+        # We use rsplit here as the password could contain an @ character
+        credentials, proxy_without_credentials = proxy.rsplit(b"@", 1)
+        return ProxyCredentials(credentials), proxy_without_credentials
+
+    return None, proxy
+
+
+def parse_host_port(hostport: bytes, default_port: int = None) -> Tuple[bytes, int]:
+    """
+    Parse the hostname and port from a proxy connection byte string.
+
+    Args:
+        hostport: The proxy connection string. Must be in the form 'host[:port]'.
+        default_port: The default port to return if one is not found in `hostport`.
+
+    Returns:
+        A tuple containing the hostname and port. Uses `default_port` if one was not found.
+    """
     if b":" in hostport:
         host, port = hostport.rsplit(b":", 1)
         try:
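Taken together, the helpers above split the credentials off a proxy value of the form [<username>:<password>@]<host>[:<port>], which is how the new HTTPS_PROXY support is wired up. A usage sketch (this assumes the rest of `parse_host_port`, truncated in the hunk above, converts the port to an int):

from synapse.http.proxyagent import parse_host_port, parse_username_password

creds, hostport = parse_username_password(b"user:secret@proxy.example.com:3128")
host, port = parse_host_port(hostport, default_port=1080)

assert (host, port) == (b"proxy.example.com", 3128)
assert creds is not None
assert creds.as_proxy_authorization_value().startswith(b"Basic ")

# Without credentials, the first element is simply None.
assert parse_username_password(b"proxy.example.com") == (None, b"proxy.example.com")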

View file

@@ -689,7 +689,7 @@ def run_in_background(f, *args, **kwargs) -> defer.Deferred:
     current = current_context()
     try:
         res = f(*args, **kwargs)
-    except:  # noqa: E722
+    except Exception:
         # the assumption here is that the caller doesn't want to be disturbed
         # by synchronous exceptions, so let's turn them into Failures.
         return defer.fail()
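The `except Exception` change is subtle but deliberate: a bare `except:` also traps `BaseException` subclasses such as `KeyboardInterrupt` and `SystemExit`, which should normally propagate. A quick self-contained illustration:

def caught_by(handler_type) -> bool:
    """Return True if raising KeyboardInterrupt is caught by the given handler."""
    try:
        raise KeyboardInterrupt()
    except handler_type:
        return True
    except BaseException:
        return False


assert caught_by(BaseException)   # what a bare "except:" effectively does
assert not caught_by(Exception)   # KeyboardInterrupt is not an Exception subclass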

View file

@@ -169,7 +169,7 @@ import inspect
 import logging
 import re
 from functools import wraps
-from typing import TYPE_CHECKING, Dict, Optional, Type
+from typing import TYPE_CHECKING, Dict, Optional, Pattern, Type

 import attr
@@ -262,7 +262,7 @@ logger = logging.getLogger(__name__)

 # Block everything by default
 # A regex which matches the server_names to expose traces for.
 # None means 'block everything'.
-_homeserver_whitelist = None
+_homeserver_whitelist = None  # type: Optional[Pattern[str]]

 # Util methods
# Util methods # Util methods

View file

@@ -21,7 +21,7 @@ import attr
 from synapse.types import JsonDict, RoomStreamToken

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer


 @attr.s(slots=True)

View file

@@ -22,7 +22,7 @@ from synapse.push.bulk_push_rule_evaluator import BulkPushRuleEvaluator
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -33,7 +33,7 @@ from synapse.util.caches.lrucache import LruCache
 from .push_rule_evaluator import PushRuleEvaluatorForEvent

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -24,7 +24,7 @@ from synapse.push import Pusher, PusherConfig, ThrottleParams
 from synapse.push.mailer import Mailer

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

View file

@@ -31,7 +31,7 @@ from synapse.push import Pusher, PusherConfig, PusherConfigException
 from . import push_rule_evaluator, push_tools

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)
@@ -283,7 +283,7 @@ class HttpPusher(Pusher):
         if rejected is False:
             return False

-        if isinstance(rejected, list) or isinstance(rejected, tuple):
+        if isinstance(rejected, (list, tuple)):
             for pk in rejected:
                 if pk != self.pushkey:
                     # for sanity, we only remove the pushkey if it

View file

@@ -40,7 +40,7 @@ from synapse.util.async_helpers import concurrently_execute
 from synapse.visibility import filter_events_for_client

 if TYPE_CHECKING:
-    from synapse.app.homeserver import HomeServer
+    from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

Some files were not shown because too many files have changed in this diff.