Merge remote-tracking branch 'upstream/release-v1.107'

Tulir Asokan 2024-05-10 19:19:53 +03:00
commit 6f07fc4e00
68 changed files with 1466 additions and 997 deletions


@@ -85,33 +85,3 @@ jobs:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           publish_dir: ./book
           destination_dir: ./${{ needs.pre.outputs.branch-version }}
-################################################################################
-  pages-devdocs:
-    name: GitHub Pages (developer docs)
-    runs-on: ubuntu-latest
-    needs:
-      - pre
-    steps:
-      - uses: actions/checkout@v4
-
-      - name: "Set up Sphinx"
-        uses: matrix-org/setup-python-poetry@v1
-        with:
-          python-version: "3.x"
-          poetry-version: "1.3.2"
-          groups: "dev-docs"
-          extras: ""
-
-      - name: Build the documentation
-        run: |
-          cd dev-docs
-          poetry run make html
-
-      # Deploy to the target directory.
-      - name: Deploy to gh pages
-        uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0
-        with:
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          publish_dir: ./dev-docs/_build/html
-          destination_dir: ./dev-docs/${{ needs.pre.outputs.branch-version }}


@@ -1,3 +1,62 @@
+# Synapse 1.107.0rc1 (2024-05-07)
+
+### Features
+
+- Add preliminary support for [MSC3823: Account Suspension](https://github.com/matrix-org/matrix-spec-proposals/pull/3823). ([\#17051](https://github.com/element-hq/synapse/issues/17051))
+- Declare support for [Matrix v1.10](https://matrix.org/blog/2024/03/22/matrix-v1.10-release/). Contributed by @clokep. ([\#17082](https://github.com/element-hq/synapse/issues/17082))
+- Add support for [MSC4115: membership metadata on events](https://github.com/matrix-org/matrix-spec-proposals/pull/4115). ([\#17104](https://github.com/element-hq/synapse/issues/17104), [\#17137](https://github.com/element-hq/synapse/issues/17137))
+
+### Bugfixes
+
+- Fixed the search feature of Element Android on homeservers using SQLite by returning search terms as search highlights. ([\#17000](https://github.com/element-hq/synapse/issues/17000))
+- Fixed a bug introduced in v1.52.0 where the `destination` query parameter for the [Destination Rooms Admin API](https://element-hq.github.io/synapse/v1.105/usage/administration/admin_api/federation.html#destination-rooms) failed to actually filter returned rooms. ([\#17077](https://github.com/element-hq/synapse/issues/17077))
+- For MSC3266 room summaries, support queries at the recommended endpoint of `/_matrix/client/unstable/im.nheko.summary/summary/{roomIdOrAlias}`. The existing endpoint of `/_matrix/client/unstable/im.nheko.summary/rooms/{roomIdOrAlias}/summary` is deprecated. ([\#17078](https://github.com/element-hq/synapse/issues/17078))
+- Apply the user's email and picture during OIDC registration if present and selected. ([\#17120](https://github.com/element-hq/synapse/issues/17120))
+- Improve the error message for cross-signing reset with [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) enabled. ([\#17121](https://github.com/element-hq/synapse/issues/17121))
+- Fix a bug which meant that to-device messages received over federation could be dropped when the server was under load or networking problems caused problems between Synapse processes or the database. ([\#17127](https://github.com/element-hq/synapse/issues/17127))
+- Fix a bug where `StreamChangeCache` would not respect configured cache factors. ([\#17152](https://github.com/element-hq/synapse/issues/17152))
+
+### Updates to the Docker image
+
+- Correct licensing metadata on the Docker image. ([\#17141](https://github.com/element-hq/synapse/issues/17141))
+
+### Improved Documentation
+
+- Update the `event_cache_size` and `global_factor` configuration options' documentation. ([\#17071](https://github.com/element-hq/synapse/issues/17071))
+- Remove broken sphinx docs. ([\#17073](https://github.com/element-hq/synapse/issues/17073), [\#17148](https://github.com/element-hq/synapse/issues/17148))
+- Add `RuntimeDirectory` to the example matrix-synapse.service systemd unit. ([\#17084](https://github.com/element-hq/synapse/issues/17084))
+- Fix various small typos throughout the docs. ([\#17114](https://github.com/element-hq/synapse/issues/17114))
+- Update the `enable_notifs` configuration documentation. ([\#17116](https://github.com/element-hq/synapse/issues/17116))
+- Update the Upgrade Notes with the latest minimum supported Rust version of 1.66.0. Contributed by @jahway603. ([\#17140](https://github.com/element-hq/synapse/issues/17140))
+
+### Internal Changes
+
+- Enable [MSC3266](https://github.com/matrix-org/matrix-spec-proposals/pull/3266) by default in the Synapse Complement image. ([\#17105](https://github.com/element-hq/synapse/issues/17105))
+- Add optimisation to `StreamChangeCache.get_entities_changed(..)`. ([\#17130](https://github.com/element-hq/synapse/issues/17130))
+
+### Updates to locked dependencies
+
+* Bump furo from 2024.1.29 to 2024.4.27. ([\#17133](https://github.com/element-hq/synapse/issues/17133))
+* Bump idna from 3.6 to 3.7. ([\#17136](https://github.com/element-hq/synapse/issues/17136))
+* Bump jsonschema from 4.21.1 to 4.22.0. ([\#17157](https://github.com/element-hq/synapse/issues/17157))
+* Bump lxml from 5.1.0 to 5.2.1. ([\#17158](https://github.com/element-hq/synapse/issues/17158))
+* Bump phonenumbers from 8.13.29 to 8.13.35. ([\#17106](https://github.com/element-hq/synapse/issues/17106))
+* Bump pillow from 10.2.0 to 10.3.0. ([\#17146](https://github.com/element-hq/synapse/issues/17146))
+* Bump pydantic from 2.6.4 to 2.7.0. ([\#17107](https://github.com/element-hq/synapse/issues/17107))
+* Bump pydantic from 2.7.0 to 2.7.1. ([\#17160](https://github.com/element-hq/synapse/issues/17160))
+* Bump pyicu from 2.12 to 2.13. ([\#17109](https://github.com/element-hq/synapse/issues/17109))
+* Bump serde from 1.0.197 to 1.0.198. ([\#17111](https://github.com/element-hq/synapse/issues/17111))
+* Bump serde from 1.0.198 to 1.0.199. ([\#17132](https://github.com/element-hq/synapse/issues/17132))
+* Bump serde from 1.0.199 to 1.0.200. ([\#17161](https://github.com/element-hq/synapse/issues/17161))
+* Bump serde_json from 1.0.115 to 1.0.116. ([\#17112](https://github.com/element-hq/synapse/issues/17112))
+* Bump tornado from 6.2 to 6.4. ([\#17131](https://github.com/element-hq/synapse/issues/17131))
+* Bump twisted from 23.10.0 to 24.3.0. ([\#17135](https://github.com/element-hq/synapse/issues/17135))
+* Bump types-bleach from 6.1.0.1 to 6.1.0.20240331. ([\#17110](https://github.com/element-hq/synapse/issues/17110))
+* Bump types-pillow from 10.2.0.20240415 to 10.2.0.20240423. ([\#17159](https://github.com/element-hq/synapse/issues/17159))
+* Bump types-setuptools from 69.0.0.20240125 to 69.5.0.20240423. ([\#17134](https://github.com/element-hq/synapse/issues/17134))
+
+
 # Synapse 1.106.0 (2024-04-30)
 
 No significant changes since 1.106.0rc1.

Cargo.lock

@@ -485,18 +485,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
 
 [[package]]
 name = "serde"
-version = "1.0.197"
+version = "1.0.200"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3fb1c873e1b9b056a4dc4c0c198b24c3ffa059243875552b2bd0933b1aee4ce2"
+checksum = "ddc6f9cc94d67c0e21aaf7eda3a010fd3af78ebf6e096aa6e2e13c79749cce4f"
 dependencies = [
  "serde_derive",
 ]
 
 [[package]]
 name = "serde_derive"
-version = "1.0.197"
+version = "1.0.200"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7eb0b34b42edc17f6b7cac84a52a1c5f0e1bb2227e997ca9011ea3dd34e8610b"
+checksum = "856f046b9400cee3c8c94ed572ecdb752444c24528c035cd35882aad6f492bcb"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -505,9 +505,9 @@ dependencies = [
 
 [[package]]
 name = "serde_json"
-version = "1.0.115"
+version = "1.0.116"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "12dc5c46daa8e9fdf4f5e71b6cf9a53f2487da0e86e55808e2d35539666497dd"
+checksum = "3e17db7126d17feb94eb3fad46bf1a96b034e8aacbc2e775fe81505f8b0b2813"
 dependencies = [
  "itoa",
  "ryu",

debian/changelog

@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.107.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.107.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 07 May 2024 16:26:26 +0100
+
 matrix-synapse-py3 (1.106.0) stable; urgency=medium
 
   * New Synapse release 1.106.0.


@ -1,20 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)


@ -1,50 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = "Synapse development"
copyright = "2023, The Matrix.org Foundation C.I.C."
author = "The Synapse Maintainers and Community"
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
"autodoc2",
"myst_parser",
]
templates_path = ["_templates"]
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# -- Options for Autodoc2 ----------------------------------------------------
autodoc2_docstring_parser_regexes = [
# this will render all docstrings as 'MyST' Markdown
(r".*", "myst"),
]
autodoc2_packages = [
{
"path": "../synapse",
# Don't render documentation for everything as a matter of course
"auto_mode": False,
},
]
# -- Options for MyST (Markdown) ---------------------------------------------
# myst_heading_anchors = 2
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = "furo"
html_static_path = ["_static"]


@ -1,22 +0,0 @@
.. Synapse Developer Documentation documentation master file, created by
sphinx-quickstart on Mon Mar 13 08:59:51 2023.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to the Synapse Developer Documentation!
===========================================================
.. toctree::
:maxdepth: 2
:caption: Contents:
modules/federation_sender
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@@ -1,5 +0,0 @@
-Federation Sender
-=================
-
-```{autodoc2-docstring} synapse.federation.sender
-```


@@ -163,7 +163,7 @@ FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
 LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
 LABEL org.opencontainers.image.documentation='https://github.com/element-hq/synapse/blob/master/docker/README.md'
 LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
-LABEL org.opencontainers.image.licenses='Apache-2.0'
+LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later'
 
 RUN \
   --mount=type=cache,target=/var/cache/apt,sharing=locked \


@@ -92,8 +92,6 @@ allow_device_name_lookup_over_federation: true
 ## Experimental Features ##
 
 experimental_features:
-  # client-side support for partial state in /send_join responses
-  faster_joins: true
   # Enable support for polls
   msc3381_polls_enabled: true
   # Enable deleting device-specific notification settings stored in account data
@@ -104,6 +102,10 @@ experimental_features:
   msc3874_enabled: true
   # no UIA for x-signing upload for the first time
   msc3967_enabled: true
+  # Expose a room summary for public rooms
+  msc3266_enabled: true
+
+  msc4115_membership_on_events: true
 
 server_notices:
   system_mxid_localpart: _server


@@ -1,6 +1,6 @@
 # Edit Room Membership API
 
-This API allows an administrator to join an user account with a given `user_id`
+This API allows an administrator to join a user account with a given `user_id`
 to a room with a given `room_id_or_alias`. You can only modify the membership of
 local users. The server administrator must be in the room and have permission to
 invite users.


@@ -51,8 +51,8 @@ clients.
 
 ## Server configuration
 
-Support for this feature can be enabled and configured by adding a the
-`retention` in the Synapse configuration file (see
+Support for this feature can be enabled and configured by adding the
+`retention` option in the Synapse configuration file (see
 [configuration manual](usage/configuration/config_documentation.md#retention)).
 
 To enable support for message retention policies, set the setting
@@ -117,7 +117,7 @@ In this example, we define three jobs:
   policy's `max_lifetime` is greater than a week.
 
 Note that this example is tailored to show different configurations and
-features slightly more jobs than it's probably necessary (in practice, a
+features slightly more jobs than is probably necessary (in practice, a
 server admin would probably consider it better to replace the two last
 jobs with one that runs once a day and handles rooms which
 policy's `max_lifetime` is greater than 3 days).
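
For orientation, a minimal sketch of what a `retention` section along these lines could look like. Option names follow the configuration manual; the values and the job split are illustrative only, not a recommendation:

```yaml
retention:
  enabled: true
  # Illustrative server-wide default policy.
  default_policy:
    min_lifetime: 1d
    max_lifetime: 1y
  purge_jobs:
    # A frequent job for rooms whose policy expires messages quickly...
    - longest_max_lifetime: 3d
      interval: 12h
    # ...and a slower daily job for everything else.
    - shortest_max_lifetime: 3d
      interval: 1d
```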


@@ -128,7 +128,7 @@ can read more about that [here](https://www.postgresql.org/docs/10/kernel-resour
 
 ### Overview
 
 The script `synapse_port_db` allows porting an existing synapse server
-backed by SQLite to using PostgreSQL. This is done in as a two phase
+backed by SQLite to using PostgreSQL. This is done as a two phase
 process:
 
 1. Copy the existing SQLite database to a separate location and run


@@ -259,9 +259,9 @@ users, etc.) to the developers via the `--report-stats` argument.
 
 This command will generate you a config file that you can then customise, but it will
 also generate a set of keys for you. These keys will allow your homeserver to
-identify itself to other homeserver, so don't lose or delete them. It would be
+identify itself to other homeservers, so don't lose or delete them. It would be
 wise to back them up somewhere safe. (If, for whatever reason, you do need to
-change your homeserver's keys, you may find that other homeserver have the
+change your homeserver's keys, you may find that other homeservers have the
 old key cached. If you update the signing key, you should change the name of the
 key in the `<server name>.signing.key` file (the second word) to something
 different. See the [spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys) for more information on key management).


@@ -98,6 +98,7 @@ A custom mapping provider must specify the following methods:
       either accept this localpart or pick their own username. Otherwise this
       option has no effect. If omitted, defaults to `False`.
     - `display_name`: An optional string, the display name for the user.
+    - `picture`: An optional string, the avatar url for the user.
     - `emails`: A list of strings, the email address(es) to associate with
       this user. If omitted, defaults to an empty list.
 * `async def get_extra_attributes(self, userinfo, token)`


@@ -9,6 +9,7 @@ ReloadPropagatedFrom=matrix-synapse.target
 Type=notify
 NotifyAccess=main
 User=matrix-synapse
+RuntimeDirectory=synapse
 WorkingDirectory=/var/lib/matrix-synapse
 EnvironmentFile=-/etc/default/matrix-synapse
 ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys


@@ -117,6 +117,14 @@ each upgrade are complete before moving on to the next upgrade, to avoid
 stacking them up. You can monitor the currently running background updates with
 [the Admin API](usage/administration/admin_api/background_updates.html#status).
 
+# Upgrading to v1.106.0
+
+## Minimum supported Rust version
+
+The minimum supported Rust version has been increased from v1.65.0 to v1.66.0.
+Users building from source will need to ensure their `rustc` version is up to
+date.
+
 # Upgrading to v1.100.0
 
 ## Minimum supported Rust version


@@ -44,7 +44,7 @@ For each update:
 
 ## Enabled
 
-This API allow pausing background updates.
+This API allows pausing background updates.
 
 Background updates should *not* be paused for significant periods of time, as
 this can affect the performance of Synapse.


@@ -241,7 +241,7 @@ in memory constrained environments, or increased if performance starts to
 degrade.
 
 However, degraded performance due to a low cache factor, common on
-machines with slow disks, often leads to explosions in memory use due
+machines with slow disks, often leads to explosions in memory use due to
 backlogged requests. In this case, reducing the cache factor will make
 things worse. Instead, try increasing it drastically. 2.0 is a good
 starting value.


@@ -676,8 +676,8 @@ This setting has the following sub-options:
    trailing 's'.
 * `app_name`: `app_name` defines the default value for '%(app)s' in `notif_from` and email
    subjects. It defaults to 'Matrix'.
-* `enable_notifs`: Set to true to enable sending emails for messages that the user
-  has missed. Disabled by default.
+* `enable_notifs`: Set to true to allow users to receive e-mail notifications. If this is not set,
+  users can configure e-mail notifications but will not receive them. Disabled by default.
 * `notif_for_new_users`: Set to false to disable automatic subscription to email
   notifications for new users. Enabled by default.
 * `notif_delay_before_mail`: The time to wait before emailing about a notification.
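
To make the `enable_notifs` behaviour concrete, a hedged sketch of an `email` section with notification sending switched on (the mail server and addresses are placeholders):

```yaml
email:
  smtp_host: smtp.example.com   # placeholder SMTP server
  notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
  # Without this flag, users can still *configure* email notifications,
  # but the server will never actually send them.
  enable_notifs: true
```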
@@ -1317,6 +1317,12 @@ Options related to caching.
 The number of events to cache in memory. Defaults to 10K. Like other caches,
 this is affected by `caches.global_factor` (see below).
 
+For example, the default is 10K and the `global_factor` default is 0.5.
+
+Since 10K * 0.5 is 5K then the event cache size will be 5K.
+
+The cache affected by this configuration is named as "*getEvent*".
+
 Note that this option is not part of the `caches` section.
 
 Example configuration:
@@ -1342,6 +1348,8 @@ number of entries that can be stored.
 
 Defaults to 0.5, which will halve the size of all caches.
 
+Note that changing this value also affects the HTTP connection pool.
+
 * `per_cache_factors`: A dictionary of cache name to cache factor for that individual
   cache. Overrides the global cache factor for a given cache.
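
Combining the two options above, an illustrative sketch (the cache name and the factors are examples only):

```yaml
caches:
  # Halve every cache relative to its base size by default...
  global_factor: 0.5
  per_cache_factors:
    # ...but give this particular cache twice its base size.
    get_users_who_share_room_with_user: 2.0
```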


@@ -86,9 +86,9 @@ The search term is then split into words:
 * If unavailable, then runs of ASCII characters, numbers, underscores, and hyphens
   are considered words.
 
-The queries for PostgreSQL and SQLite are detailed below, by their overall goal
+The queries for PostgreSQL and SQLite are detailed below, but their overall goal
 is to find matching users, preferring users who are "real" (e.g. not bots,
-not deactivated). It is assumed that real users will have an display name and
+not deactivated). It is assumed that real users will have a display name and
 avatar set.
 
 ### PostgreSQL


@@ -232,7 +232,7 @@ information.
     ^/_matrix/client/v1/rooms/.*/hierarchy$
     ^/_matrix/client/(v1|unstable)/rooms/.*/relations/
     ^/_matrix/client/v1/rooms/.*/threads$
-    ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
+    ^/_matrix/client/unstable/im.nheko.summary/summary/.*$
     ^/_matrix/client/(r0|v3|unstable)/account/3pid$
     ^/_matrix/client/(r0|v3|unstable)/account/whoami$
     ^/_matrix/client/(r0|v3|unstable)/devices$
@@ -634,7 +634,7 @@ worker application type.
 
 #### Push Notifications
 
-You can designate generic worker to sending push notifications to
+You can designate generic workers to send push notifications to
 a [push gateway](https://spec.matrix.org/v1.5/push-gateway-api/) such as
 [sygnal](https://github.com/matrix-org/sygnal) and email.
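
As a sketch, designating such workers is done via the `pusher_instances` option in the shared configuration; the worker names below are hypothetical:

```yaml
# Shared configuration: route push notifications (and notification emails)
# to these generic workers.
pusher_instances:
  - pusher_worker1
  - pusher_worker2
```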

poetry.lock

(File diff suppressed because it is too large.)


@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
 
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.106.0"
+version = "1.107.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -364,17 +364,6 @@ towncrier = ">=18.6.0rc1"
 tomli = ">=1.2.3"
 
-# Dependencies for building the development documentation
-[tool.poetry.group.dev-docs]
-optional = true
-
-[tool.poetry.group.dev-docs.dependencies]
-sphinx = {version = "^6.1", python = "^3.8"}
-sphinx-autodoc2 = {version = ">=0.4.2,<0.6.0", python = "^3.8"}
-myst-parser = {version = "^1.0.0", python = "^3.8"}
-furo = ">=2022.12.7,<2025.0.0"
-
 [build-system]
 # The upper bounds here are defensive, intended to prevent situations like
 # https://github.com/matrix-org/synapse/issues/13849 and


@@ -20,8 +20,10 @@
 
 //! Implements the internal metadata class attached to events.
 //!
-//! The internal metadata is a bit like a `TypedDict`, in that it is stored as a
-//! JSON dict in the DB. Most events have zero, or only a few, of these keys
+//! The internal metadata is a bit like a `TypedDict`, in that most of
+//! it is stored as a JSON dict in the DB (the exceptions being `outlier`
+//! and `stream_ordering` which have their own columns in the database).
+//!
+//! Most events have zero, or only a few, of these keys
 //! set. Therefore, since we care more about memory size than performance here,
 //! we store these fields in a mapping.
 //!
@@ -234,6 +236,9 @@ impl EventInternalMetadata {
         self.clone()
     }
 
+    /// Get a dict holding the data stored in the `internal_metadata` column in the database.
+    ///
+    /// Note that `outlier` and `stream_ordering` are stored in separate columns so are not returned here.
     fn get_dict(&self, py: Python<'_>) -> PyResult<PyObject> {
         let dict = PyDict::new(py);


@@ -214,7 +214,17 @@ fi
 
 extra_test_args=()
 
-test_packages="./tests/csapi ./tests ./tests/msc3874 ./tests/msc3890 ./tests/msc3391 ./tests/msc3930 ./tests/msc3902 ./tests/msc3967"
+test_packages=(
+    ./tests/csapi
+    ./tests
+    ./tests/msc3874
+    ./tests/msc3890
+    ./tests/msc3391
+    ./tests/msc3930
+    ./tests/msc3902
+    ./tests/msc3967
+    ./tests/msc4115
+)
 
 # Enable dirty runs, so tests will reuse the same container where possible.
 # This significantly speeds up tests, but increases the possibility of test pollution.
@@ -278,7 +288,7 @@ fi
 export PASS_SYNAPSE_LOG_TESTING=1
 
 # Run the tests!
-echo "Images built; running complement with ${extra_test_args[@]} $@ $test_packages"
+echo "Images built; running complement with ${extra_test_args[@]} $@ ${test_packages[@]}"
 cd "$COMPLEMENT_DIR"
-go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" $test_packages
+go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" "${test_packages[@]}"


@@ -91,7 +91,6 @@ else
       "synapse" "docker" "tests"
       "scripts-dev"
      "contrib" "synmark" "stubs" ".ci"
-      "dev-docs"
     )
   fi
 fi


@@ -127,7 +127,7 @@ BOOLEAN_COLUMNS = {
     "redactions": ["have_censored"],
     "room_stats_state": ["is_federatable"],
     "rooms": ["is_public", "has_auth_chain_index"],
-    "users": ["shadow_banned", "approved", "locked"],
+    "users": ["shadow_banned", "approved", "locked", "suspended"],
     "un_partial_stated_event_stream": ["rejection_status_changed"],
     "users_who_share_rooms": ["share_private"],
     "per_user_experimental_features": ["enabled"],


@@ -234,6 +234,13 @@ class EventContentFields:
     TO_DEVICE_MSGID: Final = "org.matrix.msgid"
 
 
+class EventUnsignedContentFields:
+    """Fields found inside the 'unsigned' data on events"""
+
+    # Requesting user's membership, per MSC4115
+    MSC4115_MEMBERSHIP: Final = "io.element.msc4115.membership"
+
+
 class RoomTypes:
     """Understood values of the room_type field of m.room.create events."""


@@ -432,3 +432,7 @@ class ExperimentalConfig(Config):
                 "You cannot have MSC4108 both enabled and delegated at the same time",
                 ("experimental", "msc4108_delegation_endpoint"),
             )
+
+        self.msc4115_membership_on_events = experimental.get(
+            "msc4115_membership_on_events", False
+        )


@@ -49,7 +49,7 @@ from synapse.api.errors import Codes, SynapseError
 from synapse.api.room_versions import RoomVersion
 from synapse.types import JsonDict, Requester
 
-from . import EventBase
+from . import EventBase, make_event_from_dict
 
 if TYPE_CHECKING:
     from synapse.handlers.relations import BundledAggregations
@@ -82,17 +82,14 @@ def prune_event(event: EventBase) -> EventBase:
     """
     pruned_event_dict = prune_event_dict(event.room_version, event.get_dict())
 
-    from . import make_event_from_dict
-
     pruned_event = make_event_from_dict(
         pruned_event_dict, event.room_version, event.internal_metadata.get_dict()
     )
 
-    # copy the internal fields
+    # Copy the bits of `internal_metadata` that aren't returned by `get_dict`
     pruned_event.internal_metadata.stream_ordering = (
         event.internal_metadata.stream_ordering
     )
-
     pruned_event.internal_metadata.outlier = event.internal_metadata.outlier
 
     # Mark the event as redacted
@@ -101,6 +98,29 @@ def prune_event(event: EventBase) -> EventBase:
     return pruned_event
 
 
+def clone_event(event: EventBase) -> EventBase:
+    """Take a copy of the event.
+
+    This is mostly useful because it does a *shallow* copy of the `unsigned` data,
+    which means it can then be updated without corrupting the in-memory cache. Note that
+    other properties of the event, such as `content`, are *not* (currently) copied here.
+    """
+    # XXX: We rely on at least one of `event.get_dict()` and `make_event_from_dict()`
+    # making a copy of `unsigned`. Currently, both do, though I don't really know why.
+    # Still, as long as they do, there's not much point doing yet another copy here.
+    new_event = make_event_from_dict(
+        event.get_dict(), event.room_version, event.internal_metadata.get_dict()
+    )
+
+    # Copy the bits of `internal_metadata` that aren't returned by `get_dict`.
+    new_event.internal_metadata.stream_ordering = (
+        event.internal_metadata.stream_ordering
+    )
+    new_event.internal_metadata.outlier = event.internal_metadata.outlier
+
+    return new_event
+
+
 def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDict:
     """Redacts the event_dict in the same way as `prune_event`, except it
     operates on dicts rather than event objects


@@ -546,7 +546,25 @@ class FederationServer(FederationBase):
                 edu_type=edu_dict["edu_type"],
                 content=edu_dict["content"],
             )
-            await self.registry.on_edu(edu.edu_type, origin, edu.content)
+            try:
+                await self.registry.on_edu(edu.edu_type, origin, edu.content)
+            except Exception:
+                # If there was an error handling the EDU, we must reject the
+                # transaction.
+                #
+                # Some EDU types (notably, to-device messages) are, despite their name,
+                # expected to be reliable; if we weren't able to do something with it,
+                # we have to tell the sender that, and the only way the protocol gives
+                # us to do so is by sending an HTTP error back on the transaction.
+                #
+                # We log the exception now, and then raise a new SynapseError to cause
+                # the transaction to be failed.
+                logger.exception("Error handling EDU of type %s", edu.edu_type)
+                raise SynapseError(500, f"Error handling EDU of type {edu.edu_type}")
+
+                # TODO: if the first EDU fails, we should probably abort the whole
+                # thing rather than carrying on with the rest of them. That would
+                # probably be best done inside `concurrently_execute`.
 
         await concurrently_execute(
             _process_edu,
@@ -1414,12 +1432,7 @@ class FederationHandlerRegistry:
         handler = self.edu_handlers.get(edu_type)
         if handler:
             with start_active_span_from_edu(content, "handle_edu"):
-                try:
-                    await handler(origin, content)
-                except SynapseError as e:
-                    logger.info("Failed to handle edu %r: %r", edu_type, e)
-                except Exception:
-                    logger.exception("Failed to handle edu %r", edu_type)
+                await handler(origin, content)
             return
 
         # Check if we can route it somewhere else that isn't us
@@ -1428,17 +1441,12 @@ class FederationHandlerRegistry:
         # Pick an instance randomly so that we don't overload one.
         route_to = random.choice(instances)
 
-        try:
-            await self._send_edu(
-                instance_name=route_to,
-                edu_type=edu_type,
-                origin=origin,
-                content=content,
-            )
-        except SynapseError as e:
-            logger.info("Failed to handle edu %r: %r", edu_type, e)
-        except Exception:
-            logger.exception("Failed to handle edu %r", edu_type)
+        await self._send_edu(
+            instance_name=route_to,
+            edu_type=edu_type,
+            origin=origin,
+            content=content,
+        )
         return
 
     # Oh well, let's just log and move on.


@@ -42,6 +42,7 @@ class AdminHandler:
         self._device_handler = hs.get_device_handler()
         self._storage_controllers = hs.get_storage_controllers()
         self._state_storage_controller = self._storage_controllers.state
+        self._hs_config = hs.config
         self._msc3866_enabled = hs.config.experimental.msc3866.enabled
 
     async def get_whois(self, user: UserID) -> JsonMapping:
@@ -217,7 +218,10 @@ class AdminHandler:
             )
 
             events = await filter_events_for_client(
-                self._storage_controllers, user_id, events
+                self._storage_controllers,
+                user_id,
+                events,
+                msc4115_membership_on_events=self._hs_config.experimental.msc4115_membership_on_events,
             )
 
             writer.write_events(room_id, events)


@@ -104,6 +104,9 @@ class DeviceMessageHandler:
         """
         Handle receiving to-device messages from remote homeservers.
 
+        Note that any errors thrown from this method will cause the federation /send
+        request to receive an error response.
+
         Args:
             origin: The remote homeserver.
             content: The JSON dictionary containing the to-device messages.


@@ -148,6 +148,7 @@ class EventHandler:
     def __init__(self, hs: "HomeServer"):
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
+        self._config = hs.config
 
     async def get_event(
         self,
@@ -189,7 +190,11 @@ class EventHandler:
         is_peeking = not is_user_in_room
 
         filtered = await filter_events_for_client(
-            self._storage_controllers, user.to_string(), [event], is_peeking=is_peeking
+            self._storage_controllers,
+            user.to_string(),
+            [event],
+            is_peeking=is_peeking,
+            msc4115_membership_on_events=self._config.experimental.msc4115_membership_on_events,
         )
 
         if not filtered:


@@ -221,7 +221,10 @@ class InitialSyncHandler:
             ).addErrback(unwrapFirstError)
 
             messages = await filter_events_for_client(
-                self._storage_controllers, user_id, messages
+                self._storage_controllers,
+                user_id,
+                messages,
+                msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
             )
 
             start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)
@@ -380,6 +383,7 @@ class InitialSyncHandler:
             requester.user.to_string(),
             messages,
             is_peeking=is_peeking,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         start_token = StreamToken.START.copy_and_replace(StreamKeyType.ROOM, token)
@@ -494,6 +498,7 @@ class InitialSyncHandler:
             requester.user.to_string(),
             messages,
             is_peeking=is_peeking,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)


@@ -623,6 +623,7 @@ class PaginationHandler:
                 user_id,
                 events,
                 is_peeking=(member_event_id is None),
+                msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
             )
 
             # if after the filter applied there are no more events


@@ -95,6 +95,7 @@ class RelationsHandler:
         self._event_handler = hs.get_event_handler()
         self._event_serializer = hs.get_event_client_serializer()
         self._event_creation_handler = hs.get_event_creation_handler()
+        self._config = hs.config
 
     async def get_relations(
         self,
@@ -163,6 +164,7 @@ class RelationsHandler:
             user_id,
             events,
             is_peeking=(member_event_id is None),
+            msc4115_membership_on_events=self._config.experimental.msc4115_membership_on_events,
         )
 
         # The relations returned for the requested event do include their
@@ -608,6 +610,7 @@ class RelationsHandler:
             user_id,
             events,
             is_peeking=(member_event_id is None),
+            msc4115_membership_on_events=self._config.experimental.msc4115_membership_on_events,
         )
 
         aggregations = await self.get_bundled_aggregations(


@@ -1488,6 +1488,7 @@ class RoomContextHandler:
             user.to_string(),
             events,
             is_peeking=is_peeking,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         event = await self.store.get_event(


@@ -752,6 +752,36 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             and requester.user.to_string() == self._server_notices_mxid
         )
 
+        requester_suspended = await self.store.get_user_suspended_status(
+            requester.user.to_string()
+        )
+        if action == Membership.INVITE and requester_suspended:
+            raise SynapseError(
+                403,
+                "Sending invites while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
+        if target.to_string() != requester.user.to_string():
+            target_suspended = await self.store.get_user_suspended_status(
+                target.to_string()
+            )
+        else:
+            target_suspended = requester_suspended
+
+        if action == Membership.JOIN and target_suspended:
+            raise SynapseError(
+                403,
+                "Joining rooms while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
+        if action == Membership.KNOCK and target_suspended:
+            raise SynapseError(
+                403,
+                "Knocking on rooms while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
         if (
             not self.allow_per_room_profiles and not is_requester_server_notices_user
         ) or requester.shadow_banned:


@@ -480,7 +480,10 @@ class SearchHandler:
         filtered_events = await search_filter.filter([r["event"] for r in results])
 
         events = await filter_events_for_client(
-            self._storage_controllers, user.to_string(), filtered_events
+            self._storage_controllers,
+            user.to_string(),
+            filtered_events,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         events.sort(key=lambda e: -rank_map[e.event_id])
@@ -579,7 +582,10 @@ class SearchHandler:
         filtered_events = await search_filter.filter([r["event"] for r in results])
 
         events = await filter_events_for_client(
-            self._storage_controllers, user.to_string(), filtered_events
+            self._storage_controllers,
+            user.to_string(),
+            filtered_events,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         room_events.extend(events)
@@ -664,11 +670,17 @@ class SearchHandler:
         )
 
         events_before = await filter_events_for_client(
-            self._storage_controllers, user.to_string(), res.events_before
+            self._storage_controllers,
+            user.to_string(),
+            res.events_before,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
        )
 
         events_after = await filter_events_for_client(
-            self._storage_controllers, user.to_string(), res.events_after
+            self._storage_controllers,
+            user.to_string(),
+            res.events_after,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         context: JsonDict = {


@@ -169,6 +169,7 @@ class UsernameMappingSession:
     # attributes returned by the ID mapper
     display_name: Optional[str]
     emails: StrCollection
+    avatar_url: Optional[str]
 
     # An optional dictionary of extra attributes to be provided to the client in the
     # login response.
@@ -183,6 +184,7 @@ class UsernameMappingSession:
     # choices made by the user
     chosen_localpart: Optional[str] = None
     use_display_name: bool = True
+    use_avatar: bool = True
     emails_to_use: StrCollection = ()
     terms_accepted_version: Optional[str] = None
 
@@ -660,6 +662,9 @@ class SsoHandler:
             remote_user_id=remote_user_id,
             display_name=attributes.display_name,
             emails=attributes.emails,
+            avatar_url=attributes.picture,
+            # Default to using all mapped emails. Will be overwritten in handle_submit_username_request.
+            emails_to_use=attributes.emails,
             client_redirect_url=client_redirect_url,
             expiry_time_ms=now + self._MAPPING_SESSION_VALIDITY_PERIOD_MS,
             extra_login_attributes=extra_login_attributes,
@@ -966,6 +971,7 @@ class SsoHandler:
         session_id: str,
         localpart: str,
         use_display_name: bool,
+        use_avatar: bool,
         emails_to_use: Iterable[str],
     ) -> None:
         """Handle a request to the username-picker 'submit' endpoint
@@ -988,6 +994,7 @@ class SsoHandler:
         # update the session with the user's choices
         session.chosen_localpart = localpart
         session.use_display_name = use_display_name
+        session.use_avatar = use_avatar
 
         emails_from_idp = set(session.emails)
         filtered_emails: Set[str] = set()
@@ -1068,6 +1075,9 @@ class SsoHandler:
         if session.use_display_name:
             attributes.display_name = session.display_name
 
+        if session.use_avatar:
+            attributes.picture = session.avatar_url
+
         # the following will raise a 400 error if the username has been taken in the
         # meantime.
         user_id = await self._register_mapped_user(


@@ -596,6 +596,7 @@ class SyncHandler:
                     sync_config.user.to_string(),
                     recents,
                     always_include_ids=current_state_ids,
+                    msc4115_membership_on_events=self.hs_config.experimental.msc4115_membership_on_events,
                 )
                 log_kv({"recents_after_visibility_filtering": len(recents)})
             else:
@@ -681,6 +682,7 @@ class SyncHandler:
                     sync_config.user.to_string(),
                     loaded_recents,
                     always_include_ids=current_state_ids,
+                    msc4115_membership_on_events=self.hs_config.experimental.msc4115_membership_on_events,
                 )
 
                 loaded_recents = []


@@ -721,6 +721,7 @@ class Notifier:
                     user.to_string(),
                     new_events,
                     is_peeking=is_peeking,
+                    msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
                 )
             elif keyname == StreamKeyType.PRESENCE:
                 now = self.clock.time_msec()


@@ -529,7 +529,10 @@ class Mailer:
         }
 
         the_events = await filter_events_for_client(
-            self._storage_controllers, user_id, results.events_before
+            self._storage_controllers,
+            user_id,
+            results.events_before,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         the_events.append(notif_event)


@@ -393,17 +393,20 @@ class SigningKeyUploadServlet(RestServlet):
         # time. Because there is no UIA in MSC3861, for now we throw an error if the
         # user tries to reset the device signing key when MSC3861 is enabled, but allow
         # first-time setup.
+        #
+        # XXX: We now have a get-out clause by which MAS can temporarily mark the master
+        # key as replaceable. It should do its own equivalent of user interactive auth
+        # before doing so.
         if self.hs.config.experimental.msc3861.enabled:
             # The auth service has to explicitly mark the master key as replaceable
             # without UIA to reset the device signing key with MSC3861.
             if is_cross_signing_setup and not master_key_updatable_without_uia:
+                config = self.hs.config.experimental.msc3861
+                if config.account_management_url is not None:
+                    url = f"{config.account_management_url}?action=org.matrix.cross_signing_reset"
+                else:
+                    url = config.issuer
+
                 raise SynapseError(
                     HTTPStatus.NOT_IMPLEMENTED,
-                    "Resetting cross signing keys is not yet supported with MSC3861",
+                    "To reset your end-to-end encryption cross-signing identity, "
+                    f"you first need to approve it at {url} and then try again.",
                     Codes.UNRECOGNIZED,
                 )
 
             # But first-time setup is fine


@@ -1446,10 +1446,16 @@ class RoomHierarchyRestServlet(RestServlet):
 
 class RoomSummaryRestServlet(ResolveRoomIdMixin, RestServlet):
     PATTERNS = (
+        # deprecated endpoint, to be removed
         re.compile(
             "^/_matrix/client/unstable/im.nheko.summary"
             "/rooms/(?P<room_identifier>[^/]*)/summary$"
         ),
+        # recommended endpoint
+        re.compile(
+            "^/_matrix/client/unstable/im.nheko.summary"
+            "/summary/(?P<room_identifier>[^/]*)$"
+        ),
     )
     CATEGORY = "Client API requests"


@@ -89,6 +89,7 @@ class VersionsRestServlet(RestServlet):
                     "v1.7",
                     "v1.8",
                     "v1.9",
+                    "v1.10",
                 ],
                 # as per MSC1497:
                 "unstable_features": {


@@ -113,6 +113,7 @@ class AccountDetailsResource(DirectServeHtmlResource):
                 "display_name": session.display_name,
                 "emails": session.emails,
                 "localpart": localpart,
+                "avatar_url": session.avatar_url,
             },
         }
 
@@ -134,6 +135,7 @@ class AccountDetailsResource(DirectServeHtmlResource):
         try:
             localpart = parse_string(request, "username", required=True)
             use_display_name = parse_boolean(request, "use_display_name", default=False)
+            use_avatar = parse_boolean(request, "use_avatar", default=False)
 
             try:
                 emails_to_use: List[str] = [
@@ -147,5 +149,5 @@ class AccountDetailsResource(DirectServeHtmlResource):
             return
 
         await self._sso_handler.handle_submit_username_request(
-            request, session_id, localpart, use_display_name, emails_to_use
+            request, session_id, localpart, use_display_name, use_avatar, emails_to_use
         )


@@ -236,7 +236,8 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 consent_server_notice_sent, appservice_id, creation_ts, user_type,
                 deactivated, COALESCE(shadow_banned, FALSE) AS shadow_banned,
                 COALESCE(approved, TRUE) AS approved,
-                COALESCE(locked, FALSE) AS locked
+                COALESCE(locked, FALSE) AS locked,
+                suspended
                 FROM users
                 WHERE name = ?
                 """,
@@ -261,6 +262,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 shadow_banned,
                 approved,
                 locked,
+                suspended,
             ) = row
 
             return UserInfo(
@@ -277,6 +279,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 user_type=user_type,
                 approved=bool(approved),
                 locked=bool(locked),
+                suspended=bool(suspended),
             )
 
         return await self.db_pool.runInteraction(
@@ -1180,6 +1183,27 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
         # Convert the potential integer into a boolean.
         return bool(res)
 
+    @cached()
+    async def get_user_suspended_status(self, user_id: str) -> bool:
+        """
+        Determine whether the user's account is suspended.
+
+        Args:
+            user_id: The user ID of the user in question
+        Returns:
+            True if the user's account is suspended, false if it is not suspended or
+            if the user ID cannot be found.
+        """
+        res = await self.db_pool.simple_select_one_onecol(
+            table="users",
+            keyvalues={"name": user_id},
+            retcol="suspended",
+            allow_none=True,
+            desc="get_user_suspended",
+        )
+
+        return bool(res)
+
     async def get_threepid_validation_session(
         self,
         medium: Optional[str],
@@ -2213,6 +2237,35 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
             self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
             txn.call_after(self.is_guest.invalidate, (user_id,))
 
+    async def set_user_suspended_status(self, user_id: str, suspended: bool) -> None:
+        """
+        Set whether the user's account is suspended in the `users` table.
+
+        Args:
+            user_id: The user ID of the user in question
+            suspended: True if the user is suspended, false if not
+        """
+
+        await self.db_pool.runInteraction(
+            "set_user_suspended_status",
+            self.set_user_suspended_status_txn,
+            user_id,
+            suspended,
+        )
+
+    def set_user_suspended_status_txn(
+        self, txn: LoggingTransaction, user_id: str, suspended: bool
+    ) -> None:
+        self.db_pool.simple_update_one_txn(
+            txn=txn,
+            table="users",
+            keyvalues={"name": user_id},
+            updatevalues={"suspended": suspended},
+        )
+        self._invalidate_cache_and_stream(
+            txn, self.get_user_suspended_status, (user_id,)
+        )
+        self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
+
     async def set_user_locked_status(self, user_id: str, locked: bool) -> None:
         """Set the `locked` property for the provided user to the provided value.


@@ -470,6 +470,8 @@ class SearchStore(SearchBackgroundUpdateStore):
         count_args = args
         count_clauses = clauses
 
+        sqlite_highlights: List[str] = []
+
         if isinstance(self.database_engine, PostgresEngine):
             search_query = search_term
             sql = """
@@ -486,7 +488,7 @@ class SearchStore(SearchBackgroundUpdateStore):
             """
             count_args = [search_query] + count_args
         elif isinstance(self.database_engine, Sqlite3Engine):
-            search_query = _parse_query_for_sqlite(search_term)
+            search_query, sqlite_highlights = _parse_query_for_sqlite(search_term)
 
             sql = """
                 SELECT rank(matchinfo(event_search)) as rank, room_id, event_id
@@ -531,9 +533,11 @@ class SearchStore(SearchBackgroundUpdateStore):
 
         event_map = {ev.event_id: ev for ev in events}
 
-        highlights = None
+        highlights: Collection[str] = []
         if isinstance(self.database_engine, PostgresEngine):
             highlights = await self._find_highlights_in_postgres(search_query, events)
+        else:
+            highlights = sqlite_highlights
 
         count_sql += " GROUP BY room_id"
 
@@ -597,6 +601,8 @@ class SearchStore(SearchBackgroundUpdateStore):
         count_args = list(args)
         count_clauses = list(clauses)
 
+        sqlite_highlights: List[str] = []
+
         if pagination_token:
             try:
                 origin_server_ts_str, stream_str = pagination_token.split(",")
@@ -647,7 +653,7 @@ class SearchStore(SearchBackgroundUpdateStore):
                 CROSS JOIN events USING (event_id)
                 WHERE
             """
-            search_query = _parse_query_for_sqlite(search_term)
+            search_query, sqlite_highlights = _parse_query_for_sqlite(search_term)
             args = [search_query] + args
 
             count_sql = """
@@ -694,9 +700,11 @@ class SearchStore(SearchBackgroundUpdateStore):
 
         event_map = {ev.event_id: ev for ev in events}
 
-        highlights = None
+        highlights: Collection[str] = []
         if isinstance(self.database_engine, PostgresEngine):
             highlights = await self._find_highlights_in_postgres(search_query, events)
+        else:
+            highlights = sqlite_highlights
 
         count_sql += " GROUP BY room_id"
 
@@ -892,19 +900,25 @@ def _tokenize_query(query: str) -> TokenList:
     return tokens
 
 
-def _tokens_to_sqlite_match_query(tokens: TokenList) -> str:
+def _tokens_to_sqlite_match_query(tokens: TokenList) -> Tuple[str, List[str]]:
     """
     Convert the list of tokens to a string suitable for passing to sqlite's MATCH.
     Assume sqlite was compiled with enhanced query syntax.
 
+    Returns the sqlite-formatted query string and the tokenized search terms
+    that can be used as highlights.
+
     Ref: https://www.sqlite.org/fts3.html#full_text_index_queries
     """
     match_query = []
+    highlights = []
     for token in tokens:
         if isinstance(token, str):
             match_query.append(token)
highlights.append(token)
elif isinstance(token, Phrase): elif isinstance(token, Phrase):
match_query.append('"' + " ".join(token.phrase) + '"') match_query.append('"' + " ".join(token.phrase) + '"')
highlights.append(" ".join(token.phrase))
elif token == SearchToken.Not: elif token == SearchToken.Not:
# TODO: SQLite treats NOT as a *binary* operator. Hopefully a search # TODO: SQLite treats NOT as a *binary* operator. Hopefully a search
# term has already been added before this. # term has already been added before this.
@ -916,11 +930,14 @@ def _tokens_to_sqlite_match_query(tokens: TokenList) -> str:
else: else:
raise ValueError(f"unknown token {token}") raise ValueError(f"unknown token {token}")
return "".join(match_query) return "".join(match_query), highlights
def _parse_query_for_sqlite(search_term: str) -> str: def _parse_query_for_sqlite(search_term: str) -> Tuple[str, List[str]]:
"""Takes a plain unicode string from the user and converts it into a form """Takes a plain unicode string from the user and converts it into a form
that can be passed to sqlite's matchinfo(). that can be passed to sqlite's matchinfo().
Returns the converted query string and the tokenized search terms
that can be used as highlights.
""" """
return _tokens_to_sqlite_match_query(_tokenize_query(search_term)) return _tokens_to_sqlite_match_query(_tokenize_query(search_term))
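A worked illustration of the new return shape (hedged: it assumes the tokenizer inserts an implicit AND between bare words, per the enhanced query syntax noted above):

match_query, highlights = _parse_query_for_sqlite('hi "bob jones"')
# match_query -> 'hi AND "bob jones"'   (string handed to MATCH)
# highlights  -> ['hi', 'bob jones']    (terms returned as search highlights)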


@ -660,6 +660,7 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
limit=limit, limit=limit,
retcols=("room_id", "stream_ordering"), retcols=("room_id", "stream_ordering"),
order_direction=order, order_direction=order,
keyvalues={"destination": destination},
), ),
) )
return rooms, count return rooms, count


@ -19,7 +19,7 @@
# #
# #
SCHEMA_VERSION = 84 # remember to update the list below when updating SCHEMA_VERSION = 85 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema """Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the This should be incremented whenever the codebase changes its requirements on the
@ -136,6 +136,9 @@ Changes in SCHEMA_VERSION = 83
Changes in SCHEMA_VERSION = 84 Changes in SCHEMA_VERSION = 84
- No longer assumes that `event_auth_chain_links` holds transitive links, and - No longer assumes that `event_auth_chain_links` holds transitive links, and
so read operations must do graph traversal. so read operations must do graph traversal.
Changes in SCHEMA_VERSION = 85
- Add a column `suspended` to the `users` table
""" """


@ -0,0 +1,14 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
ALTER TABLE users ADD COLUMN suspended BOOLEAN DEFAULT FALSE NOT NULL;


@ -1156,6 +1156,7 @@ class UserInfo:
user_type: User type (None for normal user, 'support' and 'bot' other options). user_type: User type (None for normal user, 'support' and 'bot' other options).
approved: If the user has been "approved" to register on the server. approved: If the user has been "approved" to register on the server.
locked: Whether the user's account has been locked locked: Whether the user's account has been locked
suspended: Whether the user's account is currently suspended
""" """
user_id: UserID user_id: UserID
@ -1171,6 +1172,7 @@ class UserInfo:
is_shadow_banned: bool is_shadow_banned: bool
approved: bool approved: bool
locked: bool locked: bool
suspended: bool
class UserProfile(TypedDict): class UserProfile(TypedDict):


@ -115,7 +115,7 @@ class StreamChangeCache:
""" """
new_size = math.floor(self._original_max_size * factor) new_size = math.floor(self._original_max_size * factor)
if new_size != self._max_size: if new_size != self._max_size:
self.max_size = new_size self._max_size = new_size
self._evict() self._evict()
return True return True
return False return False
@ -165,7 +165,7 @@ class StreamChangeCache:
return False return False
def get_entities_changed( def get_entities_changed(
self, entities: Collection[EntityType], stream_pos: int self, entities: Collection[EntityType], stream_pos: int, _perf_factor: int = 1
) -> Union[Set[EntityType], FrozenSet[EntityType]]: ) -> Union[Set[EntityType], FrozenSet[EntityType]]:
""" """
Returns the subset of the given entities that have had changes after the given position. Returns the subset of the given entities that have had changes after the given position.
@ -177,6 +177,8 @@ class StreamChangeCache:
Args: Args:
entities: Entities to check for changes. entities: Entities to check for changes.
stream_pos: The stream position to check for changes after. stream_pos: The stream position to check for changes after.
_perf_factor: Used by unit tests to choose when to use each
optimisation.
Return: Return:
A subset of entities which have changed after the given stream position. A subset of entities which have changed after the given stream position.
@ -184,6 +186,22 @@ class StreamChangeCache:
This will be all entities if the given stream position is at or earlier This will be all entities if the given stream position is at or earlier
than the earliest known stream position. than the earliest known stream position.
""" """
if not self._cache or stream_pos <= self._earliest_known_stream_pos:
self.metrics.inc_misses()
return set(entities)
# If there have been tonnes of changes compared with the number of
# entities, it is faster to check each entity's stream ordering
# one-by-one.
max_stream_pos, _ = self._cache.peekitem()
if max_stream_pos - stream_pos > _perf_factor * len(entities):
self.metrics.inc_hits()
return {
entity
for entity in entities
if self._entity_to_key.get(entity, -1) > stream_pos
}
cache_result = self.get_all_entities_changed(stream_pos) cache_result = self.get_all_entities_changed(stream_pos)
if cache_result.hit: if cache_result.hit:
# We now do an intersection, trying to do so in the most efficient # We now do an intersection, trying to do so in the most efficient
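The heuristic trades one scan for another: walking every change since `stream_pos` costs on the order of the number of changes, whereas probing `_entity_to_key` costs one lookup per queried entity, so the probe wins when changes vastly outnumber entities. A small usage sketch, mirroring the unit tests at the end of this commit:

from synapse.util.caches.stream_change_cache import StreamChangeCache

cache = StreamChangeCache("ExampleCache", current_stream_pos=1)
cache.entity_has_changed("user@foo.com", 2)
cache.entity_has_changed("bar@baz.net", 3)

# Only the entity that changed after position 2 is returned.
assert cache.get_entities_changed(
    ["user@foo.com", "bar@baz.net"], stream_pos=2
) == {"bar@baz.net"}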


@ -36,10 +36,15 @@ from typing import (
import attr import attr
from synapse.api.constants import EventTypes, HistoryVisibility, Membership from synapse.api.constants import (
EventTypes,
EventUnsignedContentFields,
HistoryVisibility,
Membership,
)
from synapse.events import EventBase from synapse.events import EventBase
from synapse.events.snapshot import EventContext from synapse.events.snapshot import EventContext
from synapse.events.utils import prune_event from synapse.events.utils import clone_event, prune_event
from synapse.logging.opentracing import trace from synapse.logging.opentracing import trace
from synapse.storage.controllers import StorageControllers from synapse.storage.controllers import StorageControllers
from synapse.storage.databases.main import DataStore from synapse.storage.databases.main import DataStore
@ -77,6 +82,7 @@ async def filter_events_for_client(
is_peeking: bool = False, is_peeking: bool = False,
always_include_ids: FrozenSet[str] = frozenset(), always_include_ids: FrozenSet[str] = frozenset(),
filter_send_to_client: bool = True, filter_send_to_client: bool = True,
msc4115_membership_on_events: bool = False,
) -> List[EventBase]: ) -> List[EventBase]:
""" """
Check which events a user is allowed to see. If the user can see the event but its Check which events a user is allowed to see. If the user can see the event but its
@ -95,9 +101,12 @@ async def filter_events_for_client(
filter_send_to_client: Whether we're checking an event that's going to be filter_send_to_client: Whether we're checking an event that's going to be
sent to a client. This might not always be the case since this function can sent to a client. This might not always be the case since this function can
also be called to check whether a user can see the state at a given point. also be called to check whether a user can see the state at a given point.
msc4115_membership_on_events: Whether to include the requesting user's
membership in the "unsigned" data, per MSC4115.
Returns: Returns:
The filtered events. The filtered events. If `msc4115_membership_on_events` is true, the `unsigned`
data is annotated with the membership state of `user_id` at each event.
""" """
# Filter out events that have been soft failed so that we don't relay them # Filter out events that have been soft failed so that we don't relay them
# to clients. # to clients.
@ -138,7 +147,8 @@ async def filter_events_for_client(
filter_override = user_id in storage.hs.config.meow.filter_override filter_override = user_id in storage.hs.config.meow.filter_override
def allowed(event: EventBase) -> Optional[EventBase]: def allowed(event: EventBase) -> Optional[EventBase]:
return _check_client_allowed_to_see_event( state_after_event = event_id_to_state.get(event.event_id)
filtered = _check_client_allowed_to_see_event(
user_id=user_id, user_id=user_id,
event=event, event=event,
clock=storage.main.clock, clock=storage.main.clock,
@ -146,14 +156,46 @@ async def filter_events_for_client(
sender_ignored=event.sender in ignore_list, sender_ignored=event.sender in ignore_list,
always_include_ids=always_include_ids, always_include_ids=always_include_ids,
retention_policy=retention_policies[room_id], retention_policy=retention_policies[room_id],
state=event_id_to_state.get(event.event_id), state=state_after_event,
is_peeking=is_peeking, is_peeking=is_peeking,
sender_erased=erased_senders.get(event.sender, False), sender_erased=erased_senders.get(event.sender, False),
filter_override=filter_override, filter_override=filter_override,
) )
if filtered is None:
return None
# Check each event: gives an iterable of None or (a potentially modified) if not msc4115_membership_on_events:
# EventBase. return filtered
# Annotate the event with the user's membership after the event.
#
# Normally we just look in `state_after_event`, but if the event is an outlier
# we won't have such a state. The only outliers that are returned here are the
# user's own membership event, so we can just inspect that.
user_membership_event: Optional[EventBase]
if event.type == EventTypes.Member and event.state_key == user_id:
user_membership_event = event
elif state_after_event is not None:
user_membership_event = state_after_event.get((EventTypes.Member, user_id))
else:
# unreachable!
raise Exception("Missing state for event that is not user's own membership")
user_membership = (
user_membership_event.membership
if user_membership_event
else Membership.LEAVE
)
# Copy the event before updating the unsigned data: this shouldn't be persisted
# to the cache!
cloned = clone_event(filtered)
cloned.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP] = user_membership
return cloned
# Check each event: gives an iterable of None or a (potentially modified) EventBase.
filtered_events = map(allowed, events) filtered_events = map(allowed, events)
# Turn it into a list and remove None entries before returning. # Turn it into a list and remove None entries before returning.
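A hedged caller-side sketch of the new flag, reading the annotation back via the constant used by the tests in this commit:

from synapse.api.constants import EventUnsignedContentFields
from synapse.visibility import filter_events_for_client

async def memberships_for(storage_controllers, events_to_filter):
    events = await filter_events_for_client(
        storage_controllers,
        "@alice:test",
        events_to_filter,
        msc4115_membership_on_events=True,
    )
    # Each value is e.g. "join", "invite" or "leave" at the point of the event.
    return [
        ev.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP] for ev in events
    ]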
@ -406,7 +448,13 @@ def _check_client_allowed_to_see_event(
@attr.s(frozen=True, slots=True, auto_attribs=True) @attr.s(frozen=True, slots=True, auto_attribs=True)
class _CheckMembershipReturn: class _CheckMembershipReturn:
"Return value of _check_membership" """Return value of `_check_membership`.
Attributes:
allowed: Whether the user should be allowed to see the event.
joined: Whether the user was joined to the room at the event.
"""
allowed: bool allowed: bool
joined: bool joined: bool
@ -418,12 +466,7 @@ def _check_membership(
state: StateMap[EventBase], state: StateMap[EventBase],
is_peeking: bool, is_peeking: bool,
) -> _CheckMembershipReturn: ) -> _CheckMembershipReturn:
"""Check whether the user can see the event due to their membership """Check whether the user can see the event due to their membership"""
Returns:
True if they can, False if they can't, plus the membership of the user
at the event.
"""
# If the event is the user's own membership event, use the 'most joined' # If the event is the user's own membership event, use the 'most joined'
# membership # membership
membership = None membership = None
@ -445,7 +488,7 @@ def _check_membership(
if membership == "leave" and ( if membership == "leave" and (
prev_membership == "join" or prev_membership == "invite" prev_membership == "join" or prev_membership == "invite"
): ):
return _CheckMembershipReturn(True, membership == Membership.JOIN) return _CheckMembershipReturn(True, False)
new_priority = MEMBERSHIP_PRIORITY.index(membership) new_priority = MEMBERSHIP_PRIORITY.index(membership)
old_priority = MEMBERSHIP_PRIORITY.index(prev_membership) old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)


@ -32,6 +32,7 @@ from synapse.events.utils import (
PowerLevelsContent, PowerLevelsContent,
SerializeEventConfig, SerializeEventConfig,
_split_field, _split_field,
clone_event,
copy_and_fixup_power_levels_contents, copy_and_fixup_power_levels_contents,
maybe_upsert_event_field, maybe_upsert_event_field,
prune_event, prune_event,
@ -611,6 +612,29 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
) )
class CloneEventTestCase(stdlib_unittest.TestCase):
def test_unsigned_is_copied(self) -> None:
original = make_event_from_dict(
{
"type": "A",
"event_id": "$test:domain",
"unsigned": {"a": 1, "b": 2},
},
RoomVersions.V1,
{"txn_id": "txn"},
)
original.internal_metadata.stream_ordering = 1234
self.assertEqual(original.internal_metadata.stream_ordering, 1234)
cloned = clone_event(original)
cloned.unsigned["b"] = 3
self.assertEqual(original.unsigned, {"a": 1, "b": 2})
self.assertEqual(cloned.unsigned, {"a": 1, "b": 3})
self.assertEqual(cloned.internal_metadata.stream_ordering, 1234)
self.assertEqual(cloned.internal_metadata.txn_id, "txn")
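The point of `clone_event` here is isolation: events handed out by Synapse's caches are shared objects, so annotating `unsigned` in place would leak one user's MSC4115 membership to every other requester. A minimal sketch of the contract the test above verifies (`cached_event` is hypothetical):

cloned = clone_event(cached_event)
cloned.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP] = "join"
# The original, shared event is untouched.
assert EventUnsignedContentFields.MSC4115_MEMBERSHIP not in cached_event.unsigned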
class SerializeEventTestCase(stdlib_unittest.TestCase): class SerializeEventTestCase(stdlib_unittest.TestCase):
def serialize(self, ev: EventBase, fields: Optional[List[str]]) -> JsonDict: def serialize(self, ev: EventBase, fields: Optional[List[str]]) -> JsonDict:
return serialize_event( return serialize_event(


@ -67,6 +67,23 @@ class FederationServerTests(unittest.FederatingHomeserverTestCase):
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result) self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(channel.json_body["errcode"], "M_NOT_JSON") self.assertEqual(channel.json_body["errcode"], "M_NOT_JSON")
def test_failed_edu_causes_500(self) -> None:
"""If the EDU handler fails, /send should return a 500."""
async def failing_handler(_origin: str, _content: JsonDict) -> None:
raise Exception("bleh")
self.hs.get_federation_registry().register_edu_handler(
"FAIL_EDU_TYPE", failing_handler
)
channel = self.make_signed_federation_request(
"PUT",
"/_matrix/federation/v1/send/txn",
{"edus": [{"edu_type": "FAIL_EDU_TYPE", "content": {}}]},
)
self.assertEqual(500, channel.code, channel.result)
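For context, EDU handlers are plain async callables keyed by EDU type; a hedged sketch of the registration API the test exercises, with a hypothetical EDU type (`hs` being the HomeServer):

from synapse.types import JsonDict

async def on_example_edu(origin: str, content: JsonDict) -> None:
    # Handle one EDU from `origin`; an exception raised here now surfaces
    # as a 500 on /send rather than being swallowed.
    ...

hs.get_federation_registry().register_edu_handler("com.example.edu", on_example_edu)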
class ServerACLsTestCase(unittest.TestCase): class ServerACLsTestCase(unittest.TestCase):
def test_blocked_server(self) -> None: def test_blocked_server(self) -> None:


@ -59,7 +59,14 @@ class RoomDirectoryFederationTests(unittest.FederatingHomeserverTestCase):
"/_matrix/federation/v1/send/txn_id_1234/", "/_matrix/federation/v1/send/txn_id_1234/",
content={ content={
"edus": [ "edus": [
{"edu_type": EduTypes.DEVICE_LIST_UPDATE, "content": {"foo": "bar"}} {
"edu_type": EduTypes.DEVICE_LIST_UPDATE,
"content": {
"device_id": "QBUAZIFURK",
"stream_id": 0,
"user_id": "@user:id",
},
},
], ],
"pdus": [], "pdus": [],
}, },


@ -778,19 +778,80 @@ class DestinationMembershipTestCase(unittest.HomeserverTestCase):
self.assertEqual(number_rooms, len(channel.json_body["rooms"])) self.assertEqual(number_rooms, len(channel.json_body["rooms"]))
self._check_fields(channel.json_body["rooms"]) self._check_fields(channel.json_body["rooms"])
def _create_destination_rooms(self, number_rooms: int) -> None: def test_room_filtering(self) -> None:
"""Create a number rooms for destination """Tests that rooms are correctly filtered"""
# Create two rooms on the homeserver. Each has a different remote homeserver
# participating in it.
other_destination = "other.destination.org"
room_ids_self_dest = self._create_destination_rooms(2, destination=self.dest)
room_ids_other_dest = self._create_destination_rooms(
1, destination=other_destination
)
# Ask for the rooms that `self.dest` is participating in.
channel = self.make_request("GET", self.url, access_token=self.admin_user_tok)
self.assertEqual(200, channel.code, msg=channel.json_body)
# Verify that we received only the rooms that `self.dest` is participating in.
# Despite its name, assertCountEqual checks that both lists contain the
# same items with the same multiplicities, regardless of order.
self.assertCountEqual(
[r["room_id"] for r in channel.json_body["rooms"]], room_ids_self_dest
)
self.assertEqual(channel.json_body["total"], len(room_ids_self_dest))
# Ask for the rooms that `other_destination` is participating in.
channel = self.make_request(
"GET",
self.url.replace(self.dest, other_destination),
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
# Verify that we received only the rooms that `other_destination` is
# participating in.
self.assertCountEqual(
[r["room_id"] for r in channel.json_body["rooms"]], room_ids_other_dest
)
self.assertEqual(channel.json_body["total"], len(room_ids_other_dest))
def _create_destination_rooms(
self,
number_rooms: int,
destination: Optional[str] = None,
) -> List[str]:
"""
Create the given number of rooms. The given `destination` homeserver will
be recorded as a participant.
Args: Args:
number_rooms: Number of rooms to be created number_rooms: Number of rooms to be created
destination: The domain of the homeserver that will be considered
as a participant in the rooms.
Returns:
The IDs of the rooms that have been created.
""" """
room_ids = []
# If no destination was provided, default to `self.dest`.
if destination is None:
destination = self.dest
for _ in range(number_rooms): for _ in range(number_rooms):
room_id = self.helper.create_room_as( room_id = self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok self.admin_user, tok=self.admin_user_tok
) )
room_ids.append(room_id)
self.get_success( self.get_success(
self.store.store_destination_rooms_entries((self.dest,), room_id, 1234) self.store.store_destination_rooms_entries(
(destination,), room_id, 1234
) )
)
return room_ids
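For reference, these cases drive the Destination Rooms Admin API; a hedged sketch of the request shape (endpoint per Synapse's federation admin API documentation):

channel = self.make_request(
    "GET",
    "/_synapse/admin/v1/federation/destinations/other.destination.org/rooms",
    access_token=self.admin_user_tok,
)
# channel.json_body is expected to carry {"rooms": [...], "total": ...}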
def _check_fields(self, content: List[JsonDict]) -> None: def _check_fields(self, content: List[JsonDict]) -> None:
"""Checks that the expected room attributes are present in content """Checks that the expected room attributes are present in content


@ -20,7 +20,17 @@
# #
import time import time
import urllib.parse import urllib.parse
from typing import Any, Collection, Dict, List, Optional, Tuple, Union from typing import (
Any,
BinaryIO,
Callable,
Collection,
Dict,
List,
Optional,
Tuple,
Union,
)
from unittest.mock import Mock from unittest.mock import Mock
from urllib.parse import urlencode from urllib.parse import urlencode
@ -34,8 +44,9 @@ import synapse.rest.admin
from synapse.api.constants import ApprovalNoticeMedium, LoginType from synapse.api.constants import ApprovalNoticeMedium, LoginType
from synapse.api.errors import Codes from synapse.api.errors import Codes
from synapse.appservice import ApplicationService from synapse.appservice import ApplicationService
from synapse.http.client import RawHeaders
from synapse.module_api import ModuleApi from synapse.module_api import ModuleApi
from synapse.rest.client import devices, login, logout, register from synapse.rest.client import account, devices, login, logout, profile, register
from synapse.rest.client.account import WhoamiRestServlet from synapse.rest.client.account import WhoamiRestServlet
from synapse.rest.synapse.client import build_synapse_client_resource_tree from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.server import HomeServer from synapse.server import HomeServer
@ -48,6 +59,7 @@ from tests.handlers.test_saml import has_saml2
from tests.rest.client.utils import TEST_OIDC_CONFIG from tests.rest.client.utils import TEST_OIDC_CONFIG
from tests.server import FakeChannel from tests.server import FakeChannel
from tests.test_utils.html_parsers import TestHtmlParser from tests.test_utils.html_parsers import TestHtmlParser
from tests.test_utils.oidc import FakeOidcServer
from tests.unittest import HomeserverTestCase, override_config, skip_unless from tests.unittest import HomeserverTestCase, override_config, skip_unless
try: try:
@ -1421,7 +1433,19 @@ class AppserviceLoginRestServletTestCase(unittest.HomeserverTestCase):
class UsernamePickerTestCase(HomeserverTestCase): class UsernamePickerTestCase(HomeserverTestCase):
"""Tests for the username picker flow of SSO login""" """Tests for the username picker flow of SSO login"""
servlets = [login.register_servlets] servlets = [
login.register_servlets,
profile.register_servlets,
account.register_servlets,
]
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
self.http_client = Mock(spec=["get_file"])
self.http_client.get_file.side_effect = mock_get_file
hs = self.setup_test_homeserver(
proxied_blocklisted_http_client=self.http_client
)
return hs
def default_config(self) -> Dict[str, Any]: def default_config(self) -> Dict[str, Any]:
config = super().default_config() config = super().default_config()
@ -1430,7 +1454,11 @@ class UsernamePickerTestCase(HomeserverTestCase):
config["oidc_config"] = {} config["oidc_config"] = {}
config["oidc_config"].update(TEST_OIDC_CONFIG) config["oidc_config"].update(TEST_OIDC_CONFIG)
config["oidc_config"]["user_mapping_provider"] = { config["oidc_config"]["user_mapping_provider"] = {
"config": {"display_name_template": "{{ user.displayname }}"} "config": {
"display_name_template": "{{ user.displayname }}",
"email_template": "{{ user.email }}",
"picture_template": "{{ user.picture }}",
}
} }
# whitelist this client URI so we redirect straight to it rather than # whitelist this client URI so we redirect straight to it rather than
@ -1443,15 +1471,22 @@ class UsernamePickerTestCase(HomeserverTestCase):
d.update(build_synapse_client_resource_tree(self.hs)) d.update(build_synapse_client_resource_tree(self.hs))
return d return d
def test_username_picker(self) -> None: def proceed_to_username_picker_page(
"""Test the happy path of a username picker flow.""" self,
fake_oidc_server: FakeOidcServer,
fake_oidc_server = self.helper.fake_oidc_server() displayname: str,
email: str,
picture: str,
) -> Tuple[str, str]:
# do the start of the login flow # do the start of the login flow
channel, _ = self.helper.auth_via_oidc( channel, _ = self.helper.auth_via_oidc(
fake_oidc_server, fake_oidc_server,
{"sub": "tester", "displayname": "Jonny"}, {
"sub": "tester",
"displayname": displayname,
"picture": picture,
"email": email,
},
TEST_CLIENT_REDIRECT_URL, TEST_CLIENT_REDIRECT_URL,
) )
@ -1478,16 +1513,42 @@ class UsernamePickerTestCase(HomeserverTestCase):
) )
session = username_mapping_sessions[session_id] session = username_mapping_sessions[session_id]
self.assertEqual(session.remote_user_id, "tester") self.assertEqual(session.remote_user_id, "tester")
self.assertEqual(session.display_name, "Jonny") self.assertEqual(session.display_name, displayname)
self.assertEqual(session.emails, [email])
self.assertEqual(session.avatar_url, picture)
self.assertEqual(session.client_redirect_url, TEST_CLIENT_REDIRECT_URL) self.assertEqual(session.client_redirect_url, TEST_CLIENT_REDIRECT_URL)
# the expiry time should be about 15 minutes away # the expiry time should be about 15 minutes away
expected_expiry = self.clock.time_msec() + (15 * 60 * 1000) expected_expiry = self.clock.time_msec() + (15 * 60 * 1000)
self.assertApproximates(session.expiry_time_ms, expected_expiry, tolerance=1000) self.assertApproximates(session.expiry_time_ms, expected_expiry, tolerance=1000)
return picker_url, session_id
def test_username_picker_use_displayname_avatar_and_email(self) -> None:
"""Test the happy path of a username picker flow with using displayname, avatar and email."""
fake_oidc_server = self.helper.fake_oidc_server()
mxid = "@bobby:test"
displayname = "Jonny"
email = "bobby@test.com"
picture = "mxc://test/avatar_url"
picker_url, session_id = self.proceed_to_username_picker_page(
fake_oidc_server, displayname, email, picture
)
# Now, submit a username to the username picker, which should serve a redirect # Now, submit a username to the username picker, which should serve a redirect
# to the completion page # to the completion page.
content = urlencode({b"username": b"bobby"}).encode("utf8") # Also specify that we should use the provided displayname, avatar and email.
content = urlencode(
{
b"username": b"bobby",
b"use_display_name": b"true",
b"use_avatar": b"true",
b"use_email": email,
}
).encode("utf8")
chan = self.make_request( chan = self.make_request(
"POST", "POST",
path=picker_url, path=picker_url,
@ -1536,4 +1597,119 @@ class UsernamePickerTestCase(HomeserverTestCase):
content={"type": "m.login.token", "token": login_token}, content={"type": "m.login.token", "token": login_token},
) )
self.assertEqual(chan.code, 200, chan.result) self.assertEqual(chan.code, 200, chan.result)
self.assertEqual(chan.json_body["user_id"], "@bobby:test") self.assertEqual(chan.json_body["user_id"], mxid)
# ensure the displayname and avatar from the OIDC response have been configured for the user.
channel = self.make_request(
"GET", "/profile/" + mxid, access_token=chan.json_body["access_token"]
)
self.assertEqual(channel.code, 200, channel.result)
self.assertIn("mxc://test", channel.json_body["avatar_url"])
self.assertEqual(displayname, channel.json_body["displayname"])
# ensure the email from the OIDC response has been configured for the user.
channel = self.make_request(
"GET", "/account/3pid", access_token=chan.json_body["access_token"]
)
self.assertEqual(channel.code, 200, channel.result)
self.assertEqual(email, channel.json_body["threepids"][0]["address"])
def test_username_picker_dont_use_displayname_avatar_or_email(self) -> None:
"""Test the happy path of a username picker flow without using displayname, avatar or email."""
fake_oidc_server = self.helper.fake_oidc_server()
mxid = "@bobby:test"
displayname = "Jonny"
email = "bobby@test.com"
picture = "mxc://test/avatar_url"
username = "bobby"
picker_url, session_id = self.proceed_to_username_picker_page(
fake_oidc_server, displayname, email, picture
)
# Now, submit a username to the username picker, which should serve a redirect
# to the completion page.
# Also specify that we should not use the provided displayname, avatar or email.
content = urlencode(
{
b"username": username,
b"use_display_name": b"false",
b"use_avatar": b"false",
}
).encode("utf8")
chan = self.make_request(
"POST",
path=picker_url,
content=content,
content_is_form=True,
custom_headers=[
("Cookie", "username_mapping_session=" + session_id),
# old versions of twisted don't do form-parsing without a valid
# content-length header.
("Content-Length", str(len(content))),
],
)
self.assertEqual(chan.code, 302, chan.result)
location_headers = chan.headers.getRawHeaders("Location")
assert location_headers
# send a request to the completion page, which should 302 to the client redirectUrl
chan = self.make_request(
"GET",
path=location_headers[0],
custom_headers=[("Cookie", "username_mapping_session=" + session_id)],
)
self.assertEqual(chan.code, 302, chan.result)
location_headers = chan.headers.getRawHeaders("Location")
assert location_headers
# ensure that the returned location matches the requested redirect URL
path, query = location_headers[0].split("?", 1)
self.assertEqual(path, "https://x")
# it will have url-encoded the params properly, so we'll have to parse them
params = urllib.parse.parse_qsl(
query, keep_blank_values=True, strict_parsing=True, errors="strict"
)
self.assertEqual(params[0:2], EXPECTED_CLIENT_REDIRECT_URL_PARAMS)
self.assertEqual(params[2][0], "loginToken")
# fish the login token out of the returned redirect uri
login_token = params[2][1]
# finally, submit the matrix login token to the login API, which gives us our
# matrix access token, mxid, and device id.
chan = self.make_request(
"POST",
"/login",
content={"type": "m.login.token", "token": login_token},
)
self.assertEqual(chan.code, 200, chan.result)
self.assertEqual(chan.json_body["user_id"], mxid)
# ensure the displayname and avatar from the OIDC response have not been configured for the user.
channel = self.make_request(
"GET", "/profile/" + mxid, access_token=chan.json_body["access_token"]
)
self.assertEqual(channel.code, 200, channel.result)
self.assertNotIn("avatar_url", channel.json_body)
self.assertEqual(username, channel.json_body["displayname"])
# ensure the email from the OIDC response has not been configured for the user.
channel = self.make_request(
"GET", "/account/3pid", access_token=chan.json_body["access_token"]
)
self.assertEqual(channel.code, 200, channel.result)
self.assertListEqual([], channel.json_body["threepids"])
async def mock_get_file(
url: str,
output_stream: BinaryIO,
max_size: Optional[int] = None,
headers: Optional[RawHeaders] = None,
is_allowed_content_type: Optional[Callable[[str], bool]] = None,
) -> Tuple[int, Dict[bytes, List[bytes]], str, int]:
"""Stub for the HTTP client's `get_file`, so the avatar download in these
tests can succeed without network access; returns (length, headers, uri,
response_code)."""
return 0, {b"Content-Type": [b"image/png"]}, "", 200


@ -163,7 +163,12 @@ class RetentionTestCase(unittest.HomeserverTestCase):
) )
self.assertEqual(2, len(events), "events retrieved from database") self.assertEqual(2, len(events), "events retrieved from database")
filtered_events = self.get_success( filtered_events = self.get_success(
filter_events_for_client(storage_controllers, self.user_id, events) filter_events_for_client(
storage_controllers,
self.user_id,
events,
msc4115_membership_on_events=True,
)
) )
# We should only get one event back. # We should only get one event back.


@ -48,7 +48,16 @@ from synapse.appservice import ApplicationService
from synapse.events import EventBase from synapse.events import EventBase
from synapse.events.snapshot import EventContext from synapse.events.snapshot import EventContext
from synapse.rest import admin from synapse.rest import admin
from synapse.rest.client import account, directory, login, profile, register, room, sync from synapse.rest.client import (
account,
directory,
knock,
login,
profile,
register,
room,
sync,
)
from synapse.server import HomeServer from synapse.server import HomeServer
from synapse.types import JsonDict, RoomAlias, UserID, create_requester from synapse.types import JsonDict, RoomAlias, UserID, create_requester
from synapse.util import Clock from synapse.util import Clock
@ -733,7 +742,7 @@ class RoomsCreateTestCase(RoomBase):
self.assertEqual(HTTPStatus.OK, channel.code, channel.result) self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertTrue("room_id" in channel.json_body) self.assertTrue("room_id" in channel.json_body)
assert channel.resource_usage is not None assert channel.resource_usage is not None
self.assertEqual(32, channel.resource_usage.db_txn_count) self.assertEqual(33, channel.resource_usage.db_txn_count)
def test_post_room_initial_state(self) -> None: def test_post_room_initial_state(self) -> None:
# POST with initial_state config key, expect new room id # POST with initial_state config key, expect new room id
@ -746,7 +755,7 @@ class RoomsCreateTestCase(RoomBase):
self.assertEqual(HTTPStatus.OK, channel.code, channel.result) self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertTrue("room_id" in channel.json_body) self.assertTrue("room_id" in channel.json_body)
assert channel.resource_usage is not None assert channel.resource_usage is not None
self.assertEqual(34, channel.resource_usage.db_txn_count) self.assertEqual(35, channel.resource_usage.db_txn_count)
def test_post_room_visibility_key(self) -> None: def test_post_room_visibility_key(self) -> None:
# POST with visibility config key, expect new room id # POST with visibility config key, expect new room id
@ -1154,6 +1163,7 @@ class RoomJoinTestCase(RoomBase):
admin.register_servlets, admin.register_servlets,
login.register_servlets, login.register_servlets,
room.register_servlets, room.register_servlets,
knock.register_servlets,
] ]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None: def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
@ -1167,6 +1177,8 @@ class RoomJoinTestCase(RoomBase):
self.room2 = self.helper.create_room_as(room_creator=self.user1, tok=self.tok1) self.room2 = self.helper.create_room_as(room_creator=self.user1, tok=self.tok1)
self.room3 = self.helper.create_room_as(room_creator=self.user1, tok=self.tok1) self.room3 = self.helper.create_room_as(room_creator=self.user1, tok=self.tok1)
self.store = hs.get_datastores().main
def test_spam_checker_may_join_room_deprecated(self) -> None: def test_spam_checker_may_join_room_deprecated(self) -> None:
"""Tests that the user_may_join_room spam checker callback is correctly called """Tests that the user_may_join_room spam checker callback is correctly called
and blocks room joins when needed. and blocks room joins when needed.
@ -1317,6 +1329,57 @@ class RoomJoinTestCase(RoomBase):
expect_additional_fields=return_value[1], expect_additional_fields=return_value[1],
) )
def test_suspended_user_cannot_join_room(self) -> None:
# set the user as suspended
self.get_success(self.store.set_user_suspended_status(self.user2, True))
channel = self.make_request(
"POST", f"/join/{self.room1}", access_token=self.tok2
)
self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
channel = self.make_request(
"POST", f"/rooms/{self.room1}/join", access_token=self.tok2
)
self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
def test_suspended_user_cannot_knock_on_room(self) -> None:
# set the user as suspended
self.get_success(self.store.set_user_suspended_status(self.user2, True))
channel = self.make_request(
"POST",
f"/_matrix/client/v3/knock/{self.room1}",
access_token=self.tok2,
content={},
shorthand=False,
)
self.assertEqual(channel.code, 403)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
def test_suspended_user_cannot_invite_to_room(self) -> None:
# set the user as suspended
self.get_success(self.store.set_user_suspended_status(self.user1, True))
# first user invites second user
channel = self.make_request(
"POST",
f"/rooms/{self.room1}/invite",
access_token=self.tok1,
content={"user_id": self.user2},
)
self.assertEqual(
channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
)
class RoomAppserviceTsParamTestCase(unittest.HomeserverTestCase): class RoomAppserviceTsParamTestCase(unittest.HomeserverTestCase):
servlets = [ servlets = [

View File

@ -43,7 +43,6 @@ class RegistrationStoreTestCase(HomeserverTestCase):
self.assertEqual( self.assertEqual(
UserInfo( UserInfo(
# TODO(paul): Surely this field should be 'user_id', not 'name'
user_id=UserID.from_string(self.user_id), user_id=UserID.from_string(self.user_id),
is_admin=False, is_admin=False,
is_guest=False, is_guest=False,
@ -57,6 +56,7 @@ class RegistrationStoreTestCase(HomeserverTestCase):
locked=False, locked=False,
is_shadow_banned=False, is_shadow_banned=False,
approved=True, approved=True,
suspended=False,
), ),
(self.get_success(self.store.get_user_by_id(self.user_id))), (self.get_success(self.store.get_user_by_id(self.user_id))),
) )


@ -71,7 +71,6 @@ class EventSearchInsertionTest(HomeserverTestCase):
store.search_msgs([room_id], "hi bob", ["content.body"]) store.search_msgs([room_id], "hi bob", ["content.body"])
) )
self.assertEqual(result.get("count"), 1) self.assertEqual(result.get("count"), 1)
if isinstance(store.database_engine, PostgresEngine):
self.assertIn("hi", result.get("highlights")) self.assertIn("hi", result.get("highlights"))
self.assertIn("bob", result.get("highlights")) self.assertIn("bob", result.get("highlights"))
@ -80,7 +79,7 @@ class EventSearchInsertionTest(HomeserverTestCase):
store.search_msgs([room_id], "another", ["content.body"]) store.search_msgs([room_id], "another", ["content.body"])
) )
self.assertEqual(result.get("count"), 1) self.assertEqual(result.get("count"), 1)
if isinstance(store.database_engine, PostgresEngine):
self.assertIn("another", result.get("highlights")) self.assertIn("another", result.get("highlights"))
# Check that search works for a search term that overlaps with the message # Check that search works for a search term that overlaps with the message
@ -90,7 +89,7 @@ class EventSearchInsertionTest(HomeserverTestCase):
result = self.get_success( result = self.get_success(
store.search_msgs([room_id], "hi alice", ["content.body"]) store.search_msgs([room_id], "hi alice", ["content.body"])
) )
if isinstance(store.database_engine, PostgresEngine):
self.assertIn("alice", result.get("highlights")) self.assertIn("alice", result.get("highlights"))
def test_non_string(self) -> None: def test_non_string(self) -> None:


@ -21,13 +21,19 @@ import logging
from typing import Optional from typing import Optional
from unittest.mock import patch from unittest.mock import patch
from synapse.api.constants import EventUnsignedContentFields
from synapse.api.room_versions import RoomVersions from synapse.api.room_versions import RoomVersions
from synapse.events import EventBase, make_event_from_dict from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext from synapse.events.snapshot import EventContext
from synapse.types import JsonDict, create_requester from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.types import create_requester
from synapse.visibility import filter_events_for_client, filter_events_for_server from synapse.visibility import filter_events_for_client, filter_events_for_server
from tests import unittest from tests import unittest
from tests.test_utils.event_injection import inject_event, inject_member_event
from tests.unittest import HomeserverTestCase
from tests.utils import create_room from tests.utils import create_room
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -56,15 +62,31 @@ class FilterEventsForServerTestCase(unittest.HomeserverTestCase):
# #
# before we do that, we persist some other events to act as state. # before we do that, we persist some other events to act as state.
self._inject_visibility("@admin:hs", "joined") self.get_success(
inject_visibility_event(self.hs, TEST_ROOM_ID, "@admin:hs", "joined")
)
for i in range(10): for i in range(10):
self._inject_room_member("@resident%i:hs" % i) self.get_success(
inject_member_event(
self.hs,
TEST_ROOM_ID,
"@resident%i:hs" % i,
"join",
)
)
events_to_filter = [] events_to_filter = []
for i in range(10): for i in range(10):
user = "@user%i:%s" % (i, "test_server" if i == 5 else "other_server") evt = self.get_success(
evt = self._inject_room_member(user, extra_content={"a": "b"}) inject_member_event(
self.hs,
TEST_ROOM_ID,
"@user%i:%s" % (i, "test_server" if i == 5 else "other_server"),
"join",
extra_content={"a": "b"},
)
)
events_to_filter.append(evt) events_to_filter.append(evt)
filtered = self.get_success( filtered = self.get_success(
@ -90,8 +112,19 @@ class FilterEventsForServerTestCase(unittest.HomeserverTestCase):
def test_filter_outlier(self) -> None: def test_filter_outlier(self) -> None:
# outlier events must be returned, for the good of the collective federation # outlier events must be returned, for the good of the collective federation
self._inject_room_member("@resident:remote_hs") self.get_success(
self._inject_visibility("@resident:remote_hs", "joined") inject_member_event(
self.hs,
TEST_ROOM_ID,
"@resident:remote_hs",
"join",
)
)
self.get_success(
inject_visibility_event(
self.hs, TEST_ROOM_ID, "@resident:remote_hs", "joined"
)
)
outlier = self._inject_outlier() outlier = self._inject_outlier()
self.assertEqual( self.assertEqual(
@ -110,7 +143,9 @@ class FilterEventsForServerTestCase(unittest.HomeserverTestCase):
) )
# it should also work when there are other events in the list # it should also work when there are other events in the list
evt = self._inject_message("@unerased:local_hs") evt = self.get_success(
inject_message_event(self.hs, TEST_ROOM_ID, "@unerased:local_hs")
)
filtered = self.get_success( filtered = self.get_success(
filter_events_for_server( filter_events_for_server(
@ -150,19 +185,34 @@ class FilterEventsForServerTestCase(unittest.HomeserverTestCase):
# change in the middle of them. # change in the middle of them.
events_to_filter = [] events_to_filter = []
evt = self._inject_message("@unerased:local_hs") evt = self.get_success(
inject_message_event(self.hs, TEST_ROOM_ID, "@unerased:local_hs")
)
events_to_filter.append(evt) events_to_filter.append(evt)
evt = self._inject_message("@erased:local_hs") evt = self.get_success(
inject_message_event(self.hs, TEST_ROOM_ID, "@erased:local_hs")
)
events_to_filter.append(evt) events_to_filter.append(evt)
evt = self._inject_room_member("@joiner:remote_hs") evt = self.get_success(
inject_member_event(
self.hs,
TEST_ROOM_ID,
"@joiner:remote_hs",
"join",
)
)
events_to_filter.append(evt) events_to_filter.append(evt)
evt = self._inject_message("@unerased:local_hs") evt = self.get_success(
inject_message_event(self.hs, TEST_ROOM_ID, "@unerased:local_hs")
)
events_to_filter.append(evt) events_to_filter.append(evt)
evt = self._inject_message("@erased:local_hs") evt = self.get_success(
inject_message_event(self.hs, TEST_ROOM_ID, "@erased:local_hs")
)
events_to_filter.append(evt) events_to_filter.append(evt)
# the erasey user gets erased # the erasey user gets erased
@ -200,76 +250,6 @@ class FilterEventsForServerTestCase(unittest.HomeserverTestCase):
for i in (1, 4): for i in (1, 4):
self.assertNotIn("body", filtered[i].content) self.assertNotIn("body", filtered[i].content)
def _inject_visibility(self, user_id: str, visibility: str) -> EventBase:
content = {"history_visibility": visibility}
builder = self.event_builder_factory.for_room_version(
RoomVersions.V1,
{
"type": "m.room.history_visibility",
"sender": user_id,
"state_key": "",
"room_id": TEST_ROOM_ID,
"content": content,
},
)
event, unpersisted_context = self.get_success(
self.event_creation_handler.create_new_client_event(builder)
)
context = self.get_success(unpersisted_context.persist(event))
self.get_success(self._persistence.persist_event(event, context))
return event
def _inject_room_member(
self,
user_id: str,
membership: str = "join",
extra_content: Optional[JsonDict] = None,
) -> EventBase:
content = {"membership": membership}
content.update(extra_content or {})
builder = self.event_builder_factory.for_room_version(
RoomVersions.V1,
{
"type": "m.room.member",
"sender": user_id,
"state_key": user_id,
"room_id": TEST_ROOM_ID,
"content": content,
},
)
event, unpersisted_context = self.get_success(
self.event_creation_handler.create_new_client_event(builder)
)
context = self.get_success(unpersisted_context.persist(event))
self.get_success(self._persistence.persist_event(event, context))
return event
def _inject_message(
self, user_id: str, content: Optional[JsonDict] = None
) -> EventBase:
if content is None:
content = {"body": "testytest", "msgtype": "m.text"}
builder = self.event_builder_factory.for_room_version(
RoomVersions.V1,
{
"type": "m.room.message",
"sender": user_id,
"room_id": TEST_ROOM_ID,
"content": content,
},
)
event, unpersisted_context = self.get_success(
self.event_creation_handler.create_new_client_event(builder)
)
context = self.get_success(unpersisted_context.persist(event))
self.get_success(self._persistence.persist_event(event, context))
return event
def _inject_outlier(self) -> EventBase: def _inject_outlier(self) -> EventBase:
builder = self.event_builder_factory.for_room_version( builder = self.event_builder_factory.for_room_version(
RoomVersions.V1, RoomVersions.V1,
@ -292,7 +272,122 @@ class FilterEventsForServerTestCase(unittest.HomeserverTestCase):
return event return event
class FilterEventsForClientTestCase(unittest.FederatingHomeserverTestCase): class FilterEventsForClientTestCase(HomeserverTestCase):
servlets = [
admin.register_servlets,
login.register_servlets,
room.register_servlets,
]
def test_joined_history_visibility(self) -> None:
# User joins and leaves room. Should be able to see the join and leave,
# and messages sent between the two, but not before or after.
self.register_user("resident", "p1")
resident_token = self.login("resident", "p1")
room_id = self.helper.create_room_as("resident", tok=resident_token)
self.get_success(
inject_visibility_event(self.hs, room_id, "@resident:test", "joined")
)
before_event = self.get_success(
inject_message_event(self.hs, room_id, "@resident:test", body="before")
)
join_event = self.get_success(
inject_member_event(self.hs, room_id, "@joiner:test", "join")
)
during_event = self.get_success(
inject_message_event(self.hs, room_id, "@resident:test", body="during")
)
leave_event = self.get_success(
inject_member_event(self.hs, room_id, "@joiner:test", "leave")
)
after_event = self.get_success(
inject_message_event(self.hs, room_id, "@resident:test", body="after")
)
# We have to reload the events from the db, to ensure that prev_content is
# populated.
events_to_filter = [
self.get_success(
self.hs.get_storage_controllers().main.get_event(
e.event_id,
get_prev_content=True,
)
)
for e in [
before_event,
join_event,
during_event,
leave_event,
after_event,
]
]
# Now run the events through the filter, and check that we can see the events
# we expect, and that the membership prop is as expected.
#
# We deliberately do the queries for both users upfront; this simulates
# concurrent queries on the server, and helps ensure that we aren't
# accidentally serving the same event object (with the same unsigned.membership
# property) to both users.
joiner_filtered_events = self.get_success(
filter_events_for_client(
self.hs.get_storage_controllers(),
"@joiner:test",
events_to_filter,
msc4115_membership_on_events=True,
)
)
resident_filtered_events = self.get_success(
filter_events_for_client(
self.hs.get_storage_controllers(),
"@resident:test",
events_to_filter,
msc4115_membership_on_events=True,
)
)
# The joiner should be able to see the join and leave,
# and messages sent between the two, but not before or after.
self.assertEqual(
[e.event_id for e in [join_event, during_event, leave_event]],
[e.event_id for e in joiner_filtered_events],
)
self.assertEqual(
["join", "join", "leave"],
[
e.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP]
for e in joiner_filtered_events
],
)
# The resident user should see all the events.
self.assertEqual(
[
e.event_id
for e in [
before_event,
join_event,
during_event,
leave_event,
after_event,
]
],
[e.event_id for e in resident_filtered_events],
)
self.assertEqual(
["join", "join", "join", "join", "join"],
[
e.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP]
for e in resident_filtered_events
],
)
class FilterEventsOutOfBandEventsForClientTestCase(
unittest.FederatingHomeserverTestCase
):
def test_out_of_band_invite_rejection(self) -> None: def test_out_of_band_invite_rejection(self) -> None:
# this is where we have received an invite event over federation, and then # this is where we have received an invite event over federation, and then
# rejected it. # rejected it.
@ -341,15 +436,24 @@ class FilterEventsForClientTestCase(unittest.FederatingHomeserverTestCase):
) )
# the invited user should be able to see both the invite and the rejection # the invited user should be able to see both the invite and the rejection
self.assertEqual( filtered_events = self.get_success(
self.get_success(
filter_events_for_client( filter_events_for_client(
self.hs.get_storage_controllers(), self.hs.get_storage_controllers(),
"@user:test", "@user:test",
[invite_event, reject_event], [invite_event, reject_event],
msc4115_membership_on_events=True,
) )
), )
[invite_event, reject_event], self.assertEqual(
[e.event_id for e in filtered_events],
[e.event_id for e in [invite_event, reject_event]],
)
self.assertEqual(
["invite", "leave"],
[
e.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP]
for e in filtered_events
],
) )
# other users should see neither # other users should see neither
@ -359,7 +463,39 @@ class FilterEventsForClientTestCase(unittest.FederatingHomeserverTestCase):
self.hs.get_storage_controllers(), self.hs.get_storage_controllers(),
"@other:test", "@other:test",
[invite_event, reject_event], [invite_event, reject_event],
msc4115_membership_on_events=True,
) )
), ),
[], [],
) )
async def inject_visibility_event(
hs: HomeServer,
room_id: str,
sender: str,
visibility: str,
) -> EventBase:
return await inject_event(
hs,
type="m.room.history_visibility",
sender=sender,
state_key="",
room_id=room_id,
content={"history_visibility": visibility},
)
async def inject_message_event(
hs: HomeServer,
room_id: str,
sender: str,
body: Optional[str] = "testytest",
) -> EventBase:
return await inject_event(
hs,
type="m.room.message",
sender=sender,
room_id=room_id,
content={"body": body, "msgtype": "m.text"},
)


@ -1,3 +1,5 @@
from parameterized import parameterized
from synapse.util.caches.stream_change_cache import StreamChangeCache from synapse.util.caches.stream_change_cache import StreamChangeCache
from tests import unittest from tests import unittest
@ -161,7 +163,8 @@ class StreamChangeCacheTests(unittest.HomeserverTestCase):
self.assertFalse(cache.has_any_entity_changed(2)) self.assertFalse(cache.has_any_entity_changed(2))
self.assertFalse(cache.has_any_entity_changed(3)) self.assertFalse(cache.has_any_entity_changed(3))
def test_get_entities_changed(self) -> None: @parameterized.expand([(0,), (1000000000,)])
def test_get_entities_changed(self, perf_factor: int) -> None:
""" """
StreamChangeCache.get_entities_changed will return the entities in the StreamChangeCache.get_entities_changed will return the entities in the
given list that have changed since the provided stream ID. If the given list that have changed since the provided stream ID. If the
@ -178,7 +181,9 @@ class StreamChangeCacheTests(unittest.HomeserverTestCase):
# get the ones after that point. # get the ones after that point.
self.assertEqual( self.assertEqual(
cache.get_entities_changed( cache.get_entities_changed(
["user@foo.com", "bar@baz.net", "user@elsewhere.org"], stream_pos=2 ["user@foo.com", "bar@baz.net", "user@elsewhere.org"],
stream_pos=2,
_perf_factor=perf_factor,
), ),
{"bar@baz.net", "user@elsewhere.org"}, {"bar@baz.net", "user@elsewhere.org"},
) )
@ -195,6 +200,7 @@ class StreamChangeCacheTests(unittest.HomeserverTestCase):
"not@here.website", "not@here.website",
], ],
stream_pos=2, stream_pos=2,
_perf_factor=perf_factor,
), ),
{"bar@baz.net", "user@elsewhere.org"}, {"bar@baz.net", "user@elsewhere.org"},
) )
@ -210,6 +216,7 @@ class StreamChangeCacheTests(unittest.HomeserverTestCase):
"not@here.website", "not@here.website",
], ],
stream_pos=0, stream_pos=0,
_perf_factor=perf_factor,
), ),
{"user@foo.com", "bar@baz.net", "user@elsewhere.org", "not@here.website"}, {"user@foo.com", "bar@baz.net", "user@elsewhere.org", "not@here.website"},
) )
@ -217,7 +224,11 @@ class StreamChangeCacheTests(unittest.HomeserverTestCase):
# Query a subset of the entries mid-way through the stream. We should # Query a subset of the entries mid-way through the stream. We should
# only get back the subset. # only get back the subset.
self.assertEqual( self.assertEqual(
cache.get_entities_changed(["bar@baz.net"], stream_pos=2), cache.get_entities_changed(
["bar@baz.net"],
stream_pos=2,
_perf_factor=perf_factor,
),
{"bar@baz.net"}, {"bar@baz.net"},
) )
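A note on the two parameter values: with the cache state above (latest change at position 4), `_perf_factor=0` makes `max_stream_pos - stream_pos > _perf_factor * len(entities)` true, exercising the new per-entity probe, while the enormous factor forces the pre-existing full-walk intersection path. Hedged arithmetic for the first query (stream_pos=2, three entities):

assert (4 - 2) > 0 * 3                   # _perf_factor=0 -> per-entity probe
assert not ((4 - 2) > 1000000000 * 3)    # huge factor -> get_all_entities_changed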